# Surrogacy Validation for Time-to-Event Outcomes with Illness-Death Frailty Models
Emily K. Roberts1∗, Michael R. Elliott2,3, Jeremy M. G. Taylor2
1Department of Biostatistics, University of Iowa, Iowa City, IA
2Department of Biostatistics, University of Michigan, Ann Arbor, MI
3Survey Methodology Program, Institute for Social Research, Ann Arbor, MI
∗145 N. Riverside Dr. Iowa City, IA 52242
<EMAIL_ADDRESS>
## Key Words:
Bayesian methods, clinical trial, illness-death model, surrogacy validation,
time-to-event data
## Abstract
A common practice in clinical trials is to evaluate a treatment effect on an
intermediate endpoint when the true outcome of interest would be difficult or
costly to measure. We consider how to validate intermediate endpoints in a
causally-valid way when the trial outcomes are time-to-event. Using
counterfactual outcomes, those that would be observed if the counterfactual
treatment had been given, the causal association paradigm assesses the
relationship of the treatment effect on the surrogate $S$ with the treatment
effect on the true endpoint $T$. In particular, we propose illness-death models to accommodate the censored and semi-competing risk structure of
survival data. The proposed causal version of these models involves estimable
and counterfactual frailty terms. Via these multi-state models, we
characterize what a valid surrogate would look like using a causal effect
predictiveness plot. We evaluate the estimation properties of a Bayesian
method using Markov Chain Monte Carlo and assess the sensitivity of our model
assumptions. Our motivating data source is a localized prostate cancer
clinical trial where the two survival endpoints are time to distant metastasis
and time to death.
## 1 Introduction
Time-to-event endpoints are common in oncology trials, though it can often
take many years to accrue enough observed events to complete the study (Kemp
et al. 2017). In a randomized clinical trial, an appropriate surrogate
endpoint can serve as a substitute indicator for whether a treatment effect exists
on some true outcome of interest. In this work, our data come from a prostate
cancer clinical trial with a binary treatment of adding anti-androgen therapy
to an existing regimen (Shipley et al. 2017). The two endpoints of interest
are the occurrence of distant metastasis and overall survival. Here the
terminal event is death from any cause and is the primary endpoint for the
trial. For these patients, death from prostate cancer will only occur if the person has had metastases. However, some men will die during follow-up with or without first experiencing distant metastasis. Overall survival is therefore a mixture of two death types, death from prostate cancer and death from other causes. In the data, however, the cause of death may not be known. A mechanistic understanding of whether distant metastasis is a desirable surrogate for overall survival in this setting may be beneficial for clinicians and trialists.
Given the substantial risk of using an invalid surrogate endpoint in a large-scale trial, rigorous standards have been proposed to validate a surrogate (VanderWeele, 2013). The first criteria for determining the validity of candidate surrogate endpoints were suggested by Prentice (1989); they test whether a treatment affects the true endpoint only through the pathway of the surrogate endpoint. While the criteria apply to different outcome types, including the time-to-event endpoints we focus on here, they involve regression models that condition on the observed value of $S$, leading to a non-causal interpretation. More recent frameworks to determine whether
a surrogate marker is appropriate for use in a future trial can be broadly
grouped into the causal effects and causal association paradigms (Joffe and
Greene, 2009). The causal association framework aims to evaluate the
relationship of the treatment effect on the surrogate $S$ with the treatment
effect on the true clinical endpoint $T$. These methods are often built upon
counterfactual outcomes $T(z)$, which are the clinical outcomes of interest,
and $S(z)$, the surrogate endpoints, where the notation $Z=z$ represents
treatment under either the observed or counterfactual assignment.
Methods within the causal association framework have been proposed for trials
where the true outcome $T$ is a time-to-event outcome under different
corresponding surrogate endpoint types. Tanaka et al. (2017) consider a binary
surrogate for a survival primary outcome within the meta-analytic framework,
and Gao (2012) considers a time-to-event $T$ and binary $S$ for a single trial
using principal stratification methods (Frangakis and Rubin, 2002). Taylor et
al. (2015) propose a Gaussian copula model with a survival endpoint for $T$
and ordinal endpoint $S$. The principal stratification estimand proposed by
Qin et al. (2008) allows for a continuous $S$ and time-to-event $T$. This was
expanded upon in Gabriel and Gilbert (2014) and Gabriel, Sachs, and Gilbert
(2015) in pursuit of a causal effect interpretation. Causal solutions for
validation become more challenging when the surrogate is also subject to
censoring. Instead, others such as Parast and colleagues (2017) rely on
different measures such as proportion explained for time-to-event outcomes,
and likewise Hsu et al. (2015), Vandenberghe et al. (2018), and Weir et al.
(2021) address time-varying surrogates using mediation approaches that rely on
proportion mediated metrics within the causal effects paradigm.
To our knowledge, the setting where both $S$ and $T$ are time-to-event
endpoints has not been fully addressed within the principal stratification
framework. Building on the work of Frangakis and Rubin (2002), we aim to
develop a corresponding Causal Effect Predictiveness (CEP) curve proposed by
Gilbert and Hudgens (2008) to validate a surrogate endpoint when both $S$ and
$T$ are time-to-event. The key to obtaining a causal assessment in this
paradigm is classifying individuals based on their set of potential values of
the post-treatment variable, which here would be the surrogate endpoint. In a
simple case where $S$ and $T$ are Gaussian outcomes and $Z$ takes on the value
0 or 1, the analog to surrogate-specific strata and the corresponding CEP
curve for validation is based on the quantity $E(T(1)-T(0)|S(1)-S(0)=s).$
Briefly, the CEP criteria intuitively assert that there be no average
treatment effect on $T$ for the strata of patients defined by no treatment
effect on $S$, and conversely that there exist an overall treatment effect on
$T$ for the strata of patients defined by a treatment effect on $S$. A
comparable contrast and consideration of principal strata when $T(z)$ and
$S(z)$ are subject to censoring and a semi-competing risk structure will be
explored in this paper.
Outside of the surrogacy validation setting, semi-competing risks based on
counterfactual hazards have been explored (Huang, 2021). Within the principal
stratification framework, unobserved outcomes due to truncation by death can
be addressed by defining strata based on survivorship cohorts (Zhang and
Rubin, 2003). Comment et al. (2019) define a survivor average causal effect in
the presence of a semi-competing risk where principal causal effects are
defined for individuals who would survive regardless of the assigned
treatment. Xu et al. (2020) propose a causal estimand for a semi-competing
risk structure to address truncation by death
$\frac{P(S(1)<\tau|T(0)\geq\tau,T(1)\geq\tau)}{P(S(0)<\tau|T(0)\geq\tau,T(1)\geq\tau)}$
which conditions on these survivor principal strata.
The estimands for surrogacy validation with a continuous $S$ by Qin et al.
(2008) and Gabriel, Sachs, and Gilbert (2015) described earlier can be written
as
$1-\frac{P(T(1)=\tau|T(1)\geq\tau_{k-1},S(1)=s_{1},S(0)=s_{0})}{P(T(0)=\tau|T(0)\geq\tau_{k-1},S(1)=s_{1},S(0)=s_{0})}$
and
$\frac{1-P(T(1)>t|T(0)\geq\tau,T(1)\geq\tau,S(1)=s_{1},S(0)=s_{0})}{1-P(T(0)>t|T(0)\geq\tau,T(1)\geq\tau,S(1)=s_{1},S(0)=s_{0})}$
for some time $\tau$, respectively. Whereas the previous CEP quantities
suggest conditioning on counterfactual surrogate outcomes, this becomes less
straightforward in our setting. While existing models are suitable for these
data that account for semi-competing risks, $S$ may not be well-defined if it
is not observed before $T$. The proper corresponding surrogacy validation
estimand is less readily apparent since it may not be possible to condition on
strata defined by $S(0)$ and $S(1)$ occurring or not by time $\tau$ (see a
discussion regarding estimands in Buhler et al. 2022). For example, while we
can construct
$\frac{P(T(1)<\tau|S(0)\geq\tau,S(1)\geq\tau)}{P(T(0)<\tau|S(0)\geq\tau,S(1)\geq\tau)}$
or
$\frac{P(T(1)<\tau|T(0)\geq\tau,S(0)\geq\tau,S(1)\geq\tau)}{P(T(0)<\tau|T(1)\geq\tau,S(0)\geq\tau,S(1)\geq\tau)}$
for some time $\tau$, it is not clear which would be a principled estimand to
use for validation with our endpoint types.
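To build intuition for how these candidate ratio estimands behave, each can be approximated by Monte Carlo once a joint distribution for the counterfactual event times is fully specified. The sketch below is purely illustrative: the correlated log-normal model for $(S(0),S(1),T(0),T(1))$ and all numeric settings are assumptions for demonstration, not the models proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 200_000, 1.0

# Hypothetical joint counterfactual event times (illustrative assumption only):
# correlated log-normal times for (S(0), S(1), T(0), T(1)).
z = rng.multivariate_normal(
    mean=[0.0, 0.3, 0.5, 0.8],                 # treatment delays S and T on average
    cov=np.full((4, 4), 0.5) + 0.5 * np.eye(4),
    size=n,
)
S0, S1, T0, T1 = np.exp(z).T

# First candidate: condition only on neither surrogate occurring by tau.
strata = (S0 >= tau) & (S1 >= tau)
ratio1 = np.mean(T1[strata] < tau) / np.mean(T0[strata] < tau)

# Second candidate: additionally condition on the opposite-arm T exceeding tau.
num = np.mean(T1[strata & (T0 >= tau)] < tau)
den = np.mean(T0[strata & (T1 >= tau)] < tau)
ratio2 = num / den
```

Comparing `ratio1` and `ratio2` across hypothesized joint distributions is one way to probe which conditioning set behaves sensibly for validation.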
Rather than conditioning on surrogate outcomes, we develop a principal
stratification approach that conditions on counterfactual hazards and outline
causal quantities based on these. We propose an illness-death model to
incorporate the censored and semi-competing risk structure of the data.
Previous work using principal surrogacy for repeated outcome measurements
incorporates estimation of subject-specific random effects (Roberts et al.,
2022). Here we utilize frailty terms to capture subject specific heterogeneity
and allow dependence among the transitions of the illness-death model.
Frailties have been proposed for surrogate validation settings that differ
from our single trial with subject-level, counterfactual outcomes. These
methods include joint frailty-copula models for meta-analysis to define valid
surrogates (Emura et al., 2017; Sofeau, Emura, and Rondeau, 2019; Sofeau,
Emura, and Rondeau, 2020).
In Section 2, we propose the causal modeling strategy based on the illness-death approach for a single trial and link this formulation to the Prentice
criteria. In Section 3, we provide the likelihood of the illness-death model
and propose a Bayesian estimation strategy. Section 4 describes our proposed
CEP quantities and explores CEP plots that correspond to different data
settings to help define what an ideal surrogate would look like. A simulation
study is provided in Section 5 with a real data analysis from a prostate
cancer trial in Section 6. Discussion and future work are provided in Section
7.
## 2 Illness-Death Approach
The structure of the illness-death model is a natural way to describe data
with the semi-competing risk structure and has potential use for surrogacy
validation (O’Quigley and Flandre, 2012). Here we consider counterfactual
illness-death models and the principal stratification framework. Let
$T_{jk}(z)$ denote the gap time between states $j$ and $k$ for $jk\in\{12,13,23\}$, and
corresponding transition intensities $\lambda_{jk}^{z}$ between states in the
treatment-specific illness-death models for treatment $Z=z$ as shown in Figure
1.
Notably, this conceptualization is related to the models used in the Prentice
criteria (1989). In short, the Prentice criteria assess whether a) the
treatment and true endpoint are conditionally independent, given the surrogate
endpoint, and b) the surrogate and the treatment are correlated. This
determination is made by fitting two regression models and determining if the
coefficient for the treatment effect on $T$ becomes null after adjusting for
the surrogate in the model. These ensure that a treatment effect on the surrogate endpoint will imply a treatment effect on the true endpoint. In particular, Prentice's measures, which identify statistical surrogates, are only correlative.
We propose a more rigorous and flexible strategy to identify a consistent
surrogate using potential outcomes and counterfactual illness-death models in
pursuit of a causal interpretation (VanderWeele, 2013). Motivation for our
proposed models can be seen through a special case of regression models that
are related to models used to evaluate the Prentice criteria. In the Prentice
criteria, we can consider three models:
For time to $S$:

A) $\lambda(t)\exp(\phi_{0}Z_{i}+\eta_{0}X_{i})$

For time to $T$:

B) $\lambda(t)\exp(\phi_{1}Z_{i}+\eta_{1}X_{i})$

C) $\lambda(t)\exp(\phi_{2}Z_{i}+\eta_{2}X_{i}+\omega I(t>S_{i}))$
where $S$ denotes the time of the surrogate outcome occurring, $Z$ denotes
treatment, $X$ denotes baseline covariates, and time $t$ is measured from
randomization. Then the difference in the $\phi_{1}$ and $\phi_{2}$
coefficients between B and C largely captures the value of the surrogate. In
comparison, consider a general set of observed data models for the three
transitions
$\lambda_{12}(t)\exp(\omega_{12i}+\phi_{3}\ Z_{i}+\eta_{3}\ X_{i})$ (1)
$\lambda_{13}(t)\exp(\omega_{13i}+\phi_{4}\ Z_{i}+\eta_{4}\ X_{i})$
$\lambda_{23}(t)\exp(\omega_{23i}+\theta\ S_{i}+\phi_{5}\ Z_{i}+\eta_{5}\
X_{i}+\beta\ S_{i}\ Z_{i})$
where $\omega_{jk}$ denote frailty terms. Our proposed models include a model
following the occurrence of $S$, allow for more interaction terms, and include
frailties. Further extension of the models and their connection with the
counterfactual illness-death models in Figure 1 can be found in an appendix.
In the model we propose and explore in detail in the following sections, each
counterfactual arm has its own set of transition hazard models. We will first
consider all counterfactual quantities that appear in the complete data
likelihood for the proposed model.
### 2.1 Defining Causal Quantities Based on Hazards and Frailty Models
We propose to model the transition hazards that correspond to the gap times
$T_{jk}(z)$ in Figure 1. Shared or common frailty terms, which quantify the
dependence between the different processes within the same person, can provide
information on the dependence structure between the time to intermediate event
and the time to terminating event in multi-state models (Zhang et al., 2014;
Xu et al., 2010). In models for time-to-event data, frailties are commonly
incorporated to model correlation among events, to allow for heterogeneity
among individuals, or to capture the effect of some omitted covariate. In our
setting, we consider both counterfactual outcomes and transitions, and we want
to allow for possible dependence between the counterfactual outcomes. As this
association is integral to the value of the surrogate, we propose to use
illness-death frailty models where the hazards are linked via frailty terms.
Here we consider multiple hazards with frailties both to allow dependence
across state transitions and to link observable transitions in arm $Z=z$ to
the counterfactual transitions for $Z=1-z$.
For a single time-to-event and a general frailty $\omega$, the hazard can be
written
$\lambda(t|X,\beta,\omega,\kappa)=\lambda_{0}(t)\exp(\kappa\omega+X\beta)$,
where $\omega$ has some pre-specified distribution and may have an associated
coefficient parameter $\kappa$. Various assumptions can be made about the
frailty term $\omega$, such as that it follows a Normal or Gamma distribution,
for simplicity and computational feasibility. For the illness-death models specified in Figure 1, a set of six correlated frailties is required, one for each hazard. However, for identifiability and computational reasons, we impose some restrictions and simplifying assumptions. We initially propose two different formulations of the sets of models; for ease of notation, we exclude baseline covariates $X$.
### Model A using Time Dependent Covariates
For $z=0$,
$\lambda_{12}^{0}(t|\omega_{12i}^{0})=\lambda_{12,0}^{0}(t)\exp(\kappa_{12}^{0}\omega_{12i}^{0})$
(2)
$\lambda_{13}^{0}(t|\omega_{13i}^{0})=\lambda_{13,0}^{0}(t)\exp(\kappa_{13}^{0}\omega_{13i}^{0})$
$\lambda_{23}^{0}(t|T_{12i}(0),\omega_{23i}^{0})=\lambda_{23,0}^{0}(t-T_{12i}(0))\exp(\kappa_{23}^{0}\omega_{23i}^{0}+\theta_{23}^{0}T_{12i}(0))I(t>T_{12i}(0))$
Similarly for $z=1,$
$\lambda_{12}^{1}(t|\omega_{12i}^{1})=\lambda_{12,0}^{1}(t)\exp(\kappa_{12}^{1}\omega_{12i}^{1})$
$\lambda_{13}^{1}(t|\omega_{13i}^{1})=\lambda_{13,0}^{1}(t)\exp(\kappa_{13}^{1}\omega_{13i}^{1})$
$\lambda_{23}^{1}(t|T_{12i}(1),\omega_{23i}^{1})=\lambda_{23,0}^{1}(t-T_{12i}(1))\exp(\kappa_{23}^{1}\omega_{23i}^{1}+\theta_{23}^{1}T_{12i}(1))I(t>T_{12i}(1))$
where $T_{12i}$ is the time that subject $i$ moves into state $S$. We include
$\theta_{23}$ in the $\lambda_{23}$ model as the coefficient for our time
dependent covariate $T_{12}$. The purpose is to capture the effect of this
transition time, and the time that an individual experiences $S$ may help to
assess the strength of association between $S$ and $T$. We model the
transition using a clock reset for $\lambda_{23}$ (i.e., the time scale is
$t-T_{12}(z)$).
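As a concrete illustration, gap times from Model A can be simulated for one arm by inverse-transform sampling: a proportional-hazards Weibull model with cumulative hazard $\Lambda(t)=\gamma t^{\alpha}e^{\mathrm{lin}}$ gives $T=(E/(\gamma e^{\mathrm{lin}}))^{1/\alpha}$ for $E\sim\mathrm{Exp}(1)$. This is a minimal sketch; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rweibull_ph(gamma, alpha, linpred, rng):
    """Inverse-transform draw from cumulative hazard gamma * t**alpha * exp(linpred)."""
    e = rng.exponential(size=np.shape(linpred))
    return (e / (gamma * np.exp(linpred))) ** (1.0 / alpha)

n = 5000
# Illustrative parameter values (assumptions only).
g12, a12, k12 = 0.2, 1.2, 1.0
g13, a13, k13 = 0.1, 1.1, 1.0
g23, a23, k23, th23 = 0.3, 1.3, 1.0, 0.05

w12 = rng.normal(size=n)   # frailty for the 1 -> 2 transition
w13 = rng.normal(size=n)   # shared frailty for the 1 -> 3 and 2 -> 3 transitions

T12 = rweibull_ph(g12, a12, k12 * w12, rng)   # latent time to S
T13 = rweibull_ph(g13, a13, k13 * w13, rng)   # latent time to T without S

ill = T12 < T13   # subjects who pass through state S
# Clock-reset gap time from S to T, with T12 entering as a covariate.
gap = rweibull_ph(g23, a23, k23 * w13[ill] + th23 * T12[ill], rng)

T = T13.copy()
T[ill] = T12[ill] + gap   # total time to the terminal event
```

Applying an independent censoring time to `(T12, T)` then yields the observed-data structure described in Section 3.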
The restrictions and assumptions we will be considering are to make
$\omega_{13i}^{z}=\omega_{23i}^{z}$ and to set some of the
$\kappa_{jk}^{z}=1.$ If the $\kappa$ parameters vary, they essentially
influence how variable the frailty terms are. We will refer to $\kappa$ as
frailty coefficients. One rationale for assuming
$\omega_{13i}^{z}=\omega_{23i}^{z}$ in this setting is that both are frailties
that influence time to death from other causes in our motivating trial. For
example, since our variable $T$ is death from any cause, we may expect that
some men will die of old age. It may be reasonable to expect that an
individual may have their own propensity for experiencing death from other
causes irrespective of whether or not $S$ has occurred. Another consideration
is that by including the coefficient for our time-varying covariate,
$\theta_{23}^{z}$, the model captures the magnitude of the effect for the time
it takes to experience the intermediate outcome $S$. This makes it more
plausible that certain frailties are equal and conditional independence
assumptions may be more likely. Lastly, the frailties capture heterogeneity on
the individual level. There may still be heterogeneity on the population level
for the variability in the hazard of going from baseline to $T$ or from $S$ to
$T$ which can be reflected in the baseline hazards. We explore these
variations in later sections.
### Model B using Multiple Frailties in Place of Time Dependent Covariates
We include an alternate option to incorporate the dependence between the
different transitions such as a model that includes two frailty terms in the
$S\rightarrow T$ transition
$\lambda_{12}^{0}(t|\omega_{12i}^{0})=\lambda_{12,0}^{0}(t)\exp(\kappa_{12}^{0}\omega_{12i}^{0})$
(3)
$\lambda_{13}^{0}(t|\omega_{13i}^{0})=\lambda_{13,0}^{0}(t)\exp(\kappa_{13}^{0}\omega_{13i}^{0})$
$\lambda_{23}^{0}(t|T_{12i}(0),\omega_{12i}^{0},\omega_{13i}^{0})=\lambda_{23,0}^{0}(t-T_{12i}(0))\exp(\kappa_{12}^{*0}\omega_{12i}^{0}+\kappa_{13}^{*0}\omega_{13i}^{0})I(t>T_{12i}(0))$
$\lambda_{12}^{1}(t|\omega_{12i}^{1})=\lambda_{12,0}^{1}(t)\exp(\kappa_{12}^{1}\omega_{12i}^{1})$
$\lambda_{13}^{1}(t|\omega_{13i}^{1})=\lambda_{13,0}^{1}(t)\exp(\kappa_{13}^{1}\omega_{13i}^{1})$
$\lambda_{23}^{1}(t|T_{12i}(1),\omega_{12i}^{1},\omega_{13i}^{1})=\lambda_{23,0}^{1}(t-T_{12i}(1))\exp(\kappa_{12}^{*1}\omega_{12i}^{1}+\kappa_{13}^{*1}\omega_{13i}^{1})I(t>T_{12i}(1))$
The motivation of this model is an alternative way to capture the subject
specific relationship between the different transitions via the
$\kappa_{12}^{*}$ and $\kappa_{13}^{*}$ coefficients. This model does not
include $T_{12}$ as a time-varying covariate. When we assume
$\omega_{23}^{z}=\omega_{13}^{z},$ the key difference between models A and B
is the way in which the transition from baseline to the intermediate outcome
and the time following that transition are related; these are linked using
either a time varying covariate (in model A) or another frailty term (in model
B). Again, the frailty coefficients $\kappa$ can be thought of as parameters that increase or decrease the magnitude of the effect of the frailties. We would
not expect $\kappa_{12}^{*z}$ and $\kappa_{12}^{z}$ to be necessarily equal
across the models given the different assumptions in each model.
### Frailty Structures
In its full generality, model A has six correlated frailties, which we assume have a multivariate normal distribution:
$\left(\begin{array}{c}\omega_{12i}^{0}\\ \omega_{12i}^{1}\\ \omega_{13i}^{0}\\ \omega_{13i}^{1}\\ \omega_{23i}^{0}\\ \omega_{23i}^{1}\end{array}\right)\sim N\left(\left(\begin{array}{c}0\\ 0\\ 0\\ 0\\ 0\\ 0\end{array}\right),\left(\begin{array}{cccccc}1&\rho_{S}&\rho_{00}&\rho_{01}&\rho_{S1}&\rho_{S2}\\ &1&\rho_{10}&\rho_{11}&\rho_{S3}&\rho_{S4}\\ &&1&\rho_{T}&\rho_{T1}&\rho_{T2}\\ &&&1&\rho_{T3}&\rho_{T4}\\ &&&&1&\rho_{ST}\\ &&&&&1\end{array}\right)\right)$
While this model has a very general form, it may not be necessary or even
desirable to consider this level of generality. We will be focusing on special
cases of this general model, which we think are appropriate for the setting of
surrogacy assessment.
To reduce the number of frailties to estimate to four in model A, we assume
that both transitions into $T$ have the same frailty
($\omega_{13}^{z}=\omega_{23}^{z}$) since they are both relevant for time to
the terminal event. As discussed above, since the terminal event is death from
any cause, it seems justifiable to assume that conditional on all other terms
in the model, frailties toward death from any cause would be the same on the
individual level with or without the occurrence of the intermediate event $S$.
This assumption will be useful for estimation since $T_{23i}$ is not defined
for all individuals. With this assumption, our transition models from $S$ to
$T$ in model A can be written
$\lambda_{23}^{0}(t|T_{12i}(0),\omega_{13i}^{0})=\lambda_{23,0}^{0}(t-T_{12i}(0))\exp(\kappa_{23}^{0}\omega_{13i}^{0}+\theta_{23}^{0}T_{12i}(0))I(t>T_{12i}(0))$
$\lambda_{23}^{1}(t|T_{12i}(1),\omega_{13i}^{1})=\lambda_{23,0}^{1}(t-T_{12i}(1))\exp(\kappa_{23}^{1}\omega_{13i}^{1}+\theta_{23}^{1}T_{12i}(1))I(t>T_{12i}(1))$
Ideally, we could allow $\kappa_{23}^{z}$ to take on different values from
$\kappa_{13}^{z}$ to accommodate different amounts of dependence between the
transitions. For both models A and B we consider the joint distribution
$\left(\begin{array}{c}\omega_{12i}^{0}\\ \omega_{12i}^{1}\\ \omega_{13i}^{0}\\ \omega_{13i}^{1}\end{array}\right)\sim N\left(\left(\begin{array}{c}0\\ 0\\ 0\\ 0\end{array}\right),\left(\begin{array}{cccc}1&\rho_{S}&\rho_{00}&\rho_{01}\\ &1&\rho_{10}&\rho_{11}\\ &&1&\rho_{T}\\ &&&1\end{array}\right)\right)$
In most of the work presented here, we will also assume
$\omega_{12i}^{z}\perp\omega_{13i}^{z}$ (the frailties for an individual are
independent across states), meaning
$\rho_{00}=\rho_{01}=\rho_{11}=\rho_{10}=0$. We thus assume
$\left(\begin{array}{c}\omega_{12i}^{0}\\ \omega_{12i}^{1}\end{array}\right)\sim N\left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&\rho_{S}\\ \rho_{S}&1\end{array}\right)\right)\qquad\left(\begin{array}{c}\omega_{13i}^{0}\\ \omega_{13i}^{1}\end{array}\right)\sim N\left(\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1&\rho_{T}\\ \rho_{T}&1\end{array}\right)\right)$ (4)
This type of assumption may aid in estimation. We could instead impose a
strong assumption of shared frailties for each arm:
$\omega_{12i}^{0}=\omega_{13i}^{0}=\omega_{23i}^{0}$ and
$\omega_{12i}^{1}=\omega_{13i}^{1}=\omega_{23i}^{1}$. The motivation for this
comes from considering the frailty as representing an omitted covariate. We do
not further pursue this assumption.
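Under the independence assumption in equation 4, drawing the frailties for simulation or prior predictive checks reduces to sampling two independent bivariate standard-normal pairs with fixed correlations $\rho_{S}$ and $\rho_{T}$. A minimal sketch, where the values $\rho_{S}=\rho_{T}=0.5$ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
rho_S, rho_T = 0.5, 0.5   # fixed sensitivity values (assumed, not estimated)

def draw_pair(rho, n, rng):
    """Draw n bivariate standard-normal frailty pairs with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

w12 = draw_pair(rho_S, n, rng)   # columns: (omega_12^0, omega_12^1)
w13 = draw_pair(rho_T, n, rng)   # columns: (omega_13^0, omega_13^1)

emp_rho_S = np.corrcoef(w12.T)[0, 1]   # empirical check of the assumed rho_S
emp_rho_T = np.corrcoef(w13.T)[0, 1]
```

Since $\rho_{S}$ and $\rho_{T}$ link counterfactual arms, they are never identified from data; sweeping over a grid of fixed values in this way is the basis of the sensitivity analysis in Section 2.2.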
### 2.2 Identifiability and Sensitivity Analysis
Certain parameters within our model are non-identifiable because they describe
relationships between counterfactual variables ($\rho_{S}$ and $\rho_{T}$ for
example), while others are “barely” identifiable (the combination of the
baseline hazard, frailties, and the $\kappa$ parameters, for example) and are
therefore hard to estimate. In particular, the frailty terms are weakly
identified based on which events, $S$ and/or $T$, are actually observed. Since
we have made modeling assumptions to aid in estimation, we can evaluate the
sensitivity of the assumed models in several ways. Because the parameters
$\rho_{S}$ and $\rho_{T}$ in the complete data likelihood are not
identifiable, they will be fixed at preset values in our proposed method (and
later we will discuss if the complete data likelihood is necessary). Based on
biological considerations under the counterfactual framework, we may not
expect these correlation parameters to be negative or exactly equal to one.
Further, we can vary which frailties are assumed to be independent or equal,
alter which values of $\kappa_{jk}^{z}$ are set to one, change the baseline
hazard from a Weibull distribution to piecewise exponential or something more
flexible, assess different effects of covariates in the transitions, and
modify our proposed time-reset parameterization. We provide a tool for
assessing the sensitivity of these values and commentary on the feasibility
and identifiability of estimating these models with and without these
assumptions in later sections.
## 3 Likelihood and Estimation
### 3.1 Likelihood Contributions
We consider a randomized clinical trial of $n$ subjects for a binary treatment
$Z$. For generality, let $n_{z}$ denote the number of subjects in treatment
arm $Z=z$ (and we may assume that $n/2$ subjects are in treatment group $z=1$
and $n/2$ are in treatment group $z=0$ since the treatment assignment is
randomized and under the control of the investigator). Let
$\\{S_{i},\delta_{Si},T_{i},\delta_{Ti},X_{i},Z_{i}\\}$ be the observed data
for subject $i$ for $i=1,...,n$. We will also consider a random or
administrative censoring time $C_{i}$. $S_{i}$ denotes the time to transition
to state $S$, $T_{i}$ denotes the time that the terminal event $T$ occurs, and
$\delta_{T}$ and $\delta_{S}$ denote the censoring indicators for $T$ and $S$
being observed. Then $\delta_{Ti}=1$ when $T_{i}<C_{i}$ and $\delta_{Si}=1$
when $S_{i}<C_{i}$ and $S_{i}<T_{i}$.
We can also conceptualize the data in terms of the random variables in Figure
1. Based on gap times between states $T_{jk}^{z}$, the data can also be
represented as
$\\{T_{12i},T_{13i},T_{23i},\delta_{Si},\delta_{Ti},X_{i},Z_{i}\\}$, with
$T_{23i}$ not defined when $S_{i}$ is not observed. In the illness-death
formulation, there are four possible combinations of observable $\delta_{Si}$
and $\delta_{Ti}$. We assume that when neither event is observed, meaning $\delta_{Si}=\delta_{Ti}=0,$ both $T_{12i}(z)$ and $T_{13i}(z)$ are censored at the common value $C_{i}$. Consider when $T$ is observed before
$S$, meaning $\delta_{Ti}=1,\delta_{Si}=0$. Then the observed data related to
$S_{i}$ for individual $i$ is equal to $\\{T_{13i},\delta_{Si}=0\\}$, and
observed $T_{i}$ is based on $\\{T_{13i},\delta_{Ti}=1\\}$, while $T_{23i}$ is
not defined. Now consider when only $S$ is observed, meaning
$\delta_{Ti}=0,\delta_{Si}=1$. Then the observed data for individual $i$ is
$S_{i}$ based on $\\{T_{12i},\delta_{Si}=1\\}$. Assuming $T$ is not observed
after, the value $T_{i}$ takes on is censored at $\\{C_{i},\delta_{Ti}=0\\}$.
If both $S$ and $T$ are observed with $\delta_{Ti}=\delta_{Si}=1$, then
$S_{i}$ is based on $\\{T_{12i},\delta_{Si}=1\\},$ and $T_{i}$ is based on
$\\{T_{12i}+T_{23i},\delta_{Ti}=1\\}.$ We provide the likelihood under these
scenarios next.
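The four observability scenarios above can be summarized by a small helper that maps the latent gap times and a censoring time to an observed record $(S_{i},\delta_{Si},T_{i},\delta_{Ti})$. This is a sketch of the data structure only; the function name and interface are illustrative, not from the paper.

```python
def observe(T12, T13, T23, C):
    """Map latent gap times and a censoring time C to the observed record
    (S, delta_S, T, delta_T); T23 may be None when S never occurs."""
    if T13 <= T12:                      # terminal event first: S never observed
        if T13 < C:
            return (T13, 0, T13, 1)     # T observed, S censored at T
        return (C, 0, C, 0)             # both censored at C
    if T12 >= C:
        return (C, 0, C, 0)             # censored before either event
    t_death = T12 + T23                 # S occurs first; clock-reset gap follows
    if t_death < C:
        return (T12, 1, t_death, 1)     # both S and T observed
    return (T12, 1, C, 0)               # S observed, T censored

print(observe(2.0, 5.0, 1.5, 10.0))     # -> (2.0, 1, 3.5, 1)
```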
We assume that each hazard in Figure 1 follows a Weibull distribution, so
$T_{12}(z)\sim$ $Weibull(\alpha_{12}^{z},$ $\gamma_{12}^{z}),T_{13}(z)\sim
Weibull(\alpha_{13}^{z},\gamma_{13}^{z}),$ and $T_{23}(z)\sim
Weibull(\alpha_{23}^{z},\gamma_{23}^{z})$ for shape parameters
$\alpha_{jk}^{z}$ and scale parameters $\gamma_{jk}^{z}$. The scale and shape
parameters must be positive: $\gamma_{jk}^{z}>0,\alpha_{jk}^{z}>0$. We
parameterize the cumulative baseline hazard function as
$\Lambda_{jk0}^{z}(t)=\gamma_{jk}^{z}t^{\alpha_{jk}^{z}}=\int_{0}^{t}\lambda_{jk0}^{z}(u)du$
for a given Weibull model, where
$\lambda_{jk0}^{z}(t)=\gamma_{jk}^{z}\alpha_{jk}^{z}t^{\alpha_{jk}^{z}-1}$ and
$\lambda_{jk}^{z}(t)=\lambda_{jk0}^{z}(t)\exp(\kappa_{jk}^{z}\omega_{jk}^{z})$
for $jk$ = 12 or 13. The model for $\lambda_{23}^{z}$ is more complex and
depends on whether model A or B is assumed; for example, model A corresponds
to
$\lambda_{230}^{z}(t)\exp(\kappa_{23}^{z}\omega_{23}^{z}+\theta_{23}^{z}T_{12}(z))$.
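The Weibull parameterization can be checked numerically: integrating $\lambda_{jk0}^{z}(t)e^{\kappa\omega}=\gamma\alpha t^{\alpha-1}e^{\kappa\omega}$ should recover $\Lambda(t)=\gamma t^{\alpha}e^{\kappa\omega}$. A short sketch with arbitrary illustrative parameter values:

```python
import numpy as np

def haz(t, gamma, alpha, kappa, omega):
    """lambda(t) = gamma * alpha * t**(alpha - 1) * exp(kappa * omega)."""
    return gamma * alpha * t ** (alpha - 1) * np.exp(kappa * omega)

def cum_haz(t, gamma, alpha, kappa, omega):
    """Lambda(t) = gamma * t**alpha * exp(kappa * omega)."""
    return gamma * t**alpha * np.exp(kappa * omega)

gamma, alpha, kappa, omega = 0.2, 1.5, 1.0, 0.3   # arbitrary illustrative values
grid = np.linspace(1e-8, 2.0, 200_001)
vals = haz(grid, gamma, alpha, kappa, omega)
numeric = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))   # trapezoid rule
exact = cum_haz(2.0, gamma, alpha, kappa, omega)
```

The two quantities agree to numerical precision, confirming that the stated $\Lambda$ and $\lambda$ forms are consistent.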
For estimation there are two likelihoods that could be used, either the
observed data likelihood, or the complete data likelihood. The complete data
likelihood is derived using the random variables in Figure 1 with both sets of
counterfactual outcomes under the two treatment arms
$T_{12}(0),T_{12}(1),T_{13}(0),T_{13}(1),T_{23}(0),T_{23}(1)$. This approach
considers the joint model of the outcomes and involves all elements $\rho$ of
the correlation matrix in equation 4. Using this specification, an imputation
scheme could be proposed to fill in all missing outcomes. Any relation between
the potential outcomes across treatment arms for an individual in the complete
data likelihood is not identified. Based on previous exploration of methods
that use either the observed or the complete data likelihood (Roberts et al.
2021), using this complete data likelihood and employing imputation is not
necessary to carry out the validation procedure. Here we will only focus on
the observed data likelihood during estimation and consider each arm of the
trial separately. For ease of notation, we will drop the superscript in this
section as the derivations apply to both treatment arms. Any counterfactual
quantities needed for calculation of the CEP curve will be described
separately in Section 4. We note that $\\{T_{23i},\omega_{23i}\\}$ are not
defined when $\delta_{Si}=0$ and do not contribute to the likelihood, which is
the case for either the complete data or observed data likelihood.
The likelihood contributions can be written similarly to work done by Conlon
et al. (2014b). Conditional on the frailties and the other parameters the
likelihood contribution for subject $i$ is,
$L_{i}=L(T_{12i},T_{13i},T_{23i},\delta_{Si},\delta_{Ti};\omega_{12i},\omega_{13i},\omega_{23i},\gamma_{12},\alpha_{12},\gamma_{13},\alpha_{13},\gamma_{23},\alpha_{23},\theta_{23},\kappa_{12},\kappa_{13},\kappa_{23})$.
For those who have not experienced $S$, $\delta_{Si}=0$, $T_{12i}=T_{13i}$, and $T_{23i}$ is not defined; then
$L_{i}=\lambda_{13}(T_{13i})^{\delta_{Ti}}\exp(-\int_{0}^{T_{13i}}\lambda_{13}(u)du-\int_{0}^{T_{13i}}\lambda_{12}(u)du)$
For those who experience $S$, $\delta_{Si}=1$ and $T_{23i}$ is defined, whether or not the subject subsequently dies; $\delta_{Ti}$ may equal 0 or 1 depending on whether the terminal event is observed:
$L_{i}=\lambda_{12}(T_{12i})\exp(-\int_{0}^{T_{12i}}\lambda_{12}(u)du-\int_{0}^{T_{12i}}\lambda_{13}(u)du)\lambda_{23}(T_{23i}|T_{12i})^{\delta_{Ti}}\exp(-\int_{0}^{T_{23i}}\lambda_{23}(u|T_{12i})du)$
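Putting the two cases together, the per-subject log-likelihood contribution conditional on the frailties can be sketched as below, for Model A with $\omega_{23}=\omega_{13}$ and Weibull baselines. The parameter values and the dictionary-based interface are illustrative assumptions, not code from the paper.

```python
import numpy as np

def log_lik_subject(T12, T13, T23, dS, dT, par, w12, w13):
    """Observed-data log-likelihood contribution for one subject, conditional
    on frailties, under Weibull baselines (Model A with omega_23 = omega_13)."""
    g12, a12, g13, a13 = par["g12"], par["a12"], par["g13"], par["a13"]
    g23, a23 = par["g23"], par["a23"]
    k12, k13, k23, th23 = par["k12"], par["k13"], par["k23"], par["th23"]

    def log_haz(t, g, a, lin):
        return np.log(g * a) + (a - 1) * np.log(t) + lin

    def cum_haz(t, g, a, lin):
        return g * t**a * np.exp(lin)

    if dS == 0:
        # No S: hazard for 1->3 if T observed; survive both risks to T13.
        ll = dT * log_haz(T13, g13, a13, k13 * w13)
        ll -= cum_haz(T13, g13, a13, k13 * w13) + cum_haz(T13, g12, a12, k12 * w12)
        return ll
    # S observed at T12; gap time T23 follows the clock-reset 2->3 model.
    lin23 = k23 * w13 + th23 * T12
    ll = log_haz(T12, g12, a12, k12 * w12)
    ll -= cum_haz(T12, g12, a12, k12 * w12) + cum_haz(T12, g13, a13, k13 * w13)
    ll += dT * log_haz(T23, g23, a23, lin23) - cum_haz(T23, g23, a23, lin23)
    return ll

par = dict(g12=0.2, a12=1.2, g13=0.1, a13=1.1, g23=0.3, a23=1.3,
           k12=1.0, k13=1.0, k23=1.0, th23=0.05)   # illustrative values
ll_no_S = log_lik_subject(1.0, 1.0, None, 0, 1, par, 0.1, -0.2)
ll_with_S = log_lik_subject(0.7, None, 0.9, 1, 1, par, 0.1, -0.2)
```

Summing these contributions over subjects within an arm gives the conditional observed-data log-likelihood targeted by the MCMC in Section 3.2.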
### 3.2 Bayesian Estimation
To facilitate estimation, we take a Bayesian approach using Markov Chain Monte
Carlo (MCMC). We use prior distributions similar to those suggested in Gao et
al. (2012) and Sahu et al. (1997). Regression coefficients are assumed to have
a diffuse normal prior (Sahu et al. 1997). We assume a Gamma($p_{1},p_{2})$
prior for the scale parameters $\gamma_{jk}$ of the Weibull distribution, and
we also assume a Gamma($p_{3},p_{4})$ prior for the shape parameters
$\alpha_{jk}$ with hyperparameters $p_{1}=p_{2}=p_{3}=p_{4}=0.1.$
Any parameters that do not have a closed-form posterior distribution
($\alpha_{jk}^{z},\gamma_{jk}^{z},\omega_{jk}^{z},\theta_{23}^{z},\kappa_{23}^{z}$)
are drawn using a Metropolis-Hastings step (Robert and Casella, 2004). At each
iteration of the MCMC, proposed draws of the parameters are taken from a
Gaussian proposal distribution with mean equal to the previous accepted draw.
For a general parameter $\beta$ and iteration $p$ of the MCMC, we draw a
proposed value $\beta^{{}^{\prime}}\sim N(\beta^{p-1},\sigma^{2})$ centered at
the value from the previous iteration, $\beta^{p-1}$. The acceptance ratio is
calculated as
$\frac{P(\beta^{{}^{\prime}})}{P(\beta^{p-1})}\times\frac{g(\beta^{p-1}|\beta^{{}^{\prime}})}{g(\beta^{{}^{\prime}}|\beta^{p-1})}$
where $P(\beta)$ represents the posterior density of $\beta$ and $g$
represents the proposal density. For a general Gaussian density,
$g(\beta^{{}^{\prime}}|\beta^{p-1})=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(\beta^{{}^{\prime}}-\beta^{p-1})^{2}}{2\sigma^{2}}\right)$
and
$g(\beta^{p-1}|\beta^{{}^{\prime}})=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(\beta^{p-1}-\beta^{{}^{\prime}})^{2}}{2\sigma^{2}}\right)$.
Because the Gaussian proposal is symmetric, these densities cancel in the
ratio, so the proposed draw $\beta^{{}^{\prime}}$ is accepted with the
simplified probability
$\min\left(1,\frac{P(\beta^{{}^{\prime}})}{P(\beta^{p-1})}\right)$. The variance of the
proposal distribution $\sigma^{2}$ is tuned to obtain convergence of parameter
draws and target a reasonable acceptance rate (Gelman et al. 1996).
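The random-walk step described above can be sketched as follows; the function name and the standard-normal demo target are illustrative, and the ratio is evaluated on the log scale to avoid underflow of the posterior density.

```python
import math, random

def rw_metropolis_step(beta_prev, log_post, sigma, rng=random):
    """One random-walk Metropolis step with a Gaussian proposal centered
    at the previous accepted draw. Because the proposal is symmetric, the
    acceptance probability reduces to min(1, P(beta')/P(beta_prev)),
    evaluated here on the log scale for numerical stability."""
    beta_prop = rng.gauss(beta_prev, sigma)
    if math.log(rng.random()) < log_post(beta_prop) - log_post(beta_prev):
        return beta_prop, True   # accept the proposal
    return beta_prev, False      # reject; keep the previous draw

# Demo: sample from a standard normal "posterior" (log density up to a constant)
random.seed(1)
beta, draws = 0.0, []
for _ in range(5000):
    beta, accepted = rw_metropolis_step(beta, lambda b: -0.5 * b * b, sigma=1.0)
    draws.append(beta)
posterior_mean = sum(draws) / len(draws)
```

The demo chain's sample mean and variance should be close to 0 and 1, mirroring how convergence of the parameter draws is monitored in practice.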
The frailties are also drawn using a Metropolis-Hastings step with a Gaussian
proposal distribution with mean equal to the previous value and a Gaussian
prior with mean zero and standard deviation equal to 0.4. Each proposed
frailty term for an individual has its own acceptance ratio. For
$i=1,...,\frac{n}{2}$, we obtain draws of $\omega_{12i}^{0},\omega_{13i}^{0}$,
and for $i=\frac{n}{2}+1,...,n$, we obtain draws of
$\omega_{12i}^{1},\omega_{13i}^{1}$ using the posterior distribution. When we
do not make assumptions of frailties being equal
($\omega_{13i}^{z}=\omega_{23i}^{z})$, we must estimate the set
$\omega_{12i}^{z},\omega_{13i}^{z},\omega_{23i}^{z}$ for each individual.
$T_{23i}^{z}$ and the corresponding $\omega_{23i}^{z}$ do not exist for any
individual who does not experience the intermediate event. In this case,
$\omega_{23}$ can be drawn directly from the prior or from its conditional
multivariate normal distribution in Section 2.1 using our model formulation
with six frailties and a fixed covariance matrix.
The likelihood contributions to $L$ for each parameter can be found in an
appendix. Based on the given likelihood components and prior distributions
$\pi^{*}$, the posterior $P$ for a given $Z=z$ is the product over individuals
$i$ who received $z$:
$\prod_{i}\big{(}L_{i}(T_{13i}(z),T_{23i}(z),T_{12i}(z),\delta_{Si},\delta_{Ti};\omega_{12i}^{z},\omega_{13i}^{z},\omega_{23i}^{z},\beta_{12}^{z},\gamma_{12}^{z},\alpha_{12}^{z},\beta_{13}^{z},\gamma_{13}^{z},\alpha_{13}^{z},\beta_{23}^{z},\gamma_{23}^{z},\alpha_{23}^{z},\theta_{23}^{z},\kappa_{12}^{z},\kappa_{13}^{z},\kappa_{23}^{z})\times$
$\pi^{*}(\omega_{12i}^{z},\omega_{13i}^{z},\omega_{23i}^{z})\big{)}\pi^{*}(\beta_{12}^{z})\pi^{*}(\gamma_{12}^{z})\pi^{*}(\alpha_{12}^{z})\pi^{*}(\beta_{13}^{z})\pi^{*}(\gamma_{13}^{z})\pi^{*}(\alpha_{13}^{z})\pi^{*}(\beta_{23}^{z})\pi^{*}(\gamma_{23}^{z})\pi^{*}(\alpha_{23}^{z})\pi^{*}(\theta_{23}^{z})\pi^{*}(\kappa_{12}^{z})\pi^{*}(\kappa_{13}^{z})\pi^{*}(\kappa_{23}^{z})$
Visually, we can see the hierarchy of parameters across different treatments
and transitions and how the terms are related in Figure 2.
Initial estimates of the frailties may be calculated using the frailtypack or
frailtyEM packages in R (R Core Team; Rondeau and Gonzalez, 2005; Balan and
Putter, 2019). Parameter estimates are each drawn from the proposal
distribution individually. Under the parameterization in model A,
$\theta_{23}^{z}$ is drawn from a proposal distribution with a mean based on
the estimated coefficient from a hazard model fit using observed data
regressing time to $T$ on time to $S$, among those who experience $S$. By
doing this, $\theta_{23}^{1}$ and $\theta_{23}^{0}$ have unique starting
values. The draws are accepted in blocks for the Metropolis-Hastings step. The
blocks are divided into treatment arm transitions, and the parameters within a
block are jointly accepted or rejected. For model A, we have blocks
$\omega_{12i}^{0}$; $\{\gamma_{12}^{0},\alpha_{12}^{0}\}$; $\omega_{13i}^{0}$;
$\{\gamma_{13}^{0},\alpha_{13}^{0}\}$;
$\{\gamma_{23}^{0},\alpha_{23}^{0},\theta_{23}^{0},\kappa_{23}^{0}\}$;
$\omega_{12i}^{1}$; $\{\gamma_{12}^{1},\alpha_{12}^{1}\}$; $\omega_{13i}^{1}$;
$\{\gamma_{13}^{1},\alpha_{13}^{1}\}$; and
$\{\gamma_{23}^{1},\alpha_{23}^{1},\theta_{23}^{1},\kappa_{23}^{1}\}$ when
all of the model parameters are being estimated. The proposal distributions
have standard deviation $\sigma=0.1$.
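The joint block acceptance can be sketched as a single Metropolis decision over every parameter in a block; `block_mh_update` is an illustrative name (not the authors' code), with the text's $\sigma=0.1$ as the default proposal standard deviation.

```python
import math, random

def block_mh_update(block, log_post, sigma=0.1, rng=random):
    """Propose new values for every parameter in a block via independent
    Gaussian random walks and accept or reject the whole block jointly;
    sigma = 0.1 matches the proposal standard deviation in the text."""
    proposal = {name: rng.gauss(value, sigma) for name, value in block.items()}
    if math.log(rng.random()) < log_post(proposal) - log_post(block):
        return proposal, True     # accept the entire block
    return block, False           # reject the entire block
```

For example, a block such as $\{\gamma_{12}^{0},\alpha_{12}^{0}\}$ would be passed as a two-entry dictionary, so a poor proposal for either parameter causes both to be rejected together.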
## 4 CEP Quantities
We develop a method for validating a surrogate endpoint using the principal
stratification framework (Frangakis and Rubin, 2002). The goal of this
validation procedure is to develop causal quantities that rigorously determine
whether a time-to-event $S$ is a valid surrogate for use in a future trial in
place of $T$ by conditioning on the joint distribution of the observed and
counterfactual values of $S$, specifically the log cumulative hazard ratio of
the time to $S$ under control versus treatment. In a non-survival setting, Gilbert
and Hudgens (2008) define a principal surrogate endpoint for a binary $T$
based on the comparison of the quantities $risk_{(1)}(s_{1},s_{0})\equiv
P(T(1)=1|S(1)=s_{1},S(0)=s_{0})$ and $risk_{(0)}(s_{1},s_{0})\equiv
P(T(0)=1|S(1)=s_{1},S(0)=s_{0})$. The condition that these must be equal for
all $s_{1}=s_{0}$ is known as average causal necessity. Average causal
sufficiency is defined as $risk_{(1)}(s_{1},s_{0})\neq
risk_{(0)}(s_{1},s_{0})$ for all $|s_{1}-s_{0}|>C$ for some non-negative
constant $C$. They define the causal effect of the treatment on the true
endpoints as $h(P(T(1)=1),P(T(0)=1))$ for some $h(,)$ contrast function that
satisfies $h(x,y)=0$ if and only if $x=y$. The CEP surface is therefore equal
to $h(risk_{(1)},risk_{(0)})$ over values of $s=(s_{1},s_{0})$. A specific
case of this is the CEP plot of $\Delta T=E(T(1)-T(0)|S(1)-S(0)=s)$ over
values of $\Delta S=S(1)-S(0)=s$ when $S$ and $T$ are continuous. Based on
these criteria, an ideal CEP plot for a valid surrogate will go through the
origin and have a positive slope. We generalize this by defining new
contrasts, $\Delta T_{i}$ and $\Delta S_{i}$ for each subject in this time-to-
event setting, forming a scatterplot of $(\Delta S_{i},\Delta T_{i})$, and
assessing whether a straight line through the points on this scatterplot goes
through the origin and has a positive slope. For $\Delta T_{i}$ we will use
$P(T_{i}(1)>\tau_{T})-P(T_{i}(0)>\tau_{T})$ evaluated at time $\tau_{T}$. For
$\Delta S_{i}$ we will use
$\log\left(\frac{\Lambda_{12i}^{0}(\tau_{S})}{\Lambda_{12i}^{1}(\tau_{S})}\right)$
that depends on some time $\tau_{S}$. Since the intercept and the slope of the
line depend on $\tau_{S}$ and $\tau_{T}$ we can write the line as $\Delta
T_{i}(\tau_{T})=\gamma(\tau_{S},\tau_{T})_{0}+\gamma(\tau_{S},\tau_{T})_{1}\Delta
S_{i}(\tau_{S})$. A good surrogate will have $\gamma(\tau_{S},\tau_{T})_{0}=0$
and $\gamma(\tau_{S},\tau_{T})_{1}>0$, with larger values of
$\gamma(\tau_{S},\tau_{T})_{1}$ implying better surrogacy. Furthermore, for
the surrogate to be relevant we would want a treatment effect on $S$, so from
the CEP plot we would also assess whether the mean of $\Delta S_{i}$ is equal
to zero.
The times $\tau_{S}$ and $\tau_{T}$ must be chosen to be clinically meaningful.
$\tau_{T}$ would usually be determined by the clinical context, and $\tau_{S}$
needs to be less than $\tau_{T}$ for the surrogate to be useful. While small
values of $\tau_{S}$ and $\tau_{T}$ are desirable, they should also be chosen
such that a sufficient number of events have occurred in order to make
sensible decisions about the surrogate. Other choices of quantity are also
possible for both $\Delta S$ and $\Delta T$. Here we have chosen this
$\Delta T$ as an interpretable quantity that might be used as the true
endpoint in the trial, that can be calculated regardless of whether $S$ has
occurred. We have chosen $\Delta S$ to be directly related to the transition
from state 1 to state 2 in the illness death model, as this is what the
therapies are usually aiming to modify. Other choices for $\Delta S$ are
possible in which it is based on a probability rather than a cumulative hazard
or involves more than just the transition from state 1 to state 2. These will
be considered in the discussion section.
While counterfactual draws of the frailties are not needed for the estimation
procedure, they are needed to form the proposed CEP plot. As the correlations
between the observed and counterfactual outcomes are non-identified, we fix
$\rho_{S},\rho_{T}$ from the distributions in equation 4 to draw the
counterfactual frailty terms. We use correlations of 0.5 as a starting point
since it is a mid-point between perfect and no correlation and then vary
$\rho_{S}$ and $\rho_{T}$ for sensitivity analysis. We use the normal prior
distribution and fixed $\rho_{S},\rho_{T}$ to obtain draws of the $\omega$
estimates in the counterfactual arm from the appropriate conditional normal
distributions, such as $\omega_{12}^{z}|\omega_{12}^{1-z}\sim
N(0+\rho_{S}(\omega_{12}^{1-z}),1-\rho_{S}^{2})$ and similarly
$\omega_{13}^{z}|\omega_{13}^{1-z}\sim
N(0+\rho_{T}(\omega_{13}^{1-z}),1-\rho_{T}^{2})$. We repeat the process for
the other treatment arm to obtain sets of counterfactual frailties for each
individual.
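This conditional draw can be sketched directly; the function name is hypothetical, and note that the second argument of the conditional normal above is a variance, so the sampler below uses $\sqrt{1-\rho^{2}}$ as the standard deviation.

```python
import random

def draw_counterfactual_frailty(omega_obs, rho, rng=random):
    """Draw omega^{1-z} given the estimated omega^{z}, using the
    conditional of a standard bivariate normal with fixed (non-identified)
    correlation rho: N(rho * omega^{z}, 1 - rho**2). The conditional
    variance 1 - rho**2 is converted to a standard deviation here."""
    return rng.gauss(rho * omega_obs, (1.0 - rho ** 2) ** 0.5)
```

Setting $\rho=1$ recovers the observed frailty exactly, while $\rho=0$ draws the counterfactual frailty independently from the standard normal prior, bracketing the sensitivity analysis described above.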
Each individual has a set of subject-specific hazards that will be used in a
CEP plot. Let $\Delta
S_{i}=\log\frac{{\Lambda}_{12}^{0}(\tau_{S}|\omega_{12i}^{0},x_{i})}{{\Lambda}_{12}^{1}(\tau_{S}|\omega_{12i}^{1},x_{i})}$
be on the x-axis of the plot where
${\Lambda}_{12}^{0}(\tau_{S}|\omega_{12}^{0},x)=\int_{0}^{\tau_{S}}{\lambda}_{12}^{0}(t|\omega_{12}^{0},x)dt$
and
${\Lambda}_{12}^{1}(\tau_{S}|\omega_{12}^{1},x)=\int_{0}^{\tau_{S}}{\lambda}_{12}^{1}(t|\omega_{12}^{1},x)dt$.
For the y-axis, consider $\Delta
T_{i}=P(T_{i}(1)>\tau_{T}|\omega_{12i}^{1},\omega_{13i}^{1},\omega_{23i}^{1},x_{i})-$
$P(T_{i}(0)>\tau_{T}|\omega_{12i}^{0},\omega_{13i}^{0},\omega_{23i}^{0},x_{i})$
based on the frailties in model A. For example, using model A, $\Delta
S_{i}=\log\frac{\Lambda_{12,0}^{0}(t)\exp(\kappa_{12}^{0}\omega_{12i}^{0})}{\Lambda_{12,0}^{1}(t)\exp(\kappa_{12}^{1}\omega_{12i}^{1})}$
if baseline covariates are not included.
Overall survival at time $\tau$ can be decomposed into components:
$P($experience neither $S$ nor $T$ by $\tau)$ + $P($experience $S$ but not $T$ by $\tau)$. More
formally, this framework is similar to the likelihood for a joint illness-
death model developed in Suresh et al. (2017) and for illness-death with a
cure fraction proposed by Conlon et al. (2014b) and Beesley et al. (2019). In
formal notation, we are interested in the quantities
$P(T(0)>\tau_{T})=P(T(0)>\tau_{T},S(0)>\tau_{T})+P(T(0)>\tau_{T},S(0)<\tau_{T})$
and
$P(T(1)>\tau_{T})=P(T(1)>\tau_{T},S(1)>\tau_{T})+P(T(1)>\tau_{T},S(1)<\tau_{T})$
These quantities can be written in terms of parameters
$\exp(-\int_{0}^{\tau_{T}}\lambda_{12}(u)du-\int_{0}^{\tau_{T}}\lambda_{13}(u)du)+\int_{0}^{\tau_{T}}\exp(-\int_{0}^{u}\lambda_{12}(v)dv-\int_{0}^{u}\lambda_{13}(v)dv)\lambda_{12}(u)\exp(-\int_{0}^{\tau_{T}-u}\lambda_{23}(v|u)dv)du$
$=\exp(-\Lambda_{12}(\tau_{T})-\Lambda_{13}(\tau_{T}))+\int_{0}^{\tau_{T}}\exp(-\Lambda_{12}(u)-\Lambda_{13}(u))\lambda_{12}(u)\exp(-\int_{0}^{\tau_{T}-u}\lambda_{23}(v|u)dv)du$
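This decomposition can be evaluated numerically. Below is a hedged sketch assuming Weibull cumulative hazards $\Lambda_{jk}(t)=\gamma_{jk}t^{\alpha_{jk}}$ (frailty factors absorbed into $\gamma_{jk}$ for brevity) and the model A form in which $\lambda_{23}(v|u)$ is scaled by $\exp(\theta_{23}u)$; the midpoint rule and the function name are our illustrative choices, not the authors' code.

```python
import math

def surv_prob(tau, gam, alp, theta23=0.0, n=2000):
    """P(T > tau) in the illness-death model with Weibull cumulative
    hazards Lam_jk(t) = gam_jk * t**alp_jk: the probability of staying
    event-free, plus the integral over illness times u of
    (event-free to u) * lam12(u) * (no death in the remaining tau - u),
    computed with a midpoint rule on n grid points."""
    g12, g13, g23 = gam
    a12, a13, a23 = alp
    Lam = lambda g, a, t: g * t ** a                 # cumulative hazard
    lam12 = lambda u: g12 * a12 * u ** (a12 - 1)     # 1 -> 2 hazard
    # term 1: neither S nor T by tau
    p = math.exp(-Lam(g12, a12, tau) - Lam(g13, a13, tau))
    # term 2: illness at u < tau, then surviving the 2 -> 3 sojourn
    h = tau / n
    for i in range(n):
        u = (i + 0.5) * h
        inner = Lam(g23, a23, tau - u) * math.exp(theta23 * u)  # model A
        p += h * math.exp(-Lam(g12, a12, u) - Lam(g13, a13, u)) * lam12(u) * math.exp(-inner)
    return p
```

As a check, when all shape parameters equal 1 and $\gamma_{23}=\gamma_{13}$, the death hazard is constant and memoryless, so the decomposition collapses to $P(T>\tau)=\exp(-\gamma_{13}\tau)$ regardless of $\gamma_{12}$.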
Based on the draws of model parameters for a given iteration of the MCMC, we
estimate observed and counterfactual hazards for each individual. After
calculating $\Delta{T_{i}}$ and $\Delta{S_{i}}$ conditional on the set of
$\omega_{i}$, we create a scatterplot of $\Delta T_{i}$ vs. $\Delta S_{i}$ and
draw a loess or linear curve through the points for a single iteration of the
algorithm. Our $\gamma_{0}$ and $\gamma_{1}$ summary quantities are equal to
the intercept and slope of this line (though these quantities would need to be
redefined for a loess curve). This process is repeated for the next set of
draws of model parameters and frailties for all individuals. These quantities
are then averaged over MCMC iterations after a burn-in period.
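The per-iteration line fit and the averaging over post-burn-in iterations can be sketched with a least-squares fit (a loess fit could be substituted); `cep_summaries` and the array layout are hypothetical.

```python
import numpy as np

def cep_summaries(delta_s_draws, delta_t_draws, burn_in=0):
    """Given (n_iter, n_subjects) arrays of Delta S_i and Delta T_i per
    MCMC iteration, fit a least-squares line through the subject cloud
    at each iteration and average the intercept (gamma_0) and slope
    (gamma_1) over the post-burn-in iterations."""
    g0s, g1s = [], []
    for ds, dt in zip(delta_s_draws[burn_in:], delta_t_draws[burn_in:]):
        slope, intercept = np.polyfit(ds, dt, deg=1)  # degree-1 polynomial
        g0s.append(intercept)
        g1s.append(slope)
    return float(np.mean(g0s)), float(np.mean(g1s))
```

If every iteration's cloud lies exactly on $\Delta T_{i}=0.5+2\Delta S_{i}$, the averaged summaries recover $\gamma_{0}=0.5$ and $\gamma_{1}=2$.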
### 4.1 Valid Surrogates under an Illness-Death CEP Curve
As our CEP curve is a fairly complex function of the parameters and frailties,
we empirically investigate what combination of illness-death models, meaning
relationship between $S$ and $T$, leads to CEP plots that align with an
intuitive notion of whether $S$ is a good surrogate for $T$. We primarily
consider the eight scenarios that may exist based on which transitions have
treatment effects (defined as whether or not the counterfactual hazards are
equal) in Table LABEL:tab:sim3a and in an appendix. These scenarios and the
magnitude of the effects determine whether there are marginal treatment
effects on $S$ and $T$.
We characterize the CEP curves under these scenarios using true generating
parameter values to calculate $\Delta T$ and $\Delta S$. In an appendix, we
show scatterplots of $\Delta S_{i}$ vs. $\Delta T_{i}$ for simulated data, for
which the values of the frailties are known. An R Shiny app is also available
at https://emilyroberts.shinyapps.io/id_cep_parameters/ that allows users to
characterize the CEP curve for different parameter values. We also allow for
the user to vary which independence or equivalence assumptions are made about
the frailty terms and the corresponding impact on the CEP curve.
Based on several settings investigated in an appendix, we suggest which data
scenarios should correspond to a decision that the intermediate outcome is in
fact a valid surrogate. We identify that, for a perfect surrogate, treatment
effects should act only through the baseline-to-intermediate-outcome
transition (i.e., $\lambda_{12}^{0}\neq\lambda_{12}^{1}$). In the null case
with no treatment effects (Scenario 1) and in this ideal case (Scenario 2),
the estimated slope is positive, and the intercept is equal to 0. This is
consistent with our consideration of the more flexible Prentice criteria,
which also suggest that hazards from baseline to $S$ should be non-equal
($\lambda_{12}^{0}\neq\lambda_{12}^{1}$) and the hazards from baseline to $T$
should be equal ($\lambda_{13}^{0}=\lambda_{13}^{1}$) across treatment arms.
Largely, small changes in the values of $\rho$ in the correlation matrix of
the frailty terms do not have a major impact on the CEP slope and intercept,
though other settings in the online app demonstrate specific cases where
these correlations may be more consequential.
We can examine the marginal effects on $S$ and $T$ based on the average of
$\Delta S_{i}$ and $\Delta T_{i}$ and via Kaplan Meier curves in the app and
quantities in an appendix. For scenario 1, treatment effects on both outcomes
are zero, which may correspond to a treatment not worth future investigation.
For other scenarios, the marginal effect on $T$ is somewhat small under the
parameter values we are presenting. We did observe that Scenarios 3-8 (denoted
as partial and non-surrogates) produced CEP curves that did not go through the
origin and therefore were invalid. We anticipated differences between perfect,
partial, and non-surrogates would be easily apparent; while the intercepts
did differ, the slope did not drastically change between the different
scenarios. Under the particular parameters we investigated, the slope was
positive for all of the scenarios when the baseline hazard to $T$ was larger
after experiencing $S$ (i.e., the baseline hazard
$\lambda_{0,23}^{z}>\lambda_{0,13}^{z}$, so that death occurs faster after
progression). In other words, the relative magnitude of the baseline hazards
for transition times $T_{12}(z),T_{13}(z),$ and $T_{23}(z)$ for a given
treatment arm influences the slope and intercept of a CEP curve. A possible
explanation for the small differences in slope values across scenarios is that
the y-axis will always be constrained between -1 and 1 since it represents a
difference in two probabilities. This quantity $\Delta T_{i}$ on the y-axis is
a relatively complex function of multiple model parameters that may not change
drastically based on relatively small changes in the baseline hazards.
### 4.2 Additional Scenarios
In addition to considering which hazards are moderated by treatment in the
eight settings above, each combination can be crossed with whether
$\theta_{23}$ and $\kappa_{23}$ are zero versus nonzero in a factorial design.
We briefly considered the former and see in an appendix that incorporating
non-zero values of $\theta_{23}^{z}$ does change the slope and intercept of
the CEP curve. While the settings in Table LABEL:tab:sim3a and the extra settings
through varying $\theta_{23}$ and $\kappa_{23}$ represent a broad range, there
are many other possible scenarios that could be achieved with specific choices
of the parameters. For example, even if a treatment slows the rate of
progression to the surrogate endpoint, it is possible that time to death after
progression may be more rapid on the treatment arm. In our setting, that would
be seen in a positive treatment effect on the transition from baseline to $S$,
but a negative treatment effect from $S$ to $T$ either through increasing the
baseline hazard $\lambda_{23}^{1}$ or a positive value of $\theta_{23}^{1}$.
Another possibility is that the treatment slows the rate of progression,
corresponding to a positive treatment effect from baseline to $S$, but
toxicities or side effects of the treatment cause death from other causes,
yielding a negative treatment effect on the baseline-to-$T$ transition.
More complex study designs might allow for patients to switch to the
active treatment arm after experiencing the surrogate endpoint $S$ which could
be potentially incorporated into our illness death framework by reducing
$\lambda_{23}^{0}$.
## 5 Simulation Study
### 5.1 Simulation Set-up
Here we start with a simulation setting in which we assume each baseline
hazard follows a Weibull distribution, with the shape parameters of the
baseline hazards and the frailty coefficients equal to 1. We conduct a simulation with 200
replicated datasets and $n=600$. Data are generated under simple settings that
follow the $\theta$ parameterization shown in model A. The true values of the
parameters are shown in the simulation results in the first row of the table
of results. Survival times are simulated based on a Weibull baseline hazard
specification (Austin, 2012). We generate treatment effects by allowing the
scale parameters to differ between arms, meaning $\gamma_{jk}^{1}\neq\gamma_{jk}^{0}$.
We simulate the frailties to have mean 0 and a standard deviation of 0.4 and
assume that $\omega_{13}^{z}=\omega_{23}^{z}$ in our primary results settings.
We conduct the estimation procedure described in section 3 from our eight
simulation scenarios, highlighting Scenario 1 with no marginal treatment
effects on either endpoint (a null setting), Scenario 2 where there is a
treatment effect only on $S$ (which we label as a perfect surrogate), and
scenarios 3-8 where treatment effects exist such that we do not expect $S$ to
be a surrogate. Because of non-identifiability due to the close link between
the baseline hazard, frailties, and coefficients associated with the
frailties, we assume during estimation that all $\kappa_{jk}^{z}=1$. In an
appendix, we also conduct sensitivity analyses by varying the assumptions that
$\omega_{12}^{z}\perp\omega_{13}^{z}$ and $\omega_{13}^{z}=\omega_{23}^{z}$.
There we assume that either
$\omega_{12}^{z}\perp\omega_{13}^{z}\perp\omega_{23}^{z}$ or that all three
frailties are correlated within a given counterfactual treatment arm. In this
case we assume $\rho_{T1}=\rho_{T4}=0.95$ and
$\rho_{T3}=\rho_{T2}=\rho_{ST}=\rho_{T}=0.5$, and set $\tau_{S}=1$ and
$\tau_{T}=5$.
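The event-time generation described above can be sketched by inverse-transform sampling of the Weibull cumulative hazard, $T=\{-\log U/(\gamma e^{\kappa\omega})\}^{1/\alpha}$ (Austin, 2012), while respecting the semi-competing structure: a subject reaches $S$ only if the latent $T_{12}$ precedes $T_{13}$. The function names and return layout are illustrative, with the frailty coefficients $\kappa_{jk}$ fixed at 1 as in the estimation.

```python
import math, random

def weibull_time(gam, alp, lin_pred, rng=random):
    """Inverse-transform draw with cumulative hazard
    Lam(t) = gam * t**alp * exp(lin_pred), i.e.
    T = (-log U / (gam * exp(lin_pred)))**(1/alp) (Austin, 2012)."""
    u = 1.0 - rng.random()   # uniform in (0, 1]
    return (-math.log(u) / (gam * math.exp(lin_pred))) ** (1.0 / alp)

def simulate_subject(gam, alp, omega, theta23=0.0, rng=random):
    """One illness-death path; gam, alp, omega are (12, 13, 23) triples
    with the frailty coefficients kappa fixed at 1."""
    t12 = weibull_time(gam[0], alp[0], omega[0], rng)
    t13 = weibull_time(gam[1], alp[1], omega[1], rng)
    if t13 <= t12:
        return {"d_s": 0, "t": t13}                  # death before illness
    t23 = weibull_time(gam[2], alp[2], omega[2] + theta23 * t12, rng)
    return {"d_s": 1, "t12": t12, "t": t12 + t23}    # illness, then death
```

With unit scale and shape parameters and zero frailties, the latent $T_{12}$ and $T_{13}$ are independent unit exponentials, so roughly half of the simulated subjects reach the intermediate state.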
### 5.2 Simulation Results
In this section, we show results of the estimated model parameters as well as
validation quantities, the intercept $\gamma_{0}$ and slope $\gamma_{1}$. The
$\gamma_{0}$ and $\gamma_{1}$ quantities are calculated by fitting a linear
best-fit line through the CEP cloud at each iteration and reporting the
posterior mean of these quantities for each simulated dataset.
Parameter estimates are based on the posterior means and corresponding
measures of variability; the average estimated standard error (SE) and the
standard deviation (SD) of the posterior means are shown for the model
parameters. We run the simulations for 3,000 iterations with 900 burn-in
draws. In addition to trace plots of the parameter draws, we assess the
empirical mean and standard deviation of the estimated frailty terms over the
iterations.
In Figure 3 we show the CEP curve conditional on estimated frailties for one
dataset under Scenario 2. Each point is the posterior mean of ($\Delta S_{i}$,
$\Delta T_{i}$) across MCMC iterations. The posterior values of the slope and
intercept are shown, which convey the amount of variability based on the
posterior coordinates of ($\Delta S_{i}$, $\Delta T_{i}$) for each individual
$i$. We see that the estimated slope and intercept correctly meet our criteria
of a valid surrogate under our proposed set of model assumptions. Though there
is substantial variability in the estimates of $\gamma_{0}$ and $\gamma_{1}$,
the respective posterior mean and credible intervals are -0.018 (-0.078,
0.042) and 0.049 (0.020, 0.078) for this dataset. Furthermore, there is a
marginal effect of the treatment on both $S$ and $T$ for this dataset, as
denoted by the non-zero position of the dashed lines.
In the main set of simulations in Table LABEL:tab:sim3a, the identified
parameters are estimated fairly well and seem to converge based on the
assumptions we have made. We observe that the distribution of the estimated
frailty terms can deviate from the generating distribution with mean zero and
fixed variance. While our method involves prior and proposal distributions for
the frailties, we are not directly enforcing any assumptions about the mean or
variability of the frailty parameters during the estimation algorithm. The
shape of the likelihood for frailty terms, particularly $\omega_{12}^{z}$
terms for individuals with $\delta_{Si}=0$, seems to be fairly flat, so the
draws move around considerably during the algorithm. In these considered
simulations, the credible intervals around $\gamma_{1},\gamma_{0}$ are
somewhat wide for all scenarios. Since an ideal surrogate will have values
$\gamma_{0}=0$ and $\gamma_{1}>0,$ too much uncertainty can make it difficult
to determine the value of the surrogate.
In our sensitivity analyses about the assumptions on the frailty terms, shown
in an appendix, we see some sensitivity to the assumptions being made, such as
increased variability in the subject-specific points. How these factors
influence the CEP curves should be investigated under trial specific contexts.
## 6 Prostate Cancer Example
Our motivating clinical study is a phase III, randomized trial for men with
prostate cancer, NRG/RTOG 9601 (Shipley et al., 2017). The trial enrolled 760
men with recurrently or persistently elevated prostate-specific antigen (PSA)
levels whose prostate had previously been removed by prostatectomy. The two
treatments being compared are post-prostatectomy radiation therapy with or
without antiandrogen therapy. There are 384 and 376 men in each treatment arm.
The two survival endpoints of interest are time to distant metastasis, defined
as radiographic evidence of metastatic cancer, and overall survival (OS).
Notably, composite endpoints such as metastasis-free survival (MFS) are often
evaluated. It has been previously established by The Intermediate Clinical
Endpoints in Cancer of the Prostate (ICECaP) that MFS is a valid surrogate for
OS in the setting of the initial treatment for localized prostate cancer (Xie
et al., 2017). Others have evaluated if MFS is a valid surrogate when
assessing the impact of antiandrogen therapy in recurrent prostate cancer
following post-prostatectomy salvage radiation therapy (Jackson et al., 2020).
However, within our illness-death framework we consider time to distant
metastasis and time to death separately. Covariates in the dataset are also
available, including PSA values at the time of randomization, Gleason score,
and age in grouped categories.
We show in Figure 5 the Kaplan Meier curves for the intermediate and true
outcomes without considering the semi-competing risk as well as the curve for
the transition from $S$ to $T$ for those who experienced distant metastasis.
$S$ may be censored because it was not observed during the study period or
because the terminal event $T$ occurred first. In an appendix, in Figure A6,
we also present the cumulative incidence curve for $S$ considering $T$ as a
semi-competing risk based on the non-parametric Aalen-Johansen estimate of the
cumulative incidence function from the mstate R package (Putter, 2011). The
plots show that the addition of antiandrogen therapy decreases the hazard of
distant metastases and increases the survival probability, but after metastases
the survival probability is reduced and does not appear to be greatly
influenced by whether the antiandrogen therapy was part of the treatment.
### 6.1 Conventional Models
We consider the $z=1$ group to be the treatment group receiving salvage
radiation therapy with antiandrogen therapy, and $z=0$ represents the group
treated without antiandrogen therapy. There is a significant treatment effect of the
additional antiandrogen therapy on time to distant metastasis using a
parametric hazard model with a Weibull baseline hazard ($HR=0.622,p=0.004)$.
There is a marginally significant treatment effect on overall survival when
considering the cause-specific hazard ($HR=0.722,p=0.049)$. As a way to
consider the Prentice criteria, we also fit a model for time to overall
survival adjusting for the occurrence of distant metastases as a time-
dependent covariate. We found that the effect was attenuated toward null
($HR=0.890,p=0.592)$ and no longer statistically significant. Based on the
Kaplan Meier curves and typical survival times, we chose $\tau_{S}=5$ and
$\tau_{T}=8$. We calculate the number of individuals who go through each
transition and experience the events in our illness-death models. In total,
156 patients experienced distant metastases, and 239 total deaths were
observed between the two arms. These numbers are shown in Figure 4.
### 6.2 Surrogacy Evaluation
Here we perform the analysis marginally, without including baseline
covariates. We show an estimated CEP curve based on several assumptions: the
baseline hazard follows an exponential distribution, and we use model A using
$T_{12}$ as a time-varying covariate where we assume
$\kappa_{12}^{z}=\kappa_{13}^{z}=\kappa_{23}^{z}=1$. Table 2 shows the
posterior mean and corresponding 95% credible interval for each parameter
being estimated. We plot the posterior mean of $\Delta S_{i}$ and $\Delta
T_{i}$ for each individual across iterations in a CEP plot. We also show the
estimated slope and intercept lines on the CEP curve for each iteration of the
MCMC chain to assess the variability of the estimates of these validation
quantities.
Based on this example dataset and CEP curve in Figure 6, the vertical and
horizontal lines for the marginal treatment effects are separated from zero,
and the posterior mean for the intercept term $\gamma_{0}$ is -0.036 with 95%
credible interval (-0.152, 0.080). For the slope $\gamma_{1}$, the posterior
mean is 0.076 with 95% credible interval (0.017, 0.135). Based on these
estimates, we would conclude that the slope $\gamma_{1}$ is positive, and the
estimated intercept $\gamma_{0}$ is near zero since the credible interval for
$\gamma_{0}$ does include 0. These results would indicate that the surrogate
seems valid, though the credible interval for $\gamma_{0}$ is somewhat wide.
We also conducted a sensitivity analysis where instead of assuming
$\omega_{13}^{z}=\omega_{23}^{z}$ and that
$\omega_{12}^{z}\perp\omega_{13}^{z}$, we assumed that all six counterfactual
frailties were correlated within an individual. These results gave reasonably
similar conclusions, with an estimated $\gamma_{0}$ of -0.046 (-0.157, 0.073)
and estimated $\gamma_{1}$ of 0.108 (0.045, 0.195).
## 7 Discussion and Conclusion
In this work, we have considered how to validate surrogate endpoints when
trial outcomes are time-to-event using principal stratification and illness-
death models. We believe the illness-death framework is foundational to
modeling these data, though a single, optimal estimand corresponding to the
model is less obvious. We have provided examples and an online app to explore
CEP curves under different data settings. While in previous work the values of
the CEP curve can be written in a closed, analytic form when the outcomes are
Gaussian (Conlon et al., 2014a; Roberts et al., 2021), it is necessary to
define and empirically assess what an ideal CEP curve looks like for time-to-
event data. A novel distinction in this work is that in the Gaussian case, the
CEP conditions on $S_{i}(1)-S_{i}(0)=s$, where the conditioning is on a
contrast between potentially observable values, $S_{i}(1)$ and $S_{i}(0)$. In
this paper, we are looking at the contrast between $\Lambda_{12i}^{z}$ and
$\Lambda_{12i}^{1-z}$, which is a contrast between distributions.
While not the case in our considered scenarios, some extrapolation may be
required to determine if the CEP curve goes through the origin of the plot
depending on the size of the treatment effect on $S$. The subject-specific
points may not appear in all four quadrants of the plot. There is an
interesting connection regarding individual specific $\Delta S_{i}$ and
$\Delta T_{i}$ within the quadrants of the graph that has been considered
across trials in the meta-analytic setting (Elliott et al., 2015). In
particular, certain subject-specific coordinates may suggest that the
treatment has a beneficial effect on the surrogate endpoint but a detrimental
effect on the true outcome for certain individuals. This may be informative
when considering the possibility of the surrogate paradox (VanderWeele, 2013).
There are several areas for sensitivity analyses and exploration of
identifiability for surrogacy validation (Ghosh, 2012). While the variance of
the frailty should be identifiable by including sufficient covariates (Gao,
2012; Putter et al., 2015), it may still be difficult to accurately estimate
frailty terms in a complex model. In our proposed models, we include a prior
distribution for the variance of the frailty terms but do not assume the
variance is known. Allowing for too much flexibility in the models may result
in non-identifiability of parameters, which can lead to computational
problems when trying to estimate the coefficients associated with the
frailties. We believe our assumptions that $\kappa_{jk}=1$ or that
$\omega_{13}^{z}=\omega_{23}^{z}$ about the frailty terms are justifiable for
this data example. They also help with computation during estimation, but they
are still potentially strong assumptions. Relaxing the assumption that the
frailties going into the $T$ state are equal (ie
$\omega_{13}^{z}=\omega_{23}^{z}$) may impact identifiability since there will
be less information available to estimate these terms. To the extent that
frailties can be estimated from a single event time per person, the data might
inform these assumptions; that is, the assumption is testable insofar as the
frailties can be estimated well. We might try to assess the identifiability
of frailty terms in the proposed causal model by comparing the prior and
posterior distributions for the frailty terms (Gao, 2012). Standard
convergence diagnostics can be applied to the parameters, and more complex
algorithms or different distributional assumptions about the frailties may
alleviate computational problems (e.g., Clayton, 1991; Wen et al., 2016). To
assess robustness, our models could be evaluated under
model misspecification. To increase the flexibility of the method, we could
also consider fitting a non-linear loess curve through the points on the CEP
plot rather than a linear fit. We could also compare our proposed methods to
copula models (Taylor et al., 2015); these Gaussian copula models have the
potential to extend the closed-form correlation structure of our previous work
while incorporating conditional independence assumptions on the appropriate
correlation scale.
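As a concrete illustration of the prior-versus-posterior comparison suggested above for the frailty terms, one can estimate the overlap between prior and posterior draws of a frailty variance: an overlap near 1 suggests the data provide little Bayesian learning about that parameter. The sketch below is a minimal, self-contained version with synthetic draws; the Gamma prior and the two stand-in "posterior" samples are hypothetical and are not taken from the fitted model.

```python
import numpy as np

def overlap(prior_draws, post_draws, grid_size=200):
    """Histogram-based overlap coefficient between two sampled densities.

    Returns a value in [0, 1]; values near 1 suggest the posterior has
    barely moved from the prior, i.e. weak identifiability.
    """
    lo = min(prior_draws.min(), post_draws.min())
    hi = max(prior_draws.max(), post_draws.max())
    edges = np.linspace(lo, hi, grid_size + 1)
    p, _ = np.histogram(prior_draws, bins=edges, density=True)
    q, _ = np.histogram(post_draws, bins=edges, density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(p, q)) * width)

rng = np.random.default_rng(1)
# Hypothetical Gamma prior on a frailty variance (mean 1)
prior = rng.gamma(shape=2.0, scale=0.5, size=20_000)
# Case 1: informative data -- posterior concentrates near 0.4
post_informative = rng.gamma(shape=40.0, scale=0.01, size=20_000)
# Case 2: weakly identified -- posterior stays close to the prior
post_weak = rng.gamma(shape=2.2, scale=0.45, size=20_000)

print(overlap(prior, post_informative))  # small: data are informative
print(overlap(prior, post_weak))         # near 1: little Bayesian learning
```

In practice the posterior draws would come from the MCMC output, and the same check can be applied separately to each frailty-related parameter.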
We could fit conditional surrogacy validation models and include baseline PSA,
age, and Gleason score as baseline covariates. It is likely that controlling
for covariates will change the estimated frailties, as frailty terms capture
unexplained heterogeneity in treatment effects which would then be partially
explained by the covariates. Based on these analyses, we could also determine
whether the surrogate is valid for certain subgroups of patients (Roberts et
al., 2021). Different covariates may be more important in different transition
models. For example, we may expect age to be more important for the direct
transition from baseline to death, while baseline PSA and Gleason score will
likely be more important for time to distant metastases. Model selection could
lower the number of parameters to estimate (Reeder et al., 2022).
In the future, we could change the model parameterization from our proposal,
which uses a time-varying covariate in the transition model from $S$ to $T$,
to the alternative Model B or a different structure. We may also extend beyond
the proposed illness-death model to a different or more complex multi-state
model, depending on the endpoints being evaluated. In some disease areas,
accounting for individuals being cured may be appropriate (Conlon et al.,
2014b). We have assumed here that time to $S$ is known, but it may be subject
to interval censoring (Zeng et al., 2018). In some cases we may even have
exact information about time to $T$ from death registries without knowing
whether $S$ occurred (Beesley and Taylor, 2019). Different models, definitions
of the endpoint, and corresponding $\Delta S_{i}$ may change our determination
of whether the surrogate is valid, and the assumptions made about the models
and frailties may be more appropriate in certain contexts.
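To make the semi-competing risk structure of the illness-death model concrete, the sketch below simulates one treatment arm under the simplest version of such a model: constant (exponential) transition hazards and a single mean-one gamma frailty acting multiplicatively on every transition. All rates and the frailty variance are hypothetical placeholders, not estimates from the fitted model.

```python
import numpy as np

def simulate_arm(n, lam12, lam13, lam23, theta, seed=0):
    """Simulate n subjects from a shared-gamma-frailty illness-death model.

    lam12: baseline -> illness (S); lam13: baseline -> death (T);
    lam23: illness -> death; theta: frailty variance (mean-1 gamma frailty).
    Returns (time_S, time_T, had_S) with time_S = np.inf if S never occurs.
    """
    rng = np.random.default_rng(seed)
    omega = rng.gamma(shape=1.0 / theta, scale=theta, size=n)  # E[omega] = 1
    # Latent competing times out of the baseline state, given the frailty
    t12 = rng.exponential(1.0 / (lam12 * omega))
    t13 = rng.exponential(1.0 / (lam13 * omega))
    had_S = t12 < t13
    time_S = np.where(had_S, t12, np.inf)
    # After S, the residual time to T follows the 2 -> 3 hazard
    t23 = rng.exponential(1.0 / (lam23 * omega))
    time_T = np.where(had_S, t12 + t23, t13)
    return time_S, time_T, had_S

time_S, time_T, had_S = simulate_arm(
    n=10_000, lam12=0.05, lam13=0.02, lam23=0.10, theta=0.5)
print(had_S.mean())  # fraction entering the illness state, ~ lam12/(lam12+lam13)
print((time_T[had_S] >= time_S[had_S]).all())  # T never precedes S
```

Subjects with $T_{12}<T_{13}$ pass through the illness state before death; the rest transition directly from baseline to the terminal state, so $S$ is never observed for them.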
While we believe the illness-death model is natural for modeling these data,
different estimands could be considered for validation. In the CEP plot we
have used the ratio of the cumulative hazards on the horizontal axis as a
measure of the treatment effect on $S$. This was chosen because it is
explicitly related to the transition from the baseline state to the state of
experiencing the surrogate, and in most settings, including our prostate
cancer one, the primary way in which the treatment is expected to work is by
preventing or delaying the occurrence of the events in the intermediate state.
There are other possible choices for $\Delta S$ on the horizontal axis. One
would be based on the difference between the two arms in the cumulative
incidence of $S$ by time $\tau_{S}$; another could be based on the composite
endpoint of either $S$ or $T$ occurring by time $\tau_{S}$. Both of these can
be calculated from the illness-death model
parameter estimates, but both are also impacted by the transition rate from
the baseline state to the terminal state.
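Under constant transition hazards, the three candidate measures of the treatment effect on $S$ discussed above have closed forms, which makes their comparison easy to sketch. The rates below are hypothetical placeholders; with Weibull or other baselines, the corresponding cumulative hazards by $\tau_{S}$ would replace the constant-hazard expressions.

```python
import numpy as np

def ci_S(lam12, lam13, tau):
    """Cumulative incidence of the surrogate S by time tau (constant hazards)."""
    total = lam12 + lam13
    return lam12 / total * (1.0 - np.exp(-total * tau))

def composite(lam12, lam13, tau):
    """Probability that either S or T occurs by time tau."""
    return 1.0 - np.exp(-(lam12 + lam13) * tau)

# Hypothetical per-year rates for control (z=0) and treated (z=1) arms
lam12_0, lam13_0 = 0.05, 0.02
lam12_1, lam13_1 = 0.03, 0.02
tau = 5.0

# (i) ratio of cumulative hazards for baseline -> S (tau cancels when hazards are constant)
delta_S_hazard = lam12_1 / lam12_0
# (ii) difference in cumulative incidence of S by tau
delta_S_ci = ci_S(lam12_1, lam13_1, tau) - ci_S(lam12_0, lam13_0, tau)
# (iii) difference in the composite (S or T) probability by tau
delta_S_comp = composite(lam12_1, lam13_1, tau) - composite(lam12_0, lam13_0, tau)

print(delta_S_hazard, delta_S_ci, delta_S_comp)
```

Note that only measure (i) is free of $\lambda_{13}$: recomputing (ii) and (iii) with arms differing only in $\lambda_{13}$ would still give nonzero differences, illustrating why those alternatives are impacted by the transition rate from the baseline state to the terminal state.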
It would be interesting to evaluate this illness-death model when $\Delta
S_{i}$ is based on a composite endpoint with $T$. For example, in the prostate
cancer setting, distant metastases-free survival has been considered as a
surrogate endpoint for overall survival. Other potential surrogates have been
considered such as biochemical recurrence or time to local recurrence, and an
alternative true clinical outcome could be prostate cancer-specific survival.
In our setting, it is likely that individuals may only die from prostate
cancer if they experience distant metastases, so there may be fewer
individuals transitioning directly from baseline to cancer-specific death
compared to baseline to death from other causes.
In this paper we have considered the situation of a single trial, in contrast
to the meta-analysis setting in which data from multiple trials are analyzed.
We developed an approach to assessing whether $S$ is a valid surrogate for $T$
from a causal perspective. The hope would be that if $S$ is a good surrogate
for $T$ in one trial, then it would also be a good surrogate for $T$ in other
trials with similar treatments. The fact that the surrogacy measure is based
on causal concepts, not just measures of association, may make it more likely
to transport from one trial to the next. In previous work, in a different data
setting, we found similar CEP plots across four different treatment
comparisons (Taylor et al., 2015). Furthermore, in this paper we have taken a
mechanistic approach to disease progression, implicit in the illness-death
model. This illness-death structure does transport from one trial to the next,
so we believe our approach may assess surrogates in a way that is more
generalizable across treatments than methods that rely on composite endpoints
and do not require the illness-death structure. This comparison, the concept
of transportability, and the potential need for replication across several
trials remain future work (Pearl and Bareinboim, 2022).
There are other directions for extending this work, particularly at the
intersection of causal inference and survival analysis, where the
interpretation of hazard ratios with multiple time-to-event endpoints is
delicate. Others (Gran et al., 2015; Valeri et al., 2021) explore causal tools
for multi-state models such as inverse probability weighting, G-computation,
and manipulation of hypothetical transition intensities. Further directions
for future work are to formally compare the proposed models with the similar
structures of the Prentice criteria, models using mediation strategies, or
other causal methods.
## References
* Austin, 2012 Austin, P. C. (2012). Generating survival times to simulate Cox proportional hazards models with time-varying covariates. Statistics in Medicine, 31(29):3946–3958.
* Balan and Putter, 2019 Balan, T. A. and Putter, H. (2019). frailtyEM: An R package for estimating semiparametric shared frailty models. Journal of Statistical Software, 90(7):1–29.
* Beesley and Taylor, 2019 Beesley, L. J. and Taylor, J. M. (2019). EM algorithms for fitting multistate cure models. Biostatistics, 20(3):416–432.
* Beyer et al., 2020 Beyer, U., Dejardin, D., Meller, M., Rufibach, K., and Burger, H. U. (2020). A multistate model for early decision-making in oncology. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 62(3):550–567.
* Bühler et al., 2022 Bühler, A., Cook, R. J., and Lawless, J. F. (2022). Multistate models as a framework for estimand specification in clinical trials of complex processes. arXiv preprint arXiv:2209.13658.
* Clayton, 1991 Clayton, D. G. (1991). A Monte Carlo method for Bayesian inference in frailty models. Biometrics, pages 467–485.
* Comment et al., 2019 Comment, L., Mealli, F., Haneuse, S., and Zigler, C. (2019). Survivor average causal effects for continuous time: a principal stratification approach to causal inference with semicompeting risks. arXiv preprint arXiv:1902.09304.
* Conlon et al., 2017a Conlon, A., Taylor, J., and Elliott, M. (2017a). Surrogacy assessment using principal stratification and a Gaussian copula model. Statistical Methods in Medical Research, 26(1):88–107.
* Conlon et al., 2014a Conlon, A. S., Taylor, J. M. G., and Elliott, M. R. (2014a). Surrogacy assessment using principal stratification when surrogate and outcome measures are multivariate normal. Biostatistics, 15(2):266–283.
* Conlon et al., 2017b Conlon, A. S. C., Taylor, J. M. G., Li, Y., Diaz-Ordaz, K., and Elliott, M. R. (2017b). Links between causal effects and causal association for surrogacy evaluation in a Gaussian setting. Statistics in Medicine, 36(7):4243–4265.
* Conlon et al., 2014b Conlon, A. S. C., Taylor, J. M. G., and Sargent, D. J. (2014b). Multi-state models for colon cancer recurrence and death with a cured fraction. Statistics in Medicine, 33(10):1750–1766.
* de Castro et al., 2015 de Castro, M., Chen, M. H., and Zhang, Y. (2015). Bayesian path specific frailty models for multi-state survival data with applications. Biometrics, 71(3):760–771.
* Elliott et al., 2015 Elliott, M. R., Conlon, A. S., Li, Y., Kaciroti, N., and Taylor, J. M. G. (2015). Surrogacy marker paradox measures in meta-analytic settings. Biostatistics, 16(2):400–412.
* Emura et al., 2017 Emura, T., Nakatochi, M., Murotani, K., and Rondeau, V. (2017). A joint frailty-copula model between tumour progression and death for meta- analysis. Statistical Methods in Medical Research, 26:2649–2666.
* Frangakis and Rubin, 2002 Frangakis, C. and Rubin, D. (2002). Principal stratification in causal inference. Biometrics, 58(1):21–29.
* Gabriel and Gilbert, 2014 Gabriel, E. E. and Gilbert, P. B. (2014). Evaluating principal surrogate endpoints with time-to-event data accounting for time-varying treatment efficacy. Biostatistics, 15(2):251–265.
* Gabriel et al., 2015 Gabriel, E. E., Sachs, M. C., and Gilbert, P. B. (2015). Comparing and combining biomarkers as principal surrogates for time-to-event clinical endpoints. Statistics in Medicine, 34(3):76–105.
* Gao, 2012 Gao, X. (2012). Causal Modeling with Principal Stratification to Assess Effects of Treatment with Partial Compliance, Noncompliance, and Principal Surrogacy in Longitudinal and Time-to-Event Settings. PhD thesis, University of Michigan.
* Gelman et al., 1996 Gelman, A., Roberts, G. O., and Gilks, W. R. (1996). Efficient Metropolis jumping rules. Bayesian Statistics, 5:42.
* Ghosh, 2012 Ghosh, D. (2012). A causal framework for surrogate endpoints with semi-competing risks data. Statistics & Probability Letters, 82(11):1898–1902.
* Ghosh et al., 2012 Ghosh, D., Taylor, J. M., and Sargent, D. J. (2012). Meta-analysis for surrogacy: Accelerated failure time models and semicompeting risks modeling. Biometrics, 68(1):226–232.
* Gilbert and Hudgens, 2008 Gilbert, P. and Hudgens, M. (2008). Evaluating candidate principal surrogate endpoints. Biometrics, 64(4):1146–1154.
* Gran et al., 2015 Gran, J. M., Lie, S. A., Øyeflaten, I., Borgan, Ø., and Aalen, O. (2015). Causal inference in multi-state models – sickness absence and work for 1145 participants after work rehabilitation. BMC Public Health, 15(1):1–16.
* Huang, 2021 Huang, Y. T. (2021). Causal mediation of semicompeting risks. Biometrics, 77(4):1143–1154.
* Jackson et al., 2022 Jackson, W. C., Tang, M., Schipper, M., Sandler, H. M., Zumsteg, Z. S., Efstathiou, J. A., Shipley, W. U., Seiferheld, W., Lukka, H., Bahary, J. P., Zietman, A. L., Pisansky, T. M., Zeitzer, K. L., Hall, W. A., Dess, R. T., Lovett, R. D., Balogh, A., Feng, F. Y., and Spratt, D. E. (2022). Biochemical failure is not a surrogate end point for overall survival in recurrent prostate cancer: Analysis of nrg oncology/rtog 9601. Journal of Clinical Oncology, pages JCO–21.
* Joffe and Greene, 2009 Joffe, M. M. and Greene, T. (2009). Related causal frameworks for surrogate outcomes. Biometrics, 65(2):530–538.
* Kemp and Prasad, 2017 Kemp, R. and Prasad, V. (2017). Surrogate endpoints in oncology: when are they acceptable for regulatory and clinical decisions, and are they currently overused? BMC medicine, 15(1):1–7.
* Li and Zhang, 2015 Li, Y. and Zhang, Q. (2015). A Weibull multi-state model for the dependence of progression-free survival and overall survival. Statistics in Medicine, 34(17):2497–2513.
* Little and Rubin, 2000 Little, R. and Rubin, D. (2000). Causal effects in clinical and epidemiological studies via potential outcomes: concepts and analytical approaches. Annual Review of Public Health, 21(1):121–145.
* O’Quigley and Flandre, 2012 O’Quigley, J. and Flandre, P. (2012). Discussion on “meta-analysis for surrogacy: Accelerated failure time models and semicompeting risks modeling”. Biometrics, 68(1):242–245.
* Parast et al., 2017 Parast, L., Cai, T., and Tian, L. (2017). Evaluating surrogate marker information using censored data. Statistics in Medicine, 36(11):1767–1782.
* Parmar et al., 2008 Parmar, M. K., Barthel, F. M. S., Sydes, M., Langley, R., Kaplan, R., Eisenhauer, E., and Qian, W. (2008). Speeding up the evaluation of new agents in cancer. Journal of the National Cancer Institute, 100(17):1204–1214.
* Pearl and Bareinboim, 2022 Pearl, J. and Bareinboim, E. (2022). External validity: From do-calculus to transportability across populations. In Probabilistic and Causal Inference: The Works of Judea Pearl, pages 451–482.
* Prentice, 1989 Prentice, R. (1989). Surrogate endpoints in clinical trials: definition and operational criteria. Statistics in Medicine, 8(4):431–40.
* Putter, 2011 Putter, H. (2011). Tutorial in biostatistics: Competing risks and multi-state models Analyses using the mstate package. Leiden University Medical Center, Department of Medical Statistics and Bioinformatics. Online Tutorial, Leiden.
* Putter and van Houwelingen, 2015 Putter, H. and van Houwelingen, H. C. (2015). Frailties in multi-state models: Are they identifiable? Do we need them? Statistical Methods in Medical Research, 24(6):675–692.
* Qin et al., 2008 Qin, L., Gilbert, P. B., Follmann, D., and Li, D. (2008). Assessing surrogate endpoints in vaccine trials with case-cohort sampling and the Cox model. The Annals of Applied Statistics, 2(1):386.
* R Core Team, 2021 R Core Team (2021). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
* Reeder et al., 2022 Reeder, H. T., Lu, J., and Haneuse, S. (2022). Penalized estimation of frailty-based illness-death models for semi-competing risks. arXiv preprint arXiv:2202.00618.
* Roberts et al., 2021 Roberts, E. K., Elliott, M. R., and Taylor, J. M. (2021). Incorporating baseline covariates to validate surrogate endpoints with a constant biomarker under control arm. Statistics in Medicine, 40(29):6605–6618.
* Roberts et al., 2022 Roberts, E. K., Elliott, M. R., and Taylor, J. M. (2022). Solutions for surrogacy validation with longitudinal outcomes for a gene therapy. Biometrics.
* Rondeau and Gonzalez, 2005 Rondeau, V. and Gonzalez, J. R. (2005). Frailtypack: a computer program for the analysis of correlated failure time data using penalized likelihood estimation. Computer Methods and Programs in Biomedicine, 80(2):154–164.
* Sahu et al., 1997 Sahu, S. K., Dey, D. K., Aslanidou, H., and Sinha, D. (1997). A Weibull regression model with gamma frailties for multivariate survival data. Lifetime Data Analysis, 3(2):123–137.
* Shipley et al., 2017 Shipley, W. U., Seiferheld, W., Lukka, H. R., et al. (2017). Radiation with or without antiandrogen therapy in recurrent prostate cancer. New England Journal of Medicine, 376(5):417–428.
* Sofeu et al., 2019 Sofeu, C. L., Emura, T., and Rondeau, V. (2019). One-step validation method for surrogate endpoints using data from multiple randomized cancer clinical trials with failure-time endpoints. Statistics in Medicine, 38(16):2928–2942.
* Sofeu et al., 2020 Sofeu, C. L., Emura, T., and Rondeau, V. (2020). A joint frailty-copula model for meta-analytic validation of failure time surrogate endpoints in clinical trials. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 63(2):423–446.
* Suresh et al., 2017 Suresh, K., Taylor, J. M., Spratt, D. E., Daignault, S., and Tsodikov, A. (2017). Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model. Biometrical Journal: Journal of Mathematical Methods in Biosciences, 59(6):1277–1300.
* Tanaka et al., 2017 Tanaka, S., Matsuyama, Y., and Ohashi, Y. (2017). Validation of surrogate endpoints in cancer clinical trials via principal stratification with an application to a prostate cancer trial. Statistics in Medicine, 36(19):2963–2977.
* Taylor et al., 2015 Taylor, J. M. G., Conlon, A. S., and Elliott, M. R. (2015). Surrogacy assessment using principal stratification with multivariate normal and gaussian copula models. Clinical Trials, 12(4):317–322.
* Valeri et al., 2021 Valeri, L., Proust-Lima, C., Fan, W., Chen, J. T., and Jacqmin-Gadda, H. (2021). A multistate approach for mediation analysis in the presence of semi-competing risks with application in cancer survival disparities. arXiv preprint arXiv:2102.13252.
* Vandenberghe et al., 2018 Vandenberghe, S., Duchateau, L., Slaets, L., Bogaerts, J., and Vansteelandt, S. (2018). Surrogate marker analysis in cancer clinical trials through time-to-event mediation techniques. Statistical Methods in Medical Research, 27(11):3367–3385.
* VanderWeele, 2013 VanderWeele, T. J. (2013). Surrogate measures and consistent surrogates. Biometrics, 69(3):561–565.
* Weir et al., 2021 Weir, I. R., Rider, J. R., and Trinquart, L. (2021). Counterfactual mediation analysis in the multistate model framework for surrogate and clinical time-to-event outcomes in randomized controlled trials. Pharmaceutical Statistics, 21(1):163–175.
* Wen et al., 2016 Wen, S., Huang, X., Frankowski, R. F., Cormier, J. N., and Pisters, P. (2016). A bayesian multivariate joint frailty model for disease recurrences and survival. Statistics in Medicine, 35(26):4794–4812.
* Xie et al., 2017 Xie, W., Regan, M. M., Buyse, M., et al. (2017). Metastasis-free survival is a strong surrogate of overall survival in localized prostate cancer. Journal of Clinical Oncology, 35(27):3097–3104.
* Xu et al., 2010 Xu, J., Kalbfleisch, J. D., and Tai, B. (2010). Statistical analysis of illness-death processes and semicompeting risks data. Biometrics, 66(3):716–725.
* Xu et al., 2020 Xu, Y., Scharfstein, D., Moeller, P., and Daniels, M. (2020). A bayesian nonparametric approach for evaluating the causal effect of treatment in randomized trials with semi-competing risks. Biostatistics, 23(1):34–49.
* Zeng et al., 2018 Zeng, L., Cook, R. J., and Lee, K.-A. (2018). Design of cancer trials based on progression-free survival with intermittent assessment. Statistics in Medicine, 37(12):1947–1959.
* Zhang and Rubin, 2003 Zhang, J. L. and Rubin, D. B. (2003). Estimation of causal effects via principal stratification when some outcomes are truncated by “death”. Journal of Educational and Behavioral Statistics, 28(4):353–368.
* Zhang et al., 2014 Zhang, Y., Chen, M. H., Ibrahim, J. G., Zeng, D., Chen, Q., Pan, Z., and Xue, X. (2014). Bayesian gamma frailty models for survival data with semi-competing risks and treatment switching. Lifetime Data Analysis, 20(1):76–105.
## Conflict of Interest Statement:
None to report.
## Acknowledgements
We would like to acknowledge NRG for the RTOG data and thoughtful feedback by
Drs. Matthew Schipper, Walter Dempsey, and Ben Hansen on this work.
## 8 Figures and Tables
Figure 1: Counterfactual illness-death models for baseline, illness ($S$), and death ($T$). The potential pathways are labeled with the gap time $T_{jk}(z)$ and the corresponding transition intensity $\lambda_{jk}^{z}$ for each treatment arm.
Figure 2: This diagram demonstrates the relationships between the parameters and data in the proposed model (model A, assuming that $\omega_{13}^{z}=\omega_{23}^{z}$).
| $\lambda_{12}^{0}=\lambda_{12}^{1}$ | $\lambda_{13}^{0}=\lambda_{13}^{1}$ | $\lambda_{23}^{0}=\lambda_{23}^{1}$ | Surrogacy
---|---|---|---|---
Scenario 1 | T | T | T | Null Case
Scenario 2 | F | T | T | Perfect
Scenario 3 | F | T | F | Partial
Scenario 4 | F | F | T | Partial
Scenario 5 | F | F | F | Partial
Scenario 6 | T | F | F | Not a surrogate
Scenario 7 | T | T | F | Not a surrogate
Scenario 8 | T | F | T | Not a surrogate
Table 1: Eight possible scenarios of which pathways in the illness-death
models exhibit treatment effects based on the causal hazards. T denotes true
and F denotes false. The right-hand column represents an intuitive notion of
whether $S$ is a good surrogate for $T$.
Figure 3: Example of an estimated CEP curve, conditional on frailties, for a
single simulated dataset under Scenario 2.
Figure 4: Counterfactual illness-death models for baseline, illness ($S$), and
death ($T$) with the number of individuals experiencing the events in each
transition for the prostate cancer trial: control arm, $T_{12}(0)$: 93,
$T_{13}(0)$: 77, $T_{23}(0)$: 54; treatment arm, $T_{12}(1)$: 63,
$T_{13}(1)$: 74, $T_{23}(1)$: 34.
Figure 5: Kaplan-Meier curves for the intermediate and true outcomes
demonstrating significant treatment effects in the prostate cancer trial. We
also show the Kaplan-Meier curve for the transition from $S$ to $T$ among
those who experienced $S$.
Parameter | $\gamma_{0}$ | $\gamma_{1}$ | $\gamma_{12}^{0}$ | $\gamma_{13}^{0}$ | $\gamma_{23}^{0}$ | $\gamma_{12}^{1}$ | $\gamma_{13}^{1}$ | $\gamma_{23}^{1}$ | $\theta_{23}^{0}$ | $\theta_{23}^{1}$
---|---|---|---|---|---|---|---|---|---|---
Posterior Mean | -0.036 | 0.076 | 0.018 | 0.018 | 0.172 | 0.013 | 0.015 | 0.266 | 0.097 | 0.035
SE | 0.059 | 0.030 | 0.002 | 0.002 | 0.180 | 0.002 | 0.002 | 0.371 | 0.248 | 0.243
Table 2: Parameter estimates for the prostate cancer data example. The
posterior mean and estimated standard error are shown for each parameter. All
$\alpha_{jk}$ and $\kappa_{jk}$ are set to 1.
Figure 6: Causal effect predictiveness plot for the motivating prostate cancer
trial dataset. Each point represents the posterior mean of $\Delta S_{i}$ and
$\Delta T_{i}$ for an individual. The collection of linear best-fit lines in
gray represents the posterior slope $\gamma_{1}$ and intercept $\gamma_{0}$
evaluated at each iteration of the MCMC. The posterior marginal effects on $S$
and $T$ are shown as red dotted lines. No covariates are considered in this
model.
Instituto Nacional de Pesquisas Espaciais, Av. dos Astronautas 1758 – Jardim
da Granja, São José dos Campos, SP 12227–010, Brazil
# Potential contributions of Pop III and intermediate-mass Pop II stars to
cosmic chemical enrichment
Lia C. Corazza, Oswaldo D. Miranda, and Carlos A. Wuensche
(Received ; accepted )
###### Abstract
Context. We propose a semi-analytic model that is developed to understand the
cosmological evolution of the mean metallicity in the Universe. In particular,
we study the contributions of Population III (Pop III) and Population II (Pop
II) stars to the production of Fe, Si, Zn, Ni, P, Mg, Al, S, C, N, and O.
Aims. We aim to quantify the roles of two different models in the chemical
enrichment of the Universe. The first model (A) considers stars with both Pop
III and Pop II yields. The second model (B) involves yields for Pop II stars
only.
Methods. We start by describing the cosmic star formation rate (CSFR) through
an adaptation of a model developed within the hierarchical scenario of
structure formation with a Press-Schechter-like formalism. We then couple the
CSFR to the standard chemical evolution framework to
investigate the course of chemical evolution on a cosmological basis.
Calculations start at redshift $z\sim 20$, and we compare the results of our
two models with data from damped Lyman-$\alpha$ systems (DLAs), and globular
clusters (GCs).
Results. Our main results indicate that metal production in the Universe began
very early, increasing quickly with the formation of the first stars. When
comparing results for [Fe/H] with observations from GCs, we find that the
yields of Pop II stars are not enough to explain the observed chemical
abundances, requiring stars with physical properties similar to those expected
of Pop III stars.
Conclusions. Our semi-analytic model can deliver consistent results for the
evolution of cosmic metallicities. Our results show that the chemical
enrichment in the early Universe is rapid, and at redshift $\sim 12.5$, the
metallicity reaches $10^{-4}\,Z_{\sun}$ for the model that includes Pop III
stars. In addition, we explore values for the initial mass function (IMF)
within the range $[0.85,1.85]$.
###### Key Words.:
cosmology: observations — cosmology: theory — dark ages, reionization, first
stars — large-scale structure of Universe — stars: Population II — stars:
Population III
## 1 Introduction
In order to understand the chemical evolution of the entire Universe, it is
vital that we understand the global mechanisms that have dominated the
production of chemical elements since the Big Bang. Primordial nucleosynthesis
is responsible for the synthesis of deuterium, $^{3}$He, $^{4}$He, and traces
of $^{7}$Li, and it ceased after the first few minutes of the Universe's
existence.
After that period, a global chemical enrichment would resume only when stellar
nucleosynthesis started inside the nuclei of stars.
Two classes of stars are of primary interest for the chemical evolution
scenario: primordial, metal-free stars, known as first stars or Population III
(Pop III) stars, and second-generation, more enriched stars, known as
Population II (Pop II) stars. It is believed that Pop III stars had very
unusual properties, and despite intense observational efforts, these stars
have not yet been observed, although a few candidates have been proposed
(e.g., Kashikawa et al., 2012; Sobral et al., 2015; Vanzella et al., 2020).
Thus, researchers have been combining efforts to build consistent modeling of
these stars in recent decades (Heger & Woosley, 2002; Schaerer, 2002; Chieffi
& Limongi, 2004; Heger & Woosley, 2010; Takahashi et al., 2018). Results
indicate that they would have very large masses, between $100$ and
$200\,\mathrm{M}_{\sun}$, or even higher, around $500$ to
$1000\,\mathrm{M}_{\sun}$ (Ohkubo et al., 2006), mainly due to the lack of
metals in the gas, making cooling processes very inefficient (Galli & Palla,
2013; Hirano & Yoshida, 2013).
Also, questions about their initial mass function (IMF) and their role in
the reionization of the Universe (Abel et al., 2000; Nakamura & Umemura, 2001;
Gnedin & Ostriker, 1997; Cojazzi et al., 2000; Tumlinson & Shull, 2000; Ciardi
et al., 2001; Loeb & Barkana, 2001; Venkatesan et al., 2003; Tumlinson et al.,
2004; Barkana, 2006), among other properties, have been discussed to better
understand the physics of this high-mass stellar population.
In terms of chemical production, the above results indicate that these stars
were substantially important. For instance, Pop III stars with masses from $140$ to
$260\,\textrm{M}_{\sun}$ produced huge amounts of metals. They ended their
lives as pair-instability supernovae (PISNe), injecting highly enriched
material back into the interstellar medium (ISM) and intergalactic medium
(IGM), and leaving no remnants behind after a complete disruptive process
(Heger & Woosley, 2002; Takahashi et al., 2018).
After the first generation of Pop III stars started to die, injecting large
amounts of metals into the ISM and IGM, Pop II stars started to form. With
more and more enriched material available, cooling processes started to become
more efficient, giving rise to less massive stars with physical properties
close to those observed today, which have also been extensively modeled, mainly according
to their masses and metallicities (Chieffi & Limongi, 2004; Kobayashi, 2005;
Campbell & Lattanzio, 2008; Karakas, 2010; Doherty et al., 2013, 2014).
Cosmological chemical evolution models have also been largely explored. There
are several different models that seek to evaluate different aspects related
to the chemical enrichment, such as the evolution of the mass-metallicity
relation (Ma et al., 2016; Torrey et al., 2019), the establishment of a
critical value for the metallicity of the Universe, enabling the transition
from Pop III to Pop II stars (Bromm et al., 2001; Bromm & Loeb, 2003; Fang &
Cen, 2004; Matteucci & Calura, 2005; Santoro & Shull, 2006; Tornatore et al.,
2007; Maio et al., 2010; Schneider, 2010), Hypernovae (HNe) feedback
(Kobayashi et al., 2007), the role of galactic outflows (Davé & Oppenheimer,
2007), the chemical properties of local galaxies based on their formation
through the hierarchical model of structure formation (Calura & Menci, 2009),
the evolution of N abundance in the Universe and the reason for a large
dispersion in observational data (Vangioni et al., 2018), and the influence of
dark matter (DM) halos on the gas reservoir available for star formation
(Lilly et al., 2013), among other examples of interesting contributions to the
study of the Universe through its chemical enrichment.
The models can be broadly classified into semi-analytic models and
hydrodynamic simulations. Nevertheless, there is increasing uncertainty
connected to the chemical evolution as we move from local to cosmological
scales, independent of the analytic or computational modeling.
represented by nuclear reaction rates and stellar masses to larger, galactic,
and cosmological scales, there is a cumulative uncertainty since each scale
carries its own set of considerations and uncertainties. For a specific
discussion on this subject, see, in particular, Côté et al. (2016). These
models help discuss general and particular aspects of metallicity evolution in
cosmological terms, but several do not detail the contributions to the
evolution of single elements. Moreover, comparing observations with high-
redshift simulations is a challenge. Based upon the points mentioned above, we
propose a semi-analytic model to investigate the contributions of Pop III and
Pop II stars to the cosmological evolution of single elements across the
redshift interval $0\leq z\lesssim 20$, not including the details of
hydrodynamic simulations, and taking into account different perspectives to
compare our results with observations in a range of different redshifts.
We start in Sect. 2, introducing and justifying the choices for the
cosmological background, which forms the basis for the chemical evolution
model: we describe the model developed by Pereira & Miranda (2010) and the
changes incorporated into the scenario that allow for an adequate coupling of
the star formation model with the equations of the chemical evolution of the
Universe. We refer to the adapted cosmological model as Corazza, Miranda &
Wuensche (hereafter CMW throughout the text). In addition, the modifications
of the model introduced in this work
allow for a better adjustment of the cosmic star formation rate (CSFR) to the
observational data available up to redshift $\sim 10$, and satisfy all the
points studied by Gribel et al. (2017) in their unified model connecting the
CSFR with the local star formation.
We also adapted the chemical models developed over the past 40 years for the
Galaxy (see, e.g., Tinsley & Larson, 1978; Larson et al., 1980; Matteucci,
2016). This adaptation allows us to build an adequate model for the chemical
enrichment of the Universe. Implementing chemical yields for stars with masses
between $0.85$ and $260\,\textrm{M}_{\sun}$ and metallicities from $0$ up to
$Z=0.02\;Z_{\sun}$ allows us to provide, in Sect. 3, several results, data
comparison, and discussions about the cosmic evolution of 11 chemical
elements: iron (Fe), silicon (Si), zinc (Zn), nickel (Ni), phosphorus (P),
magnesium (Mg), aluminum (Al), sulfur (S), carbon (C), nitrogen (N), and
oxygen (O). We draw the conclusions in Sect. 4.
## 2 Methodology
In this section, we show how we obtain the CSFR from the process of large-
scale structure formation and how we build a scenario that describes the
cosmic chemical enrichment from redshift $\sim 20$ to the present. The first
DM halos decoupled from the Universe’s expansion, collapsing and virializing,
probably between the end of recombination and redshift $\sim 20$. The
potential wells of these first halos generated the conditions for the baryonic
matter to flow into these structures, agglomerate, and form the first stars.
The characterization of the cosmological star formation and the consequent
chemical enrichment of the Universe is, in this way, connected to the DM halo
formation within a given mass range and as a function of redshift.
### 2.1 Cosmological scenario
Dark matter halos drag the baryonic matter into their interiors. We can
describe this process through the adaptation of the formalism developed
originally by Press & Schechter (1974), which allows us to directly estimate
the fraction of baryons ($f_{\mathrm{b}}$) incorporated into the halos:
$f_{\mathrm{b}}(z)=\frac{\int_{M_{\mathrm{min}}}^{M_{\mathrm{max}}}\,f(M,z)\,M\,dM}{\int_{0}^{\infty}f(M,z)\,M\,dM},$
(1)
where
$df(M,z)=\frac{\rho_{\mathrm{m}}}{M}\frac{d\ln{\sigma^{-1}}}{dM}f_{\mathrm{ST}}(\sigma)\,dM$
(2)
is the number of DM halos per comoving volume at a given redshift within the
mass interval $[M,M+dM]$, and $\rho_{\mathrm{m}}$ is the matter density of the
Universe.
The halo mass function, $f_{\mathrm{ST}}(\sigma)$, proposed by Sheth & Tormen
(1999), is:
$f_{\mathrm{ST}}(\sigma)=0.3222\sqrt{\frac{2a}{\pi}}\frac{\delta_{\mathrm{c}}}{\sigma}\exp\left(\frac{-a\delta_{\mathrm{c}}^{2}}{2\sigma^{2}}\right)\left[1+\left(\frac{\sigma^{2}}{a\delta_{\mathrm{c}}^{2}}\right)^{p}\right],$
(3)
with $\delta_{\mathrm{c}}=1.686$, $a=0.707$, $p=0.3$, and $\sigma(M,z)$ is the
variance of the linear density field.
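As a minimal numerical sketch (the function and constant names are ours, not from any published code), Eq. (3) can be evaluated directly with the parameter values quoted above:

```python
import math

# Sheth & Tormen (1999) halo mass function, Eq. (3), with the
# parameter values quoted in the text.
DELTA_C = 1.686
A_ST = 0.707
P_ST = 0.3

def f_st(sigma):
    """Multiplicity function f_ST(sigma) of Eq. (3)."""
    return (0.3222 * math.sqrt(2.0 * A_ST / math.pi)
            * (DELTA_C / sigma)
            * math.exp(-A_ST * DELTA_C**2 / (2.0 * sigma**2))
            * (1.0 + (sigma**2 / (A_ST * DELTA_C**2))**P_ST))

# At sigma = delta_c (i.e. nu = 1) the function evaluates to ~0.32.
print(f_st(DELTA_C))
```

Plugged into Eq. (2), this multiplicity function weights the abundance of halos of each mass at each redshift.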
The fact that stars form only in suitably dense structures is parameterized in
Eq. (1) by the threshold mass $M_{\mathrm{min}}$. We consider
$M_{\mathrm{min}}=10^{6}\,\mathrm{M}_{\sun}$ to be the minimum mass for the
first star-forming halos to appear in hierarchical models. The upper limit
$M_{\mathrm{max}}$ can take values up to $\gtrsim 10^{17}\,\mathrm{M}_{\sun}$.
This limit is set according to the mass scale of galaxy superclusters
(Salvadori et al., 2007; Pereira & Miranda, 2010), limiting the scale of the
largest structures formed in the present Universe. In any case, the results
have proven to be only weakly dependent on the upper limit if
$M_{\mathrm{max}}>10^{17}\,\mathrm{M}_{\sun}$.
The function $\sigma(M,z)$ in Eq. (3) can be determined from the power
spectrum $P(k)$ smoothed with a spherical top-hat filter function of radius
$R$, which, on average, encloses a mass $M$ $(R=[3M/4\pi\rho(z)]^{1/3})$.
Thus,
$\sigma^{2}(M,z)=\frac{D^{2}(z)}{2\pi^{2}}\int_{0}^{\infty}{k^{2}\,P(k)\,W^{2}(k,M)\,dk},$
(4)
where $W(k,M)$ is the top-hat filter in the $k$-space:
$W(k,M)=\frac{3}{(kR)^{3}}\,[\sin(kR)-kR\cos(kR)].$ (5)
The dependence on redshift comes from the growth factor $D(z)$, that is,
$\sigma(M,z)=\sigma(M,0)D(z)$. Here we use the analytical approach for $D(z)$
as derived by Carroll et al. (1992).
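A short sketch of the top-hat filter of Eq. (5) (the function name is ours); the filter tends to unity for $kR\ll 1$, and the redshift dependence of Eq. (4) enters only through the growth factor:

```python
import math

def top_hat_window(k, R):
    """Spherical top-hat filter in k-space, Eq. (5)."""
    x = k * R
    return 3.0 / x**3 * (math.sin(x) - x * math.cos(x))

# W -> 1 on scales much larger than 1/k, so sigma^2 in Eq. (4) picks up
# power only from modes with wavelength above ~R; combined with
# sigma(M, z) = sigma(M, 0) * D(z), this fixes the input to Eq. (3).
print(top_hat_window(0.01, 1.0))
```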
The rate at which fluctuations grow on different scales depends on the
interplay between self-gravitation, pressure support, and damping processes.
All of these processes are part of the power spectrum given by $P(k)\propto
k^{n_{\mathrm{p}}}$ (see, e.g., Gribel et al., 2017 for details).
From these equations, it is possible to determine how halos of different
masses decouple from the Universe’s expansion and how baryonic matter is
gradually incorporated into the center of the virialized halos. Structures
more massive than $\sim 10^{6}\,\mathrm{M}_{\sun}$ are formed at later times,
as the redshift decreases. Thus, as more halos are formed, more baryonic
matter flows into these structures, generating conditions for star formation.
This allows us to define the baryon accretion rate as:
$a_{\mathrm{b}}(t)=\Omega_{0,\mathrm{b}}\,\rho_{0,\mathrm{c}}\left(\frac{dt}{dz}\right)^{-1}\left|\frac{df_{\mathrm{b}}}{dz}\right|,$
(6)
where $\Omega_{0,\mathrm{b}}$ is the baryonic density parameter at $z=0$,
$\rho_{0,\mathrm{c}}=3H_{0}^{2}/8\pi G$ is the critical density of the
Universe ($H_{0}=100\,h\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$ is
the value of the Hubble parameter at the current time), and
$\frac{dt}{dz}=\frac{1}{H_{0}\,(1+z)\sqrt{\Omega_{0,\Lambda}+\Omega_{0,\mathrm{m}}\,(1+z)^{3}}}.$
(7)
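Equation (7) is straightforward to evaluate; a sketch with the parameter values of Table 1 (the unit conversion and names are ours):

```python
import math

# |dt/dz| of Eq. (7) for a flat LambdaCDM Universe; parameter values
# follow Table 1 (h = 0.7), with H0 converted from km/s/Mpc to 1/yr.
OMEGA_M = 0.279
OMEGA_L = 0.721
H0_PER_YR = 0.7 * 100.0 / 3.0857e19 * 3.156e7  # km/s/Mpc -> 1/yr

def dt_dz(z):
    """|dt/dz| in years."""
    return 1.0 / (H0_PER_YR * (1.0 + z)
                  * math.sqrt(OMEGA_L + OMEGA_M * (1.0 + z)**3))

# At z = 0 this is the Hubble time, ~1.4e10 yr; it shrinks rapidly
# toward high redshift, compressing the early star formation history.
print(dt_dz(0.0))
```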
The cosmological framework described through the set of equations presented
above is similar to the one used by different authors (e.g., Daigne et al.,
2006; Pereira & Miranda, 2010; Tan et al., 2016; Gribel et al., 2017; Vangioni
et al., 2018).
### 2.2 Cosmic star formation rate
Once we set the cosmological framework, it is possible to compute the CSFR by
incorporating the IMF and the star formation rate (SFR). In particular, the
number of stars formed per unit of mass ($m$), volume ($V$), and time ($t$) is
given by
$\frac{d^{3}N(m,V,t)}{dmdVdt}=\varphi(m)\,\psi(t),$ (8)
where $\psi(t)\propto\rho_{\mathrm{g}}^{\alpha}$ corresponds to the SFR
($\rho_{\mathrm{g}}$ is the gas density). It should be noted that $\psi(t)$
follows the functional form known as Schmidt’s law (Schmidt, 1959). The IMF,
in turn, is given by $\varphi(m)\propto m^{-(1+x)}$; its functional form with
$x=1.35$ is known as the Salpeter IMF (Salpeter, 1955).
The IMF of the first stars is still an open question. We see that the Salpeter
IMF favors the formation of low-mass stars, and various authors adopted it in
their chemical evolution models (see, e.g., Calura & Matteucci, 2004, 2006;
Casey et al., 2012; Shu et al., 2016; Fraser et al., 2017; Vangioni et al.,
2018) while some others (see, e.g., Nakamura & Umemura, 2001; Schneider et
al., 2006; Ma et al., 2017) also allow for the possibility of a top-heavy or
bi-modal IMF.
In our study, we consider $x=1.35$ as the reference value. However, to
identify the influence of the IMF exponent on the chemical enrichment of the
Universe, we also considered four other values, namely $0.85$, $1.0$,
$1.7$, and $1.85$ allowing the formation of a higher (the first two values) or
smaller (the last two values) number of high-mass stars when compared to the
reference value $1.35$. We also used $\alpha=1$ in agreement with Gribel et
al. (2017), which shows that different properties from the star formation
regions in the Galaxy, including the so-called Larson’s law, can be well
reproduced with $\alpha=1$.
Therefore, Eq. (8) describes the number of stars formed within the DM halos
that aggregate and concentrate baryons in their centers. A fraction of the
mass in stars is ejected (through stellar winds and supernovae, for example)
and returned to the ISM of these structures. The ejected mass fraction
is given by:
$\frac{d^{2}M_{\mathrm{ej}}}{dVdt}=\int^{m_{\mathrm{s}}}_{m{(t)}}(m-m_{\mathrm{r}})\,\psi(t-\tau_{\mathrm{m}})\,\varphi(m)\,dm,$
(9)
where $m(t)$ is the stellar mass whose lifetime is equal to $t$, and
$m_{\mathrm{r}}$ represents the mass of the remnant, which depends both on the
progenitor mass ($m$) and on the environment metallicity $(Z)$. The star
formation is taken at the time ($t-\tau_{\mathrm{m}}$), where
$\tau_{\mathrm{m}}$, also a function of the metallicity $(Z)$, is the lifetime
of a star of mass $m$.
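To illustrate the structure of Eq. (9), here is a toy quadrature: the past SFR is taken constant (ignoring the lifetime delay $\tau_{\mathrm{m}}$), the IMF is Salpeter-like over $0.1-100\,\mathrm{M}_{\sun}$, and a crude two-value remnant prescription stands in for the Spera et al. (2015) fits; all numbers are illustrative, not the model's.

```python
# Toy version of the ejected-mass integral, Eq. (9): constant past SFR,
# Salpeter IMF (x = 1.35) normalized so that int m*phi(m) dm = 1 over
# [0.1, 100] Msun, and a crude remnant mass standing in for the
# Spera et al. (2015) fits (white dwarf below 8 Msun, neutron star above).
X_IMF = 1.35
M_LO, M_HI = 0.1, 100.0
A_IMF = (1.0 - X_IMF) / (M_HI**(1.0 - X_IMF) - M_LO**(1.0 - X_IMF))

def remnant_mass(m):
    return 0.6 if m < 8.0 else 1.4  # toy prescription, in Msun

def ejected_fraction(m_low=1.0):
    """Mass fraction returned to the ISM by stars above m_low (midpoint rule)."""
    n = 4000
    dm = (M_HI - m_low) / n
    total = 0.0
    for i in range(n):
        m = m_low + (i + 0.5) * dm
        total += (m - remnant_mass(m)) * A_IMF * m**(-(1.0 + X_IMF)) * dm
    return total

# Roughly a third of the mass locked in stars above 1 Msun is returned.
print(ejected_fraction())
```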
We used the results of Spera et al. (2015) to obtain the masses of the stellar
remnants $(m_{\mathrm{r}})$ as functions of the metallicity and the initial
stellar masses. The authors obtained their results from the SEVN population-
synthesis code coupled with the PARSEC stellar evolution tracks. In
particular, in this work we use the fitting formulas presented in Appendix C
of Spera et al. (2015).
Concerning the parameter $\tau_{\mathrm{m}}$, we use the metallicity-dependent
formula given by Raiteri et al. (1996):
$\log\,\tau_{\mathrm{m}}=a_{0}(Z)+a_{1}(Z)\log\left(\frac{M_{\star}}{\mathrm{M}_{\sun}}\right)+a_{2}(Z)\left[\log\left(\frac{M_{\star}}{\mathrm{M}_{\sun}}\right)\right]^{2},$
(10)
where $\tau_{\mathrm{m}}$ is expressed in years, and the metallicity-dependent
coefficients are (see Raiteri et al., 1996 for details):
$a_{0}(Z)=10.13+0.07547\,\log Z-0.008084\,(\log Z)^{2},$ (11)
$a_{1}(Z)=-4.424-0.7939\,\log Z-0.1187\,(\log Z)^{2}\ ,$ (12)
$a_{2}(Z)=1.262+0.3385\,\log Z+0.05417\,(\log Z)^{2},$ (13)
and $Z$ is the absolute metallicity.
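A direct transcription of Eqs. (10)-(13) (the function name is ours):

```python
import math

def stellar_lifetime(mass, Z):
    """Stellar lifetime tau_m in years, Eqs. (10)-(13) of the text
    (Raiteri et al. 1996); mass in Msun, Z the absolute metallicity."""
    lz = math.log10(Z)
    a0 = 10.13 + 0.07547 * lz - 0.008084 * lz**2
    a1 = -4.424 - 0.7939 * lz - 0.1187 * lz**2
    a2 = 1.262 + 0.3385 * lz + 0.05417 * lz**2
    lm = math.log10(mass)
    return 10.0 ** (a0 + a1 * lm + a2 * lm**2)

# A 1 Msun star at Z = 0.02 lives ~10 Gyr, while a 20 Msun star dies
# within ~10 Myr -- the delay tau_m entering Eqs. (9) and (20).
print(stellar_lifetime(1.0, 0.02))
print(stellar_lifetime(20.0, 0.02))
```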
It is worth stressing that $\tau_{\mathrm{m}}$ as determined by Eq. (10) shows
excellent agreement with the stellar lifetimes presented in Table
2 of Ekström et al. (2008) for different values of mass and metallicity. In
particular, the difference between the results for $\tau_{\mathrm{m}}$ is
lower than $5\%$, which has little effect on our results.
Following this formalism and combining the previous equations, we derive the
equation that governs the total gas density $\rho_{\mathrm{g}}$ in the halos:
$\dot{\rho}_{\mathrm{g}}=-\frac{d^{2}M_{\star}}{dVdt}+\frac{d^{2}M_{\mathrm{ej}}}{dVdt}+a_{\mathrm{b}}(t),$
(14)
where the term $a_{\mathrm{b}}(t)$ supplies the halos with matter of
primordial composition. Without this term, Eq. (14) would describe a closed
system. Thus, $a_{\mathrm{b}}(t)$ corresponds to an infall of primordial gas
onto the structures in formation; in other words, it describes the primordial
baryonic matter captured by the potential wells generated by the halos.
On the other hand, the first term on the right side gives the mass of gas
converted to stars per unit of volume and time. By Schmidt’s law, we have:
$\psi(t)=\frac{d^{2}M_{\star}}{dVdt}=k\,\rho_{\mathrm{g}}.$ (15)
It should be noted that the term $k$ is the inverse of the timescale for star
formation, that is, $k=1/\tau_{\mathrm{s}}$.
The total gas density can be calculated by numerical integration of Eq. (14),
providing values for $\rho_{\mathrm{g}}(t)$ at each time $t$ or redshift $z$
as long as the $\tau_{\mathrm{s}}$ parameter is set. The initial condition for
solving Eq. (14) is zero gas density at $t=0$ $(z=20)$. Moreover, there are
some steps for obtaining the correct characterization of the function
$\rho_{\mathrm{g}}$. First of all, it is necessary to determine the function
$\tau_{\mathrm{s}}$. This parameter is related to the CSFR via Schmidt’s Law,
that is, $\dot{\rho}_{\star}$ is directly proportional to the gas density and
inversely proportional to the characteristic timescale for the conversion of
gas into stars. Second, if all the gas entering the system, plus the gas
returning to the system through Eq. (9), were converted into stars, there
would be an overabundance of both stars and metals. Thus,
$\dot{\rho}_{\star}$ must also depend on a parameter that measures the
efficiency ($<\varepsilon_{\star}>$) with which gas is converted into stars.
From the above considerations, we have:
$\dot{\rho}_{\star}(z)=<\varepsilon_{\star}>\frac{\rho_{\mathrm{g}}}{\tau_{\mathrm{s}}}.$
(16)
Once the CSFR is determined, the $a_{\mathrm{b}}$ function is fixed by the
structure formation scenario and the IMF by the choice of the $x$ exponent; it
is then possible to determine the function $\rho_{\mathrm{g}}$ from Eq. (14).
The aforementioned steps are essential to characterize the total gas density
function. The calculation algorithm integrates the differential equation (14)
through the sixth-order Runge-Kutta method. The differential equations for the
various chemical elements and total metallicity of the Universe (Sect. 2.3)
are solved by the same method.
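As an illustration of this integration scheme (a classical fourth-order step stands in for the paper's sixth-order method, with toy constant coefficients and the recycling term of Eq. (9) dropped), Eq. (14) relaxes from the zero-gas initial condition toward the equilibrium $a_{\mathrm{b}}/k$:

```python
# Simplified sketch of the integration of Eq. (14): RK4 instead of the
# sixth-order scheme, a toy constant infall rate a_b, and the ejected
# mass term of Eq. (9) dropped; units are arbitrary.
A_B = 1.0   # toy infall rate
K_SF = 0.5  # inverse star-formation timescale, k = 1/tau_s

def rhs(rho_g):
    # d(rho_g)/dt = -k*rho_g + a_b: star-formation sink plus infall
    return -K_SF * rho_g + A_B

def rk4_step(rho, dt):
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rho = 0.0  # initial condition: no gas at t = 0 (z = 20)
for _ in range(1000):
    rho = rk4_step(rho, 0.01)
print(rho)  # approaches the equilibrium a_b / k = 2
```

In the full model the coefficients are themselves functions of redshift, so the equations must be stepped together rather than solved in closed form.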
#### 2.2.1 Characterizing the functions $<\varepsilon_{\star}>$ and
$\tau_{\mathrm{s}}$
The $<\varepsilon_{\star}>$ and $\tau_{\mathrm{s}}$ functions work together to
produce the CSFR with the best fit to the observational data. In particular,
the cold gas used to form stars is given by:
$\rho_{\mathrm{cold}}(z)=<\varepsilon_{\star}>\,{\rho_{\mathrm{g}}(z)},$ (17)
where $<\varepsilon_{\star}>$ acts as efficiency for star formation. In
principle, $\rho_{\mathrm{cold}}$ is composed of the sum of two components:
molecular gas ($\rho_{\mathrm{H2}}$) and atomic gas ($\rho_{\mathrm{HI}}$).
This allows us to rewrite $\tau_{\mathrm{s}}$ as:
$\tau_{\mathrm{s}}(z)=\frac{\rho_{\mathrm{cold}}}{\dot{\rho}_{\star}}=\frac{\rho_{\mathrm{H2}}}{\dot{\rho}_{\star}}+\frac{\rho_{\mathrm{HI}}}{\dot{\rho}_{\star}}=\tau_{\mathrm{depl,H2}}+\tau_{\mathrm{depl,HI}},$
(18)
with $\tau_{\mathrm{depl,H2}}$ and $\tau_{\mathrm{depl,HI}}$ representing,
respectively, the depletion scales for molecular and atomic gases.
Equations (16), (17), and (18) are solved together to characterize the CSFR
and to determine the correct dependence of the $<\varepsilon_{\star}>$ and
$\tau_{\mathrm{s}}$ functions on the redshift. The constraints associated with
these equations are the following: the $\dot{\rho}_{\star}(z)$ curve must
provide the best fit to the observational data available within the redshift
range $[0-10]$; $\dot{\rho}_{\star}$ should be normalized to return the value
$\sim 0.016\,\mathrm{M}_{\sun}\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}$ at $z=0$,
similar to the one determined by Madau & Dickinson (2014); the CSFR peak
should occur at redshift $z=2$. This value was chosen so that the
$\dot{\rho}_{\star}$ obtained here agrees with the peak of the CSFR used by
Vangioni et al. (2018) and the one determined by Madau & Dickinson (2014).
Furthermore, $<\varepsilon_{\star}>$ should reach $\sim 0.01-0.02$ at $z=0$,
so that it is of the order of $\varepsilon_{\mathrm{ff}}$, the so-called SFR
per free-fall time, inferred for the star-forming regions of the local
Universe (see, e.g., Krumholz & McKee, 2005; Gribel et al., 2017). Lastly,
$\tau_{\mathrm{s}}(z=0)$ should be $\sim 0.5-2.5\,\mathrm{Gyr}$, similar to
the value of $\tau_{\mathrm{dep}}$ inferred by Schinnerer et al. (2016) for
the local Universe ($z=0$).
Figure 1 shows the CSFR as a function of redshift and its behavior with
respect to the observational data. For comparison, we also present the CSFR
used by Vangioni et al. (2018). Both our CSFR and the one used by Vangioni et
al. (2018) reach a maximum value at $z=2$, as mentioned above. Up to redshift
$\sim 4$, the two CSFRs show very similar behavior. Both CSFRs are fit to
results from IR, UV, and GRB observations, as described in the figure caption.
In Fig. 2 we present the behavior of the $\tau_{\mathrm{s}}$ parameter for
different values of the $x$ exponent. Additionally, as determined by Péroux &
Howk (2020) and Maio et al. (2022), the corresponding $\mathrm{H}_{2}$
depletion time closely follows the dynamical time ($\tau_{\mathrm{dyn}}$),
taken to be 10% of the Hubble time. The curves for $\tau_{\mathrm{s}}$ and
$\tau_{\mathrm{dyn}}\sim\tau_{\mathrm{depl,H2}}$ are shown in the left panel.
It should be noted that the values for $\tau_{\mathrm{s}}$ are above
$\tau_{\mathrm{dyn}}$ for higher redshifts, which means that the largest
contribution to the characteristic timescale for star formation, in Eq. (18),
is due to the atomic gas instead of molecular gas.
At lower redshifts ($z<5$), the curve $\tau_{\mathrm{dyn}}$ gradually
approaches $\tau_{\mathrm{s}}$, implying a greater contribution of molecular
gas to star formation. We note that for two of the IMFs we have
$\tau_{\mathrm{dyn}}>\tau_{\mathrm{s}}$, which happens at redshift $\sim 0.75$
($\sim 0.38$) for $x=0.85$ ($1.0$). In these cases, the approximation
$\tau_{\mathrm{dyn}}\simeq t_{\mathrm{hubble}}/10$ is not adequate to describe
the characteristic timescale for star formation. As a result,
$\tau_{\mathrm{depl,H2}}$ should be constant. For the other IMFs,
$\tau_{\mathrm{s}}$ is dominated by the molecular depletion time for $z<0.5$.
The right panel of Fig. 2 shows our results compared with the timescales for
the conversion of gas (filled areas) obtained by Péroux & Howk (2020) and Maio
et al. (2022) for molecular and cold ($\mathrm{HI+H2}$) gas. The behavior of
$\tau_{\mathrm{s}}$ between the two filled areas shows that, in our scenario,
star formation is fueled by cold gas in atomic form for $z>6$. When the curves
describing $\tau_{\mathrm{s}}$ approach the gray filled area, the contribution
of molecular gas becomes gradually dominant to supply star formation.
As commented above, for IMF exponents 0.85 and 1.0, the approximation
$\tau_{\mathrm{dyn}}\simeq t_{\mathrm{hubble}}/10$ fails for $z<1$ and
$\tau_{\mathrm{dyn}}$ should be constant in order to supply the star formation
with gas in molecular form. Our model shows that for $x$ in the range
$1.35-1.85$, about 70% to 90% of the star formation would come from molecular
gas, for $z\lesssim 1$, if $\tau_{\mathrm{depl,H2}}=\tau_{\mathrm{dyn}}\simeq
t_{\mathrm{hubble}}/10$.
Figure 1: Solution for $\dot{\rho}_{\star}(z)$ as derived in this work (red
line) and, as a black dashed line, the CSFR used by Vangioni et al. (2018)
plotted for comparison. Left: Evolution of the CSFR from the local Universe to
$z=20$. Right: Same as the left panel, but zooming into $0\leq z\leq 10$,
allowing for a better visualization of the two CSFRs within the range of the
available observational data. Data used in this figure: IR (Magnelli et al.
(2011, 2013) - dark gray filled circles; Gruppioni et al. (2013) - dark
orange open circles); UV (Wyder et al. (2005) - slate gray open triangle;
Schiminovich et al. (2005) - blue crosses; Dahlen et al. (2007) - turquoise
open squares; Reddy & Steidel (2009) - dark green filled squares; Robotham &
Driver (2011) - chocolate cross; Cucciati et al. (2012) - green stars;
Bouwens et al. (2012b, a) - teal filled triangles); and GRB (Kistler et al.
(2009) - deep pink open triangles).

Figure 2: Evolution of the characteristic timescale for star formation as a
function of the redshift. Left: $\tau_{\mathrm{s}}$ for different IMF
exponents and the evolution of the dynamical time ($t_{\mathrm{dyn}}$), taken
to be 10% of the Hubble time. Right: Our results compared with the timescales
for the conversion of gas (filled areas) obtained by Péroux & Howk (2020) and
Maio et al. (2022) for molecular and cold (HI+H2) gas.

Figure 3: Evolution of the fraction of gas converted into stars for different
values of the IMF used in this work. Left: Evolution of the average star
formation efficiency (defined as $<\varepsilon_{\star}>$) as a function of
$z$. Right: Evolution of the fraction of total cold gas associated with star
formation ($f_{\mathrm{gas}}$). The symbols show the measurements compiled as
follows: circles by Tacconi et al. (2018), squares by Scoville et al. (2014),
diamonds by Scoville et al. (2016), stars by Dessauges-Zavadsky et al. (2015),
and triangles by Schinnerer et al. (2016). See also Fig. 13b of Hodge & da
Cunha (2020).
The left panel of Fig. 3 shows the evolution of star formation efficiency
($<\varepsilon_{\star}>$) with redshift. This parameter reaches values between
$\sim 0.15-0.40$, depending on the value of $x$ for redshifts $z>3$.
Efficiency gradually decreases with redshift reaching values close to
$0.01-0.02$ at $z=0$. In the right panel, we present the fraction of total
cold gas mass ($f_{\mathrm{gas}}$) determined by the relation:
$f_{\mathrm{gas}}=\frac{M_{\mathrm{gas}}}{M_{\mathrm{gas}}+M_{\star}},$ (19)
where $M_{\mathrm{gas}}$ is the cold gas mass and $M_{\star}$ is the mass in
stars.
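Equation (19) is a simple ratio; a one-line sketch (names ours):

```python
def gas_fraction(m_gas, m_star):
    """Cold gas fraction f_gas of Eq. (19); any consistent mass units."""
    return m_gas / (m_gas + m_star)

# A halo with three times more stellar mass than cold gas has f_gas = 0.25.
print(gas_fraction(1.0, 3.0))
```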
Equation (19) permits a direct comparison with results obtained by other
authors. In particular, Hodge & da Cunha (2020) present in their fig. 13b
compiled measurements of the cold gas fraction ($\mathrm{H2+HI}$), compared
with scaling relations derived in three different works (Scoville et
al., 2017; Tacconi et al., 2018; Liu et al., 2019). Our curves for the cold
gas fraction show good agreement with the scaling relation
$M_{\mathrm{gas}}\sim M_{\star}^{0.65}$ derived by Tacconi et al. (2018).
In Table 1, we summarize the parameters used to obtain the CSFR –
$\dot{\rho}_{\star}(z)$. It depends on the cosmological parameters
$\Omega_{0,\mathrm{m}}$, $\Omega_{0,\mathrm{b}}$, and $\Omega_{0,\Lambda}$,
and the parameters related to the formation of large-scale structures of the
Universe ($\sigma_{8}$, $n_{\mathrm{p}}$, and $M_{\mathrm{min}}$).
Table 1: Cosmological and structure formation parameters used to obtain the
CSFR
$\Omega_{0,\mathrm{m}}$ | $\Omega_{0,\mathrm{b}}$ | $\Omega_{0,\Lambda}$ | $h$ | $z_{\mathrm{i}}$ | $\sigma_{8}$ | $n_{\mathrm{p}}$ | $M_{\mathrm{min}}(\mathrm{M}_{\sun})$
---|---|---|---|---|---|---|---
0.279 | 0.0463 | 0.721 | 0.7 | 20 | 0.84 | 0.967 | $10^{6}$
Note. $\Omega_{0,\mathrm{m}}$ corresponds to the total matter (baryonic plus
DM) density parameter; $\Omega_{0,\mathrm{b}}$ is the baryonic density
parameter; $\Omega_{0,\Lambda}$ is the density parameter associated with dark
energy (cosmological constant); $h$ is the Hubble constant written as
$H_{0}=100\,h\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$;
$z_{\mathrm{i}}$ is the redshift at which star formation begins; $\sigma_{8}$
is the normalization of the power spectrum, in other words $\sigma(M,0)$;
$n_{\mathrm{p}}$ is the spectral index of the power spectrum;
$M_{\mathrm{min}}$ corresponds to the lowest mass a DM halo must have to
detach from the expansion of the Universe, to collapse, and to virialize (it
is approximately equal to the Jeans mass at recombination).
The CSFR used by Vangioni et al. (2018) follows the expression originally
derived by Springel & Hernquist (2003), but changing the values of the four
free parameters to adjust $\dot{\rho}_{\star}(z)$ to the most recent
observational data. In this work, we include the available GRB data at high
redshifts, since the UV data naturally suffer from a selection effect: only
the brightest objects are observed.
Although our model is semi-analytic, by adding the redshift dependence to the
functions $<\varepsilon_{\star}>$ and $\tau_{\mathrm{s}}$, it becomes possible
to obtain $\dot{\rho}_{\star}(z)$ with an adequate behavior within the
redshift range where the CSFR data exist. In addition, the way we build
$<\varepsilon_{\star}>$ and $\tau_{\mathrm{s}}$, with the constraints that
these functions must satisfy at $z=0$, yields good agreement with the
observational data. In particular, the ratio
$<\varepsilon_{\star}(z)>/<\varepsilon_{\star}(z=0)>$ provided by our model is
in good agreement with that obtained by Scoville et al. (2017) within the
redshift range $[0-3]$.
### 2.3 Chemical evolution scenario
The first chemical evolution models were developed for the framework of the
Galaxy by Tinsley & Larson (1978), Larson et al. (1980), and later by
Matteucci (2001). Their simple model of chemical evolution considers a closed-
box evolving system with no inflows or outflows. Also, the IMF is constant in
time, the chemical composition of the gas is primordial, and the mixing
between the chemical products ejected by stars and the ISM is instantaneous.
We can adapt these concepts, which are the basis of the chemical evolution
models of the Galaxy, straightforwardly. The main difference is that in the
cosmological scenario, the halos continuously incorporate baryons (primordial
gas) from the ambient medium (the Universe). This is described by the function
$a_{\mathrm{b}}(t)$.
Once inside the halos, the gas is removed from the system to form stars at
time $t-\tau_{\mathrm{m}}$. This is described using the CSFR
$\dot{\rho}_{\star}(t-\tau_{\mathrm{m}})$. Later, the gas returns to the
system, at time $t$, when the stars die. A certain fraction of the gas used
for the star formation in $t-\tau_{\mathrm{m}}$ will be retained in the
remnant population $m_{\mathrm{r}}$ that forms in the time $t$. A new
generation of stars will be formed at the instant $t$, removing gas from the
system, and this process is repeated in a cycle of continuous gas capture
and chemical enrichment of the environment.
To determine the chemical enrichment of a given $i$-element, in addition to
the functions $\dot{\rho}_{\star}$ and $a_{\mathrm{b}}$, we need to know how
much mass of the $i$-element is returned when the star of mass $m$ dies. This
is described by the parameter $P_{Z_{\mathrm{i\,m}}}$ that provides the
“stellar yield” of the $i$-element.
Once all of these functions and parameters are characterized, we can write a
differential equation for the mass density of the $i$-element as:
$\frac{d\rho_{\mathrm{g\,i}}}{dt}=\int_{m(t)}^{m_{\mathrm{s}}}\,[(m-m_{\mathrm{r}})\,Z_{\mathrm{i}}\,(t-\tau_{\mathrm{m}})+P_{Z_{\mathrm{i\,m}}}]\,\dot{\rho}_{\star}(t-\tau_{\mathrm{m}})\,\varphi(m)\,dm-Z_{\mathrm{i}}\,\dot{\rho}_{\star}(t),$ (20)
where the term $(m-m_{\mathrm{r}})\,Z_{\mathrm{i}}(t-\tau_{\mathrm{m}})$
accounts for the amount of $i$-element incorporated when the star was born,
and which later returns to the ISM (we see that
$m_{\mathrm{r}}\,Z_{\mathrm{i}}\,(t-\tau_{\mathrm{m}})$ is the part of the
$i$-element retained into the remnant). We note that the resulting
$\rho_{\mathrm{g}}$ is dependent on $a_{\mathrm{b}}(t)$ through Eq. (14), and
thus the term $Z_{\mathrm{i}}=\rho_{\mathrm{g\,i}}/\rho_{\mathrm{g}}$ takes
into account $a_{\mathrm{b}}(t)$ through $\rho_{\mathrm{g}}$. The
$P_{Z_{\mathrm{i\,m}}}$ parameter is the mass produced of the $i$-element by a
star of mass $m$. The term $Z_{\mathrm{i}}\,\dot{\rho}_{\star}(t)$ takes into
account the removal of part of the $i$-element to form a new star generation.
Through the time integration of Eq. (20), we obtain the mass density
$\rho_{\mathrm{g\,i}}$ of the $i$-element present in the gas contained within
the halos. This allows us to determine quantities such as
$[\mathrm{X}_{\mathrm{i}}/\mathrm{H}]$ as a function of redshift (or time) and
to compare the results of our model with different observational data. It
should be noted that Eq. (20) incorporates all the physics and constraints
discussed in the previous sections.
In order to incorporate the contributions of particular stars, depending on
their masses and metallicities, we selected stellar yields. These chemical
yields are used to determine the elements that were ejected into the ISM at a
given time by a star of a given mass and metallicity. They are calculated
through detailed nucleosynthesis computational simulations, considering the
main reactions that happen inside the stars. We consider the first stars to be
zero-metallicity stars (Pop III); the subsequent more enriched Pop II stars
are chosen within a range of different masses and metallicities. Tables 2 and
3 describe the stellar mass and metallicity ranges from where the chemical
yields were chosen.
Table 2: Masses selected for Pop III chemical yields
Model | CL08 | HW10 | HW02
---|---|---|---
Metallicity ($Z_{\sun}$) | Mass ($\mathrm{M}_{\sun}$)
$0$ | 0.85 - 3.0 | 10 - 100 | 140 - 260
Note. CL08: Campbell & Lattanzio (2008), HW10: Heger & Woosley (2010), and
HW02: Heger & Woosley (2002).
Table 3: Masses and metallicities selected for Pop II chemical yields.
Model | K10 | D13 | D14 | CL04
---|---|---|---|---
Metallicity ($Z_{\sun}$) | Mass ($\mathrm{M}_{\sun}$)
$10^{-6}$ | - | - | - | 13 - 35
$10^{-4}$ | 1 - 6 | - | 6.5 - 9.0 | 13 - 35
$10^{-3}$ | - | - | 6.5 - 9.0 | 13 - 35
$4\times 10^{-3}$ | 1 - 6 | 6.5 - 9.0 | - | -
$6\times 10^{-3}$ | - | - | - | 13 - 35
$8\times 10^{-3}$ | 1 - 6 | 6.5 - 9.0 | - | -
$2\times 10^{-2}$ | 1 - 6 | 6.5 - 9.0 | - | 13 - 35
Note. K10: Karakas (2010), D13: Doherty et al. (2013), D14: Doherty et al.
(2014), and CL04: Chieffi & Limongi (2004).
Properly modeling chemical yields for Pop III stars is a challenging and
complex task. For the mass range where they become pair-instability
supernovae (PISNe), we chose to work with
results from Heger & Woosley (2002), which are compatible with recent chemical
yields calculated by Takahashi et al. (2018, hereafter TK18), which take into
account rotating progenitors. The two models, HW02 and TK18, show no
significant differences in the explosive yields for the elements chosen here,
except for the large production of N in the TK18 nonmagnetic rotating models.
The N behavior can be better understood in a detailed, recently developed
model for the cosmological evolution of this element (Vangioni et al., 2018).
For Pop II stars, samples were chosen according to the best combination of
mass and metallicity ranges, and also according to the stellar evolution
models and parameters used to produce each sample; Karakas (2010) and Doherty
et al. (2013, 2014) used the MONSTAR code for stellar evolution (Frost &
Lattanzio, 1996), OPAL opacities (Iglesias & Rogers, 1996), and compatible
mass-loss models (Reimers, 1975; Vassiliadis & Wood, 1993; Bloecker, 1995).
The chemical evolution to be presented in the following sections makes use of
the following definition:
$[\mathrm{X_{i}/H}]=\log_{10}[\mathrm{N(X_{i})/N(H)}]_{\mathrm{gas}}-\log_{10}[\mathrm{N(X_{i})/N(H)}]_{\sun},$
(21)
where $\mathrm{N(X_{i})}$ and $\mathrm{N(H)}$ are respectively the densities
for the element $\mathrm{X_{i}}$ and for hydrogen.
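A minimal sketch of Eq. (21) (names ours):

```python
import math

def x_over_h(n_x, n_h, n_x_sun, n_h_sun):
    """Bracket abundance [X/H] of Eq. (21), from number densities."""
    return math.log10(n_x / n_h) - math.log10(n_x_sun / n_h_sun)

# Gas with one tenth of the solar X-to-H ratio has [X/H] = -1.
print(x_over_h(0.1, 1.0, 1.0, 1.0))
```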
#### 2.3.1 Numerical technique to determine the elements
${\rho_{\mathrm{g\,i}}}$ and $\rho_{\mathrm{g}}$
The solution for the set of differential equations for $\rho_{\mathrm{g}}$ and
$\rho_{\mathrm{g\,i}}$ is obtained by a sixth-order Runge-Kutta algorithm. As
an initial condition, we have zero values for the densities of gas and metals.
To integrate the Pop III and Pop II chemical yields, we use a cubic spline
interpolation algorithm for each dataset in Tables 2 and 3. For example, for
the HW10 model (Pop III in Table 2) with a mass range of
$10-100\,\mathrm{M}_{\sun}$, we have 18 stellar mass values, namely 10,
12, 15, 17, 20, 22, 25, 27, 30, 33, 35, 40, 45, 50, 60, 75, 85, and 100 solar
masses. Thus, the determination of the yields of a star with, for example,
$70\,\mathrm{M}_{\sun}$, occurs through the interpolation algorithm.
We see that there are no yields in the intervals $]3.0,10[$ and $]100,140[$
(Table 2). Thus, the interpolation algorithm does not include these open sets
in the calculation. A similar implementation applies to Pop II yields (Table
3).
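The branch-with-gaps logic can be sketched as follows; a linear interpolation stands in for the cubic spline used in the text, and the yield values are placeholders, not the published tables:

```python
# Sketch of the yield interpolation over a branch: tabulated yields are
# interpolated in stellar mass, while the gaps between branches
# (]3.0, 10[ and ]100, 140[ Msun for Pop III) contribute nothing.
HW10_MASSES = [10, 12, 15, 17, 20, 22, 25, 27, 30, 33, 35,
               40, 45, 50, 60, 75, 85, 100]
HW10_YIELDS = [0.1 * m for m in HW10_MASSES]  # placeholder yields

def interp_yield(m, masses, yields):
    """Interpolated yield at mass m; zero outside the branch."""
    if m < masses[0] or m > masses[-1]:
        return 0.0  # in a gap or outside the tabulated range
    for lo in range(len(masses) - 1):
        if masses[lo] <= m <= masses[lo + 1]:
            f = (m - masses[lo]) / (masses[lo + 1] - masses[lo])
            return yields[lo] + f * (yields[lo + 1] - yields[lo])

print(interp_yield(70.0, HW10_MASSES, HW10_YIELDS))   # within the branch
print(interp_yield(120.0, HW10_MASSES, HW10_YIELDS))  # in a gap: zero
```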
The total density of the gas depends on the term $a_{\mathrm{b}}(t)$, which
corresponds to the infall of primordial gas, basically hydrogen and helium. To
check the consistency of our results, we determined the maximum numerical
discrepancy $(D)$ between $\rho_{\mathrm{g}}$ and the sum over all chemical
species (hydrogen, helium, and metals), which is:
$D=\frac{\rho_{\mathrm{g}}-\rho_{\mathrm{H}}-\rho_{\mathrm{He}}-\sum\,\rho_{\mathrm{g\,i}}}{\rho_{\mathrm{g}}}.$
(22)
The maximum numerical discrepancy in the various time steps is
$\left|D\right|=4.4\times 10^{-7}$.
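The bookkeeping behind Eq. (22) can be sketched as follows (the densities are illustrative, not the model's output):

```python
def discrepancy(rho_g, rho_h, rho_he, rho_metals):
    """Relative discrepancy D of Eq. (22) between the total gas density
    and the sum over all chemical species."""
    return (rho_g - rho_h - rho_he - sum(rho_metals)) / rho_g

# Perfect bookkeeping gives D = 0 up to floating-point error; the model's
# runs stay below |D| = 4.4e-7 across all time steps.
print(discrepancy(1.0, 0.75, 0.2499, [0.00006, 0.00004]))
```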
### 2.4 Constructing the models
We start the calculation with the cosmological scenario and the chemical
evolution framework fully set, and we explore two scenarios. The first (model A)
considers the chemical evolution of the Universe starting with the Pop III
(masses and yields as shown in Table 2) stars. Once the metallicity of the
Universe reaches $Z=10^{-6}Z_{\sun}$, no more Pop III stars can be formed. As
a consequence, the Pop II stellar branch with $Z=10^{-6}Z_{\sun}$ is born and
evolves (masses and yields as shown in Table 3). This second step finishes
when the metallicity of the Universe reaches $Z=10^{-4}Z_{\sun}$ and, as a
consequence, no more Pop II stars can be formed within the branch
$Z=10^{-6}Z_{\sun}$. This process repeats every time the metallicity of the
Universe crosses the limits indicated in Tables 2 and 3. It should be noted
that the stars within the low-metallicity branches cannot form anymore as a
consequence of the increase in the chemical enrichment of the Universe.
However, low-mass stars born within the lowest-metallicity branch can still be
alive today. Thus, they can coexist with stars of much higher metallicity
during part of their lives.
For the second scenario (model B), we consider the chemical evolution only
with Pop II stars. In this case, the first-star generation ($Z=0$) of the
Universe was composed of stars with masses and chemical yields similar to the
Pop II stars of the branch $Z=10^{-6}Z_{\sun}$ studied by Chieffi & Limongi
(2004). The following steps are similar to those described for model A. Both
scenarios are generated with the CSFR described in Sect. 2.1.
An important observation is that models A and B have different normalizations
for the IMF. For model A, the normalization is obtained in the range
$0.1-260\,\mathrm{M}_{\sun}$, while model B (Pop II only) is normalized in the
interval $0.1-120\,\mathrm{M}_{\sun}$. This choice reflects the absence of a
$140-260\,\mathrm{M}_{\sun}$ stellar branch in the Pop II models discussed in
the literature.
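As a sketch of the two normalizations, one can assume a single power-law IMF $\phi(m)=A\,m^{-(1+x)}$ normalized by mass over each model's range (the single power law over the full range is our simplifying assumption; the models actually combine several branches):

```python
import math

def imf_mass_norm(x, m_low, m_high):
    """Constant A such that the mass integral of phi(m) = A*m**-(1+x)
    over [m_low, m_high] equals 1 (masses in M_sun)."""
    # integral of m * m**-(1+x) dm = integral of m**-x dm
    if abs(x - 1.0) < 1e-12:
        integral = math.log(m_high / m_low)
    else:
        integral = (m_high**(1.0 - x) - m_low**(1.0 - x)) / (1.0 - x)
    return 1.0 / integral

A_model_A = imf_mass_norm(1.35, 0.1, 260.0)  # Pop III + Pop II range
A_model_B = imf_mass_norm(1.35, 0.1, 120.0)  # Pop II only
# Model B's narrower mass range gives a slightly larger constant,
# since the same unit of stellar mass is spread over fewer decades of mass.
```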
We describe the evolution of models A and B in Sect. 2.4.1 and Sect. 2.4.2.
#### 2.4.1 Model A
Model A runs the following steps for the entire calculation. First, the total
metallicity of the Universe ($Z_{\mathrm{total}}$) is used as a guide for the
chemical yields of the different classes (or branches) presented in Tables 2
and 3. Assuming that the first stars, formed from pristine (H and He only)
gas, started to die and enrich the ISM at redshift $z=20$,
$Z_{\mathrm{total}}$ provides values for the evolution of the production of
all elements heavier than He for the entire redshift interval. This parameter
is then used as a “switch” between different metallicities, removing the
chemical yields of a given metallicity and successively introducing those of
the higher metallicity classes according to Tables 2 and 3, and as discussed
above.
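The threshold "switch" can be sketched as a lookup of the currently forming yield branch by total metallicity. The threshold values below follow those quoted in the text; the exact branch boundaries of Tables 2 and 3 are assumed here for illustration only:

```python
# Illustrative metallicity thresholds (in Z_sun) at which each yield
# branch starts forming; values assumed from the text, not from the tables.
BRANCHES = [
    (0.0,  "Pop III (Z = 0)"),
    (1e-6, "Pop II, Z = 1e-6 Z_sun"),
    (1e-4, "Pop II, Z = 1e-4 Z_sun"),
    (1e-3, "Pop II, Z = 1e-3 Z_sun"),
    (4e-3, "Pop II, Z = 4e-3 Z_sun"),
    (8e-3, "Pop II, Z = 8e-3 Z_sun"),
    (2e-2, "Pop II, Z = 2e-2 Z_sun"),
]

def forming_branch(z_total):
    """Branch whose stars are currently forming: the one with the
    highest threshold not exceeded by Z_total."""
    label = BRANCHES[0][1]
    for threshold, name in BRANCHES:
        if z_total >= threshold:
            label = name
    return label
```

Stars born in earlier branches keep evolving and releasing their yields on their own lifetimes, so several branches can contribute to the enrichment simultaneously, as described above.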
Abundances of individual chemical elements are then computed for the entire
redshift range. $Z_{\mathrm{total}}$ starts at zero, producing Pop III stars.
The higher-mass Pop III stars ($\sim 260\,\mathrm{M}_{\sun}$) start dying
first, ejecting metals into the ISM and enriching the medium around them. Pop
III stars continue to die and enrich the ISM until the medium reaches
$10^{-6}Z_{\sun}$. At that point, new Pop III stars stop forming. It is
important to note that Pop III stars with masses $\lesssim
0.9\,\mathrm{M}_{\sun}$ have longer lifetimes and should still be in their
main-sequence phase today, merely witnessing the increase in the total
metallicity of the Universe. When they eventually leave the main sequence,
they will participate in subsequent steps of the enrichment of the Universe,
together with higher-metallicity stars formed much later, at much lower
redshifts.
Once the metallicity of the ISM and/or IGM reaches $10^{-6}Z_{\sun}$, new
stars with this metallicity signature start being formed. As they die, the
model starts processing the yields from this class of stars, and the same
process repeats. The metallicity of the medium increases as the higher-mass
stars die first, while lower-mass stars live longer. Even when stars of a
higher metallicity ($10^{-4}\,Z_{\sun}$, for example) start to form, the
lower-mass ones continue their lives unaffected by the external increase in
metallicity.
The process continues as the Universe progresses toward the present
metallicity, as described in Tables 2 and 3. It is important to emphasize that
each metallicity class has its own stellar population clock triggered when the
Universe’s metallicity crosses the various thresholds indicated in Tables 2
and 3. Thus, at certain intervals of time, the chemical enrichment of the
Universe takes place by the joint action of stars of different metallicity
classes.
#### 2.4.2 Model B
Model B uses only Pop II yields. The chemical enrichment of the Universe
starts from $Z_{\mathrm{total}}=0$ at redshift $20$. In this model, the
stellar branch $Z=10^{-6}\,Z_{\sun}$ also represents the metal-free stars.
Chemical enrichment thus proceeds through this class (or branch) until
$Z=10^{-4}\,Z_{\sun}$ is reached, at which point the next metallicity class
(with $Z=10^{-3}\,Z_{\sun}$) takes over the chemical enrichment. The
subsequent branches take over at the thresholds indicated in Table 3.
Chemical elements analyzed in this work were selected considering the
availability of chemical yields in the literature and observational data
available for chemical abundances in damped Lyman-$\alpha$ systems (DLAs) and
globular clusters (GCs), as described below (Sect. 3).
## 3 Results and discussion
Figure 4: Evolution of metallicity for different values of $x$. Left: Model A.
Right: Model B. Colors represent different values of $x$ for the IMF.
The results from models A and B for $[Z_{\mathrm{total}}/\mathrm{H}]$ are
presented in Fig. 4. The results are obtained from the CMW-CSFR with five
different values of the IMF exponent $x$, integrated over all branches from
Pop III to Pop II stars for model A, and over all Pop II branches for model B.
We take
into account the progressive enrichment of the Universe, and consequently the
transition between Pop III stars and the next, more metal-rich Pop II
generations until $Z=0.02\;Z_{\sun}$.
For model A, at redshift $z=20$, the first stars formed from metal-free gas
start to die, and the chemical enrichment is very fast. For $x=0.85$ and
$x=1.00$, the pristine Universe goes from $Z=0$ to $Z=10^{-6}\,Z_{\sun}$ in
less than $\sim 4\times 10^{5}\,\mathrm{yr}$, given
the higher number of high-mass stars that would form in this scenario. For
$x=1.70$, the same metallicity is reached $\sim 30\,\mathrm{Myr}$ after the
death of the first Pop III star, while for $x=1.85$, it would take $\sim
70\,\mathrm{Myr}$ for the same process to occur. The mean behavior is
described by $x=1.35$, where the Universe would reach $Z=10^{-6}\,Z_{\sun}$ in
$\sim 3\,\mathrm{Myr}$. For model B, the same process takes from $\sim
2\,\mathrm{Myr}$ up to $85\,\mathrm{Myr}$, depending on the IMF.
This rapid chemical enrichment in the initial phase can be explained by the
metal production of Pop III$-$PISNe, which characterizes a chemical “flood” in
the high-redshift Universe, in the case of model A. For model B, the chemical
enrichment occurs mainly through stars with masses $\sim
$30-35\,\mathrm{M}_{\sun}$, and the condition $Z=10^{-6}\,Z_{\sun}$ takes
roughly eight times longer to reach than when higher-mass Pop III stars are
considered.
Except for N, all other elements are mainly produced by PISNe in the Pop III
era. According to Abia et al. (2001), the metallicity observed at high
redshifts can be easily obtained from stellar pregalactic (Pop III)
nucleosynthesis by postulating that only $\sim 10^{-2}$ of the total pristine
gas is converted into stars. Considering that the star formation efficiency of
the CMW scenario is $\sim 0.3$ in the redshift range $[5-20]$, which is $\sim
30$ times larger than the value estimated by Abia et al. (2001), we verify
that adding Pop III stars in the CMW scenario for the CSFR quickly floods the
primordial Universe with metals.
Other evidence that PISNe are very efficient in enriching the ISM comes from
the work of Matteucci & Calura (2005), where they show that only 110 to 115
PISNe would be needed to enrich a cubic megaparsec of the IGM to
$Z=10^{-4}\,Z_{\sun}$ (with the index of the IMF varying between 1.35 and
0.5).
In our model the rate for PISNe can be calculated using the relation:
$R_{\mathrm{PISNe}}=\frac{\dot{M}_{\star}(t)}{<M_{\mathrm{PISNe}}>}\times\int_{140}^{260}\phi(m)\,m\,dm,$
(23)
where $<M_{\mathrm{PISNe}}>$ is the average mass of the stars that ended their
lives as PISNe, and $\dot{M}_{\star}(t)$ is the CSFR in
$\mathrm{M}_{\sun}\,\mathrm{yr}^{-1}$ when $Z=10^{-6}\,Z_{\sun}$.
If we consider the average mass of PISNe as $\sim 200\,\mathrm{M}_{\sun}$,
then $R_{\mathrm{PISNe}}\sim 6\times 10^{-5}\,\mathrm{yr}^{-1}$, or one PISN
every $\sim 16\,000\,\mathrm{yr}$. The number of PISNe needed to enrich the
Universe from $Z=0$ to $Z=10^{-6}\,Z_{\sun}$ is $\sim 175$ for our models.
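Equation (23) can be evaluated with a short sketch; the mass-normalized power-law IMF and the CSFR value used below are our assumptions, chosen only to illustrate the order of magnitude quoted above:

```python
def mass_fraction(x, m1, m2, m_low=0.1, m_high=260.0):
    """Fraction of newly formed stellar mass in [m1, m2] for a
    mass-normalized power-law IMF phi(m) propto m**-(1+x), x != 1."""
    def mass_integral(a, b):  # unnormalized integral of m*phi(m) dm
        return (b**(1.0 - x) - a**(1.0 - x)) / (1.0 - x)
    return mass_integral(m1, m2) / mass_integral(m_low, m_high)

x = 1.35                                   # IMF exponent
f_pisne = mass_fraction(x, 140.0, 260.0)   # mass fraction in the PISN range
M_mean = 200.0   # assumed average PISN progenitor mass [M_sun]
sfr = 0.7        # assumed CSFR at Z = 1e-6 Z_sun [M_sun/yr], illustrative

R_pisne = (sfr / M_mean) * f_pisne         # Eq. (23), in yr^-1
print(f"R_PISNe ~ {R_pisne:.1e} per yr, one every ~{1.0 / R_pisne:,.0f} yr")
```

With these assumed inputs, the sketch lands near the $\sim 6\times 10^{-5}\,\mathrm{yr}^{-1}$ quoted in the text; the actual value depends on the CSFR at the moment $Z=10^{-6}\,Z_{\sun}$ is reached.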
Similar to Matteucci & Calura (2005), our work shows the importance of PISNe
for rapidly enriching the ISM. Matteucci & Calura (2005) find that $\sim
110-115$ PISNe would be needed to enrich a cubic megaparsec of the IGM to
$\sim 10^{-4}\,Z_{\sun}$. Our results show that $\sim 175$ PISNe are needed to
enrich the medium to $\sim 10^{-6}\,Z_{\sun}$. However, our results involve a
volume equivalent to $\sim 10^{5}\,\mathrm{Mpc}^{3}$. Thus, the numerical
comparison is not so direct between the two works.
In order to adequately address the question of the rapid contamination of the
early Universe, we propose comparing the results with abundances from old GCs.
Such GCs (with ages close to the age of the Universe) present an opportunity
to explore the chemical and physical conditions of the earliest star-forming
environments in the Universe (Dotter et al., 2011; Frebel & Norris, 2015); in
other words, they should present a metallicity very similar to the Universe’s
mean metallicity at the time they were formed.
Figure 5: Comparison between models A (left) and B (right) with data from GCs
from Frebel et al. (2007), Bond et al. (2013), Cowan et al. (2002), Cayrel et
al. (2001), Dotter et al. (2011), and Wagner-Kaiser et al. (2017).
Analysis of data from old GCs can provide information about the age and
metallicity of the entire cluster, enabling better estimates than, for
example, isolated metal-poor stars. Figure 5 shows the behavior of
$Z_{\mathrm{total}}$ for the two models compared with observations from GCs.
Model A accounts for the majority of observations, regardless of the IMF,
while model B is unable to fit the metal abundances for $Z_{\mathrm{total}}$,
even for the lower values of $x$. Although low-metallicity, high-mass Pop II
stars do indeed play an essential role in the first steps of cosmic
enrichment, it is clear that their contribution alone is insufficient for the
ISM to sustain efficient star formation and reach the observed metal
abundances across cosmic history.
Another analysis is performed for the interval $z=[0-6]$, where we compare the
results with dust-corrected abundances from DLAs (Fig. 6). DLAs provide the
most accurate measurements of gas-phase chemical abundances in the
high-redshift Universe (Wolfe et al., 2005), with errors that can be $\leq
0.1$ dex (Vladilo, 2002). DLAs are also an ideal site
for the initial stages of gas cooling and star formation (Maio & Tescari,
2015); they dominate the neutral gas content of the Universe in the redshift
interval $z=[0-5]$, and therefore are the most crucial neutral gas reservoir
for star formation.
Figure 6: Comparison of models A (left) and B (right) with data from dust-
corrected DLAs (gray crosses) from De Cia et al. (2018). We include an
$\alpha$-enhancement correction, with
$[Z/\mathrm{H}]=[\mathrm{Fe/H}]+0.3\,\mathrm{dex}$, as suggested by Rafelski
et al. (2012).
In this redshift interval, regardless of the observations, we expect to see an
increase in $[Z_{\mathrm{total}}/\mathrm{H}]$ with decreasing redshift, with
the total metallicity reaching values close to $\sim 0$ (solar) near redshift
$z\sim 0$. This behavior is
consistent with observations presented in similar contexts (Fynbo et al.,
2006; Davé & Oppenheimer, 2007; Kobayashi et al., 2007; Rollinde et al., 2009;
Vangioni et al., 2018). Also, as pointed out by Calura & Matteucci (2004),
metal production in spirals and irregulars always increases with time.
For model B, with $x<1.35$, the abundances reach a maximum at redshift $z\sim
4$ and start decreasing toward $z=0$, while for $x\geq 1.35$ the metallicities
tend to rise with decreasing redshift. Nevertheless, all model B curves remain
$1.0$ to $1.5\,\mathrm{dex}$ below the value expected at $z\sim 0$.
We also note that Pop II models with $x<1.35$ produce more high-mass stars,
returning more metals through the CL04 channel. For these IMFs, a bottleneck
occurs when the metallicity of the system reaches $8\times 10^{-3}\,Z_{\sun}$,
since there is no further contribution to the chemical enrichment through the
CL04 channel. This explains the flat behavior seen in Fig. 5.
On the other hand, when taking into account Pop III stars (model A),
metallicities increase continuously as redshift approaches $z\sim 0$. For
$x=0.85$, $x=1.00$, and $x=1.35$, models reach
$[Z_{\mathrm{total}}/\mathrm{H}]$ close to 0, as expected, while for $x=1.70$
and $1.85$, the total metallicity is underestimated by approximately $0.25$ to
$0.30\,\mathrm{dex}$.
When comparing our results with DLA observations, two main problems are
relevant to their interpretation. The first is the high dispersion among
points at the same (or very similar) redshift. A variety of models investigate
the dispersion in DLAs (see, e.g., Dvorkin et al. 2015), and several authors
attribute it to peculiar nucleosynthetic signatures of each system and to
different star formation histories (Centurion et al., 1998; Pettini et al.,
2000; Dessauges-Zavadsky et al., 2002; De Cia et al., 2016), which lead to the
production of different amounts of each chemical element.
According to De Cia et al. (2016), regardless of the star formation history,
the availability of refractory metals in the ISM is a crucial driver of dust
production, and DLA galaxies may have a wide range of star formation
histories, which in principle are also different from those of the Galaxy (De
Cia et al., 2016). Therefore, we plot the mean value over the models with
different IMFs, taking the results for $x=0.85$ as the upper limit (given the
favored formation of high-mass stars in this model) and $x=1.85$ as the lower
limit (given the favored formation of lower-mass stars).
The second problem in comparing results with DLA observations relates to dust
depletion. Some chemical elements react with other species, forming molecular
compounds that can become trapped on the surfaces of dust grains and escape
detection in gas-phase abundance observations; that is, their abundances
appear lower than their actual values. Most results indicate that dust
depletion in DLAs is complex and varies from system to system (Vladilo et al.,
2011; De Cia et al., 2016, 2018).
Therefore, we compare our results with the dust-corrected DLA metallicities of
De Cia et al. (2018), who show that, when dust corrections are included, the
average DLA metallicities are between $0.4$ and $0.5\,\mathrm{dex}$ higher
than without corrections. Where they provide values for $[\mathrm{Fe/H}]$, we
include an $\alpha$-enhancement correction, with
$[\mathrm{Z/H}]=[\mathrm{Fe/H}]+0.3\,\mathrm{dex}$, as suggested by Rafelski
et al. (2012).
In any case, we emphasize that our aim is to demonstrate that Pop III stars
are required to reproduce mean cosmic abundances, as can be readily seen in
Figs. 5 and 6. The impact of different depletion-correction methodologies, of
the fitting of the data, and of the use of different chemical yields should be
addressed in detail in the future.
The knees that appear for model B in Figs. 5 and 6 are associated with the IMF
exponents 0.85 and 1.00. The main reason is the absence of stars with masses
above $9\,\mathrm{M}_{\sun}$ enriching the medium in the metallicity classes
$4\times 10^{-3}$ and $8\times 10^{-3}\,Z_{\sun}$. These $x$ exponents form a
higher number of high-mass stars than the other IMFs, which introduces a flat
behavior in the $[\mathrm{Z/H}]$ relation for these exponents.
Although there are uncertainties about the mass spectrum of Pop III stars, our
results show the importance of this stellar population for reproducing the
observational data. There is no good agreement for the metallicity observed
for the Universe only with Pop II stars. We see that Pop III stars rapidly
raise the metallicity of the Universe between redshifts $\sim 15-20$, mainly
due to the HW02 branch. They reinforce chemical enrichment in the range $\sim
5-15$ through the HW10 model and complement the interval $0-5$ through the
CL08 channel. The chemical avalanche produced by Pop III stars at high and
moderate redshifts acts as a booster so that Pop II stars can add their
contribution, through the different branches of Pop II, to the chemical
enrichment of the Universe.
Figure 7: Chemical evolution for $\mathrm{Fe}$, $\mathrm{Si}$, $\mathrm{Zn}$,
$\mathrm{Ni}$, $\mathrm{P}$, $\mathrm{Mg}$, $\mathrm{Al}$, $\mathrm{S}$,
$\mathrm{C}$, $\mathrm{N}$, and $\mathrm{O}$ since the first stars started to die
($z=20$) until $z=0$. Model A starts with zero-metallicity stars, and as the
Universe gets enriched, subsequent Pop II stars with increasing metallicity
start to appear, until reaching $Z=2\times 10^{-2}\,Z_{\sun}$, according to
the model described in Sect. 2.4. It is possible to observe the chemical
avalanche in the early Universe given by the high production of metals from
Pop III stars. As discussed in the text, model B considers only Pop II stars.
The gray crosses represent data from Pettini et al. (1997), Centurion et al.
(1998), Vladilo (1998), Pettini et al. (2000), Dessauges-Zavadsky et al.
(2002, 2003), Nissen et al. (2004), Akerman et al. (2005), Kulkarni et al.
(2007), Prochaska et al. (2007), Pettini et al. (2008), Penprase et al.
(2010), Cooke et al. (2011), Kulkarni et al. (2012), Rafelski et al. (2012),
Jorgenson et al. (2013), Neeleman et al. (2013), Zafar et al. (2014), and De
Cia et al. (2016). All abundances were rescaled to solar values from Asplund
et al. (2009).
### 3.1 Properties of yields
In this section, we present the cosmic chemical evolution for 11 chemical
elements for models A and B, compared with data from DLAs taken from the
literature (Fig. 7), and briefly discuss the main subjects regarding each of
the elements. Observational data for other elements are not dust-corrected due
to the lack of sufficient data points with enough information for correction
(such as $[\mathrm{Zn/Fe}]$ or $[\mathrm{Si/Fe}]$).
Iron and silicon: Fe and Si could be altered by depletion (De Cia et al.,
2013). Observational abundances could increase by $\sim 0.5$ dex if depletion
is considered in the comparison of the model with DLA data. Details about the
methodology used to correct Fe depletion can be checked in the work by Vladilo
(2002). As for Si, Prochaska & Wolfe (2002) show that although it is a
refractory element, its depletion is not strong enough to significantly alter
the abundances of DLA systems. Vladilo et al. (2011) show that Si depletion is
mild in the ISM and is expected to be weaker in most DLA systems. The
depletions of Fe and Mg are measured for comparison, and the mean depletion of
Si is found to be almost as high as that of Fe, despite Fe being much more
depleted than Si in the Galactic ISM. They also explain that Si depletion in
DLA systems does not correlate with metallicity, unlike that of Fe, which
rises as metallicity increases.
Zinc: Zn is produced mainly in HNe explosions characterized by a more
significant production of Zn, Co, V, and Ti than normal SNe (Nomoto et al.,
2006). Stars with $500-1000\,\mathrm{M}_{\sun}$ produce high amounts of Zn
compared to O, C, and other metals (Ohkubo et al., 2006). Kobayashi et al.
(2006) suggest that HNe can enhance the production of Zn, and that Zn is
considered to be undepleted in DLAs.
Nickel and phosphorus: The lack of Ni observations in DLAs poses a challenge
for analyzing this element. Nevertheless, SNe Ia produce between $4\times
10^{-3}$ and $1.4\times 10^{-2}\,\mathrm{M}_{\sun}$ of Ni, and from $8.5\times
10^{-5}$ to $4\times 10^{-4}\,\mathrm{M}_{\sun}$ of P, depending on the
specific model (Nomoto et al., 1997), so it is important to follow the outcome
of this type of star in the present model.
Magnesium: Mg is a refractory element, and its depleting effect must be
considered. A challenge that arises in Mg determination comes from the
saturation of the doublet used for its characterization, leaving only one
possible line to provide Mg abundance. Given the problems related to its
determination, current observations should be confirmed by additional Mg
measurements to conclude if it could have a particular nucleosynthesis effect
in DLA systems (Dessauges-Zavadsky et al., 2002).
Aluminum: Al has the strongest metal line transition observed in DLAs, the Al
II $\lambda\,1670$ line (Prochaska & Wolfe, 2002). However, in the majority of
systems the line is heavily saturated, and this, together with line blending
and blending with the Ly-$\alpha$ forest, makes determining Al abundances a
real challenge (Dessauges-Zavadsky et al., 2003).
Sulfur: S is considered non-refractory by some authors (Prochaska & Wolfe,
2002), but there is still discussion about its actual behavior and if it could
be used as a parameter for measuring depletion (Jenkins, 2009; De Cia et al.,
2016).
Carbon, nitrogen, and oxygen: Our results show an excess in the abundances of
C, N, and O. Regarding depletion effects, O is only mildly refractory
according to DLA observations and is not strongly affected by depletion,
although it is challenging to observe because the majority of its lines fall
into the Ly-$\alpha$ forest and tend to be saturated (Prochaska & Wolfe,
2002). On the other hand, C is considered mildly refractory (Prochaska &
Wolfe, 2002). Since it is a major constituent of interstellar dust (Henry et
al., 2000; Jenkins, 2009), a substantial part of C could be trapped on dust
grains; the scarcity of C observations in DLAs is also a problem (Jenkins,
2009). N does not exhibit progressively stronger depletions (Jenkins, 2009)
and appears to be better represented by model B, that is, by the behavior of
Pop II stars.
There are, however, other physical processes that participate in the C, N, and
O production dynamics, which could be interfering with the results. Jenkins
(2009) shows that, depending on the case, the consumption of O for producing
oxides and silicates is not consistent with results for differential depletion
for this element. The lack of O in gas-phase observations is much higher than
what is needed for producing these silicates and oxides, and it is very hard
to correlate the lack of O in the ISM with models of interstellar grain
production. The author suggests that the formation of compounds involving
elements such as H or C could play an important role in taking these elements
from the ISM. Therefore, although cooling processes considerably demand C, N,
and O for gas cooling and fragmentation, the grain formation processes do not
entirely solve the problem for all three of these overabundant elements. An
interesting result from Ioppolo et al. (2008) suggests that O is incorporated
in the form of amorphous $\mathrm{H_{2}O}$ ice on grain surfaces. Recent work
by Loeb et al. (2016) suggests that carbon-enhanced metal-poor (CEMP) stars
from the second generation of stars could host, or have hosted, planetary
systems in their habitable zones.
The planets would likely have a major C component in their composition. Also,
the degree of C enhancement in CEMP stars has been shown to notably increase
as a function of decreasing metallicity (Carollo et al., 2012), that is, the C
enhancement in this type of star is likely much higher in the primordial
Universe. Sonnentrucker et al. (2010) also show that the abundance of water
vapor in gas clouds in the Galaxy holds $\sim 0.1\%$ of the available O.
Lastly, the DLA data show a larger scatter than that produced by our models.
This can be associated with several effects. For example, the DLA data depend
on the characteristics of the host galaxies, and as our model is
semi-analytic, it cannot resolve individual galaxies. In addition, the CSFR
obtained in our models ends up being a weighted average over the halo masses
through the formulation presented in Sect. 2.1. On the other hand, diffusion
is a physical process that can transport metals away from their production
sites, which could introduce scatter into our models. Adding diffusion is a
task to be explored in future work.
### 3.2 The effect of feedback on our results
Modeling feedback effects is a complex task and is beyond the scope of this
work. However, through our modeling, we can make some inferences
about the possible impacts on our results. In a recent article, Lancaster et
al. (2021) presented simulations in which one of the main characteristics of
feedback effects is the reduction of star formation efficiency. The authors’
simulations involved clouds with characteristic radii of $2.5$ to
$20\,\mathrm{pc}$. The efficiency reduction achieved in these simulations
ranged from 3% to 47% relative to the no-feedback case.
In our case, the reduction of the star formation efficiency implies a decrease
in the $\tau_{\mathrm{s}}$ scale to keep the CSFR adjusted to the
observational data. In this case, some chemical elements such as Zn, P, and S
would deviate more from the DLA data, especially with the IMF exponent $x\geq
1.70$. A possible way to solve this problem would be the inclusion of other
channels for the production of chemical elements, such as the inclusion of
hypernova yields and SNe Ia.
## 4 Summary and conclusions
The main goal of this work was to investigate cosmic chemical enrichment
through the evolution of chemical elements in the redshift interval $0\leq
z\leq 20$, as well as the contributions of Pop III and Pop II stars to the
cosmic enrichment of the Universe. It was achieved by building a cosmic
chemical evolution model that couples a semi-analytic cosmological model,
which computes the CSFR, to chemical evolution models for the galactic
framework. We computed the evolution of the production of $\mathrm{Fe}$,
$\mathrm{Si}$, $\mathrm{Zn}$, $\mathrm{Ni}$, $\mathrm{P}$, $\mathrm{Mg}$,
$\mathrm{Al}$, $\mathrm{S}$, $\mathrm{C}$, $\mathrm{N}$, and $\mathrm{O}$, and
compared our results with observational data taken from DLAs in the redshift
interval $[0-6]$ and with GCs.
Our main results show that we can consistently model the evolution of cosmic
abundances in the Universe using a semi-analytic approach. Also, the “chemical
avalanche” on the primordial Universe, which quickly enriches the medium and
provides conditions for Pop II stars to appear, is consistent with the
literature on Pop III stars’ behavior and chemical evolution models (Heger &
Woosley, 2002, 2010; Takahashi et al., 2018).
Regarding the behavior of Pop III and Pop II stars separately, the main
difference appears in the behavior of the abundances toward $z=0$: our model
considering regular intermediate- and high-mass Pop II stars (model B) shows a
decrease in the abundances (except for N and $Z_{\mathrm{total}}$), while the
model including very massive Pop III stars (model A) reproduces increasing
abundances as redshift decreases, consistent with observations and with
similar models in the literature. Model A also offers a better fit of
$Z_{\mathrm{total}}$ to the GC data than model B.
We conclude that model A, whose main distinguishing feature is the inclusion
of Pop III stars, provides a very good description of the mean chemical values
across the studied redshift range and captures the key behavior of the
evolution of cosmic abundances in the Universe. Our main results are
summarized below:
* •
The chemical enrichment process in the early Universe occurs very quickly,
regardless of the stellar population. The pristine Universe reaches
$Z=10^{-6}\,Z_{\sun}$ in $\sim 3.0\,\mathrm{Myr}$ for the model with both Pop
III and Pop II stars and IMF exponent $x=1.35$, and in $\sim 25\,\mathrm{Myr}$
for the model with only Pop II stars (also with $x=1.35$). However, when only
high-mass Pop II stars are considered, the metals are quickly consumed, and
the scenario cannot represent the chemical abundances at lower redshifts.
* •
Abundances from GCs for $Z_{\mathrm{total}}$ are consistently represented by
the model with both Pop II and Pop III stars, while the model without Pop III
stars is unable to represent observational data, regardless of the IMF.
* •
Abundances from DLAs for $Z_{\mathrm{total}}$ are consistently represented by
our model with Pop III and Pop II stars. When the model is compared with
abundances corrected for dust depletion and $\alpha$ enhancement, the
observations agree well with the model considering both Pop II and Pop III
stars, while the model with only Pop II stars cannot account for the behavior
of the metals toward $z=0$.
* •
Regarding the modeling of other elements, there are a few deviations when
comparing the models with data from DLAs. However, the mechanisms needed to
improve the results are readily identified: the absence from our model of some
channels (SNe Ia, HNe), dust depletion affecting the observational data, and
the combination of yields from Pop II stars. Including HNe, and perhaps a
higher-mass branch of stars ($\sim 500-1000\,\mathrm{M}_{\sun}$; Ohkubo et
al., 2006), should improve the results for Zn, P, and Ni without raising O and
C. In principle, these mechanisms are all consistent with each other and will
be studied in a subsequent work.
* •
The reason for the overabundances of C, N, and O shown in our results remains
an open question. New observations focusing on depletion processes in the ISM
could explain the overabundances found in the present work (and/or the lack of
these elements in the ISM).
Altogether, our results indicate that the evolution of chemical abundances in
the cosmological framework can be consistently tracked. Our most important
result shows that Pop III stars’ contribution to the Universe’s chemical
history should be better understood, and observational campaigns with
instruments capable of actually identifying these objects should be seriously
considered and implemented. Pop III observations are a long-awaited result,
and a firm detection will shed new light on the cosmic history in earlier
times. The other questions raised in this paper are being studied and will be
the subject of forthcoming works.
## Acknowledgements
We thank the Brazilian agency FAPESP for support under the thematic project
2014/11156-4. LCC would like to thank the Coordenação de Aperfeiçoamento de
Pessoal de Nível Superior (CAPES) - Finance code 001 - for a graduate research
fellowship. ODM and CAW thank CNPq for partial financial support (grants
303350/2015-6 and 313597/2014-6, respectively).
## References
* Abel et al. (2000) Abel, T., Bryan, G. L., & Norman, M. L. 2000, Astrophys. J., 540, 39
* Abia et al. (2001) Abia, C., Dominguez, I., Straniero, O., et al. 2001, Astrophys. J., 557, 126
* Akerman et al. (2005) Akerman, C. J., Ellison, S. L., Pettini, M., & Steidel, C. C. 2005, A&A, 440, 499
* Asplund et al. (2009) Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, Ann. Rev. Astron. Astrophys., 47, 481
* Barkana (2006) Barkana, R. 2006, Science, 313, 931
* Bloecker (1995) Bloecker, T. 1995, A&A, 299, 755
* Bond et al. (2013) Bond, H. E., Nelan, E. P., VandenBerg, D. A., Schaefer, G. H., & Harmer, D. 2013, Astrophys. J. Lett., 765, L12
* Bouwens et al. (2012a) Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2012a, Astrophys. J., 754, 83
* Bouwens et al. (2012b) Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2012b, Astrophys. J. Lett., 752, L5
* Bromm et al. (2001) Bromm, V., Ferrara, A., Coppi, P. S., & Larson, R. B. 2001, Mon. Not. R. Astron. Soc., 328, 969
* Bromm & Loeb (2003) Bromm, V. & Loeb, A. 2003, Nature, 425, 812
* Calura & Matteucci (2004) Calura, F. & Matteucci, F. 2004, Mon. Not. R. Astron. Soc., 350, 351
* Calura & Matteucci (2006) Calura, F. & Matteucci, F. 2006, in American Institute of Physics Conference Series, Vol. 847, Origin of Matter and Evolution of Galaxies, ed. S. Kubono, W. Aoki, T. Kajino, T. Motobayashi, & K. Nomoto, 371–373
* Calura & Menci (2009) Calura, F. & Menci, N. 2009, Mon. Not. R. Astron. Soc., 400, 1347
* Campbell & Lattanzio (2008) Campbell, S. W. & Lattanzio, J. C. 2008, A&A, 490, 769
* Carollo et al. (2012) Carollo, D., Beers, T. C., Bovy, J., et al. 2012, Astrophys. J., 744, 195
* Carroll et al. (1992) Carroll, S. M., Press, W. H., & Turner, E. L. 1992, Ann. Rev. Astron. Astrophys., 30, 499
* Casey et al. (2012) Casey, C. M., Berta, S., Béthermin, M., et al. 2012, Astrophys. J., 761, 139
* Cayrel et al. (2001) Cayrel, R., Hill, V., Beers, T. C., et al. 2001, Nature, 409, 691
* Centurion et al. (1998) Centurion, M., Bonifacio, P., Molaro, P., & Vladilo, G. 1998, Astrophys. J., 509, 620
* Chieffi & Limongi (2004) Chieffi, A. & Limongi, M. 2004, Astrophys. J., 608, 405
* Ciardi et al. (2001) Ciardi, B., Ferrara, A., Marri, S., & Raimondo, G. 2001, Mon. Not. R. Astron. Soc., 324, 381
* Cojazzi et al. (2000) Cojazzi, P., Bressan, A., Lucchin, F., Pantano, O., & Chavez, M. 2000, Mon. Not. R. Astron. Soc., 315, L51
* Cooke et al. (2011) Cooke, R., Pettini, M., Steidel, C. C., Rudie, G. C., & Nissen, P. E. 2011, Mon. Not. R. Astron. Soc., 417, 1534
* Côté et al. (2016) Côté, B., Ritter, C., O’Shea, B. W., et al. 2016, Astrophys. J., 824, 82
* Cowan et al. (2002) Cowan, J. J., Sneden, C., Burles, S., et al. 2002, Astrophys. J., 572, 861
* Cucciati et al. (2012) Cucciati, O., Tresse, L., Ilbert, O., et al. 2012, A&A, 539, A31
* Dahlen et al. (2007) Dahlen, T., Mobasher, B., Dickinson, M., et al. 2007, Astrophys. J., 654, 172
* Daigne et al. (2006) Daigne, F., Olive, K. A., Silk, J., Stoehr, F., & Vangioni, E. 2006, The Astrophysical Journal, 647, 773
* Davé & Oppenheimer (2007) Davé, R. & Oppenheimer, B. D. 2007, Mon. Not. R. Astron. Soc., 374, 427
* De Cia et al. (2016) De Cia, A., Ledoux, C., Mattsson, L., et al. 2016, A&A, 596, A97
* De Cia et al. (2018) De Cia, A., Ledoux, C., Petitjean, P., & Savaglio, S. 2018, A&A, 611, A76
* De Cia et al. (2013) De Cia, A., Ledoux, C., Savaglio, S., Schady, P., & Vreeswijk, P. M. 2013, A&A, 560, A88
* Dessauges-Zavadsky et al. (2003) Dessauges-Zavadsky, M., Peroux, C., Kim, T.-S., D’Odorico, S., & McMahon, R. G. 2003, Mon. Not. R. Astron. Soc., 345, 447
* Dessauges-Zavadsky et al. (2002) Dessauges-Zavadsky, M., Prochaska, J. X., & D’Odorico, S. 2002, A&A, 391, 801
* Dessauges-Zavadsky, M. et al. (2015) Dessauges-Zavadsky, M., Zamojski, M., Schaerer, D., et al. 2015, A&A, 577, A50
* Doherty et al. (2013) Doherty, C. L., Gil-Pons, P., Lau, H. H. B., Lattanzio, J. C., & Siess, L. 2013, Mon. Not. R. Astron. Soc., 437, 195
* Doherty et al. (2014) Doherty, C. L., Gil-Pons, P., Lau, H. H. B., et al. 2014, Mon. Not. R. Astron. Soc., 441, 582
* Dotter et al. (2011) Dotter, A., Sarajedini, A., & Anderson, J. 2011, Astrophys. J., 738, 74
* Dvorkin et al. (2015) Dvorkin, I., Silk, J., Vangioni, E., Petitjean, P., & Olive, K. A. 2015, Mon. Not. R. Astron. Soc., 452, L36
* Ekström et al. (2008) Ekström, S., Meynet, G., Maeder, A., & Barblan, F. 2008, A&A, 478, 467
* Fang & Cen (2004) Fang, T. & Cen, R. 2004, Astrophys. J., 616, L87
* Fraser et al. (2017) Fraser, M., Casey, A. R., Gilmore, G., Heger, A., & Chan, C. 2017, Mon. Not. R. Astron. Soc., 468, 418
* Frebel et al. (2007) Frebel, A., Christlieb, N., Norris, J. E., et al. 2007, Astrophys. J. Lett., 660, L117
* Frebel & Norris (2015) Frebel, A. & Norris, J. E. 2015, Annu. Rev. Astron. Astrophys., 53, 631
* Frost & Lattanzio (1996) Frost, C. A. & Lattanzio, J. C. 1996, Astrophys. J., 473, 383
* Fynbo et al. (2006) Fynbo, J. P. U., Starling, R. L. C., Ledoux, C., et al. 2006, A&A, 451, L47
* Galli & Palla (2013) Galli, D. & Palla, F. 2013, Annu. Rev. Astron. Astrophys., 51, 163
* Gnedin & Ostriker (1997) Gnedin, N. Y. & Ostriker, J. P. 1997, Astrophys. J., 486, 581
* Gribel et al. (2017) Gribel, C., Miranda, O. D., & Vilas-Boas, J. W. 2017, Astrophys. J., 849, 108
* Gruppioni et al. (2013) Gruppioni, C., Pozzi, F., Rodighiero, G., et al. 2013, Mon. Not. R. Astron. Soc., 432, 23
* Heger & Woosley (2002) Heger, A. & Woosley, S. E. 2002, Astrophys. J., 567, 532
* Heger & Woosley (2010) Heger, A. & Woosley, S. E. 2010, Astrophys. J., 724, 341
* Henry et al. (2000) Henry, R. B. C., Edmunds, M. G., & Koppen, J. 2000, Astrophys. J., 541, 660
* Hirano & Yoshida (2013) Hirano, S. & Yoshida, N. 2013, Astrophys. J., 763, 52
* Hodge & da Cunha (2020) Hodge, J. A. & da Cunha, E. 2020, R. Soc. open sci., 7:200556, 1
* Iglesias & Rogers (1996) Iglesias, C. A. & Rogers, F. J. 1996, Astrophys. J., 464, 943
* Ioppolo et al. (2008) Ioppolo, S., Cuppen, H. M., Romanzin, C., van Dishoeck, E. F., & Linnartz, H. 2008, Astrophys. J., 686, 1474
* Jenkins (2009) Jenkins, E. B. 2009, Astrophys. J., 700, 1299
* Jorgenson et al. (2013) Jorgenson, R. A., Murphy, M. T., & Thompson, R. 2013, Mon. Not. R. Astron. Soc., 435, 482
* Karakas (2010) Karakas, A. I. 2010, Mon. Not. R. Astron. Soc., 403, 1413
* Kashikawa et al. (2012) Kashikawa, N., Nagao, T., Toshikawa, J., et al. 2012, Astrophys. J., 761, 85
* Kistler et al. (2009) Kistler, M. D., Yüksel, H., Beacom, J. F., Hopkins, A. M., & Wyithe, J. S. B. 2009, Astrophys. J. Lett., 705, L104
* Kobayashi (2005) Kobayashi, C. 2005, in From Lithium to Uranium: Elemental Tracers of Early Cosmic Evolution, ed. V. Hill, P. Francois, & F. Primas, Vol. 228, 315–321
* Kobayashi et al. (2007) Kobayashi, C., Springel, V., & White, S. D. M. 2007, Mon. Not. R. Astron. Soc., 376, 1465
* Kobayashi et al. (2006) Kobayashi, C., Umeda, H., Nomoto, K., Tominaga, N., & Ohkubo, T. 2006, Astrophys. J., 653, 1145
* Krumholz & McKee (2005) Krumholz, M. R. & McKee, C. F. 2005, Astrophys. J., 630, 250
* Kulkarni et al. (2007) Kulkarni, V. P., Khare, P., Péroux, C., et al. 2007, Astrophys. J., 661, 88
* Kulkarni et al. (2012) Kulkarni, V. P., Meiring, J., Som, D., et al. 2012, Astrophys. J., 749, 176
* Lancaster et al. (2021) Lancaster, L., Ostriker, E. C., Kim, J.-G., & Kim, C.-G. 2021, The Astrophysical Journal Letters, 922, L3
* Larson et al. (1980) Larson, R. B., Tinsley, B. M., & Caldwell, C. N. 1980, Astrophys. J., 237, 692
* Lilly et al. (2013) Lilly, S. J., Carollo, C. M., Pipino, A., Renzini, A., & Peng, Y. 2013, Astrophys. J., 772, 119
* Liu et al. (2019) Liu, D., Schinnerer, E., Groves, B., et al. 2019, The Astrophysical Journal, 887, 235
* Loeb & Barkana (2001) Loeb, A. & Barkana, R. 2001, Annu. Rev. Astron. Astrophys., 39, 19
* Loeb et al. (2016) Loeb, A., Batista, R. A., & Sloan, D. 2016, J. Cosmology Astropart. Phys., 2016, 040
* Ma et al. (2017) Ma, Q., Maio, U., Ciardi, B., & Salvaterra, R. 2017, Mon. Not. R. Astron. Soc., 466, 1140
* Ma et al. (2016) Ma, X., Hopkins, P. F., Faucher-Giguère, C.-A., et al. 2016, Mon. Not. R. Astron. Soc., 456, 2140
* Madau & Dickinson (2014) Madau, P. & Dickinson, M. 2014, Ann. Rev. Astron. Astrophys., 52, 415
* Magnelli et al. (2011) Magnelli, B., Elbaz, D., Chary, R. R., et al. 2011, A&A, 528, A35
* Magnelli et al. (2013) Magnelli, B., Popesso, P., Berta, S., et al. 2013, A&A, 553, A132
* Maio et al. (2010) Maio, U., Ciardi, B., Dolag, K., Tornatore, L., & Khochfar, S. 2010, Mon. Not. R. Astron. Soc., 407, 1003
* Maio et al. (2022) Maio, U., Péroux, C., & Ciardi, B. 2022, A&A, 657, A47
* Maio & Tescari (2015) Maio, U. & Tescari, E. 2015, Mon. Not. R. Astron. Soc., 453, 3799
* Matteucci (2001) Matteucci, F. 2001, The chemical evolution of the Galaxy, Vol. 253 (Springer)
* Matteucci (2016) Matteucci, F. 2016, J. Phys. Conf. Ser., 703, 012004
* Matteucci & Calura (2005) Matteucci, F. & Calura, F. 2005, Mon. Not. R. Astron. Soc., 360, 447
* Nakamura & Umemura (2001) Nakamura, F. & Umemura, M. 2001, Astrophys. J., 548, 19
* Neeleman et al. (2013) Neeleman, M., Wolfe, A. M., Prochaska, J. X., & Rafelski, M. 2013, Astrophys. J., 769, 54
* Nissen et al. (2004) Nissen, P. E., Chen, Y. Q., Asplund, M., & Pettini, M. 2004, A&A, 415, 993
* Nomoto et al. (1997) Nomoto, K., Iwamoto, K., Nakasato, N., et al. 1997, Nucl. Phys. A, 621, 467
* Nomoto et al. (2006) Nomoto, K., Tominaga, N., Umeda, H., Kobayashi, C., & Maeda, K. 2006, Nucl. Phys. A, 777, 424
* Ohkubo et al. (2006) Ohkubo, T., Umeda, H., Maeda, K., et al. 2006, Astrophys. J., 645, 1352
* Penprase et al. (2010) Penprase, B. E., Prochaska, J. X., Sargent, W. L. W., Toro-Martinez, I., & Beeler, D. J. 2010, Astrophys. J., 721, 1
* Pereira & Miranda (2010) Pereira, E. S. & Miranda, O. D. 2010, Mon. Not. R. Astron. Soc., 401, 1924
* Péroux & Howk (2020) Péroux, C. & Howk, J. C. 2020, Annual Review of Astronomy and Astrophysics, 58, 363
* Pettini et al. (2000) Pettini, M., Ellison, S. L., Steidel, C. C., Shapley, A. E., & Bowen, D. V. 2000, Astrophys. J., 532, 65
* Pettini et al. (1997) Pettini, M., King, D. L., Smith, L. J., & Hunstead, R. W. 1997, Astrophys. J., 478, 536
* Pettini et al. (2008) Pettini, M., Zych, B. J., Steidel, C. C., & Chaffee, F. H. 2008, Mon. Not. R. Astron. Soc., 385, 2011
* Press & Schechter (1974) Press, W. H. & Schechter, P. 1974, Astrophys. J., 187, 425
* Prochaska & Wolfe (2002) Prochaska, J. X. & Wolfe, A. M. 2002, Astrophys. J., 566, 68
* Prochaska et al. (2007) Prochaska, J. X., Wolfe, A. M., Howk, J. C., et al. 2007, Astrophys. J. Suppl. Ser., 171, 29
* Rafelski et al. (2012) Rafelski, M., Wolfe, A. M., Prochaska, J. X., Neeleman, M., & Mendez, A. J. 2012, Astrophys. J., 755, 89
* Raiteri et al. (1996) Raiteri, C. M., Villata, M., & Navarro, J. F. 1996, A&A, 315, 105
* Reddy & Steidel (2009) Reddy, N. A. & Steidel, C. C. 2009, Astrophys. J., 692, 778
* Reimers (1975) Reimers, D. 1975, Mémoires Société R. des Sci. Liège, 8, 369
* Robotham & Driver (2011) Robotham, A. S. G. & Driver, S. P. 2011, Mon. Not. R. Astron. Soc., 413, 2570
* Rollinde et al. (2009) Rollinde, E., Vangioni, E., Maurin, D., et al. 2009, Mon. Not. R. Astron. Soc., 398, 1782
* Salpeter (1959) Salpeter, E. E. 1959, Astrophys. J., 129, 608
* Salvadori et al. (2007) Salvadori, S., Schneider, R., & Ferrara, A. 2007, Mon. Not. R. Astron. Soc., 381, 647
* Santoro & Shull (2006) Santoro, F. & Shull, J. M. 2006, Astrophys. J., 643, 26
* Schaerer (2002) Schaerer, D. 2002, A&A, 382, 28
* Schiminovich et al. (2005) Schiminovich, D., Ilbert, O., Arnouts, S., et al. 2005, Astrophys. J. Lett., 619, L47
* Schinnerer et al. (2016) Schinnerer, E., Groves, B., Sargent, M. T., et al. 2016, Astrophys. J., 833, 112
* Schmidt (1959) Schmidt, M. 1959, Astrophys. J., 129, 243
* Schneider (2010) Schneider, R. 2010, in AIP Conference Series, Vol. 1294, First Stars and Galaxies: Challenges for the Next Decade, ed. D. J. Whalen, V. Bromm, & N. Yoshida, 102–109
* Schneider et al. (2006) Schneider, R., Salvaterra, R., Ferrara, A., & Ciardi, B. 2006, Mon. Not. R. Astron. Soc., 369, 825
* Scoville et al. (2014) Scoville, N., Aussel, H., Sheth, K., et al. 2014, The Astrophysical Journal, 783, 84
* Scoville et al. (2017) Scoville, N., Lee, N., Vanden Bout, P., et al. 2017, Astrophys. J., 837, 150
* Scoville et al. (2016) Scoville, N., Sheth, K., Aussel, H., et al. 2016, The Astrophysical Journal, 820, 83
* Sheth & Tormen (1999) Sheth, R. K. & Tormen, G. 1999, Mon. Not. R. Astron. Soc., 308, 119
* Shu et al. (2016) Shu, X. W., Elbaz, D., Bourne, N., et al. 2016, Astrophys. J. Supp., 222, 4
* Sobral et al. (2015) Sobral, D., Matthee, J., Darvish, B., et al. 2015, Astrophys. J., 808, 139
* Sonnentrucker et al. (2010) Sonnentrucker, P., Neufeld, D. A., Phillips, T. G., et al. 2010, A&A, 521, L12
* Spera et al. (2015) Spera, M., Mapelli, M., & Bressan, A. 2015, Mon. Not. R. Astron. Soc., 451, 4086
* Springel & Hernquist (2003) Springel, V. & Hernquist, L. 2003, Mon. Not. R. Astron. Soc., 339, 312
* Tacconi et al. (2018) Tacconi, L. J., Genzel, R., Saintonge, A., et al. 2018, The Astrophysical Journal, 853, 179
* Takahashi et al. (2018) Takahashi, K., Yoshida, T., & Umeda, H. 2018, Astrophys. J., 857, 111
* Tan et al. (2016) Tan, W.-W., Wang, F. Y., & Cheng, K. S. 2016, The Astrophysical Journal, 829, 29
* Tinsley & Larson (1978) Tinsley, B. M. & Larson, R. B. 1978, Astrophys. J., 221, 554
* Tornatore et al. (2007) Tornatore, L., Ferrara, A., & Schneider, R. 2007, Mon. Not. R. Astron. Soc., 382, 945
* Torrey et al. (2019) Torrey, P., Vogelsberger, M., Marinacci, F., et al. 2019, Mon. Not. R. Astron. Soc., 484, 5587
* Tumlinson & Shull (2000) Tumlinson, J. & Shull, J. M. 2000, Astrophys. J., 528, L65
* Tumlinson et al. (2004) Tumlinson, J., Venkatesan, A., & Shull, J. M. 2004, Astrophys. J., 612, 602
* Vangioni et al. (2018) Vangioni, E., Dvorkin, I., Olive, K. A., et al. 2018, Mon. Not. R. Astron. Soc., 477, 56
* Vanzella et al. (2020) Vanzella, E., Meneghetti, M., Caminha, G. B., et al. 2020, Mon. Not. R. Astron. Soc., 494, L81
* Vassiliadis & Wood (1993) Vassiliadis, E. & Wood, P. R. 1993, Astrophys. J., 413, 641
* Venkatesan et al. (2003) Venkatesan, A., Tumlinson, J., & Shull, J. M. 2003, Astrophys. J., 584, 621
* Vladilo (1998) Vladilo, G. 1998, Astrophys. J., 493, 583
* Vladilo (2002) Vladilo, G. 2002, A&A, 391, 407
* Vladilo et al. (2011) Vladilo, G., Abate, C., Yin, J., Cescutti, G., & Matteucci, F. 2011, A&A, 530, A33
* Wagner-Kaiser et al. (2017) Wagner-Kaiser, R., Mackey, D., Sarajedini, A., et al. 2017, Mon. Not. R. Astron. Soc., 471, 3347
* Wolfe et al. (2005) Wolfe, A. M., Gawiser, E., & Prochaska, J. X. 2005, Annu. Rev. Astron. Astrophys., 43, 861
* Wyder et al. (2005) Wyder, T. K., Treyer, M. A., Milliard, B., et al. 2005, Astrophys. J. Lett., 619, L15
* Zafar et al. (2014) Zafar, T., Centurión, M., Péroux, C., et al. 2014, Mon. Not. R. Astron. Soc., 444, 744
|
# Y-cube model and fractal structure of subdimensional particles on hyperbolic
lattices
Han Yan (闫寒) <EMAIL_ADDRESS>
Department of Physics & Astronomy, Rice University, Houston, Texas 77005, USA
Kevin Slagle
Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA; Department of Physics, California Institute of Technology, Pasadena, California 91125, USA; Institute for Quantum Information and Matter and Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, California 91125, USA
Andriy H. Nevidomskyy
Department of Physics & Astronomy, Rice University, Houston, Texas 77005, USA
###### Abstract
Unlike ordinary topological quantum phases, fracton orders are intimately
dependent on the underlying lattice geometry. In this work, we study a
generalization of the X-cube model, dubbed the Y-cube model, on lattices
embedded in $H_{2}\times S^{1}$ space, i.e., a stack of hyperbolic planes. The
name ‘Y-cube’ comes from the Y-shape of the analog of the X-cube’s X-shaped
vertex operator. We demonstrate that for certain hyperbolic lattice
tessellations, the Y-cube model hosts a new kind of subdimensional particle,
the treeon, which can only move on a fractal-shaped subset of the lattice.
Such excitations appear only in hyperbolic geometries; in flat space a treeon
becomes either a lineon or a planeon. Intriguingly, we find that for certain
hyperbolic tessellations, a fracton can be created by a membrane operator (as
in the X-cube model) _or_ by a fractal-shaped operator within the hyperbolic
plane.
Introduction: Fracton orders Pretko _et al._ (2020); Nandkishore and Hermele
(2019); Bravyi _et al._ (2011); Haah (2011); Vijay _et al._ (2015, 2016);
Chamon (2005) are examples of highly entangled gapped phases of matter that
lie beyond the Landau–Ginzburg paradigm in that no symmetry is broken, yet
they are also distinct from the more familiar topological phases of matter in
that they do not possess a universal long-wavelength description in terms of a
topological quantum field theory (TQFT) Seiberg and Shao (2021); Gorantla _et
al._ (2021); Slagle (2021); Ohmori and Shimamura (2022); Fontana _et al._
(2022); Slagle and Kim (2017); Aasen _et al._ (2020); Wen (2020); Wang
(2020); Slagle _et al._ (2019); Qi _et al._ (2021). Rather, fracton orders
have a nontrivial ground state degeneracy that depends not only on the
topology but also on the system size Bravyi _et al._ (2011); Haah (2011);
Vijay _et al._ (2015, 2016) and lattice geometry Shirley _et al._ (2018);
Slagle and Kim (2017). Furthermore, fracton orders support excitations whose
mobility is restricted when we do not allow any additional excitations to be
created: fractons are immobile; lineons can only move along a one-dimensional
line, and planeons can only move within a two-dimensional plane Pai and
Hermele (2019); Li and Ye (2020).
Much attention has been devoted to exactly solvable models that host fracton
order in flat space, such as the X-cube model formulated on the cubic lattice
Vijay _et al._ (2016). By comparison, relatively little is known about the
behavior of such models in curved spaces Yan (2019, 2019, 2020); Gorantla _et
al._ (2022a, b); Radicevic (2019) (see Refs. Slagle _et al._ (2019); Bidussi
_et al._ (2022); Jain and Jensen (2022); Gromov (2019) for works that study
gapless fracton models Pretko (2017); Seiberg and Shao (2020); Pretko and
Radzihovsky (2018); Chen _et al._ (2022) on curved spaces). The fundamental
motivation for introducing curvature is to investigate how it affects the
properties of the fracton order, in much the same way as placing a TQFT on a
manifold with a different genus teaches us about the topological nature of the
ground state degeneracy. For example, it has been previously noted that
curvature can lead to a robust ground state degeneracy of X-cube model even on
manifolds that are topologically trivial Slagle and Kim (2017); and curvature
can grant the subdimensional particles additional mobility Slagle and Kim
(2018); Slagle _et al._ (2019). Another practical motivation for introducing
curvature is to search for better error correcting fracton codes. In
particular, codes on hyperbolic spaces can have favorable quantum error
correcting properties Breuckmann and Eberhardt (2021). It is thus interesting
to examine this aspect for fracton order Tian _et al._ (2020); Tian and Wang
(2019).
In this Letter, we investigate a generalization of the X-cube model to the
hyperbolic space $H_{2}\times S^{1}$, which can be visualized as a stack of
hyperbolic planes ($H^{2}$) with the top and bottom layers identified. Unlike
the flat space, which only permits a small number of different lattices viewed
as tessellations by regular polygons/polyhedra (i.e. the familiar square,
triangular and hexagonal lattices in two dimensions), the number of distinct
tessellations is infinite in hyperbolic spaces. Regular two-dimensional
hyperbolic plane tessellations are enumerated by a pair of integers $(p,q)$
satisfying $\frac{1}{p}+\frac{1}{q}<1/2$, which is called the Schläfli symbol.
These tessellations consist of $p$-gonal regular polygons, with $q$
polygons meeting at each vertex. Two examples of such tessellations are shown
in Fig. 1.
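The Schläfli condition can be checked directly: $1/p+1/q$ greater than, equal to, or less than $1/2$ selects spherical, flat, or hyperbolic tessellations, respectively. A minimal sketch (the function name is our own, not from the paper):

```python
from fractions import Fraction

def tessellation_type(p: int, q: int) -> str:
    """Classify the regular tessellation with Schlaefli symbol (p, q)."""
    s = Fraction(1, p) + Fraction(1, q)
    if s > Fraction(1, 2):
        return "spherical"
    if s == Fraction(1, 2):
        return "flat"
    return "hyperbolic"   # infinitely many of these, e.g. (5,4), (4,6)

# The flat boundary cases are exactly the three Euclidean tilings:
flat = [(p, q) for p in range(3, 8) for q in range(3, 8)
        if tessellation_type(p, q) == "flat"]
print(flat)  # [(3, 6), (4, 4), (6, 3)]
```

Exact rational arithmetic avoids any floating-point ambiguity on the flat boundary cases.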
We find that the generalized X-cube model on this hyperbolic geometry depends
sensitively on the tessellation. The simplest is the case of $q=4$,
where each vertex is locally isomorphic to that of a cubic lattice, allowing
for the standard definition of the vertex operators as products of four Pauli
$X$ in each of the three locally orthogonal intersecting planes. The resulting
$(p,4)$ model has one-dimensional (1D) particles, lineons, which propagate
along the geodesics of the $H_{2}$ plane (as opposed to straight lines in the
flat space), but otherwise are very similar to the X-cube lineons. There are
nuances with the operators necessary to create individual fractons however, as
we shall see below.
The most intriguing findings are for tessellations of $q>4$. For even $q>4$,
we find new models, which we dub Y-cube (because of the Y shape of the in-
plane part of two of the vertex operators in the simplest $q=6$ case
illustrated in Fig. 1b). Unlike the X-cube model, the Y-cube model with $q>4$
does not possess lineons; instead the lineons are replaced with a new kind of
quasiparticles, treeons, that can only propagate on a fractal tree as shown in
Fig. 5b. Moreover, a pair or “dipole” of neighboring fractons remains immobile
within the $H_{2}$ plane, in contrast to the X-cube model, where a fracton
dipole forms a planeon.
(a) (5,4) tessellation
(b) (4,6) tessellation
Figure 1: Examples of tessellations of the $H_{2}\times S^{1}$ manifold using
the Poincaré disk representation: all polygons on the disk have identical
area, but look smaller when drawn farther from the center. (a) Hyperbolic
tessellation with $(p,q)=(5,4)$ (left) and Hamiltonian terms (right). (b)
Hyperbolic tessellation with $(p,q)=(4,6)$ (left) and Hamiltonian terms
(right).
X-cube and Y-cube models in $H_{2}\times S^{1}$: The generalized X-cube
models are constructed as shown in Fig. 1. The Hamiltonian consists of two
types of terms: the vertex and the prism (generalization of the cube) terms.
For $(p,q=4)$ tessellations, the model is the natural generalization of the
X-cube model Shirley _et al._ (2018); Slagle and Kim (2018): the vertex terms
are identical to those in the X-cube model, while the “cube” terms become
products of $Z$ operators over the edges of the $p$-gonal prisms, as shown in
Fig. 1a.
In the general case of $(p,q)$ tessellations (with even $q$), we keep the
$p$-gonal prism $Z$-operators in the Hamiltonian. There are in general two
kinds of vertex terms: (1) a product of $X$ operators on the two neighboring
out-of-plane edges and $q/2$ nonadjacent in-plane edges neighboring the
vertex; and (2) a product of $X$ operators on the $q$ in-plane edges
neighboring the vertex. See Fig. 1b for a $q=6$ example. Each vertex term
overlaps with any prism $Z$ term on an even number of edges, making
the model a stabilizer code Hamiltonian. We name this model the Y-cube model,
alluding to the “Y” shape of the in-plane vertex terms when $q=6$ [Fig. 1b].
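The commutation requirement above is the standard stabilizer-code condition: an $X$-type and a $Z$-type Pauli product commute exactly when their supports share an even number of qubits, since each shared qubit contributes one anticommuting $XZ$ factor. A minimal sketch (the edge labels are hypothetical placeholders, not taken from the model):

```python
def commute(x_support: set, z_support: set) -> bool:
    """A product of Pauli X's and a product of Pauli Z's commute exactly
    when their supports overlap on an even number of qubits (edges)."""
    return len(x_support & z_support) % 2 == 0

# Toy illustration of the Y-cube condition (edge labels are hypothetical):
vertex_x_term = {"e1", "e2", "e3", "e4"}   # X on four edges around a vertex
prism_z_term = {"e1", "e2", "e5", "e6"}    # Z on four edges of a prism
print(commute(vertex_x_term, prism_z_term))  # True: overlap of two edges
```

A single shared edge would instead give an anticommuting pair, which is why the term construction must enforce even overlaps.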
Before we move on to describe the new features of the hyperbolic X/Y-cube
models, we briefly note that all of them share some common properties due to
the flat $S^{1}$ dimension. Acting with an $X$ operator on an in-plane edge
will create four fracton excitations in the four prisms neighboring the edge.
A pair of fractons
displaced in-plane (see e.g. Fig. 2b) is a lineon that can move in the out-of-
plane direction. A pair of fractons displaced out of the plane is mobile
within the hyperbolic plane (via $X$ operators acting on the in-plane edges).
Acting with a $Z$ operator on an out-of-plane edge will create two lineons on
the two vertices at the ends of the edge. These lineons are free to move in
the out-of-plane direction. These excitations are similar to excitations in
the cubic lattice X-cube model. However the excitations (fractons, lineons,
and the new treeons) created otherwise—via $X$ operators acting on out-of-
plane edges or $Z$ operators on in-plane edges—have new physics that depends
on the tessellation, as we discuss in detail below.
Figure 2: Fracton operators for the $(5,4)$ tessellation: (a) An $X$ operator
(red dot) on an out-of-plane edge (perpendicular to the shown hyperbolic
plane) creates four fractons (yellow stars). (b) A truncated geodesic of $X$
operators creates two fractons, which can move along the geodesic. (c) $X$
operators on a stack of truncated geodesics create a single fracton.
Fractons in $(5,4)$ X-cube model: Let us first consider the model on the
hyperbolic lattice with $(p,q)=(5,4)$ [Fig. 1a], whose physics generalizes
straightforwardly to all $(\text{odd }p\geq 5,q=4)$ tessellations.
We first examine the effects of an $X$ operator acting on an out-of-plane
edge. It creates four fractons (a quadrupole) on the four neighboring prisms
[Fig. 2a]. By consecutively applying $X$ operators on the out-of-plane edges
attached to the same geodesic of the $H_{2}$ plane, a pair (or dipole) of the
fractons can be moved away. Extending one side of the string to the infinite
boundary of the hyperbolic plane will leave a single pair of fractons in the
bulk. Equivalently, a truncated geodesic of $X$ operators creates a pair of
fractons at its end [Fig. 2b].
Unlike in the X-cube model, a single fracton cannot be created in the bulk at
the corner of a membrane operator. This is because a membrane operator creates
fractons inside the membrane since each pentagon prism is surrounded by an odd
number ($p=5$) of out-of-plane edges. Instead, a single fracton can be created
using a series of truncated geodesic strings of $X$ operators, as illustrated
in Fig. 2c. Each truncated geodesic of out-of-plane $X$ operators creates a
pair of fractons. Thus, the first truncated geodesic operator creates a pair
of fractons, and the others move one fracton of the pair away to infinity.
Figure 3: Lineon operators for the $(5,4)$ tessellation: (a) A $Z$ operator
(teal) on an in-plane edge creates two lineons (blue diamonds). The inset
shows the two excited terms in the Hamiltonian for a single lineon. (b)
Applying a string of $Z$ operators along a geodesic of in-plane edges creates
a single lineon, which can move along the geodesic. (c) The logical operator
constructed by $Z$ operators, which can also be viewed as moving a lineon from
one boundary to the other.
Lineons in $(5,4)$ X-cube model: Next we examine the action of $Z$ operators
on in-plane edges. When $q=4$, each vertex is locally identical to a vertex of
the cubic lattice. Hence, a $Z$ operator on an in-plane link creates two
lineons [Fig. 3a] that behave similarly to lineons in the X-cube model on a
cubic lattice. Each lineon is an excited state of two vertex operators, shown
in the inset of Fig. 3a. Lineons are restricted to move on a $H_{2}$ geodesic,
as shown in Fig. 3b. Under rough boundary conditions (which condense lineons)
of the hyperbolic planes, the product of $Z$ operators along a geodesic [Fig.
3c] becomes a logical operator that does not create any excitations.
Figure 4: Fracton operators for the $(4,6)$ tessellation: (a) A bulk logical
$X$ operator on a fractal tree (thick black), which is a product of $X$
operators (red dots) on out-of-plane edges neighboring one (out-of-plane) side
of the fractal tree. (b) A pruned fractal-tree of $X$ operators creates a pair
of fractons, which is a lineon with mobility only in the out-of-plane
direction. (c) An infinite series of the pruned fractal-trees creates a single
fracton. (d) A bulk logical $X$ operator on a fractal tree wedge (colored
red). It is the product of all $X$ operators (red dots) on the out-of-plane
edges neighboring one side of the wedge. The fractal tree is drawn in thick,
dashed line. (e) A membrane of $X$ operators supported on the intersection of
two wedges (red and teal) also creates a single fracton.
Fractons in $(4,6)$ Y-cube model: Next we consider the Y-cube model on
$(\text{even }p\geq 4,q\geq 6)$ tessellations. We find that $q\geq 6$ results
in novel physics with a new kind of restricted particle mobility. We focus on
the representative example of $(4,6)$, with the Hamiltonian shown in Fig. 1b.
We first discuss the properties of fractons, then we consider the treeons
(which are analogs of lineons).
On the $(4,6)$ tessellation, there are two ways to create fractons,
illustrated in Fig. 4. To understand the first way, it is helpful to first
construct logical out-of-plane $X$ operators that do not create any fractons
in the bulk, shown in Fig. 4a. The logical operator is a product of $X$
operators on a fractal tree (also known as a Bruhat–Tits tree). The fractal
tree is constructed by choosing $q/2$ non-adjacent edges at a vertex and
repeating this procedure at every vertex it extends to. We shall refer to
these logical operators as $T_{X}$. The $T_{X}$ operators anti-commute with
strings of $Z$ operators in the out-of-plane direction. Here, we assume either
an infinite hyperbolic plane or a finite plane with a boundary that condenses
fracton dipoles Bulmash and Iadecola (2019); Luo _et al._ (2022).
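Since every tree vertex retains $q/2$ of its edges, the fractal tree is a $(q/2)$-regular tree, and the support of $T_{X}$ grows exponentially with graph distance, in contrast to the linear growth of a geodesic in the $q=4$ case. A short sketch of this counting (the helper name is ours):

```python
def tree_vertex_count(branching: int, depth: int) -> int:
    """Number of vertices within graph distance `depth` of the root of a
    regular tree in which every vertex has degree `branching`
    (a Bruhat-Tits tree; branching = q/2 for the Y-cube fractal tree)."""
    count, frontier = 1, branching            # the root, then its neighbors
    for _ in range(depth):
        count += frontier
        frontier *= branching - 1             # each vertex continues into b-1 new branches
    return count

# q = 6 gives a degree-3 tree with exponential support growth,
# versus the linear growth of a geodesic (the degenerate q = 4 case).
print([tree_vertex_count(3, d) for d in range(4)])  # [1, 4, 10, 22]
print([tree_vertex_count(2, d) for d in range(4)])  # [1, 3, 5, 7]
```

The degree-2 case collapses to a line, matching the statement that the $q=4$ fractal tree is simply a geodesic.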
Now we can see how single fractons and fracton dipoles are created by the first
type of fractal tree operator: an in-plane fracton dipole is created by
pruning the fractal tree operator $T_{X}$ at a vertex in the bulk, as shown in
Fig. 4b. Unlike the X-cube model on a cubic lattice or a $(p,q=4)$
tessellation, for even $q\geq 6$ the in-plane fracton dipoles are lineons that
can only move in the out-of-plane direction. A single fracton can be created
by aligning many pruned fractal trees along a series of adjacent vertices, as
shown in Fig. 4c. Each pruned tree, creating a dipole, serves the purpose of
moving a fracton closer to the boundary.
Now we turn to a second type of fracton-creation operator. Recall that when
$p$ is odd (such as $p=5$ in the earlier example in Fig. 2), a membrane of $X$
operators creates an extensive number of excitations because each prism term
overlaps with the membrane operator on an odd number ($p$) of out-of-plane
edges. However when $p$ is even, $X$ membrane operators commute with the prism
term in the bulk.
The key to constructing such membrane operators is that their boundary should
overlap with each neighboring prism on two out-of-plane edges. Following this
principle, the membrane boundary is built as follows: we first take a
$q/2$-degree fractal tree as constructed earlier. Then, we select a region
bounded by the branches of the tree that contains the entire tree [shaded
region in Fig. 4d]. The membrane operator is the product of all the $X$
operators on the out-of-plane edges attached to this region (on the top or
bottom side of the $H^{2}$ plane). We name this geometric shape the fractal-
tree wedge.
To construct a membrane operator that creates a single fracton, we can select
two partially overlapping fractal-tree wedges. The membrane operator supported
on the overlap creates a single fracton near the intersection of wedge
boundaries, as shown in Fig. 4e.
Figure 5: Treeon operators for the $(4,6)$ tessellation: (a) A $Z$ operator (teal) on an in-plane edge creates two treeons (blue stars). The inset shows the two excited terms in the Hamiltonian for a single treeon. (b) Applying a string of $Z$ operators (colored teal) within the fractal tree (colored light teal) creates a single treeon. The treeon can only move within the fractal tree. (c) An infinite string of $Z$ operators within the fractal tree is a logical operator.

Table 1: Properties of hyperbolic X/Y-cube models on different tessellations of $H_{2}\times S^{1}$

| lattice and model | in-plane lineon/treeon | in-plane fracton dipole | in-plane $X$ logical op. | in-plane fracton creation op. |
| --- | --- | --- | --- | --- |
| $(p\text{ odd},q=4)$ X-cube | lineon, Fig. 3b | 1D mobility, Fig. 2b | geodesic | truncated geodesics, Fig. 2b |
| $(p\text{ even},q=4)$ X-cube | lineon | 1D mobility | geodesic; geodesic wedge, Fig. 6a | truncated geodesics; wedge corner, Fig. 6b |
| $(p\text{ odd},q\geq 6)$ Y-cube | treeon | no mobility | fractal tree | pruned fractal trees |
| $(p\text{ even},q\geq 6)$ Y-cube | treeon, Fig. 5b | no mobility, Fig. 4b | fractal tree, Fig. 4a; fractal-tree wedge, Fig. 4d | pruned fractal trees, Fig. 4c; wedge corner, Fig. 4e |
Treeons in $(4,6)$ Y-cube model: Let us now discuss the excitations that
result from $Z$ operators acting on the in-plane edges in the $(4,6)$ Y-cube
model. A single in-plane $Z$ operator creates two composite excitations, each
consisting of two excited vertex operators, as shown in Fig. 5a. The two
excited vertex operators (inset of Fig. 5a) share three in-plane edges. Acting
with a $Z$ operator on one of these shared edges will move the composite
excitation to the adjacent vertex on the other side of the edge. The
excitation cannot move along other edges without creating additional
excitations.
Repeating this procedure, we find that this composite excitation can move
anywhere on the fractal tree shown in Fig. 5b. The construction of such a tree
is the same as $T_{X}$ in Fig. 4a. This composite excitation is similar to a
lineon, except at each vertex it can choose between multiple fractal paths. We
call this new kind of mobility-restricted excitation a _treeon_ since its
mobility is restricted to a fractal tree.
One non-trivial consequence of the treeons is how they form logical operators.
A treeon can travel from any one of the many branches of the tree to any other
one. The product of the $Z$ operators along any such path is then a logical
operator (assuming rough boundary conditions that condense treeons). One
example is shown in Fig. 5c.
Generalization to all tessellations: Let us now summarize some properties of
the hyperbolic X/Y-cube models on all $(p,q)$ tessellations with even $q$.
When $q\geq 6$, X-cube lineons are replaced by a new type of excitations,
treeons, that move on the fractal tree. The $q=4$ case can be viewed as a
special limit of a tree with only two branches at each vertex, which becomes a
geodesic. In this limit the treeons become lineons, and fracton dipoles gain
mobility along the geodesics. The properties of the different hyperbolic lattices are summarized in Table 1.
When $p$ is odd, a fracton can be created at the end of a series of truncated
geodesics or fractal trees [Figs. 4c,2c]. When $p$ is even, logical $X$
membrane operators are allowed in the shape of fractal-tree wedges for $q\geq
6$ [Fig. 4e] or geodesic wedges for $q=4$ [Fig. 6a]. The intersection of two
of these logical operators creates a single fracton at the corner [Figs. 4e and 6b].
Finally, tessellations with $1/p+1/q=1/2$ are special limits in which the embedding space becomes flat rather than hyperbolic. In the case of $p=q=4$ (square
lattice) we recover the 3D X-cube model on a cubic lattice. When $(p=3,q=6)$,
the 2D tessellation forms a triangular lattice, and we can define the Y-cube
model on a stack of these triangular lattices. In this case, the Y-cube model
treeons we encountered for $(p>3,q=6)$ become planeons that move on a honeycomb network embedded in the flat triangular layer. This happens because, when the hyperbolic geometry is made flat, the fractal tree that the treeon can traverse collapses onto itself and reduces to a 2D honeycomb; see the Supplementary Materials for more detail (although we leave the in-depth study of this triangular model to future work).
Figure 6: (a) A logical membrane $X$ operator as the geodesic wedge for the
$(6,4)$ model. (b) A membrane $X$ operator with a corner that creates a
fracton.
Summary and outlook: We introduced the Y-cube model on stacked tessellations
of the hyperbolic plane. We discovered that the Y-cube model features a new
kind of particle with restricted mobility: a treeon, which is constrained to
move along a fractal tree. We also found that in the hyperbolic X-cube and Y-cube models with even $p$, fractons can be created by either in-plane membrane or fractal operators (Figs. 4c and 4). We are not aware of any previously studied models with this property.
These models also serve as a concrete example of how the lattice geometry
(tessellation) of fracton orders can determine their fundamental properties,
even when the embedding space (in this case $H_{2}\times S^{1}$) is the same.
Our discovery suggests that there are still many fracton orders with new and
exotic features not seen before, especially when their underlying
graphs/lattices are beyond the flat space ones, i.e., lattices with no
translational symmetry. Like Type-II fracton orders, the hyperbolic Y-cube models are not foliated fracton orders Shirley _et al._ (2018, 2019), challenging us to seek new classification schemes for fracton order Dua _et al._ (2020).
This work provides one of the simplest examples of fracton order beyond the
flat space, but there is much room left for future exploration. One future
topic is to impose boundary conditions on the hyperbolic plane and study the
ground state degeneracy, logical operators, and quantum information encoding.
It is also useful to ask if the new physics from the hyperbolic structure
provides benefits in quantum memory storage. Another direction is to
investigate fracton models on the 3D hyperbolic space $H_{3}$, or general
graphs without translational symmetry Gorantla _et al._ (2022a).
H.Y. and A.H.N. were supported by the National Science Foundation Division of
Materials Research under the Award DMR-1917511. K.S. was partially supported
by the Walter Burke Institute for Theoretical Physics at Caltech; and the U.S.
Department of Energy, Office of Science, National Quantum Information Science
Research Centers, Quantum Science Center.
## References
* Pretko _et al._ (2020) M. Pretko, X. Chen, and Y. You, Int. J. Mod. Phys. A 35, 2030003 (2020), publisher: World Scientific Publishing Co.
* Nandkishore and Hermele (2019) R. M. Nandkishore and M. Hermele, Annual Review of Condensed Matter Physics 10, 295 (2019), arXiv:1803.11196 .
* Bravyi _et al._ (2011) S. Bravyi, B. Leemhuis, and B. M. Terhal, Annals of Physics 326, 839 (2011), arXiv:1006.4871 .
* Haah (2011) J. Haah, Phys. Rev. A 83, 042330 (2011), publisher: American Physical Society.
* Vijay _et al._ (2015) S. Vijay, J. Haah, and L. Fu, Phys. Rev. B 92, 235136 (2015).
* Vijay _et al._ (2016) S. Vijay, J. Haah, and L. Fu, Phys. Rev. B 94, 235157 (2016).
* Chamon (2005) C. Chamon, Phys. Rev. Lett. 94, 040402 (2005).
* Seiberg and Shao (2021) N. Seiberg and S.-H. Shao, SciPost Phys. 10, 3 (2021), arXiv:2004.06115 .
* Gorantla _et al._ (2021) P. Gorantla, H. T. Lam, N. Seiberg, and S.-H. Shao, Phys. Rev. B 104, 235116 (2021), arXiv:2108.00020 .
* Slagle (2021) K. Slagle, Phys. Rev. Lett. 126, 101603 (2021), arXiv:2008.03852 .
* Ohmori and Shimamura (2022) K. Ohmori and S. Shimamura, arXiv e-prints (2022), arXiv:2210.11001 .
* Fontana _et al._ (2022) W. Fontana, P. Gomes, and C. Chamon, SciPost Physics 12, 064 (2022), arXiv:2103.02713 .
* Slagle and Kim (2017) K. Slagle and Y. B. Kim, Phys. Rev. B 96, 195139 (2017).
* Aasen _et al._ (2020) D. Aasen, D. Bulmash, A. Prem, K. Slagle, and D. J. Williamson, Phys. Rev. Research 2, 043165 (2020), arXiv:2002.05166 .
* Wen (2020) X.-G. Wen, Phys. Rev. Research 2, 033300 (2020), arXiv:2002.02433 .
* Wang (2020) J. Wang, “Non-Liquid Cellular States,” (2020), arXiv:2002.12932 .
* Slagle _et al._ (2019) K. Slagle, D. Aasen, and D. Williamson, SciPost Physics 6, 043 (2019), arXiv:1812.01613 .
* Qi _et al._ (2021) M. Qi, L. Radzihovsky, and M. Hermele, Annals of Physics 424, 168360 (2021), arXiv:2010.02254 [cond-mat.str-el] .
* Shirley _et al._ (2018) W. Shirley, K. Slagle, Z. Wang, and X. Chen, Phys. Rev. X 8, 031051 (2018), arXiv:1712.05892 .
* Pai and Hermele (2019) S. Pai and M. Hermele, Phys. Rev. B 100, 195136 (2019), arXiv:1903.11625 .
* Li and Ye (2020) M.-Y. Li and P. Ye, Phys. Rev. B 101, 245134 (2020), arXiv:1909.02814 [cond-mat.str-el] .
* Yan (2019) H. Yan, Phys. Rev. B 99, 155126 (2019), arXiv:1807.05942 [hep-th] .
* Yan (2019) H. Yan, Phys. Rev. B 100, 245138 (2019).
* Yan (2020) H. Yan, Phys. Rev. B 102, 161119 (2020).
* Gorantla _et al._ (2022a) P. Gorantla, H. Tat Lam, and S.-H. Shao, arXiv e-prints (2022a), arXiv:2207.08585 .
* Gorantla _et al._ (2022b) P. Gorantla, H. Tat Lam, N. Seiberg, and S.-H. Shao, arXiv e-prints , arXiv:2210.03727 (2022b), arXiv:2210.03727 [cond-mat.str-el] .
* Radicevic (2019) D. Radicevic, arXiv e-prints (2019), arXiv:1910.06336 .
* Slagle _et al._ (2019) K. Slagle, A. Prem, and M. Pretko, Annals of Physics 410, 167910 (2019), arXiv: 1807.00827.
* Bidussi _et al._ (2022) L. Bidussi, J. Hartong, E. Have, J. Musaeus, and S. Prohazka, SciPost Physics 12, 205 (2022), arXiv:2111.03668 [hep-th] .
* Jain and Jensen (2022) A. Jain and K. Jensen, SciPost Phys. 12, 142 (2022).
* Gromov (2019) A. Gromov, Phys. Rev. Lett. 122, 076403 (2019), arXiv:1712.06600 .
* Pretko (2017) M. Pretko, Phys. Rev. B 95, 115139 (2017).
* Seiberg and Shao (2020) N. Seiberg and S.-H. Shao, SciPost Physics 9, 046 (2020), arXiv:2004.00015 .
* Pretko and Radzihovsky (2018) M. Pretko and L. Radzihovsky, Phys. Rev. Lett. 120, 195301 (2018), arXiv:1711.11044 .
* Chen _et al._ (2022) X. Chen, H. T. Lam, and X. Ma, arXiv e-prints , arXiv:2211.10458 (2022), arXiv:2211.10458 [cond-mat.str-el] .
* Slagle and Kim (2018) K. Slagle and Y. B. Kim, Phys. Rev. B 97, 165106 (2018), arXiv:1712.04511 .
* Breuckmann and Eberhardt (2021) N. P. Breuckmann and J. N. Eberhardt, IEEE Transactions on Information Theory 67, 6653 (2021), arXiv:2012.09271 .
* Tian _et al._ (2020) K. T. Tian, E. Samperton, and Z. Wang, Annals of Physics 412, 168014 (2020), arXiv:1812.02101 .
* Tian and Wang (2019) K. T. Tian and Z. Wang, arXiv e-prints (2019), arXiv:1902.04543 .
* Bulmash and Iadecola (2019) D. Bulmash and T. Iadecola, Phys. Rev. B 99, 125132 (2019), arXiv:1810.00012 .
* Luo _et al._ (2022) Z.-X. Luo, R. C. Spieler, H.-Y. Sun, and A. Karch, Phys. Rev. B 106, 195102 (2022).
* Shirley _et al._ (2019) W. Shirley, K. Slagle, and X. Chen, SciPost Phys. 6, 015 (2019), arXiv:1803.10426 .
* Dua _et al._ (2020) A. Dua, P. Sarkar, D. J. Williamson, and M. Cheng, Physical Review Research 2, 033021 (2020), arXiv:1909.12304 [cond-mat.str-el] .
* Chen _et al._ (2010) X. Chen, Z.-C. Gu, and X.-G. Wen, Phys. Rev. B 82, 155138 (2010), arXiv:1004.3835 .
* Tantivasadakarn _et al._ (2021) N. Tantivasadakarn, W. Ji, and S. Vijay, Phys. Rev. B 103, 245136 (2021), arXiv:2102.09555 .
Supplementary Materials for “Y-cube model and fractal structure of
subdimensional particles on hyperbolic lattices”
## I The case of $(3,6)$ tessellation
Here, we discuss some properties of the Y-cube model on the $(3,6)$
tessellation (triangular lattice) $\times S^{1}$. This is a special limit of
the $(p,q)$ tessellations that is geometrically flat instead of hyperbolic,
which drastically affects the mobility properties of the excitations. We leave
a more complete study of this model to future work.
The Hamiltonian consists of three types of terms, shown in Fig. S1. The first two types are products of $Z$’s on the edges of a triangular prism and certain products of $X$’s around vertices. On this flat tessellation, an
additional term can be added to the Hamiltonian, which is not analogous to any
term in hyperbolic tessellations: namely the product of $Z$’s around a
hexagon. The existence of this third term only on flat space is related to the
fact (explained below) that $Z$ operators on in-plane edges create planons,
rather than treeons as in the hyperbolic Y-cube model.
Figure S1: The triangular$\times S^{1}$ lattice and its Hamiltonian terms.
We first examine the excited states of the vertex terms. Recall from the main
text that when $p>3$ and $q=6$, $Z$ operators acting on in-plane edges create
treeon excitations that are restricted to move on a fractal tree. However on
the $(3,6)$ tessellation, we find that these vertex excitations are instead
planeons. This can be seen as follows. Locally, a $Z$ operator on an in-plane
edge creates two such planeons [Fig. S2a]. Each planeon is in the excited
state of two vertex operators [inset of Fig. S2a]. Similar to a treeon, these
planeons can move along any one of three in-plane edges connected to its
vertex. However, since the lattice is no longer hyperbolic, this freedom makes the excitation a planeon, which can move within a hexagonal sublattice, as shown in Fig. S2b. There are three flavors of this planeon, one
flavor for each hexagonal sublattice. The flavor of the planeon can be changed
at the expense of creating the fully mobile excitation described in the
following paragraph.
Figure S2: (a) A $Z$ operator (teal) acting on an in-plane edge creates two
planeons (blue stars). The inset shows the two excited terms in the
Hamiltonian for a single planeon. (b) A single planeon can travel on the
hexagonal sublattice of the triangular lattice, colored in light blue.
In hyperbolic geometry, the composite excited state of two vertex operators
created by an out-of-plane $Z$ operator is a lineon that can move in the
vertical direction only. On the $(3,6)$ tessellation, the analogous excitation
is instead free to move in 3D. Three $Z$ operators on the edges of an in-plane
triangle create three such excitations [Fig. S3a]. Each composite excitation
is in the excited state of two vertex operators [inset of Fig. S3a]. Unlike on
the hyperbolic plane, the flat geometry allows these composite excitations to
move along six in-plane directions, as shown in Fig. S3b. Repeated in-plane
movement can span a triangular sub-lattice on the original lattice. These
composite excitations can also move along the vertical direction as shown in
Fig. S3c, giving them 3D mobility.
Figure S3: (a) Three $Z$ operators (teal) acting on an in-plane triangle
create three excitations (green hexagons). The inset shows the two excited
terms in the Hamiltonian for a single excitation. (b) A single excitation can
travel on a sub-triangular lattice of the original triangular lattice. This
movement is done using four $Z$ operators shown on the figure. (c) A single
excitation can also travel vertically. This movement is done using a $Z$
operator on the vertical link shown on the figure. Figure S4: A flux
operator consisting of a product of $X$ operators (red) within a 2D membrane.
The flux operator anticommutes with the string operator (Fig. S3b) of the
mobile charge excitation.
These composite vertex excitations are similar to 3D toric code charges.
Naively, there are three flavors: one on each of the 3 sublattices. But by
acting with a triangle of $Z$ operators [Fig. S3a], the composite of the three flavors annihilates. Thus, there are actually only two independent flavors. The
corresponding flux operator (similar to a 3D toric code flux operator) is the
membrane operator consisting of an out-of-plane stack of products of $X$
operators on the red links in Fig. S4. This 2D membrane operator creates a
loop excitation around its boundary. Presumably, there are two flavors of this
membrane operator.
The action of in-plane $X$ operators creates planeon excitations with in-plane
mobility. The planeon is a composite excitation of the $Z$ terms shown in
Fig. S5. This planeon anticommutes with the planeon shown in Fig. S2.
Figure S5: The action of an in-plane $X$ operator moves a planeon (orange
triangle in the top panel). The planeon is a composite excitation of the five
$Z$ terms (teal) shown in the lower panel.
Finally, we note that for an $L\times L\times L$ periodic lattice (with
lattice constants $\bm{\hat{a}}_{1,2,3}$ shown in Fig. S1), the ground state
degeneracy is GSD $=2^{6+2L}$ when $L$ is a multiple of $3$ (for which
different flavors of particles do not turn into each other due to boundary
conditions).
Therefore, this model supports a pair of anticommuting planeons on each layer,
similar to stacks of toric code. The model also supports two sets of fully
mobile charges along with flux-line excitations, similar to two copies of 3D
toric code. Thus, it seems plausible that the model is local-unitary
equivalent Chen _et al._ (2010) to the following hybrid fracton order
Tantivasadakarn _et al._ (2021): two copies of 3D toric code and decoupled
stacks of 2D toric codes. Indeed, the ground state degeneracy is also
consistent with this possibility. We leave the resolution of this possibility
to future work.
# Recursive Quantum Approximate Optimization Algorithm
for the MAX-CUT problem on Complete graphs
Eunok Bae Department of Mathematics, Research Institute for Natural Sciences,
Hanyang University, Seoul 04763, Korea Soojoon Lee Department of
Mathematics, Kyung Hee University, Seoul, 02447, Republic of Korea
###### Abstract
Quantum approximate optimization algorithms are hybrid quantum-classical
variational algorithms designed to approximately solve combinatorial
optimization problems such as the MAX-CUT problem. In spite of their potential for near-term quantum applications, quantum approximate optimization algorithms are known to have limited performance in solving the MAX-CUT problem on certain instances at any constant level $p$. Recently, the recursive quantum approximate optimization algorithm, a non-local version of the quantum approximate optimization algorithm, has been proposed to overcome these limitations. However, the evidence that the recursive algorithm outperforms the original one has been mostly numerical and restricted to specific instances. In this work, we analytically prove that the recursive quantum approximate optimization algorithm outperforms the original one in solving the MAX-CUT problem on complete graphs with respect to the approximation ratio.
## I Introduction
There has been a growing interest in practical quantum computing in the noisy intermediate-scale quantum (NISQ) era. NISQ devices have several restrictions due to noise in quantum gates and limited quantum resources Preskill (2018). Diverse disciplines, for instance combinatorial optimization, quantum chemistry, and machine learning, are regarded as potential areas of application in which to demonstrate a quantum advantage over the best known classical methods on NISQ devices.
The quantum approximate optimization algorithm (QAOA) was designed to solve hard combinatorial optimization problems such as the MAX-CUT problem Farhi et al. (2014). QAOA is a hybrid quantum-classical algorithm consisting of a parametrized quantum circuit and a classical optimizer to train it, and it has been proposed as one of the principal approaches to address the restrictions of NISQ devices since parameters such as the circuit depth can be controlled Farhi et al. (2014). Even though QAOA is one of the promising candidates among NISQ algorithms, it is known that QAOA has limited performance in solving the MAX-CUT problem on several instances at any constant depth Hastings (2019); Bravyi et al. (2020); Chou et al. (2021); Farhi et al. (2020); Marwaha (2021); Marwaha and Hadfield ; Barak and Marwaha (2021).
The recursive quantum approximate optimization algorithm, the RQAOA for short, has recently been proposed to overcome the limitations of QAOA Bravyi et al. (2020). Very few results on the RQAOA are known Bravyi et al. (2020, 2021, 2022). Only one of them analytically proves that the level-1 RQAOA performs better than any constant-level QAOA in solving the MAX-CUT problem on cycle graphs Bravyi et al. (2020), while the others give only numerical evidence for similar claims for the graph coloring problem Bravyi et al. (2022) and for finding the largest energy of an Ising Hamiltonian Bravyi et al. (2021).
In this paper, we compare the performance of the level-1 QAOA and the level-1 RQAOA in solving the MAX-CUT problem on complete graphs, and show that the approximation ratio of the level-1 RQAOA is exactly one, whereas that of the level-1 QAOA is strictly less than one for any complete graph with $2n$ vertices. This implies that the RQAOA could be a better algorithm than the QAOA for NISQ devices, since higher-level algorithms can produce uncorrectable errors on NISQ devices.
This paper is organized as follows. In Sec. II, we briefly review the MAX-CUT problem and QAOA to solve it. In Sec. III, we introduce the RQAOA, a non-local variant of QAOA. In Sec. IV, we prove that the level-1 RQAOA outperforms the level-1 QAOA in solving the MAX-CUT problem on complete graphs. In Sec. V, we summarize our result and discuss future work.
## II QAOA for the MAX-CUT problem
Let $G=(V,E)$ be a graph with vertex set $V=\\{1,2,\dots,n\\}$ and edge set $E\subseteq\\{(i,j):i,j\in V\\}$. The MAX-CUT problem is a well-known
combinatorial optimization problem which aims to split $V$ into two disjoint
parts such that the number of edges spanning two parts is maximized. The MAX-
CUT problem can be formulated by maximizing the cost function
$C(\mathbf{x})=\frac{1}{2}\sum_{(i,j)\in E}\left(1-x_{i}x_{j}\right)$
for $\mathbf{x}=(x_{1},x_{2},\dots,x_{n})\in\\{-1,1\\}^{n}$.
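As a quick illustration (not part of the paper), the cost function can be evaluated by brute force for small graphs; for the complete graph $K_{2n}$, used throughout this paper, the maximum $C_{\max}$ is $n^{2}$, attained by any balanced bipartition. A minimal Python sketch:

```python
from itertools import combinations, product

def complete_graph_edges(n_vertices):
    """All edges of the complete graph on n_vertices vertices."""
    return list(combinations(range(n_vertices), 2))

def max_cut_value(n_vertices, edges):
    """Brute-force maximum of C(x) = 1/2 * sum_{(i,j) in E} (1 - x_i x_j)."""
    return max(
        sum((1 - x[i] * x[j]) / 2 for i, j in edges)
        for x in product([-1, 1], repeat=n_vertices)
    )
```

For example, `max_cut_value(4, complete_graph_edges(4))` returns $4=2^{2}$, the balanced cut of $K_{4}$.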
The quantum approximate optimization algorithm (QAOA) is a quantum algorithm for combinatorial optimization problems such as the MAX-CUT problem Farhi et al. (2014). In this work, we focus on its application to the MAX-CUT
problem. In this case, the classical cost function can be converted to a
quantum problem Hamiltonian
$H_{C}=\frac{1}{2}\sum_{(i,j)\in E}\left(I-Z_{i}Z_{j}\right),$
where $Z_{i}$ is the Pauli operator $Z$ acting on the $i$-th qubit. The
$p$-level QAOA, denoted by QAOAp, for the MAX-CUT problem can be described by the following steps.
###### Algorithm 1 (QAOAp Farhi et al. (2014)).
The QAOAp is as follows.
1. 1.
Initialize the quantum processor in $\ket{+}^{\otimes n}$.
2. 2.
Generate a variational wave function
$\ket{\psi_{p}(\bm{\gamma},\bm{\beta})}=e^{-i\beta_{p}H_{B}}e^{-i\gamma_{p}H_{C}}\cdots
e^{-i\beta_{1}H_{B}}e^{-i\gamma_{1}H_{C}}\ket{+}^{\otimes n}$, where
$H_{B}=\sum_{i=1}^{n}X_{i}$ is a mixing Hamiltonian and $X_{i}$ is the Pauli
operator $X$ acting on the $i$-th qubit.
3. 3.
Compute the expectation value
$F_{p}(\bm{\gamma},\bm{\beta})=\bra{\psi_{p}(\bm{\gamma},\bm{\beta})}H_{C}\ket{\psi_{p}(\bm{\gamma},\bm{\beta})}$
by performing the measurement in the computational basis.
4. 4.
Find the optimal parameters
$(\bm{\gamma}^{*},\bm{\beta}^{*})=\mathrm{argmax}_{\bm{\gamma},\bm{\beta}}F_{p}(\bm{\gamma},\bm{\beta})$
using a classical optimization algorithm.
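To make the steps above concrete, here is a small exact statevector simulation of the level-1 case for complete graphs (an illustrative sketch, not the authors' code; the function names are ours). It builds $\ket{\psi_{1}(\gamma,\beta)}$ explicitly and evaluates $F_{1}(\gamma,\beta)$ as in steps 2-3; a grid search over $(\gamma,\beta)$ then stands in for the classical optimizer of step 4.

```python
import numpy as np
from itertools import combinations

def qaoa1_expectation_complete(n_vertices, gamma, beta):
    """F_1(gamma, beta) for MAX-CUT on K_{n_vertices} via exact statevector simulation."""
    N = n_vertices
    edges = list(combinations(range(N), 2))
    basis = np.arange(2 ** N)
    spins = 1 - 2 * ((basis[:, None] >> np.arange(N)) & 1)   # x_i in {-1, +1}
    cost = sum((1 - spins[:, i] * spins[:, j]) / 2.0 for i, j in edges)
    # step 1: |+>^n ; step 2: apply e^{-i gamma H_C}, diagonal in the Z basis
    psi = np.full(2 ** N, 2.0 ** (-N / 2), dtype=complex) * np.exp(-1j * gamma * cost)
    # then e^{-i beta H_B} = product of single-qubit e^{-i beta X} rotations
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    psi = psi.reshape([2] * N)
    for q in range(N):
        psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
    psi = psi.reshape(-1)
    # step 3: expectation of H_C (diagonal with entries cost[z])
    return float(np.sum(np.abs(psi) ** 2 * cost))
```

The result can be checked against the closed-form single-edge expectation for $K_{2n}$ quoted later in the paper (Wang et al. 2018).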
The approximation ratio of QAOAp is defined as
$r=\frac{F_{p}(\bm{\gamma}^{*},\bm{\beta}^{*})}{C_{\max}}$, where
$C_{\max}=\max_{\mathbf{x}\in\\{-1,1\\}^{n}}C(\mathbf{x})$. It has been shown
that the QAOA has limitations for certain instances Bravyi et al. (2020);
Mbeng et al. (2019); Wurtz and Love (2021). Bravyi et al. Bravyi et al. (2020)
stated that the locality and symmetry of the QAOA cause these limitations, and they proposed a non-local version of QAOA called the recursive quantum approximate optimization algorithm (RQAOA).
## III RQAOA
In this section, we first briefly review the concept of the RQAOA. For the
level-$p$ RQAOA, denoted by RQAOAp, we consider an Ising-like Hamiltonian
$H_{n}=\sum_{(i,j)\in E}J_{i,j}Z_{i}Z_{j}$
which is defined on a graph $G_{n}=(V,E)$ with $|V|=n$, where
$J_{i,j}\in\mathbb{R}$ are arbitrary. The RQAOAp attempts to approximate
$\max_{\mathbf{x}\in\\{-1,1\\}^{n}}\bra{\mathbf{x}}H_{n}\ket{\mathbf{x}},$
and it can be described by the following steps.
1. 1.
Apply the original QAOA to find the optimal state
$\ket{\psi_{p}(\bm{\gamma}^{*},\bm{\beta}^{*})}$ which maximizes
$\bra{\psi_{p}(\bm{\gamma},\bm{\beta})}H_{n}\ket{\psi_{p}(\bm{\gamma},\bm{\beta})}$.
2. 2.
Compute
$M_{i,j}=\bra{\psi_{p}(\bm{\gamma}^{*},\bm{\beta}^{*})}Z_{i}Z_{j}\ket{\psi_{p}(\bm{\gamma}^{*},\bm{\beta}^{*})}$
for every edge $(i,j)\in E$.
3. 3.
Choose a pair $(k,l)$ that maximizes the magnitude $|M_{i,j}|$.
4. 4.
Impose the constraint $Z_{k}=\textrm{sgn}(M_{k,l})Z_{l}$, and substitute it into the Hamiltonian $H_{n}$:
$\displaystyle H_{n}$ $\displaystyle=$ $\displaystyle\sum_{(i,k)\in
E}J_{i,k}Z_{i}Z_{k}+\sum_{i,j\neq
k}J_{i,j}Z_{i}Z_{j}=\textrm{sgn}(M_{k,l})\left[\sum_{(i,k)\in
E}J_{i,k}Z_{i}Z_{l}\right]+\sum_{i,j\neq k}J_{i,j}Z_{i}Z_{j}$
5. 5.
Call RQAOA recursively to maximize the expected value of a new Ising
Hamiltonian $H_{n-1}$ depending on $n-1$ variables:
$H_{n-1}=\sum_{(i,l)\in
E^{\prime}_{0}}J^{\prime}_{i,l}Z_{i}Z_{l}+\sum_{(i,j)\in
E^{\prime}_{1}}J^{\prime}_{i,j}Z_{i}Z_{j},$
where
$\displaystyle E^{\prime}_{0}$ $\displaystyle=$
$\displaystyle\\{(i,l):(i,k)\in E\\},$ $\displaystyle E^{\prime}_{1}$
$\displaystyle=$ $\displaystyle\\{(i,j):i,j\neq k\\},$
and
$J^{\prime}_{i,j}=\begin{cases}\mathrm{sgn}(M_{k,l})J_{i,k}&\mathrm{if}~{}(i,l)\in
E^{\prime}_{0},\\\ J_{i,j}&\mathrm{if}~{}(i,j)\in E^{\prime}_{1}.\end{cases}$
6. 6.
The recursion stops when the number of variables reaches some suitable threshold value $n_{c}\ll n$; then find
$\mathbf{x}^{*}=\mathrm{argmax}_{\mathbf{x}\in\\{-1,1\\}^{n_{c}}}\bra{\mathbf{x}}H_{n_{c}}\ket{\mathbf{x}}$
by a classical algorithm.
7. 7.
Reconstruct the original (approximate) solution
$\tilde{\mathbf{x}}\in\\{-1,1\\}^{n}$ from $\mathbf{x}^{*}$ using the
constraints.
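Steps 4-5 are a purely classical transformation of the coupling table, which can be sketched as follows (a hypothetical helper, not from the paper). By construction, the maximum of $H_{n}$ over assignments satisfying the imposed constraint equals the constant offset produced by the $(k,l)$ term plus the maximum of the reduced Hamiltonian $H_{n-1}$:

```python
from itertools import product

def eliminate(J, k, l, sign):
    """Impose Z_k = sign * Z_l in H = sum_e J[e] Z_i Z_j (edges e = frozenset({i, j})).

    Returns (offset, J_new): restricted to the constraint, H equals
    offset + sum_e J_new[e] Z_i Z_j with variable k eliminated.
    """
    offset = 0.0
    J_new = {}
    for edge, coupling in J.items():
        if k not in edge:
            J_new[edge] = J_new.get(edge, 0.0) + coupling
            continue
        (other,) = edge - {k}
        if other == l:                     # J_{k,l} Z_k Z_l -> sign * J_{k,l} * I
            offset += sign * coupling
        else:                              # J_{i,k} Z_i Z_k -> sign * J_{i,k} Z_i Z_l
            e = frozenset((other, l))
            J_new[e] = J_new.get(e, 0.0) + sign * coupling
    return offset, J_new

def ising_max(J, variables, constraint=None):
    """Brute-force max of sum_e J[e] x_i x_j, optionally under x_k = sign * x_l."""
    best = -float("inf")
    for vals in product([-1, 1], repeat=len(variables)):
        x = dict(zip(variables, vals))
        if constraint is not None:
            k, l, sign = constraint
            if x[k] != sign * x[l]:
                continue
        best = max(best, sum(coup * x[min(e)] * x[max(e)] for e, coup in J.items()))
    return best
```

In the full algorithm, `eliminate` would be called once per recursion level with the edge $(k,l)$ and sign chosen from the measured $M_{k,l}$.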
## IV Our result
Now, we investigate the performance of the original QAOA1 and RQAOA1 for the
MAX-CUT problem on complete graphs, and we have the following theorem.
###### Theorem 1.
Let $K_{2n}$ be the complete graph with $2n$ vertices for $n\geq 2$ and let
$H_{C}=\frac{1}{2}\sum_{(i,j)\in E}\left(I-Z_{i}Z_{j}\right)$ be the problem
Hamiltonian for the MAX-CUT problem. Then
1. 1.
RQAOA1 achieves the approximation ratio $1$.
2. 2.
The approximation ratio of QAOA1 is strictly less than $1$. In particular, for $n\geq 4$, the approximation ratio of QAOA1 is less than $1-\frac{1}{8n^{2}}$.
###### Proof.
1. 1.
Let
$H_{2n}=\frac{1}{2}\sum_{(i,j)\in E}\left(I-Z_{i}Z_{j}\right),$
where $i,j$ are vertices of $K_{2n}$. Consider the corresponding cost function
$C_{2n}(\mathbb{x})=\frac{1}{2}\sum_{(i,j)\in E}\left(1-x_{i}x_{j}\right).$
Suppose that
$(\beta^{*},\gamma^{*})=\textrm{argmax}_{\beta,\gamma}\bra{\psi_{p}(\bm{\gamma},\bm{\beta})}H_{2n}\ket{\psi_{p}(\bm{\gamma},\bm{\beta})}$.
The exact form of the expectation value for QAOA with $p=1$ is known from Wang et al. (2018), and it allows us to calculate $M_{ij}$ as follows. For each edge $(i,j)$,
$\displaystyle M_{ij}$ $\displaystyle=$
$\displaystyle\bra{\psi_{p}(\bm{\gamma^{*}},\bm{\beta^{*}})}Z_{i}Z_{j}\ket{\psi_{p}(\bm{\gamma^{*}},\bm{\beta^{*}})}$
$\displaystyle=$ $\displaystyle\frac{1}{4}\left[\sin
4\beta^{*}\cdot\sin\gamma^{*}\cdot
2\cos^{2n-2}\gamma^{*}-\sin^{2}2\beta^{*}\left(1-\cos^{2n-2}2\gamma^{*}\right)\right]$
$\displaystyle=$ $\displaystyle\frac{1}{2}\sin
4\beta^{*}\cdot\sin\gamma^{*}\cdot\cos^{2n-2}\gamma^{*}-\frac{1}{8}\left(1-\cos
4\beta^{*}\right)\left[1-\left(2\cos^{2}\gamma^{*}-1\right)^{n-1}\right].$
For the recursion step, we can pick a pair $(k,l)$ in $E$ randomly since all
$M_{ij}$’s coincide. Without loss of generality, assume that
$(k,l)=(2n-1,2n)$. It can be easily shown that $M_{i,j}<0$ for all edges
$(i,j)$ and thus, by imposing the constraint
$x_{2n}=-x_{2n-1},$ (1)
the RQAOA removes the variable $x_{2n}$ from the cost function $C_{2n}(\mathbb{x})$, and we obtain the new cost function of the form
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}\left(x_{1}x_{2n-1}+\cdots+x_{2n-2}x_{2n-1}+x_{2n-1}x_{2n-1}-\sum_{(i,j)\in
E_{2n-1}}x_{i}x_{j}\right)$ (2) $\displaystyle=$
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}-\frac{1}{2}\sum_{(i,j)\in
E_{2n-2}}x_{i}x_{j}$ $\displaystyle=$
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}-\frac{1}{2}|E_{2n-2}|+C_{2n-2}(\mathbb{x}^{\prime}),$
(3)
where $\mathbb{x}^{\prime}\in\\{-1,1\\}^{2n-2}$.
Figure 1: This schematic diagram shows the change of the cost function after
one iteration of the RQAOA1 through the graph. The RQAOA1 eliminates the
variable $x_{6}$ in $K_{6}$ by imposing the constraint $x_{5}=-x_{6}$ on the
cost function $C_{6}(\mathbb{x})$. In the middle graph, the red dashed edges
indicate the terms including $x_{6}$ in $C_{6}(\mathbb{x})$ and these terms
will be canceled out after substituting $-x_{5}$ for $x_{6}$ due to the
different sign. As a consequence, we obtain the new cost function in terms of
$C_{4}(\mathbb{x}^{\prime})$ with additional terms as we can see in Eq. (3).
Similarly, the RQAOA eliminates the variable $x_{2n-2}$ by imposing the
constraint
$x_{2n-2}=-x_{2n-3}$ (4)
on the cost function $C_{2n}(\mathbb{x})$, and we have the next cost function
of the form
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}-\frac{1}{2}\sum_{(i,j)\in
E_{2n-2}}x_{i}x_{j}$ $\displaystyle=$
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}+\frac{1}{2}\left(1-\sum_{(i,j)\in
E_{2n-4}}x_{i}x_{j}\right)$ $\displaystyle=$
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{1}{2}+\frac{1}{2}-\frac{1}{2}|E_{2n-4}|+C_{2n-4}(\mathbb{x}^{\prime\prime}),$
where $\mathbb{x}^{\prime\prime}\in\\{-1,1\\}^{2n-4}$. By imposing the
constraints inductively,
$\displaystyle x_{2n}$ $\displaystyle=$ $\displaystyle-x_{2n-1}$
$\displaystyle x_{2n-2}$ $\displaystyle=$ $\displaystyle-x_{2n-3}$
$\displaystyle\vdots$ $\displaystyle x_{2n-(2k-2)}$ $\displaystyle=$
$\displaystyle-x_{2n-(2k-1)},$ (5)
the cost function $C_{2n}(\mathbb{x})$ after eliminating variables
$x_{2n},x_{2n-2},\dots,x_{2n-(2k-2)}$ becomes
$\displaystyle\frac{1}{2}|E_{2n}|+\frac{k}{2}-\frac{1}{2}|E_{2n-2k}|+C_{2n-2k}(\mathbb{\tilde{x}}),$
where $\mathbb{\tilde{x}}\in\\{-1,1\\}^{2n-2k}$. Now, we observe that
$\displaystyle\max_{\mathbb{x}\in\\{-1,1\\}^{2n}}C_{2n}(\mathbb{x})$
$\displaystyle\geq$
$\displaystyle\max_{\mathbb{x}\in\\{-1,1\\}^{2n}\textrm{with}~{}(\ref{eq:constraint2})}C_{2n}(\mathbb{x})$
$\displaystyle=$
$\displaystyle\frac{1}{2}n(2n-1)-\frac{1}{4}(2n-2k)(2n-2k-1)+\frac{k}{2}+\max_{\mathbb{\tilde{x}}\in\\{-1,1\\}^{2n-2k}}C_{2n-2k}(\mathbb{\tilde{x}})$
$\displaystyle=$
$\displaystyle\frac{1}{2}n(2n-1)-\frac{1}{2}(n-k)(2n-2k-1)+\frac{k}{2}+(n-k)^{2}$
$\displaystyle=$ $\displaystyle n^{2}$
Since the maximum cut value of $K_{2n}$ is exactly $n^{2}$, the constrained maximum coincides with the global maximum, and the solution reconstructed by the RQAOA1 attains it. Thus, the approximation ratio of the RQAOA1 is $1$.
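The conclusion of this part is easy to check numerically for small $n$ (an illustrative sketch, not part of the proof): the chain of constraints pairs consecutive vertices with opposite signs, and the resulting alternating assignment attains the brute-force optimum $n^{2}$ of $K_{2n}$.

```python
from itertools import combinations, product

def cut_value(x, edges):
    """C(x) = 1/2 * sum_{(i,j) in E} (1 - x_i x_j)."""
    return sum((1 - x[i] * x[j]) / 2 for i, j in edges)

def alternating_vs_optimum(n):
    """Cut achieved by the constrained alternating assignment vs. brute-force max on K_{2n}."""
    edges = list(combinations(range(2 * n), 2))
    x_tilde = [(-1) ** i for i in range(2 * n)]   # paired vertices get opposite signs
    brute = max(cut_value(x, edges) for x in product([-1, 1], repeat=2 * n))
    return cut_value(x_tilde, edges), brute
```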
2. 2.
In order to bound the approximation ratio of QAOA1, we again use the exact formula from Wang et al. (2018). For a complete graph with $2n$ vertices and $n\geq 2$, we have
$\displaystyle\left<C_{ij}\right>$ $\displaystyle=$
$\displaystyle\frac{1}{2}-\frac{1}{4}\sin^{2}(2\beta)\left(1-\cos^{2n-2}(2\gamma)\right)+\frac{1}{2}\sin(4\beta)\sin\gamma\cos^{2n-2}(\gamma),$
(6)
where $C_{ij}=\frac{1}{2}\left(I-Z_{i}Z_{j}\right)$ and
$\left<C_{ij}\right>:=\bra{\psi_{1}(\bm{\gamma},\bm{\beta})}C_{ij}\ket{\psi_{1}(\bm{\gamma},\bm{\beta})}$.
The QAOA1 for the MAX-CUT problem on the complete graph $K_{2n}$ maximizes the
expectation value
$\displaystyle F_{1}(\gamma,\beta)$ $\displaystyle=$
$\displaystyle\bra{\psi_{1}(\bm{\gamma},\bm{\beta})}H_{C}\ket{\psi_{1}(\bm{\gamma},\bm{\beta})}$
$\displaystyle=$ $\displaystyle|E_{2n}|\left<C_{ij}\right>,$
or, equivalently, it maximizes the following function with respect to the
parameters $\gamma,\beta$
$\displaystyle f(\gamma,\beta)$ $\displaystyle:=$
$\displaystyle\frac{1}{2}\sin(4\beta)\sin\gamma\cos^{2n-2}(\gamma)-\frac{1}{4}\sin^{2}(2\beta)\left(1-\cos^{2n-2}(2\gamma)\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2}\sin(4\beta)\sin\gamma\cos^{2n-2}(\gamma)-\frac{1}{8}\left(1-\cos(4\beta)\right)\left(1-\cos^{2n-2}(2\gamma)\right)$
Let us first differentiate the function $f$ by $\beta$ to obtain the optimal
$\beta$ as a function of $\gamma$.
$\displaystyle\frac{\partial
f}{\partial\beta}=2\cos(4\beta)\sin\gamma\cos^{2n-2}(\gamma)-\frac{1}{2}\sin(4\beta)\left(1-\cos^{2n-2}(2\gamma)\right)$
Thus, we have
$\frac{\partial
f}{\partial\beta}=0\iff\tan(4\beta)=\frac{4\sin\gamma\cos^{2n-2}(\gamma)}{1-\cos^{2n-2}(2\gamma)},$
(7)
and hence the optimal parameter satisfies
$4\beta^{*}=\arctan\left(\frac{4\sin\gamma\cos^{2n-2}(\gamma)}{1-\cos^{2n-2}(2\gamma)}\right).$
Using the trigonometric identities
$\sin\left(\arctan(x)\right)=\frac{x}{\sqrt{1+x^{2}}}~{}~{}\textrm{and}~{}~{}\cos\left(\arctan(x)\right)=\frac{1}{\sqrt{1+x^{2}}}$
for $x>0$, we obtain
$\displaystyle f(\beta^{*},\gamma)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\sin\left(\arctan\left(\frac{4\sin\gamma\cos^{2n-2}(\gamma)}{1-\cos^{2n-2}(2\gamma)}\right)\right)\sin\gamma\cos^{2n-2}(\gamma)$
$\displaystyle-$
$\displaystyle\frac{1}{8}\left[1-\cos\left(\arctan\left(\frac{4\sin\gamma\cos^{2n-2}(\gamma)}{1-\cos^{2n-2}(2\gamma)}\right)\right)\right]\left(1-\cos^{2n-2}(2\gamma)\right)$
For the simplicity of calculation, let $d=2n-2$, $s_{\gamma}=\sin\gamma$,
$c_{\gamma}=\cos\gamma$, and
$x(\gamma):=\frac{4\sin\gamma\cos^{2n-2}(\gamma)}{1-\cos^{2n-2}(2\gamma)}=\frac{4s_{\gamma}c_{\gamma}^{d}}{1-\left(2c_{\gamma}^{2}-1\right)^{d}}.$
Then the function $f$ can be rewritten and simplified by using the constraint
in Eq. (7) as
$\displaystyle f(\gamma):=f(\beta^{*},\gamma)$ $\displaystyle=$
$\displaystyle\frac{1}{2}\frac{x(\gamma)}{\sqrt{1+x(\gamma)^{2}}}s_{\gamma}c_{\gamma}^{d}-\frac{1}{8}\left(1-\frac{1}{\sqrt{1+x(\gamma)^{2}}}\right)\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{8}\frac{x(\gamma)}{\sqrt{1+x(\gamma)^{2}}}\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)x(\gamma)-\frac{1}{8}\left(1-\frac{1}{\sqrt{1+x(\gamma)^{2}}}\right)\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{8}\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)\left(\sqrt{1+x(\gamma)^{2}}-1\right)$
$\displaystyle=$
$\displaystyle\frac{1}{8}\left(\sqrt{\left(1-(2c_{\gamma}^{2}-1)^{d}\right)^{2}+16s_{\gamma}^{2}c_{\gamma}^{2d}}-\left(1-(2c_{\gamma}^{2}-1)^{d}\right)\right)$
We want to show that
$\displaystyle
f(\gamma)=\frac{1}{8}\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)\left(\sqrt{1+x(\gamma)^{2}}-1\right)<\frac{1}{4n-1}$
(8)
for all $\gamma$. Then the approximation ratio of the level-1 QAOA for the
MAX-CUT problem on complete graphs $K_{2n}$ satisfies
$\displaystyle\frac{F_{p}(\bm{\gamma}^{*},\bm{\beta}^{*})}{\max_{\mathbb{x}}C_{2n}(\mathbb{x})}$
$\displaystyle=$
$\displaystyle\frac{\max_{\gamma,\beta}F_{1}(\gamma,\beta)}{n^{2}}=\frac{|E_{2n}|\left(\frac{1}{2}+\max_{\gamma}f(\gamma)\right)}{n^{2}}$
$\displaystyle=$
$\displaystyle\frac{(2n-1)\left(\frac{1}{2}+\max_{\gamma}f(\gamma)\right)}{n}$
$\displaystyle<$ $\displaystyle 1-\frac{1}{2n(4n-1)}$ $\displaystyle<$
$\displaystyle 1-\frac{1}{8n^{2}}.$
By substituting the definition of $x(\gamma)$, we can see that the inequality
in Eq. (8) holds if and only if
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}\left(1-\left(2c_{\gamma}^{2}-1\right)^{d}\right)-(1-c_{\gamma}^{2})c_{\gamma}^{2d}>0$
(9)
for all $\gamma$. Now, let us define
$g(t):=\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}\left(1-\left(2t-1\right)^{d}\right)-(1-t)t^{d}$
for $t:=c_{\gamma}^{2}\in[0,1]$. Then we can prove that $g(t)>0$ for all
$0\leq t\leq 1$ (see Appendix A for the details).
∎
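As a numerical sanity check (not part of the original argument), the closed form of $f(\gamma)$ derived above and the bound in Eq. (8) can be evaluated directly; `f_gamma` and `max_f` are illustrative helper names:

```python
import math

def f_gamma(gamma, n):
    """Closed-form f(gamma) at the optimal beta for MAX-CUT on K_{2n}
    (the last displayed expression in the proof), with d = 2n - 2."""
    d = 2 * n - 2
    c, s = math.cos(gamma), math.sin(gamma)
    a = 1.0 - (2.0 * c * c - 1.0) ** d       # 1 - cos^{2n-2}(2 gamma)
    return 0.125 * (math.sqrt(a * a + 16.0 * s * s * c ** (2 * d)) - a)

def max_f(n, steps=20000):
    """Grid search for max_gamma f(gamma); f has period pi, so [0, pi] suffices."""
    return max(f_gamma(math.pi * k / steps, n) for k in range(steps + 1))

# The bound f(gamma) < 1/(4n - 1) from Eq. (8), checked for small n >= 4
# (the appendix argument requires n >= 4).
for n in range(4, 11):
    assert max_f(n) < 1.0 / (4 * n - 1)
```

Since $f$ is evaluated via the final closed form, the check is independent of the intermediate algebra.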
## V Conclusion
In this work, we have analyzed the performance of the level-1 RQAOA and the
level-1 QAOA for solving the MAX-CUT problem on complete graphs with $2n$
vertices. In this setting, we have proved that the level-1 RQAOA achieves an
approximation ratio of exactly one, which means that it always finds the exact
solution. On the other hand, we have also shown that the approximation ratio of
the level-1 QAOA is strictly less than one for any $n$. We expect that a
similar result holds for complete graphs with $2n-1$ vertices by the same
argument.
Not many results on the RQAOA are known. As a next step, it would be
interesting to analyze the performance of the level-1 RQAOA for solving the
MAX-CUT problem on other graphs. Furthermore, the same argument could be
applied to other combinatorial optimization problems such as the MAX-CLIQUE
problem.
## Appendix A The positivity of the function $g(t)$
In this section, we want to show that for all $t:=c_{\gamma}^{2}\in[0,1]$,
$g(t):=\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}\left(1-\left(2t-1\right)^{2n-2}\right)-(1-t)t^{2n-2}>0.$
To find the minimum of $g(t)$, we observe that any interior critical point
satisfies
$\displaystyle
g^{\prime}(t)=-\frac{4n-4}{4n-1}(2t-1)^{2n-3}+t^{2n-3}\left(-(2n-2)+(2n-1)t\right)=0.$
(10)
Since $g$ is continuous, it suffices to check that $g(0)>0$, $g(1)>0$, and
$g(t^{*})>0$ for all critical points $t^{*}\in\left[0,1\right]$. Observe that
$g(0)=g(1)=\frac{4}{(4n-1)^{2}}>0.$
Next, we consider the critical points $t^{*}\in(0,1)$, which satisfy
$g^{\prime}(t^{*})=-\frac{4n-4}{4n-1}(2t^{*}-1)^{2n-3}+{t^{*}}^{2n-3}\left(-(2n-2)+(2n-1)t^{*}\right)=0,$
(11)
or, equivalently,
$\frac{4n-4}{4n-1}(2t^{*}-1)^{2n-3}={t^{*}}^{2n-3}\left(-(2n-2)+(2n-1)t^{*}\right).$
(12)
Now, by imposing the condition in Eq. (12) on the function $g$, we have
$\displaystyle g(t^{*})$ $\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}\left(1-\left(2t^{*}-1\right)^{2n-2}\right)-(1-t^{*}){t^{*}}^{2n-2}$
$\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}\left[1-\frac{4n-1}{4n-4}(2t^{*}-1){t^{*}}^{2n-3}(-(2n-2)+(2n-1)t^{*})\right]-(1-t^{*}){t^{*}}^{2n-2}$
$\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}+{t^{*}}^{2n-3}\left(-\frac{1}{2n-2}{t^{*}}^{2}+\frac{2n-1}{4n-4}t^{*}-\frac{1}{2}\right)$
$\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}+\frac{{t^{*}}^{2n-3}}{2n-2}\left(-\left(t^{*}-\frac{2n-1}{4}\right)^{2}+\frac{(2n-1)^{2}}{16}-(n-1)\right).$
If we regard the third term in the last equation as a function of $t$, we can
easily see that it is decreasing on $(0,1)$. Therefore,
$\displaystyle g(t^{*})$ $\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}+\frac{{t^{*}}^{2n-3}}{2n-2}\left(-\left(t^{*}-\frac{2n-1}{4}\right)^{2}+\frac{(2n-1)^{2}}{16}-(n-1)\right)$
$\displaystyle>$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}+\frac{1}{2n-2}\left(-\left(1-\frac{2n-1}{4}\right)^{2}+\frac{(2n-1)^{2}}{16}-(n-1)\right)$
$\displaystyle=$
$\displaystyle\frac{4}{(4n-1)^{2}}+\frac{1}{4n-1}+\frac{1}{2n-2}\left(-\frac{1}{2}\right)$
$\displaystyle=$ $\displaystyle\frac{4n-13}{4(n-1)(4n-1)^{2}}$
$\displaystyle>$ $\displaystyle 0$
for all $n\geq 4$ and hence, $g(t)>0$ for all $t\in[0,1]$.
# Survey on Self-Supervised Multimodal Representation Learning and Foundation
Models
Sushil Thapa
Department of Computer Science and Engineering
New Mexico Tech
<EMAIL_ADDRESS>
Work done as a part of coursework "CSE-585: Graduate Seminar" under Prof. Dr.
Clinton L. Jeffery.
###### Abstract
Deep learning has been the subject of growing interest in recent years. In
particular, multimodal learning has shown great promise for solving a wide
range of problems in domains such as language, vision, and audio. One promising
direction for further improvement is learning rich and robust low-dimensional
representations of the high-dimensional world with the help of large-scale
datasets available on the internet. Because of its potential to avoid the cost
of annotating large-scale datasets, self-supervised learning has become the de
facto standard for this task in recent years. This paper summarizes some of the
landmark research papers that are directly or indirectly responsible for
building the foundation of multimodal self-supervised representation learning
today. It traces the development of representation learning over the last few
years for each modality and how these modalities were later combined into
multimodal agents.
## 1 Introduction
Deep learning has advanced to the point that it is now one of the most
important components of most intelligent systems. Deep neural networks (DNNs)
are a compelling approach in computer vision (CV) tasks and natural language
processing (NLP) tasks due to their ability to learn rich patterns from the
data available today. However, because of the high-cost requirements to
annotate datasets, the supervised approach of learning features from labeled
data has practically reached saturation. To avoid this, researchers have
started to derive the supervisory signals without explicit supervision from
humans. Since the model learns its supervision from the data itself, this is
called self-supervised learning. It differs from supervised learning, where we
explicitly annotate the supervision; unsupervised learning, where we have no
supervision whatsoever; and reinforcement learning, where we receive rewards
from the environment for our actions. Such models use billions of public
images, texts, or datasets of other modalities to learn features that provide a
fundamental understanding of the world around us. The field started out by
using neural networks to learn language models [Bengio et al., 2003] and
distributed representations of words [Mikolov et al., 2013]. Once attention
mechanisms were in place [Bahdanau et al., 2015] [Vaswani et al., 2017], NLP
made breakthrough progress on a variety of tasks [Devlin et al., 2019] [Floridi
and Chiriatti, 2020] [Lan et al., 2019]. Following these advances, researchers
also explored similar problem formulations for other modalities such as images
[Dosovitskiy et al., 2020] [Wu et al., 2021] and audio [Chi et al., 2021]
[Chuang et al., 2020], as well as combinations of at least two of these
modalities [Akbari et al., 2021] [Su et al., 2019] [Wang et al., 2020].
This paper surveys the development of each of such modalities and the progress
of combining those modalities into a single system. The goal is to discuss the
emerging self-supervised techniques for such modalities and understand the
motivation of multimodal fusion to build a model that can perceive the world
much like how our senses do. Hundreds of papers investigate this area, but we
filtered them to include only influential papers with strong reputations and
clear progress over previous systems. To the best of our knowledge, this is the
most recent and most comprehensive survey focusing on this area of research.
## 2 Research Questions
The paper first establishes, through a literature review, how representations
for each modality have improved, discussing language, vision, audio, and
robotics applications separately. It then follows up with research that
combines such modalities in one way or another. In summary, it asks the
following research questions, focusing on the methodologies for learning such
representations.
* •
What was the motivation for learning self-supervised representations?
* •
How can different techniques be applied to learn representations for multiple
types of modalities?
* •
How can multiple separate modalities be combined to get more effective AI
agents? What do we gain by combining them?
## 3 Related Work
This section focuses on the progress in the individual modalities separately.
The success of each of these modalities is the foundation of the current
success in multimodal systems.
### 3.1 Language
#### 3.1.1 Language Model and Embedding
With the motivation of building statistical models of language, Bengio et al.
[2003] propose learning the joint probability function of sequences of words
occurring in sentences. The work fights the curse of dimensionality by learning
a distributed representation for each word, trained together with its
neighboring words in sentences. In this statistical model [Bengio et al.,
2003], the words and language can be represented by the conditional probability
of the next word given all previous ones, such that
$\hat{P}(w_{1}^{T})=\prod_{t=1}^{T}\hat{P}(w_{t}|w_{1}^{t-1})$
where $w_{t}$ is the $t$-th word and writing sub-sequence
$w_{i}^{j}=(w_{i},w_{i+1},...,w_{j-1},w_{j})$. However, conditioning on all
previously occurring words can be compute-intensive and slow, so they used a
setup that only looks at the past $n-1$ words, building $n$-grams to learn the
context. The larger the value of $n$, the bigger the context learned. The
expression on the right is then approximated as
$\hat{P}(w_{t}|w_{1}^{t-1})\approx\hat{P}(w_{t}|w_{t-n+1}^{t-1})$
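As a toy illustration of the $n$-gram approximation (with $n=2$ and a made-up corpus), the conditional probabilities reduce to normalized bigram counts:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()   # toy corpus

# Maximum-likelihood bigram model: counts of (w_{t-1}, w_t) pairs and contexts.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def p_next(word, prev):
    """Estimate P(w_t = word | w_{t-1} = prev) from the counts."""
    return bigrams[(prev, word)] / contexts[prev]

# "the" is followed by "cat" twice and by "mat" once in the toy corpus.
assert abs(p_next("cat", "the") - 2 / 3) < 1e-12
```

Real language models add smoothing for unseen $n$-grams, which this sketch omits.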
Such language models are good at estimating the next possible word given a
sequence of previous words. Later works [Mikolov et al., 2013] [Pennington et
al., 2014], however, focused on building large embedding representations of
words based on the company they keep: they used contexts to predict the
associated words while also learning language models, and they used simple
linear models that can go through billions of words within a day.
Interestingly, the embeddings learned through such a setup exhibit
interpretable analogy associations, as shown in Figure 1.
Figure 1: Visualizations of Learned Embedding[Mikolov et al., 2013]
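The analogy structure in Figure 1 amounts to vector arithmetic plus a nearest-neighbor lookup. A sketch with made-up 2-d vectors (real word2vec embeddings are learned and high-dimensional):

```python
import numpy as np

# Made-up 2-d embeddings in which the second coordinate acts as a "gender"
# direction; the values are purely illustrative.
emb = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.8]),
    "woman": np.array([0.5, 0.1]),
    "queen": np.array([0.9, 0.1]),
    "apple": np.array([0.1, 0.5]),
}

def nearest(vec, exclude):
    """Word whose embedding has the highest cosine similarity to vec."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude), key=lambda w: cos(emb[w], vec))

# king - man + woman lands on queen, the classic analogy result.
target = emb["king"] - emb["man"] + emb["woman"]
assert nearest(target, exclude={"king", "man", "woman"}) == "queen"
```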
#### 3.1.2 Attention
This section focuses on the origin of and motivation for attention mechanisms
in neural networks. For mapping input sequences to output sequences, as in
machine translation, Sutskever et al. [2014] proposed encoding the input
sequence (e.g., an English sentence) into a fixed-size vector representation
and later decoding it into an output sequence (e.g., the German sentence in
English-to-German translation). This was effective, but encoding a
variable-length sequence into a fixed-length vector was not an intuitive thing
to do. Instead of decoding from a fixed vector, Bahdanau et al. [2015]
introduced attention, where the decoder can attend to, or focus on, a specific
region of the source sequence directly. This allows the model to learn
one-to-one mappings of relevant words, and as shown in Figure 2 we can also
interpret the attention weights of words across a whole sentence.
Figure 2: Four sample alignments. The x-axis and y-axis of each plot
correspond to the words in the source sentence (English) and the generated
translation (French), respectively. Each pixel shows the weight of the
annotation of the $j$-th source word for the $i$-th target word [Bahdanau et
al., 2015]
Attention was so powerful and revolutionary at the time that Vaswani et al.
[2017] proposed removing recurrence from the model entirely. They used only a
form of attention, together with positional encodings, to process the whole
sequence in parallel. This model, called the Transformer [Vaswani et al.,
2017], employs a self-attention technique that allows the model to learn
representations by looking at the input sequence itself, as shown in Figure 3.
The Transformer [Vaswani et al., 2017] revolutionized machine learning
research as we know it today.
Figure 3: An example of the self-attention mechanism following long-distance
dependencies in the encoder self-attention. Many of the attention heads attend
to a distant dependency of the verb ‘making’, completing the phrase
‘making…more difficult’[Vaswani et al., 2017].
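At its core, the self-attention of [Vaswani et al., 2017] is a few matrix products. A minimal single-head sketch in NumPy (dimensions and weights are arbitrary):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.
    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # (seq_len, seq_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)               # softmax over the keys
    return w @ V                                        # weighted mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                             # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
assert out.shape == (5, 4)                              # one d_k-dim output per token
```

The full Transformer runs several such heads in parallel and adds residual connections and feed-forward layers on top.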
#### 3.1.3 BERT Family
With the help of self-attention in Transformers, the encoders learn rich
representations of words from their context. In parallel work [Peters et al.,
2018], the embeddings were also made context-dependent with the help of
bi-directional LSTMs [Hochreiter and Schmidhuber, 1997]. Likewise, works such
as [Radford et al., 2018] focused on the idea of pretraining Transformers to
learn representations for language understanding. Later, to condition on
context from both directions, BERT [Devlin et al., 2019] learns representations
looking both forward and backward, inspired by the Cloze procedure [Taylor,
1953]. To learn word-word interactions, it masks a word and uses the context
around it to predict the masked word; the idea is that if the model can predict
it successfully, it has really learned the syntax and semantics of the words.
With the help of an auxiliary task that predicts whether two given sentences
are contiguous (the Next Sentence Prediction task), it also learns broader
sentence-level representations. This was a major breakthrough in the NLP
community: the fully self-supervised representations it learns were so rich
that they produced state-of-the-art results when applied to 11 downstream tasks
ranging from machine translation to question answering. Later, vanilla BERT
[Devlin et al., 2019] was scaled up for robustness [Liu et al., 1907], extended
with recurrence [Dai et al., 2019], and its masked language modeling was
replaced with permutation language modeling [Yang et al., 1906], among other
changes, to achieve further gains in performance.
Figure 4: Overall pre-training and fine-tuning procedures for BERT. Apart from
output layers, the same architectures are used in both pre-training and fine-
tuning. The same pre-trained model parameters are used to initialize models
for different downstream tasks.[Devlin et al., 2019].
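The masked-LM input construction can be sketched as follows; this is a simplification of BERT's actual recipe (which also keeps or randomizes some of the selected tokens), and `mask_tokens` is an illustrative name:

```python
import random

MASK, MASK_RATE = "[MASK]", 0.15

def mask_tokens(tokens, seed=1):
    """Build masked-LM training inputs: hide roughly 15% of the tokens and keep
    the originals as prediction targets (simplified relative to BERT)."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < MASK_RATE:
            inputs.append(MASK)
            labels.append(tok)      # the model is scored on these positions
        else:
            inputs.append(tok)
            labels.append(None)     # not scored
    return inputs, labels

inputs, labels = mask_tokens("the quick brown fox jumps over the lazy dog".split())
# Every input token is either the original or the mask symbol.
assert all(inp == tok or inp == MASK
           for inp, tok in zip(inputs, "the quick brown fox jumps over the lazy dog".split()))
```

The model is then trained to recover the original token at each masked position from the bidirectional context.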
Many of the state-of-the-art models that we see today are, in one form or
another, variants of BERT and the Transformer.
### 3.2 Image
Just like NLP, vision has also adopted self-supervised techniques to learn
representations. Self-supervision in image representation learning uses
image-related pretext tasks that work similarly to the masked language model
in BERT, allowing representations to be learned without explicit supervision.
#### 3.2.1 Pretext tasks
These tasks generate pseudo labels for a dummy prediction problem. The model
learned this way has features that are much richer and more condensed than the
high-dimensional images, and those features can then be used for downstream
tasks such as classification and detection. Pretext tasks are not limited to
images and can be applied to other domains as well. For contrastive pretext
tasks, the original image serves as an anchor, its augmented versions define
the positive samples, and other images in the dataset serve as the negative
samples. Generally, pretext tasks can be divided into four main
categories [Jaiswal et al., 2020]:
* •
visual transformations: basic adjustments that alter pixels or visual aspects,
such as blurring, noise, and distortion
* •
geometric transformations: spatial transformations such as scaling, rotation,
and cropping, as shown in Figure 5
* •
context-based tasks: manipulate the context the data appears in, e.g., jigsaw
puzzles of image crops, or prediction of future or missing frames in videos
* •
view-prediction tasks: different views of the same object serve as the
positive samples.
Figure 5: Geometric transformations as pretext tasks [Chen et al., 2020]. (a)
Original (b) Crop and resize (c) Rotate (90°, 180°, 270°) (d) Crop, resize,
flip.
#### 3.2.2 Contrastive learning methods
This is a self-supervised technique that learns visual features by contrasting
different forms of an image with other images. As shown in Figure 6, there are
several contrastive approaches. In general, end-to-end frameworks such as
SimCLR [Chen et al., 2020] compute a contrastive loss between augmented views
of an anchor image and other images, as shown in Figure 7. Likewise, by adding
a momentum encoder, MoCo [He et al., 2020] was able to train with much larger
pools of negative samples. In summary, agreement with the augmented version of
an anchor $z_{i}$ is maximized, whereas agreement with different images is
minimized.
Figure 6: Different architecture pipelines for Contrastive Learning: (a) End-
to-End training of two encoders where one generates representation for
positive samples and the other for negative samples (b) Using a memory bank to
store and retrieve encoding of negative samples (c) Using a momentum encoder
which acts as a dynamic dictionary lookup for encoding of negative samples
during training (d) Implementing a clustering mechanism by using swapped
prediction of the obtained representations from both the encoders using end-
to-end architecture[Jaiswal et al., 2020]. Figure 7: A simple framework for
contrastive learning of visual representations. Two separate data augmentation
operators are sampled from the same family of augmentations ($t\sim T$ and
$t\;^{\prime}\sim T$) and applied to each data example to obtain two
correlated views. [Chen et al., 2020].
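The SimCLR objective is the NT-Xent loss over the $2N$ embeddings of a batch. A NumPy sketch (dimensions and the temperature value are arbitrary):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (SimCLR) loss. z1, z2: (N, d) embeddings of two augmented views
    of the same N images; row i of z1 and row i of z2 form a positive pair."""
    z = np.concatenate([z1, z2])                         # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                       # drop self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
views = anchors + 0.01 * rng.normal(size=(8, 16))        # nearly identical "augmented" views
loss = nt_xent(anchors, views)
```

Matched views give a much lower loss than unrelated images, which is exactly the agreement maximization described above.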
#### 3.2.3 Clustering methods
This method tries to cluster augmented versions of images together in an
unsupervised fashion, as in DeepCluster [Caron et al., 2018], which uses a
fixed number of clusters to partition the entire dataset. In other methods
such as SwAV [Caron et al., 2020], prototypical representations of the
different views of the dataset are used, as in Figure 8; views are clustered
and similar vectors are pushed toward the same cluster. This does not have to
be done online, which makes it much easier to manage.
Figure 8: Contrastive instance learning (left) vs. SwAV (right). In
contrastive learning methods applied to instance classification, the features
from different transformations of the same images are compared directly to
each other. In SwAV, we first obtain “codes” by assigning features to
prototype vectors. We then solve a “swapped” prediction problem wherein the
codes obtained from one data-augmented view are predicted using the other
view. Thus, SwAV does not directly compare image features. Prototype vectors
are learned along with the ConvNet parameters by back-propagation. [Caron et
al., 2020]
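The swapped-prediction idea can be sketched with soft prototype assignments; note that SwAV actually computes codes with a Sinkhorn-based equal-partition procedure, which is simplified to a plain softmax here:

```python
import numpy as np

def codes(features, prototypes):
    """Soft assignment of each feature to the prototypes (softmax over
    similarity); a simplification of SwAV's Sinkhorn-based code computation."""
    sim = features @ prototypes.T
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
protos = rng.normal(size=(3, 8))                  # M = 3 prototype vectors
view1 = rng.normal(size=(5, 8))                   # features of one augmented view
view2 = view1 + 0.01 * rng.normal(size=(5, 8))    # features of the other view

# Swapped prediction: codes from one view supervise predictions from the other.
q1, p2 = codes(view1, protos), codes(view2, protos)
swapped_loss = -(q1 * np.log(p2)).sum(axis=1).mean()
```

In training, this cross-entropy (and its mirror image with the views swapped) is backpropagated into both the encoder and the prototype vectors.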
### 3.3 Audio
Just like images and language, the speech modality has also seen developments
in learning speech representations with the help of BERT and
Transformers [Chuang et al., 2020] [Lakhotia et al., 2021]. Digital speech is
just a waveform of signals, which can be encoded into speech features with the
help of a spectrogram. Those speech features are fed into an audio encoder,
which learns an embedding $z$. The decoder then takes $z$ and regenerates the
speech features, as shown in Figure 9. Via the reconstruction loss, the model
learns to reconstruct the features, which ultimately leads to rich speech
representations. This is a type of generative self-supervised learning.
Figure 9: Training procedure for the Initial Phonetic-Semantic Joint
Embedding. After training, the encoded vector (z in red) obtained here is used
to train the SpeechBERT[Chuang et al., 2020]
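The encoder-decoder reconstruction objective can be sketched with a tiny linear autoencoder trained by gradient descent; the spectrogram features here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(32, 40))       # stand-in for 32 frames of 40-dim spectrogram features

# Tiny linear encoder/decoder; z = features @ W_enc is the learned representation.
W_enc = 0.1 * rng.normal(size=(40, 8))
W_dec = 0.1 * rng.normal(size=(8, 40))

def mse():
    """Mean squared reconstruction error of the current encoder/decoder."""
    return np.mean((features @ W_enc @ W_dec - features) ** 2)

initial = mse()
for _ in range(500):                       # plain gradient descent on the reconstruction loss
    z = features @ W_enc
    err = z @ W_dec - features             # gradient of the squared error w.r.t. the reconstruction
    W_dec -= 1e-3 * z.T @ err / len(features)
    W_enc -= 1e-3 * features.T @ (err @ W_dec.T) / len(features)

assert mse() < initial                     # reconstruction improved, so z carries information
```

Real systems replace the linear maps with deep networks, but the training signal is the same reconstruction loss.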
### 3.4 Robotics
Inspired by recent advances in vision such as MoCo [He et al., 2020] and
SwAV [Caron et al., 2020], reinforcement learning and robotics have also
started embracing these techniques to provide auxiliary rewards when rewards
from the main RL environment are sparse. Since these auxiliary rewards are
generated in a self-supervised way, they help build powerful, sample-efficient
reinforcement learning agents. Figure 10 shows one such agent [Yarats et al.,
2021]. This work builds on the SwAV [Caron et al., 2020] approach, but applies
it to frames of RL environments such as games and simulated tasks.
Figure 10: Proto-RL proposes a self-supervised scheme that learns to encode
high-dimensional image observations $x_{t}$, $x_{t+1}$, using an encoder
$f_{\theta}$ along with a set of prototypes $\{c_{i}\}_{i=1}^{M}$ that defines
the basis of the latent space [Yarats et al., 2021]
### 3.5 Multimodal
So far, we have discussed how self-supervised pre-training and fine-tuning
have been helpful in different modalities separately. In this section, we
discuss combinations of two or more modalities to solve multimodal tasks such
as image captioning, visual question answering, and music generation.
#### 3.5.1 Visio-linguistic models
This is probably the most famous family of multimodal tasks. Due to the
abundance of paired text and image data from news portals, social media, etc.,
we have seen tremendous progress in these types of models. To name a few,
these models [Lu et al., 2019] [Li et al., 2019] [Su et al., 2019] encode the
respective modalities with separate encoders and then fuse them to learn a
cross-modal embedding that captures both visual and textual features. The
pretext task here is inherited from BERT [Devlin et al., 2019] itself, but
slightly modified to take in both visual and textual representations. As shown
in Figure 11, the masked language model becomes a masked multimodal learning
task, where a part of the input in one modality is masked and predicted from
the remaining parts of both modalities. Similarly, the next sentence
prediction task is modified to predict whether or not the encoding of one
modality matches the other. To provide a pseudo-labeled alignment dataset,
models such as CLIP [Radford et al., 2021] use hundreds of millions of images
paired with their alt-text/captions from the web.
Figure 11: Vilbert training task. In masked multi-modal learning, the model
must reconstruct image region categories or words for masked inputs given the
observed inputs. In multi-modal alignment prediction, the model must predict
whether or not the caption describes the image content.[Lu et al., 2019]
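The multimodal alignment idea, pushed to web scale by CLIP [Radford et al., 2021], reduces to scoring every image against every caption in a batch; a toy sketch with stand-in embeddings:

```python
import numpy as np

def alignment_logits(img_emb, txt_emb, temperature=0.07):
    """CLIP-style alignment scores: cosine similarity between every image and
    every caption in a batch, scaled by a temperature; the diagonal entries
    correspond to the true (image, caption) pairs."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return img @ txt.T / temperature

rng = np.random.default_rng(0)
images = rng.normal(size=(4, 32))                     # stand-in image embeddings
captions = images + 0.05 * rng.normal(size=(4, 32))   # stand-in embeddings of the paired captions
logits = alignment_logits(images, captions)
assert (logits.argmax(axis=1) == np.arange(4)).all()  # each image retrieves its own caption
```

Training applies a symmetric cross-entropy over the rows and columns of `logits` so that the diagonal pairs score highest.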
#### 3.5.2 Adding other modalities
In addition to these types of models, much bigger models have recently been
developed to support as many modalities as possible while learning a shared
representation. For example, the Perceiver model [Jaegle et al., 2021] does
not make any domain-specific assumptions about the input: it can handle
high-dimensional inputs such as images, videos, audio, point clouds, and other
multimodal combinations, predicting logits with the help of Transformers.
Figure 12: The Perceiver model architecture[Jaegle et al., 2021].
Similarly, other research shows that convolution-free, Transformer-based
multimodal models such as the Video-Audio-Text Transformer (VATT) [Akbari et
al., 2021] are very effective. This modality-agnostic, single-backbone
Transformer learns by sharing weights among the three modalities, as shown in
Figure 13.
Figure 13: Overview of the VATT architecture and the self-supervised,
multimodal learning strategy[Akbari et al., 2021]
## 4 Research Opportunities
This literature demonstrates the power of carefully designed machine learning
models combined with nearly unlimited data sources. Even though there has been
a lot of progress in this field in recent years, a few areas still need
attention from researchers. The main bottleneck in today's multimodal systems
seems to be hand-crafted sets of augmentations based on human understanding;
if we could instead build learnable augmentation systems free of human
inductive bias and trial-and-error, progress could be multiplied. Another
direction is building better pretext tasks that learn richer feature
representations. Most models that can handle multiple modalities are
Transformer-based, so improving the efficiency and design of Transformers
could also be very beneficial. Large companies today focus on building large
models with large datasets but often fail to answer smaller, more impactful
questions: model interpretability/explainability, fairness, and retaining
performance under adversarial attacks and distribution shifts are some other
research directions for these types of models.
## 5 Conclusion
This study extensively discussed a wide range of recent top-performing
self-supervised representation learning algorithms for vision, NLP, audio,
robotics, and their combinations. From the initial days of learning embedding
representations, to contrastive features, to employing the acquired parameters
for downstream tasks, we reviewed each module and modality of this learning
history. Multimodal representation learning has yielded encouraging results in
a variety of downstream applications. Finally, this paper discussed some of
the open issues with techniques prevalent in today's learning methods, along
with possible ways to improve them.
## 6 Disclaimer
This paper is intended to provide a glimpse of current research trends in
multimodal representation learning and is by no means fully comprehensive. We
have probably missed many excellent papers and manuscripts. Please feel free
to email me with any recommendations that you think should be included here.
## References
* Bengio et al. [2003] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. _Journal of Machine Learning Research_ , 3(null):1137–1155, March 2003. ISSN 1532-4435.
* Mikolov et al. [2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In _Neural and Information Processing System (NIPS)_ , 2013.
* Bahdanau et al. [2015] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In _3rd International Conference on Learning Representations, ICLR 2015_ , 2015.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems_ , volume 30, 2017.
* Devlin et al. [2019] J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In _NAACL_ , 2019.
* Floridi and Chiriatti [2020] Luciano Floridi and Massimo Chiriatti. Gpt-3: Its nature, scope, limits, and consequences. _Minds and Machines_ , 30(4):681–694, 2020.
* Lan et al. [2019] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. In _International Conference on Learning Representations_ , 2019.
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In _International Conference on Learning Representations_ , 2020.
* Wu et al. [2021] Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. Cvt: Introducing convolutions to vision transformers. _arXiv preprint arXiv:2103.15808_ , 2021.
# pyhf: pure-Python implementation of HistFactory with tensors and automatic differentiation
Matthew Feickert, Lukas Heinrich, Giordon Stark
###### Abstract
The HistFactory p.d.f. template is per se independent of its implementation in
ROOT, and it is useful to be able to run statistical analysis outside of the
ROOT, RooFit, RooStats framework. pyhf is a pure-Python implementation of that
statistical model for multi-bin histogram-based analysis and its interval
estimation is based on the asymptotic formulas of “Asymptotic formulae for
likelihood-based tests of new physics”. pyhf supports modern computational
graph libraries such as TensorFlow, PyTorch, and JAX in order to make use of
features such as auto-differentiation and GPU acceleration. In addition,
pyhf’s JSON serialization specification for HistFactory models has been used
to publish 23 full probability models from published ATLAS collaboration
analyses to HEPData.
## 1 Introduction
Measurements in High Energy Physics (HEP) aim to determine the compatibility
of observed events with theoretical predictions. The relationship between them
is often formalised in a statistical model $f(\bm{x}|\bm{\phi})$ describing
the probability of data $\bm{x}$ given model parameters $\bm{\phi}$. Given
observed data, the likelihood $\mathcal{L}(\bm{\phi})$ then serves as the
basis to test hypotheses on the parameters $\bm{\phi}$. For measurements based
on binned data (histograms), the HistFactory [1] family of statistical models
has been widely used for likelihood construction in both Standard Model (SM)
measurements (e.g. Refs. [2, 3]) as well as searches for new physics (e.g.
Ref. [4]) and reinterpretation studies (e.g. Ref. [5]). pyhf [6, 7] is
presented as the first pure-Python implementation of the HistFactory
specification. In addition to providing a Python and command line API for
HistFactory model building and inspection, it leverages modern open source
$n$-dimensional array libraries to take advantage of automatic differentiation
and hardware acceleration to accelerate the statistical inference and reduce
the time to analyst insight.
## 2 HistFactory Formalism
HistFactory statistical models — described in depth in Ref. [8] and Ref. [9] —
center around the simultaneous measurement of disjoint binned distributions
(channels) observed as event counts $\bm{n}$. For each channel, the overall
expected event rate is the sum over a number of physics processes (samples).
The sample rates may be subject to parametrised variations, both to express
the effect of free parameters $\bm{\eta}$ and to account for systematic
uncertainties as a function of constrained parameters $\bm{\chi}$, whose
impact on the expected event rates from the nominal rates is limited by
constraint terms. In a frequentist framework these constraint terms can be
viewed as auxiliary measurements with additional global observable data
$\bm{a}$, which paired with the channel data $\bm{n}$ completes the
observation $\bm{x}=(\bm{n},\bm{a})$. The full parameter set can be
partitioned into free and constrained parameters
$\bm{\phi}=(\bm{\eta},\bm{\chi})$, where a subset of the free parameters are
declared parameters of interest (POI) $\bm{\psi}$ (e.g. the signal strength)
and all remaining parameters as nuisance parameters $\bm{\theta}$.
$f(\bm{x}\,|\,\bm{\phi})=f(\bm{x}\,|\,\underbrace{\bm{\eta}}_{\text{free}},\underbrace{\bm{\chi}}_{\text{constrained}})=f(\bm{x}\,|\,\underbrace{\bm{\psi}}_{\text{parameters of interest}},\underbrace{\bm{\theta}}_{\text{nuisance parameters}})$ (1)
The overall structure of a HistFactory probability model is then a product of
the analysis-specific model term describing the measurements of the channels
and the analysis-independent set of constraint terms:
$f(\bm{n},\bm{a}\,|\,\bm{\eta},\bm{\chi})=\underbrace{\prod_{c\in\mathrm{\,channels}}\prod_{b\in\mathrm{\,bins}_{c}}\textrm{Pois}\left(n_{cb}\,\middle|\,\nu_{cb}\left(\bm{\eta},\bm{\chi}\right)\right)}_{\text{simultaneous measurement of multiple channels}}\;\underbrace{\prod_{\chi\in\bm{\chi}}c_{\chi}(a_{\chi}\,|\,\chi)}_{\text{constraint terms for ``auxiliary measurements''}},$ (2)
where within a certain integrated luminosity one observes $n_{cb}$ events
given the expected rate of events $\nu_{cb}(\bm{\eta},\bm{\chi})$ as a
function of unconstrained parameters $\bm{\eta}$ and constrained parameters
$\bm{\chi}$. The latter has corresponding one-dimensional constraint terms
$c_{\chi}(a_{\chi}|\,\chi)$ with auxiliary data $a_{\chi}$ constraining the
parameter $\chi$. The expected event rates $\nu_{cb}$ are defined as
$\nu_{cb}\left(\bm{\phi}\right)=\sum_{s\in\mathrm{\,samples}}\nu_{scb}\left(\bm{\eta},\bm{\chi}\right)=\sum_{s\in\mathrm{\,samples}}\underbrace{\left(\prod_{\kappa\in\,\bm{\kappa}}\kappa_{scb}\left(\bm{\eta},\bm{\chi}\right)\right)}_{\text{multiplicative
modifiers}}\,\Bigg{(}\nu_{scb}^{0}\left(\bm{\eta},\bm{\chi}\right)+\underbrace{\sum_{\Delta\in\bm{\Delta}}\Delta_{scb}\left(\bm{\eta},\bm{\chi}\right)}_{\text{additive
modifiers}}\Bigg{)}$ (3)
from constant nominal rate $\nu_{scb}^{0}$ and a set of multiplicative and
additive rate modifiers $\bm{\kappa}(\bm{\phi})$ and $\bm{\Delta}(\bm{\phi})$.
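To make Eq. (3) concrete, the following is a minimal pure-Python sketch (our own illustrative code, not pyhf's implementation) of how a single bin's expected rate is assembled from a nominal rate plus multiplicative and additive modifiers, together with the Poisson term that enters Eq. (2):

```python
import math

def expected_rate(samples):
    """nu_cb: sum over samples of (prod of kappas) * (nu0 + sum of Deltas), cf. Eq. (3)."""
    total = 0.0
    for nu0, kappas, deltas in samples:
        total += math.prod(kappas) * (nu0 + sum(deltas))
    return total

def poisson_logpmf(n, lam):
    """log Pois(n | lam), the per-bin channel term of Eq. (2)."""
    return n * math.log(lam) - lam - math.lgamma(n + 1)

# One bin: a signal sample scaled by a free normalisation mu, plus an
# unmodified background sample. Each sample is (nominal, kappas, Deltas).
mu = 2.0
nu = expected_rate([(5.0, [mu], []), (50.0, [], [])])  # 2.0 * 5.0 + 50.0 = 60.0
log_l = poisson_logpmf(55, nu)
```

Note that empty modifier lists reduce a sample to its nominal rate, since `math.prod([])` is 1 and `sum([])` is 0.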
## 3 pyhf
Through adoption of open source $n$-dimensional array (“tensor” in the machine
learning world) computational Python libraries, pyhf decreases the
abstractions between a physicist performing an analysis and the statistical
modeling without sacrificing computational speed. By taking advantage of
tensor calculations and hardware acceleration, pyhf can achieve comparable or
better performance than the C++ implementation of HistFactory on data from
real LHC analyses in most situations. pyhf’s default computational backend is
built from NumPy and SciPy, and supports TensorFlow, PyTorch, and JAX as
alternative backend choices. These alternative backends support hardware
acceleration on GPUs and, in the case of JAX, just-in-time (JIT) compilation,
as well as automatic differentiation for calculating the full gradient of the
likelihood function, all of which contributes to speeding up fits.
### 3.1 JSON Schema
The structure of the JSON specification of HistFactory models [8] used by pyhf
closely follows the original XML-based specification [1]. The JSON
specification for a HistFactory workspace is a primary focus of Ref. [8], but
a workspace can be summarised as consisting of a set of channels (an analysis
region) that include samples and possible parameterised modifiers, a set of
measurements (including the POI), and observations (the observed data). Figure
1 demonstrates a simple workspace representing the measurement of a single
two-bin channel with two samples: a signal sample and a background sample. The
signal sample has an unconstrained normalisation factor $\mu$, while the
background sample carries an uncorrelated shape systematic. The background
uncertainties for the bins are 10% and 20% respectively. Use of this JSON
specification has allowed for the publication of 23 full statistical models
from ATLAS analyses to HEPData at the time of writing in 2022. This has been a
significant step forward in enabling reinterpretation and recasting of LHC
results by the broader particle physics community [10].
{
  "channels": [
    { "name": "singlechannel",
      "samples": [
        { "name": "signal",
          "data": [5.0, 10.0],
          "modifiers": [ { "name": "mu", "type": "normfactor", "data": null } ]
        },
        { "name": "background",
          "data": [50.0, 60.0],
          "modifiers": [ { "name": "uncorr_bkguncrt", "type": "shapesys", "data": [5.0, 12.0] } ]
        }
      ]
    }
  ],
  "observations": [
    { "name": "singlechannel", "data": [50, 60] }
  ],
  "measurements": [
    { "name": "Measurement", "config": { "poi": "mu", "parameters": [] } }
  ]
}
Figure 1: A toy example of a 2-bin single channel workspace with two samples.
The signal sample has expected event rates of 5.0 and 10.0 in each bin, while
the background sample has expected event rates of 50.0 and 60.0 in each bin.
An experiment provided the observed event counts of 50 and 60 for the bins
in that channel. The uncorrelated shape systematic on the background has 10%
and 20% uncertainties in each bin, specified as absolute uncertainties on the
background sample rates. A single measurement is defined which specifies $\mu$
as the POI [8].
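The workspace of Figure 1 is plain JSON, so it can be constructed and sanity-checked with nothing but the standard library; the sketch below (our own code, not part of the pyhf API) rebuilds it as a Python dict and recovers the 10% and 20% per-bin relative uncertainties:

```python
import json

workspace = {
    "channels": [
        {"name": "singlechannel",
         "samples": [
             {"name": "signal", "data": [5.0, 10.0],
              "modifiers": [{"name": "mu", "type": "normfactor", "data": None}]},
             {"name": "background", "data": [50.0, 60.0],
              "modifiers": [{"name": "uncorr_bkguncrt", "type": "shapesys",
                             "data": [5.0, 12.0]}]},
         ]},
    ],
    "observations": [{"name": "singlechannel", "data": [50, 60]}],
    "measurements": [{"name": "Measurement",
                      "config": {"poi": "mu", "parameters": []}}],
}

bkg = workspace["channels"][0]["samples"][1]
rel_unc = [u / r for u, r in zip(bkg["modifiers"][0]["data"], bkg["data"])]
# rel_unc == [0.1, 0.2]: the per-bin 10% and 20% background uncertainties
serialized = json.dumps(workspace)  # round-trips through the JSON specification
```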
### 3.2 Enabling Analysis Ecosystems
In addition to being used in ATLAS analyses, and in the flavor physics
community [11, 12], pyhf has been used as a computational engine for
reinterpretation studies by the particle physics phenomenology community [13,
14] and as the inference engine for Scikit-HEP library cabinetry [15], as well
as other more analysis specific open source projects [16, 17]. The adoption of
pyhf as a library for other projects to build upon has large implications for
establishing standards and providing improvements across ecosystems of
analysis tools. Of particular note, the Institute for Research and Innovation
in Software for High Energy Physics (IRIS-HEP) [18] has adopted pyhf as a core
part of its Analysis Systems pipeline — a demonstrator model for modern
distributed computing for experiments in the high-luminosity LHC (HL-LHC) era
— which has provided rigorous testing of its interoperability with other
tools. Improvements to pyhf directly impact all the areas highlighted in
Figure 2. In addition to its computational abilities, pyhf is highly portable
given its pure-Python nature and use of dependencies, like SciPy, that are
broadly trusted in computational science and have been built for ubiquitous
architectures. This allows for full pyhf runtimes to be natively used in novel
environments, such as the Pyodide port of CPython to WebAssembly/Emscripten.
While Pyodide is not optimal for serious computational use cases, the ability
to use the full pyhf API allows for creation of statistical linting and
visualization tools that use the same tooling as in production while
leveraging interactivity of web native platforms enabled by Pyodide and the
PyScript framework.
Figure 2: Overview of the IRIS-HEP Analysis Systems pipeline for analysis in
the HL-LHC era. The red outline indicates the areas of the pipeline in which
pyhf is used either directly or as an underlying library.
## 4 Conclusions
pyhf is the first pure-Python implementation of the HistFactory specification
that leverages modern open source $n$-dimensional array libraries as
computational backends to exploit automatic differentiation and hardware
acceleration to speed up fits and reduce the time to scientific insight. It
provides a Python and command line API for building, inspecting, and
performing statistical inference on HistFactory models, and its JSON model
serialization has enabled publication of full statistical models from the
ATLAS collaboration and improved reinterpretations. As pyhf is an open source
library that has been built as part of the Scikit-HEP community project it has
been readily adopted by a growing number of other libraries and tools as a
computational and inference engine, allowing for improvements in the library
API and computational backends to propagate to the broader user community.
Growing community support and interaction, adoption across the broader
particle physics community, and rigorous testing from LHC experiments and
IRIS-HEP systems have demonstrated that pyhf has become a key component of the
growing ecosystem of Pythonic open source scientific tools in particle
physics.
## Acknowledgments
Matthew Feickert’s contributions to this work were supported by the U.S.
National Science Foundation (NSF) under Cooperative Agreement OAC-1836650
(IRIS-HEP). Lukas Heinrich is supported by the Excellence Cluster ORIGINS,
which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) under Germany’s Excellence Strategy – EXC-2094-390783311.
## References
* [1] K. Cranmer, G. Lewis, L. Moneta, A. Shibata and W. Verkerke, _HistFactory: A tool for creating statistical models for use with RooFit and RooStats_ , Tech. Rep. CERN-OPEN-2012-016 (Jan, 2012).
* [2] ATLAS Collaboration, _Measurements of Higgs boson production and couplings in diboson final states with the ATLAS detector at the LHC_ , _Phys. Lett. B_ 726 (2013) 88.
* [3] LHCb Collaboration, _Dalitz plot analysis of $B^{0}\to\overline{D}^{0}\pi^{+}\pi^{-}$ decays_, _Phys. Rev. D_ 92 (2015) 032002.
* [4] ATLAS Collaboration. ATLAS-CONF-2018-041, 2018.
* [5] L. Heinrich, H. Schulz, J. Turner and Y.-L. Zhou, _Constraining A 4 leptonic flavour model parameters at colliders and beyond_, _JHEP_ 04 (2019) 144.
* [6] L. Heinrich, M. Feickert and G. Stark, “pyhf: v0.7.0.” 10.5281/zenodo.1169739.
* [7] L. Heinrich, M. Feickert, G. Stark and K. Cranmer, _pyhf: pure-python implementation of histfactory statistical models_ , _Journal of Open Source Software_ 6 (2021) 2823.
* [8] ATLAS Collaboration. ATL-PHYS-PUB-2019-029, 2019.
* [9] M. Feickert, L. Heinrich and G. Stark, _Likelihood preservation and statistical reproduction of searches for new physics_ , _EPJ Web Conf._ 245 (2020) 06017.
* [10] K. Cranmer et al., _Publishing statistical models: Getting the most out of particle physics experiments_ , _SciPost Phys._ 12 (2022) 037 [2109.04981].
* [11] Belle II Collaboration, _Search for B+→K+ $\nu$$\nu$¯ Decays Using an Inclusive Tagging Method at Belle II_, _Phys. Rev. Lett._ 127 (2021) 181802 [2104.12624].
* [12] Belle Collaboration, _Search for a dark leptophilic scalar produced in association with $\tau^{+}\tau^{-}$ pair in $e^{+}e^{-}$ annihilation at center-of-mass energies near 10.58 GeV_, 2207.07476.
* [13] G. Alguero, S. Kraml and W. Waltenberger, _A SModelS interface for pyhf likelihoods_ , _Comput. Phys. Commun._ 264 (2021) 107909 [2009.01809].
* [14] G. Alguero, J. Heisig, C.K. Khosa, S. Kraml, S. Kulkarni, A. Lessa et al., _New developments in SModelS_ , _PoS_ TOOLS2020 (2021) 022 [2012.08192].
* [15] Alexander Held, “cabinetry: v0.5.1.” 10.5281/zenodo.4742752.
* [16] Mason Proffitt, “abcd-pyhf: v0.0.5.”
* [17] Nathan Simpson and Lukas Heinrich, _neos: End-to-End-Optimised Summary Statistics for High Energy Physics_ , 2203.05570.
* [18] IRIS-HEP, “Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP).” https://iris-hep.org/.
# Continuous Neural Algorithmic Planners
Yu He
University of Cambridge
<EMAIL_ADDRESS>
Petar Veličković
DeepMind
<EMAIL_ADDRESS>
Pietro Liò
University of Cambridge
<EMAIL_ADDRESS>
Andreea Deac
Mila, Université de Montréal
<EMAIL_ADDRESS>
###### Abstract
Neural algorithmic reasoning studies the problem of learning algorithms with
neural networks, especially with graph architectures. A recent proposal,
XLVIN, reaps the benefits of using a graph neural network that simulates the
value iteration algorithm in deep reinforcement learning agents. It allows
model-free planning without access to privileged information about the
environment, which is usually unavailable. However, XLVIN only supports
discrete action spaces, and hence is not trivially applicable to many tasks of
real-world interest. We expand XLVIN to continuous action spaces by
discretization, and evaluate several selective expansion policies to deal with
the large planning graphs. Our proposal, CNAP, demonstrates how neural
algorithmic reasoning can make a measurable impact in higher-dimensional
continuous control settings, such as MuJoCo, bringing gains in low-data
settings and outperforming model-free baselines.
## 1 Introduction
Graph Neural Networks (GNNs) [1, 2, 3] have recently attracted attention in
performing algorithmic reasoning tasks [4]. Due to the close algorithmic
alignment, GNNs were shown to bring better sample efficiency and
generalization ability [5, 6] when learning algorithms such as shortest-path
and spanning-tree. There have been a number of other successful applications,
covering a range of problems such as bipartite matching [7], min-cut problem
[8], and Travelling Salesman Problem [9].
We look at the application of using a GNN that simulates value iteration
algorithm [10] in Reinforcement Learning (RL) problems. Value iteration [11]
is a dynamic programming algorithm that guarantees to provide an optimal
solution, but it is inhibited by the requirement of tabulated inputs. Earlier
works [12, 13, 14, 15, 16] introduced value iteration as an inductive bias to
facilitate RL agents to perform implicit planning, but were found to suffer
from an algorithmic bottleneck [17]. Conversely, eXecuted Latent Value
Iteration Net (XLVIN) [17] was proposed to leverage a value-iteration-behaving
GNN [10] by adopting the neural algorithmic framework [4]. XLVIN is able to
learn under a low-data regime, tackling the algorithmic bottleneck suffered by
other implicit planners.
So far, XLVIN only applies to environments with small, discrete action spaces.
The difficulty of a continuous action space comes from its infinite pool of
candidate actions. Furthermore, XLVIN builds a planning graph over which the
pre-trained GNN can simulate value iteration. The construction of the planning
graph requires an enumeration of the action space – starting from the current
state and expanding for a number of hops equal to the planning horizon. The
graph size quickly explodes as the dimensionality of the action space
increases, preventing XLVIN from scaling to more complex problems.
Nevertheless, continuous control is of significant importance, as most
simulation or robotics control tasks [18] have continuous action spaces by
design. High complexity also naturally arises as the problem moves towards
more powerful real-world domains. To extend such an agent powered by neural
algorithmic reasoning to complex, continuous control problems, we propose
Continuous Neural Algorithmic Planner (CNAP). It generalizes XLVIN to
continuous action spaces by discretizing them through binning. Moreover, CNAP
handles the large planning graph by following a sampling policy that carefully
selects actions during the neighbor expansion stage.
Beyond extending the XLVIN model, our work also opens up the discussion on
handling large state spaces in neural algorithmic reasoning. The main
motivation for using GNNs to learn algorithms comes from the benefit of
breaking the input constraints of classical algorithms and handling raw input
data directly with neural networks. Therefore, the large state space problem
goes beyond the RL context as we move to apply GNNs with algorithmic reasoning
power to other tasks.
In this paper, we confirm the feasibility of CNAP on a continuous relaxation
of a classical low-dimensional control task, where we can still fully expand
all of the binned actions after discretization. Then, we apply CNAP to general
MuJoCo [19] environments with complex continuous dynamics, where expanding the
planning graph by taking all actions is impossible. By expanding the
application scope from simple discrete control to complex continuous control,
we show that such an intelligent agent with algorithmic reasoning power can be
applied to tasks with more real-world interests.
## 2 Background
### 2.1 Markov Decision Process (MDP)
A reinforcement learning problem can be formally described using the MDP
framework. At each time step $t\in\\{0,1,...,T\\}$, the agent performs an
action $a_{t}\in\mathcal{A}$ given the current state $s_{t}\in\mathcal{S}$.
This spawns a transition into a new state $s_{t+1}\in\mathcal{S}$ according to
the transition probability $p(s_{t+1}|s_{t},a_{t})$, and produces a reward
$r_{t}=r(s_{t},a_{t})$. A policy $\pi(a_{t}|s_{t})$ guides an agent by
specifying the probability of choosing an action $a_{t}$ given a state
$s_{t}$. The trajectory $\tau$ is the sequence of actions and states the
agents took $(s_{0},a_{0},...,s_{T},a_{T})$. We define the infinite horizon
discounted return as $R(\tau)=\sum_{t=0}^{\infty}\gamma^{t}r_{t}$, where
$\gamma\in[0,1]$ is the discount factor. The goal of an agent is to maximize
the overall return by finding the optimal policy
$\pi^{*}=\mathrm{argmax}_{\pi}\mathbb{E}_{\tau\sim\pi}[R(\tau)]$. We can
measure the desirability of a state $s$ using the state-value function
$V^{*}(s)=\mathbb{E}_{\tau\sim\pi^{*}}[R(\tau)|s_{t}=s]$.
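For a finite trajectory, the discounted return is the truncated version of the infinite sum above; a one-line sketch (illustrative only):

```python
def discounted_return(rewards, gamma):
    """R(tau) = sum_t gamma**t * r_t for a finite reward sequence."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Three unit rewards with gamma = 0.5: 1 + 0.5 + 0.25 = 1.75
R = discounted_return([1.0, 1.0, 1.0], 0.5)
```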
### 2.2 Value Iteration
Value iteration is a dynamic programming algorithm that computes the optimal
policy’s value function given a tabulated MDP that perfectly describes the
environment. It randomly initializes $V^{*}(s)$ and iteratively updates the
value function of each state $s$ using the Bellman optimality equation [11]:
$V^{*}_{i+1}(s)=\mathop{\max}_{a\in\mathcal{A}}\\{r(s,a)+\gamma\mathop{\sum}_{s^{\prime}\in\mathcal{S}}p(s^{\prime}|s,a)V^{*}_{i}(s^{\prime})\\}$
(1)
and we can extract the optimal policy using:
$\pi^{*}(s)=\mathop{\text{argmax}}_{a\in\mathcal{A}}\\{r(s,a)+\gamma\mathop{\sum}_{s^{\prime}\in\mathcal{S}}p(s^{\prime}|s,a)V^{*}(s^{\prime})\\}$
(2)
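The two equations above can be run directly on a small tabulated MDP; the self-contained sketch below (a toy example of ours, not the paper's executor) iterates the Bellman optimality update of Eq. (1):

```python
def value_iteration(states, actions, r, p, gamma=0.9, iters=100):
    """Iteratively apply the Bellman optimality update of Eq. (1)."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        # The comprehension reads the previous V before rebinding it (Jacobi-style update).
        V = {s: max(r(s, a) + gamma * sum(p(s2, s, a) * V[s2] for s2 in states)
                    for a in actions)
             for s in states}
    return V

# Deterministic 2-state MDP: "stay" earns 0; "go" earns 1 and flips the state.
states, actions = [0, 1], ["stay", "go"]
r = lambda s, a: 1.0 if a == "go" else 0.0
p = lambda s2, s, a: 1.0 if s2 == ((1 - s) if a == "go" else s) else 0.0
V = value_iteration(states, actions, r, p)
# Always taking "go" yields V(s) close to 1 / (1 - 0.9) = 10 in both states.
```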
### 2.3 Message-Passing GNN (MPNN)
Graph Neural Networks (GNNs) generalize traditional deep learning techniques
onto graph-structured data [20, 21]. A message-passing GNN [2] iteratively
updates its node feature $\vec{h}_{s}$ by aggregating messages from its
neighboring nodes. At each timestep $t$, a message can be computed between
each connected pair of nodes via a message function
$M(\vec{h}_{s}^{t},\vec{h}_{s^{\prime}}^{t},\vec{e}_{s^{\prime}\rightarrow
s})$, where $\vec{e}_{s^{\prime}\rightarrow s}$ is the edge feature. A node
receives messages from all its connected neighbors $\mathcal{N}(s)$ and
aggregates them via a permutation-invariant operator $\bigoplus$ that produces
the same output regardless of the spatial permutation of the inputs. The
aggregated message $\vec{m}_{s}^{t}$ of a node $s$ can be formulated as:
$\vec{m}_{s}^{t}=\bigoplus_{s^{\prime}\in\mathcal{N}(s)}M(\vec{h}_{s}^{t},\vec{h}_{s^{\prime}}^{t},\vec{e}_{s^{\prime}\rightarrow
s})$ (3)
The node feature $\vec{h}_{s}^{t}$ is then transformed via an update function
$U$:
$\vec{h}_{s}^{t+1}=U(\vec{h}_{s}^{t},\vec{m}_{s}^{t})$ (4)
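A minimal sketch of one such message-passing step with a max aggregator, using scalar node features and omitting edge features for brevity (all names are ours):

```python
def mpnn_step(h, edges, message, update):
    """One step of Eqs. (3)-(4) with a max aggregator over incoming messages."""
    new_h = {}
    for s in h:
        msgs = [message(h[s], h[sp]) for sp in h if (sp, s) in edges]
        new_h[s] = update(h[s], max(msgs)) if msgs else h[s]
    return new_h

h = {"a": 1.0, "b": 3.0, "c": 2.0}
edges = {("b", "a"), ("c", "a"), ("a", "b")}        # directed: (source, target)
out = mpnn_step(h, edges,
                message=lambda hs, hsp: hsp,        # forward the neighbour feature
                update=lambda hs, m: max(hs, m))    # keep the larger value
# "a" aggregates {3.0, 2.0} and becomes 3.0; "c" has no in-edges and is unchanged.
```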
### 2.4 Neural Algorithmic Reasoning
A dynamic programming (DP) algorithm breaks a problem down into smaller sub-
problems and recursively computes their optimal solutions. A DP algorithm has
the general form:
$\mathrm{Answer}[k+1][i]=\text{DP-Update}(\\{\mathrm{Answer}[k][j]\\},\,j=1...n)$ (5)
We can interpret the GNN computation as a DP algorithm [5] by aligning the
GNN’s message-passing step with the DP update step. Let $k$ be the current iteration, and $i$
be the node. A GNN node aggregates messages $\vec{m}_{i}^{k}$ from its
neighbors and updates its node representation to $\vec{h}_{i}^{k+1}$.
Similarly, a DP algorithm aggregates answers from sub-problems
Answer[$k$][$j$], then updates its own Answer[$k+1$][$i$]. The alignment can
thus be seen from mapping GNN’s node representation $\vec{h}_{i}^{k}$ to
Answer[$k$][$i$], and GNN’s aggregation function to DP-Update.
Previous work [5] proved that GNNs can simulate DP algorithms with better
sample efficiency and generalization due to their close alignment.
Furthermore, [6] showed that learning the individual steps of graph algorithms
using GNNs brings generalization benefits. Results from [6, 10] also showed
that an MPNN with a max aggregator had the best performance among a range of
GNN models.
## 3 Related Work
### 3.1 Continuous action space
A common technique for dealing with continuous control problems is to
discretize the action space, converting them into discrete control problems.
However, discretization leads to an explosion in action space. [22] proposed
to use a policy with factorized distribution across action dimensions, and
proved it effective on high-dimensional complex tasks with on-policy
optimization algorithms. Moreover, the explosion in action space also requires
sampling when constructing the planning graph. Sampled MuZero [23] extended
MuZero [24] with a sample-based policy based on parameter reuse for policy
iteration algorithms. Our work differs in the way that the sampling policy
should be aware of the algorithmic reasoning context. The actions sampled
would directly participate in the Bellman optimality equation (Eq.1), and
ideally should allow the pre-trained GNN to simulate value iteration optimally
in each iteration step.
### 3.2 Large-scale graphs
Sampling modules [25] have been introduced into GNN architectures to deal with
large-scale graphs, where stacking multiple layers causes a neighbor
explosion. The unrolling process used to construct a planning graph requires
node-level sampling. The earlier GraphSAGE [26] introduces a fixed-size node
expansion procedure into GCN [1]. This is followed by PinSage [27], which uses
a random-walk-based GCN to perform importance-based sampling. However, our
work looks at sampling for implicit planning, where the importance of each
node in sampling is more difficult to understand due to the lack of an exact
description of the environment dynamics. Furthermore, sampling in a multi-
dimensional action space also requires more careful thinking in the decision-
making process.
## 4 Architecture
Our architecture uses XLVIN as a starting point, which we introduce first.
This is followed by a discussion of the challenges that arise from extending
XLVIN to a continuous action space and the approaches we proposed to address
them.
### 4.1 XLVIN modules
Figure 1: XLVIN modules
Given the observation space $\boldsymbol{S}$ and the action space
$\mathcal{A}$, we let the dimension of state embeddings in the latent space be
$k$. The XLVIN architecture can be broken down into four modules:
Encoder ($z:\boldsymbol{S}\rightarrow\mathbb{R}^{k}$): A 3-layer MLP which
encodes the raw observation from the environment $s\in\boldsymbol{S}$, to a
state embedding $\vec{h}_{s}=z(s)$ in the latent space.
Transition ($T:\mathbb{R}^{k}\times\mathcal{A}\rightarrow\mathbb{R}^{k}$): A
3-layer MLP with layer norm taken before the last layer that takes two inputs:
the state embedding of an observation $z(s)\in\mathbb{R}^{k}$, and an action
$a\in\mathcal{A}$. It predicts the next state embedding
$z(s^{\prime})\in\mathbb{R}$, where $s^{\prime}$ is the next state
transitioned into when the agent performed an action $a$ under current state
$s$.
Executor ($X:\mathbb{R}^{k}\times\mathbb{R}^{|\mathcal{A}|\times
k}\rightarrow\mathbb{R}^{k}$): A message-passing GNN pre-trained to simulate
each individual step of the value iteration algorithm following the set-up in
[10]. Given the current state embedding $\vec{h}_{s}$, a graph is constructed
by enumerating all possible actions $a\in\mathcal{A}$ as edges to expand, and
then using the Transition module to predict the next state embeddings as
neighbors $\mathcal{N}(\vec{h}_{s})$. Finally, the Executor output is an
updated state embedding
$\vec{\mathcal{X}}_{s}=X(\vec{h}_{s},\mathcal{N}(\vec{h}_{s}))$.
Policy and Value
($P:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow[0,1]^{|\mathcal{A}|}$ and
$V:\mathbb{R}^{k}\times\mathbb{R}^{k}\rightarrow\mathbb{R})$: The Policy
module is a linear layer that takes the outputs from the Encoder and Executor,
i.e. the state embedding $\vec{h}_{s}$ and the updated state embedding
$\vec{\mathcal{X}}_{s}$, and produces a categorical distribution corresponding
to the estimated policy, $P(\vec{h}_{s},\vec{\mathcal{X}}_{s})$. The Value
module is also a linear layer that takes the same inputs and produces the
estimated state-value function, $V(\vec{h}_{s},\vec{\mathcal{X}}_{s})$.
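Schematically, one forward pass chains the four modules; the stub below replaces the MLPs and the GNN with trivial stand-ins purely to show the data flow (every function here is hypothetical, not the paper's implementation):

```python
k = 4  # latent dimension

def encoder(s):                   # z: S -> R^k
    return [float(s)] * k

def transition(h, a):             # T: R^k x A -> R^k
    return [x + a for x in h]

def executor(h, neighbours):      # X: stand-in for the value-iteration GNN
    # Coordinate-wise max over the node and its expanded neighbours.
    return [max(vals) for vals in zip(h, *neighbours)]

def policy(h, x, actions):        # P: uniform stub over the action set
    return [1.0 / len(actions)] * len(actions)

actions = [-1, 0, 1]
h = encoder(2)                                      # state embedding
neighbours = [transition(h, a) for a in actions]    # expand every action
x = executor(h, neighbours)                         # updated embedding
pi = policy(h, x, actions)                          # action distribution
```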
The training procedure follows the XLVIN paper [17], and Proximal Policy
Optimization (PPO) [28] is used to train the model, apart from the Executor.
We use the PPO implementation and hyperparameters by [29]. The Executor is
pre-trained as shown in [10] and directly plugged in.
### 4.2 Limitations of XLVIN
Discrete control: XLVIN agents can only choose an action from a discrete set,
such as pushing left or right, but not from a continuous range, such as
pushing with a magnitude in the range of [0, 1].
Small action space: XLVIN is limited to a small action space, while complex
control problems come with large ones. More importantly, the size of the
action space explodes as dimensionality increases. Take the example of a
robotic dog moving in 3 dimensions that operates its 6 joints simultaneously.
If we discretize the action space into 10 bins in each dimension, the action
space explodes to a size of $10^{6\times 3}=10^{18}$, which exceeds any
practical computational capacity.
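To make the growth concrete, a small helper computes the joint action-space size $N^{D}$; the example dimensions below are illustrative (one-dimensional control, a HalfCheetah-like 6-joint task, and the 18-dimensional robotic-dog example), not measurements from the paper.

```python
# Size of a discretized action space: N bins per dimension over D
# dimensions gives N**D joint actions.
def joint_action_space_size(n_bins: int, dims: int) -> int:
    return n_bins ** dims

assert joint_action_space_size(10, 1) == 10           # 1-D control
assert joint_action_space_size(11, 6) == 1_771_561    # 6 joints, 11 bins each
assert joint_action_space_size(10, 18) == 10 ** 18    # 6 joints x 3 dims
```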
### 4.3 Discretization of the continuous action space
Assume the continuous action space $\mathcal{A}$ has $D$ dimensions. We
discretize each dimension $\mathcal{A}_{i}$ into $N$ evenly spaced actions
$\{a_{i}^{1},a_{i}^{2},...,a_{i}^{N}\}$ via binning. However, discretizing a
multi-dimensional continuous action space leads to a combinatorial explosion
in action-space size. Two architectural bottlenecks in XLVIN require an
enumeration of all actions, limiting its ability to handle such a large action
space.
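A minimal sketch of the per-dimension binning step is shown below; the action bounds and bin count are placeholders (MuJoCo actions are typically bounded per dimension), and mapping a bin index back to the bin center is one reasonable convention, not necessarily the paper's.

```python
import numpy as np

def discretize(a, low, high, n_bins):
    """Map continuous actions in [low, high] to bin indices in {0..n_bins-1}."""
    idx = np.floor((a - low) / (high - low) * n_bins).astype(int)
    return np.clip(idx, 0, n_bins - 1)  # the upper bound maps into the last bin

def undiscretize(idx, low, high, n_bins):
    """Map a bin index back to the center of its bin."""
    return low + (idx + 0.5) * (high - low) / n_bins

n = 10
a = np.array([-1.0, -0.05, 0.3, 1.0])      # one value per action dimension
idx = discretize(a, -1.0, 1.0, n)          # per-dimension bin indices
back = undiscretize(idx, -1.0, 1.0, n)     # reconstructed (bin-center) actions
```

The round trip loses at most half a bin width per dimension, which is the information cost of discretization discussed below.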
The first bottleneck: The policy layer computes the probability of choosing
each action given the current state, resulting in a layer dimension of
$N^{D}$. We use a factorized joint policy (Section 4.4), which reduces this
dimension to $N\cdot D$.
The second bottleneck: A planning graph is constructed for the pre-trained GNN
to simulate value iteration behavior. Given the current state as a node, we
enumerate the action space for neighbor expansion, leading to $N^{D}$ edges
per node. We propose a neighbor sampling policy (Section 4.5) that samples a
much smaller number of actions $K\ll N^{D}$ during neighbor expansion.
Figure 2: (a) Factorized joint policy on an action space with dimension of
two. (b) Neighbor sampling methods when constructing the planning graph in
the Executor.
### 4.4 Factorized joint policy
A naive policy layer $\pi^{*}=p(\vec{a}|s)$ produces a categorical
distribution with $N^{D}$ logits. To overcome the first bottleneck, we follow
a factorized joint policy proposed in [22]:
$\pi^{*}(\vec{a}|s)=\prod_{i=1}^{D}\pi_{i}^{*}(a_{i}|s)$ (6)
As illustrated in Figure 2(a), the factorized joint policy
$P(\vec{h}_{s},\vec{\mathcal{X}}_{s})$ is a linear layer with an output
dimension of $N\cdot D$. Each policy $\pi_{i}^{*}(a_{i}|s)$ gives the
probability of choosing an action $a_{i}\in\mathcal{A}_{i}$ in the $i^{th}$
dimension, where $|\mathcal{A}_{i}|=N$. This reduces the growth of the policy
output with dimensionality from exponential ($N^{D}$) to linear ($N\cdot D$).
Note there is a trade-off in the choice of $N$: a larger number of action bins
retains more information from the continuous action space, but it also implies
larger graphs and hence higher computation costs. We provide an ablation study
of the impact of this choice in the evaluation.
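The factorization in Eq. 6 can be sketched as follows; the random logits stand in for the output of the linear Policy layer, and the greedy argmax is used only for illustration (training samples from each categorical instead).

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 11, 6  # bins per dimension, action dimensions

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# A factorized head outputs N*D logits, reshaped into D independent
# categorical distributions (one per action dimension) instead of a
# single N**D-way distribution.
logits = rng.normal(size=(D, N))            # stand-in for the linear layer
probs = softmax(logits, axis=-1)            # probs[i] = pi_i(. | s)

a = np.array([p.argmax() for p in probs])   # one bin index per dimension
# The joint log-probability factorizes as a sum over dimensions:
logp = np.log(probs[np.arange(D), a]).sum()
```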
### 4.5 Neighbor sampling methods
As illustrated in Figure 2(b), the second bottleneck occurs when constructing
a graph on which to execute the pre-trained GNN. Starting from the current
state node $\vec{h}_{s}$, we enumerate all possible actions
$\vec{a}_{i}\in\mathcal{A}$ to connect neighbors via
$\vec{h}(s)\xrightarrow[]{\vec{a}_{i}}\vec{h}(s^{\prime}_{i})$. As a result,
each node has degree $|\mathcal{A}|$, and the graph grows even faster as it
expands deeper. We propose a neighbor sampling method so that we only expand a
small subset of actions. The important question is which actions to select.
Value iteration is a dynamic programming algorithm whose update rule is the
Bellman optimality equation (Eq. 1). The max aggregator iterates through the
entire action space $a\in\mathcal{A}$ to guarantee the optimal solution at
each iteration, so the constructed graph should allow our pre-trained GNN to
predict optimally at each layer. It is thus critical that our sampling
includes the actions that produce a good approximation of the state-value
function.
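For reference, the tabular update the Executor is pre-trained to mimic can be run on a toy randomly generated MDP (sizes and rewards below are illustrative); note the max over the full action space that neighbor sampling must approximate.

```python
import numpy as np

# Tabular value iteration on a toy 3-state, 2-action MDP.
S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
R = rng.uniform(size=(S, A))                 # reward R[s, a]
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition P[s, a, s']

V = np.zeros(S)
for _ in range(200):
    Q = R + gamma * (P @ V)                  # Q[s, a] = R + gamma * E[V(s')]
    V_new = Q.max(axis=1)                    # Bellman optimality: max over actions
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new
```

Because the update is a contraction with factor gamma, the loop converges to the unique fixed point regardless of initialization.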
Below, we propose four possible methods to sample $K$ actions from
$\mathcal{A}$, where $K\ll|\mathcal{A}|$ is a fixed number, under the context
of value-iteration-based planning.
#### 4.5.1 Gaussian methods
The Gaussian distribution is a common baseline policy distribution for
continuous action spaces, and it is straightforward to interpret.
Furthermore, it discourages extreme actions while encouraging neutral ones
with some continuity, which suits many planning problems. We propose two
variants of the sampling policy based on the Gaussian distribution.
(a) Manual-Gaussian: A Gaussian distribution is used to randomly sample an
action value in each dimension $a_{i}\in\mathcal{A}_{i}$; these are stacked
into a final action vector $\vec{a}=[a_{0},...,a_{D-1}]^{T}\in\mathcal{A}$. We
repeat this $K$ times to obtain a subset of $K$ action vectors. We set the
mean $\mu=N/2$ and standard deviation $\sigma=N/4$, where $N$ is the number of
discrete action bins; these parameters spread a reasonable distribution over
$[0,N-1]$. Outliers and non-integers are rounded to the nearest whole number
within $[0,N-1]$.
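A minimal sketch of the Manual-Gaussian sampler with the stated parameters ($\mu=N/2$, $\sigma=N/4$, rounding and clipping into $[0,N-1]$); the bin count and dimensionality are illustrative.

```python
import numpy as np

def manual_gaussian_sample(n_bins, dims, k, rng):
    """Sample K discrete action vectors per the Manual-Gaussian scheme:
    mean N/2, std N/4, rounded and clipped into [0, N-1]."""
    raw = rng.normal(loc=n_bins / 2, scale=n_bins / 4, size=(k, dims))
    return np.clip(np.rint(raw), 0, n_bins - 1).astype(int)

rng = np.random.default_rng(0)
actions = manual_gaussian_sample(n_bins=11, dims=6, k=10, rng=rng)
```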
(b) Learned-Gaussian: The two parameters manually chosen in the previous
method constrain the median action in each dimension to be the most likely.
Here, instead, two fully-connected linear layers separately estimate the mean
$\mu$ and standard deviation $\sigma$. They take the state embedding
$\vec{h}_{s}$ from the Encoder and output parameter estimates for each
dimension. We use the reparameterization trick [30] to make the sampling
differentiable.
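The reparameterization trick can be sketched as follows; the mean and log-std values below stand in for the outputs of the two linear layers (predicting the log-std for positivity is a common convention, assumed here rather than stated in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization: a sample from N(mu, sigma^2) is written as
# mu + sigma * eps with eps ~ N(0, 1), so gradients can flow through
# mu and sigma while the noise stays parameter-free.
mu = np.array([5.0, 5.0, 5.0])        # per-dimension mean, from a linear head
log_sigma = np.array([0.5, 0.0, -0.5])
sigma = np.exp(log_sigma)             # exponentiate to keep sigma positive

eps = rng.standard_normal(mu.shape)   # noise independent of the parameters
a = mu + sigma * eps                  # differentiable w.r.t. mu and sigma
```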
#### 4.5.2 Parameter reuse
Gaussian methods still impose a fixed form on the sampling distribution, which
may not fit the task. Previous work [23] studied the action sampling problem
for policy evaluation and improvement, reasoning that since the actions
selected by the policy are expected to be more valuable, the policy itself can
be used directly for sampling.
(c) Reuse-Policy: We can reuse Policy layer
$P(\vec{h}_{s},\vec{\mathcal{X}}_{s})$ to sample the actions when we expand
the graph in Executor. This is equivalent to using the policy distribution
$\pi^{*}=p(\vec{a}|s)$ as the neighbor sampling distribution. However, the
second input $\vec{\mathcal{X}}_{s}$ to the Policy layer comes from the
Executor, which is not available when constructing the graph; it is filled in
by setting $\vec{\mathcal{X}}_{s}=\vec{0}$ as a placeholder.
#### 4.5.3 Learn to expand
Lastly, we can also use a separate layer to learn the neighbor sampling
distribution.
(d) Learned-Sampling: This uses a fully-connected linear layer that consumes
$\vec{h}_{s}$ and produces an output of dimension $N\cdot D$. It is expected
to learn the optimal neighbor sampling distribution in a factorized joint
manner, as in Figure 2(a). The outputs are logits for $D$ categorical
distributions, from which we use Gumbel-Softmax [31] to differentiably sample
an action in each dimension, together producing
$\vec{a}=[a_{1},...,a_{D}]^{T}$.
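A minimal NumPy sketch of Gumbel-Softmax sampling; the logits stand in for the hypothetical Learned-Sampling layer's output, and a PyTorch implementation would typically use `torch.nn.functional.gumbel_softmax` instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau, rng):
    """Differentiable sample from a categorical: perturb logits with
    Gumbel(0, 1) noise, then soften the argmax with temperature tau."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # rows sum to 1

N, D = 11, 6
logits = rng.normal(size=(D, N))   # stand-in for the sampling layer output
soft = gumbel_softmax(logits, tau=0.5, rng=rng)
a = soft.argmax(axis=-1)           # hard action per dimension (straight-through)
```

As tau decreases, the soft samples approach one-hot vectors, trading gradient smoothness for sampling fidelity.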
Table 1: Summary of the four neighbor sampling policies with their pros and cons.

Method | Pros | Cons
---|---|---
Manual-Gaussian | (+) Sample-efficient, as no training is required. | (-) A Gaussian distribution may not fit. (-) Assumes the median action is the most likely.
Learned-Gaussian | (+) More flexible choice of distribution range. | (-) A Gaussian distribution may not fit. (-) More parameters require more training.
Reuse-Policy | (+) Parameter reuse. (+) Policy distribution alignment. | (-) Misaligned input format due to the unavailability of $\vec{\mathcal{X}}_{s}$ in the Executor.
Learned-Sampling | (+) Dedicated distribution learning. | (-) More parameters require more training.
## 5 Results
### 5.1 Classic Control
To evaluate the performance of CNAP agents, we first ran the experiments on a
relatively simple MountainCarContinuous-v0 environment from OpenAI Gym Classic
Control suite [32], where the action space was one-dimensional. The training
of the agent used PPO under 20 rollouts with 5 training episodes each, so the
training consumed 100 episodes in total.
We compared two variants of CNAP agents: “CNAP-B” had its Executor pre-trained
on a type of binary graph that aimed to simulate the bi-directional control of
the car, and “CNAP-R” had its Executor pre-trained on random synthetic
Erdős-Rényi graphs. In Table 2, we compare both CNAP agents against a “PPO
Baseline” agent consisting of only the Encoder and Policy/Value modules. Both
CNAP agents outperformed the baseline in this environment, indicating the
success of extending XLVIN to continuous settings via binning.
Table 2: Mean rewards for MountainCarContinuous-v0 using PPO Baseline and two variants of CNAP agents. All three agents used 10 action bins and were trained on 100 episodes in total. Both CNAP agents executed one step of value iteration. The reward was averaged over 100 episodes and 10 seeds.

Model | MountainCarContinuous-v0
---|---
PPO Baseline | -4.96 $\pm$ 1.24
CNAP-B | 55.73 $\pm$ 45.10
CNAP-R | 63.41 $\pm$ 37.89
#### 5.1.1 Effect of GNN width and depth
We then studied the effects of the CNAP agents’ two hyperparameters. In Table
3, we varied the number of action bins into which the continuous action space
was discretized. The results showed that 10 action bins led to the best
performance, highlighting the trade-off in how much information discretization
discards. On the other hand, a larger number of action bins results in a
larger graph, which requires more samples to train and hinders sample
efficiency. We provide additional results on increasing the number of bins to
50 and 100 in Appendix A.1, which led to even worse performance.
In Table 4, we varied the number of GNN steps, corresponding to the number of
simulated value iteration steps. Performance again degraded with depth, with 1
GNN step giving the best results. One possible reason is that the number of
training samples may be insufficient for larger graph depths. A deeper graph
also requires repeatedly applying the Transition module, whose prediction
errors can accumulate, leading to less accurate state embeddings and hence
worse results.
More ablation results on the combined effect of varying both the width and
depth on CNAP-R can be found in Appendix A.2.
Table 3: Mean rewards for MountainCarContinuous-v0 using Baseline and CNAP agents by varying the number of action bins, i.e., the width of the graph. The results were averaged over 100 episodes and 10 seeds.

Model | Action Bins | MountainCar-Continuous
---|---|---
PPO | 5 | -2.16 $\pm$ 1.25
| 10 | -4.96 $\pm$ 1.24
| 15 | -3.95 $\pm$ 0.77
CNAP-B | 5 | 29.46 $\pm$ 57.57
| 10 | 55.73 $\pm$ 45.10
| 15 | 22.79 $\pm$ 41.24
CNAP-R | 5 | 20.32 $\pm$ 53.13
| 10 | 63.41 $\pm$ 37.89
| 15 | 26.21 $\pm$ 46.44
Table 4: Mean rewards for MountainCarContinuous-v0 using CNAP agents by varying the number of GNN steps, i.e., the depth of the graph. The results were averaged over 100 episodes and 10 seeds.

Model | GNN Steps | MountainCar-Continuous
---|---|---
CNAP-B | 1 | 55.73 $\pm$ 45.10
| 2 | 46.93 $\pm$ 44.13
| 3 | 40.58 $\pm$ 48.20
CNAP-R | 1 | 63.41 $\pm$ 37.89
| 2 | 34.49 $\pm$ 47.77
| 3 | 43.61 $\pm$ 46.16
### 5.2 MuJoCo
We then ran experiments on more complex environments from OpenAI Gym’s MuJoCo
suite [32, 19] to evaluate how CNAPs could handle the high increase in scale.
Unlike the Classic Control suite, the MuJoCo environments have higher
dimensions in both its observation and action spaces. We started by evaluating
CNAP agents in two environments with relatively lower action dimensions, and
then we moved on to two more environments with much higher dimensions. The
discretization of the continuous action space also implied a combinatorial
explosion in the action space, resulting in a large graph constructed for the
GNN. We used the proposed factorized joint policy from Section 4.4 and the
neighbor sampling methods from Section 4.5 to address the limitations.
#### 5.2.1 On low-dimensional environments
In Figure 3, we experimented with the four sampling methods discussed in
Section 4.5 on Swimmer-v2 (action space dimension of 2) and HalfCheetah-v2
(action space dimension of 6). We chose to take the number of action bins
$N=11$ for all the experiments following [22], where the best performance on
MuJoCo environments was obtained when $7\leq N\leq 15$. The number of
neighbors to expand was set to $K=10$, so that we could evaluate the four
neighbor expansion policies when sampling a very small subset of actions. In
all cases, CNAP outperformed the baseline in the final performances. Moreover,
Manual-Gaussian and Reuse-Policy were the most promising sampling strategies
as they also demonstrated faster learning, hence better sample efficiency.
This pointed to the benefits of parameter reuse and the synergistic
improvement between learning to act and learning to sample relevant neighbors,
as well as the power of a well-chosen manual distribution. We also note that
choosing a manual distribution can become non-trivial when the task becomes
more complex, especially if choosing the average values for each dimension is
not the most desirable. Our work acts as a proof-of-concept of sampling
strategies and leaves the choice of parameters for future studies.
Figure 3: Average rewards over time for CNAP (red) and PPO baseline (blue), in
Swimmer (action dimension=2) and Halfcheetah (action dimension=6), using
different sampling methods. In Swimmer, CNAP with sampling methods were
compared with the original version by expanding all actions (green). In
(a)(e), the actions were sampled using Gaussian distribution with mean=$N/2$
and std=$N/4$, where $N$ was the number of action bins used to discretize the
continuous action space. In (b)(f), two linear layers were used to learn the
mean and std, respectively. In (c)(g), the Policy layer was reused in sampling
actions to expand. In (d)(h), a separate linear layer was used to learn the
optimal neighbor sampling distribution. The mean rewards were averaged over
100 episodes, and the learning curve was aggregated from 5 seeds.
#### 5.2.2 On high-dimensional environments
We then further evaluated the scalability of CNAP agents in more complex
environments where the dimensionality of the action space was significantly
larger, while retaining a relatively low-data regime ($10^{6}$ actor steps).
In Figure 4, we compared all the previously proposed CNAP methods on two
environments with highly complex dynamics, both having an action space
dimension of 17. In the Humanoid task, all variants of CNAPs outperformed PPO,
acquiring knowledge significantly faster.
Particularly, we found that _nonparametric_ approaches to sampling the graph
in CNAP (e.g. manual Gaussian and policy reuse) acquired this knowledge
significantly faster than any other CNAP approach tested. This supplements our
previous results well, and further testifies to the improved learning
stability when the sampling process does not contain additional parameters to
optimise.
We also evaluated all of the considered methods against PPO on the
HumanoidStandup task, with all methods learning to sit up and no apparent
difference in the rate of acquisition. However, we provide some qualitative
evidence that the solution found by CNAP appears to be more robust in the way
this knowledge is acquired (see Appendix A.3).
Figure 4: Average rewards over time for CNAP (red) and PPO baseline (blue), in
Humanoid (action dimension=17) and HumanoidStandup (action dimension=17),
using the four sampling methods.
#### 5.2.3 Qualitative interpretation
We also captured video recordings of the interactions between the agents and
the environments to provide a qualitative interpretation of the results above.
We look at selected frames (Appendix A.3), taken at equal time intervals from
one episode after the last training iteration, for CNAP (Manual-Gaussian) and
the PPO Baseline, respectively.
In the HalfCheetah task, the agent trained by the PPO Baseline fell over
quickly and never managed to right itself, whereas CNAP’s agent balanced well
and kept running forward. This observation supports the higher average
episodic rewards gained by CNAP agents over the PPO Baseline in Figure 3.
Similarly, in the Humanoid task, the PPO Baseline’s humanoid stayed stationary
and lost balance quickly, while CNAP’s humanoid could walk forward in small
steps, in line with the results in Figure 4 where the gain from CNAP was
significant. In addition, we note that although the CNAP agent did not
quantitatively differ from the PPO Baseline in the HumanoidStandup task
(Figure 4), in the trajectories we observed it successfully remained in a
sitting position, while the PPO Baseline fell quickly.
## 6 Conclusion
We present CNAP, a method that generalizes implicit planners to continuous
action spaces for the first time. In particular, we study implicit planners
based on neural algorithmic reasoners and the unstudied implications of not
having precise alignment between the learned graph algorithm and the setup
where the executor is applied. To deal with the challenges in building the
planning tree, as a result of the continuous, high-dimensional nature of the
action space, we combine previous advancements in XLVIN with binning, as well
as parametric and non-parametric neighbor sampling strategies. We evaluate the
agent against its model-free variant, observing its efficiency in low-data
settings and consistently better performance than the baseline. Moreover, this
paves the way for extending other implicit planners to continuous action
spaces and studying neural algorithmic reasoning beyond strict applications of
graph algorithms.
## References
* Kipf and Welling [2017] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In _ICLR_ , 2017.
* Gilmer et al. [2017] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In _Proceedings of the 34th International Conference on Machine Learning_ , volume 70, pages 1263–1272. PMLR, 2017. URL http://proceedings.mlr.press/v70/gilmer17a.html.
* Veličković et al. [2018] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph Attention Networks. In _ICLR_ , 2018.
* Veličković and Blundell [2021] Petar Veličković and Charles Blundell. Neural algorithmic reasoning. _arXiv preprint arXiv:2105.02761_ , 2021.
* Xu et al. [2020] Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? In _8th International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=rJxbJeHFPS.
* Velickovic et al. [2020] Petar Velickovic, Rex Ying, Matilde Padovano, Raia Hadsell, and Charles Blundell. Neural execution of graph algorithms. In _8th International Conference on Learning Representations_, 2020. URL https://openreview.net/forum?id=SkgKO0EtvS.
* Georgiev and Lió [2020] Dobrik Georgiev and Pietro Lió. Neural bipartite matching. _CoRR_ , abs/2005.11304, 2020. URL https://arxiv.org/abs/2005.11304.
* Awasthi et al. [2021] Pranjal Awasthi, Abhimanyu Das, and Sreenivas Gollapudi. Beyond {gnn}s: A sample efficient architecture for graph problems, 2021. URL https://openreview.net/forum?id=Px7xIKHjmMS.
* Joshi et al. [2020] Chaitanya K. Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning TSP requires rethinking generalization. _CoRR_ , abs/2006.07054, 2020. URL https://arxiv.org/abs/2006.07054.
* Deac et al. [2020] Andreea Deac, Pierre-Luc Bacon, and Jian Tang. Graph neural induction of value iteration. _arXiv preprint arXiv:2009.12604_ , 2020.
* Bellman [1957] Richard Bellman. _Dynamic Programming_. Dover Publications, 1957. ISBN 9780486428093.
* Tamar et al. [2016] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration networks. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, _NIPS_ , pages 2146–2154, 2016.
* Oh et al. [2017] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In _NIPS_ , pages 6118–6128, 2017.
* Niu et al. [2018] Sufeng Niu, Siheng Chen, Hanyu Guo, Colin Targonski, Melissa C. Smith, and Jelena Kovacevic. Generalized value iteration networks: Life beyond lattices. In _AAAI_ , pages 6246–6253. AAAI Press, 2018.
* Farquhar et al. [2018] Gregory Farquhar, Tim Rocktäschel, Maximilian Igl, and Shimon Whiteson. Treeqn and atreec: Differentiable tree-structured models for deep reinforcement learning. In _ICLR_ , 2018.
* Lee et al. [2018] Lisa Lee, Emilio Parisotto, Devendra Singh Chaplot, Eric Xing, and Ruslan Salakhutdinov. Gated path planning networks. In _International Conference on Machine Learning_ , pages 2947–2955. PMLR, 2018.
* Deac et al. [2021] Andreea Deac, Petar Velickovic, Ognjen Milinkovic, Pierre-Luc Bacon, Jian Tang, and Mladen Nikolic. Neural algorithmic reasoners are implicit planners. In _Advances in Neural Information Processing Systems 34_ , pages 15529–15542, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html.
* Levine et al. [2016] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. _J. Mach. Learn. Res._ , 17:39:1–39:40, 2016. URL http://jmlr.org/papers/v17/15-522.html.
* Todorov et al. [2012] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , pages 5026–5033. IEEE, 2012. doi: 10.1109/IROS.2012.6386109. URL https://doi.org/10.1109/IROS.2012.6386109.
* Scarselli et al. [2009] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. _IEEE Transactions on Neural Networks_ , 20(1):61–80, 2009. doi: 10.1109/TNN.2008.2005605.
* Bronstein et al. [2021] Michael M. Bronstein, Joan Bruna, Taco Cohen, and Petar Velickovic. Geometric deep learning: Grids, groups, graphs, geodesics, and gauges. _CoRR_ , abs/2104.13478, 2021. URL https://arxiv.org/abs/2104.13478.
* Tang and Agrawal [2020] Yunhao Tang and Shipra Agrawal. Discretizing continuous action space for on-policy optimization. In _Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence_ , pages 5981–5988. AAAI Press, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6059.
* Hubert et al. [2021] Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Mohammadamin Barekatain, Simon Schmitt, and David Silver. Learning and planning in complex action spaces. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, pages 4476–4486. PMLR, 2021. URL http://proceedings.mlr.press/v139/hubert21a.html.
* Schrittwieser et al. [2020] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. _Nature_ , 588(7839):604–609, 2020.
* Zhou et al. [2018] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. _CoRR_ , abs/1812.08434, 2018. URL http://arxiv.org/abs/1812.08434.
* Hamilton et al. [2017] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. _CoRR_ , abs/1706.02216, 2017. URL http://arxiv.org/abs/1706.02216.
* Ying et al. [2018] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. _CoRR_ , abs/1806.01973, 2018. URL http://arxiv.org/abs/1806.01973.
* Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _CoRR_ , abs/1707.06347, 2017. URL http://arxiv.org/abs/1707.06347.
* Kostrikov [2018] Ilya Kostrikov. Pytorch implementations of reinforcement learning algorithms. https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail, 2018.
* Kingma and Welling [2013] Diederik P Kingma and Max Welling. Auto-encoding variational bayes, 2013. URL https://arxiv.org/abs/1312.6114.
* Jang et al. [2016] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax, 2016. URL https://arxiv.org/abs/1611.01144.
* Brockman et al. [2016] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. _arXiv preprint arXiv:1606.01540_ , 2016.
## Appendix A Appendix
### A.1 Even larger number of action bins
In Table 5, we increase the number of action bins to 50 and 100, where a
further degradation is observed. More action bins result in a larger graph
that requires more training samples, which compromises sample efficiency.
Table 5: Mean rewards for MountainCarContinuous-v0 using CNAP-R with 1 GNN step, varying the number of action bins (width of graph). The results were averaged over 100 episodes and 10 seeds.

Number of action bins | MountainCar-Continuous
---|---
10 | 63.41 $\pm$ 37.89
50 | -8.88 $\pm$ 1.13
100 | -13.50 $\pm$ 1.60
### A.2 Combined effect of varying width and depth of GNN
We show the combined effects of varying the number of GNN steps and action
bins of the graph in Table 6. Within each row, an appropriate number of action
bins, such as 10, retains sufficient information from discretization. Within
each column, a smaller number of GNN steps, 1, is generally preferable.
Table 6: Mean rewards for MountainCarContinuous-v0 using CNAP-R by varying the number of GNN steps (depth of graph) and the number of action bins (width of graph). The results were averaged over 100 episodes and 10 seeds.

| Number of action bins
---|---
GNN Steps | 5 | 10 | 15
1 | 20.32$\pm$53.13 | 63.41$\pm$37.89 | 26.21$\pm$46.44
2 | 25.33$\pm$47.08 | 34.49$\pm$47.77 | 17.15$\pm$46.26
3 | 19.23$\pm$54.19 | 43.61$\pm$46.16 | 18.99$\pm$44.69
### A.3 Selected frames for MuJoCo tasks
Figure 5: Selected frames of two agents in Swimmer (top row: PPO, bottom row: CNAP).
As seen in Figure 5, CNAP could fold itself slightly faster than PPO Baseline
in this episode and swam more quickly.
Figure 6: Selected frames of two agents in HalfCheetah (top row: PPO, bottom row: CNAP).
From Figure 6’s HalfCheetah task, we can see that the agent trained by the PPO
Baseline fell over quickly and never managed to right itself, whereas CNAP’s
agent balanced well and kept running forward. This observation supports the
higher average episodic rewards gained by CNAP agents over the PPO Baseline in
Figure 3.
Figure 7: Selected frames of two agents in Humanoid (top row: PPO, bottom row: CNAP).
Similarly, in Figure 7’s Humanoid task, PPO Baseline’s humanoid stayed
stationary and lost balance quickly, while CNAP’s humanoid could walk forward
in small steps. This observation aligned with the results in Figure 4 where
the gain from CNAP was significant.
Figure 8: Selected frames of two agents in HumanoidStandup (top row: PPO, bottom row: CNAP).
Although the quantitative performance of the PPO Baseline and CNAP was similar
in the HumanoidStandup task, Figure 8 reveals a different picture. Neither
agent managed to stand up, explaining why the episodic rewards were
numerically similar. However, the PPO Baseline agent lost balance and fell
back to the ground, while the CNAP agent remained sitting and kept trying to
get up. The CNAP agent therefore performed qualitatively better in this
example.
# ICS-CTM2: Industrial Control System Cybersecurity Testbed Maturity Model
Souradeep Bhattacharya, Burhan Hyder, Manimaran Govindarasu Department of
Electrical and Computer Engineering
Iowa State University, Ames, IA 50011
Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Industrial Control System (ICS) testbeds serve as a platform for evaluating
and validating control system performances, cybersecurity tools and
technologies. In order to build or enhance an ICS testbed, it is vital to have
a deeper understanding of its design specifications and characteristic
attributes. Satisfying this prerequisite involves examination and assessment
of these attributes for existing testbeds. To further increase confidence in a
testbed’s functionality, it is important to perform a comparative analysis of
its specifications with other ICS testbeds. However, at present, there is no
standardized methodology available to provide a comparative assessment of
different testbeds. In this paper, we propose a methodology for analyzing ICS
testbeds, inspired by the Cybersecurity Capability Maturity Model (C2M2). In
particular, we define an ICS Cybersecurity Testbed Maturity Model, its
domains, and the associated maturity indicator levels. To demonstrate the
benefit of the model, we have conducted a case study analysis for several ICS
testbeds, representing different industrial sectors. Our analysis provides
deeper insights into the relative strengths and limitations of these testbeds,
together with scope for future enhancements, with respect to the domains
defined by the model.
###### Index Terms:
Industrial Control System (ICS), Cybersecurity, Testbed maturity model
## I Introduction
In recent years, with the emergence of Industry 4.0 criterion, there has been
a considerable increase in digitization of legacy Industrial Control Systems
(ICS). This accelerated process has resulted in growing convergence between
information technologies (IT) and operational technologies (OT), thereby
increasing the inter-connectivity of different systems. Owing to advancements
in communication, information, and operational technologies, new approaches
across the industrial chain have arisen, enabling tremendous growth for ICS in
areas such as the industrial internet of things, big data analytics, cloud
computing, robotics, and human-machine interaction [1]. However, the paradigm
shift towards digitization and inter-connectivity exposes ICSs to new
vulnerabilities and challenges, which in turn increases the attack surface for
adversaries to exploit. Therefore, academia, industry, and government entities
are
increasingly proposing development of ICS testbeds for evaluating tools and
technologies designed to assess system performance metrics and enhance
cybersecurity capabilities.
ICSs interconnect, monitor, and control processes in a variety of industries,
such as electric power generation and distribution, transportation, water,
gas, oil, chemical production, and manufacturing. An ICS consists of two major
components: the OT network and the IT network. The OT network comprises the
hardware and software used to monitor and manage industrial equipment,
processes, assets, and events, e.g., Programmable Logic Controllers (PLC),
Supervisory Control and Data Acquisition (SCADA), Distributed Control Systems
(DCS), sensors, and actuators. The IT network comprises workstations, servers,
databases, routers, and network switches, which are used to control and
manipulate the flow of information.
This paper is structured as follows. In Section-I, we provide a brief
introduction to ICS, highlighting the differences between IT and OT networks.
In Section-II, we discuss why it is crucial to develop testbeds for ICS and
what requirements should be considered; we also refer to some related works in
this field and discuss the need for a systematic methodology for assessing
different ICS testbeds. Section-III describes the design and implementation
methodology of the proposed ICS Cybersecurity Testbed Maturity Model
(ICS-CTM2) framework. Finally, Section-IV provides a case study of ICS testbed
analysis using the proposed ICS-CTM2 framework and presents a visualization of
the results.
## II Overview of ICS Testbeds
A testbed can be defined as a testing environment or a platform that simulates
real-world activities under supervision, in order to conduct experimental
evaluations. When researchers need to work with real-world ICS environments
for examining new technologies, testbeds provide a validated ecosystem
representing the real-world process.
### II-A Why Do We Need A Testbed For ICS
Testing IT security systems is a mature, well-defined field with a plethora of
best practices, procedures and information. However, testing industrial
control systems and OT networks is a comparatively newer frontier [2].
Today's ICS have evolved through the amalgamation of IT capabilities into
legacy physical systems, often replacing or augmenting existing physical
control operations. However, this evolution also brings various challenges
that need to be mitigated, cybersecurity being critical among them. Annual
reports published by Kaspersky show a considerable year-over-year increase in
the number of vulnerabilities detected in ICS (19 in 2010, 189 in 2015, 415 in
2018, and 509 in 2019) [3]. Therefore, developing custom testing environments
that imitate real-world systems is a crucial component of ICS research. In
addition to vulnerability research, cyber-physical ICS testbeds support
several applications, such as impact analysis
(capturing risk posed by security events); mitigation evaluation (to increase
robustness of cyber infrastructure); cyber-physical metrics (functional tests
for system reliability and performance); data and model development (models
and datasets to facilitate more accurate analysis); security validation
(assessment of cybersecurity compliance requirements); cyber forensics
(analysis of cyber attacks); operator training and education [4].
### II-B Requirements For Developing An ICS Testbed
Building an ICS testbed is a costly, resource-intensive, and complex task
involving collaboration across multiple disciplines. As it is essential to
perform experimental activities and generate data for different industrial
scenarios, it is imperative to develop custom testbeds that accurately
replicate the respective industrial practices.
#### II-B1 Testbed Classification
Depending on the process, protocol, and infrastructure elements involved, a
testbed can be classified into one of four categories: Physical, Simulation,
Virtual, or Hybrid [5]. In a physical testbed, real hardware and software are
deployed in both the cyber and physical layers. Although a physical testbed
aims to be as close to the real system as possible, providing the highest
fidelity, it is usually expensive and has a long development cycle. Simulation
testbeds are based on software simulations, whereas virtual testbeds involve
both simulation and emulation of hardware components. Simulation and virtual
testbeds provide a low-cost alternative to physical testbeds and are more
flexible to upgrades; however, they may lack precision and often do not
support real-time operation. The most commonly used approach is the hybrid
testbed, designed through a combination of physical devices, simulation, and
virtualized components. Hybrid testbeds provide lower fidelity than physical
testbeds but allow for lower development cost and higher scalability.
#### II-B2 Testbed Properties
Any practical testbed is required to satisfy certain clearly defined
objectives. These objectives constitute the properties of an ICS testbed and
indicate its reliability [6]. Although it may be challenging to meet every
objective, it is vital for researchers to establish effective and optimized
trade-off criteria during the design phase. All properties pertaining to an
ICS testbed can be grouped under three major inter-related properties as
follows:
Fidelity: Fidelity of a testbed can be defined as how closely and accurately
the testbed replicates a real-world ICS [6], [7]. The properties which
directly impact fidelity are: Measurability (allowing assessment of
experiments), Measurement Accuracy (degree of correctness of experimental
results), Repeatability (ability to replicate experiments), Reproducibility
(ability to produce consistent results), Usability (ability to be utilized for
defined purposes), Complexity (transparency of architecture), and Safety (safe
operation of testbed).
Scale: Scalability of a testbed can be defined as its ability to be
expanded in functionality, architecture, and infrastructure [4], [6]. The
properties which directly impact scale are: Cyber-Physical Integration,
Reconfigurability (ability to accommodate new design or components),
Extensibility (ability to broaden functional scope), Adaptability (ability to
support new or alternative test cases), Interoperability (ability to support
any combination of simulation or hybrid approaches), Federation (inter-
connectivity of multiple testbeds), and Critical Infrastructure.
Cost: Cost of a testbed depends on the approximate implementation cost of the
process, hardware, software, and licenses [4], [7]. The properties which
directly impact cost are: Diversity (ability to utilize products and services
from multiple vendors), Openness (ability to support open-source products and
remote access), Training & Development (for education of operators), and
Classification (Physical, Simulation, Virtual, or Hybrid).
#### II-B3 Compliance of Industrial Standards
NIST SP 800-82 Rev.2 [8] provides a guideline for ICS Security. It specifies
that the basic architecture for an ICS environment must include four core
components: Physical Process, Field Devices, Communication Architecture, and
Control Center. Therefore, adherence to standardized design attributes
contributes significantly to the reliability and trustworthiness of ICS
testbeds.
### II-C Related Work
In this paper, we directed our efforts towards an extensive review of the
current state of the field of Industrial Control System (ICS) testbeds. Among
the existing literature, the survey in [5] explores the primary objectives and
application possibilities of 30 different ICS testbeds.
Another survey [7] provides a comprehensive classification of ICS testbeds
into three groups (physical, virtual, or hybrid), along with an analysis of
ICS datasets available for developing new IDS security mechanisms. [9]
provides an outline for developing a SCADA security testbed. Some existing
literature (e.g., [10], [11]) provides detailed descriptions of testbed
architecture and design, implementation, and the experiments conducted. For
instance, [10] discusses the PowerCyber testbed at Iowa State University,
which models a smart electric grid, and [11] discusses the SMART testbed at
University of Michigan, which models a warehouse production line.
Figure 1: The Proposed ICS Cybersecurity Testbed Maturity Model (ICS-CTM2)
Framework
### II-D Assessment of ICS Testbeds
Although survey articles provide a compilation of different testbeds, they
lack a quantitative assessment methodology, which makes it difficult to
recognize relevant variations among the listed testbeds. Literature pertaining
to the design and implementation of ICS testbeds focuses only on individual
testbeds, without any comparative analysis against other testbeds. To the best
of our knowledge, no standardized methodology is available at present for the
comparative assessment of different testbeds. Therefore, a systematic
methodology is needed that can provide a focal guide for researchers
developing future ICS testbeds or enhancing current ones.
## III Proposed ICS Cybersecurity Testbed Maturity Model
The central objective of the ICS Cybersecurity Testbed Maturity Model (ICS-
CTM2) framework is to provide a standardized and systematic method for
comprehensive and comparative analysis of ICS testbed design specifications.
The ICS-CTM2 framework is inspired by the Energy Subsector Cybersecurity
Capability Maturity Model (ES-C2M2) developed by the US energy sector and the
Department of Energy (DOE) [12].
Figure 1 shows the conceptual diagram for the proposed ICS-CTM2 framework.
Table I provides a list of ICS testbeds that we have considered for the case
study analysis of ICS-CTM2 and is further described in detail under Section
IV.
ES-C2M2: The ES-C2M2 [12] is a tool which allows electric utilities and grid
operators to assess, evaluate, and improve their cybersecurity capabilities,
and also to prioritize their actions and investments for enhancing
cybersecurity. The ES-C2M2 was developed in 2012 as part of a White House
initiative led by the Department of Energy (DOE) in partnership with the
Department of Homeland Security (DHS), involving collaboration with industry
members, other Federal agencies and stakeholders. ES-C2M2 provides a toolkit
(available on request from DOE) that can be utilized by any organization to
measure and improve its cybersecurity program. Owing to its self-evaluation
methodology and industry-focused application, which make ES-C2M2 an easily
scalable tool, we adapted ES-C2M2 in this paper to develop our ICS-CTM2
framework.
### III-A Methodology
A maturity model can be defined as an organized and structured method to
convey a path of knowledge, experience, and learning. Such assessment models
enable researchers and organizations to evaluate and improve their ICS
testbeds by providing standardized benchmarks. Inspired by ES-C2M2, the ICS-
CTM2 model arises from a combination of existing industrial standards,
frameworks, and procedures adapted for developing an ICS testbed. The ICS-CTM2
architecture comprises the following three sections, as illustrated in
Figure 1:
#### III-A1 Domains
ICS-CTM2 domains represent a structured set of parameters that capture
essential information about an ICS testbed. The ICS-CTM2 framework consists
of five domains (Architecture, Fidelity, Scale, Cost, and Application) on the
basis of which each testbed is evaluated and its capabilities determined.
For each domain, the model provides a description that summarizes the
objective of the domain along with the evaluation criteria considered within
its context. The defined objective is then assessed by determining the extent
of implementation of each evaluation criterion.
Figure 2 describes the evaluation criteria for each domain in the ICS-CTM2
framework. Based on this assessment, a maturity level is assigned to each
domain of a testbed. The five domains of the ICS-CTM2 model are:
TABLE I: List of selected testbeds for ICS-CTM2 case study analysis
Sl. No. | Institute | Industry
---|---|---
1 | Iowa State University (ISU) [10] | Smart Grid
2 | SUTD, Singapore [13] | Water Treatment
3 | University of Michigan (U-M) [11] | Manufacturing
4 | ORNL [14] | HVAC/Cooling
5 | Univ. of Alabama, Huntsville (UAH) [15] | Gas Pipeline
6 | Univ. of Tennessee (U.T.) [16] | Nuclear Plant
7 | Ohio State University (OSU) [17] | Automobile
8 | NIST [18] | Multi-scenario
Figure 2: Case Study: Maturity Indicator Levels of each ICS-CTM2 domain for
selected ICS testbeds.
Figure 3: Case Study: Radar Analysis for ICS Testbeds (1)-(4).
Figure 4: Case Study: Radar Analysis for ICS Testbeds (5)-(8).
Architecture: The Purdue Enterprise Reference Architecture (PERA) [19] is the
most widely accepted and utilized ICS network architecture model in the
industry. Since ICS comprise different equipment and protocols across
different industries and regions, PERA serves as a common reference model for
evaluating this domain of the ICS-CTM2 framework. PERA was developed by Theodore
J. Williams in 1993, as a collaborative effort between Purdue University and
members of the industry. This model has been adopted by several industry
security standards such as ISA-99 (ISA/IEC 62443) [20] and is used as the
foundation for ICS IT/OT network design. PERA is segmented into five levels as
described below.
Level 0 - Field Level: Includes the physical process to be implemented.
Level 1 - Control Level: Includes field controllers and devices (PLC, RTU)
implemented for controlling the physical process.
Level 2 - Supervisory Level: Includes the supervisory level controllers and
devices (SCADA, HMI) implemented for controlling and monitoring of the field
level controllers.
Level 3 - Operations/Monitoring Level: Includes the components implemented for
information sharing between the IT and OT layer (such as Workstation, HMI,
DMZ).
Level 4 - Management/Enterprise Level: Includes implementation of the
management and planning resources at the enterprise level, such as resource
management or risk assessment.
Fidelity, Scale, and Cost: These three inter-related domains represent the
various properties of an ICS testbed described earlier in Section II-B.
Evaluation of the fidelity domain for ICS-CTM2 framework is based on the
testbed construction methodology adopted for implementation of three key
components - Process, Protocol, and Infrastructure (PPI). Implementation of
multiple process control scenarios and cyber-physical integration influence
the scalability of a testbed. Cost of design is determined by factors such as
physical components used, open-sourced or licensed products, training and
development expenses.
Application: We evaluate ICS testbeds based on the practical applications and
use cases implemented [4]. The application scenarios are divided into four
categories in order of increasing criticality: (1) Functionality Testing,
(2) Education, (3) Cybersecurity Analysis and Research, and (4) Development of
Standards.
#### III-A2 Maturity Indicator Level (MIL)
The ICS-CTM2 framework defines four Maturity Indicator Levels, MIL0 through
MIL3; a fifth maturity level (MIL4) is reserved for future use. The MILs help
to identify the degree of implementation in a domain. MILs are assigned
independently for each domain, and to earn a MIL in a given domain, a testbed
must satisfy all the evaluation criteria specified at that level. For
instance, if a testbed satisfies all criteria in MIL1 and MIL2, but not in
MIL3, then the testbed is assigned a level of MIL2 in that domain.
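As an illustrative sketch (not part of the framework definition), the level-assignment rule can be expressed as a short function: a domain's MIL is the highest consecutive level at which all evaluation criteria are satisfied.

```python
def assign_mil(satisfied_by_level):
    """Return the MIL earned in one domain.

    satisfied_by_level: list of booleans, index 0 = MIL1, index 1 = MIL2, ...
    Each entry is True only if ALL evaluation criteria at that level are met.
    A testbed earns a level only if every lower level is also fully met.
    """
    mil = 0
    for level_ok in satisfied_by_level:
        if not level_ok:
            break  # a gap at this level caps the rating below it
        mil += 1
    return mil

# Example from the text: all MIL1 and MIL2 criteria met, MIL3 not met -> MIL2
print(assign_mil([True, True, False]))  # 2
```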
Figure 5: Evaluation criteria considered within the ICS-CTM2 framework
domains (case study performed for the PowerCyber Testbed [10]).
#### III-A3 Diagnosis and Assessment
For a testbed, each domain is assessed according to its evaluation criteria
and a MIL is assigned. First, we analyze a testbed with respect to each of the
five domains. Analyzing a domain involves assessing each evaluation criterion
within that domain. Once all criteria are evaluated, a MIL rating is assigned
to that domain. Similarly, all remaining domains are assessed. Finally, the
ICS-CTM2 diagnosis results are visualized through Radar Analysis and Ring
Analysis.
Radar Analysis: The radar analysis visualizes the MIL ratings obtained by a
testbed in each of the five domains. Radar analysis can be performed for a
single as well as multiple ICS testbeds, and the results can be aggregated and
displayed together for better comparison. All domains are arranged in a radial
chart and the respective maturity ratings are recorded after implementation of
ICS-CTM2 framework. Figure 3 and Figure 4 demonstrate the Radar Analysis.
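As a sketch of the underlying layout, the per-domain MIL ratings of a testbed can be mapped to radar-chart vertices by spacing the five domains evenly around a circle; the ratings used below are hypothetical, not taken from the case study.

```python
import math

DOMAINS = ["Architecture", "Fidelity", "Scale", "Cost", "Application"]

def radar_vertices(mil_ratings):
    """Map one testbed's per-domain MIL ratings (0-3) to (x, y) points
    on a radar chart, with domain axes spaced evenly around the circle."""
    n = len(mil_ratings)
    points = []
    for i, r in enumerate(mil_ratings):
        angle = 2 * math.pi * i / n  # axis direction for domain i
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# Hypothetical ratings for one testbed; the first axis lies along +x.
pts = radar_vertices([3, 2, 2, 1, 3])
print(pts[0])  # (3.0, 0.0)
```

Several testbeds' polygons can be overlaid on the same axes, which is what makes the radar view convenient for side-by-side comparison.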
Ring Analysis: The ring analysis provides a more in-depth assessment by
illustrating the degree of implementation of each evaluation criterion across
all five domains of the ICS-CTM2 framework. Since ring analysis is a finer-
grained evaluation, it must be performed for each testbed individually;
results from different testbeds cannot be combined in a ring analysis.
Figure 6 demonstrates the Ring Analysis.
Figure 6: Case Study: Ring Analysis of evaluation criteria for all five
domains (case study performed for the PowerCyber Testbed [10]).
## IV Case Study Analysis
This section describes a case study analyzing the different testbeds listed
in Table I using the ICS-CTM2 framework. In our experiment, we considered one
ICS testbed from each of 8 different industrial sectors to demonstrate their
differences. However, the ICS-CTM2 framework is not limited to this
comparison and can be applied to any combination of ICS testbeds as deemed
necessary.
Figure 2 provides the MIL ratings for the ICS-CTM2 domains after analyzing the
selected testbeds. Figure 3 and Figure 4 present the Radar Analysis comparing
the performance of the testbeds in each domain, using the results shown in
Figure 2. The results for the selected testbeds are divided into two plots for
ease of illustration. As the figures show, researchers can easily identify the
maturity of a testbed in each domain from the Radar Analysis.
Figure 6 demonstrates the Ring Analysis for the ICS-CTM2 framework. We
performed this evaluation only for the PowerCyber Testbed [10] at Iowa State
University. The evaluation criteria for each domain are specified in Figure 5.
The number in the inner ring indicates the total number of evaluation criteria
considered at that maturity level. The number in the outer ring summarizes how
many of those criteria are satisfied (fully, partially, or not implemented).
For example, in the Ring Analysis of Figure 6, the Architecture domain has 5
evaluation criteria at MIL1; 4 criteria are added at the next level, making
the total 9 for MIL2, and 12 more are added at the next level, making the
total 21 for MIL3. The testbed satisfies all criteria for MIL1 and MIL2, but
not for MIL3. Therefore, the testbed achieves an overall rating of MIL2 in
this domain.
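The inner-ring totals in this example follow from a simple cumulative sum over the criteria added at each level (only the Architecture counts from the text are used here; the same rule applies to the other domains):

```python
from itertools import accumulate

# Criteria ADDED at each level for the Architecture domain (from the text).
added_per_level = [5, 4, 12]          # MIL1, MIL2, MIL3

# Inner-ring numbers: cumulative totals of criteria considered at each level.
inner_ring = list(accumulate(added_per_level))
print(inner_ring)  # [5, 9, 21]
```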
## V Conclusion and Future Work
The need for a standardized method to perform comparative assessment of ICS
Testbeds is quite compelling. The design goal of the ICS-CTM2 framework,
described in this paper, is to enable researchers to perform a self-evaluation
of ICS Testbeds with respect to their design specifications and characteristic
attributes, to determine their strengths and limitations. The ICS-CTM2
framework enables ICS testbeds to be analyzed on the basis of standardized
domains which serve as benchmarks. Each evaluation criterion within the
domains can be assessed to gauge the maturity of a testbed's design. This
paper also provides a case study analysis of 8 ICS testbeds to demonstrate
the functionality of the ICS-CTM2 framework and the visualization of its
results.
Future Work: The implementation methodology and the framework domains
described in this paper are being actively refined to enhance the assessment
parameters. Future work includes: (1) a comparative assessment of a larger
number of ICS testbeds using the ICS-CTM2 framework, based on testbed
specification data collected through a scientific survey, and (2) further
refinement of the ICS-CTM2 model by updating or expanding the domains,
evaluation criteria, or the analysis method.
## VI Acknowledgement
This work is funded in part by the NSF CPS grant ECCS 1739969.
## References
* [1] C. Alcaraz, “Secure Interconnection of IT-OT Networks in Industry 4.0,” in Critical Infrastructure Security and Resilience. Springer, 2019, pp. 201–217.
* [2] F. Khorrami, P. Krishnamurthy, and R. Karri, “Cybersecurity for Control Systems: A Process-Aware Perspective,” 10 2016.
* [3] T. Menze, “Kaspersky, The State of Industrial Cybersecurity,” September 2020.
* [4] A. Hahn, A. Ashok, S. Sridhar, and M. Govindarasu, “Cyber-Physical Security Testbeds: Architecture, Application, and Evaluation for Smart Grid,” IEEE Transactions on Smart Grid, vol. 4, pp. 847–855, 2013.
* [5] H. Holm, M. Karresand, A. Vidström, and E. Westring, “A Survey of Industrial Control System Testbeds,” in Proceedings of the Nordic Conference on Secure IT Systems, Stockholm, Sweden, October 2015, vol. 9417. Springer: Berlin/Heidelberg, Germany, 2015, pp. 11–26.
* [6] U. P. D. Ani and J. Watson, “What Makes an Industrial Control System Security Testbed Credible and Acceptable? Towards a Design Consideration Framework,” pp. 181–190, July 2021.
* [7] M. Conti, D. Donadel, and F. Turrin, “A Survey on Industrial Control System Testbeds and Datasets for Security Research,” CoRR, vol. abs/2102.05631, 2021.
* [8] K. Stouffer, V. Pillitteri, S. Lightman, M. Abrams, and A. Hahn, “NIST SP 800-82 Rev. 2, Guide to Industrial Control Systems (ICS) Security,” 2015.
* [9] E. Christiansson and H. Luiijf, “Creating a European SCADA Security Testbed,” pp. 237–247, Springer US, 2008.
* [10] A. Ashok, S. Krishnaswamy, and M. Govindarasu, “Powercyber: A Remotely Accessible Testbed for Cyber Physical Security of the Smart Grid,” Institute of Electrical and Electronics Engineers Inc., 12 2016.
* [11] I. Kovalenko, M. Saez, K. Barton, and D. Tilbury, “SMART: A System-level Manufacturing and Automation Research Testbed,” 2017.
* [12] “Department of Energy (DOE), Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2),” 2014.
* [13] A. P. Mathur and N. O. Tippenhauer, “SWaT: A Water Treatment Testbed for Research and Training on ICS Security,” pp. 31–36, 2016.
* [14] R. E. Gillen, L. A. Anderson, C. Craig, J. Johnson, A. Columbia, R. Anderson, A. Craig, and S. L. Scott, “Design and Implementation of Full-Scale Industrial Control System Testbed for Assessing Cyber-Security Defenses,” pp. 341–346, 2020.
* [15] T. Alves, R. Das, and T. Morris, “Virtualization of Industrial Control System Testbeds for Cybersecurity,” pp. 10–14, Association for Computing Machinery, 2016.
* [16] F. Zhang, H. A. D. E. Kodituwakku, J. W. Hines, and J. Coble, “Multilayer Data-Driven Cyber-Attack Detection System for Industrial Control Systems Based on Network, System, and Process Data,” IEEE Transactions on Industrial Informatics, vol. 15, pp. 4362–4369, 2019.
* [17] P. S. Oruganti, M. Appel, and Q. Ahmed, “Hardware-in-loop based Automotive Embedded Systems Cybersecurity Evaluation Testbed,” pp. 41–44, Association for Computing Machinery, Inc, March 2019.
* [18] R. Candell, D. M. Anand, and K. Stouffer, “A Cybersecurity Testbed for Industrial Control Systems,” 2014.
* [19] T. J. Williams, “The Purdue Enterprise Reference Architecture and Methodology (PERA),” 1994.
* [20] ISA, “ANSI/ISA-62443-1-1 (99.01.01)-2007, Security for Industrial Automation and Control Systems: Part 1: Terminology, Concepts, and Models,” American National Standards Institute, 2007.
# Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs
Yuanning Cui1, Yuxin Wang1, Zequn Sun1, Wenqiang Liu3, Yiqiao Jiang3, Kexin
Han3, Wei Hu1,2 (corresponding author)
###### Abstract
Existing knowledge graph (KG) embedding models have primarily focused on
static KGs. However, real-world KGs do not remain static, but rather evolve
and grow in tandem with the development of KG applications. Consequently, new
facts and previously unseen entities and relations continually emerge,
necessitating an embedding model that can quickly learn and transfer new
knowledge through growth. Motivated by this, we delve into an expanding field
of KG embedding in this paper, i.e., lifelong KG embedding. We consider
knowledge transfer and retention of the learning on growing snapshots of a KG
without having to learn embeddings from scratch. The proposed model includes a
masked KG autoencoder for embedding learning and update, with an embedding
transfer strategy to inject the learned knowledge into the new entity and
relation embeddings, and an embedding regularization method to avoid
catastrophic forgetting. To investigate the impacts of different aspects of KG
growth, we construct four datasets to evaluate the performance of lifelong KG
embedding. Experimental results show that the proposed model outperforms the
state-of-the-art inductive and lifelong embedding baselines.
## 1 Introduction
Many knowledge-driven applications are built on knowledge graphs (KGs), which
store massive amounts of structured facts about the real world (Ji et al.
2022). Throughout the life-cycle of KG construction and application, new
facts, unseen entities, and unseen relations continually emerge into the KG on
a regular basis. As a result, a real-world KG is rarely a static graph, but
rather evolves and grows alongside the development of its applications. Figure 1 illustrates an
example excerpted from Wikidata (Vrandecic and Krötzsch 2014), which shows the
growth of the KG along with the continuous knowledge extraction. However, KG
embedding, a critical task for downstream applications, has primarily focused
on static KGs over the years (Wang et al. 2017). Learning from scratch every
time is inefficient and wastes previously acquired knowledge, and simply fine-
tuning new facts would quickly disrupt previously acquired knowledge. Hence,
this paper proposes to investigate lifelong embedding learning and transfer
for growing KGs, with the goal of learning new facts while retaining old
knowledge without re-training from scratch.
The key idea of this paper comes from the human learning process. Humans are
typical lifelong learners, with knowledge transfer and retention being the
most important aspects of lifelong learning. Humans, in particular, can
continually learn new knowledge given new facts and use previously learned
knowledge to help new knowledge learning (knowledge learning and transfer), as
well as update old knowledge while retaining useful knowledge (knowledge
update and retention). Motivated by this, we seek to build a lifelong KG
embedding model, namely LKGE, which is capable of learning, transferring and
retaining knowledge for growing KGs efficiently. Existing related work, such
as inductive KG embedding (Hamaguchi et al. 2017; Wang et al. 2019b), mainly
focuses on knowledge transfer, ignoring new knowledge learning and old
knowledge update.
Figure 1: An example of a growing KG. In each snapshot $i$, new facts are
added into the KG, and previously unseen entities and relations emerge with
the new facts.
The proposed lifelong KG embedding task faces two major challenges. First, how
to strike a balance between new knowledge learning and old knowledge transfer?
Learning embeddings for new entities and relations from scratch cannot
leverage previously learned knowledge, and inductively generating embeddings
for them ignores new knowledge in the new snapshot. Second, how to update old
knowledge while retaining useful knowledge? Learning new facts about an old
entity usually requires updating the previously learned embeddings, which can
be harmful to the old model. This is because updating an old entity embedding
would affect many other old embeddings of related entities. This would cause
the catastrophic forgetting issue, and therefore affect the applications built
on the old KG snapshot.
To resolve the above challenges, we propose three solutions in our LKGE model.
First, as the base embedding model for new knowledge learning and old
knowledge update, we design a masked KG autoencoder that masks and
reconstructs the entities or relations in new facts. It builds connections
between locally-related old and new entities, acting as a bridge for knowledge
transfer. Second, to aid in the learning of new knowledge, we propose a
knowledge embedding transfer strategy that uses previously learned knowledge
to initialize the embeddings of new entities and relations. These embeddings
are used by the KG autoencoder to learn new facts. Third, to avoid
catastrophic forgetting in the old knowledge update, we propose embedding
regularization to balance the learning of new facts and the update of old
embeddings.
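While the exact LKGE formulation is not reproduced here, the transfer idea can be illustrated with a toy sketch: initialize an unseen entity's embedding from the already-learned embeddings of the seen entities it co-occurs with in the new facts, falling back to a default initialization when no seen neighbor exists. The mean-of-neighbors rule and all names below are illustrative assumptions, not the paper's actual equations.

```python
def init_new_entity(entity, new_facts, old_emb, dim):
    """Illustrative embedding-transfer sketch (an assumption, not LKGE's
    actual rule): a new entity is initialized as the mean embedding of the
    previously seen entities it co-occurs with in the new facts."""
    neighbors = []
    for s, r, o in new_facts:
        if s == entity and o in old_emb:
            neighbors.append(old_emb[o])
        elif o == entity and s in old_emb:
            neighbors.append(old_emb[s])
    if neighbors:  # transfer previously learned knowledge
        return [sum(vals) / len(neighbors) for vals in zip(*neighbors)]
    return [0.0] * dim  # no seen neighbor: fall back to a default init

# Hypothetical snapshot data: "e_new" is unseen, "e1"/"e2" were learned before.
old_emb = {"e1": [1.0, 1.0, 1.0, 1.0], "e2": [0.0, 0.0, 0.0, 0.0]}
new_facts = [("e_new", "r1", "e1"), ("e_new", "r2", "e2")]
vec = init_new_entity("e_new", new_facts, old_emb, dim=4)
print(vec)  # [0.5, 0.5, 0.5, 0.5]
```

The initialized vector would then be refined by the base embedding model on the new facts, rather than being kept fixed.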
We build four datasets to assess the lifelong KG embedding performance,
including the entity-centric, relation-centric, fact-centric, and hybrid
growth. Each dataset examines a different aspect of KG growth. By contrast,
existing datasets (Hamaguchi et al. 2017; Kou et al. 2020; Daruna et al. 2021)
all assume that a KG grows in an ideal way, with balanced new entities or
facts in each new snapshot. In our experiments, we compare the link prediction
accuracy, knowledge transfer ability, and learning efficiency of the proposed
model against baselines on the four datasets. The results show that the
proposed LKGE not only achieves the best performance on the four datasets, but
also has the best forward knowledge transfer ability and learning efficiency.
The main contributions of this paper are summarized as follows:
* •
We study lifelong KG embedding learning and transfer. It is a practical task
since the real-world KGs continually evolve and grow, which requires the
embedding model to be capable of handling the knowledge growth.
* •
We propose a novel lifelong learning model, LKGE. It includes a masked KG
autoencoder as the basis of embedding learning and update, an embedding
transfer strategy for knowledge transfer, and an embedding regularization to
prevent catastrophic forgetting in knowledge update.
* •
We conduct extensive experiments on four new datasets. The results demonstrate
the effectiveness and efficiency of LKGE against a variety of state-of-the-art
models.
## 2 Related Work
### 2.1 Knowledge Graph Embedding
KG embedding seeks to encode the symbolic representations of KGs into vector
space to foster the application of KGs in downstream tasks. Most existing KG
embedding models (Bordes et al. 2013; Wang et al. 2014; Dettmers et al. 2018;
Schlichtkrull et al. 2018; Guo, Sun, and Hu 2019; Vashishth et al. 2020a, b)
focus on static graphs and cannot continually learn new knowledge on the
growing KGs.
To embed unseen entities, inductive KG embedding models learn to represent an
entity by aggregating its existing neighbors in the previous KG snapshot. MEAN
(Hamaguchi et al. 2017) uses a graph convolutional network (GCN) (Scarselli et
al. 2009) for neighborhood aggregation. When an unseen entity emerges, the GCN
would aggregate its previously seen neighboring entities to generate an
embedding. LAN (Wang et al. 2019b) adopts an attention mechanism to
attentively aggregate different neighbors. As MEAN and LAN rely on the entity
neighborhood for embedding learning, they cannot handle the new entities that
have no neighbors in the previous snapshot. Furthermore, inductive KG
embedding disregards learning the facts about new entities.
Our work is also relevant to dynamic KG embedding. puTransE (Tay, Luu, and Hui
2017) trains several new models when facts are added. DKGE (Wu et al. 2022)
learns contextual embeddings for entities and relations, which can be
automatically updated as the KG grows. They both need partial re-training on
old facts, but our model does not. In addition, some subgraph-based models,
such as GraIL (Teru, Denis, and Hamilton 2020), INDIGO (Liu et al. 2021), and
TACT (Chen et al. 2021), can also represent unseen entities using the entity-
independent features and subgraph aggregation. Their subgraph-building process
is time-consuming, making them only applicable to small KGs. In order to run
on large-scale KGs, NBFNet (Zhu et al. 2021) proposes a fast node pair
embedding model based on the Bellman-Ford algorithm, and NodePiece (Galkin et
al. 2022) uses tokenized anchor nodes and relational paths to represent new
entities. However, they do not consider learning new knowledge and cannot
support new relations.
### 2.2 Lifelong Learning
Lifelong learning seeks to solve new problems quickly without catastrophically
forgetting previously acquired knowledge. Lifelong learning models are broadly
classified into three categories. (i) Dynamic architecture models (Rusu et al.
2016; Lomonaco and Maltoni 2017) extend the network to learn new tasks and
avoid forgetting acquired knowledge. (ii) Regularization-based models
(Kirkpatrick et al. 2017; Zenke, Poole, and Ganguli 2017) capture the
importance of model parameters for old tasks and limit the update of important
parameters. (iii) Rehearsal-based models (Lopez-Paz and Ranzato 2017; Wang et
al. 2019a) memorize some data from old tasks and replay them when learning new
knowledge.
Few lifelong learning models focus on KG embedding. DiCGRL (Kou et al. 2020)
is a disentangle-based lifelong graph embedding model. It splits node
embeddings into different components and replays related historical facts to
avoid catastrophic forgetting. The work (Daruna et al. 2021) combines class-
incremental learning models with TransE (Bordes et al. 2013) for continual KG
embedding. However, it does not propose a specific lifelong KG embedding
model.
## 3 Lifelong Knowledge Graph Embedding
Figure 2: Overview of the proposed model for lifelong KG embedding.
### 3.1 Preliminaries
Growing KG. The growth process of a KG yields a snapshot sequence, i.e.,
$\mathcal{G}=\\{\mathcal{S}_{1},\mathcal{S}_{2},\ldots,\mathcal{S}_{t}\\}$.
Each snapshot $\mathcal{S}_{i}$ is defined as a triplet
$(\mathcal{T}_{i},\mathcal{E}_{i},\mathcal{R}_{i})$, where
$\mathcal{T}_{i},\mathcal{E}_{i}$ and $\mathcal{R}_{i}$ denote the fact,
entity and relation sets, respectively. We have
$\mathcal{T}_{i}\subseteq\mathcal{T}_{i+1}$,
$\mathcal{E}_{i}\subseteq\mathcal{E}_{i+1}$ and
$\mathcal{R}_{i}\subseteq\mathcal{R}_{i+1}$. We use $\mathcal{T}_{\Delta
i}=\mathcal{T}_{i}-\mathcal{T}_{i-1}$, $\mathcal{E}_{\Delta
i}=\mathcal{E}_{i}-\mathcal{E}_{i-1}$, and $\mathcal{R}_{\Delta
i}=\mathcal{R}_{i}-\mathcal{R}_{i-1}$ to denote the new facts, entities and
relations, respectively. Each fact is in the form of
$(s,r,o)\in\mathcal{T}_{i}$, where $s,o\in\mathcal{E}_{i}$ are the subject and
object entities, respectively, and $r\in\mathcal{R}_{i}$ is their relation.
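As a toy illustration (all entity and relation names below are hypothetical), the delta sets follow directly from set differences between consecutive snapshots:

```python
# Toy growing-KG snapshots; facts are (subject, relation, object) triples.
# All entity/relation names here are hypothetical.
S1 = {("a", "bornIn", "x"), ("b", "bornIn", "x")}
S2 = S1 | {("c", "worksFor", "d"), ("a", "livesIn", "y")}

def entities(facts):
    return {s for s, _, _ in facts} | {o for _, _, o in facts}

def relations(facts):
    return {r for _, r, _ in facts}

# Delta sets of the second snapshot: new facts, entities, and relations.
T_delta2 = S2 - S1
E_delta2 = entities(S2) - entities(S1)
R_delta2 = relations(S2) - relations(S1)
```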
Lifelong KG embedding. KG embedding seeks to encode the symbolic
representations of entities and relations into vector space and capture KG
semantics using vector operations. For a growing KG, a lifelong KG embedding
model learns to represent the snapshot sequence
$\mathcal{G}=\\{\mathcal{S}_{1},\mathcal{S}_{2},\ldots,\mathcal{S}_{t}\\}$
continually. When a new fact set $\mathcal{T}_{\Delta i}$ emerges, the current
KG embedding model $\mathcal{M}_{i-1}$ needs to be updated to fit the new facts and
learn embeddings for the new entities $\mathcal{E}_{\Delta i}$ and new
relations $\mathcal{R}_{\Delta i}$. The resulting model is denoted by
$\mathcal{M}_{i}$.
Lifelong link prediction. The link prediction task asks the KG embedding model
to predict the missing subject or object entity in an incomplete fact like
$(s,r,?)$ or $(?,r,o)$. For each snapshot $\mathcal{S}_{i}$, the new fact set
$\mathcal{T}_{\Delta i}$ is divided into a training set $\mathcal{D}_{i}$, a
validation set $\mathcal{V}_{i}$ and a test set $\mathcal{Q}_{i}$. In the
lifelong setting, the model is required to learn the training data sets,
$\mathcal{D}_{1},\mathcal{D}_{2},\ldots,\mathcal{D}_{t}$, in turn. After
finishing the learning on $\mathcal{D}_{i}$, the model is evaluated on the
accumulated test data, which is $\cup_{j=1}^{i}\mathcal{Q}_{j}$, to assess the
overall learning performance. Once the learning on $\mathcal{D}_{i}$ finishes, $\mathcal{D}_{i}$ and $\mathcal{V}_{i}$ are no longer available for subsequent learning. Note that the goal of lifelong KG embedding is to improve
the overall performance on all snapshots, which requires the KG embedding
model to continually learn new knowledge and retain the learned knowledge.
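The evaluation protocol above can be sketched as a loop, where `train` and `evaluate` are placeholders for the embedding model's update and MRR scoring:

```python
# Sketch of the lifelong evaluation protocol: after learning on D_i, the
# model is scored on the accumulated test sets Q_1, ..., Q_i; the training
# and validation data of past snapshots are never revisited.
def lifelong_protocol(train_sets, test_sets, train, evaluate):
    scores, model = [], None
    for i, D_i in enumerate(train_sets):
        model = train(model, D_i)  # D_i is unavailable afterwards
        accumulated = [q for Q in test_sets[: i + 1] for q in Q]
        scores.append(evaluate(model, accumulated))
    return scores
```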
### 3.2 Model Overview
The overview of our model, LKGE, is shown in Figure 2. It continually learns
knowledge over a sequence of KG snapshots without re-training on previously
seen data. The foundation is a masked KG autoencoder that can reconstruct the
entity and relation embeddings from the masked related subgraph. To enable
knowledge transfer from old snapshots to the new one, we propose embedding
transfer to inject learned knowledge into unseen entities and relations and
then iteratively optimize the model to fit the new facts. We also propose a
lightweight regularization method to retain old knowledge.
### 3.3 Masked Knowledge Graph Autoencoder
In lifelong KG embedding learning, the new facts, e.g., $\mathcal{T}_{\Delta
i}$, bring new entities $\mathcal{E}_{\Delta i}$, and new relations
$\mathcal{R}_{\Delta i}$. They also involve some old entities from $\mathcal{E}_{i-1}$ and old relations from $\mathcal{R}_{i-1}$. Thus, a new
entity may be connected with both new and old entities (referred to as new
knowledge). The involved old entities also receive more facts, and therefore
their previously learned embeddings (referred to as old knowledge) need to be
updated to fit the new facts. Hence, the base KG embedding model for lifelong
learning should be capable of capturing new knowledge as well as updating old
knowledge. To this end, we propose a masked KG autoencoder motivated by the
recent success of self-supervised learning (Hou et al. 2022). The key idea is
to reconstruct the embedding for an entity or relation based on its masked
subgraph, which may include both other new entities and some old entities.
Specifically, we use the first-order subgraph of an entity or relation to
reconstruct its embedding $\bar{\mathbf{x}}_{i}$:
$\bar{\mathbf{x}}_{i}=\text{MAE}\big{(}\cup_{j=1}^{i}\mathcal{N}_{j}(x)\big{)},$
(1)
where $x$ denotes either an entity or a relation, and
$\mathcal{N}_{j}\subseteq\mathcal{D}_{j}$ denotes the involved facts of $x$ in
the $j$-th snapshot. $\text{MAE}()$ is an encoder to represent the input
subgraph. The objective of our KG encoder is to align the entity or relation
embedding with the reconstructed representation as follows:
$\mathcal{L}_{\text{MAE}}=\sum_{e\in\mathcal{E}_{i}}\|\mathbf{e}_{i}-\bar{\mathbf{e}}_{i}\|_{2}^{2}+\sum_{r\in\mathcal{R}_{i}}\|\mathbf{r}_{i}-\bar{\mathbf{r}}_{i}\|_{2}^{2}.$
(2)
The key then becomes how to design an effective and efficient encoder for
lifelong learning. GCN (Kipf and Welling 2017) and Transformer (Vaswani et al.
2017) are two common choices for the encoder. The two encoders both introduce
additional model parameters (e.g., the weight matrices). In our lifelong
learning setting, the encoder needs to be updated to fit new facts or
subgraphs. In this case, once the GCN or Transformer is updated, the changed
model parameters would affect the embedding generation of _all_ old entities
(not just the involved old entities in new facts), increasing the risk of
catastrophically forgetting previous snapshots. To avoid this issue, we use
the entity and relation embedding transition functions as encoders, which do
not introduce additional parameters. We borrow the idea of TransE (Bordes et
al. 2013) and interpret a relation embedding as the translation vector between
the subject and object entity embeddings, i.e.,
$\mathbf{s}+\mathbf{r}\approx\mathbf{o}$, where
$\mathbf{s},\mathbf{r},\mathbf{o}$ denote the embeddings of subject entity,
relation and object entity, respectively. Based on this, we can deduce two
transition functions for entity and relation embeddings. The subject entity of
$(s,r,o)$ can be represented by
$f_{\rm{sub}}(\mathbf{r},\mathbf{o})=\mathbf{o}-\mathbf{r}$, and the relation
embedding is $f_{\rm{rel}}(\mathbf{s},\mathbf{o})=\mathbf{o}-\mathbf{s}$. We
can define the encoders as
$\displaystyle\bar{\mathbf{e}}_{i}$
$\displaystyle=\frac{\sum_{j=1}^{i}\sum_{(s,r,o)\in\mathcal{N}_{j}(e)}f_{\rm{sub}}(\mathbf{r}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{i}|\mathcal{N}_{j}(e)|},$
(3) $\displaystyle\bar{\mathbf{r}}_{i}$
$\displaystyle=\frac{\sum_{j=1}^{i}\sum_{(s,r,o)\in\mathcal{N}_{j}(r)}f_{\rm{rel}}(\mathbf{s}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{i}|\mathcal{N}_{j}(r)|},$
(4)
where $\mathbf{s}_{i},\mathbf{r}_{i},\mathbf{o}_{i}$ are the embeddings of
$s,r,o$ during the training on $\mathcal{D}_{i}$.
$\mathcal{N}_{j}(x)\subseteq\mathcal{D}_{j}$ is the set of facts containing
$x$.
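As a minimal sketch of Eq. (3), the embedding of an entity that appears as a subject can be reconstructed by averaging $\mathbf{o}-\mathbf{r}$ over its facts. Plain Python lists stand in for embedding vectors, and all names are hypothetical:

```python
# Sketch of Eq. (3): reconstruct an entity embedding as the average of
# o - r over the facts in which it appears as the subject.
def sub_view(r_vec, o_vec):
    # f_sub(r, o) = o - r
    return [o - r for o, r in zip(o_vec, r_vec)]

def reconstruct_entity(facts, ent, emb_e, emb_r):
    vecs = [sub_view(emb_r[r], emb_e[o]) for s, r, o in facts if s == ent]
    dim = len(next(iter(emb_e.values())))
    # Average over all contributing facts (assumed non-empty here).
    return [sum(v[k] for v in vecs) / len(vecs) for k in range(dim)]
```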
In lifelong learning, the model learns from a snapshot sequence. Eqs. (3) and
(4) require training samples from the first $i$ snapshots, which is not in line with the lifelong setting. To reduce the reliance on learned data, we use
$\mathbf{e}_{i-1}$ and $\mathbf{r}_{i-1}$ as the approximate average
embeddings of $e$ and $r$ in the first $i-1$ snapshots, respectively, and
rewrite the encoders as
$\bar{\mathbf{e}}_{i}\approx\frac{\sum_{j=1}^{i-1}|\mathcal{N}_{j}(e)|\mathbf{e}_{i-1}+\sum_{(s,r,o)\in\mathcal{N}_{i}(e)}f_{\rm{sub}}(\mathbf{r}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{i-1}|\mathcal{N}_{j}(e)|+|\mathcal{N}_{i}(e)|},$
(5)
$\bar{\mathbf{r}}_{i}\approx\frac{\sum_{j=1}^{i-1}|\mathcal{N}_{j}(r)|\mathbf{r}_{i-1}+\sum_{(s,r,o)\in\mathcal{N}_{i}(r)}f_{\rm{rel}}(\mathbf{s}_{i},\mathbf{o}_{i})}{\sum_{j=1}^{i-1}|\mathcal{N}_{j}(r)|+|\mathcal{N}_{i}(r)|}.$
(6)
The encoders use the facts involving both old and new entities and relations
for embedding reconstruction and they build a bridge for knowledge transfer.
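Eqs. (5) and (6) are a standard running-average update: when $\mathbf{e}_{i-1}$ is the exact mean over the first $i-1$ snapshots, the incremental form recovers the full average. A minimal numeric check (the values are hypothetical):

```python
# Running-average view of Eq. (5): combining the old mean, weighted by its
# fact count, with the new contributions reproduces the full average, so no
# old facts need to be stored.
def incremental_mean(old_mean, old_count, new_values):
    return (old_count * old_mean + sum(new_values)) / (old_count + len(new_values))

old = [1.0, 3.0, 5.0]   # contributions from earlier snapshots
new = [7.0, 9.0]        # contributions from the new snapshot
full = sum(old + new) / len(old + new)
inc = incremental_mean(sum(old) / len(old), len(old), new)
```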
For each snapshot, to learn the knowledge from the new data and update the
learned parameters, we leverage TransE (Bordes et al. 2013) to train the
embedding model:
$\mathcal{L}_{\text{new}}=\sum\limits_{(s,r,o)\in\mathcal{D}_{i}}\max\big{(}0,\gamma+f(\mathbf{s},\mathbf{r},\mathbf{o})-f(\mathbf{s}^{\prime},\mathbf{r},\mathbf{o}^{\prime})\big{)},$
(7)
where $\gamma$ is the margin.
$(\mathbf{s}^{\prime},\mathbf{r},\mathbf{o}^{\prime})$ is the embedding of a
negative fact. For each positive fact, we randomly replace the subject or
object entity with a random entity $e^{\prime}\in\mathcal{E}_{i}$.
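A minimal sketch of the margin-based loss in Eq. (7), using TransE's L1 score $f(\mathbf{s},\mathbf{r},\mathbf{o})=\|\mathbf{s}+\mathbf{r}-\mathbf{o}\|_{1}$ (the vectors below are hypothetical):

```python
# Sketch of the margin loss in Eq. (7) with the TransE score
# f(s, r, o) = ||s + r - o||_1, for one positive and one corrupted fact.
def score(s, r, o):
    return sum(abs(a + b - c) for a, b, c in zip(s, r, o))

def margin_loss(pos, neg, gamma=1.0):
    return max(0.0, gamma + score(*pos) - score(*neg))
```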
### 3.4 Embedding Transfer
During the lifecycle of a growing KG, abundant unseen entities and some unseen relations emerge with the new facts. Learning effective embeddings
for them is an essential aspect of lifelong KG embedding learning. However,
these unseen ones are not included in any learned snapshots, so only
inheriting the learned parameters cannot transfer the acquired knowledge to
their embeddings. To avoid learning from scratch, we propose embedding
transfer that seeks to leverage the learned embeddings to help represent
unseen entities and relations. Specifically, we initialize the embedding of each unseen entity by aggregating its facts:
$\mathbf{e}_{i}=\frac{1}{|\mathcal{N}_{i}(e)|}\sum_{(e,r,o)\in\mathcal{N}_{i}(e)}f_{\rm{sub}}(\mathbf{r}_{i-1},\mathbf{o}_{i-1}),$
(8)
where $\mathcal{N}_{i}(e)\subseteq\mathcal{D}_{i}$ is the set of facts
containing $e$. For the new entities that do not have common facts involving
existing entities, we randomly initialize their embeddings. We also use this
strategy to initialize the embeddings of unseen relations:
$\mathbf{r}_{i}=\frac{1}{|\mathcal{N}_{i}(r)|}\sum_{(s,r,o)\in\mathcal{N}_{i}(r)}f_{\rm{rel}}(\mathbf{s}_{i-1},\mathbf{o}_{i-1}),$
(9)
where $\mathcal{N}_{i}(r)\subseteq\mathcal{D}_{i}$ is the set of facts
containing $r$.
### 3.5 Embedding Regularization
Learning new snapshots is likely to overwrite the learned knowledge from old
snapshots. To avoid catastrophic forgetting, some regularization methods
(Kirkpatrick et al. 2017; Lopez-Paz and Ranzato 2017) constrain the updates of
parameters that are important to old tasks. The loss function of
regularization methods is
$\mathcal{L}_{\text{old}}=\sum\limits_{e\in\mathcal{E}_{i-1}}\omega(e)\|\mathbf{e}_{i}-\mathbf{e}_{i-1}\|_{2}^{2}+\sum\limits_{r\in\mathcal{R}_{i-1}}\omega(r)\|\mathbf{r}_{i}-\mathbf{r}_{i-1}\|_{2}^{2},$
(10)
where $\omega(x)$ is the regularization weight for $x$.
Conventional regularization-based methods for classification tasks such as
(Kirkpatrick et al. 2017) model the importance of each parameter at a high
cost based on the gradient or parameter change during training. This problem
is even more severe for KG embedding models that have a large number of
embedding parameters (i.e. entity and relation embeddings). To resolve this
problem, we propose a lightweight embedding regularization method, which
calculates the regularization weight of each entity or relation by the number
of new and old facts containing it:
$\omega(x)=1-\frac{|\mathcal{N}_{i}(x)|}{\sum_{j=1}^{i}|\mathcal{N}_{j}(x)|}.$
(11)
As a lightweight technique, it only keeps the total number of involved trained
facts for each entity or relation and only updates regularization weights once
per snapshot.
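The weight in Eq. (11) depends only on fact counts, which is what makes the method lightweight. A sketch with hypothetical counts:

```python
# Sketch of Eq. (11): the regularization weight depends only on fact counts.
# An entity whose facts are mostly old gets a weight near 1 (kept stable);
# one dominated by new facts gets a weight near 0 (free to adapt).
def reg_weight(new_count, counts_per_snapshot):
    # counts_per_snapshot: |N_1(x)|, ..., |N_i(x)|, with |N_i(x)| == new_count
    return 1.0 - new_count / sum(counts_per_snapshot)
```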
### 3.6 Overall Learning Objective
To learn new knowledge while retaining acquired knowledge, the overall
lifelong learning objective $\mathcal{L}$ is defined as follows:
$\mathcal{L}=\mathcal{L}_{\text{new}}+\alpha\,\mathcal{L}_{\text{old}}+\beta\,\mathcal{L}_{\text{MAE}},$
(12)
where $\alpha,\beta$ are hyperparameters for balancing the objectives.
### 3.7 Complexity Analysis
Compared with fine-tuning, the proposed model requires few extra resources. It
does not increase the size of training samples like the rehearsal models (Wang
et al. 2019a; Lopez-Paz and Ranzato 2017; Kou et al. 2020). In addition to
fine-tuning, the proposed model calculates the loss of masked autoencoder and
embedding regularization. The additional time complexity in each iteration is
$O(|\mathcal{E}|+|\mathcal{R}|+|\mathcal{D}|)$. In practice, we find that the
loss of autoencoder can accelerate learning, and its time consumption is close
to that of fine-tuning. The space complexity of fine-tuning is
$O((|\mathcal{E}|+|\mathcal{R}|)\times d)$, and the space complexity of the
proposed model is $O((|\mathcal{E}|+|\mathcal{R}|)\times(d+1))$, where $d$ is
the dimension of embeddings.
Datasets | $|\mathcal{T}_{\Delta 1}|$ | $|\mathcal{E}_{1}|$ | $|\mathcal{R}_{1}|$ | $|\mathcal{T}_{\Delta 2}|$ | $|\mathcal{E}_{2}|$ | $|\mathcal{R}_{2}|$ | $|\mathcal{T}_{\Delta 3}|$ | $|\mathcal{E}_{3}|$ | $|\mathcal{R}_{3}|$ | $|\mathcal{T}_{\Delta 4}|$ | $|\mathcal{E}_{4}|$ | $|\mathcal{R}_{4}|$ | $|\mathcal{T}_{\Delta 5}|$ | $|\mathcal{E}_{5}|$ | $|\mathcal{R}_{5}|$
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Entity | 46,388 | 2,909 | 233 | 72,111 | 5,817 | 236 | 73,785 | 8,275 | 236 | 70,506 | 11,633 | 237 | 47,326 | 14,541 | 237
Relation | 98,819 | 11,560 | 48 | 93,535 | 13,343 | 96 | 66,136 | 13,754 | 143 | 30,032 | 14,387 | 190 | 21,594 | 14,541 | 237
Fact | 62,024 | 10,513 | 237 | 62,023 | 12,779 | 237 | 62,023 | 13,586 | 237 | 62,023 | 13,894 | 237 | 62,023 | 14,541 | 237
Hybrid | 57,561 | 8,628 | 86 | 20,873 | 10,040 | 102 | 88,017 | 12,779 | 151 | 103,339 | 14,393 | 209 | 40,326 | 14,541 | 237
Table 1: Statistical data of the four constructed growing KG datasets. For the
$i$-th snapshot, $\mathcal{T}_{\Delta i}$ denotes the set of new facts in this
snapshot, and $\mathcal{E}_{i},\mathcal{R}_{i}$ denote the sets of cumulative
entities and relations in the first $i$ snapshots, respectively.
## 4 Dataset Construction
To simulate a variety of aspects of KG growth, we create four datasets based
on FB15K-237 (Toutanova and Chen 2015), which are entity-centric, relation-
centric, fact-centric, and hybrid. We denote them by Entity, Relation, Fact
and Hybrid, respectively. Given a KG
$\mathcal{G}=\\{\mathcal{E},\mathcal{R},\mathcal{T}\\}$, we construct five
snapshots with the following steps:
1. 1.
Seeding. We randomly sample 10 facts from $\mathcal{T}$ and add them into
$\mathcal{T}_{1}$ for initialization. The entities and relations in the 10
facts form the initial $\mathcal{E}_{1}$ and $\mathcal{R}_{1}$, respectively.
2. 2.
Expanding. To build Entity, Relation and Fact, we iteratively sample a fact
containing at least one seen entity in $\mathcal{E}_{i}$, add it into
$\mathcal{T}_{i}$, and extract the unseen entity and relation from it to
expand $\mathcal{E}_{i}$ and $\mathcal{R}_{i}$. For Entity, once
$|\mathcal{E}_{i}|\geq\frac{i+1}{5}|\mathcal{E}|$, we add all new facts
$\big{\\{}(s,r,o)\,|\,s\in\mathcal{E}_{i}\wedge o\in\mathcal{E}_{i}\big{\\}}$
into $\mathcal{T}_{i}$ and start building the next snapshot. In the same way,
we construct Relation and Fact. As for Hybrid, we uniformly sample an entity,
relation or fact without replacement from
$\mathcal{U}=\mathcal{E}\cup\mathcal{R}\cup\mathcal{T}$ to join
$\mathcal{E}_{i}$, $\mathcal{R}_{i}$ and $\mathcal{T}_{i}$. Note that when the
sampled fact contains an unseen entity or relation, we re-sample a fact that
only contains seen entities and relations to replace it. After each iteration,
we terminate the expansion of this snapshot with a probability
$\frac{5}{|\mathcal{U}|}$. Consequently, the expansion of Hybrid is uneven,
making it more realistic and challenging. For all datasets, we take the whole
KG as the last snapshot, i.e., $\mathcal{T}_{5}=\mathcal{T}$, and
$\mathcal{E}_{5}=\mathcal{E},\mathcal{R}_{5}=\mathcal{R}$.
3. 3.
Dividing. For each snapshot, we randomly divide the new fact set
$\mathcal{T}_{\Delta i}$ into a training set $\mathcal{D}_{i}$, a validation
set $\mathcal{V}_{i}$ and a test set $\mathcal{Q}_{i}$ by a split ratio of
3:1:1.
The statistics of the four datasets are presented in Table 1.
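The 3:1:1 split of each new fact set in the dividing step can be sketched as follows (the seeded shuffle is an assumption for reproducibility):

```python
# Sketch of the dividing step: shuffle each new fact set and split it into
# train/validation/test by a 3:1:1 ratio.
import random

def split_facts(facts, seed=0):
    facts = list(facts)
    random.Random(seed).shuffle(facts)
    n = len(facts)
    a, b = 3 * n // 5, 4 * n // 5
    return facts[:a], facts[a:b], facts[b:]
```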
## 5 Experimental Results
We conduct experiments regarding link prediction accuracy, knowledge transfer
capability, and learning efficiency to validate the proposed model, LKGE. The
datasets and source code are available at https://github.com/nju-websoft/LKGE.
### 5.1 Experiment Settings
Competitors. We compare our model with 12 competitors, including (i) three
baseline models: snapshot only, re-training, and fine-tuning; (ii) two
inductive models: MEAN (Hamaguchi et al. 2017), and LAN (Wang et al. 2019b);
(iii) two dynamic architecture models: PNN (Rusu et al. 2016), and CWR
(Lomonaco and Maltoni 2017); (iv) two regularization-based models: SI (Zenke,
Poole, and Ganguli 2017), and EWC (Kirkpatrick et al. 2017); and (v) three
rehearsal-based models: GEM (Lopez-Paz and Ranzato 2017), EMR (Wang et al.
2019a), and DiCGRL (Kou et al. 2020).
Evaluation metrics. Following the convention, we conduct the experiments on
link prediction. Given a snapshot $\mathcal{S}_{i}$, for each test fact
$(s,r,o)\in\mathcal{Q}_{i}$, we construct two queries $(s,r,?)$ and $(?,r,o)$.
When evaluating on $\mathcal{Q}_{i}$, we set all seen entities in
$\mathcal{E}_{i}$ as candidate entities. We select seven metrics to evaluate
all models, including (i) four metrics on link prediction accuracy: mean reciprocal rank (MRR) and Hits@$k$ ($k=1,3,10$; H@$k$ for short). We use the model $\mathcal{M}_{5}$ trained on the last snapshot to evaluate on the union of the test sets in all snapshots. (ii) Two metrics on knowledge
transfer capability: forward transfer (FWT) and backward transfer (BWT)
(Lopez-Paz and Ranzato 2017). FWT is the influence of learning a task to the
performance on the future tasks, while BWT is the influence of learning to the
previous tasks:
$\text{FWT}=\frac{1}{n-1}\sum_{i=2}^{n}h_{i-1,i},\qquad\text{BWT}=\frac{1}{n-1}\sum_{i=1}^{n-1}\left(h_{n,i}-h_{i,i}\right),$
(13)
where $n$ is the number of snapshots, $h_{i,j}$ is the MRR scores on
$\mathcal{Q}_{j}$ after training the model $\mathcal{M}_{i}$ on the $i$-th
snapshot. Higher scores indicate better performance. (iii) Time cost: The
cumulative time cost of the learning on each snapshot.
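Assuming the MRR scores are collected in a matrix $h$ as defined above, FWT and BWT (Lopez-Paz and Ranzato 2017) can be computed with a short sketch (the score values used below are hypothetical):

```python
# FWT/BWT: h[i][j] is the MRR on Q_j after training model M_i.
def fwt(h):
    n = len(h)
    return sum(h[i - 1][i] for i in range(1, n)) / (n - 1)

def bwt(h):
    n = len(h)
    return sum(h[n - 1][i] - h[i][i] for i in range(n - 1)) / (n - 1)
```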
Models | Entity MRR | H@1 | H@3 | H@10 | Relation MRR | H@1 | H@3 | H@10 | Fact MRR | H@1 | H@3 | H@10 | Hybrid MRR | H@1 | H@3 | H@10
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Snapshot | 0.084 | 0.028 | 0.107 | 0.193 | 0.021 | 0.010 | 0.023 | 0.043 | 0.082 | 0.030 | 0.095 | 0.191 | 0.036 | 0.015 | 0.043 | 0.077
Re-train | 0.236 | 0.137 | 0.274 | 0.433 | 0.219 | 0.128 | 0.250 | 0.403 | 0.206 | 0.118 | 0.232 | 0.385 | 0.227 | 0.134 | 0.260 | 0.413
Fine-tune | 0.165 | 0.085 | 0.188 | 0.321 | 0.093 | 0.039 | 0.106 | 0.195 | 0.172 | 0.090 | 0.193 | 0.339 | 0.135 | 0.069 | 0.151 | 0.262
MEAN | 0.117 | 0.068 | 0.123 | 0.212 | 0.039 | 0.024 | 0.040 | 0.067 | 0.084 | 0.051 | 0.088 | 0.146 | 0.046 | 0.029 | 0.049 | 0.080
LAN | 0.141 | 0.082 | 0.149 | 0.256 | 0.052 | 0.033 | 0.052 | 0.092 | 0.106 | 0.056 | 0.113 | 0.200 | 0.059 | 0.032 | 0.062 | 0.113
PNN | 0.229 | 0.130 | 0.265 | 0.425 | 0.167 | 0.096 | 0.191 | 0.305 | 0.157 | 0.084 | 0.188 | 0.290 | 0.185 | 0.101 | 0.216 | 0.349
CWR | 0.088 | 0.028 | 0.114 | 0.202 | 0.021 | 0.010 | 0.024 | 0.043 | 0.083 | 0.030 | 0.095 | 0.192 | 0.037 | 0.015 | 0.044 | 0.077
SI | 0.154 | 0.072 | 0.179 | 0.311 | 0.113 | 0.055 | 0.131 | 0.224 | 0.172 | 0.088 | 0.194 | 0.343 | 0.111 | 0.049 | 0.126 | 0.229
EWC | 0.229 | 0.130 | 0.264 | 0.423 | 0.165 | 0.093 | 0.190 | 0.306 | 0.201 | 0.113 | 0.229 | 0.382 | 0.186 | 0.102 | 0.214 | 0.350
GEM | 0.165 | 0.085 | 0.188 | 0.321 | 0.093 | 0.040 | 0.106 | 0.196 | 0.175 | 0.092 | 0.196 | 0.345 | 0.136 | 0.070 | 0.152 | 0.263
EMR | 0.171 | 0.090 | 0.195 | 0.330 | 0.111 | 0.052 | 0.126 | 0.225 | 0.171 | 0.090 | 0.191 | 0.337 | 0.141 | 0.073 | 0.157 | 0.267
DiCGRL | 0.107 | 0.057 | 0.110 | 0.211 | 0.133 | 0.079 | 0.147 | 0.241 | 0.162 | 0.084 | 0.189 | 0.320 | 0.149 | 0.083 | 0.168 | 0.277
LKGE | 0.234 | 0.136 | 0.269 | 0.425 | 0.192 | 0.106 | 0.219 | 0.366 | 0.210 | 0.122 | 0.238 | 0.387 | 0.207 | 0.121 | 0.235 | 0.379
Table 2: Result comparison of link prediction on the union of the test sets in
all snapshots.
Implementation details. We use TransE (Bordes et al. 2013) as the base model
and adapt the competitors to our task:
* •
Snapshot only. For the $i$-th snapshot, we reinitialize and train a model only
on the training set $\mathcal{D}_{i}$.
* •
Re-training. For the $i$-th snapshot, we reinitialize and train a model on the
accumulated training data $\cup_{j=1}^{i}\mathcal{D}_{j}$.
* •
Fine-tuning. For the $i$-th snapshot, the model inherits the learned
parameters of the model trained on the previous snapshots, and we
incrementally train it on $\mathcal{D}_{i}$.
* •
Inductive models. We train each model on the first snapshot and obtain the
embeddings of unseen entities in the following snapshots by neighborhood
aggregation.
* •
Dynamic architecture models. For PNN, following the implementation of (Daruna
et al. 2021), we freeze the parameters learned on previous snapshots and
update new parameters. For CWR, after training on $\mathcal{D}_{1}$, we
replicate a model as the consolidated model. For the following $i$-th
snapshot, we reinitialize and train a temporary model on $\mathcal{D}_{i}$, and merge the temporary model into the consolidated model by copying new parameters
or averaging old ones.
* •
Regularization models. Since the base model parameters increase with the
emergence of unseen entities and relations, we only use the parameters learned
from the previous snapshot to calculate the regularization loss.
* •
Rehearsal models. We store 5,000 training facts from previous snapshots and
add them to the current training set of the $i$-th snapshot. After the
learning, we randomly replace half of these facts with those in
$\mathcal{D}_{i}$.
For a fair comparison, we first tune the hyperparameters of the base model
using grid-search: learning rate in {0.0005, 0.0001, 0.001}, batch size in
{1024, 2048}, embedding dimension in {100, 200}. Then, we use the same base
model for LKGE and all competitors, and tune other hyperparameters. For the
regularization models, the $\alpha$ of regularization loss is in {0.01, 0.1,
1.0}. For our model, the $\beta$ of MAE loss is in {0.01, 0.1, 1.0}. For all
competitors, we use Adam optimizer and set the patience of early stopping to
3.
### 5.2 Link Prediction Accuracy
In Table 2, we run experiments with five random seeds for all models on our datasets and report the means. The results show that: (i) our model consistently achieves the best performance across all datasets. Some results of our model on Fact even outperform re-training. This is because our masked KG autoencoder
effectively improves information propagation based on both old and new
embeddings, and the embedding regularization avoids catastrophic forgetting.
Most competitors only work well on Entity, while our model shows stable and
promising results on all these datasets. (ii) Re-training is far superior to
most baseline models on Relation and Hybrid, while the gaps on Entity and Fact
are small. This is because the KG embedding model learns two aspects of
knowledge: relational patterns and entity embeddings. In Entity and Fact, the
relational patterns are stable, while in Relation and Hybrid, their relational
patterns are constantly changing due to unseen relations. These phenomena
illustrate that the variation of relational patterns is more challenging for
lifelong KG embedding. (iii) The inductive models are only trained on the
first snapshot, and cannot transfer knowledge to unseen relations. So, their
results are lower than other models, especially on Relation and Hybrid with
many unseen relations. (iv) Since the learned parameters are not updated, PNN preserves the learned knowledge well. But on Fact, which brings only a few unseen entities, it lacks new learnable parameters and its performance is poor. CWR averages the old and new model parameters, which does not work well on the embedding learning task. (v) EWC performs well because it can model the importance of each parameter to the learned snapshots using Fisher information matrices. (vi) Unlike the classification tasks, most parameters of KG
embedding models correspond to specific entities, so the training data cannot
be divided into a few types, and we cannot use 5,000 old samples to replay the
learned facts for all entities. The performance of GEM, EMR and DiCGRL is
limited.
Figure 3: MRR changes. $\mathcal{M}_{i}$ is trained for the $i$-th snapshot
and evaluated using the test data of previous snapshots 1 to $i$.
Variants | Entity MRR | H@1 | H@3 | H@10 | Relation MRR | H@1 | H@3 | H@10 | Fact MRR | H@1 | H@3 | H@10 | Hybrid MRR | H@1 | H@3 | H@10
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
LKGE (full) | 0.234 | 0.136 | 0.269 | 0.425 | 0.192 | 0.106 | 0.219 | 0.366 | 0.210 | 0.122 | 0.238 | 0.387 | 0.207 | 0.121 | 0.235 | 0.379
$-$ fine-tuning | 0.123 | 0.068 | 0.136 | 0.225 | 0.126 | 0.073 | 0.146 | 0.231 | 0.154 | 0.091 | 0.176 | 0.269 | 0.109 | 0.060 | 0.126 | 0.201
$-$ autoencoder | 0.222 | 0.124 | 0.255 | 0.415 | 0.185 | 0.100 | 0.212 | 0.355 | 0.191 | 0.105 | 0.215 | 0.369 | 0.198 | 0.111 | 0.227 | 0.367
$-$ embedding transfer | 0.240 | 0.141 | 0.275 | 0.433 | 0.174 | 0.091 | 0.201 | 0.339 | 0.210 | 0.123 | 0.237 | 0.390 | 0.200 | 0.112 | 0.229 | 0.372
$-$ regularization | 0.166 | 0.089 | 0.184 | 0.316 | 0.040 | 0.014 | 0.049 | 0.089 | 0.175 | 0.095 | 0.195 | 0.338 | 0.154 | 0.079 | 0.171 | 0.300
Table 3: Ablation results of link prediction on the union of the test sets in
all snapshots.
To show the performance evolution of LKGE during the learning process, we
evaluate the model $\mathcal{M}_{i}$ trained for the $i$-th snapshot using the
test data from previous snapshots. The MRR results are in Figure 3. LKGE can
maintain the learned knowledge during lifelong learning. On some snapshots
like Entity Snapshot 3, the knowledge update improves the performance on old
test data, which shows that updating old knowledge has the potential to enable backward knowledge transfer.
### 5.3 Knowledge Transfer Capability
To evaluate the knowledge transfer and retention capability of all models, we
report the FWT and BWT of MRR results in Figure 4. Because of the embedding
transfer, the FWT of LKGE is higher than all lifelong learning competitors.
Even on Relation and Hybrid where the KG schema changes, LKGE still keeps the
FWT capability well. MEAN and LAN are designed to transfer knowledge forward
to embed new entities. So, they work well on Entity. However, their FWT
capability is limited on other datasets since they cannot update the old
embeddings to adapt to new snapshots.
BWT is usually negative due to the overwriting of learned knowledge. PNN,
MEAN, and LAN do not update old parameters. Their BWT scores are “NA”. The
poor scores of CWR show the harmful effects of the average operation. The
scores of rehearsal models are also poor, as they cannot store enough facts. LKGE achieves good BWT scores because the embedding regularization maintains the learned knowledge well.
### 5.4 Learning Efficiency
We show the training time on Fact as all snapshots of Fact have the same
training set size. Figure 5 shows the results. Unsurprisingly, re-training is the most time-consuming. Snapshot is also costly because it cannot inherit knowledge from previous snapshots. By contrast, our model is the most efficient, and its advantage is more significant in the final snapshot. This is because
the embedding transfer can use the learned knowledge to accelerate the
learning of new facts.
### 5.5 Ablation Study
We conduct an ablation study by designing four variants of LKGE: “w/o fine-
tuning”, “w/o autoencoder”, “w/o embedding transfer” and “w/o regularization”.
The “w/o fine-tuning” variant is trained on $\mathcal{D}_{1}$ and performs the
embedding transfer on other $\mathcal{D}_{i}$. The latter three variants
disable the specific components in LKGE. The results are shown in Table 3. We
see that (i) although fine-tuning is disabled, “w/o fine-tuning” can still
perform well with only the knowledge from the first snapshot, showing that
embedding transfer can effectively transfer learned knowledge to unseen
entities and relations. (ii) Both “w/o autoencoder” and “w/o regularization”
drop significantly, showing the effects of the masked KG autoencoder and knowledge
retention. (iii) Embedding transfer enables the model to be trained at a
starting point closer to the optimal parameters and stabilizes the embedding
space. Using embedding transfer on Entity slightly decreases the results. This is
because Entity contains massive new entities and needs more plasticity rather
than stability. But on Relation and Hybrid, the results of “w/o embedding
transfer” are lower than the full model, showing that embedding transfer can
reduce the interference caused by the KG schema changes. On Fact, the results
of “w/o embedding transfer” are similar to the full model. This shows that,
even without embedding transfer, the model can still capture the knowledge.
Figure 4: Forward transfer and backward transfer of MRR. Figure 5: Cumulative
time cost on Fact.
## 6 Conclusion and Future Work
This paper studies lifelong embedding learning for growing KGs. For better
knowledge transfer and retention, we propose a lifelong KG embedding model
consisting of masked KG autoencoder, embedding transfer, and embedding
regularization. Experiments on new datasets show better link prediction
accuracy, knowledge transfer capability, and learning efficiency of our model.
In future work, we plan to study lifelong embedding learning in long-tail and low-resource settings.
## Acknowledgments
This work was supported by National Natural Science Foundation of China (No.
62272219).
# ClueWeb22: 10 Billion Web Documents with Visual and Semantic Information
Arnold Overwijk (Microsoft, Redmond, WA, USA), Chenyan Xiong (Microsoft,
Redmond, WA, USA), Xiao Liu (Microsoft, Redmond, WA, USA), Cameron VandenBerg
(Carnegie Mellon University, Pittsburgh, PA, USA), and Jamie Callan (Carnegie
Mellon University, Pittsburgh, PA, USA)
(2022)
###### Abstract.
ClueWeb22, the newest iteration of the ClueWeb line of datasets, provides 10
billion web pages affiliated with rich information. Its design was influenced
by the need for a high quality, large scale web corpus to support a range of
academic and industry research, for example, in information systems,
retrieval-augmented AI systems, and model pretraining. Compared with earlier
ClueWeb corpora, the ClueWeb22 corpus is larger, more varied, of higher-
quality, and aligned with the document distributions in commercial web search.
Besides raw HTML, ClueWeb22 includes rich information about the web pages
provided by industry-standard document understanding systems, including the
visual representation of pages rendered by a web browser, parsed HTML
structure information from a neural network parser, and pre-processed cleaned
document text to lower the barrier to entry. Many of these signals have been
widely used in industry but are available to the research community for the
first time at this scale.
## 1\. Introduction
Large scale web corpora are essential for the research and development of
technologies such as information retrieval (IR), natural language processing
(NLP), and deep learning. Previous ClueWeb corpora, ClueWeb09 (Callan and Hoy,
2009) and ClueWeb12 (Callan and Pane, 2012), have been the standard web
corpora for many research explorations during the last decade (e.g., (Clarke
et al., 2009, 2012; Yang et al., 2019)), but they are more than ten years old.
The web and users have changed, new research frontiers have emerged, and our
web corpora must evolve with them.
Many important research topics in information retrieval are centered around
web search. As one of the most used AI applications, web search presents
research challenges in user understanding, relevance modeling, indexing,
serving efficiency, and many more. All of them rely on a large scale,
realistic web corpus to reflect the data distribution and the realistic
challenges of web search. Perhaps due to the lack of updated web corpora, many
recent IR benchmarks use the derived MS MARCO corpus (Bajaj et al., 2016),
which was sampled from the top retrieved passages of a Bing question answering
system nearly six years ago and is very different from the distribution of
the web.
An emerging usage of web corpora is in retrieval-augmented AI systems, for
example, to provide documents that serve as evidence for question answering
(Chen et al., 2017), grounding information to improve language generation
(Lewis et al., 2020), and enriched contexts for language models (Guu et al.,
2020). A large fraction of these systems use Wikipedia as the retrieval
corpus, which provides high quality but only a fraction of human knowledge. In
comparison, the web is a much richer source of human knowledge, but obtaining
a high quality large scale web corpus is costly and challenging in most
environments (Piktus et al., 2021).
Another increasingly important usage of web corpora is to provide data for
neural model pretraining. As the number of network parameters has grown from
hundreds of millions to trillions in the past four years (Devlin et al., 2019;
Fedus et al., 2021), their pretraining also requires significantly more data
(Hoffmann et al., 2022). Many sought more pretraining data from the web. One
approach is to sift CommonCrawl snapshots to find relatively higher quality
web pages, for example, as the C4 dataset was cooked to pretrain T5 (Raffel et
al., 2020). Web corpora cooked from CommonCrawl provide sufficient quantity
but not necessarily the highest quality. A new series of efforts have curated
better web corpora to empower language model pretraining using proprietary
resources, such as MassiveText (Rae et al., 2021) in DeepMind and the
pretraining corpus of GaLM (Du et al., 2022) and PaLM (Chowdhery et al., 2022)
in Google. These higher quality web corpora contributed significantly to the
effectiveness of pretrained models (Du et al., 2022), but were only available
in a few places, perhaps exclusively to companies with a commercial web search
business.
Table 1. Size and sampling distribution of ClueWeb22. We follow the previous ClueWeb corpora to designate official subsets of web pages: Category B $\subset$ Category A $\subset$ Category L.

| Category | #Pages | #Tokens | Sampling Distribution |
|---|---|---|---|
| ClueWeb22-B | 200M | 696B | From Most Popular Web Pages (“Super Head”) |
| ClueWeb22-A | 2B | 6.1T | From Pages also Frequently Visited by Users (“Head”) |
| ClueWeb22-L | 10B | 16.7T | Mixed Head-Tail Pages (“Head and Tail”) |
These research needs are among many we would like to support with ClueWeb22,
the large scale, industry-quality, web corpus that is now available to the
research community. The construction of ClueWeb22 emphasizes the following
goals: 1) Real Distribution: to reflect the distribution of the web in real-
world scenarios; 2) Large Scale Quality Content: to provide clean web content
at large scale with high quality; 3) Rich Information: to share information
beyond raw text on web pages that are widely used in industry, but previously
unavailable for academia.
Real Distribution. The web pages from ClueWeb22 come from the web discovered
by the crawler of a commercial search engine, which provides a comprehensive
representation of the web. Then its web pages are sampled from all these web
pages discovered and indexed by the search engine, following the distribution
of web search.
Specifically, a page more likely to satisfy potential information needs from
search engine users has a higher probability to be included in ClueWeb22. In
total, we sampled 10 billion web pages from the indexed corpus and grouped
them into three “categories”. As listed in Table 1, these categories follow
the tradition of ClueWeb09 while also closely mimicking real scenarios used in
web search.
1. (1)
ClueWeb22-B (Category B) approximates the super head scenario, the most
frequently visited part of the web, e.g., the main parts of Wikipedia, news
websites, and other top internet domains.
2. (2)
ClueWeb22-A (Category A) approximates the general part of the web regularly
visited through search. It includes two billion web pages and covers most
notable URL domains.
3. (3)
ClueWeb22-L (Category L) introduces the tail part of the web into the
collection, with a total of ten billion web pages sampled from the index after
spam and adult content filtering.
The three categories provide different trade-offs of quality and coverage.
Each one is a subset of the larger ones, B $\subset$ A $\subset$ L, providing
three choices for different research focuses. The size and the information
included in each category of ClueWeb22 were chosen to balance information
richness against distribution cost; too large a dataset may result in
restricted access due to high distribution cost.
The details of the sampling process are described in Section 2. In Section 4 we
analyze the properties of web pages in ClueWeb22, which closely resemble the
distribution of the web visited from the web search engine, e.g., about 40%
web pages in ClueWeb22 are in English and the rest are in other languages.
Table 2. Information of web pages included in ClueWeb22. This information was
obtained by state-of-the-art production-quality pipelines. We balance the
availability of information in each ClueWeb22 category based on the eventual
data size, which determines the distribution and storage cost.

| Information | Categories | Description |
|---|---|---|
| Raw HTML | A, B | The original, unprocessed HTML of the web page |
| Clean Text | A, B, L | Primary text content, i.e., without headers, footers, side bar, navigation panel, etc. |
| Semantic Annotations | A, B, L | Annotations of content structure: title, section headings, paragraphs, lists, and tables |
| Anchor Text & Inlink | A, B, L | The inlinks of a web page and their affiliated texts extracted from ClueWeb22-A |
| Anchor Text & Outlink | A, B | The outlinks of a web page and their affiliated texts extracted from its HTML |
| VDOM Features | A, B | The visual representation features of the content, e.g., location and size of each text piece |
| Rendered Visual Page | B | The screen shot of the rendered web page in a web browser |
| Language Tag | A, B, L | The language of the main content, detected by an updated version of BlingFire (Microsoft, 2019) |
| Topic Tag | A, B, L | The category classified by a supervised neural classifier |
Large Scale Quality Content. Extracting high quality content from raw HTML
pages requires access to information and engineering efforts that are rarely
available outside large commercial systems. Besides raw HTML data, ClueWeb22
also includes parsed clean text from the search engine’s production-quality
content understanding system, which includes engineering and research
advancements accumulated through years of dedicated work. The full content
extraction pipeline is described in Section 3, which involves a web browser to
render web pages and a deep neural network to classify HTML elements.
To lower the entry barrier, we provide the extraction results of several
notable fields of the web page, for example, title, primary text content,
tables, lists, and anchor links. In addition, all the information used in the
parsing API, and the API toolkit itself, are made available with ClueWeb22.
Researchers can customize the content extraction to obtain information
specific to their needs, for example, image-text pairs for vision-language
pretraining.
Rich Information. We include in ClueWeb22 a variety of information produced by
our document understanding system, listed in Table 2. Much of this information
provides important signals for industry systems, but was not available to the
research community. Making this information available to the research
community via ClueWeb22 could facilitate the technology development towards
directions important to real-world systems, but previously overlooked in
academia. For example, semi-structured content such as tables and lists is a
critical part of modern QA systems, but most previous research is restricted
to Wikipedia due to limited data availability (Ma et al., 2021); the visual
presentation of documents is widely used in practical document understanding
systems but less studied in academia, as no such data were previously
available at this scale (Xiong et al., 2019).
Starting August 2022, ClueWeb22 is available for research usage, following the
licensing and distribution processes used with previous ClueWeb datasets. The
scale, distribution, quality, and rich information of ClueWeb22 make it the
only web corpus of its kind widely available for research. To the best of our
knowledge, this is as close as it gets to the real web corpora used in
cutting-edge industry systems. Previously this type of data was only available
in the private sector, making it a privilege for a few places with certain
data access. We believe the release of ClueWeb22 levels the playing field
for the research community, enables more research explorations, and will
facilitate future technology advancement in many areas.
In the rest of this paper, Section 2 describes the construction of ClueWeb22;
Section 3 discusses the content extraction pipeline, followed by analysis of
corpus properties in Section 4. Then we provide some comparison with
CommonCrawl in Section 5 and discuss related datasets in Section 6. After
that, we briefly share the licensing and distribution in Section 7 and
conclude in Section 8.
## 2\. Corpus Construction
The first design principle of ClueWeb22 is to reflect the distribution of the
web, which itself can be defined in different ways. For example, the
importance of a web page can be derived from how many other pages link to it
(Page et al., 1999), how it is discussed on Reddit (Gokaslan and Cohen, 2019),
or how frequently web users browse it. In ClueWeb22, we formulate the
distribution of web pages from the web search perspective, by modeling the
probability of each web page based on how likely it would satisfy web search
users' information needs, i.e., receive a satisfied user click.
Web Page Sampling. We leverage a production model to predict the likelihood of
a web page receiving a satisfied click from any search users via any queries.
The prediction model leverages a wide range of information available in web
search, for example, web graph connectivity, URL domain, document content, and
page structure, to name a few. The model is trained by user clicks in web
search and assigns a query-independent click likelihood to each web page in
the search index. We use this predicted click likelihood as the importance
score of the web page. With it, the web pages discovered by the crawler can be
roughly grouped into four groups:
* •
Super Head: The most popular web pages visited through web search, such as the
content pages of Wikipedia, popular news websites, and top domains people
visit in their daily life;
* •
Head: The regularly visited part of the web, where most search traffic lands;
* •
Tail: The diverse part of the web still regularly visited by users with
specific needs;
* •
Super Tail: The majority of the web discovered by crawlers but barely visited
by users.
We sampled ClueWeb22 from the first three groups via the following steps.
1. (1)
Uniformly sample 200 million web pages for Category-B (ClueWeb22-B) from the
Super Head with emphasis on covering web pages with highest predicted click
likelihood;
2. (2)
Uniformly sample 1.8 billion web pages from the Head and combine them with the
first 200 million to form Category-A;
3. (3)
Uniformly sample the remaining 8 billion web pages from the tail group and mix
them with ClueWeb22-A to form Category-L.
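The nested three-step sampling above can be sketched as follows. This is a toy illustration: the importance scores stand in for the production query-independent click-likelihood predictor, and the cut points (0.9, 0.5, 0.1) and sample sizes are hypothetical placeholders for the real 200M/1.8B/8B page counts.

```python
import random

def sample_categories(pages, seed=0):
    """Sketch of the nested ClueWeb22 sampling (hypothetical thresholds).

    `pages` is a list of (url, importance) pairs, where importance is a
    stand-in for the production query-independent click likelihood.
    """
    rng = random.Random(seed)
    # Hypothetical cut points separating super head / head / tail / super tail.
    super_head = [u for u, s in pages if s >= 0.9]
    head = [u for u, s in pages if 0.5 <= s < 0.9]
    tail = [u for u, s in pages if 0.1 <= s < 0.5]
    # The super tail (s < 0.1) is excluded entirely, as in the paper.

    cat_b = rng.sample(super_head, min(2, len(super_head)))  # 200M in reality
    cat_a = cat_b + rng.sample(head, min(3, len(head)))      # +1.8B in reality
    cat_l = cat_a + rng.sample(tail, min(4, len(tail)))      # +8B in reality
    return cat_b, cat_a, cat_l

pages = [(f"page{i}", i / 10) for i in range(10)]
b, a, l = sample_categories(pages)
assert set(b) <= set(a) <= set(l)  # the B ⊂ A ⊂ L nesting holds
```

Because each category is built by extending the previous one, the official subset relation B $\subset$ A $\subset$ L holds by construction.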
Before sampling, we manually excluded some websites that we considered to be
mainly about personal information. Standard spam/adult filters were applied
before sampling. We decided not to include the Super Tail part of the web as
our manual examination found the quality of web pages there quite low. To
include sufficient useful information from the Super Tail, the scale of the
corpus would exceed a reasonable distribution cost. ClueWeb22-L already
includes tail web pages less explored in previous web corpora. We consider it
a good balance of quality and coverage.
Table 3. English Wikipedia pages included in three categories of ClueWeb22,
manually picked from one thousand random samples.

| ClueWeb22-B | Page Type |
|---|---|
| https://en.wikipedia.org/wiki/Super_Bowl_XXXVIII | Entity |
| https://en.wikipedia.org/wiki/Chevrolet_Corvette_(C8) | Entity |
| https://en.wikipedia.org/wiki/Discourse | Entity |

| ClueWeb22-A | Page Type |
|---|---|
| https://en.wikipedia.org/wiki/1897_in_film | List |
| https://en.wikipedia.org/wiki/2019_FFA_Cup_Final | Entity |
| https://en.wikipedia.org/wiki/Category:Public_administration_schools_in_the_United_States | Category |

| ClueWeb22-L | Page Type |
|---|---|
| https://en.wikipedia.org/wiki/Template%3ANeighbourhoods_in_Kolkata | Edit Template |
| https://en.wikipedia.org/wiki/Category:Sports_leagues_established_in_1990 | Category |
| https://en.wikipedia.org/wiki/Talk:The_Shoes_of_the_Fisherman | Wiki Project |
Examples of Web Pages with Different Sampling Importance. The query-
independent click prediction system has quite high prediction accuracy and has
been used for a while in many real-world applications. To show how the
documents in these categories align with our intuitions of web page
popularity, we use Wikipedia as an example and list some random Wiki URLs from
ClueWeb22-B, ClueWeb22-A, and ClueWeb22-L in Table 3.
The total number of discovered URLs from Wikipedia.org is quite large, far
larger than the number of pages in official Wikipedia dumps. Among these Wiki
URLs, the entity pages, the typical content pages we read on Wikipedia, only
form a tiny fraction. A notable amount of Wiki URLs are redirects, categories,
and functional pages for Wikipedia construction efforts. Many are not
informative. As shown in Table 3, the importance prediction model naturally
separates the Wiki URLs into different ClueWeb22 categories. Those in
ClueWeb22-B are mostly entity pages, the part search users visit the most.
Category and List pages, which are less informative but still contain
meaningful information, start to appear in ClueWeb22-A. The functional pages
that are for editors, not web users, are mostly assigned to ClueWeb22-L.
## 3\. Web Page Understanding and Content Extraction
An often necessary step when working with web pages is to extract the content
from raw HTML, such as main body text, tables, lists, hyperlinks, and
multi-media content. Content extraction is challenging because the web is
sophisticated: there is great diversity from both the user perspective (the
layout of the page) and the system perspective (the underlying code used to
present the page).
If restricted to only using the HTML of web pages, it is even more difficult
to extract clean content from them. The HTML alone only provides a partial
view of the web page. The page layouts, and sometimes even content, require
downloading and dynamically rendering with additional resources (e.g. CSS and
JavaScript). For example, without actually rendering the page, it is hard to
tell the visibility of text pieces, their location in the page, and the
organization of structural contents.
On average, 50+ secondary URLs are needed to render one web page. Downloading
them imposes significant cost, and rendering the page and executing the
secondary content is also non-trivial: it effectively requires running an
actual web browser. As a result, many open-source web page parsing
tools restrict their operations to static HTML, limiting their ability to
extract the exact content a web page displays to users.
In this section we describe the industrial-strength web page understanding
system used to extract content from ClueWeb22. As illustrated in Figure 1, it
enhances traditional HTML parsing with the visual representations of the page
elements and a Transformer-based model. The full system includes three main
components, starting from a visual render (Sec. 3.1), that records the visual
appearance of HTML tree nodes, then a semantic annotator (Sec. 3.2), that
predicts the content categories of HTML elements, and finally an enhanced
parser (Sec. 3.3), which extracts content from HTML aided by visual
information and model predictions.
Figure 1. The system pipeline to extract content from ClueWeb22 web pages. The
extracted Clean Content is available for all ClueWeb22 pages which is a good
starting point for common applications. The Annotated vDOM is available in
ClueWeb22-A to support customized content extraction for more specific needs.
### 3.1. Visual Render
There are many different ways for web designers to present content to their
visitors. For example, the behavior of any HTML tag (e.g. $<$h1$>$) can be
flexibly customized through CSS and JavaScript. While there are standard
guidelines in web page construction, in practice many different ways are used
to accomplish the same presentation. Given the highly variable nature of web
programming, the ultimate way to determine which parts of a page include
useful content is to rely on their design target: the presentation of web pages
in web browsers.
The web page understanding system used for ClueWeb22 is built toward this
fundamental solution. When a web page is crawled, the system fetches both the raw HTML
and its secondary resources. It then uses the full information to render the
web page in a headless browser. The browser uses a standardized desktop
environment of 1024 pixels width and records the most useful visual features
for each HTML DOM node. In total it records 30 visual features for each HTML
element node111In ClueWeb22, element nodes refer to those with HTML tags,
e.g., $\langle$html$\rangle$, $\langle$body$\rangle$, $\langle$div$\rangle$,
and $\langle$p$\rangle$.. The full list of visual features is in Table 4. Text
nodes—the text pieces inside element nodes—inherit visual features from their
parent element nodes.
As shown in previous research (Xiong et al., 2019; Ainslie et al., 2020), an
earlier version of these visual features, though released at a much smaller
scale, contributed significantly to the accuracy of some document
understanding tasks, for example, in web page keyphrase extraction (Ainslie et
al., 2020). To balance the total size of ClueWeb22 and adhere to the
distribution cost constraints, we include the HTML DOM tree enhanced with
visual features for ClueWeb22-A web pages; screenshots (1024 pixels wide, max
height 6144) of web pages rendered by the browser are available in
ClueWeb22-B.
Table 4. Visual Features. All features are extracted at the token level and the parent block level (the parent node of the text in the HTML DOM tree). We merge the description of features in the same group, separated by “/”. Features about position and size are measured at the pixel level in the rendered web page.

| Name | Description |
|---|---|
| X/Y position | Horizontal/vertical position |
| Width/Height | Width/height in rendered pixels |
| Offset left/top | Left/top offset relative to the parent element |
| Offset width/height | Element’s width/height including padding and borders |
| Client left/top | Element’s left/top border width |
| Client width/height | Element’s width/height including padding |
| Font color: Transparent/Red/Green/Blue | The value of font color in the four dimensions |
| Font weight/size | Font weight (or boldness)/size |
| Font italic style | Whether the font is in italic style |
| Text decoration style | Decoration added to text, such as underline or overline |
| List style type | List-item marker type for a list |
| Display | Display behavior of an element, such as none, inline, block |
| Cursor | Mouse cursor display type when pointing over an element |
| Line height | Line box height |
| Text transform | Text capitalization style |
| Opacity | Opacity level |
| Border style Left/Top/Right/Bottom | The border style of each side |
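The per-node visual record can be modeled as a small structure. This is a sketch: the field names below are a hypothetical subset of the 30 features listed in Table 4, and the defaults are illustrative only.

```python
from dataclasses import dataclass, field

@dataclass
class VDOMNode:
    """A DOM element node enriched with rendered visual features (sketch).

    Field names mirror a subset of Table 4; the real system records 30
    features per element node, measured in pixels in a 1024px-wide render.
    """
    tag: str
    node_id: int            # unique id linking the node back to the raw HTML
    x: float = 0.0          # horizontal position
    y: float = 0.0          # vertical position
    width: float = 0.0
    height: float = 0.0
    opacity: float = 1.0
    font_size: float = 16.0
    children: list = field(default_factory=list)

    def inherit_to_text(self):
        """Text nodes inherit visual features from their parent element."""
        return {"x": self.x, "y": self.y, "width": self.width,
                "height": self.height, "opacity": self.opacity}

p = VDOMNode(tag="p", node_id=42, x=10, y=120, width=800, height=40)
assert p.inherit_to_text()["width"] == 800
```

The `inherit_to_text` method reflects the convention stated above: text nodes carry no visual features of their own and take them from their parent element node.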
### 3.2. Semantic Annotator
With the visual enhanced DOM tree (vDOM), the next step in the pipeline is to
use a neural semantic annotator to classify vDOM tree nodes into target
content groups. In next part of this section we discuss the annotation task,
the model architecture, and the training labels of the semantic annotator.
Semantic Annotation Task is to predict whether a node in the HTML DOM tree
belongs to a set of predefined categories. In ClueWeb22 the model uses the
following six categories:
1. (1)
Title: The title of the document content, which may be different from the HTML
$<$title$>$ tag.
2. (2)
Primary Content: The main content of the web page. Formally a piece of text is
primary content if it is the main information for visitors to consume. This
excludes elements that occur on other pages of the same site, such as headers,
footers, and navigation, as well as elements that change with page reloading,
e.g., advertisements. In addition, we remove elements that are not the core
content, for example, the comment section of a blog site, where the blog is
the core content.
3. (3)
Heading: The heading of each section in the primary content.
4. (4)
Paragraph: The natural language paragraphs of the primary content.
5. (5)
Table: Content tables in the primary content, grouped in tabular format. This
is determined by the page presentation to users, regardless of their organization
in the static HTML or dynamic scripts.
6. (6)
List: Content lists in the primary content, grouped in the list format.
Because ClueWeb22-L does not include the HTML or vDOM due to size constraints,
we include additional format annotations in the clean content to convey the
structure information. These format annotations are derived from annotator
predictions.
1. (7)
Table Row: A row that contains multiple cells in a table.
2. (8)
Table Cell: The element storing one content unit in a table.
3. (9)
Table Header: The row or column header that contains labels for content in a
table.
4. (10)
Table Caption: The description or summary of table content.
5. (11)
List Item: The element storing one content unit in a list.
6. (12)
HTML Title: The page title displayed in the browser tab.
7. (13)
Invisible Text: Text that is invisible when rendered in the browser, i.e.,
text with zero opacity or less than two pixels in both width and height.
The prediction of Title and Primary Content is applied on non-empty text nodes
in the vDOM. Heading and Paragraph predictions are on element nodes. The
classification of Table and List is performed on specific element nodes:
$\langle$table$\rangle$ for tables; $\langle$ol$\rangle$, $\langle$ul$\rangle$, and $\langle$dl$\rangle$ for lists.
The prediction is carried out as a multi-label task. A node can belong to multiple
categories, for example, both being a Table and Primary Content, or none of
them.
Model Architecture. The classification is conducted by a hierarchical
Transformer network applied on the DOM tree. At its first level, the network
leverages a pretrained multi-lingual BERT-style model to produce a text
representation for each target DOM tree node. Then it concatenates the text
representation of a node with the projected representation of its visual
features to form visual-enriched embeddings. At the second level, another
shallow Transformer network takes the input sequence of visual-enriched
embeddings from all nodes, produces contextualized representations, and
performs multi-label classification on each node. The full hierarchical
Transformer is trained end-to-end with the first level network initialized by
XLM-R (Conneau et al., 2020) and the second level from scratch.
Weak Supervision Labels. Manually labeling the role of vDOM nodes is quite
difficult. We instead train with a large number of weak supervision labels,
accumulated through years of engineering and system iterations. The weak
supervision signals come from the predictions of rule-based content extractors
and classic feature-based models.
The trained semantic annotator is applied to the entire ClueWeb22 corpus and
the predicted labels for all ClueWeb22 documents are included in ClueWeb22.
### 3.3. Enriched Parser
The visual features and semantic annotations are linked to the HTML DOM tree
nodes using a node id attribute as the unique identifier. This can be used to
enhance standard HTML parsing systems with visual and semantic annotation
information. In the rest of this section, we describe the parser used to
generate the ClueWeb22 clean text corpora. It is a vanilla way to extract
content. We consider it sufficient for many applications and suggest using it
as the starting point for building more dedicated parsers, if needed.
In general, the parser involves two steps: the first is to construct the
vDOM tree from raw HTML; the second is to extract target content fields from
the tree.
vDOM Tree Construction is to parse the HTML sequence into a tree. One can
leverage a typical HTML parser and use the injected “node id” attribute to
align the parsed tree nodes with their visual features and semantic
annotations. We used Beautiful
Soup222https://pypi.org/project/beautifulsoup4/4.9.3/ for ClueWeb22.
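The alignment step can be sketched as follows. The paper used Beautiful Soup; for a self-contained illustration we use the standard library's `html.parser` instead, and the `node-id` attribute name and the shape of the feature dictionaries are assumptions.

```python
from html.parser import HTMLParser

class NodeIdCollector(HTMLParser):
    """Collect (node-id -> tag) pairs so parsed tree nodes can be joined
    with externally supplied visual features and semantic annotations.
    Sketch only: the real pipeline used Beautiful Soup, and the injected
    attribute name is assumed here."""

    def __init__(self):
        super().__init__()
        self.nodes = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "node-id" in attrs:
            self.nodes[int(attrs["node-id"])] = tag

# Toy HTML with injected node ids, plus visual features keyed by node id.
html = '<div node-id="1"><p node-id="2">Hello</p></div>'
visual = {1: {"width": 1024}, 2: {"width": 800}}

collector = NodeIdCollector()
collector.feed(html)
aligned = {nid: (collector.nodes[nid], visual[nid]) for nid in collector.nodes}
assert aligned[2] == ("p", {"width": 800})
```

The key design point is that the unique `node id` attribute survives any standard HTML parse, so visual features and semantic annotations shipped alongside the HTML can be re-attached regardless of which parser a researcher chooses.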
Target Field Extraction is to extract content for the annotated document
fields by traversing the parsed tree. Some document fields are
straightforward; for them we followed common practice. Others require enhanced
treatment, which we discuss in more detail.
Primary Content. All the texts under primary content tags are extracted to
form the primary content. A naive concatenation of the text sequences from
multiple tree nodes, however, does not yield well-formatted text passages.
There is often a noticeable amount of extra spaces, tabs, and line breaks,
which destroy the text flow. Also, quite frequently web pages include text
pieces invisible to the readers. The invisible texts are a mix of meaningful
content, functional text, and spam. Identifying them from raw HTML DOM tree
without visual features is nearly impossible.
To produce well-formatted primary content, we format the extracted text pieces
by considering DOM tree hierarchy relationships, texts and HTML tags, visual
features, and primary content annotations. The exact logic can be found in our
open source repository. To handle the invisible texts, we use visual features
(opacity, width, and height) to decide text visibility and include “invisible”
annotation tags in the clean contents. One can decide whether to filter them
when using ClueWeb22 based on specific needs.
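A minimal sketch of the visibility heuristic, assuming the feature names from the paper (opacity, width, height); the threshold values and the tagging format are illustrative assumptions, not the production logic:

```python
def is_visible(node_features, min_opacity=0.05):
    """Heuristic visibility check from rendered visual features.
    Feature names follow the paper; the thresholds are illustrative."""
    return (node_features.get("opacity", 1.0) > min_opacity
            and node_features.get("width", 0) > 0
            and node_features.get("height", 0) > 0)

def tag_invisible(text_nodes):
    """Wrap invisible text pieces in an annotation tag so downstream users
    can decide whether to filter them."""
    out = []
    for text, feats in text_nodes:
        out.append(text if is_visible(feats) else f"<invisible>{text}</invisible>")
    return out
```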
Table and List. Tables and lists include multiple elements, such as cells and
items, and their semantic annotations are predicted at the element node level.
Our extraction system iterates through the vDOM tree and extracts the child
elements of tagged element nodes: table header, table caption, table row,
table cell, and list item elements. It then combines them into a table or
list.
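The element-level combination step can be sketched as follows; the label strings mirror the tag names above, but the flat (label, text) data layout is a simplifying assumption:

```python
def assemble_table(annotated_nodes):
    """Combine element-level annotations into a table.
    `annotated_nodes` is a document-order list of (label, text) pairs with
    labels drawn from the paper's tag set (table caption, table row, table cell)."""
    table = {"caption": None, "rows": []}
    for label, text in annotated_nodes:
        if label == "table caption":
            table["caption"] = text
        elif label == "table row":
            table["rows"].append([])  # start a new row
        elif label == "table cell":
            if not table["rows"]:
                table["rows"].append([])
            table["rows"][-1].append(text)
    return table
```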
Besides the extracted content, we also release the intermediate information,
including the rendered pages (as JPEG images), visual features, and semantic
annotations, to support more research explorations. For example, one can extend
our open-source extraction system to extract other fields such as images and
hyperlinks, or develop one's own content extraction logic with the benefit of
visual features and semantic annotations. It is also possible to leverage the
annotations in ClueWeb22 as weak supervision signals to train new annotation
models.
### 3.4. Anchor Graph Construction
The hyperlinks between web pages and the affiliated texts, often referred to as
the anchor graph, provide valuable information for many applications. The link
structure is a significant source of signal for web page importance
estimation, for example, in PageRank (Page et al., 1999) and in the click
likelihood prediction model used to sample ClueWeb22. The hyperlink texts
affiliated with inlinks, the “anchor texts”, are important contexts for the
corresponding documents in search (Metzler et al., 2009). They are commonly
viewed as close approximations of search queries and used as weak supervision
for search models (Zhang et al., 2020).
The hyperlink information is available in the HTML data of ClueWeb22-A. One
can leverage standard parsers to extract the hyperlinks and construct the
anchor graph. To facilitate research explorations and standardize evaluations,
we pre-construct the anchor graph using ClueWeb22-A HTML vDOM and include it
in the ClueWeb22 distribution.
Following the standard approach, we first extract the hyperlinks and their
associated texts for each document in ClueWeb22-A (outlinks) and then merge
them to get inlinks for each document. By design, outlinks can point to
non-ClueWeb documents, but inlinks all come from ClueWeb22-A pages. If a page
points to the same outlink target multiple times, we randomly keep one of the
hyperlinks. At most one thousand inlinks and outlinks are kept per page. The
downsampling is done first on outlinks and then on inlinks, making the latter
a subset of the former.
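The de-duplication, per-page cap, and inlink inversion described above can be sketched as follows; the data layout is a toy assumption, and the real pipeline runs at corpus scale and also caps inlinks per page:

```python
import random
from collections import defaultdict

MAX_LINKS = 1000  # per-page cap used in ClueWeb22

def build_anchor_graph(pages, seed=0):
    """pages: {source_url: [(target_url, anchor_text), ...]}.
    De-duplicate repeated links to the same target (keeping one at random),
    cap outlinks per page, then invert the kept outlinks into inlinks."""
    rng = random.Random(seed)
    outlinks, inlinks = {}, defaultdict(list)
    for src, links in pages.items():
        by_target = defaultdict(list)
        for tgt, text in links:
            by_target[tgt].append(text)
        deduped = [(tgt, rng.choice(texts)) for tgt, texts in by_target.items()]
        kept = deduped if len(deduped) <= MAX_LINKS else rng.sample(deduped, MAX_LINKS)
        outlinks[src] = kept
        for tgt, text in kept:
            if tgt in pages:  # inlinks are only recorded for corpus documents
                inlinks[tgt].append((src, text))
    # The per-page inlink cap would be applied here as well (omitted for brevity).
    return outlinks, dict(inlinks)
```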
In addition, we record whether a hyperlink comes from the header/footer of its
origin web page. Hyperlinks in the header/footer of a web page are often
functional, e.g. “homepage”, “contact us”, which may not be informative. We
keep the hyperlinks that come from the same domain but one can filter them
out, for example, to avoid self-voting when calculating PageRank.
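A sketch of such filtering, using the recorded header/footer flag and a hostname comparison as a simple approximation of the in-domain check (real systems would use registered domains rather than full hostnames):

```python
from urllib.parse import urlparse

def filter_links(links, drop_header_footer=True, drop_in_domain=True):
    """links: [(src_url, tgt_url, from_header_footer), ...].
    The header/footer flag is recorded in the ClueWeb22 anchor data; the
    in-domain check here compares hostnames as an approximation."""
    kept = []
    for src, tgt, hf in links:
        if drop_header_footer and hf:
            continue  # functional links like "homepage", "contact us"
        if drop_in_domain and urlparse(src).hostname == urlparse(tgt).hostname:
            continue  # avoid self-voting, e.g. when computing PageRank
        kept.append((src, tgt))
    return kept
```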
Figure 2. The distributions of the top twenty URL domains in each part of the
ClueWeb22 dataset: (a) ClueWeb22-B, (b) ClueWeb22-A, (c) ClueWeb22-L. The top
twenty domains cover 12.6% (B), 12.2% (A), and 13.3% (L) of the entire corpus.
## 4. Corpus Analysis
This section analyzes the properties of ClueWeb22, including the corpus
distribution, statistics of document contents, and the quality of the clean text.
### 4.1. Corpus Distribution
We first analyze the distributions of URL domains, languages, and topic
categories of ClueWeb22.
URL Domain. The distributions of the top URL domains in ClueWeb22 are plotted
in Figure 2. These top domains align with our intuitions about web popularity
in search in 2022.
The top domains in ClueWeb22-B are those the search engine users around the
world visit most frequently. They include high quality, trustworthy web sites
such as Wikipedia and IMDB, as well as places people visit regularly for their
daily activities, such as YouTube (entertainment) and Amazon (shopping).
Popular websites from the big non-English speaking markets like China and
Japan are also included. As discussed in previous sections, the documents were
sampled through the view of one commercial search engine, and the distribution
reflects that system's perspective. For example, the heavy focus on certain
domains is perhaps a result of experiences offered by the search engine, such
as question answering and domain-specific features.
Moving from ClueWeb22-B to ClueWeb22-A and then ClueWeb22-L, websites that
cover a more diverse range of real-world activities become more popular.
YouTube is the most popular site in ClueWeb22-A and ClueWeb22-L. Websites
about jobs, food, and real estate are more popular.
Compared to previous ClueWeb datasets, the top domains in ClueWeb22 reflect
the dramatic changes of the internet in the past decade. There has been
significant growth of multi-media and user-created content, often referred to
as Web 2.0. Websites such as Quora, Yelp, and Instagram have become far more
popular over that period.
Figure 3. The distributions of the top twenty languages in ClueWeb22:
(a) ClueWeb22-B, (b) ClueWeb22-A, (c) ClueWeb22-L. The top twenty languages
cover 97.2% (B), 96.3% (A), and 95.0% (L) of the full set.
Figure 4. The distributions of the top twenty topics in ClueWeb22, classified
by a standard text classifier: (a) ClueWeb22-B, (b) ClueWeb22-A,
(c) ClueWeb22-L. The top twenty topics cover 12.0% (B), 12.1% (A), and
12.7% (L) of the full set.
Language. Figure 3 plots the most popular languages in ClueWeb22. It is a
direct count of the document language tags included in the ClueWeb22 release.
Note that language detection is done by an updated version of BlingFire, which
achieved 99% detection accuracy on 365 languages (Microsoft, 2019).
As expected, English is the most frequent language in ClueWeb22. That said,
the fraction of English documents is less than 50% in ClueWeb22-B, and drops
to below 40% in ClueWeb22-L as the corpus becomes more diverse. The other top
languages are correlated with the presence of the search engine in different
markets, with a strong showing of East Asian languages in comparison to other
web corpora.
The uneven language distribution is a known challenge in web data
construction. Crawling web pages of one geographic location using machines in
other places is difficult, and the differing characteristics of web pages in
different markets raise further challenges in navigating the web. Although
there is a long way to go to eliminate the biases towards certain languages,
ClueWeb22 moves one step forward with more diverse coverage of non-English
documents. In total, ClueWeb22 includes 207 languages and has one of the
highest fractions of non-English documents among publicly available web corpora.
Topic. In ClueWeb22 we include the predicted document topics as auxiliary
data. The topic of a document is classified by a supervised neural model,
using a standard network architecture. It learns to classify documents into
100k topics. The topic catalog is extracted from the web in a semi-automatic
fashion, and the training used weak supervision labels collected along the way.
This weak supervision makes the classified topics suitable mainly for analysis
and reference purposes. We also note that the catalog and weak supervision
labels were obtained from the western internet and thus may reflect certain
focuses and biases from there.
The top 20 topics are plotted in Figure 4. The popular topics reflect how
users leverage search engines in their daily lives. Health-related content is
the most popular among all categories. There are large appearances of
transportation, food, accommodation, entertainment, and learning, as well as
an emphasis on technology, especially in ClueWeb22-B.
Figure 5. The length distributions of HTML files in ClueWeb22, counted as the
number of raw HTML characters (code and text; mean 117k), characters of all
texts (mean 12k), and tokens of all texts (mean 3k).
Figure 6. The length distributions of text tokens in title (mean 13), primary
content (mean 1.3k), and paragraphs (mean 54) in ClueWeb22, using the XLM-R
vocabulary (Conneau et al., 2020).
### 4.2. Document Content Statistics
In this part of the section, we conduct various studies on the extracted
contents of ClueWeb22.
We first show the length distributions of ClueWeb22 pages, from the raw HTML
characters and entire parsed texts to the fine-grained document fields. All
statistics are on the full ClueWeb22-L set when there is no significant
difference between the ClueWeb22 categories.
The length distributions of web pages are plotted in Figure 5. We show three
distributions.
1. Raw HTML Characters are the original characters in the HTML of ClueWeb22
web pages, without any processing.
2. All Text Characters are the characters of the texts directly extracted from
the raw HTML, all fields mixed together, without any filtering and before our
content extraction (which produces the clean text later).
3. All Text Tokens are the XLM-R (Conneau et al., 2020) sub-token counts of
the directly extracted texts.
These three statistics provide an overview of the raw data of ClueWeb22. The
directly extracted text includes all text fragments in the HTML, many of which
are noise; it is the superset to be filtered for clean content.
The distributions are similar among the three categories. They start with a
Gaussian-like distribution on the shorter side, followed by a long tail
towards very long pages. Notably, the average number of text tokens per
document is about 3k, with the 70th percentile at around 5k, much longer than
the 512 maximum token length used in many pretrained language models.
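The token-length statistics can be reproduced along these lines; a whitespace split stands in for XLM-R subword tokenization (e.g. via HuggingFace's `AutoTokenizer`), so the documents and numbers below are purely illustrative:

```python
import statistics

def length_stats(token_counts):
    """Mean and an approximate 70th percentile of per-document token counts."""
    counts = sorted(token_counts)
    p70 = counts[int(0.7 * (len(counts) - 1))]
    return statistics.mean(counts), p70

texts = ["one two three", "a b", "w x y z v"]  # toy documents
counts = [len(t.split()) for t in texts]       # stand-in for subword tokenization
mean, p70 = length_stats(counts)
```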
Table 5. The statistics of the number of semantic annotation tags per document in ClueWeb22. Primary content coverage is 100% by definition, though a page can have an empty primary content field.
| Total | Average Per Doc | Standard Deviation
---|---|---|---
| B | A | L | B | A | L | B | A | L
Title | 153.6M | 1.6B | 8.1B | 0.8 | 0.8 | 0.8 | 0.6 | 0.6 | 0.6
Heading | 2.3B | 22.1B | 93.9B | 11.7 | 11.1 | 9.4 | 16.4 | 16.7 | 15.2
Paragraph | 2.2B | 16.4B | 120.9B | 10.8 | 8.2 | 12.1 | 20.5 | 18.0 | 25.0
List | 273.3M | 2.0B | 6.9B | 1.4 | 1.0 | 0.7 | 4.5 | 3.7 | 3.0
Table | 166.9M | 1.6B | 8.8B | 0.8 | 0.8 | 0.9 | 3.7 | 3.3 | 4.0
The statistics of the extracted document fields are shown in Figure 6. We
focus on three fields: title, primary content, and paragraph, the last of
which splits the primary content field. The lengths of these extracted fields
follow distribution trends similar to those at the raw HTML level. The average
length of primary content is 1,352 tokens, about 44% of the full text in
Figure 5; the majority of raw text in web pages is not primary content and is
filtered out.
Content cleaning is a critical component of many web systems but is often
overlooked in the research community, partly due to a lack of sufficient data.
We hope the rich information released with ClueWeb22 can demonstrate the
importance of better content extraction and facilitate more research in this
under-explored direction.
The statistics of structured annotations. We calculate the number of semantic
annotation tags in ClueWeb22, e.g., the number of tables detected in a web
page. Table 5 lists the total, average, and standard deviation of annotated
fields per document.
Twenty percent of web pages do not have explicit HTML titles; many web
designers use other ways to present titles in pages, another reflection of the
diverse nature of web page construction. The web is also quite structured:
there are about 10 headings, 1 list, and 0.8 tables extracted per page. The
list and table annotations in ClueWeb22 provide large-scale structured
information to be explored in directions such as information extraction,
question answering, and structured data understanding.
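Statistics of this kind can be computed with a few lines; the per-document tag counts below are toy data, not ClueWeb22 values:

```python
import statistics
from collections import Counter

def annotation_stats(docs, field):
    """Per-document statistics for one annotation field, as in Table 5.
    `docs` is an iterable of per-document tag Counters."""
    counts = [doc.get(field, 0) for doc in docs]
    return sum(counts), statistics.mean(counts), statistics.pstdev(counts)

docs = [Counter(heading=12, table=1), Counter(heading=8), Counter(heading=10, table=2)]
```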
Table 6. The statistics of anchor links in ClueWeb22, including inlinks and outlinks. The numbers after removing hyperlinks from the headers and footers of web pages (w.o. H&F) and after additionally removing hyperlinks within the same web domain (w.o. In-Domain) are also listed. Average and standard deviation are per document for Total Number and per anchor text for Token Length.
| Total | Average | Standard Deviation
---|---|---|---
| B | A | L | B | A | L | B | A | L
Inlink | | | | | | | | |
Total Number | 9.1B | 31.2B | 52.7B | 49.4 | 17.6 | 12.1 | 155.9 | 85.0 | 64.9
w.o. H&F | 6.5B | 23.0B | 39.8B | 35.3 | 13.0 | 9.1 | 123.0 | 67.4 | 51.9
w.o. H&F&In-Domain | 548M | 1.1B | 1.5B | 3.0 | 0.6 | 0.3 | 29.3 | 12.8 | 9.0
Token Length | 75.5B | 263.6B | 452.4B | 8.3 | 8.5 | 8.6 | 132.2 | 107.8 | 92.1
w.o. H&F | 60.4B | 218.1B | 382.2B | 9.3 | 9.5 | 9.6 | 109.8 | 88.6 | 77.2
w.o. H&F&In-Domain | 4.4B | 9.1B | 12.8B | 8.0 | 8.2 | 8.4 | 57.7 | 75.7 | 86.8
Outlink | | | | | | | | |
Total Number | 22.3B | 211.5B | - | 112.1 | 106.1 | - | 125.9 | 121.1 | -
w.o. H&F | 16.1B | 151.6B | - | 81.0 | 76.0 | - | 109.9 | 104.6 | -
w.o. H&F&In-Domain | 2.3B | 18.5B | - | 11.4 | 9.3 | - | 29.8 | 26.3 | -
Token Length | 174.2B | 1.5T | - | 7.8 | 7.2 | - | 89.8 | 77.2 | -
w.o. H&F | 144.3B | 1.3T | - | 8.9 | 8.3 | - | 89.4 | 70.4 | -
w.o. H&F&In-Domain | 18.6B | 130.6B | - | 8.2 | 7.1 | - | 133.6 | 97.2 | -
Table 7. The fraction of outlink targets in ClueWeb22, from ClueWeb22-B and ClueWeb22-A to the ClueWeb22 categories and outside ClueWeb22. All anchors, those not appearing in headers and footers (w.o. H&F), and those additionally not in-domain (w.o. In-Domain) are shown.
From | To ClueWeb22-B | To ClueWeb22-A | To ClueWeb22-L | Outside
---|---|---|---|---
ClueWeb22-B | 24.1% | 44.1% | 54.9% | 45.1%
w.o. H&F | 21.4% | 40.3% | 51.6% | 48.4%
w.o. H&F &In-Domain | 12.9% | 19.5% | 22.5% | 77.5%
ClueWeb22-A | 19.0% | 39.4% | 52.2% | 47.8%
w.o. H&F | 15.8% | 35.0% | 48.4% | 51.6%
w.o. H&F&In-Domain | 14.3% | 21.2% | 24.0% | 76.0%
Table 8. The fraction of inlink source in ClueWeb22 before and after applying
the without header footer (w.o. H&F) and without in-domain link (w.o. In-D)
filters. Note that all inlinks are extracted from the HTML pages of
ClueWeb22-A.
From $\downarrow$ To $\rightarrow$ | B: Total | B: w.o. H&F | B: w.o. H&F&In-Domain | A: Total | A: w.o. H&F | A: w.o. H&F&In-Domain | L: Total | L: w.o. H&F | L: w.o. H&F&In-Domain
---|---|---|---|---|---|---|---|---|---
From ClueWeb22-B | 26.3% | 26.9% | 18.9% | 17.1% | 16.9% | 17.4% | 14.0% | 13.8% | 16.6%
From ClueWeb22-A $\setminus$ B | 73.7% | 73.1% | 81.1% | 86.8% | 87.1% | 84.1% | 90.3% | 90.4% | 85.4%
Anchor Graph. The statistics of anchor graphs are shown in Table 6. We show
the statistics of the raw anchor graph data and that after two filtering
strategies. The first excludes hyperlinks in headers and footers, using the
flags included in the anchor data. The second filters out hyperlinks that
connect pages from the same domain, in addition to removing header and footer
links.
In total, a large number of anchor links are extracted from ClueWeb22-A, most
of them in-domain. The number of inlinks per page drops significantly from
ClueWeb22-B to ClueWeb22-A and then to ClueWeb22-L; head web pages receive
more inlinks than tail ones, as expected. The numbers of outlinks per page in
ClueWeb22-B and ClueWeb22-A are roughly the same.
The drastic drop in anchor links after removing in-domain ones shows that most
hyperlinks point to other pages on the same website. How to identify the
anchor links that are more informative, i.e., that carry semantic information
about the destination URLs, is an important research question for making the
best use of the anchor information.
We also provide a preliminary analysis of the connectivity of the anchor graph
in ClueWeb22. The fractions of outlink destinations are shown in Table 7, and
the sources of inlinks in Table 8. After the two filters, more than 75% of
outlinks in ClueWeb22-B and ClueWeb22-A point to web pages outside of
ClueWeb22, reflecting the enormous scale of the web. ClueWeb22-B receives more
hyperlinks from both ClueWeb22-B and ClueWeb22-A, 13% and 14%, in comparison
to its size, which is 200 million out of 10 billion (2%). This indicates that
ClueWeb22-B is sampled from web sites closer to the center and more connected
part of the web graph. The ClueWeb22-B subset, which is 20% of ClueWeb22-A, on
the other hand, provides slightly less than 20% of inlinks.
### 4.3. Clean Text Quality
Manual examination indicates that the quality of ClueWeb22 clean text,
especially in ClueWeb22-B and ClueWeb22-A, is pretty good. They are sampled
from the popular part of the web, filtered by production quality systems, and
extracted by advanced content understanding techniques.
To demonstrate the content quality of ClueWeb22 documents in a quantitative
way, we conduct a pilot study using an important application of large web
corpora: to pretrain language models. As a preliminary study, we choose the
most common pretraining and downstream pipeline: pretrain vanilla RoBERTa base
(Liu et al., 2019) and evaluate by fine-tuning on the MNLI task (Williams et
al., 2018).
This study used a code base similar to the open-source version of Efficient LM
Pretraining.333https://github.com/microsoft/Efficient-Large-LM-Trainer We
followed the exact pretraining and fine-tuning configurations of the RoBERTa
base baseline from previous research (Meng et al., 2021): the base
configuration pretrains the 12-layer BERT for 125k steps, with a maximum of
512 tokens per sequence and a global batch size of 2048 sequences. Everything
was kept the same, except that the pretraining corpus was changed from
Wikipedia and Book Corpus to Wikipedia+ClueWeb22, using the primary content of
ClueWeb22-B documents.
Specifically, experiments were run with the following text resources, all in
English.
1. Wiki: the official Wikipedia English dump widely used for language model
pretraining.
2. Book: Google Book Corpus (Zhu et al., 2015), which is often paired with
Wiki to form the base pretraining setup.
3. ClueWeb22-B Random: the primary content from randomly sampled ClueWeb22-B
web pages.
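For illustration, a byte-budget random sample of this kind can be sketched as follows; the documents and budget are toy values, and the real 6GB / 18GB slices were drawn from ClueWeb22-B primary content:

```python
import random

def sample_corpus_by_size(docs, budget_bytes, seed=0):
    """Randomly sample documents until a byte budget is reached."""
    rng = random.Random(seed)
    order = list(docs)
    rng.shuffle(order)
    picked, total = [], 0
    for doc in order:
        size = len(doc.encode("utf-8"))
        if total + size > budget_bytes:
            break  # stop once the next document would exceed the budget
        picked.append(doc)
        total += size
    return picked, total
```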
Table 9. MNLI Dev accuracy of the RoBERTa${}_{\text{base}}$ model pretrained on different combinations of ClueWeb22-B documents, Wikipedia, and Book Corpus. The pretraining and fine-tuning use exactly the same settings, following the standard configurations in recent research (Meng et al., 2021); only the pretraining text corpora differ.
Model | Batch $\times$ Step | Pretraining Corpus (size) | MNLI-m/mm
---|---|---|---
RoBERTa${}_{\text{base}}$ | 2K$\times$125K | Wiki (12GB) + Book (6GB) | 86.2/85.9
RoBERTa${}_{\text{base}}$ | 2K$\times$125K | Wiki (12GB) + ClueWeb22-B Random (6GB) | 85.8/86.2
RoBERTa${}_{\text{base}}$ | 2K$\times$125K | Wiki (12GB) + ClueWeb22-B Random (18GB) | 86.2/86.4
The MNLI accuracy of the same RoBERTa${}_{\text{base}}$ model pretrained on
different corpora is listed in Table 9. Our RoBERTa${}_{\text{base}}$ results
with Wiki+Book are on par with or better than those reported in previous
research (Liu et al., 2019; Meng et al., 2021). Pretraining with 6GB of
randomly selected ClueWeb22-B web pages, as a replacement for the Book corpus
and combined with Wiki, leads to similar MNLI accuracy. This shows that the
quality of ClueWeb22-B primary content is on par with the widely used Book
Corpus for language model pretraining. Increasing the amount of ClueWeb22-B
text to 18GB further improves MNLI accuracy, indicating the potential of using
more high quality text from ClueWeb22 for pretraining.
In comparison, previous research (Raffel et al., 2020) found that the C4 web
corpus derived from CommonCrawl, even after strong filtering, does not provide
pretraining texts of as high quality as Wiki+Book. The advantage of
CommonCrawl is its scale, but not necessarily its content quality. Our pilot
study shows that ClueWeb22-B provides a large amount of high quality text that
can be used directly (without any further cleaning) to pretrain language models.
## 5. Comparison with CommonCrawl
Table 10. Statistics of the July and August 2022 CommonCrawl (CC) snapshots. Distinct pages are de-duplicated on CommonCrawl URLs. Successful Crawl keeps only those pages with “HTTP 200” responses, which indicate the crawl was successful. The de-duplication and successful-crawl filtering are applied when constructing ClueWeb22.
| Page Level | Domain Level
---|---|---
| #All | #Distinct | #Success Crawl | #All | #Success Crawl
July 2022 CommonCrawl | 3.6B | 3.6B | 3.1B | 40.8M | 37.2M
August 2022 CommonCrawl | 3.3B | 3.2B | 2.6B | 43.0M | 38.7M
ClueWeb22-B | 200M | 200M | 200M | 15.0M | 15.0M
ClueWeb22-A | 2B | 2B | 2B | 45.3M | 45.3M
ClueWeb22-L | 10B | 10B | 10B | 66.0M | 66.0M
Overlap | #Distinct | #Success Crawl | #All | #Success Crawl
July 2022 CommonCrawl vs. August 2022 | 126.5M | 104.3M | 33.9M | 31.5M
July 2022 CommonCrawl vs. ClueWeb22-B | 28.5M | 27.6M | 11.2M | 10.6M
July 2022 CommonCrawl vs. ClueWeb22-A | 145.4M | 141.3M | 24.0M | 22.7M
July 2022 CommonCrawl vs. ClueWeb22-L | 454.9M | 444.3M | 29.1M | 27.4M
August 2022 CommonCrawl vs. ClueWeb22-B | 21.0M | 19.9M | 11.4M | 10.7M
August 2022 CommonCrawl vs. ClueWeb22-A | 102.7M | 96.8M | 24.6M | 23.2M
August 2022 CommonCrawl vs. ClueWeb22-L | 313.0M | 296.2M | 30.0M | 28.1M
In this section we analyze the differences between ClueWeb22 and CommonCrawl.
Their main differences are derived from different design choices in their
construction. ClueWeb22 documents are samples of important web pages from a
trillion-scale web crawl of a commercial search engine. CommonCrawl includes
pages from a community maintained crawling process, at the scale of billions
of pages, with a crawled snapshot produced every month.
General CommonCrawl Statistics. Specifically, we analyzed two CommonCrawl
monthly snapshots, July 2022 and August 2022, for comparison with ClueWeb22,
which was collected around February 2022. For the analysis, we perform basic
de-duplication at the URL level and also check the HTTP status of crawled web
pages, where an “HTTP 200” response indicates that the crawl of the
corresponding web page was successful.
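The de-duplication and HTTP-200 filtering can be sketched as follows, operating on (URL, status) pairs rather than raw WARC records for simplicity:

```python
from urllib.parse import urlparse

def crawl_stats(records):
    """records: iterable of (url, http_status). Returns the kinds of counts
    reported in Table 10: distinct URLs, successful (HTTP 200) crawls, and
    the number of domains with at least one successful crawl."""
    seen, success = set(), set()
    for url, status in records:
        if url in seen:
            continue  # de-duplicate at the URL level
        seen.add(url)
        if status == 200:
            success.add(url)
    domains = {urlparse(u).hostname for u in success}
    return len(seen), len(success), len(domains)
```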
Table 10 shows the number of URLs and web domains in these two snapshots,
together with the corresponding statistics for ClueWeb22 and the overlap
between the datasets. The top domains of the two months, before and after
HTTP 200 filtering, are plotted in Figure 7. These statistics show notable
differences between CommonCrawl and ClueWeb22 in coverage and distribution,
which we discuss in the remainder of this section.
Differences from Crawling. Each CommonCrawl snapshot is an individual crawl,
and there is little overlap between the URLs in the two CommonCrawl snapshots.
This is beneficial, because a large number of unique URLs have accumulated
throughout the history of CommonCrawl; they also differ significantly from the
URLs included in ClueWeb22. On the other hand, at the domain level, there is
large overlap between the domains included in the two CommonCrawl snapshots,
indicating that although the crawlers traversed different URLs, they stayed in
similar parts of the web. The high domain-level overlap suggests that the
monthly CommonCrawl snapshots are samples from similar distributions; the low
page-level overlap indicates that each is a sparse sample.
ClueWeb22 covers a different part of the web and has low overlap with both
CommonCrawl snapshots. ClueWeb22 pages are also spread more widely across
domains. For example, the July 2022 CommonCrawl has 3.1 billion pages sampled
from 37 million domains, 83 pages per domain on average; in comparison,
ClueWeb22-A includes 2 billion pages from 45 million domains, averaging 44
pages per domain.
We also notice a significant drop in some top domains in CommonCrawl after the
HTTP 200 filter is applied, for example, Google, Twitter, and Pinterest. Many
websites do not allow a large number of requests from the same IP address and
often put a frequency limit on the crawlers they explicitly allow. There is
more incentive for websites to allow crawling from a commercial search engine,
for example, to be indexed and exposed to search users; ClueWeb22 benefits
from this with better crawler reach on certain websites.
These differences are direct results of the crawling process. CommonCrawl
provides more diversity along the time axis, as it crawls and releases
snapshots every month. ClueWeb22 provides more diversity at the domain level,
as it is sampled from the index of a commercial search engine, which has a
much deeper exploration of the web.
Figure 7. The distributions of the top twenty URL domains in the CommonCrawl
July and August 2022 snapshots: (a) July 2022, (b) August 2022, (c) July 2022
successful crawls, (d) August 2022 successful crawls. Successful crawls are
pages with HTTP 200 responses. The top twenty domains cover around 2.5% in
each distribution.
Differences from Sampling. CommonCrawl web pages are crawled based on rankings
from their in-house web graphs, prioritizing discovered URLs with high
harmonic centrality and PageRank (Nage, 2021). This leads to a standard
link-structure based distribution, as reflected in Figure 7. Since summer
2021, a per-domain limit has been enforced in CommonCrawl, with a maximum of
25 million URLs allowed per top domain and 150k URLs per host (Nage, 2021).
This rule is more restrictive for sites with fewer hosts, as a site with one
host would hit the per-host budget before reaching the full 25 million quota.
This perhaps explains the high fraction of wordpress.com and blogspot.com
pages, since those domains have many hosts.
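The budget rule can be simulated as follows; the top-domain extraction here is a naive last-two-labels heuristic, an assumption for illustration (real systems use the public suffix list):

```python
from urllib.parse import urlparse

DOMAIN_CAP = 25_000_000  # max URLs per top domain (CommonCrawl, since 2021)
HOST_CAP = 150_000       # max URLs per host

def apply_crawl_budget(urls, domain_cap=DOMAIN_CAP, host_cap=HOST_CAP):
    """Keep URLs in order until a host or its top domain exhausts its budget."""
    per_domain, per_host, kept = {}, {}, []
    for url in urls:
        host = urlparse(url).hostname or ""
        domain = ".".join(host.split(".")[-2:])  # naive top-domain heuristic
        if per_host.get(host, 0) >= host_cap or per_domain.get(domain, 0) >= domain_cap:
            continue
        per_host[host] = per_host.get(host, 0) + 1
        per_domain[domain] = per_domain.get(domain, 0) + 1
        kept.append(url)
    return kept
```

A single-host site exhausts its 150k host budget long before the 25 million domain quota, which is the restriction discussed above.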
In comparison, ClueWeb22 pages are sampled using signals beyond link structure
to estimate page importance. Many of the importance signals, especially those
stemming from user search behaviors, are quite powerful but not available to
CommonCrawl.
There is no single best definition of web page importance. Nevertheless, the
frequency of websites in CommonCrawl is unlikely to be aligned with how
frequently web search users visit them. For example, it is doubtful that web
users now spend more time on amazonaws.com than on amazon.com. According to a
recent discussion from a CommonCrawl contributor (Nage, 2021), the domain rank
of CommonCrawl has around 33% overlap with the general web traffic estimated
by DNS providers. In comparison, ClueWeb22 is designed to reflect the
distribution of search engine traffic, and its top websites align with our
intuitions about search user behavior.
Differences in Available Information. With ClueWeb22 we provide rich data
affiliated with each web page, such as clean text, semantic annotations, and
visual information. Some of these can be produced for CommonCrawl snapshots
with a decent amount of effort and at varying quality. The most difficult to
obtain is the visual information: it requires using a web browser to render
web pages and record their visual appearance, which is very engineering- and
resource-heavy. This is perhaps one reason why such information, though so
valuable in industry systems, is overlooked in the research community.
We highlight these differences not to show the superiority of either dataset,
but to raise awareness of the underlying data properties for researchers and
practitioners when using them. All datasets are useful. Perhaps a more
beneficial strategy is to utilize information from multiple resources. For
example, one can leverage ClueWeb22 as the high quality source of web
information and combine multiple months of CommonCrawl snapshots to increase
the coverage on target scenarios, e.g., on some low-resource languages. The
rich information from ClueWeb22 can also serve as weak supervision signals to
train better content extraction systems for CommonCrawl.
## 6. Related Corpora
ClueWeb22 is the third dataset in a lineage that began with ClueWeb09 (Callan
and Hoy, 2009) and ClueWeb12 (Callan and Pane, 2012). The previous datasets
were constructed by Carnegie Mellon University using open-source software
(Apache Nutch (Team, 2022a) in 2009 and Heritrix (Team, 2022b) in 2012). The
crawling of web pages started from seed URLs and traversed the web graph by
following hyperlinks. ClueWeb09 contains 1 billion pages, 50% English and 50%
in the next nine most popular languages on the Internet, roughly in proportion
to their popularity. ClueWeb12 contains 730 million pages, primarily in
English. Both datasets were among the largest web corpora available when they
were released. They have been popular with researchers for more than a decade,
but they are aging and no longer accurately represent the web of the 2020s.
Many have created cleaner and more targeted web corpora by filtering multiple
CommonCrawl snapshots. CCNet (Wenzek et al., 2020) starts from the UTF-8 text
of the February 2019 CommonCrawl snapshot, performs de-duplication at the text
passage level, identifies English texts, and keeps those closest to Wikipedia
texts according to an n-gram text classifier. It has been widely used to
pretrain English language models, especially the standard base++ and large++
RoBERTa-style models (Liu et al., 2019; Meng et al., 2021). C4 (Raffel et al.,
2020), the pretraining corpus of T5, uses a series of language patterns to
filter high quality English texts from CommonCrawl. Other notable derivations
from CommonCrawl include Pile-CC (Gao et al., 2020), which was constructed to
pretrain GPT-Neo, and CC-News (Hamborg et al., 2017), which targets news
articles.
Another way to estimate the quality of web pages is by upvotes in online
forums. For example, OpenAI scraped WebText by harvesting URLs with three or
more upvotes on Reddit (Radford et al., 2019). Similar approaches have been
used to construct the publicly available OpenWebText (Gokaslan and Cohen,
2019) and OpenWebText2 (Gao et al., 2020) corpora. These human-voted web pages
likely consist of higher quality content than random web pages, but only a
limited number of URLs are covered by online forums.
Recently, organizations with access to large scale proprietary web corpora
have constructed their own high quality web datasets to pretrain language models. A
series of recent large language models from Google used web pages sampled by a
learned quality score from their web corpora (Du et al., 2022; Chowdhery et
al., 2022). DeepMind collected the MassiveText corpus to pretrain their large
scale language models (Rae et al., 2021; Hoffmann et al., 2022). Their
research explorations demonstrate the benefits and perhaps also necessity of
large scale high quality web corpora in pretraining large neural models. One
goal of ClueWeb22 is to make this type of data resource available to the
general research community.
## 7. Licensing and Distribution
ClueWeb22 is licensed and distributed by Carnegie Mellon University and The
Lemur Project using methods similar to those used for more than a decade for
older ClueWeb datasets. There is no fee to license the dataset, which is
convenient for researchers who can access the dataset on a cloud computing
platform that already has it. There is a modest fee to cover the cost of
distributing the dataset when it is necessary to transfer terabytes from
Carnegie Mellon to a research organization (for example, the cost of hard
disks and shipment). The Lemur Project website (https://lemurproject.org/clueweb22/)
is kept current with information about updates, dataset maintenance,
licensing, and distribution.
## 8\. Conclusions
ClueWeb22 provides ten billion web pages sampled from the web as discovered by
the crawlers of a commercial search engine. The sampling was conducted
according to the preferences of search engine users and includes web pages
from the super-head, head, and tail parts of the search traffic. The included
web pages are relatively clean, with industry-quality filters applied to
remove spam and adult content. The resulting distribution of web pages in
ClueWeb22 is as close as possible to the real-world web distribution seen by
commercial search.
Besides raw data, ClueWeb22 also includes rich information, such as browser-
rendered web pages, visual information embedded in DOM trees, and semantic
annotations from production-quality models. These resources are widely used in
industry systems and are critical for many search engine functions, but they
were previously hard to obtain without a commercial search product.
Furthermore, to lower the barrier to entry, we pre-compute various data using
these resources,
including clean texts, document fields, and anchor graphs. These data serve as
the standardized content of ClueWeb22. They can also be used as the starting
point for more sophisticated and customized document understanding systems.
Access to large-scale, high-quality web corpora has become increasingly
important in AI research with the recent advances in deep learning. We hope
the release of ClueWeb22 narrows the gap between research groups with
proprietary data access and the general research community, thus empowering
more research progress in the near future.
## 9\. Acknowledgments
We would like to thank the legal teams at Microsoft and Carnegie Mellon
University for helping set up the licensing process, Ruohong Zhang for running
the pretraining study, Junaid Ahmed for helping initiate the ClueWeb22
effort, and Likun Ouyang for dynamic rendering support to get high
quality screenshots and visual features.
## References
* Ainslie et al. (2020) Joshua Ainslie, Santiago Ontañón, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding Long and Structured Inputs in Transformers. In _Proceedings of EMNLP 2020_.
* Bajaj et al. (2016) Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016\. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. (2016).
* Callan and Hoy (2009) Jamie Callan and Mark Hoy. 2009. ClueWeb09. http://lemurproject.org/clueweb09/
* Callan and Pane (2012) Jamie Callan and David Pane. 2012. ClueWeb12. http://lemurproject.org/clueweb12/
* Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017\. Reading Wikipedia to Answer Open-Domain Questions. In _Proceedings of ACL 2017_.
* Chowdhery et al. (2022) Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022\. PaLM: Scaling Language Modeling with Pathways. (2022).
* Clarke et al. (2009) Charles L. A. Clarke, Nick Craswell, and Ian Soboroff. 2009\. Overview of the TREC 2009 Web Track. In _Proceedings of TREC 2009_.
* Clarke et al. (2012) Charles L. A. Clarke, Nick Craswell, and Ellen M. Voorhees. 2012\. Overview of the TREC 2012 Web Track. In _Proceedings of TREC 2012_.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised Cross-lingual Representation Learning at Scale. (2020).
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of NAACL-HLT 2019_.
* Du et al. (2022) Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, et al. 2022\. GLaM: Efficient Scaling of Language Models with Mixture-of-Experts. _Proceedings of ICML 2022_.
* Fedus et al. (2021) William Fedus, Barret Zoph, and Noam Shazeer. 2021\. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. (2021).
* Gao et al. (2020) Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020\. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. (2020).
* Gokaslan and Cohen (2019) Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText Corpus. http://Skylion007.github.io/OpenWebTextCorpus.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval Augmented Language Model Pre-Training. In _Proceedings of ICML 2020_.
* Hamborg et al. (2017) Felix Hamborg, Norman Meuschke, Corinna Breitinger, and Bela Gipp. 2017. news-please: A Generic News Crawler and Extractor. In _Proceedings of the ISI 2017_.
* Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. 2022\. Training Compute-Optimal Large Language Models. (2022).
* Lewis et al. (2020) Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020\. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In _Proceedings of NIPS 2020_.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. (2019).
* Ma et al. (2021) Kaixin Ma, Hao Cheng, Xiaodong Liu, Eric Nyberg, and Jianfeng Gao. 2021. Open Domain Question Answering over Virtual Documents: A Unified Approach for Data and Text. (2021).
* Meng et al. (2021) Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. 2021. COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining. In _Proceedings of NIPS 2021_.
* Metzler et al. (2009) Donald Metzler, Jasmine Novak, Hang Cui, and Srihari Reddy. 2009\. Building enriched document representations using aggregated anchor text. In _Proceedings of SIGIR 2009_.
* Microsoft (2019) Microsoft. 2019\. BlingFire. https://github.com/microsoft/BlingFire
* Nage (2021) Sebastian Nage. 2021\. From Web Graphs to Prioritizing Web Crawls. https://indico.cern.ch/event/1006978/contributions/4539477/attachments/2325769/3962907/ossym2021-sn-web-graphs-crawling.pdf
* Page et al. (1999) Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank Citation Ranking : Bringing Order to the Web. In _Proceedings of WebConf 1999_.
* Piktus et al. (2021) Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oguz, Edouard Grave, Wen-tau Yih, et al. 2021\. The Web Is Your Oyster - Knowledge-Intensive NLP against a Very Large Web Corpus. (2021).
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019\. Language Models are Unsupervised Multitask Learners. (2019).
* Rae et al. (2021) Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, H. Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021\. Scaling Language Models: Methods, Analysis & Insights from Training Gopher. (2021).
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. (2020).
* Team (2022a) Apache Nutch Team. 2022a. Apache Nutch. https://nutch.apache.org/
* Team (2022b) Heritrix Team. 2022b. Heritrix. https://github.com/internetarchive/heritrix3
* Wenzek et al. (2020) Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data. In _Proceedings of LREC 2020_.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2018\. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference. In _Proceedings of NAACL-HLT 2018_.
* Xiong et al. (2019) Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, and Arnold Overwijk. 2019. Open Domain Web Keyphrase Extraction Beyond Language Modeling. In _Proceedings of EMNLP-IJCNLP 2019_.
* Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019\. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In _Proceedings of NIPS 2019_.
* Zhang et al. (2020) Kaitao Zhang, Chenyan Xiong, Zhenghao Liu, and Zhiyuan Liu. 2020\. Selective Weak Supervision for Neural Information Retrieval. In _Proceedings of WebConf 2020_.
* Zhu et al. (2015) Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In _Proceedings of IEEE 2015_.
# Association between author metadata and acceptance: A feature-rich, matched
observational study of a corpus of ICLR submissions between 2017-2022
Chang Chen, Department of Biostatistics, University of Washington
Jiayao Zhang, Department of Computer and Information Science and Department of
Statistics and Data Science, University of Pennsylvania
Dan Roth, Department of Computer and Information Science, University of
Pennsylvania
Ting Ye, Department of Biostatistics, University of Washington
Bo Zhang, Vaccine and Infectious Disease Division, Fred Hutchinson Cancer
Center
Correspondence to Bo Zhang, Vaccine and Infectious Disease Division, Fred
Hutchinson Cancer Center, Seattle, Washington, 98109.
Email<EMAIL_ADDRESS>
Abstract: Many recent studies have probed status bias in the peer-review
process of academic journals and conferences. In this article, we investigated
the association between author metadata and area chairs’ final decisions
(Accept/Reject) using our compiled database of $5,313$ borderline submissions
to the International Conference on Learning Representations (ICLR) from 2017
to 2022. We carefully defined elements in a cause-and-effect analysis,
including the treatment and its timing, pre-treatment variables, potential
outcomes and causal null hypothesis of interest, all in the context of study
units being textual data and under Neyman and Rubin’s potential outcomes (PO)
framework. We found some weak evidence that author metadata was associated
with articles’ final decisions. We also found that, under an additional
stability assumption, borderline articles from high-ranking institutions
(top-30% or top-20%) were less favored by area chairs compared to their
matched counterparts. The results were consistent in two different matched
designs (odds ratio = $0.82$ [95% CI: $0.67$ to $1.00$] in a first design and
$0.83$ [95% CI: $0.64$ to $1.07$] in a strengthened design). We discussed how
to interpret these results in the context of multiple interactions between a
study unit and different agents (reviewers and area chairs) in the peer-review
system.
Keywords: Matched observational study; Natural language processing (NLP); Peer
review; Quasi-experimental design; Status bias
## 1 Introduction
### 1.1 Implicit bias in the peer review process
Peer review has been the cornerstone of scientific research. It is important
that the peer review process be fair and impartial, especially for early-
career researchers. In recent years, the peer review process has come under
considerable scrutiny. For instance, in 2014, the organizers of the Conference on
Neural Information Processing Systems (NeurIPS) randomly duplicated $10\%$ of
submissions and assigned them to two independent sets of reviewers. The study
found that $25.9\%$ of these submissions received inconsistent decisions (see
https://inverseprobability.com/2014/12/16/the-nips-experiment for more
details). Cortes and Lawrence (2021) tracked the fate of submissions rejected
in the NeurIPS experiment and found that the peer review process was good at
identifying poor papers but fell short of pinpointing good ones. In a similar
vein, McGillivray and De Ranieri (2018) analyzed 128,454 articles in Nature-
branded journals and found that authors from less prestigious academic
institutions were more likely to choose double-blind review as opposed to
single-blind review. More recently, Sun et al. (2022) studied 5,027 papers
submitted to the International Conference on Learning Representations (ICLR)
and found that scores given to the most prestigious authors significantly
decreased after the conference switched its review model from single-blind
review to double-blind review. Smirnova et al. (2022) evaluated a policy that
encouraged (but did not force) authors to anonymize their submissions and
found that the policy increased positive peer reviews by 2.4% and acceptance
by 5.6% for low-prestige authors while slightly decreasing positive peer
reviews and acceptance rates for high-prestige authors. Many of these studies
identified associations between decision makers’ perception of certain aspects
of articles’ author metadata (e.g., authors’ prestige or identity) and final
acceptance decisions of these articles, and suggested various forms of
implicit bias in the peer review processes, especially among those that adopt
a single-blind model.
### 1.2 A hypothetical experiment
In a seminal paper, Bertrand and Mullainathan (2004) measured racial
discrimination in labor markets by sending resumes with randomly assigned
names, one African American sounding and the other White sounding (e.g.,
Lakisha versus Emily), to potential employers. Bertrand and Mullainathan’s
(2004) study was elegant for two reasons. First, it was a randomized
experiment that is free of confounding bias, observed or unobserved, although
to what extent the found effect could be attributed to the bias towards
applicants’ race and ethnicity versus towards other personal traits signaled
by the names is unclear; see, e.g., related discussions in Bertrand and
Mullainathan (2004, Section IV) and Greiner and Rubin (2011). Second, the
study illustrated a general strategy to measure the causal effect due to an
immutable trait: instead of imagining manipulating the immutable trait itself,
the study manipulates employers’ _perception_ of this immutable trait.
In a recent high-profile study published in the _Proceedings of the National
Academy of Sciences_ , Huber et al. (2022) designed a field experiment in a
similar spirit to Bertrand and Mullainathan (2004). Huber et al. (2022)
measured the extent of the _status bias_ , defined as a differential treatment
of the same paper by prominent versus less established authors in the peer-
review process, by randomizing over $3,300$ researchers to one of the three
arms: one arm assigned an article with a prestigious author, one arm assigned
an anonymized version of the same article, and the other arm assigned the same
article but with a less established author. Huber et al. (2022) found strong
evidence that the prominence of authorship markedly increased the acceptance
rate by as much as sixfold, although to what extent this conclusion
generalizes to other contexts, e.g., other articles in the same field or
articles in other fields, is unclear.
The studies of Bertrand and Mullainathan (2004) and Huber et al. (2022)
illuminate a randomized experiment that conference organizers and journal
editorial offices could carry out, at least hypothetically, in order to
understand various forms of bias. For instance, if the policy interest is to
evaluate the effect of reviewers’ perception of certain aspects of authors
(e.g., authors’ identity, institution, etc), then a hypothetical experiment
would forward to reviewers articles with randomly assigned aspects of
interest. Although such an experiment is conceivable, it is difficult to
implement due to practical constraints.
### 1.3 A quasi-experimental design using a corpus of ICLR papers
In the absence of a randomized controlled trial (RCT), a quasi-experimental design aims to fill the gap
by constructing two groups, one treatment group and the other comparison
group, that are as similar as possible in pre-treatment variables from
retrospective, observational data. Statistical matching is a popular quasi-
experimental design device (Rosenbaum, 2002, 2010). In this article, we
describe an effort to conduct a matched observational study using state-of-
the-art natural language processing tools and study design devices in an
effort to investigate the effect of authorship metadata on papers’ final
decisions.
Our database was constructed from a carefully curated corpus of papers from
the International Conference on Learning Representations (ICLR), a premier
international machine learning conference. The database is the first of its
kind to provide an empirical evidence base for investigating the peer review
process. In particular, the database is feature-rich, in the sense that it
contains not only explicit/structural features of an article such as its
keywords, number of figures and author affiliations, but also more subtle and
higher-level features such as topic and textual complexity as reflected by
articles’ text, and reviewers’ sentiment as reflected by their publicly
available comments. Building upon discussions of immutable traits regarding
human subjects, for instance, those in Greiner and Rubin, (2011), we elaborate
on the potential outcomes framework (Neyman,, 1923; Rubin,, 1974) that
facilitates a cause-and-effect analysis between authorship and papers’ final
decisions; in particular, we will carefully define and state the treatment of
interest including its timing, pre-treatment variables, causal identification
assumptions, causal null hypothesis to be tested and how to interpret the
results, all in the context of study units being textual data.
The conference submission and peer review process consists of multiple steps.
For a typical machine learning conference like ICLR, articles need to be
submitted by authors before a pre-specified deadline. Valid submissions are
then forwarded to a number of reviewers (typically three to four) for feedback
and a numerical rating. This part of the peer review process is double-blind
so the reviewers and authors in principle do not know each other, although in
practice reviewers could identify authors from writing style or because the
authors may upload their articles to the preprint platform arXiv.org. Authors
are then given the chance to answer reviewers’ comments and feedback and
provide a written rebuttal. Reviewers are allowed to modify their previous
ratings taking the rebuttal into account. Finally, an area chair (similar to
an associate editor of an academic journal) reviews the article and its
ratings and then makes a final decision. Submitted articles, author metadata,
reviewers’ written comments, authors’ written rebuttals, reviewers’ ratings
and area chairs’ final decisions are all openly available from the website
openreview.net.
Our compiled database allows us to study many different aspects of the peer
review process. In this article, we will focus specifically on the last stage
of the peer review process and investigate the effect of authorship metadata
on area chairs’ final decisions. Our focus was motivated by several
considerations. First, it is an empirical fact that articles receiving
identical or near-identical ratings could receive different final decisions;
see, e.g., the stacked bar graph in Panel A of Figure 1. It is not
unreasonable for authors, especially those who are junior and less
established, to wonder if they are fairly treated. Second, any endeavor to
establish a cause-and-effect relationship using observational data is
challenging because of unmeasured confounding. In our analysis, articles have
many implicit features like novelty and thoughtfulness, and these unmeasured
confounders could in principle explain away any association between author
metadata and final decisions. This problem is greatly attenuated when we focus
on area chairs’ final decisions and have reviewers’ ratings as _pre-treatment_
covariates: reviewers’ ratings should be a good, albeit not perfect, proxy for
the innate quality of an article.
The plan for the rest of the article is as follows. In Section 2, we briefly
describe our compiled database. In Section 3, we lay out the potential
outcomes (PO) framework under which we will conduct the matched observational
study. Section 4 describes two concrete study designs, including matching
samples and underlying matching algorithms. We also report outcome analysis
results in Section 4. Section 5 discusses how to interpret our findings.
## 2 Data: ICLR papers from 2017-2022
Figure 1: Illustration of ratings and high-level textual features. (A) Stacked
bar graphs of acceptance decisions. The decisions of borderline papers
(average rating between $5$ and $7$) are not clearcut. (B) $T$-SNE plot of the
Specter embedding together with arXiv primary categories and primary keywords
among submissions that have been put onto arXiv. (C) Sentiment and ratings of
reviews. (D) Distribution of complexity scores of a random sample of arXiv
articles across four primary categories.
### 2.1 Database construction and structural features
We use the ICLR Database (https://cogcomp.github.io/iclr_database/), collected
and compiled by Zhang et al. (2022). Motivated by the observation that area
chairs’ final decisions of submissions with an average rating between $5$ and
$7$ are not deterministic, as shown in Panel A of Figure 1, we restrict
ourselves to this subset of borderline submissions. We first briefly recall
the data collection and cleaning process here for completeness.
The OpenReview API is used to crawl data of $10,289$ submissions to the
International Conference on Learning Representations (ICLR) from $2017$ to
$2022$. The crawled data include (i) submissions and their metadata; (ii)
author metadata; (iii) review/rebuttal data; and (iv) area-chair/program-chair
decisions. Structural features, including the number of sections, figures,
tables and bibliographies, are extracted in a relatively straightforward
manner. We also extracted and ranked self-reported keywords from each submission
to form primary and secondary keywords. Author profiles include an optional
self-reported gender; we used the first name dictionary developed by Tsai et
al., (2016) to provide a numerical score based on the first names of the
authors where $0$ signifies female and $1$ otherwise. Author profiles are then
augmented via Google Scholar API to obtain author citation and $h$-index data.
Author institution is matched using the domain name of the author email (only
domain names are visible from the OpenReview API; other information is
masked). Although CSRanking data is available, it does not have full coverage
of all authors’ institutions. As such, we mainly use the institutional ranking
derived from the cumulative number of accepted papers to the ICLR in the past.
For example, the ranking in 2020 of institution A is determined by all papers
accepted to ICLR 2017-2020 that have at least one author from it. The review
data include rating, confidence and textual reviews. In some years, for
example, 2020 and 2022, there are additional assessments such as technical
soundness or novelty. Since these additional assessments are not available for
all years, we restrict our attention to ratings, confidence, and higher-level
features derived from textual reviews, to be discussed shortly. Finally, we
dichotomize the paper decision by grouping the various acceptance designations
(spotlight, poster, short talk) into “Accept” and treating both “Reject” and
“invited to workshop track” as “Reject.”
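The cumulative institutional ranking described above (an institution credited once per accepted ICLR paper up to the ranking year) could be computed along these lines; the `(year, institutions)` record format is an assumption for illustration, not the database's actual schema.

```python
from collections import Counter

def institution_ranking(accepted_papers, year):
    """Rank institutions by cumulative ICLR acceptances up to `year`.

    `accepted_papers` is assumed to be a list of (year, institutions)
    pairs, where `institutions` is the set of institutions with at least
    one author on that accepted paper.
    """
    counts = Counter()
    for paper_year, institutions in accepted_papers:
        if paper_year <= year:
            for inst in set(institutions):
                counts[inst] += 1
    # More cumulative acceptances -> better (lower) rank; ties broken
    # alphabetically for determinism.
    ordered = sorted(counts, key=lambda inst: (-counts[inst], inst))
    return {inst: rank + 1 for rank, inst in enumerate(ordered)}
```

For example, the 2020 ranking only counts papers accepted in 2017-2020, so later acceptances do not leak into earlier rankings.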
We also identify whether a submission was posted on the preprint platform
arXiv.org before the review deadline (i.e., the time reviewers are asked to
finalize their reviews) by (i) searching arXiv.org for the five most relevant
results based on each article’s title and abstract, (ii) computing the Jaccard
similarity and normalized Levenshtein similarity between author lists, and
(iii) calculating the cosine similarity of the title-abstract embedding. Using
the arXiv timestamp, we then identified which submissions were posted prior to
the end of the review process. Among the subset of papers that have arXiv
counterparts, we also obtain their arXiv-related metadata, such as primary and
secondary categories.
### 2.2 Derived higher-level features
Although the structural features described so far contain abundant
information, we considered further leveraging language models to extract
additional higher-level features directly from textual data. These higher-
level features, like topics, quality of writing and mathematical complexity or
rigor, may help quantify residual confounding not captured by structural
aspects (e.g., those described in Section 2.1) of an article. Furthermore, in
a matched observational study, it is desirable to have a characterization of
the “similarity” among study units to serve as a criterion for matching.
Therefore, we derived the following higher-level features and a similarity
measure based on embeddings from language models to facilitate our study
design.
#### 2.2.1 SPECTER Embedding
Our first tool is the SPECTER model (Cohan et al., 2020), a BERT-based model
(Devlin et al., 2019) fine-tuned on a scholarly corpus that has a good track
record on summarization tasks (generating abstracts from the main texts) for
academic articles. This model takes the _abstract_ of a submission and
outputs a $768$-dimensional real-valued vector as its representation. Panel B
of Figure 1 plots a two-dimensional $t$-SNE (van der Maaten and Hinton, 2008)
embedding of this representation across a subset of $535$ submissions that
have their arXiv information and primary keyword available. We see that
computer vision and computational linguistics articles separate well while
general AI and ML articles blend in. In addition, we sample $9$ primary
keywords to overlay on the $t$-SNE embedding. Note that semantically similar
keywords (e.g., the phrases “BERT” and “natural language processing”)
generally have higher proximity under this embedding, which further
demonstrates its effectiveness. We thus use this embedding to (1) perform a
$10$-component spectral clustering to assign each submission a “semantic
cluster” of submissions, and (2) compute the cosine similarity between any two
articles.
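Both uses of the embedding reduce to cosine similarity on the representation vectors. The sketch below shows the pairwise similarity computation and a nearest-centroid cluster assignment; the latter is a simplified stand-in for the spectral-clustering step, with `centroids` assumed to be precomputed.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_cluster(embedding, centroids):
    """Assign an article to the cluster whose centroid is most similar.
    A simplified stand-in for spectral clustering; `centroids` is a
    hypothetical list of precomputed cluster-center vectors."""
    return max(range(len(centroids)),
               key=lambda k: cosine(embedding, centroids[k]))
```

Because cosine similarity ignores vector magnitude, it compares the direction of two embeddings, which is the usual choice for dense text representations.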
##### Sentiment Modeling.
A RoBERTa model (Liu et al., 2019) fine-tuned on Twitter sentiments
(Rosenthal et al., 2017) is used to assign a sentiment score to textual
reviews, where $0$ signifies negative and $1$ positive. Panel C of Figure 1
shows a scatter plot of average sentiment against average rating, with colors
indicating decisions. We observe that sentiment is highly correlated with
rating but is more volatile when the rating is borderline. This suggests that
incorporating review sentiment may help complement numerical ratings in the
downstream analysis, especially when numerical ratings are borderline and not
discriminative.
##### Complexity Score.
We use an off-the-shelf fluency model
(https://huggingface.co/prithivida/parrot_paraphraser_on_T5) derived from the
RoBERTa model (Liu et al., 2019) to assess sentence-level fluency and take the
average to represent the complexity of an article, where $1$ signifies most
fluent and easy-to-read while $0$ denotes gibberish-like sentences. This
fluency score measures how well each article aligns with English grammar and
serves as a proxy for an article’s heaviness in mathematical notation, since
in-line mathematical notation often disrupts an English sentence’s fluency and
results in a lower score. In Panel D of Figure 1, we perform a sanity check by
randomly sampling approximately $100$ arXiv papers from four categories
(computational linguistics, computer vision, fluid dynamics, and algebraic
geometry) and computing their complexity scores. Most of the scores are
relatively high, as expected, since academic articles are usually well
written. We also observe that algebraic geometry papers have a score
distribution markedly skewed to the left, which aligns with our intuition. We
thus use this complexity score as a proxy for mathematical complexity and
paper readability in our subsequent analysis.
## 3 Notation and framework
### 3.1 Matched cohort study
In the absence of a well-controlled experiment (e.g., the hypothetical RCT
envisioned in Section 1.2), observational studies, _up to their important
limitations_ , provide an alternative to explore a cause-and-effect
relationship (Cochran and Chambers,, 1965). In an observational study, study
units receiving different levels of treatment may differ systematically in
their observed covariates, and this induces the so-called _overt bias_
(Rosenbaum,, 2002, Section 3). In our case study, articles with different
author metadata, for instance, those with authors from high-ranking versus
relatively lower-ranking academic institutions, could differ systematically in
topics, keywords, number of figures and equations, among others, and this
would invalidate a naïve comparison.
Statistical matching is a commonly used strategy to adjust for confounding in
empirical studies (Rubin, 1973, 1979; Rosenbaum, 2002, 2010; Ho et al., 2007;
Stuart, 2010). The ultimate goal of statistical matching is to embed
non-randomized, observational data into an approximate randomized controlled
experiment by _designing_ a matched control (or comparison) group that
resembles the treated group in observed pre-treatment covariates by matching
on these covariates (Rubin, 1973, 1979), a balancing score derived from these
covariates, e.g., Rosenbaum and Rubin’s (1983) propensity score, or a
combination of both (Rosenbaum and Rubin, 1985).
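As a toy illustration of matching on a scalar balancing score, a greedy 1:1 nearest-neighbor match within a caliper might look like the sketch below. This is only the simplest variant; the matched designs the study actually uses are more sophisticated, and the data format (dicts mapping unit id to an estimated propensity score) is an assumption for illustration.

```python
def greedy_pair_match(treated, control, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on a scalar balancing score.

    `treated` and `control` are assumed to be dicts mapping unit id to a
    score (e.g., an estimated propensity score). Returns a list of
    (treated_id, control_id) pairs whose score gap is within `caliper`;
    each control unit is used at most once.
    """
    pairs = []
    available = dict(control)
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Nearest remaining control on the score scale.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs
```

Greedy matching is order-dependent and can leave treated units unmatched; optimal matching algorithms avoid these drawbacks at greater computational cost, which is one reason practical designs go beyond this sketch.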
We note that there are multiple ways to adjust for observed covariates and
draw causal conclusions under the potential outcomes framework, statistical
matching being one of them. Other commonly used methods include weighting,
modeling the potential outcomes, and a combination of both. We found a matched
observational study particularly suited for our case study for three reasons.
First, it facilitates testing Fisher’s sharp null hypothesis of no effect,
which is an appropriate causal null hypothesis encoding an intriguing notion
of _fairness_ , as we will discuss in detail in Section 3.4. Second, a matched
design naturally takes into account similarity of textual data (for instance,
as measured by cosine similarity based on their embeddings) and is capable of
balancing some high-dimensional covariates like keywords in our data analysis.
A third strength is mostly stylistic: a matched comparison best resembles
Bertrand and Mullainathan’s (2004) seminal field experiment and is perhaps
the easiest-to-digest way to present statistical analysis results to a
non-technical audience.
In the rest of this section, we articulate essential elements in our analysis,
including study units, treatment to be hypothetically manipulated, potential
outcomes, timing of the treatment, pre-treatment variables and causal null
hypothesis of interest.
### 3.2 Study units; treatment and its timing; potential outcomes
As discussed in detail in Greiner and Rubin (2011), there are two agents in
our analysis of the effect of authorship metadata on area chairs' final
decisions: an ICLR article that has been peer-reviewed and received reviewers'
ratings, and a decision maker, i.e., an area chair or meta reviewer (AC for
short), who assigned the final acceptance status to the article. In our
analysis, each study unit is a (peer-reviewed ICLR article, area chair) pair.
There are a total of $N=10,289$ study units in our compiled database, and
$5,313$ of them have three or four reviewers and an average rating between $5$
and $7$. We will write the $i$-th study unit as
$SU_{i}=(\text{article}_{i},\text{AC}_{i})$.
We define the treatment of interest as an area chair’s perception of a peer-
reviewed article’s authorship metadata. This definition is modeled after
Bertrand and Mullainathan (2004) and Greiner and Rubin (2011) and implies
the _timing_ of the treatment: We imagine a hypothetical randomized experiment
where peer-reviewed ICLR articles, whose constituent parts include text,
tables, figures, reviewers’ ratings and comments, are randomly assigned
authorship metadata and presented to the area chair for a final decision. This
timing component of the treatment is critical because it implies what are
meant to be “pre-treatment variables” under Neyman-Rubin’s causal inference
framework, as we will discuss in detail in Section 3.3.
In principle, the most granular author metadata is a complete list of author
names with their corresponding academic or research institutions. Let
$\vec{A}=\vec{a}$ denote author metadata and $\mathcal{A}$ the set of all
possible configurations of author metadata. There is one potential outcome
$Y_{i}(\vec{a})$ associated with unit $i$ and each $\vec{a}\in\mathcal{A}$; in
words, there is one final decision associated with each peer-reviewed article
had the author metadata been $\vec{a}$. We will assume the consistency
assumption so that the observed outcome
$Y_{i}^{\textsf{obs}}=Y_{i}(\vec{a}_{i}^{\textsf{obs}})$. One may adopt a variant of
the _Stable Unit Treatment Value Assumption_ or SUTVA (Rubin, 1980) to reduce
the number of potential outcomes. For instance, one may further assume that
the potential outcome $Y_{i}(\vec{a})$ depends on author metadata $\vec{a}$
only via authors’ academic institutions. Let $f(\cdot)$ denote a mapping from
author metadata to authors’ academic institutions, then this “stability”
assumption amounts to assuming $Y_{i}(\vec{a})=Y_{i}(\vec{a}^{\prime})$ when
$f(\vec{a})=f(\vec{a}^{\prime})$. We do not _a priori_ make such stability
assumptions.
###### Example 1 (Field experiment in Bertrand and Mullainathan (2004)).
In Bertrand and Mullainathan's (2004) field experiment, each study unit
consists of a resume $i$ and a human resource person reading the resume $i$,
i.e., $SU_{i}=(\text{resume}_{i},\text{HR person}_{i})$. Treatment is a
person’s perception of the name on the resume. In this case, $\mathcal{A}$
would consist of all names and $Y_{i}(a)$ is the potential administrative
decision had the resume $i$ been associated with name $a$. If we further make
the stability assumption that $Y_{i}(a)$ depends on $a$ only via its race and
ethnicity connotation as in Bertrand and Mullainathan (2004) and define
$f(a)=1$ if the name $a$ is African-American sounding and $0$ if it is White
sounding, then the set of potential outcomes $Y_{i}(a),~a\in\mathcal{A}$
would reduce to $\{Y_{i}(1),Y_{i}(0)\}$.
### 3.3 Observed and unobserved pre-treatment variables
According to Rubin (2005), covariates refer to “variables that take their
values before the treatment assignment or, more generally, simply cannot be
affected by the treatment.” Below, we briefly review a dichotomy of pre-
treatment variables in the context of drawing causal conclusions from textual
data (Zhang and Zhang, 2022).
In human populations research, pre-treatment variables or covariates are often
divided into two broad categories: observed and unobserved; see, e.g.,
Rosenbaum (2002, 2010). A randomized controlled experiment like the one in
Bertrand and Mullainathan (2004) had the key advantage of balancing both
observed and unobserved confounding, while drawing causal conclusions from
observational data inevitably suffers from the concern of unmeasured
confounding and researchers often control for a large number of observed
covariates in order to alleviate this concern.
When study units are textual data, Zhang and Zhang (2022) divide observed
covariates into two types: _explicit observed covariates_ $\bf
X^{\textsf{exp}}_{\textsf{obs}}$ that could be derived from textual data at
face value, e.g., number of equations, tables and illustrations in the
article, and _implicit observed covariates_ $\bf
X^{\textsf{imp}}_{\textsf{obs}}$ that capture higher-level aspects of textual
data, e.g., the topic, flow and novelty of the article. In our case study, we
will consider the following explicit observed covariates: year of submission,
reviewers’ ratings, number of authors, sections, figures and references, and
keywords. We further extracted each article’s complexity, topic and reviewers’
sentiment using state-of-the-art, natural language processing models as
described in Section 2.
Unmeasured confounding is a major concern for any attempt to draw a cause-and-
effect conclusion from observational data, regardless of the covariate
adjustment method. Despite researchers' best intentions and efforts to control
for all relevant pre-treatment variables via matching, there is always a
concern about unmeasured confounding bias when working with observational
data. In our analysis of ICLR papers, we identified two sources of unmeasured
confounding. First, there could be residual confounding due to the
insufficiency of language models (such as the SPECTER model) in summarizing or
extracting implicit observed covariates $\bf X^{\textsf{imp}}_{\textsf{obs}}$
like topics, flow and sentiment. Second, in our analysis, we used numeric
ratings from reviewers as a proxy of the quality and novelty of the article.
Reviewers' ratings may not fully capture the quality of the
articles. Unmeasured confounding may lead to a spurious causal conclusion, so
researchers routinely examine the robustness of a putative causal conclusion
using a sensitivity analysis (see, e.g., Rosenbaum, 2002, 2010; VanderWeele
and Ding, 2017, among many others).
### 3.4 Causal null hypothesis: A case for Fisher
A causal statement is necessarily a comparison among potential outcomes. In
the context of a many-level treatment assignment, Fisher’s sharp null
hypothesis states the following:
$H_{0,\text{sharp}}\colon\quad Y_{i}(\vec{a})=Y_{i}(\vec{a}^{\prime}),\quad\forall i=1,\dots,N~\text{and}~\vec{a},\vec{a}^{\prime}\in\mathcal{A}.$ (1)
Fisher’s sharp null hypothesis prescribes a notion of fairness that, arguably,
best suits our vision: area chairs’ final decisions of the articles are
_irrelevant_ of author metadata; in other words, the decision $Y_{i}(\vec{a})$
could potentially depend on any substantive aspect of the article $i$,
including its topic, quality of writing, reviewers’ ratings, etc., but would
remain the same had we changed author metadata from $\vec{a}$ to
$\vec{a}^{\prime}$.
In addition to Fisher’s sharp null hypothesis, Neyman’s weak null hypothesis,
which states that the sample average treatment effect is zero, is another
commonly tested causal null hypothesis. Unlike Fisher’s sharp null, Neyman’s
weak null hypothesis allows perception bias of varying magnitude for all
article-AC pairs, as long as these biases would cancel out each other in one
way or another. We found this a sub-optimal notion of fairness compared to
that encoded by Fisher’s sharp null hypothesis, and we will focus on testing
Fisher’s sharp null hypothesis in our data analysis.
###### Example 2 (Example 1, continued).
Bertrand and Mullainathan (2004) found that White-sounding names received $50$
percent more callbacks for interviews; under the stability assumption
discussed in Section 3.2, their findings could be interpreted as a causal
effect of perceiving White versus African-American sounding names. In the
absence of the stability assumption, Bertrand and Mullainathan's result could
still be interpreted as providing evidence against Fisher's sharp null
hypothesis $H_{0,\text{sharp}}$ in its most generic form, although in what
specific ways $H_{0,\text{sharp}}$ is violated needs further investigation.
Unlike Bertrand and Mullainathan's (2004) randomized experiment that randomly
assigns resume names, our cohort of ICLR articles are not randomly assigned
authorship metadata. It is conceivable that articles with more “prestigious”
authors, however one might want to define the concept of “prestige,” could
differ systematically in their reviewers’ ratings, topics, etc., and this
difference in baseline covariates could potentially introduce a spurious
association between author metadata and area chairs’ final decisions. To
overcome this, we embed the observational data into a matched-pair design by
constructing $I$ matched pairs, each with two peer-reviewed articles, indexed
by $j=1,2$, such that these two articles are as similar as possible in their
covariates but with different author metadata. Let $\vec{a}_{ij}$ denote the
author metadata associated with article $j$ in the matched pair $i$. Such a
matched-pair design enables us to test the following sharp null hypothesis:
$H^{\prime}_{0,\text{sharp}}\colon\quad Y_{ij}(\vec{a}_{i1})=Y_{ij}(\vec{a}_{i2}),\quad\forall i=1,\dots,I,~j=1,2.$ (2)
We note that $H_{0,\text{sharp}}$ in (1) implies $H^{\prime}_{0,\text{sharp}}$
in (2), so rejecting $H^{\prime}_{0,\text{sharp}}$ would then provide evidence
against $H_{0,\text{sharp}}$. Such a design is termed a "near-far" design in the
literature (Baiocchi et al., 2010, 2012) and has been used in a number of
empirical studies (see, e.g., Lorch et al., 2012; Neuman et al., 2014;
MacKay et al., 2021, among others).
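Under $H^{\prime}_{0,\text{sharp}}$, concordant matched pairs are uninformative, and the number of discordant pairs of one type follows a Binomial$(n_{d},1/2)$ distribution, so an exact two-sided McNemar test reduces to an exact binomial test. A minimal sketch of this reduction follows; it uses the doubled-smaller-tail two-sided convention (one common choice, assumed here rather than taken from Fay (2010) directly), with the discordant counts $178$ and $218$ from our design-M1 analysis in Section 4 as input:

```python
from math import comb

def mcnemar_exact_p(b: int, c: int) -> float:
    """Exact two-sided McNemar test via the binomial distribution:
    under the sharp null, the discordant pairs of one type follow
    Binomial(b + c, 1/2); the smaller exact tail is doubled."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# Discordant counts from the design-M1 matched-pair analysis
p_value = mcnemar_exact_p(178, 218)
```

With $178$ and $218$ discordant pairs this yields a p-value of about $0.05$, in line with the analysis reported below.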
## 4 Data analysis: study design and outcome analysis
### 4.1 A first matched comparison: design M1
We restricted our attention to $5,313$ borderline articles that were peer-
reviewed by $3$ or $4$ reviewers and received an average rating between $5$
and $7$. We first considered a study design M1 where each matched pair
consisted of one article whose authors’ average institution ranking was among
the top $30\%$ of these $5,313$ submissions and the other article whose
authors’ average institution ranking was among the bottom $70\%$. Columns 2
and 3 of Table 1 summarize the characteristics, including structural features
and derived higher-level features, of $1,585$ top-$30\%$ articles and those of
the other $3,728$ articles. As one closely examines these two columns, a
number of features, including submission year, number of figures, complexity
score as judged by the language model, keyword and topic, differ
systematically among these submissions. Matching helps remove most of the
overt bias: In the matched comparison group (which is a subset of size
$n=1,585$ from the reservoir of $3,728$ articles), standardized mean
differences of all but one covariate are less than $0.1$, or one-tenth of one
pooled standard deviation. In fact, design M1 required near-exact matching on
important covariates like reviewers’ ratings and year of submission, and
achieved near-fine balance for categorical variables like topic cluster and
primary keyword (Rosenbaum et al., 2007). Algorithms used to construct the
matched design M1 will be described in detail in Section 4.3. Panel C in
Figure 2 assesses how different two articles in each matched pair are in their
authors’ average institution rankings; see Figure 5 in Appendix for similar
plots in the four-reviewer stratum. The average, within-matched-pair
difference in authors’ average institution ranking is $74.3$ among $983$
matched pairs in the three-reviewer stratum (median: $49.6$; interquartile
range $26.6$-$96.7$) and $99.6$ among $602$ matched pairs in the four-reviewer
stratum (median: $63.5$; interquartile range $28.5$-$135.0$). We concluded
that there was a sizable difference in author metadata between two articles in
each matched pair.
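The standardized mean differences (SMDs) used as balance diagnostics in Table 1 can be computed covariate by covariate. A minimal sketch follows, assuming the common definition that divides the absolute difference in group means by the pooled standard deviation $\sqrt{(s_{t}^{2}+s_{c}^{2})/2}$; the covariate values are hypothetical:

```python
from statistics import mean, stdev

def smd(treated, control):
    """Absolute standardized mean difference: |mean_t - mean_c|
    divided by the pooled SD sqrt((s_t^2 + s_c^2) / 2)."""
    pooled_sd = ((stdev(treated) ** 2 + stdev(control) ** 2) / 2) ** 0.5
    return abs(mean(treated) - mean(control)) / pooled_sd

# Hypothetical covariate (say, number of figures) before/after matching
treated_x = [14, 12, 16, 13, 15]
raw_controls = [10, 9, 12, 11, 8]
matched_controls = [13, 12, 15, 14, 16]
smd_before = smd(treated_x, raw_controls)     # large imbalance
smd_after = smd(treated_x, matched_controls)  # below the 0.1 benchmark
```

An SMD below $0.1$, one-tenth of a pooled standard deviation, is the conventional benchmark for adequate balance used throughout this section.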
Figure 2: Diagnostics of design M1. Panel A: Estimated propensity score
distributions of top-30% articles, bottom-70% articles, and matched comparison
articles. Panel B: Distribution of between-article cosine similarity before
(median = 0.71) and after matching (median = 0.80). Panel C: Boxplots of
authors' average institution rankings and matched-pair differences in authors'
average institution rankings. Panel D: Permutation distribution and the
observed test statistic of the classification permutation test.

Table 1: Characteristics of articles before and after matching in the design M1.
| Covariate | Bottom 70% Ranking Articles (n = 3,728) | Top 30% Ranking Articles (n = 1,585) | SMD (Before Matching) | Matched Comparison Articles (n = 1,585) | SMD (After Matching) |
|---|---|---|---|---|---|
| **Conference and Reviewer** | | | | | |
| Year of submission (%): 2017 | 109 (2.9) | 129 (8.1) | 0.23 | 92 (5.8) | 0.10 |
| 2018 | 299 (8.0) | 221 (13.9) | 0.19 | 201 (12.7) | 0.04 |
| 2019 | 520 (13.9) | 303 (19.1) | 0.14 | 304 (19.2) | $<0.01$ |
| 2020 | 548 (14.7) | 231 (14.6) | 0.003 | 234 (14.8) | $<0.01$ |
| 2021 | 1200 (32.2) | 388 (24.5) | 0.171 | 412 (26.0) | 0.03 |
| 2022 | 1052 (28.2) | 313 (19.7) | 0.2 | 342 (21.6) | 0.05 |
| Reviewer ratings: Reviewer I | 6.99 (0.92) | 6.99 (0.91) | 0.008 | 6.97 (0.90) | 0.02 |
| Reviewer II | 6.12 (0.80) | 6.12 (0.79) | 0.001 | 6.12 (0.78) | $<0.01$ |
| Reviewer III | 5.21 (1.08) | 5.18 (1.09) | 0.031 | 5.19 (1.10) | 0.01 |
| Reviewer IV∗ | 4.71 (1.06) | 4.74 (1.11) | 0.031 | 4.74 (1.09) | $<0.01$ |
| Reviewer sentiment: Reviewer I | 0.75 (0.10) | 0.75 (0.11) | 0.026 | 0.75 (0.11) | 0.03 |
| Reviewer II | 0.64 (0.09) | 0.64 (0.09) | 0.021 | 0.64 (0.09) | 0.04 |
| Reviewer III | 0.55 (0.10) | 0.54 (0.10) | 0.081 | 0.54 (0.10) | 0.01 |
| Reviewer IV∗ | 0.49 (0.09) | 0.50 (0.09) | 0.062 | 0.50 (0.09) | 0.01 |
| **Article Metadata** | | | | | |
| No. Author | 4.21 (1.67) | 4.17 (1.69) | 0.024 | 4.13 (1.67) | 0.03 |
| No. Figure | 13.71 (7.26) | 12.55 (7.55) | 0.156 | 12.42 (6.60) | 0.02 |
| No. Reference | 42.53 (16.59) | 42.19 (16.93) | 0.020 | 40.98 (14.94) | 0.07 |
| No. Section | 19.94 (7.16) | 19.96 (7.11) | 0.003 | 19.74 (6.90) | 0.03 |
| **Complexity, Topics and Keywords** | | | | | |
| Complexity | 0.84 (0.03) | 0.85 (0.03) | 0.285 | 0.85 (0.03) | 0.06 |
| Topic cluster† (%): RL/Meta Learning/Robustness | 367 (9.8) | 113 (7.1) | 0.097 | 113 (7.1) | 0 |
| RL/CV/Robustness | 298 (8.0) | 80 (5.0) | 0.122 | 80 (5.0) | 0 |
| DL/GM/CNN | 345 (9.3) | 147 (9.3) | 0 | 147 (9.3) | 0 |
| DL/RNN/GNN | 365 (9.8) | 133 (8.4) | 0.049 | 133 (8.4) | 0 |
| DL/Optimization/Generalization | 399 (10.7) | 126 (7.9) | 0.097 | 126 (7.9) | 0 |
| DL/Robustness/Adversarial Examples | 445 (11.9) | 270 (17.0) | 0.145 | 270 (17.0) | 0 |
| DL/RL/Unsupervised Learning/GM | 319 (8.6) | 143 (9.0) | 0.014 | 143 (9.0) | 0 |
| DL/Multi-Agent or Model-Based RL/IL | 475 (12.7) | 209 (13.2) | 0.015 | 209 (13.2) | 0 |
| DL/Federated or Distributed Learning | 370 (9.9) | 260 (16.4) | 0.193 | 260 (16.4) | 0 |
| GM/GAN/VAE | 345 (9.3) | 104 (6.6) | 0.1 | 104 (6.6) | 0 |
| Primary keyword (%): NA | 950 (25.5) | 347 (21.9) | 0.085 | 368 (23.2) | 0.03 |
| Other | 794 (21.3) | 292 (18.4) | 0.073 | 312 (19.7) | 0.03 |
| Deep learning | 393 (10.5) | 242 (15.3) | 0.144 | 238 (15.0) | 0.01 |
| Reinforcement learning | 290 (7.8) | 183 (11.5) | 0.126 | 181 (11.4) | $<0.01$ |
| Graph neural networks | 145 (3.9) | 41 (2.6) | 0.073 | 39 (2.5) | $<0.01$ |
| Representation learning | 109 (2.9) | 40 (2.5) | 0.025 | 39 (2.5) | $<0.01$ |
| Generative models | 89 (2.4) | 35 (2.2) | 0.013 | 38 (2.4) | 0.01 |
| Meta-learning | 79 (2.1) | 34 (2.1) | 0 | 33 (2.1) | $<0.01$ |
| Self-supervised learning | 72 (1.9) | 23 (1.5) | 0.031 | 24 (1.5) | $<0.01$ |
| Unsupervised learning | 70 (1.9) | 43 (2.7) | 0.053 | 32 (2.0) | 0.05 |
| Neural networks | 62 (1.7) | 30 (1.9) | 0.015 | 25 (1.6) | 0.02 |
| Generative adversarial networks | 56 (1.5) | 14 (0.9) | 0.055 | 18 (1.1) | 0.02 |
| Optimization | 43 (1.2) | 26 (1.6) | 0.034 | 26 (1.6) | 0 |
| Variational inference | 39 (1.0) | 8 (0.5) | 0.058 | 11 (0.7) | 0.02 |
| Transformer | 37 (1.0) | 20 (1.3) | 0.028 | 15 (0.9) | 0.04 |
| Generalization | 36 (1.0) | 32 (2.0) | 0.082 | 22 (1.4) | 0.05 |
| **Decision** | | | | | |
| Acceptance (%) | 1928 (51.7) | 811 (51.2) | | 851 (53.7) | |

∗ Reviewer IV's rating and sentiment results are derived from articles within
the stratum of four reviewers.

† RL: Reinforcement Learning; GM: Generative Models; CV: Computer Vision; CNN:
Convolutional Neural Nets; RNN: Recurrent Neural Nets; GNN: Graph Neural Nets;
IL: Imitation Learning; GAN: Generative Adversarial Nets; VAE: Variational
Auto-Encoder. Note that the description is not exhaustive.
To further demonstrate that the two groups are comparable, Panel A of Figure 2
displays the distribution of the estimated “propensity score,” defined as the
probability that authors’ average ranking of an article was among top $30\%$,
in each of the following three groups: top-30% articles (red), bottom-70%
articles (blue), and matched comparison articles (yellow), all in the three-
reviewer stratum. Similar plots for articles with four reviewers can be found
in Appendix. As is evident from the figure, the propensity score distribution
of the matched comparison articles is more similar to that of the top-30%
articles. Panel B of Figure 2 further plots the cosine similarity calculated
from the raw textual data of each article. It is also evident that two
articles in the same matched pair have higher cosine similarity than two
randomly drawn articles prior to matching. Our designed
matched comparison M1 appears to be well balanced in many observed covariates
and resembles a hypothetical RCT where we randomly assign author metadata.
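The cosine similarity underlying Panel B is computed directly from pairs of embedding vectors. A minimal sketch follows; the four-dimensional embeddings below are hypothetical stand-ins (actual SPECTER embeddings are much higher-dimensional):

```python
def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

# Hypothetical low-dimensional document embeddings
e1 = [0.2, 0.8, 0.1, 0.4]      # a treated article
e2 = [0.25, 0.75, 0.05, 0.45]  # its matched comparison article
e3 = [0.9, 0.1, 0.7, 0.0]      # an unrelated article
```

A well-matched pair such as `(e1, e2)` should score much closer to $1$ than a randomly drawn pair such as `(e1, e3)`, which is exactly the shift visible in the before/after matching distributions.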
The question remains as to whether the balance is sufficiently good compared
to an authentic RCT and could justify randomization inference. We conducted a
formal diagnostic test using Gagnon-Bartsch and Shem-Tov's (2019)
classification permutation test (CPT) based on random forests to test the
randomization assumption for the matched cohort. The randomization assumption
cannot be rejected in either the three-reviewer or four-reviewer stratum
(p-value = $0.194$ and $0.641$, respectively); see the null distribution and
test statistic in Panel D of Figure 2 and Figure 5 in Appendix.
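The logic of the classification permutation test is that, if treatment were randomly assigned within the matched cohort, no classifier should predict treatment status from covariates better than chance. The sketch below illustrates only that permutation logic: it substitutes a simple nearest-centroid classifier and in-sample accuracy for the random forests of Gagnon-Bartsch and Shem-Tov (2019), so it is not their exact procedure, and the covariates are synthetic:

```python
import random

def nearest_centroid_accuracy(X, y):
    """In-sample accuracy of a nearest-centroid classifier, a simple
    stand-in for the random forests used in the actual CPT."""
    groups = {}
    for x, label in zip(X, y):
        groups.setdefault(label, []).append(x)
    centroids = {g: [sum(col) / len(rows) for col in zip(*rows)]
                 for g, rows in groups.items()}
    def sq_dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    correct = 0
    for x, label in zip(X, y):
        pred = min(centroids, key=lambda g: sq_dist(x, centroids[g]))
        correct += pred == label
    return correct / len(y)

def classification_permutation_test(X, y, n_perm=200, seed=0):
    """P-value: share of label permutations whose classification
    accuracy is at least the observed accuracy."""
    rng = random.Random(seed)
    observed = nearest_centroid_accuracy(X, y)
    hits = 0
    for _ in range(n_perm):
        y_perm = list(y)
        rng.shuffle(y_perm)
        hits += nearest_centroid_accuracy(X, y_perm) >= observed
    return (1 + hits) / (1 + n_perm)

# Strongly separated (synthetic) covariates: the test should reject
X_sig = [[i * 0.01] for i in range(20)] + [[5 + i * 0.01] for i in range(20)]
y_sig = [0] * 20 + [1] * 20
p_sig = classification_permutation_test(X_sig, y_sig, n_perm=99, seed=1)
```

A large p-value, as we observed for M1 in both strata, is consistent with the randomization assumption; the deliberately separated covariates above instead yield a small p-value.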
Table 2: Contingency tables, p-values (exact, two-sided McNemar's test) and odds ratios (Fay, 2010) of $1,585$ matched pairs in the design M1 and $1,051$ matched pairs in a strengthened design M2. Defining and interpreting the odds ratio and its confidence interval requires an additional "stability" assumption discussed in Section 3.2.

Panel A: Design M1

| | Comparison: Accepted | Comparison: Rejected | P-value | Odds Ratio [95% CI] |
|---|---|---|---|---|
| Top-30% Accepted | 633 | 178 | 0.050 | 0.82 [0.67, 1.00] |
| Top-30% Rejected | 218 | 556 | | |

Panel B: Design M2

| | Comparison: Accepted | Comparison: Rejected | P-value | Odds Ratio [95% CI] |
|---|---|---|---|---|
| Top-20% Accepted | 443 | 115 | 0.149 | 0.83 [0.64, 1.07] |
| Top-20% Rejected | 139 | 354 | | |
Panel A of Table 2 summarizes the outcomes of $1,585$ matched pairs of two
articles. We tested Fisher’s sharp null hypothesis
$H^{\prime}_{0,\text{sharp}}$ reviewed and discussed in Section 3.4 using a
two-sided, exact McNemar’s test (Fay, 2010) and obtained a p-value of
$0.050$, which suggested some weak evidence that authorship metadata was
associated with AC’s final decisions. Under an additional stability assumption
stating that the potential acceptance status $Y(\vec{a})$ depended on author
metadata $\vec{a}$ only via authors’ average institution ranking and remained
unchanged when the average ranking is among the top-30% or among the
bottom-70%, we estimated the odds ratio to be $0.82$ (95% CI: [0.67, 1.00]),
providing some weak evidence that borderline articles from top-30%
institutions were _less_ favored by area chairs compared to their counterparts
from the comparison group. Note that a naïve, unadjusted comparison between
the top-30% and bottom-70% borderline articles would in fact mask any
difference (acceptance rate: 51.2% versus 51.7% before matching).
Our analysis seems to defy the common wisdom that there is a “status bias”
favoring higher-profile authors. It is then natural to ask, if author metadata
were even more drastically different, would we then see some evidence of a
status bias that better aligns with previous findings? This inquiry led
to a second, strengthened design M2.
### 4.2 A strengthened design M2
Baiocchi et al. (2010) first considered "strengthening" an encouragement
variable (differential distance) in the context of an instrumental variable
analysis of the effect of low-level versus high-level neonatal intensive care
units (NICUs) on mortality rate. In their analysis, Baiocchi et al. (2010)
compared mothers who lived near to and far from a high-level NICU and
“strengthened” the comparison by restricting their analysis to a smaller
subset of comparable mothers who lived very near to and very far from a high-
level NICU. We adopted their idea here and constructed a strengthened design
M2 where one article in each matched pair is a top-20% article (as opposed to
a top-30% article in design M1) and the other matched comparison article was
from the reservoir of bottom-80% articles. We also added a “dose-caliper” in
the statistical matching algorithm to further separate the average ranking
within each matched pair; see Section 4.3 for details.
A total of $1,051$ matched pairs were formed. Panel C of Figure 3 summarizes
the within-matched-pair difference in authors’ average institution ranking
across $1,051$ matched pairs in the strengthened design M2. In this
strengthened design, the average difference now increases from $74.3$ (as in
the design M1) to $108.8$. Importantly, the cohort of top-20% articles and
their matched comparison group are still comparable in all baseline
covariates, as summarized in Table 3. Similar to the design M1, we cannot
reject the randomization assumption based on the classification permutation
test (p-value = $0.961$).
Panel B of Table 2 summarizes the outcomes of $1,051$ matched pairs of two
articles in the strengthened design M2. Under the additional stability
assumption, we obtained a near identical point estimate for odds ratio (OR =
$0.83$ in M2 versus $0.82$ in M1), though the estimate is slightly less
precise as a result of a smaller sample size ($1,051$ in M2 versus $1,585$ in
M1). Consistent results across two study designs help reinforce the conclusion
that we did not find evidence supporting a “status bias” favoring authors from
high-ranking institutions in this cohort of borderline articles.
Figure 3: Diagnostics of design M2. Panel A: Estimated propensity score
distributions of top-20% articles, bottom-80% articles, and matched comparison
articles. Panel B: Distribution of between-article cosine similarity before
(median = 0.71) and after matching (median = 0.78). Panel C: Boxplots of
authors' average institution rankings and matched-pair differences in authors'
average institution rankings. Panel D: Permutation distribution and the
observed test statistic of the classification permutation test.

Table 3: Characteristics of articles before and after matching in a strengthened design M2.
| Covariate | All Control Articles (n = 4,262) | All Treated Articles (n = 1,051) | SMD (Before Matching) | Matched Control Articles (n = 1,051) | SMD (After Matching) |
|---|---|---|---|---|---|
| **Conference and Reviewer** | | | | | |
| Year of submission (%): 2017 | 144 (3.4) | 94 (8.9) | 0.23 | 77 (7.3) | 0.07 |
| 2018 | 380 (8.9) | 140 (13.3) | 0.14 | 126 (12.0) | 0.04 |
| 2019 | 614 (14.4) | 209 (19.9) | 0.146 | 201 (19.1) | 0.02 |
| 2020 | 621 (14.6) | 158 (15.0) | 0.011 | 157 (14.9) | $<0.01$ |
| 2021 | 1343 (31.5) | 245 (23.3) | 0.185 | 272 (25.9) | 0.06 |
| 2022 | 1160 (27.2) | 205 (19.5) | 0.183 | 218 (20.7) | 0.03 |
| Reviewer ratings: Reviewer I | 6.98 (0.92) | 7.04 (0.91) | 0.059 | 7.01 (0.89) | 0.02 |
| Reviewer II | 6.12 (0.79) | 6.13 (0.79) | 0.017 | 6.11 (0.78) | 0.03 |
| Reviewer III | 5.21 (1.07) | 5.16 (1.11) | 0.042 | 5.17 (1.09) | $<0.01$ |
| Reviewer IV∗ | 4.71 (1.06) | 4.77 (1.12) | 0.052 | 4.74 (1.13) | 0.02 |
| Reviewer sentiment: Reviewer I | 0.75 (0.10) | 0.76 (0.11) | 0.051 | 0.75 (0.10) | 0.10 |
| Reviewer II | 0.64 (0.09) | 0.64 (0.09) | 0.005 | 0.64 (0.09) | 0.08 |
| Reviewer III | 0.55 (0.10) | 0.54 (0.10) | 0.062 | 0.54 (0.10) | 0.03 |
| Reviewer IV∗ | 0.49 (0.09) | 0.50 (0.09) | 0.120 | | 0.06 |
| **Article Metadata** | | | | | |
| No. Author | 4.21 (1.67) | 4.17 (1.72) | 0.023 | 4.19 (1.66) | 0.01 |
| No. Figure | 13.60 (7.41) | 12.41 (7.10) | 0.164 | 12.32 (6.45) | 0.01 |
| No. Reference | 42.56 (16.61) | 41.91 (17.02) | 0.039 | 40.50 (15.12) | 0.08 |
| No. Section | 19.94 (7.16) | 19.96 (7.06) | 0.003 | 19.54 (6.94) | 0.06 |
| **Complexity, Topics and Keywords** | | | | | |
| Complexity | 0.84 (0.03) | 0.85 (0.03) | 0.308 | 0.85 (0.03) | 0.15 |
| Topic cluster† (%): RL/Meta Learning/Robustness | 404 (9.5) | 76 (7.2) | 0.083 | 76 (7.2) | 0 |
| RL/CV/Robustness | 328 (7.7) | 50 (4.8) | 0.12 | 50 (4.8) | 0 |
| DL/GM/CNN | 393 (9.2) | 99 (9.4) | 0.007 | 99 (9.4) | 0 |
| DL/RNN/GNN | 408 (9.6) | 90 (8.6) | 0.035 | 90 (8.6) | 0 |
| DL/Optimization/Generalization | 439 (10.3) | 86 (8.2) | 0.073 | 86 (8.2) | 0 |
| DL/Robustness/Adversarial Examples | 551 (12.9) | 164 (15.6) | 0.077 | 164 (15.6) | 0 |
| DL/RL/Unsupervised Learning/GM | 365 (8.6) | 97 (9.2) | 0.021 | 97 (9.2) | 0 |
| DL/Multi-Agent or Model-Based RL/IL | 545 (12.8) | 139 (13.2) | 0.012 | 139 (13.2) | 0 |
| DL/Federated or Distributed Learning | 435 (10.2) | 195 (18.6) | 0.241 | 195 (18.6) | 0 |
| GM/GAN/VAE | 394 (9.2) | 55 (5.2) | 0.155 | 55 (5.2) | 0 |
| Primary keyword (%): NA | 1084 (25.4) | 213 (20.3) | 0.122 | 220 (20.9) | 0.01 |
| Other | 891 (20.9) | 195 (18.6) | 0.058 | 195 (18.6) | 0 |
| Deep learning | 460 (10.8) | 175 (16.7) | 0.172 | 178 (16.9) | $<0.01$ |
| Reinforcement learning | 353 (8.3) | 120 (11.4) | 0.104 | 122 (11.6) | $<0.01$ |
| Graph neural networks | 162 (3.8) | 24 (2.3) | 0.087 | 24 (2.3) | 0 |
| Representation learning | 127 (3.0) | 22 (2.1) | 0.057 | 22 (2.1) | 0 |
| Generative models | 99 (2.3) | 25 (2.4) | 0.007 | 26 (2.5) | $<0.01$ |
| Meta-learning | 89 (2.1) | 24 (2.3) | 0.014 | 23 (2.2) | $<0.01$ |
| Self-supervised learning | 80 (1.9) | 15 (1.4) | 0.039 | 14 (1.3) | $<0.01$ |
| Unsupervised learning | 79 (1.9) | 34 (3.2) | 0.083 | 26 (2.5) | 0.04 |
| Neural networks | 73 (1.7) | 19 (1.8) | 0.008 | 17 (1.6) | 0.02 |
| Generative adversarial networks | 62 (1.5) | 8 (0.8) | 0.066 | 10 (1.0) | 0.02 |
| Generalization | 48 (1.1) | 20 (1.9) | 0.066 | 16 (1.5) | 0.03 |
| Optimization | 47 (1.1) | 22 (2.1) | 0.08 | 22 (2.1) | 0 |
| Variational inference | 44 (1.0) | 3 (0.3) | 0.087 | 7 (0.7) | 0.05 |
| **Decision** | | | | | |
| Acceptance (%) | 2181 (51.2) | 558 (53.1) | | 582 (55.4) | |

∗ Reviewer IV's rating and sentiment results are derived from articles within
the stratum of four reviewers.

† RL: Reinforcement Learning; GM: Generative Models; CV: Computer Vision; CNN:
Convolutional Neural Nets; RNN: Recurrent Neural Nets; GNN: Graph Neural Nets;
IL: Imitation Learning; GAN: Generative Adversarial Nets; VAE: Variational
Auto-Encoder. Note that the description is not exhaustive.
### 4.3 Matching algorithm: matching one sample according to multiple
criteria
The matched sample M1 displayed in Table 1 was constructed using an efficient,
network-flow-based optimization algorithm built upon a tripartite network
(Zhang et al., 2021) as opposed to a traditional, bipartite network
(Rosenbaum, 1989). Compared to a bipartite network, a tripartite network
structure helps separate two tasks in the design of an observational study:
(1) constructing closely matched pairs and (2) constructing a well-balanced
matched sample. A detailed account of the algorithm can be found in Zhang et
al. (2021); below, we describe how we designed the costs in the tripartite
network and achieved the features of M1 and M2 described in Sections 4.1 and
4.2.
Figure 4: An illustrative plot of a tripartite network consisting of three
treated units (top-30% articles), five candidate control units (bottom-70%
articles), and their mirror copies.
Figure 4 illustrates the basic tripartite network structure with three units,
$\{\gamma_{1},\gamma_{2},\gamma_{3}\}$, from the top-30% articles and five
units, $\{\tau_{1},\dots,\tau_{5}\}$, from the reservoir of bottom-70%
articles. To run a tripartite-network-based matching algorithm, two costs need
to be specified (Zhang et al., 2021). The _cost_
$\delta_{\gamma_{i},\tau_{j}}$ associated with each edge $e$ connecting
$\gamma_{i}$ and $\tau_{j}$ in the left part of the network encodes criteria
for closely matched pairs. To construct the matched sample M1, we let
$\delta_{\gamma_{i},\tau_{j}}$ be the cosine similarity derived from the
SPECTER embeddings of article $\gamma_{i}$ and $\tau_{j}$. We then enforce
near-exact matching on submission year and reviewers’ ratings by adding a
large penalty to $\delta_{\gamma_{i},\tau_{j}}$ if articles $\gamma_{i}$ and
$\tau_{j}$ were submitted to the ICLR conference in different calendar years
or did not receive same ratings. As a result, two articles in each matched
pair were exactly or near-exactly matched on reviewers’ ratings. For instance,
a top-30% article $\gamma_{i}$ submitted in $2017$, peer-reviewed by three
reviewers, and rated $6$, $5$ and $5$ (from highest to lowest)
was matched to a $2017$-submitted, $(6,5,5)$-rated bottom-70% article
$\tau_{j}$. This conscious design of $\delta_{\gamma_{i},\tau_{j}}$ helped
achieve improved within-matched-pair cosine similarity and near-exact match on
year of submission and reviewers’ ratings in M1.
The cost $\Delta_{\overline{\gamma}_{i},\overline{\tau}_{j}}$ associated with
edge connecting $\overline{\gamma}_{i}$ and $\overline{\tau}_{j}$ in the right
part of the network encodes criteria for good overall balance when groups are
viewed as a whole. To construct M1, we estimated the propensity score based on
article metadata and set $\Delta_{\overline{\gamma}_{i},\overline{\tau}_{j}}$
to be the difference in the estimated propensity score to minimize earth-
mover’s distance between the propensity score distributions of the top-30%
articles and their matched comparison articles. The “balancing” property of
the propensity score (Rosenbaum and Rubin, 1983) then helped balance the
covariates used to estimate it. One limitation of the propensity score is that
its stochastic balancing property often does not suffice when balancing
nominal variables with many categories; in these scenarios, a design technique
known as _fine balance_ is often used in conjunction with propensity score
matching (Rosenbaum et al., 2007; Rosenbaum, 2010). In short, fine balance
is a technique that forces the frequency of one or more nominal variables to
be identical or as close as possible in two groups. We finely balanced the
topic clusters and keywords by adding a large penalty to
$\Delta_{\overline{\gamma}_{i},\overline{\tau}_{j}}$ when $\gamma_{i}$ and
$\tau_{j}$ differed in the topic cluster or keyword. Finally, matched pairs in
M1 were obtained as a result of solving the minimum-cost flow optimization
problem associated with this tripartite network. We conducted matching in the
stratum of articles with three reviewers and four reviewers, respectively,
because articles in the four-reviewer stratum have two additional covariates:
a fourth reviewer rating and a fourth reviewer sentiment.
Our construction of the design M2 was analogous to that of M1, except that we
further added a “dose-caliper,” defined as a large penalty when two articles
$\gamma_{i}$ and $\tau_{j}$ differ in their authors’ average institution
rankings by less than a pre-specified caliper size, to the cost
$\delta_{\gamma_{i},\tau_{j}}$. The average within-matched-pair difference in
authors' average rankings is $74.3$ in M1; hence, we set the caliper size to
$80$ when constructing M2. As a result, the average within-matched-pair
difference in authors' average ranking rose to $108.8$ in the design M2,
representing a meaningful improvement over that in M1.
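The penalized pair-cost construction above (a similarity-based base cost, large additive penalties enforcing near-exact matching on year and ratings, and a dose caliper separating institution rankings) can be sketched on a toy problem. The sketch below brute-forces the assignment with `itertools.permutations` instead of solving a min-cost network flow, and all article attributes and similarity values are hypothetical:

```python
from itertools import permutations

PENALTY = 1e6  # large additive penalty enforcing near-exact criteria

def pair_cost(t, c, sim, caliper=0):
    """Edge cost between treated article t and candidate control c:
    (1 - cosine similarity), plus penalties for a differing submission
    year or ratings, plus an optional dose caliper on the gap in
    authors' average institution ranking (as in design M2)."""
    cost = 1.0 - sim[(t["id"], c["id"])]
    if t["year"] != c["year"] or t["ratings"] != c["ratings"]:
        cost += PENALTY
    if abs(t["rank"] - c["rank"]) < caliper:
        cost += PENALTY  # dose caliper: keep rankings well separated
    return cost

def best_matching(treated, controls, sim, caliper=0):
    """Brute-force stand-in for the min-cost network flow: try every
    injective assignment of treated units to distinct controls."""
    best, best_pairs = float("inf"), None
    for combo in permutations(range(len(controls)), len(treated)):
        total = sum(pair_cost(treated[i], controls[j], sim, caliper)
                    for i, j in enumerate(combo))
        if total < best:
            best, best_pairs = total, list(enumerate(combo))
    return best_pairs

# Toy example: 2 treated (high-ranking) articles, 3 candidate controls
treated = [
    {"id": "g1", "year": 2017, "ratings": (6, 5, 5), "rank": 10},
    {"id": "g2", "year": 2018, "ratings": (7, 6, 5), "rank": 25},
]
controls = [
    {"id": "t1", "year": 2017, "ratings": (6, 5, 5), "rank": 150},
    {"id": "t2", "year": 2018, "ratings": (7, 6, 5), "rank": 60},
    {"id": "t3", "year": 2018, "ratings": (7, 6, 5), "rank": 200},
]
sim = {("g1", "t1"): 0.9, ("g1", "t2"): 0.8, ("g1", "t3"): 0.7,
       ("g2", "t1"): 0.6, ("g2", "t2"): 0.95, ("g2", "t3"): 0.85}
pairs_m1_style = best_matching(treated, controls, sim)
pairs_m2_style = best_matching(treated, controls, sim, caliper=80)
```

Without the caliper the second treated article is paired with its most similar same-year, same-ratings control; with a caliper of $80$ that control is too close in ranking and the assignment shifts to a more distant control, mirroring how M2 strengthened the separation in author metadata.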
## 5 Discussion: interpretation of our results; limitations
In this article, we studied the association between author metadata and area
chairs’ decisions. We did _not_ find evidence supporting a _status bias_,
that is, evidence that area chairs’ decisions systematically favored authors
from high-ranking institutions, when comparing two cohorts of borderline
articles with near-identical reviewers’ ratings, sentiment, topics, primary
keywords, and article metadata. Under an additional stability assumption, we found that
articles from high-ranking institutions had a lower acceptance rate and the
result was consistent among articles from top-30% institutions (odds ratio =
$0.82$) and top-20% institutions (odds ratio = $0.83$).
Our results need to be interpreted in an appropriate context. First, like
all retrospective, observational studies, although we formulated the question
under a rigorous causal framework, we cannot be certain that our analysis was
immune from any unmeasured confounding bias. For instance, the marginally
higher acceptance rate of articles in the matched comparison groups (i.e.,
articles from lower-ranking institutions) could be easily explained away if
these articles, despite having near-identical reviewers’ ratings as their
counterparts in the top-30% or top-20% groups, were in fact superior in their
novelty and significance and area chairs made decisions based on these
attributes rather than author metadata.
Second, any interpretation of our results should be restricted to our designed
matched sample and should _not_ be generalized to other contexts. In
particular, we only focused on area chairs’ decision on _borderline_ articles.
As Greiner and Rubin (2011, Section III) articulated in great detail, a study
unit may interact with multiple agents of an overall system and we have more
than one choice of decider to study. In our case study, an ICLR article has
interacted with at least two types of deciders, a group of reviewers and an
area chair. We explicitly focused on the interaction between a peer-reviewed
article and an area chair. This deliberate choice allowed us to control for
some valuable pre-treatment variables, including reviewers’ ratings and
sentiment, that are good proxies for articles’ innate quality; however, by
choosing to focus on this interaction that happened after the interaction
between an article and multiple reviewers, we forwent the opportunity to
detect any status bias, in either direction, in any earlier interaction
including the peer review process that could have affected the values of pre-
treatment variables in our analysis (Greiner and Rubin, 2011). Although the
peer-review process of ICLR is in principle double-blind, it is conceivable
that articles’ author metadata could be leaked (e.g., when articles were
posted in the pre-print platform) during the peer review process, and
reviewers’ ratings could be biased in favor of more (or less) established
authors. It is of great interest to further study any perception bias during
the interaction between articles and their reviewers; however, a key
complication facing such an analysis is that articles may not be comparable
without having a relatively objective judgment or rating (e.g., reviewers’
ratings in our analysis of area chairs’ decision).
With all these important caveats and limitations in mind, we found our
analysis a solid contribution to the social science literature on status bias.
Our analysis also helps clarify many important causal inference concepts and
misconceptions when study units are textual data, including (1) the importance
of shifting focus from an attribute to the perception of it, (2) the
importance of articulating the timing of the treatment and hence what
constitutes pre-treatment variables, (3) Fisher’s sharp null hypothesis as a
relevant causal null hypothesis in the context of fairness, and (4) Rubin’s
(1980) stability assumption, often implicitly assumed but overlooked, all
within a concrete case study.
## Acknowledgements
The authors would like to thank Weijie J. Su from the University of
Pennsylvania for stimulating discussions and helpful suggestions. This work is
in part supported by the US Defense Advanced Research Projects Agency (DARPA)
under Contract FA8750-19-2-1004; the Office of the Director of National
Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA),
via IARPA Contract No. 2019-19051600006 under the BETTER Program; Office of
Naval Research (ONR) Contract N00014-19-1-2620; National Science Foundation
(NSF) under Contract CCF-1934876. The views and conclusions contained herein
are those of the authors and should not be interpreted as necessarily
representing the official policies, either expressed or implied, of ODNI, IARPA,
the Department of Defense, or the U.S. Government. The U.S. Government is
authorized to reproduce and distribute reprints for governmental purposes
notwithstanding any copyright annotation therein.
## Appendix
Figure 5: Diagnostics of design M1. Panel A: Estimated propensity scores of
top-30% articles, bottom-70% articles, and matched comparison articles. Panel
B: Distribution of between-article cosine similarity before (median = 0.70)
and after matching (median = 0.78). Panel C: Boxplots of authors’ average
institution rankings and matched-pair differences in authors’ average
institution rankings; the average within-matched-pair difference in the
institution ranking is 99.6 among 602 matched pairs in this stratum (median:
63.5; interquartile range: 28.5–135.0). Panel D: Permutation distribution and
the observed test statistic of the classification permutation test.

Figure 6: Diagnostics of design M2. Panel A: Estimated propensity scores of
top-30% articles, bottom-70% articles, and matched comparison articles. Panel
B: Distribution of between-article cosine similarity before (median = 0.70)
and after matching (median = 0.76). Panel C: Boxplots of authors’ average
institution rankings and matched-pair differences in authors’ average
institution rankings; the average within-matched-pair difference in the
institution ranking is 162.5 among 393 matched pairs in this stratum (median:
144.6; interquartile range: 90.2–208.0). Panel D: Permutation distribution and
the observed test statistic of the classification permutation test.
## References
* Baiocchi et al., (2010) Baiocchi, M., Small, D. S., Lorch, S., and Rosenbaum, P. R. (2010). Building a stronger instrument in an observational study of perinatal care for premature infants. Journal of the American Statistical Association, 105(492):1285–1296.
* Baiocchi et al., (2012) Baiocchi, M., Small, D. S., Yang, L., Polsky, D., and Groeneveld, P. W. (2012). Near/far matching: a study design approach to instrumental variables. Health Services and Outcomes Research Methodology, 12(4):237–253.
* Bertrand and Mullainathan, (2004) Bertrand, M. and Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4):991–1013.
* Cochran and Chambers, (1965) Cochran, W. G. and Chambers, S. P. (1965). The planning of observational studies of human populations. Journal of the Royal Statistical Society. Series A (General), 128(2):234–266.
* Cohan et al., (2020) Cohan, A., Feldman, S., Beltagy, I., Downey, D., and Weld, D. S. (2020). SPECTER: Document-level Representation Learning using Citation-informed Transformers. Available at https://arxiv.org/abs/2004.07180.
* Cortes and Lawrence, (2021) Cortes, C. and Lawrence, N. D. (2021). Inconsistency in conference peer review: Revisiting the 2014 NeurIPS experiment. Available at https://arxiv.org/abs/2109.09774.
* Devlin et al., (2019) Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4171–4186.
* Fay, (2010) Fay, M. P. (2010). Two-sided Exact Tests and Matching Confidence Intervals for Discrete Data. The R Journal, 2(1):53–58.
* Gagnon-Bartsch and Shem-Tov, (2019) Gagnon-Bartsch, J. and Shem-Tov, Y. (2019). The classification permutation test: A flexible approach to testing for covariate imbalance in observational studies. The Annals of Applied Statistics, 13(3):1464–1483.
* Greiner and Rubin, (2011) Greiner, D. J. and Rubin, D. B. (2011). Causal effects of perceived immutable characteristics. Review of Economics and Statistics, 93(3):775–785.
* Ho et al., (2007) Ho, D. E., Imai, K., King, G., and Stuart, E. A. (2007). Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis, 15(3):199–236.
* Huber et al., (2022) Huber, J., Inoua, S., Kerschbamer, R., König-Kersting, C., Palan, S., and Smith, V. L. (2022). Nobel and novice: Author prominence affects peer review. Proceedings of the National Academy of Sciences, 119(41).
* Liu et al., (2019) Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. CoRR, 1907.11692.
* Lorch et al., (2012) Lorch, S. A., Baiocchi, M., Ahlberg, C. E., and Small, D. S. (2012). The differential impact of delivery hospital on the outcomes of premature infants. Pediatrics, 130(2):270–278.
* MacKay et al., (2021) MacKay, E. J., Zhang, B., Heng, S., Ye, T., Neuman, M. D., Augoustides, J. G., Feinman, J. W., Desai, N. D., and Groeneveld, P. W. (2021). Association between transesophageal echocardiography and clinical outcomes after coronary artery bypass graft surgery. Journal of the American Society of Echocardiography, 34(6):571–581.
* McGillivray and De Ranieri, (2018) McGillivray, B. and De Ranieri, E. (2018). Uptake and outcome of manuscripts in nature journals by review model and author characteristics. Research Integrity and Peer Review, 3(1):5.
* Neuman et al., (2014) Neuman, M. D., Rosenbaum, P. R., Ludwig, J. M., Zubizarreta, J. R., and Silber, J. H. (2014). Anesthesia technique, mortality, and length of stay after hip fracture surgery. JAMA, 311(24):2508–2517.
* Neyman, (1923) Neyman, J. S. (1923). On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Annals of Agricultural Sciences, 10:1–51.
* Rosenbaum, (1989) Rosenbaum, P. R. (1989). Optimal matching for observational studies. Journal of the American Statistical Association, 84(408):1024–1032.
* Rosenbaum, (2002) Rosenbaum, P. R. (2002). Observational Studies. Springer.
* Rosenbaum, (2010) Rosenbaum, P. R. (2010). Design of Observational Studies. Springer.
* Rosenbaum et al., (2007) Rosenbaum, P. R., Ross, R. N., and Silber, J. H. (2007). Minimum distance matched sampling with fine balance in an observational study of treatment for ovarian cancer. Journal of the American Statistical Association, 102(477):75–83.
* Rosenbaum and Rubin, (1983) Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.
* Rosenbaum and Rubin, (1985) Rosenbaum, P. R. and Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. The American Statistician, 39(1):33–38.
* Rosenthal et al., (2017) Rosenthal, S., Farra, N., and Nakov, P. (2017). Semeval-2017 task 4: Sentiment analysis in twitter. In Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017), pages 502–518.
* Rubin, (1980) Rubin, D. (1980). Discussion of “Randomization analysis of experimental data in the Fisher randomization test” by D. Basu. Journal of the American Statistical Association, 75:591–593.
* Rubin, (1973) Rubin, D. B. (1973). Matching to remove bias in observational studies. Biometrics, 29:159–183.
* Rubin, (1974) Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5):688.
* Rubin, (1979) Rubin, D. B. (1979). Using multivariate matched sampling and regression adjustment to control bias in observational studies. Journal of the American Statistical Association, 74(366):318–328.
* Rubin, (2005) Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469):322–331.
* Smirnova et al., (2022) Smirnova, I., Romero, D. M., and Teplitskiy, M. (2022). Nudging science towards fairer evaluations: Evidence from peer review. Available at SSRN $4190623$.
* Stuart, (2010) Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science, 25(1):1–21.
* Sun et al., (2022) Sun, M., Barry Danfa, J., and Teplitskiy, M. (2022). Does double-blind peer review reduce bias? evidence from a top computer science conference. Journal of the Association for Information Science and Technology, 73(6):811–819.
* Tsai et al., (2016) Tsai, C.-T., Mayhew, S., and Roth, D. (2016). Cross-lingual named entity recognition via wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 219–228. Association for Computational Linguistics.
* van der Maaten and Hinton, (2008) van der Maaten, L. and Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86):2579–2605.
* VanderWeele and Ding, (2017) VanderWeele, T. J. and Ding, P. (2017). Sensitivity analysis in observational research: introducing the e-value. Annals of Internal Medicine, 167(4):268–274.
* Zhang et al., (2021) Zhang, B., Small, D., Lasater, K., McHugh, M., Silber, J., and Rosenbaum, P. (2021). Matching one sample according to two criteria in observational studies. Journal of the American Statistical Association, (just-accepted):1–34.
* Zhang and Zhang, (2022) Zhang, B. and Zhang, J. (2022). Some reflections on drawing causal inference using textual data: Parallels between human subjects and organized texts. In Schölkopf, B., Uhler, C., and Zhang, K., editors, Proceedings of the First Conference on Causal Learning and Reasoning, volume 177 of Proceedings of Machine Learning Research, pages 1026–1036. PMLR.
* Zhang et al., (2022) Zhang, J., Zhang, H., Deng, Z., and Roth, D. (2022). Investigating fairness disparities in peer review: A language model enhanced approach. Available at https://arxiv.org/abs/2211.06398.
# Dust Motion and Possibility of Dust Growth in a Growing Circumstellar Disk
Shunta Koga1 and Masahiro N. Machida1
1Department of Earth and Planetary Sciences, Faculty of Sciences, Kyushu
University, Fukuoka 819-0395, Japan
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
We calculate the evolution of a star-forming cloud core using a three-
dimensional resistive magnetohydrodynamics simulation, treating dust grains as
Lagrangian particles, to investigate the dust motion in the early star
formation stage. We prepare six sets of different-sized dust particles in the
range $a_{\rm d}=0.01$–$1000\,\mu$m, where $a_{\rm d}$ is the dust grain size.
In a gravitationally collapsing cloud, a circumstellar disk forms around a
protostar and drives a protostellar outflow. Almost all the small dust grains
($a_{\rm d}\lesssim 10$–$100\,\mu$m) initially distributed in the region
$\theta_{0}\lesssim 45^{\circ}$ are ejected from the center by the outflow,
where $\theta_{0}$ is the initial zenith angle relative to the rotation axis,
whereas only a small number of the large dust grains ($a_{\rm d}\gtrsim
100\,\mu$m) distributed in the region are ejected. All other grains fall onto
either the protostar or disk without being ejected by the outflow. Regardless
of the dust grain size, the behavior of the dust motion is divided into two
trends after dust particles settle into the circumstellar disk. The dust
grains reaching the inner disk region from the upper envelope preferentially
fall onto the protostar, while those reaching the outer disk region or disk
outer edge from the envelope can survive without an inward radial drift. These
surviving grains can induce dust growth. Thus, we expect that the outer disk
regions could be a favored place of planet formation.
###### keywords:
stars: formation – stars: magnetic field – MHD – ISM: dust, extinction – ISM:
jets and outflows
## 1 Introduction
Many planetary systems have been confirmed since 1995 (Mayor & Queloz, 1995).
Planets form in disks around young stars (Benisty et al., 2021), as shown in
recent ALMA observations of the formation sites of planets (e.g. ALMA
Partnership et al., 2015). Ring and gap structures can be clearly seen in
protoplanetary disks observed by dust thermal emission (Andrews et al., 2018),
and these structures are likely to be related to planet formation (Andrews,
2020). It is considered that dust grains aggregate and grow to form planets in
the disks around young stars (Hayashi et al., 1985). However, it is very
difficult to observe the disks in the very early phase during which planet
formation begins. Therefore, we cannot confidently identify when and how dust
grows and planet formation begins in a disk based only on observations.
Theoretical studies can help us to understand dust growth and planet formation
(e.g. Blum & Wurm, 2008). In such studies, planet formation (or dust growth)
is conventionally considered to begin in an isolated and relatively low-mass
disk after the gas accretion onto the disk ends. Such a disk is called the
minimum mass solar nebula (MMSN, Hayashi et al. 1985) and is observed around
Class II young stellar objects. However, recent observations have confirmed
signs of planet formation in very young growing circumstellar disks around
Class 0 and I protostars (Sheehan et al., 2020; Tychoniec et al., 2020; Podio
et al., 2020), implying that planet formation begins earlier than previously
thought.
The formation and evolution of disks around young stars have been
theoretically investigated in the framework of star formation (e.g. Machida &
Matsumoto, 2011). Physical quantities such as the mass and radius of the disk
vary over time in the star formation process (Tsukamoto et al., 2020; Xu &
Kunz, 2021a; Lee et al., 2021). Thus, it is not sufficient to consider the
MMSN to be the initial condition of planet formation. It is appropriate to
consider the dust growth and planet formation from the beginning of the disk
formation epoch in the star formation process.
The star formation process has been investigated with three-dimensional
magnetohydrodynamics (MHD) simulations, in which only the gas dynamics are
calculated and the dust motion is not explicitly considered (Machida et al.,
2004; Banerjee & Pudritz, 2006; Hennebelle & Fromang, 2008; Tomida et al.,
2013; Machida et al., 2014; Tsukamoto et al., 2015; Xu & Kunz, 2021b).
However, it is important to consider the behavior of dust grains in the star
formation process to investigate the dust growth and planet formation, because
dust is a fundamental element of planet formation. There are a few studies
calculating the dust motion in three-dimensional MHD star-formation
simulations (Lebreuilly et al., 2020; Tsukamoto et al., 2021b; Koga et al.,
2022). Lebreuilly et al. (2020) performed a star formation simulation with the
inclusion of dust and detailed the increase and decrease of dust around the
circumstellar disk (see also Tsukamoto et al., 2021a). Tsukamoto et al.
(2021b) also calculated the star formation process including dust and showed
that the dust grains are swept by the protostellar outflow and that part of
the swept dust falls onto the disk. They considered dust growth, or dust
size evolution, with a single-size approximation. It should be noted that,
recently, dust growth in the star formation process was also investigated
with various approaches by Marchand et al. (2021, 2022), Bate (2022), and
Tu et al. (2022).
Our calculation settings are broadly similar to those of Lebreuilly et al. (2020)
and Tsukamoto et al. (2021b). Our previous study (Koga et al., 2022, hereafter
Paper I) showed almost the same results as in Lebreuilly et al. (2020) and
Tsukamoto et al. (2021b), but used a novel method in which the behavior of the
dust grains is calculated using a Lagrangian formulation and the fluid motion
is calculated using an Eulerian formulation. Following Paper I, this study
focuses on the behavior of dust in both a gravitationally collapsing cloud and
a circumstellar disk, starting the simulation from the prestellar stage (or
the molecular cloud core). Our study differs from the past studies of
Lebreuilly et al. (2020) and Tsukamoto et al. (2021b) in the treatment of
dust, in that while past studies adopted a fluid approximation, our study
treats the dust particles as Lagrangian particles, calculated separately from
the gas component (for detail, see Paper I). Using this treatment, we can
individually trace the motion of each dust particle and investigate the
behavior of dust in the disk during the early star formation stage. As
described above, the implementation of an MHD nested grid code in the
treatment of the dust was presented in Paper I, in which we also presented the
trajectories of dust in the infalling envelope focusing on the coupling
between the gas and dust. In this paper, we mainly present the motion of dust
in the circumstellar disk.
The structure of this paper is as follows. The numerical settings and methods
are described in §2 and the calculation results are presented in §3. We
discuss dust motion, dust growth and planet formation in the early star
formation stage in §4. A summary is presented in §5.
## 2 Numerical method and settings
The numerical method and settings are the same as in Paper I and we only
briefly explain them here (for details, see §2 and Appendix of Paper I).
As the initial state, a gas sphere with a critical Bonnor–Ebert density
profile is adopted. The mass and radius of the initial cloud are $M_{\rm
cl}=1.25\,{\rm M}_{\odot}$ and $R_{\rm cl}=6.13\times 10^{3}$ au,
respectively. A rigid rotation ($\Omega_{0}=2\times 10^{-13}$ s-1) is added to
the initial cloud and a uniform magnetic field ($B_{0}=5.1\times 10^{-5}$ G)
is imposed over the whole computation domain. The mass-to-flux ratio
normalized by the critical value $(2\pi G^{1/2})^{-1}$ is $\mu=3$. The initial
state is the same as that in Tomida et al. (2017) and Aso & Machida (2020).
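As a sanity check, the quoted normalized mass-to-flux ratio can be reproduced from the stated cloud parameters, assuming the flux is the uniform field threading the cloud’s cross-section, $\Phi=\pi R_{\rm cl}^{2}B_{0}$ (this assumption and the cgs constants below are ours, not from the paper):

```python
import math

# cgs constants
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
AU = 1.496e13         # astronomical unit [cm]

# Initial cloud parameters quoted in the text
M_cl = 1.25 * M_SUN
R_cl = 6.13e3 * AU
B0 = 5.1e-5           # uniform field strength [G]

# Magnetic flux through the cloud's cross-section (uniform-field assumption)
Phi = math.pi * R_cl**2 * B0

# Mass-to-flux ratio normalized by the critical value (2*pi*G^{1/2})^{-1}
mu = (M_cl / Phi) * (2.0 * math.pi * math.sqrt(G))
print(f"mu = {mu:.2f}")   # approximately 3, consistent with the quoted mu = 3
```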
In the simulation, the MHD part is calculated by our three-dimensional non-
ideal MHD nested grid code (Machida et al., 2004, 2006, 2009; Machida &
Hosokawa, 2013). The spatial resolution and generation criterion of the grid
are described in Paper I. The cell width and grid size of the finest grid are
$0.374$ au and $24$ au, respectively, while those of the coarsest grid are
$3.07\times 10^{3}$ au and $1.96\times 10^{5}$ au. The number of cells in each
grid is ($i$, $j$, $k$) = (64, 64, 64). Five levels of grid are prepared
before the calculation, while 14 levels of grid are nested just before
protostar formation (for details, see Paper I). In the simulation, a sink of
radius $r_{\rm sink}=1$ au is created after the density exceeds $n_{\rm
thr}=10^{13}\,{\rm cm^{-3}}$ (Machida et al., 2014, 2016). In addition, dust
particles are implemented as super-particles, for which the feedback from the
dust to the gas is ignored, as described in Paper I. It should be noted that
we assume that the dust particles do not interact with each other.
We include a total of 102,984 dust particles in the simulation. The initial
spatial distribution of the particles is shown in Table 1, in spherical
coordinates. As described in the table, the dust locations are distributed
every 10 au in the range 10–6130 au in the radial direction $r_{0}$, every 90∘
in the azimuthal direction $\phi_{0}$, and every 15∘ in the zenith direction
$\theta_{0}$. We prepare six different grain sizes (or six sets of
different-sized grains) in the range $a_{\rm d}=0.01$–$1000\,{\rm\mu m}$, listed in Table 2,
where $a_{\rm d}$ is the radius of a dust grain. The settings for the
distribution of dust particles adopted in this study are identical to those in
Paper I.
Table 1: Initial spatial distributions of dust particles

Coordinate | Initial particle locations
---|---
$r_{0}$ | 10–6130 au (every 10 au, 613 locations)
$\phi_{0}$ | 0∘, 90∘, 180∘, 270∘ (every 90∘, 4 locations)
$\theta_{0}$ | 0∘, 15∘, 30∘, 45∘, 60∘, 75∘, 90∘ (every 15∘, 7 locations)

Table 2: Dust grain sizes used in the calculation

Dust grain size $a_{\rm d}$ [$\rm\mu$m]
---
0.01, 0.1, 1, 10, 100, 1000
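To build intuition for why grain size controls how well dust follows the gas, the sketch below relaxes a dust velocity toward the gas velocity on an Epstein-regime stopping time, $t_{\rm s}=\rho_{\rm mat}a_{\rm d}/(\rho_{\rm gas}v_{\rm th})$. All densities, temperatures, and velocities are illustrative assumptions, not values from the simulation:

```python
import math

def stopping_time(a_d_cm, rho_gas, v_th, rho_mat=2.0):
    """Epstein-regime stopping time t_s = rho_mat * a_d / (rho_gas * v_th).

    a_d_cm : grain radius [cm]; rho_gas : gas density [g cm^-3];
    v_th : gas thermal velocity [cm s^-1]; rho_mat : grain material
    density [g cm^-3] (assumed value).
    """
    return rho_mat * a_d_cm / (rho_gas * v_th)

def relax_to_gas(v_dust, v_gas, t_s, dt):
    """One implicit drag update: the dust velocity relaxes toward the gas
    velocity on the stopping time t_s (gravity omitted for clarity)."""
    return (v_dust + dt / t_s * v_gas) / (1.0 + dt / t_s)

# Illustrative envelope conditions (assumed, not the simulation's values):
# n ~ 1e8 cm^-3 of H2 gas at T ~ 10 K.
rho_gas = 1e8 * 2.34 * 1.67e-24                          # g cm^-3
v_th = math.sqrt(1.38e-16 * 10.0 / (2.34 * 1.67e-24))    # cm s^-1

dt = 3.15e7                  # one year in seconds
v_gas = 1.0e5                # gas moving at 1 km/s; dust starts at rest

for a_um in (0.01, 1000.0):
    t_s = stopping_time(a_um * 1e-4, rho_gas, v_th)
    v = relax_to_gas(0.0, v_gas, t_s, dt)
    print(f"a_d = {a_um} um: t_s/dt = {t_s/dt:.2e}, v/v_gas = {v/v_gas:.3f}")
```

Under these assumed conditions a 0.01 μm grain reaches nearly the gas velocity within a year, while a 1000 μm grain barely responds, which is consistent with the qualitative size dependence of the outflow entrainment described in the abstract.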
## 3 Results
### 3.1 Time sequence of gas density and velocity distributions
Before presenting the dust motion results, we will describe the gas evolution.
Since the dust grains are coupled with the gas to some degree, it is important
to examine the gas evolution from a Eulerian perspective.
Fig. 1 plots the density and velocity distributions of the gas at four
different epochs. An elliptical structure, which corresponds to the first core
supported by both thermal pressure and rotation, can be seen just before
protostar formation (top right panel of Fig. 1). After protostar formation,
the central region shrinks (right panels of Fig. 1) and a rotationally
supported disk appears (left panels of Fig. 1). Since gas accretion onto the
disk continues during the simulation, the disk increases in size with time.
The simulation was stopped at $t=$ 85000 yr after the cloud collapse begins.
At the end of the simulation, the protostellar mass (or sink mass) is $M_{\rm
ps}=0.0784\,{\rm M}_{\odot}$, which is 6.3 $\%$ of the initial cloud mass. At
this epoch, the disk has a radius of $\sim 20$ au (bottom left panel of
Fig. 1).
Fig. 2 shows the density and velocity distributions at the same epoch as in
the bottom panels of Fig. 1, though note that the spatial scales of Fig. 1
($\sim 50$ au) and Fig. 2 ($\sim 1500$ au) are very different. Fig. 2
indicates that the protostellar outflow is driven near the center of the
cloud. At the end of the calculation, the size of the outflow exceeds $\sim
1000$ au. As described below, a portion of the dust grains is ejected from the
central region or disk by the outflow (see Paper I).
Figure 1: Gas density (color) and velocity (arrows) distributions on the $z=0$
(left) and $y=0$ (right) planes at four different epochs. The protostellar
mass (or sink mass) and elapsed time are listed above each panel.

Figure 2: Gas density (color) and velocity (arrows) distributions on the
$y=0$ plane. The protostellar mass and the elapsed time after the cloud
collapse begins are given above the plot.
### 3.2 Motion of dust particles
#### 3.2.1 Dust motion from envelope to disk
As described in §3.1, the gas evolution is obtained from a Eulerian simulation
while the dust grains are calculated as Lagrangian particles. Thus, we can trace
the trajectory of each dust particle with the gas evolution. In this
subsection, we show the dependence of the dust orbital evolution on their
initial positions.
Figure 3: Distance of dust particles from the center against the elapsed time
after the cloud collapse begins. Each panel shows the distance of dust with
$\theta_{0}=0,15,30,45,60,75,90^{\circ}$ (from left top to right bottom). The
dust grain size is $a_{\rm d}=0.01\ {\rm\mu m}$. The initial distances from
the center in units of au for each particle are indicated by color, as given
on the bottom right. The disk size against the elapsed time is also plotted in
gray in each panel.
Figs. 3 and 4 show the time evolution of the distance from the central
protostar (or sink), $r=\sqrt{x^{2}+y^{2}+z^{2}}$, for dust grains with sizes
of $a_{\rm d}=0.01\,{\rm\mu m}$ (Fig. 3) and $1000\,{\rm\mu m}$ (Fig. 4),
respectively. In each panel, the initial zenith angle of the dust particles
($\theta_{0}=0,15,30,45,60,75$, and $90^{\circ}$) is fixed.
Firstly, we describe the evolution of dust particles with a size of $a_{\rm
d}=0.01\,{\rm\mu m}$ (Fig. 3). In Fig. 3, we plot the evolution of dust
particles having different initial radial positions in the range
$r_{0}=1000$–$2450\,{\rm au}$. All the dust particles that initially have
$\theta=0^{\circ}$ reach the sink by $t=79.3\,{\rm kyr}$ (Fig. 3a). These
particles fall toward the center (or sink) due to the gravity of the central
protostar, even though the gas pressure gradient somewhat retards their
infall. In addition, the dust particles plotted in Fig. 3a are not swept up by
the outflow because they have fallen onto the sink before the outflow appears.
The distance evolution for dust particles with $\theta_{0}=15^{\circ}$ clearly
exhibits two trends depending on the initial radius, as seen in Fig. 3b. Dust
particles with an initial radius smaller than $r_{0}\lesssim 1900$ au fall
onto the disk (gray line) within $t<79.5$ kyr. They then orbit within the disk
and finally fall onto the sink by $t\simeq 80$ kyr. Note that the gray line in
each panel corresponds to the radius of the rotationally supported disk. The
identification prescription of the disk is described in Paper I and §3.3. Note
also that we confirmed that the particles are located within the disk when the
distance of each particle is smaller than the disk radius (or gray line). On
the other hand, all the dust particles having an initial radius larger than
$r_{0}\gtrsim 1900$ au increase their distances from the center with time,
indicating that they are outflowing from the center with the gas (see Fig. 2).
For dust particles with $\theta_{0}=30^{\circ}$ (Fig. 3c), the distance for
the particles with an initial radius larger than $r_{0}\gtrsim 1500$ au begins
to increase after initially decreasing, indicating that these particles are
trapped and entrained by the gas outflow while falling toward the center.
Thus, they fall toward the center at an early stage and outflow from the
center at a later stage. The other particles having an initial radius smaller
than $r_{0}<1500$ au orbit within the disk and fall onto the protostar, except
for one particle plotted in Fig. 3c with an initial radius $r_{0}=1650$ au,
which survives without falling to the center. This surviving particle orbits
within the disk while maintaining a distance of $\sim 10$ au after $t>79$ kyr.
In addition, this survivor does not immediately fall to the equatorial plane,
but rotates 10 times in the range $0<z<3$ au within the hydrostatic
equilibrium region of the gas disk before it reaches the equatorial plane of
the disk (for details, see below).
A similar trend can be confirmed for the dust particles with
$\theta_{0}=45^{\circ}$ (Fig. 3d). The particles initially placed far from the
center eventually outflow from the center, while those initially placed near
the center fall onto the sink after they enter into the disk region. On the
other hand, we confirm that particles at $r_{0}=1850$ and $1900\,{\rm au}$
continue to fall toward the center after being momentarily swept up by the
outflow. Even when the particles are initially placed at almost the same
region, their final fates are somewhat different. The outflow launching region
varies with time, and whether the particles are swept up by the outflow
depends on the timing at which they reach the disk surface.
For the dust particles with $\theta_{0}\geq 60^{\circ}$ (Fig. 3e–g), we see no
rapid increase in the distance from the center, indicating that the particles
plotted in these panels are not captured by the outflow. After the dust
particles initially located near the center ($r_{0}\lesssim 1500$ au) reach
the disk, they orbit in the disk while gradually falling to the center, as
seen in Fig. 3e–g. On the other hand, the particles initially located far from
the center ($r_{0}\gtrsim 1800$ au) orbit within the disk without falling onto
the protostar by the end of the simulation. After these particles reach the
disk, they orbit in the disk while maintaining a distance of $\sim 10$–$20$
au. The distances from the center for these surviving particles are comparable
to and slightly smaller than the disk radius. Thus, these particles orbit
around the outer disk region by the end of the simulation. Although the
protostar and disk gradually evolve, these particles continue to orbit in the
disk maintaining almost the same radii. As a result, among the dust particles
with $\theta_{0}\gtrsim 60^{\circ}$, there are many survivors orbiting in the
disk outer region for a long time.
Figure 4: As Fig. 3, for $a_{\rm d}=1000\,{\rm\mu m}$.
Next, we describe the behavior of the dust grains with a size of $a_{\rm
d}=1000\,{\rm\mu m}$ shown in Fig. 4. Note that the initial distances of the
particles from the center differ from those in Fig. 3. We plot only the dust
particles initially placed in the range $r_{0}=2000$–$3450$ au, because most
of the dust particles with smaller initial radii ($r_{0}\lesssim 2000$ au)
fell directly onto the sink without passing through the circumstellar disk
(Paper I). Similar to the case for $a_{\rm d}=0.01\,{\rm\mu m}$ (Fig. 3), the
dust particles have three major paths: falling onto the sink, being trapped
by the rotationally supported disk, and being ejected by the outflow.
Most of the dust particles plotted in Fig. 4 accrete onto the disk. Ejection
by the outflow can be seen for only a few particles with
$\theta_{0}=0^{\circ}$ (Fig. 4a). In Fig. 4a, we can confirm that the distance
from the center continues to increase with time for one particle ($r_{0}=3450$
au), which is outflowing from the center. The particles with $r_{0}=3400$,
3350, and 3300 au are entrained by the outflow and their distances initially
increase but then decrease, indicating that they fall to the center. For
$\theta_{0}\neq 0$, the dust particles orbit within the disk after they reach
the disk. In Fig. 4b–g, the dust particles initially placed in the range
$r_{0}\lesssim 2500$ au finally fall onto the sink. On the other hand, the
particles initially placed in the range $r_{0}\gtrsim 2500$ au maintain
relatively constant distances from the center by the end of the simulation
after they enter the disk. The rotationally supported disk has a size of $\sim
10$–$20$ au, and these particles move on nearly constant orbits at $r\sim
10$–$20$ au, in the outer disk region or near the outer edge of the disk.
Comparing Fig. 3 and Fig. 4 shows that the fraction of particles falling onto
the disk is larger for dust particles with $a_{\rm d}=1000\,{\rm\mu m}$ than
for those with $a_{\rm d}=0.01\,{\rm\mu m}$. The dust particles with a size of
$1000\,{\rm\mu m}$ are initially more decoupled from the gas than those with a
size of $0.01\,{\rm\mu m}$, as described in Paper I. Thus, large dust
particles tend to fall onto the central region where the disk forms, partially
ignoring the gas drag in the infalling envelope. For dust with $a_{\rm
d}=1000\,{\rm\mu m}$ (Fig. 4), the dust particles orbiting in the
circumstellar disk are initially distributed in the range
$\theta_{0}=15$–$90^{\circ}$, where many particles maintain their distances
from the center in the range $10$–$20$ au. Note that one particle
($r_{0}=3300$ au) in Fig. 4a survives in the disk without falling onto the
sink for $\theta_{0}=0^{\circ}$. On the other hand, a large fraction of the
dust particles with $a_{\rm d}=0.01\,{\rm\mu m}$ (Fig. 3) initially
distributed in the range $\theta_{0}=15$–$45^{\circ}$ are ejected by the
outflow, after initially moving towards the center. Thus, during the
simulation, the number of dust particles orbiting within the circumstellar disk
is larger for a size of $a_{\rm d}=1000\,{\rm\mu m}$ than for a size of
$a_{\rm d}=0.01\,{\rm\mu m}$.
In the above, we only showed the behavior for dust particles with sizes of
$a_{\rm d}=0.01$ and 1000 $\mu$m (i.e., the smallest and largest dust
particles considered in this study and Paper I). In the following, we describe
the dust motion for other particle sizes ($a_{\rm d}=0.1$, 1, 10, and 100
$\mu$m). The behavior of the particles with $a_{\rm d}=0.1$, 1, and 10 $\mu$m is
the same as for $a_{\rm d}=0.01\,\mu$m. Fig. 14 of Paper I showed that the
dust abundances of the envelope, protostar, disk, and outflow are the same (or
do not change over time) for dust particles in the size range $a_{\rm
d}=0.01$–$10\,\mu$m. The Stokes numbers for these particles are much less than
unity over the simulation, indicating that these particles are well coupled
with the gas at all times (note that the definition of the Stokes number in
Paper I is slightly different from that in the present paper; for details, see
§2.3.1 of Paper I and §3.3). Therefore, there is no significant difference in the
behavior of these particles. On the other hand, the abundance of particles
with $a_{\rm d}=100\,\mu$m is different from that of particles with $a_{\rm
d}\leq 10\,\mu$m, but is almost the same as for the case of $a_{\rm
d}=1000\,\mu$m, because the Stokes number sometimes exceeds unity for these
particles (Fig. 17 of Paper I). For this reason, we consider particles with
sizes of 0.01 and 1000 $\,\mu$m to illustrate the typical motion of dust
grains with different sizes.
#### 3.2.2 Dust motion within the disk
As described in §3.2.1, the dust particles enter the disk when they are not
ejected by the outflow. Within the disk, some particles survive without
falling onto the sink. Figs. 3 and 4 show that there is a wide range of
initial zenith angle $\theta_{0}=15$–$90^{\circ}$ for which the dust particles
continue to orbit around $10$–$20$ au. These ‘non-drifting’ dust particles may
provide a solution to the radial drift barrier problem, which is one of the
most important issues in theoretical planet formation scenarios. In the
following, we focus on the trajectories of such particles.
Figure 5: Trajectory of a dust particle (black broken line) superimposed on
gas density (color) and velocity (arrows) distributions on the $z=0$ plane.
The particle is initially placed at ($r_{0}$, $\theta_{0}$, $\phi_{0}$)= (1500
au, $90^{\circ}$, $0^{\circ}$) and has a size of $a_{\rm d}=0.01\,{\rm\mu m}$.
The protostellar mass and elapsed time are given above the plot.
Fig. 5 shows the trajectory of a dust particle with a size of $a_{\rm
d}=0.01\,{\rm\mu m}$, initially placed at ($r_{0}$, $\theta_{0}$, $\phi_{0}$)=
(1500 au, $90^{\circ}$, $0^{\circ}$). Note that this particle moves only on
the $z=0$ plane because it is initially placed on the equatorial plane. After
the dust particle reaches the rotationally supported disk, it orbits 30 times
around the center, keeping a distance of $\sim 10$ au from the center. As the
figure shows, the particle does not fall to the center by the end of the
calculation. Although we only plot the trajectory of a single particle in Fig.
5, all particles having $\theta_{0}=90^{\circ}$ continue to orbit around the
protostar without falling onto the sink after they reach the rotationally
supported disk. Interestingly, such particles maintain an orbital radius of
$\sim 10$–$20$ au during the calculation, as seen in Figs. 3 and 4.
Next, we describe the trajectory of a dust particle that ultimately falls onto
the sink. Fig. 6 shows the trajectory of a particle with $a_{\rm
d}=0.01\,{\rm\mu m}$, initially placed at ($r_{0}$, $\theta_{0}$, $\phi_{0}$)=
(1900 au, $45^{\circ}$, $0^{\circ}$). As seen in the right panel of Fig. 6, the
particle gradually approaches the equatorial plane while going around the
center before it enters the disk region. This particle enters the disk from
the vertical direction, whereas the particle shown in Fig. 5, initially placed
on the equatorial plane, enters the disk from the disk outer edge on the
equatorial plane. As seen in the left panel of Fig. 6, the particle gradually
falls toward the center after it reaches the equatorial plane. The particle
orbits around the center for about 4000 yr, slowly moving in a radial
direction, while the distance from the center continues to decrease. The
particle finally falls onto the sink (or protostar).
Although we only show two typical cases above, the same trend can be seen for
other particles. When the dust particles are initially placed near the center
(small $r_{0}$), they spiral to the sink after reaching the equatorial plane
of the disk. The orbital radii for these particles are $r\lesssim 10$ au when
they reach the equatorial plane. Dust particles initially placed far from the
center (large $r_{0}$) settle around $r\gtrsim 10$ au in the disk by the end
of the simulation. Thus, it can be concluded that dust particles will survive
without falling onto the sink when they have an orbital radius of $\gtrsim 10$
au. These particles tend to reach the outer disk region ($\simeq 10$–$20$ au),
passing through the disk outer edge, as shown in Fig. 5.
Figure 6: Trajectory of a dust particle (black broken line) superimposed on
gas density (color) and velocity (arrows) distributions on the $z=0$ (left)
and $y=0$ (right) planes. The particle is initially placed at ($r_{0}$,
$\theta_{0}$, $\phi_{0}$)= (1900 au, $45^{\circ}$, $0^{\circ}$) and has a size of
$a_{\rm d}=0.01\,{\rm\mu m}$. The protostellar mass and elapsed time are given
above the plot.
### 3.3 Dust coupling in the disk
Figure 7: Time evolution of the Stokes number for dust particles with $a_{\rm
d}=0.01\,{\rm\mu m}$. St is defined in equation (1). Lines are plotted only
when the particles are moving in the disk region. The color indicates the
initial radius $r_{0}$ and is the same as that in Fig. 3. Filled circles
represent particles that fall onto the sink. Figure 8: As Fig. 7, for
particles with $a_{\rm d}=1000\,{\rm\mu m}$. The color indicates the initial
radius $r_{0}$ and is the same as that in Fig. 4.
In §3.2, we presented dust trajectories that are not expected based on
classical planet formation scenarios: some dust grains continue to orbit at
the same orbital radius without moving toward the center. In this subsection,
we investigate whether the dust particles that move in the disk are coupled
with the gas.
If the dust grains are strongly coupled with the gas in the disk, the dust
motion should coincide with the gas motion. If this is the case, the gas
motion can explain why some dust particles maintain an orbit around $r\simeq
10$–$20$ au. The gas density is higher in the disk than in the envelope and
thus, assuming same-sized dust particles, the coupling between the dust
particles and gas is stronger in the disk than in the envelope. To
quantitatively evaluate how strongly a dust particle is coupled with the gas
in the disk, we use the Stokes number St, defined as
${\rm St}=\frac{t_{\rm s}}{t_{\rm K}},$ (1)
where $t_{\rm s}$ is the stopping time of the dust particle and $t_{\rm K}$ is
the Keplerian timescale $\Omega_{\rm kep}^{-1}$ ($=(GM_{\rm ps}/r^{3})^{-1/2}$,
where $M_{\rm ps}$ is the protostellar mass).
Figs. 7 and 8 show the time evolution of the Stokes number for dust particles
orbiting within the disk with $a_{\rm d}=0.01$ and $1000\,{\rm\mu m}$.
According to Paper I, the disk is defined as the region where the rotational
velocity dominates the infall velocity ($v_{\phi}>2|v_{r}|$) and the gas is
rotationally supported ($v_{\phi}>0.6\,v_{\rm K}$, where $v_{\rm K}$ is the
Keplerian velocity). In the figures, a broken part of the line indicates that
the dust particle is temporarily located outside the disk. When the line is
broken while St decreases, the particle has temporarily moved into the
envelope near the disk. A filled circle indicates the epoch at which the dust
particle falls onto the sink before the end of the calculation ($t<85$ kyr). A
sudden increase in St is caused by a sudden decrease in the orbital radius of
a particle in the disk.
Fig. 7 shows that dust particles entering the disk from the envelope have St
$\ll 1$. In particular, the Stokes number for dust particles orbiting at
$r\simeq 10$–$20$ au is ${\rm St}\sim 10^{-9}$, indicating that they are
strongly coupled with the gas. Fig. 8 plots the time evolution of St for dust
with $a_{\rm d}=1000\,{\rm\mu m}$, which is the maximum dust size prepared in
this study. For the same gas conditions, larger dust
particles tend to be more easily decoupled from the gas due to their longer
stopping time. However, even for dust particles having $a_{\rm
d}=1000\,{\rm\mu m}$, the Stokes number is ${\rm St}\sim 10^{-4}$. Thus, such
large particles are also coupled with the gas in the disk.
The St for particles with $a_{\rm d}=1000\,{\rm\mu m}$ is about five orders of
magnitude larger than that for dust particles with $a_{\rm d}=0.01\,{\rm\mu
m}$. This is because the Stokes number is proportional to the dust size as
${\rm St}\propto a_{\rm d}$. Therefore, we can conclude that the fact that
some dust particles orbit at the same orbital radius, instead of gradually
falling toward the central star, can be attributed to the gas motion: the gas
motion on the equatorial plane in the disk prevents the dust particles from
accreting onto the central protostar. Thus, the dust grains are controlled by
the gas motion in the disk, independent of dust size, in the early star
formation stage.
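The quoted Stokes numbers can be checked with a back-of-the-envelope estimate. The sketch below is not from the paper: it assumes Epstein drag for the stopping time, $t_{\rm s}=\rho_{\rm s}a_{\rm d}/(\rho_{\rm g}v_{\rm th})$, which is not stated explicitly here, and uses the disk conditions quoted in the text ($\rho_{\rm g}\sim 10^{-11}\,{\rm g\,cm^{-3}}$, $T\simeq 100$ K, $M_{\rm ps}\simeq 0.1\,M_{\odot}$, $r\simeq 10$ au).

```python
import math

# Epstein-drag estimate (an assumption; the drag regime is not stated in the
# text) of the Stokes number at the disk conditions quoted above.
G, Msun, au = 6.674e-8, 1.989e33, 1.496e13   # cgs constants
rho_g, rho_s = 1e-11, 1.0                    # gas and grain-material densities [g cm^-3]
cs = 7.0e4                                   # sound speed at T ~ 100 K [cm s^-1]
v_th = math.sqrt(8.0 / math.pi) * cs         # mean thermal speed of gas molecules

omega_K = math.sqrt(G * 0.1 * Msun / (10 * au) ** 3)   # Keplerian frequency at 10 au

def stokes(a_d_cm):
    """St = t_s * Omega_K with Epstein stopping time t_s = rho_s a_d / (rho_g v_th)."""
    return rho_s * a_d_cm / (rho_g * v_th) * omega_K

for a_um in (0.01, 1.0, 1000.0):
    print(f"a_d = {a_um:7.2f} um -> St = {stokes(a_um * 1e-4):.1e}")
```

The resulting orders of magnitude, ${\rm St}\sim 10^{-9}$ for $0.01\,\mu$m grains and $\sim 10^{-4}$ for $1000\,\mu$m grains, agree with Figs. 7 and 8 and with the ${\rm St}\propto a_{\rm d}$ scaling.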
## 4 Discussion
### 4.1 Angular momentum transport and gravitational instability
As shown in §3.3, a portion of the dust particles keeps orbiting around
$r\simeq 10$–$20$ au in the disk without falling onto the sink. These
long-lived orbiting dust particles are strongly coupled with the gas,
because the gas density is sufficiently high and the Stokes number is much
smaller than unity. Thus, to understand why some of the dust is captured in an
orbit, it is important to investigate the physical properties of the gas. In
this subsection, we investigate the characteristics of the gas disk around the
central region.
First, we focus on the angular momentum transport in the disk. To extract the
structure in the high-density region, we plot the density distribution in the
density range $n>2.5\times 10^{12}\,{\rm cm^{-3}}$ in Fig. 9. The figure
clearly shows that the local high-density region has a spiral structure which
extends to $\sim 10$ au. The elapsed time and spatial scales are the same as
in the bottom left panel of Fig. 1. Previous studies have shown the formation
of a spiral structure driven by gravitational instability (Tomida et al.,
2017). Since the high-density spiral arm has a non-axisymmetric structure, the
angular momentum could be transported by the gravitational torque.
To validate the gravitational instability in the disk, the Toomre $Q$
parameter is plotted in Fig. 10. The definition and calculation method of the
$Q$ parameter in the disk are described in Paper I and Tomida et al. (2017).
The figure indicates that $Q<2$ in the range $7\,{\rm au}\lesssim r\lesssim
20\,{\rm au}$. Thus, a spiral structure will naturally develop in this region.
The spiral arms extend to $\sim 10$ au (Fig. 9), while the $Q$ parameter has a
local minimum around $r\sim 10$–$15$ au (Fig. 10). The position of the spiral
arms is not in complete agreement with the region having a local minimum of
$Q$, as previously shown in Tomida et al. (2017). The angular momentum should
be transported outward due to gravitational torque caused by the spiral arms
inside the $\lesssim 10$ au region of the disk. In contrast, the gas
distributed in the outer disk region of $r\gtrsim 10$ au should gain angular
momentum due to angular momentum conservation. Thus, a parcel of gas can stay
around $r\simeq 10$–$20$ au without moving inward. Therefore, the gas around
$r\gtrsim 10$ au orbits at almost the same radius while maintaining its
angular momentum. As seen in Figs. 3–8, the dust particles in the disk are
strongly coupled with the gas. As a result, the dust particles located around
$r\gtrsim 10$ au receive angular momentum from the gas and survive to orbit in
the disk without causing inward radial drift.
Figure 9: Same as the bottom left panel of Fig. 1, with a different color (or
density) range. Figure 10: Toomre’s $Q$ parameter (color) for the disk plotted
on the $z$ = 0 plane, in which the physical quantities are integrated in the
vertical direction within the disk and are averaged.
To understand the behavior of dust particles, the time evolution of the
specific angular momentum for the two dust particles shown in Figs. 5 and 6 is
plotted in Fig. 11. In the figure, a sudden drop at $t\sim 81$ kyr indicates
that the particles enter the disk from the infalling envelope. The dust
particle plotted by the blue line gradually loses its specific angular
momentum in the disk and finally falls onto the sink. The dust particle
plotted by the red line increases its specific angular momentum during
$t=81.2$–$82.5$ kyr, and then it has an almost constant specific angular
momentum by the end of the simulation. Although the specific angular momenta
of the two particles are almost the same before they enter the disk, their
trajectories are different (see Figs. 5 and 6). The particle shown in Fig. 5
enters the disk from the side on the equatorial plane, while the particle
shown in Fig. 6 enters the disk from above. The different trajectory histories
in the envelope result in different outcomes.
Figure 11: Time evolution of specific angular momentum for two selected dust
particles. The red line corresponds to a dust particle orbiting around $10$ au
without showing significant inward radial movement, shown in Fig. 5. The blue
line corresponds to the dust particle shown in Fig. 6, in which the particle
falls onto the sink while orbiting in the disk. Figure 12: As Fig. 10, with
the color indicating the plasma $\beta$.
In addition to the gravitational torque, the angular momentum can be
transported by magnetic effects. Plasma $\beta$ is a useful index to evaluate
whether magnetic effects play a role in angular momentum transfer. Fig. 12
plots plasma $\beta=P_{\rm gas}/P_{\rm mag}$ on the equatorial plane, where
$P_{\rm gas}$ and $P_{\rm mag}$ are the gas and magnetic pressure,
respectively. The figure shows that the plasma beta around $r\sim 10$ au is as
large as $\beta\sim 10^{5}$, indicating that magnetic effects can be
negligible for transporting the angular momentum in the rotationally supported
disk.
### 4.2 Difference from classical planet formation scenario
The MMSN is often used as a disk model in the classical planet formation
scenario. The gas disk considered in this study is different from the MMSN. In
this study, we simulated the formation and evolution of a disk from a
prestellar cloud core (or a molecular cloud core). Molecular cloud cores are
characterized by many observations (e.g., Tokuda et al., 2020). We self-
consistently include both the (disk) self-gravity and magnetic fields in our
cloud core collapse simulation. Our simulation shows a different dust
behavior, which has not been reported in previous studies. The crucial
difference between our study and previous studies is the radial inward
movement of the dust. In previous studies, the dust gradually moves inward,
which is called the radial drift problem (e.g., Weidenschilling, 1977). We
show that a portion of the dust particles does not monotonically fall onto the
central protostar, but that some particles can survive and orbit in the outer
disk region. Many theoretical studies have proposed possible solutions to
avoid the radial drift problem. In terms of a dust model, Okuzumi et al.
(2012) considered the internal structure of dust and showed that porous
aggregates can solve the radial drift problem. In addition, the radial drift
problem can be overcome by preparing artificially adjusted gas disks. For
example, Takahashi & Muto (2018) suggested a gas disk model assuming an MHD
disk wind producing a pressure bump within the disk. In such a disk model,
dust grains concentrate around the pressure bump and dust growth is
significantly promoted before the dust falls onto the central protostar.
As described above, our simulation shows that a non-axisymmetric structure
induced by gravitational instability develops in the disk and can prevent the
radial drift of dust. Our disk structure is obtained from a three-dimensional
MHD simulation starting from the molecular cloud core and is suitable for
investigating the initial conditions of planet formation. However, we ignore
the dust growth and porosity. The dust growth is discussed in the next
section, while the porosity will be the focus of future studies.
### 4.3 Dust Growth
The growth of dust particles is ignored in our method, as described in §4.2
and Paper I. Instead, we adopt a set of dust particles with different sizes in
the range $0.01\,\mu{\rm m}\leq a_{\rm d}\leq 1000\,\mu{\rm m}$. The simulation
shows that the dust particles of any size remain in the outer disk region for
$>3000$ yr, as described in §3.2. Thus, the growth of dust particles can be
expected in such a region (Fig. 5). This subsection discusses the possibility
of the growth of dust particles orbiting around the outer disk region.
The evolution of the dust peak (or typical) mass $m_{\rm d}$ in the Lagrangian form
can be described (Ormel, 2014; Sato et al., 2016; Okuzumi et al., 2016;
Arakawa et al., 2021) as
$\dfrac{dm_{\rm d}}{dt}=\dfrac{2\sqrt{\pi}a_{\rm d}^{2}\,\Delta v_{\rm
d}\,\Sigma_{\rm d}}{h_{\rm d}},$ (2)
where $\Sigma_{\rm d}$ and $h_{\rm d}$ are the surface density and scale
height of the dust, and $\Delta v_{\rm d}$ is the relative velocity between
dust particles. We consider the collisional growth of dust grains composed of
single-sized particles with equation (2). As described in §3.3, the dust
particles of any size adopted in this study are well coupled with the gas
within a rotationally supported (or Keplerian) disk. Thus, the dust scale
height $h_{\rm d}$ is assumed to be equal to the gas scale height $h_{\rm g}$.
In addition, the dust surface density can be described as $\Sigma_{\rm
d}=f\,\Sigma_{\rm g}$, where $f(=0.01)$ is the dust-to-gas mass ratio. Note
that $f$ is considered not to change (significantly) during the simulation, as
shown in Paper I. Then, with the gas mass density $\rho_{\rm g}$, we replace
$(\Sigma_{\rm d}/h_{\rm d})$ with $(f\Sigma_{\rm g}/h_{\rm g})=f\rho_{\rm g}$
in equation (2). Thus, equation
(2) can be written as
$\dfrac{dm_{\rm d}}{dt}=2\sqrt{\pi}a_{\rm d}^{2}\Delta v_{\rm d}f\rho_{\rm
g}.$ (3)
In addition, since we assume spherical dust particles, the mass of each
dust particle is described as
$m_{\rm d}=\dfrac{4\pi a_{\rm d}^{3}\,\rho_{\rm s}}{3},$ (4)
where $\rho_{\rm s}(=1\,{\rm g\,cm^{-3}})$ is the material density of dust
grains, in which the dust grains are assumed to be composed of ice (Paper I).
With equations (2)–(4), we can estimate the growth timescale of dust grains
$t_{\rm grow}$ with the dust mass $m_{\rm d}$ and dust size $a_{\rm d}$
(Arakawa et al., 2021; Tsukamoto et al., 2021b) as
$\displaystyle t_{\rm grow}$ $\displaystyle\equiv$
$\displaystyle\left(\dfrac{1}{a_{\rm d}}\dfrac{da_{\rm
d}}{dt}\right)^{-1}=3\left(\dfrac{1}{m_{\rm d}}\dfrac{dm_{\rm
d}}{dt}\right)^{-1}$ (5) $\displaystyle=$
$\displaystyle\dfrac{2\sqrt{\pi}a_{\rm d}\rho_{\rm s}}{\Delta v_{\rm
d}f\rho_{\rm g}}.$
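As a sanity check on the algebra, the closed form in equation (5) can be verified numerically from equations (3) and (4). This is a minimal sketch with arbitrary test values, not a computation from the paper:

```python
import math

# Numerical check that t_grow = 3 * m_d / (dm_d/dt), with eq. (3) for dm_d/dt
# and eq. (4) for m_d, reduces to 2*sqrt(pi)*a_d*rho_s / (dv * f * rho_g).

def t_grow_from_defs(a_d, rho_s, dv, f, rho_g):
    m_d = 4.0 * math.pi * a_d**3 * rho_s / 3.0                  # eq. (4)
    dmdt = 2.0 * math.sqrt(math.pi) * a_d**2 * dv * f * rho_g   # eq. (3)
    return 3.0 * m_d / dmdt                                     # eq. (5), middle expression

def t_grow_closed(a_d, rho_s, dv, f, rho_g):
    return 2.0 * math.sqrt(math.pi) * a_d * rho_s / (dv * f * rho_g)  # eq. (5)

# Arbitrary cgs test values: 1 um grain, icy material density, dv = 40 m/s, f = 0.01.
args = (1e-4, 1.0, 4.0e3, 0.01, 1e-11)
assert math.isclose(t_grow_from_defs(*args), t_grow_closed(*args))
```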
We need the relative velocity between dust particles $\Delta v_{\rm d}$ to
estimate the growth timescale in equation (5). The relative velocity cannot be
estimated in the simulation because the spatial scales of the dust particles
($\lesssim 1$ cm) and the circumstellar disk ($>10^{14}$ cm) differ considerably.
Thus, we use a simple formulation of the relative velocity used in past
studies (Ormel & Cuzzi, 2007; Okuzumi et al., 2016; Arakawa et al., 2021;
Shibaike & Mori, 2022) as
$\Delta v_{\rm d}=\sqrt{3\alpha\,{\rm St}}\,c_{\rm s},$ (6)
where $\alpha$, St and $c_{\rm s}$ are the parameter characterizing turbulence
or viscosity on a scale comparable to the size of dust particles, the Stokes
number, and the sound speed of the outer disk region, respectively.
Substituting equation (6) into equation (5), the growth timescale can be
written as
$t_{\rm grow}=\dfrac{2\sqrt{\pi}a_{\rm d}\,\rho_{\rm s}}{f\rho_{\rm
g}}(3\alpha\,{\rm St})^{-1/2}c_{\rm s}^{-1}.$ (7)
In the following, we estimate the dust growth timescale (eq. [7]) using our
simulation results.
As shown in Figures 3–5, the dust particles of any size remain (or continue to
orbit) at $\sim 10$ au from the central protostar. In such a region,
the gas density is as high as $\rho_{\rm g}\sim 10^{-11}\,{\rm g\,cm^{-3}}$
and the temperature is $T\simeq 100$ K (see Fig. 5 and Paper I). Note that
$\rho_{\rm g}\sim 10^{-11}\,{\rm g\,cm^{-3}}$ roughly corresponds to the
lowest density of the magnetically inactive region where the magnetic field
dissipates (Nakano et al., 2002; Machida et al., 2007). A rotationally
supported disk forms in such a region in the early star formation stage
(Machida & Matsumoto, 2011). Next, using simulation results (Paper I and Figs.
7 and 8), we relate the dust size $a_{\rm d}$ to the Stokes number St in the
outer disk region ($\sim 10$ au) as
${\rm St}\simeq 10^{-7}\left(\dfrac{a_{\rm d}}{10^{-6}\,{\rm m}}\right).$ (8)
Using these simulation results, we can rewrite the growth timescale given by
equation (7) as
$t_{\rm grow}=29.2\left(\dfrac{a_{\rm d}}{10^{-4}\,{\rm cm}}\right)^{1/2}\left(\dfrac{\rho_{\rm s}}{1\,{\rm g\,cm^{-3}}}\right)\left(\dfrac{f}{0.01}\right)^{-1}\left(\dfrac{\rho_{\rm g}}{10^{-11}\,{\rm g\,cm^{-3}}}\right)^{-1}\left(\dfrac{\alpha}{0.01}\right)^{-1/2}\left(\dfrac{T}{100\,{\rm K}}\right)^{-1/2}\,{\rm yr},$ (9)
where the sound speed $c_{\rm s}=7.0\times 10^{4}\,(T/100\,{\rm K})^{1/2}\,{\rm cm\,s^{-1}}$ is used. For
simplicity, we describe the growth timescale as a function of only the dust
size $a_{\rm d}$ as
$t_{\rm grow}=29.2\left(\dfrac{a_{\rm d}}{10^{-6}\,{\rm m}}\right)^{1/2}\ {\rm yr}.$ (10)
Equation (10) indicates that the dust grains with a size of $1\,\mu$m grow in
$\sim 30$ yr in the outer disk region.
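The prefactor in equations (9) and (10) can be reproduced by evaluating equation (7) at the fiducial values; the sketch below (assuming ${\rm St}=10^{-7}$ from eq. [8] for a 1 $\mu$m grain) is illustrative only:

```python
import math

# Evaluate eq. (7) at the fiducial values in the text: a_d = 1 um,
# rho_s = 1 g cm^-3, f = 0.01, rho_g = 1e-11 g cm^-3, alpha = 0.01,
# T = 100 K, and St = 1e-7 from eq. (8) for a 1 um grain.
yr = 3.156e7                       # seconds per year
a_d = 1e-4                         # 1 um in cm
rho_s, f, rho_g = 1.0, 0.01, 1e-11
alpha, St = 0.01, 1e-7
cs = 7.0e4                         # cm s^-1 at T = 100 K

t_grow = (2.0 * math.sqrt(math.pi) * a_d * rho_s / (f * rho_g)
          * (3.0 * alpha * St) ** -0.5 / cs)       # eq. (7)
print(f"t_grow = {t_grow / yr:.1f} yr")            # ~29 yr, matching eq. (9)
```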
As described in §3, the dust particles that entered from the horizontal direction
(or the disk outer edge on the equatorial plane) orbit around $\sim 10$ au
within the disk without falling onto the central protostar. The Keplerian
timescale is given by
$t_{\rm
Kep}=100\left(\dfrac{M_{*}}{0.1M_{\odot}}\right)^{-1/2}\left(\dfrac{r}{10\,{\rm
au}}\right)^{3/2}\ {\rm yr}.$ (11)
The dust (and gas) particles in the outer disk region survive for $>3$ kyr,
corresponding to $>30$ orbital periods at $r=10$ au (see §3.3). Setting the
growth timescale equal to $30\,t_{\rm Kep}=3$ kyr, we can estimate the
maximum dust size attainable in the outer disk region. With equation (10),
$t_{\rm grow}=30\,t_{\rm Kep}$ gives a maximum dust size of $a_{\rm d}=1.06$ cm
when the dust particles orbit around $\sim 10$ au for $\sim 3$ kyr. In our
calculation, dust particles of any size ($0.01$–$1000\,\mu$m) are well coupled
with the gas in the disk (§3.3). Thus, this growth estimate should remain
valid until the dust size reaches $a_{\rm d}\simeq 0.1$–$1$ cm.
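The quoted maximum size follows from inverting equation (10); a minimal numeric sketch:

```python
# Invert eq. (10), t_grow = 29.2 (a_d / 1 um)^{1/2} yr, for t_grow = 30 t_Kep,
# with t_Kep = 100 yr at r = 10 au and M_* = 0.1 Msun (eq. 11).
t_target = 30 * 100.0               # yr
a_um = (t_target / 29.2) ** 2       # maximum size in microns
a_cm = a_um * 1e-4
print(f"a_d_max = {a_cm:.2f} cm")   # ~1.06 cm
```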
Recently, Tsukamoto et al. (2021b), Tu et al. (2022) and Bate (2022)
investigated the dust growth in the early star formation stage with
(magneto)hydrodynamic simulations. The dust grows to reach $a_{\rm d}\sim
0.1-1$ cm in Tsukamoto et al. (2021b) and Tu et al. (2022), which is
consistent with our estimate above. On the other hand, Bate (2022) showed that
dust grains only grow to about 0.01 cm. However, since that study investigated
only the very early stage of star formation, its result does not contradict
our estimate.
We stopped the calculation before the disk grows sufficiently, due to the
limitation of computational resources. Thus, the dust particles may grow
larger than 1 cm in size at a later evolutionary stage. Equation (8)
indicates that ${\rm St}\leq 0.1$ is maintained as long as the dust size is
smaller than $a_{\rm d}\lesssim 1$ m. Equation (10) indicates that the dust
can grow to reach $a_{\rm d}=1$ m in about $3\times 10^{4}$ yr. When the dust
grains have a size of $a_{\rm d}=1$ m, the relative velocity is $\Delta v_{\rm
d}=38\,{\rm m\,s^{-1}}$ (eq. 6), which is still comparable to or smaller than
the fragmentation velocity for dust composed of ice grains, $v_{\rm
frag}=20$–$70\,{\rm m\,s^{-1}}$ (e.g. Wada et al., 2013; Kawasaki et al.,
2022). Thus, the dust may grow in the outer disk region until the dust size
reaches $\sim 1$ m.
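The $38\,{\rm m\,s^{-1}}$ figure follows directly from equation (6); a quick check, using $c_{\rm s}=7.0\times 10^{4}\,{\rm cm\,s^{-1}}$ as above:

```python
import math

# Eq. (6) with St = 0.1 (the eq. 8 relation at a_d = 1 m), alpha = 0.01, and
# the outer-disk sound speed used in the text.
alpha, St = 0.01, 0.1
cs = 7.0e4                                 # cm s^-1
dv = math.sqrt(3.0 * alpha * St) * cs      # cm s^-1
print(f"dv = {dv / 100:.0f} m/s")          # ~38 m/s
```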
Although we neglect many effects that may suppress or promote dust growth
(Bae et al., 2022), our result implies that the dust can grow substantially in
the outer disk region. The rapid dust growth is attributed to the high density
and long dynamical timescale (with a small protostellar mass and large
radius) realized in the early stage of star formation.
### 4.4 Observations of large-sized dust grains within protostellar outflow
and envelope
The dust growth in the disk could be tested by observations of
protostellar outflows. Kwon et al. (2019), Galametz et al. (2019) and Valdivia
et al. (2019) have suggested the existence of large dust grains ($\gtrsim
10$–$100\,\mu$m) within the envelope. The dust growth timescale within the
outflow is as long as $>10^{8}$ yr, obtained by inserting the maximum outflow
density $\rho=10^{-17}\,{\rm g\,cm^{-3}}$ (Fig. 2) and $a_{\rm d}=100\,\mu$m
into equation (9). In addition, the dust growth timescale in the envelope is
$>10^{6}$ yr with a density of $\rho=10^{-15}\,{\rm g\,cm^{-3}}$. Thus, it is
difficult for dust particles to grow to $\sim 100\,\mu$m in the envelope and
outflow.
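These timescales follow from rescaling equation (9) in density and size alone, keeping the other factors at their fiducial values and taking the St–size relation of equation (8) at face value, as the text does; a rough sketch:

```python
# Rescale eq. (9): t_grow = 29.2 (a_d / 1 um)^{1/2} (rho_g / 1e-11 g cm^-3)^{-1} yr,
# with all other factors at their fiducial values.
def t_grow_yr(a_um, rho_g):
    return 29.2 * a_um**0.5 * (rho_g / 1e-11) ** -1.0

print(f"outflow:  {t_grow_yr(100, 1e-17):.1e} yr")   # ~3e8 yr (> 1e8 yr)
print(f"envelope: {t_grow_yr(100, 1e-15):.1e} yr")   # ~3e6 yr (> 1e6 yr)
```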
On the other hand, we have already shown that the protostellar outflow
contains $100\,\mu$m dust grains in our previous study (see Figs. 8 and 12 of
Paper I), indicating that dust grains with a size of $10$–$100\,\mu$m are
(partially) coupled with the gas within the outflow (see Fig. 13 of Paper I).
Although we did not consider the dust growth in this and previous studies, we
discuss the growth timescale of dust grains in §4.3. The dust grains can grow
up to at least $\sim 1$ cm in size when the grains continue to orbit around
the outer disk region (§4.3). Therefore, it is natural that dust grains with a
size of $\gtrsim 10-100\,\mu$m are ejected from the circumstellar disk by the
outflow. Tsukamoto et al. (2021b) showed that a portion of the dust particles
is decoupled from the gas in the outflow and falls back into the envelope and
disk. As a result, the falling particles can increase the abundance of large
dust grains in the envelope. Thus, observing large grains in the envelope or
outflow may be evidence of dust growth in the circumstellar disk in the early
stage of star formation. We cannot directly reproduce the circulation and
growth process of dust grains shown in Tsukamoto et al. (2021b), because dust
growth is not considered in this study. We will investigate it in future
studies.
### 4.5 Caveats and future perspectives
In addition to Ohmic dissipation, there are two other non-ideal MHD effects,
ambipolar diffusion and the Hall effect, which are not considered in our
simulation. Although these effects are also important during the star
formation process, they are not expected to significantly change the dust
trajectories in high-density gas disks, as both ambipolar diffusion and the
Hall effect influence the gas dynamics in relatively low-density gas regions
(e.g. Koga et al., 2019; Kawasaki et al., 2021). Our simulation settings are
the same as in Tomida et al. (2017), which reported that the simulation
reproduces the observed disk structure fairly well. We show that dust
particles located in the outer disk region do not fall onto the protostar
because such particles receive angular momentum from the inner disk region,
transported by the gravitational torque. The formation of a gravitationally
unstable disk has also been confirmed with non-ideal MHD simulations that
include three non-ideal MHD effects, Ohmic dissipation, ambipolar diffusion,
and the Hall effect (e.g. Wurster et al., 2019). These simulations also show
that angular momentum is transported outward by gravitational instability (or
gravitational torque). Thus, suppression of the dust inward motion can be
expected as long as the disk is in a gravitationally unstable state or in the
main accretion phase.
The magnetic diffusion coefficients of non-ideal MHD effects depend on the
dust properties such as size and size distribution of dust grains (Kawasaki et
al., 2022). The non-ideal MHD effects or dust properties can influence the
properties of protostellar outflows (e.g. Marchand et al., 2020). The outflow
can transport angular momentum from the circumstellar disk. Thus, the change
in the efficiency of angular momentum transport due to the outflow should
change the disk properties, such as the size and density of the circumstellar
disk in the early stage of star formation. As a result, the properties of the
region where dust particles continue to orbit should be changed. However, the
dust motion in the outer disk region should not change qualitatively as
long as the disk remains gravitationally unstable, as described above. On the
other hand, the dust growth timescale discussed in §4.3 may be changed
according to the properties of both outflow and circumstellar disk.
Finally, we discuss the effect of radiative feedback from the protostar. In
this study, we do not consider the issue of radiative transfer. However,
radiative heating from the central protostar may influence the disk dynamics.
The protostellar accretion luminosity can heat the disk even when the
protostellar mass is as small as $\sim 0.1\,{\rm M}_{\odot}$ (e.g. Machida &
Hosokawa, 2013; Hennebelle et al., 2020). Thus, radiative heating tends to
suppress the disk gravitational instability. In addition, viscous heating by
differential rotation in the disk and shock heating during accretion from the
infalling envelope may affect the disk dynamics. Heating and cooling should be
carefully addressed in future studies to more adequately investigate the disk
dynamics and behavior of dust grains.
## 5 Conclusion
We investigated the behavior of dust grains in a star-forming cloud. We
developed a new method that treats dust grains as Lagrange particles while
solving the fluid dynamics in the Eulerian framework, in order to
investigate the motion of dust grains in the early star and disk formation
stage. We described the implementation and range of application of Lagrange
dust particles in Paper I. We also showed the time evolution of dust abundance
in the protostar, disk, outflow, and envelope regions in Paper I. The current
paper focused on the dust motion and the coupling between dust and gas in the
main accretion phase, during which the circumstellar disk gradually grows.
We prepared six sets of different-sized dust particles in the range
$0.01$–$1000\,\mu$m and placed them in a prestellar cloud core. We calculated
the evolution of the cloud core until the protostellar mass reaches
0.0784${\rm M}_{\odot}$, ignoring dust growth. In the gravitationally
collapsing cloud, a circumstellar disk forms and drives a protostellar
outflow. Although the disk gradually grows in the main accretion phase, the
size of the disk is as small as $\sim 20$ au at the end of the simulation.
A large portion of the large dust particles ($a_{\rm d}\gtrsim 100\,\mu$m)
falls onto the disk, while the outflow can eject a small number. For the small
dust particles ($a_{\rm d}\lesssim 10$–$100\,\mu$m), the dust particles
initially placed with a small zenith angle $\theta_{0}\lesssim 45^{\circ}$ are
ejected by the outflow, where $\theta_{0}$ is the angle relative to the
rotation axis (or the $z$-axis). Since we already showed the spatial variation
in the abundance of different-sized dust grains in Paper I, we do not discuss
it further here.
The motion of dust grains exhibits two trends after they enter the
circumstellar disk, depending on their trajectory before reaching the disk,
regardless of the dust size. The dust grains slowly move inward and
fall onto the sink (or protostar) when they enter the circumstellar disk from
above, for which the incidence angle of the dust grains relative to the disk
normal is small. In contrast, the dust grains continue to orbit near the outer
edge of the disk, neither moving inward nor falling onto the protostar, when
they enter the disk from the side or near the equatorial plane with a large
incidence angle relative to the disk normal. In other words, dust grains
reaching the inner disk region from the upper envelope rapidly fall onto the
protostar, while those reaching the outer disk region survive without falling
onto the protostar.
We estimated the coupling between dust grains and gas in the disk using the
Stokes number to investigate the motion of the surviving dust grains. The
Stokes numbers for all dust grains with sizes of $a_{\rm d}=0.01$–$1000\,\mu$m
are well below unity, indicating that the dust grains are well coupled with
the gas in the disk. Thus, the curious behavior of dust grains in the disk
could be attributed to the gas motion within the disk.
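As a back-of-the-envelope illustration, the Stokes number in the Epstein drag regime can be sketched as follows; all disk values below are assumed, order-of-magnitude numbers and are not outputs of our simulation:

```python
import math

def stokes_number_epstein(a_d, rho_mat, rho_gas, c_s, omega_k):
    """Stokes number St = t_stop * Omega_K for a grain of radius a_d [cm]
    in the Epstein drag regime: t_stop = rho_mat * a_d / (rho_gas * v_th),
    with thermal velocity v_th = sqrt(8/pi) * c_s."""
    v_th = math.sqrt(8.0 / math.pi) * c_s
    t_stop = rho_mat * a_d / (rho_gas * v_th)
    return t_stop * omega_k

# Assumed disk values: rho_gas ~ 1e-11 g/cm^3, c_s ~ 3e4 cm/s,
# Omega_K ~ 2e-9 s^-1 near 20 au, grain material density 2 g/cm^3.
for a_cm in (1e-6, 1e-4, 1e-1):   # 0.01 um, 1 um, 1000 um
    st = stokes_number_epstein(a_cm, 2.0, 1e-11, 3e4, 2e-9)
    print(f"a_d = {a_cm*1e4:g} um -> St = {st:.2e}")
```

Even for the largest (1000 μm) grains, these illustrative numbers give St well below unity, consistent with dust being well coupled to the gas in the disk.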
In the disk, the magnetic field is weak and the plasma beta is very high
because of effective magnetic dissipation. Thus, the magnetic field cannot
play a role in transporting the angular momentum in the disk. As a result, the
disk mass gradually increases without an effective mechanism for angular
momentum transfer. This will cause gravitational instability because the mass
supply from the infalling envelope to the disk persists in the main accretion
phase. We found that a tiny spiral structure develops due to gravitational
instability near the disk center. The spiral structure extends to $\sim 10$
au, while the disk has a radius of $\sim 20$ au. Thus, the gas and dust
particles located at $r\lesssim 10$ au lose their angular momentum due to
gravitational torque related to the spiral structure, and they can fall onto
the central region. The gas and dust particles located at $r\gtrsim 10$ au
receive angular momentum from the inner gas and dust particles, and survive
without falling onto the protostar.
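The onset of gravitational instability described above can be illustrated with the standard Toomre criterion, $Q=c_{\rm s}\Omega/(\pi G\Sigma)$; the sketch below uses assumed, order-of-magnitude values for a young, massive disk, not quantities measured in our simulation:

```python
import math

G_CGS = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def toomre_q(c_s, omega, sigma):
    """Toomre parameter Q = c_s * Omega / (pi * G * Sigma) for a thin
    rotating disk; Q <~ 1-2 indicates gravitational instability."""
    return c_s * omega / (math.pi * G_CGS * sigma)

# Assumed values: c_s ~ 3e4 cm/s, Omega ~ 2e-9 s^-1,
# surface density Sigma ~ 300 g/cm^2 for a massive ~20 au disk.
q = toomre_q(3e4, 2e-9, 300.0)
print(f"Q = {q:.2f}")
```

With these illustrative numbers Q falls below unity, i.e. the disk is prone to developing the spiral structure that transports angular momentum.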
In our simulation, dust particles orbiting in the outer disk region survive
for $>4000$ yr without showing inward migration, but we cannot determine
whether these particles survive for a much longer period. However, if these
particles survive for a very long time, the dust grains will grow and may form
planetesimals or protoplanets. Thus, this study suggests that the outer disk
regions where dust grains accumulate without inward radial drift could be a
favored place of planet formation. The number of dust particles calculated in
this study is not large (102,984 particles, including gas test particles; see
Paper I for details), so it is difficult to quantitatively estimate the
amount of dust grains orbiting in the outer disk region. The purpose of this
study was to qualitatively investigate the dust motion in a growing disk
during the main accretion phase, and we do not extend the discussion to planet
formation. Future studies will investigate long-term integration to determine
the final fate of dust grains orbiting in the outer disk region.
## Acknowledgements
We thank the referee for very useful comments and suggestions on this paper.
We have benefited greatly from discussions with Yusuke Tsukamoto and Yoshihiro
Kawasaki. This work was supported by the Japan Society for the Promotion of
Science KAKENHI (JP17H06360, JP17K05387, JP17KK0096, JP21H00046, JP21K03617:
MNM), JSPS KAKENHI grants (numbers JP20J12062), and NAOJ ALMA Scientific
Research Grant Code 2022-22B. This research used the computational resources
of the High-Performance Computing Infrastructure (HPCI) system provided by the
CyberScience Center at Tohoku University, the Cybermedia Center at Osaka
University, and the Earth Simulator at JAMSTEC through the HPCI System
Research Project (project IDs hp200004, hp210004, hp220003). The simulations
reported in this paper were also performed by 2021 and 2022 Koubo Kadai on the
Earth Simulator (NEC SX-ACE and NEC SX-Aurora TSUBASA) at JAMSTEC.
## Data Availability
The data underlying this article are available in the article and in its
online supplementary material.
## References
* ALMA Partnership et al. (2015) ALMA Partnership et al., 2015, ApJ, 808, L3
* Andrews (2020) Andrews S. M., 2020, ARA&A, 58, 483
* Andrews et al. (2018) Andrews S. M., et al., 2018, ApJ, 869, L41
* Arakawa et al. (2021) Arakawa S., Matsumoto Y., Honda M., 2021, ApJ, 920, 27
* Aso & Machida (2020) Aso Y., Machida M. N., 2020, ApJ, 905, 174
* Bae et al. (2022) Bae J., Isella A., Zhu Z., Martin R., Okuzumi S., Suriano S., 2022, arXiv e-prints, p. arXiv:2210.13314
* Banerjee & Pudritz (2006) Banerjee R., Pudritz R. E., 2006, ApJ, 641, 949
* Bate (2022) Bate M. R., 2022, MNRAS, 514, 2145
* Benisty et al. (2021) Benisty M., et al., 2021, ApJ, 916, L2
* Blum & Wurm (2008) Blum J., Wurm G., 2008, ARA&A, 46, 21
* Galametz et al. (2019) Galametz M., Maury A. J., Valdivia V., Testi L., Belloche A., André P., 2019, A&A, 632, A5
* Hayashi et al. (1985) Hayashi C., Nakazawa K., Nakagawa Y., 1985, in Black D. C., Matthews M. S., eds, Protostars and Planets II. pp 1100–1153
* Hennebelle & Fromang (2008) Hennebelle P., Fromang S., 2008, A&A, 477, 9
* Hennebelle et al. (2020) Hennebelle P., Commerçon B., Lee Y.-N., Chabrier G., 2020, ApJ, 904, 194
* Kawasaki et al. (2021) Kawasaki Y., Koga S., Machida M. N., 2021, MNRAS, 504, 5588
* Kawasaki et al. (2022) Kawasaki Y., Koga S., Machida M. N., 2022, MNRAS, 515, 2072
* Koga et al. (2019) Koga S., Tsukamoto Y., Okuzumi S., Machida M. N., 2019, MNRAS, 484, 2119
* Koga et al. (2022) Koga S., Kawasaki Y., Machida M. N., 2022, arXiv e-prints, p. arXiv:2207.12907
* Kwon et al. (2019) Kwon W., Stephens I. W., Tobin J. J., Looney L. W., Li Z.-Y., van der Tak F. F. S., Crutcher R. M., 2019, ApJ, 879, 25
* Lebreuilly et al. (2020) Lebreuilly U., Commerçon B., Laibe G., 2020, A&A, 641, A112
* Lee et al. (2021) Lee Y.-N., Charnoz S., Hennebelle P., 2021, A&A, 648, A101
* Machida & Hosokawa (2013) Machida M. N., Hosokawa T., 2013, MNRAS, 431, 1719
* Machida & Matsumoto (2011) Machida M. N., Matsumoto T., 2011, MNRAS, 413, 2767
* Machida et al. (2004) Machida M. N., Tomisaka K., Matsumoto T., 2004, MNRAS, 348, L1
* Machida et al. (2006) Machida M. N., Matsumoto T., Hanawa T., Tomisaka K., 2006, ApJ, 645, 1227
* Machida et al. (2007) Machida M. N., Inutsuka S.-i., Matsumoto T., 2007, ApJ, 670, 1198
* Machida et al. (2009) Machida M. N., Inutsuka S.-i., Matsumoto T., 2009, ApJ, 699, L157
* Machida et al. (2014) Machida M. N., Inutsuka S.-i., Matsumoto T., 2014, MNRAS, 438, 2278
* Machida et al. (2016) Machida M. N., Matsumoto T., Inutsuka S.-i., 2016, MNRAS, 463, 4246
* Marchand et al. (2020) Marchand P., Tomida K., Tanaka K. E. I., Commerçon B., Chabrier G., 2020, ApJ, 900, 180
* Marchand et al. (2021) Marchand P., Guillet V., Lebreuilly U., Mac Low M. M., 2021, A&A, 649, A50
* Marchand et al. (2022) Marchand P., Guillet V., Lebreuilly U., Mac Low M. M., 2022, A&A, 666, A27
* Mayor & Queloz (1995) Mayor M., Queloz D., 1995, Nature, 378, 355
* Nakano et al. (2002) Nakano T., Nishi R., Umebayashi T., 2002, ApJ, 573, 199
* Okuzumi et al. (2012) Okuzumi S., Tanaka H., Kobayashi H., Wada K., 2012, ApJ, 752, 106
* Okuzumi et al. (2016) Okuzumi S., Momose M., Sirono S.-i., Kobayashi H., Tanaka H., 2016, ApJ, 821, 82
* Ormel (2014) Ormel C. W., 2014, ApJ, 789, L18
* Ormel & Cuzzi (2007) Ormel C. W., Cuzzi J. N., 2007, A&A, 466, 413
* Podio et al. (2020) Podio L., et al., 2020, A&A, 642, L7
* Sato et al. (2016) Sato T., Okuzumi S., Ida S., 2016, A&A, 589, A15
* Sheehan et al. (2020) Sheehan P. D., Tobin J. J., Federman S., Megeath S. T., Looney L. W., 2020, ApJ, 902, 141
* Shibaike & Mori (2022) Shibaike Y., Mori S., 2022, arXiv e-prints, p. arXiv:2211.08947
* Takahashi & Muto (2018) Takahashi S. Z., Muto T., 2018, ApJ, 865, 102
* Tokuda et al. (2020) Tokuda K., et al., 2020, ApJ, 899, 10
* Tomida et al. (2013) Tomida K., Tomisaka K., Matsumoto T., Hori Y., Okuzumi S., Machida M. N., Saigo K., 2013, ApJ, 763, 6
* Tomida et al. (2017) Tomida K., Machida M. N., Hosokawa T., Sakurai Y., Lin C. H., 2017, ApJ, 835, L11
* Tsukamoto et al. (2015) Tsukamoto Y., Iwasaki K., Okuzumi S., Machida M. N., Inutsuka S., 2015, ApJ, 810, L26
* Tsukamoto et al. (2020) Tsukamoto Y., Machida M. N., Susa H., Nomura H., Inutsuka S., 2020, ApJ, 896, 158
* Tsukamoto et al. (2021a) Tsukamoto Y., Machida M. N., Inutsuka S., 2021a, ApJ, 913, 148
* Tsukamoto et al. (2021b) Tsukamoto Y., Machida M. N., Inutsuka S.-i., 2021b, ApJ, 920, L35
* Tu et al. (2022) Tu Y., Li Z.-Y., Lam K. H., 2022, MNRAS, 515, 4780
* Tychoniec et al. (2020) Tychoniec Ł., et al., 2020, A&A, 640, A19
* Valdivia et al. (2019) Valdivia V., Maury A., Brauer R., Hennebelle P., Galametz M., Guillet V., Reissl S., 2019, MNRAS, 488, 4897
* Wada et al. (2013) Wada K., Tanaka H., Okuzumi S., Kobayashi H., Suyama T., Kimura H., Yamamoto T., 2013, A&A, 559, A62
* Weidenschilling (1977) Weidenschilling S. J., 1977, MNRAS, 180, 57
* Wurster et al. (2019) Wurster J., Bate M. R., Price D. J., 2019, MNRAS, 489, 1719
* Xu & Kunz (2021a) Xu W., Kunz M. W., 2021a, MNRAS, 502, 4911
* Xu & Kunz (2021b) Xu W., Kunz M. W., 2021b, MNRAS, 508, 2142
# Disentangling the Mechanisms Behind
Implicit Regularization in SGD
Zachary (UC San Diego), Simran (Princeton University), Tanya (Carnegie Mellon
University), Saurabh (Carnegie Mellon University), Zachary C. Lipton
(Carnegie Mellon University)
###### Abstract
A number of competing hypotheses have been proposed to explain _why_ small-
batch Stochastic Gradient Descent (SGD) leads to improved generalization over
the full-batch regime, with recent work crediting the implicit regularization
of various quantities throughout training. However, to date, empirical
evidence assessing the explanatory power of these hypotheses is lacking. In
this paper, we conduct an extensive empirical evaluation, focusing on the
ability of various theorized mechanisms to close the small-to-large batch
generalization gap. Additionally, we characterize how the quantities that SGD
has been claimed to (implicitly) regularize change over the course of
training. By using _micro-batches_ , i.e. disjoint smaller subsets of each
mini-batch, we empirically show that explicitly penalizing the gradient norm
or the Fisher Information Matrix trace, averaged over micro-batches, in the
large-batch regime recovers small-batch SGD generalization, whereas Jacobian-
based regularizations fail to do so. This generalization performance is shown
to often be correlated with how well the regularized model’s gradient norms
resemble those of small-batch SGD. We additionally show that this behavior
breaks down as the micro-batch size approaches the batch size. Finally, we
note that in this line of inquiry, positive experimental findings on CIFAR10
are often reversed on other datasets like CIFAR100, highlighting the need to
test hypotheses on a wider collection of datasets.
## 1 Introduction
While small-batch SGD has frequently been observed to outperform large-batch
SGD [Geiping et al., 2022, Keskar et al., 2017, Masters and Luschi, 2018,
Smith et al., 2021, Wu et al., 2020, Jastrzebski et al., 2018, Wu et al.,
2018, Wen et al., 2020, Mori and Ueda, 2020], the upstream cause for this
generalization gap is a contested topic, approached from a variety of
analytical perspectives [Goyal et al., 2017, Wu et al., 2020, Geiping et al.,
2022, Lee et al., 2022]. Initial work in this field has generally focused on
the learning rate to batch-size ratio [Keskar et al., 2017, Masters and
Luschi, 2018, Goyal et al., 2017, Mandt et al., 2017, He et al., 2019, Li et
al., 2019] or on recreating stochastic noise via mini-batching [Wu et al.,
2020, Jastrzebski et al., 2018, Zhu et al., 2019, Mori and Ueda, 2020, Cheng
et al., 2020, Simsekli et al., 2019, Xie et al., 2021], whereas recent works
have shifted focus to understanding how mini-batch SGD may _implicitly
regularize_ certain quantities that improve generalization [Geiping et al.,
2022, Barrett and Dherin, 2020, Smith et al., 2021, Lee et al., 2022,
Jastrzebski et al., 2020].
Figure 1: Validation Accuracy and Average Micro-batch ($|M|=128$) Gradient
Norm for CIFAR10/100 Regularization Experiments, averaged across runs (plots
also smoothed for clarity). In both datasets, Gradient Norm (GN) and Fisher
Trace (FT) Regularization mimic the average micro-batch gradient norm behavior
of SGD during early training and effectively recover generalization
performance (within a small margin of error), whereas both Average and Unit
Jacobian (AJ and UJ) fail to do so.
In this paper, we provide a careful empirical analysis of how these competing
regularization theories compare to each other as assessed by how well the
prescribed interventions, when applied in the large batch setting, recover
SGD’s performance. Additionally, we study their similarities and differences
by analyzing the evolution of the regularized quantities over the course of
training.
Our main contributions are the following:
1.
By utilizing micro-batches (i.e. disjoint subsets of each mini-batch), we find
that explicitly regularizing either the average micro-batch gradient norm
[Geiping et al., 2022, Barrett and Dherin, 2020] or Fisher Information Matrix
trace [Jastrzebski et al., 2020] (equivalent to the average gradient norm when
labels are drawn from the predictive distribution, detailed in Section 2.2) in
the large-batch regime fully recovers small-batch SGD generalization
performance, but using Jacobian-based regularization [Lee et al., 2022] fails
to recover small-batch SGD performance (see Figure 1).
2.
We show that the generalization performance is strongly correlated with how
well the trajectory of the average micro-batch gradient norm during training
_mimics_ that of small-batch SGD, but that this condition is not necessary for
recovering performance in some scenarios. The poor performance of Jacobian
regularization, which enforces either uniform or fully random weighting on
each class and example (see Section 2.3), highlights that the beneficial
aspects of average micro-batch gradient norm or Fisher trace regularization
may come from the loss gradient’s ability to adaptively weight outputs on the
per example and per class basis.
3.
We demonstrate that the generalization benefits of both successful methods no
longer hold when the micro-batch size is closer to the actual batch size. We
further show that in this regime the average micro-batch gradient norm
behavior of both previously successful methods differs significantly from
the small-batch SGD case.
4.
We highlight a high-level issue in modern empirical deep learning research:
Experimental results that hold on CIFAR10 do not necessarily carry over to
other datasets. In particular, we focus on a technique called _gradient
grafting_ [Agarwal et al., 2020], which has been shown to improve
generalization for adaptive gradient methods. By looking at its behavior for
normal SGD and GD, we show that gradient grafting recovers small-batch SGD
generalization’s performance on CIFAR10 but fails in CIFAR100, arguing that
research in this line should prioritize experiments on a larger and diverse
range of benchmark datasets.
## 2 Prior Work and Preliminaries
In neural network training, the choice of batch size (and learning rate)
heavily influences generalization. In particular, researchers have found that
opting for small batch sizes (and large learning rates) improves a network’s
ability to generalize [Keskar et al., 2017, Masters and Luschi, 2018, Goyal et
al., 2017, Mandt et al., 2017, He et al., 2019, Li et al., 2019]. Yet,
explanations for this phenomenon have long been debated. While some
researchers have attributed the success of small-batch SGD to gradient noise
introduced by stochasticity and mini-batching [Wu et al., 2020, Jastrzebski et
al., 2018, Zhu et al., 2019, Mori and Ueda, 2020, Cheng et al., 2020, Simsekli
et al., 2019, Xie et al., 2021], others posit that small-batch SGD finds “flat
minima” with low non-uniformity, which in turn boosts generalization [Keskar
et al., 2017, Wu et al., 2018, Simsekli et al., 2019]. Meanwhile, some works
credit the implicit regularization of quantities such as loss gradient norm,
the Jacobian norm (i.e., the network output-to-weights gradient norm), and the
Fisher Information Matrix [Geiping et al., 2022, Barrett and Dherin, 2020,
Smith et al., 2021, Lee et al., 2022, Jastrzebski et al., 2020].
Recent works have shown that one can recover SGD generalization performance by
training on a modified loss function that regularizes large loss gradients
[Geiping et al., 2022, Barrett and Dherin, 2020, Smith et al., 2021]. While
Smith et al. [2021] and Barrett and Dherin [2020] expect that training on this
modified loss function with large micro-batch sizes will be unable to recover
SGD generalization performance, we empirically verify this is the case.
To our knowledge, we are the first to introduce the “micro-batch” terminology
to denote the component disjoint sub-batches used in accumulated mini-batch
SGD. This choice was made to avoid overloading the term “mini-batch” and thus
clarify the work done by Smith et al. [2021] and Barrett and Dherin [2020].
Note that here and for the rest of this paper, we use _Large-Batch_ SGD as an
approximation for full-batch GD due to the computational constraints of using
the full training set on each update. We emphasize that throughout this paper,
micro-batches are not meant as any sort of “alternative” to mini-batches, as
they are purely an implementation feature of gradient accumulation-based
large-batch SGD. We additionally leverage the work done by Agarwal et al.
[2020], who propose the idea of _grafting_ as a meta-optimization algorithm,
though in the paper their focus mostly rests on grafting adaptive optimization
algorithms together, not plain mini-batch SGD.
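The relation between a large batch and its component disjoint micro-batches can be sketched with a hypothetical one-parameter quadratic model (the data and loss below are illustrative only, not our experimental setup):

```python
def grad_on_batch(batch, theta):
    """Toy per-batch gradient of L(theta) = mean of 0.5*(theta*x - y)^2,
    i.e. g = mean over (x, y) of (theta*x - y) * x."""
    return sum((theta * x - y) * x for x, y in batch) / len(batch)

def accumulated_gradient(data, theta, micro_batch_size):
    """Split the large batch into disjoint micro-batches and average
    their gradients (gradient accumulation)."""
    micro_batches = [data[i:i + micro_batch_size]
                     for i in range(0, len(data), micro_batch_size)]
    grads = [grad_on_batch(m, theta) for m in micro_batches]
    return sum(grads) / len(grads), grads

data = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.0), (1.5, 3.0)]
g_acc, micro_grads = accumulated_gradient(data, theta=0.5, micro_batch_size=2)
g_full = grad_on_batch(data, theta=0.5)
print(g_acc, g_full)  # equal when the micro-batches have equal size
```

With equal-sized micro-batches, the accumulated update equals the full large-batch gradient; the regularizers in Sections 2.1–2.3 differ precisely in the extra per-micro-batch terms they add on top of this base loss.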
As a whole, our paper is situated as a comparative analysis of multiple
proposed regularization mechanisms [Geiping et al., 2022, Barrett and Dherin,
2020, Smith et al., 2021, Lee et al., 2022, Jastrzebski et al., 2020] in a
side-by-side empirical context, with additional ablations over how minor
design choices may affect the efficacy of these proposed methods to close the
generalization gap. We now discuss various implicit and explicit
regularization mechanisms in more depth.
Setup and Notation We primarily consider the case of a softmax classifier
$f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{C}$ (where $C$ is the number of
classes) parameterized by some deep neural network with parameters
$\boldsymbol{\theta}$. We let $\ell(\mathbf{x},y;\boldsymbol{\theta})$ denote
the standard cross entropy loss for example $\mathbf{x}$ and label $y$, and
let
$\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})=\frac{1}{|\mathcal{B}|}\sum_{(\mathbf{x},y)\in\mathcal{B}}\ell(\mathbf{x},y;\boldsymbol{\theta})$
denote the average loss over some batch $\mathcal{B}$. Note that throughout
this paper, the terms “batch” and “mini-batch” are used interchangeably to
refer to $\mathcal{B}$.
### 2.1 Average Micro-batch Gradient Norm Regularization
As proposed by Smith et al. [2021], we attempt to understand the
generalization behavior of mini-batch SGD by how it implicitly regularizes the
norm of the micro-batch gradient,
$\|\nabla\mathcal{L}_{M}(\boldsymbol{\theta})\|$ for some micro-batch
$M\subseteq\mathcal{B}$. In large-batch SGD, we accomplish this through
_gradient accumulation_ (i.e. accumulating the gradients of many small-batches
to generate the large-batch update), and thus can add an explicit regularizer
(described in Geiping et al. [2022]) that penalizes the _average_ micro-batch
norm. Formally, for some large-batch $\mathcal{B}$ and component disjoint
micro-batches $M\subseteq\mathcal{B}$, let
$\nabla_{\boldsymbol{\theta}}\mathcal{L}_{M}(\boldsymbol{\theta})=\frac{1}{|M|}\sum_{(\mathbf{x},y)\in
M}\nabla_{\boldsymbol{\theta}}\ell(\mathbf{x},y;\boldsymbol{\theta})$. The new
loss function is:
$\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})+\lambda\frac{|M|}{|\mathcal{B}|}\sum_{M\in\mathcal{B}}\|\nabla_{\boldsymbol{\theta}}\mathcal{L}_{M}(\boldsymbol{\theta})\|^{2}.$
(1)
While this quantity can be approximated through finite differences, it can
also be optimized directly by backpropagation using modern deep learning
packages, as we do in this paper.
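A minimal sketch of the penalized objective in Equation 1 for a hypothetical one-parameter quadratic model (illustrative only; our experiments backpropagate through the true network gradient norm):

```python
def per_example_loss(x, y, theta):
    return 0.5 * (theta * x - y) ** 2

def per_example_grad(x, y, theta):
    # analytic gradient of 0.5*(theta*x - y)^2 with respect to theta
    return (theta * x - y) * x

def regularized_loss(data, theta, micro_batch_size, lam):
    """Eq. (1): batch loss plus lam * (|M|/|B|) * sum_M ||grad_M||^2,
    where grad_M is the average gradient over each disjoint micro-batch."""
    base = sum(per_example_loss(x, y, theta) for x, y in data) / len(data)
    penalty = 0.0
    for i in range(0, len(data), micro_batch_size):
        m = data[i:i + micro_batch_size]
        g_m = sum(per_example_grad(x, y, theta) for x, y in m) / len(m)
        penalty += g_m ** 2  # squared norm (scalar parameter here)
    return base + lam * (micro_batch_size / len(data)) * penalty

data = [(1.0, 2.0), (2.0, 1.0), (0.5, 0.0), (1.5, 3.0)]
print(regularized_loss(data, theta=0.5, micro_batch_size=2, lam=0.1))
```

Setting `lam=0` recovers the unpenalized batch loss; the penalty is always non-negative.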
Note that by definition, we can decompose the regularizer term into the
product of the Jacobian of the network and the gradient of the loss with
respect to network output. Formally, for some network $f$ with $p$ parameters,
if we let $\mathbf{z}=f(\mathbf{x};\boldsymbol{\theta})\in\mathbb{R}^{C}$ be
the model output for some input $\mathbf{x}$ and denote its corresponding
label as $y$, then:
${\color[rgb]{0.0,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.0,0.0}\nabla_{\boldsymbol{\theta}}\ell(\mathbf{x},y;\boldsymbol{\theta})=(\nabla_{\boldsymbol{\theta}}\mathbf{z})(\nabla_{\mathbf{z}}\ell(\mathbf{x},y;\boldsymbol{\theta}))}$
(2)
where $\nabla_{\boldsymbol{\theta}}\mathbf{z}\in\mathbb{R}^{p\times C}$ is the
Jacobian of the network and the second term is the _loss-output_ gradient. We
state this equivalence explicitly for the comparison made in Section 2.3.
### 2.2 Average Micro-batch Fisher Trace Regularization
One noticeable artifact of Equation 1 is its implicit reliance on the true
labels $y$ to calculate the regularizer penalty. Jastrzebski et al. [2020]
shows that we can derive a similar quantity in the mini-batch SGD setting by
penalizing the trace of the _Fisher Information Matrix_ $\mathbf{F}$, which is
given by
$\text{Tr}(\mathbf{F})=\mathbb{E}_{\mathbf{x}\sim\mathcal{X},\hat{y}\sim
p_{\boldsymbol{\theta}}(y\mid\mathbf{x})}[\|\nabla_{\boldsymbol{\theta}}\ell(\mathbf{x},\hat{y};\boldsymbol{\theta})\|^{2}]$,
where $p_{\boldsymbol{\theta}}(y\mid\mathbf{x})$ is the predictive
distribution of the model at the current iteration and $\mathcal{X}$ is the
data distribution. We thus extend their work to the accumulated large-batch
regime and penalize an approximation of the _average_ Fisher trace over micro-
batches: if we let
$\widehat{\mathcal{L}}_{M}(\boldsymbol{\theta})=\frac{1}{|M|}\sum_{\mathbf{x}\in
M,\hat{y}\sim
p_{\boldsymbol{\theta}}(y\mid\mathbf{x})}\ell(\mathbf{x},\hat{y};\boldsymbol{\theta})$,
then our penalized loss is
$\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})+\lambda\frac{|M|}{|\mathcal{B}|}\sum_{M\in\mathcal{B}}\|\nabla_{\boldsymbol{\theta}}\widehat{\mathcal{L}}_{M}(\boldsymbol{\theta})\|^{2}.$
(3)
The only difference between the expressions in Equation 1 and Equation 3 is
that the latter now uses labels sampled from the _predictive_ distribution,
rather than the true labels, to calculate the regularizer term. As with
Equation 1, we can directly backpropagate using this term in our loss
equation.
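The only new ingredient in Equation 3, sampling $\hat{y}$ from the predictive distribution $p_{\boldsymbol{\theta}}(y\mid\mathbf{x})$, can be sketched as follows (the logits below are illustrative):

```python
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sample_label(logits, rng):
    """Draw y_hat from the model's predictive distribution p_theta(y|x),
    the only change Eq. (3) makes relative to Eq. (1)."""
    p = softmax(logits)
    r, acc = rng.random(), 0.0
    for k, pk in enumerate(p):
        acc += pk
        if r < acc:
            return k
    return len(p) - 1

rng = random.Random(0)
logits = [2.0, 0.0, -1.0]
draws = [sample_label(logits, rng) for _ in range(10000)]
freq0 = draws.count(0) / len(draws)
print(f"p(class 0) = {softmax(logits)[0]:.3f}, empirical = {freq0:.3f}")
```

The sampled labels replace the true labels inside the gradient-norm penalty, which is why this term approximates the Fisher trace.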
As in Equation 2, we can decompose the regularizer term as:
${\color[rgb]{0.0,0.0,0.0}\definecolor[named]{pgfstrokecolor}{rgb}{0.0,0.0,0.0}\nabla_{\boldsymbol{\theta}}\ell(\mathbf{x},\hat{y};\boldsymbol{\theta})=(\nabla_{\boldsymbol{\theta}}\mathbf{z})(\nabla_{\mathbf{z}}\ell(\mathbf{x},\hat{y};\boldsymbol{\theta}))}$
(4)
where the second term is another loss-output gradient.
Jastrzebski et al. [2020] observes that models with poor generalization
typically show a large spike in the Fisher Trace during the early phases of
training, which they term _Catastrophic Fisher Explosion_. In Figure 1, we
show that this behavior also occurs when looking at the average Micro-Batch
gradient norm.
### 2.3 Jacobian Regularization
Given the decompositions shown in equations 2 and 4, it is unclear in either
case whether the Jacobian term is the sole source of possible generalization
benefit, or if the loss-output gradient is also needed. To disentangle this
effect, we borrow from Lee et al. [2022] and use the _average_ and _unit_
Jacobian regularization losses:
$\displaystyle\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})+\lambda\frac{|M|}{|\mathcal{B}|}\sum_{M\subseteq\mathcal{B}}\|J_{\text{avg}}(M)\|^{2},\quad$
$\displaystyle\quad J_{\text{avg}}(M)=\frac{1}{|M|}\sum_{(\mathbf{x},y)\in
M}(\nabla_{\boldsymbol{\theta}}\mathbf{z})(\frac{1}{C}\mathbbm{1}),$ (5)
$\displaystyle\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})+\lambda\frac{|M|}{|\mathcal{B}|}\sum_{M\subseteq\mathcal{B}}\|J_{\text{unit}}(M)\|^{2},\quad$
$\displaystyle\quad J_{\text{unit}}(M)=\frac{1}{|M|}\sum_{(\mathbf{x},y)\in
M}(\nabla_{\boldsymbol{\theta}}\mathbf{z})(\mathbf{u}),$ (6)
where $\mathbf{u}\in\mathbb{R}^{C}$ is randomly sampled from the unit
hypersphere (i.e. $\|\mathbf{u}\|_{2}=1$), and $\mathbf{u}$ is sampled once
per micro-batch. In words, the _average_ Jacobian case penalizes the Jacobian
with equal weighting on every class and every example, while the _unit_
Jacobian case penalizes the Jacobian with different but _random_ weighting on
each class and example. Note that the unit Jacobian penalty is an unbiased
estimator of the Frobenius norm of the Jacobian
$\|\nabla_{\boldsymbol{\theta}}\mathbf{z}\|_{F}^{2}$, which is an upper bound
on its spectral norm $\|\nabla_{\boldsymbol{\theta}}\mathbf{z}\|_{2}^{2}$ (see
Lee et al. [2022] for a more detailed theoretical analysis).
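The unbiasedness of the unit Jacobian estimator (up to a constant factor of $C$, since $\mathbb{E}[\mathbf{u}\mathbf{u}^{\top}]=\mathbf{I}/C$ for $\mathbf{u}$ uniform on the unit sphere) can be checked numerically; the toy Jacobian below is illustrative:

```python
import math
import random

def random_unit_vector(c, rng):
    # a Gaussian vector normalized to length 1 is uniform on the sphere
    v = [rng.gauss(0.0, 1.0) for _ in range(c)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def matvec_sqnorm(J, u):
    """Squared norm of the matrix-vector product J @ u."""
    return sum(sum(row[k] * u[k] for k in range(len(u))) ** 2 for row in J)

rng = random.Random(0)
J = [[1.0, 2.0], [0.0, 3.0], [-1.0, 0.5]]   # toy 3x2 "Jacobian", C = 2
C = 2
fro_sq = sum(x * x for row in J for x in row)
est = sum(C * matvec_sqnorm(J, random_unit_vector(C, rng))
          for _ in range(20000)) / 20000
print(f"||J||_F^2 = {fro_sq:.3f}, MC estimate = {est:.3f}")
```

The Monte Carlo average of $C\,\|J\mathbf{u}\|^{2}$ converges to $\|J\|_{F}^{2}$, mirroring the unit Jacobian penalty's relation to the Frobenius norm.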
## 3 Explicit Regularization And Gradient Norm Trajectory
These aforementioned explicit regularization mechanisms have previously been
investigated in limited empirical settings. To the best of our knowledge,
Jastrzebski et al. [2020] is the only work that has directly compared some of
these regularization mechanisms, but only did so in the context of improving
_small-batch_ performance. Like our work, Geiping et al. [2022] is centered on
the small-to-large batch generalization gap, but they do not focus _solely_ on
the explicit regularization they propose and do not include any analysis of
the micro-batch gradient norm behavior during training. In this work, we
investigate (i) how these regularizers affect generalization for an array of
benchmarks and (ii) how such performance may correlate with the _evolution_ of
the micro-batch gradient norm during training.
### 3.1 Experimental Setup
We first focus our experiments on the case of using a ResNet-18 [He et al.,
2015], with standard initialization and batch normalization, on the CIFAR10,
CIFAR100, Tiny-ImageNet, and SVHN image classification benchmarks [Krizhevsky,
2009, Le and Yang, 2015, Netzer et al., 2011]. Additional experiments on
different architectures are detailed in Appendix A.1. Besides our small-batch
($\mathcal{B}=128$) and large-batch ($\mathcal{B}=5120$) SGD baselines, we
also train the networks in the large-batch regime using (i) average Micro-
batch Gradient Norm Regularization (GN); (ii) average Micro-batch Fisher Trace
Regularization (FT); and (iii) _average_ and _unit_ Micro-batch Jacobian
Regularizations (AJ and UJ). Note that for all the regularized experiments, we
use a component micro-batch size equal to the small-batch size (i.e. 128). In
order to compare the _best possible_ performance within each experimental
regime, we tune the optimal learning rate $\eta$ and regularization
parameter $\lambda$ independently for each regime. Additional
experimental details can be found in Appendix A.5.
Table 1: ResNet18 Test Performance for Regularizer Penalties. Values shown are average test accuracies across two to three different initializations per experiment, with corresponding confidence intervals.

Experiment | CIFAR10 | CIFAR100 | Tiny-ImageNet | SVHN
---|---|---|---|---
SB SGD | 92.33 ($\pm 0.10$) | 71.01 ($\pm 0.27$) | 39.64 ($\pm 0.18$) | 93.69 ($\pm 0.12$)
LB SGD | 90.00 ($\pm 0.11$) | 66.45 ($\pm 0.29$) | 27.71 ($\pm 0.09$) | 90.37 ($\pm 0.33$)
GN | 91.98 ($\pm 0.03$) | 70.22 ($\pm 0.27$) | 37.78 ($\pm 0.07$) | 92.77 ($\pm 0.01$)
FT | 91.79 ($\pm 0.05$) | 71.19 ($\pm 0.16$) | 40.25 ($\pm 0.02$) | 93.72 ($\pm 0.16$)
AJ | 90.41 ($\pm 0.01$) | 65.95 ($\pm 0.33$) | 22.86 ($\pm 0.95$) | 91.76 ($\pm 0.11$)
UJ | 90.46 ($\pm 0.20$) | 66.41 ($\pm 0.06$) | 26.07 ($\pm 0.54$) | 92.08 ($\pm 0.01$)
### 3.2 Results
(i) Average micro-batch Gradient Norm and average Fisher trace Regularization
recover SGD generalization. For CIFAR100, Tiny-ImageNet, and SVHN, we find that
we can fully recover small-batch SGD generalization performance by penalizing
the average micro-batch Fisher trace and nearly recover performance by
penalizing the average micro-batch gradient norm (with an optimally tuned
regularization parameter $\lambda$, see Figure 1 and Table 1). In CIFAR10,
neither penalizing the gradient norm nor the Fisher trace _completely_
recovers small-batch SGD performance; both come within $\approx 0.3\%$
and $\approx 0.4\%$ (respectively) of the small-batch SGD performance and
significantly improve over large-batch SGD.
We additionally find that using the micro-batch gradient norm leads to
slightly faster per-iteration convergence but less stable training (as noted
by the tendency for the model to exhibit random drops in performance), while
using the Fisher trace leads to slightly slower per-iteration convergence but
much more stable training (see Figure 1). This behavior may be due to the
Fisher trace’s ability to more reliably mimic the small-batch SGD micro-batch
gradient norm behavior _throughout_ training, whereas penalizing the gradient
norm effectively curbs the initial explosion but collapses to much smaller
norm values as training progresses.
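To make the Fisher trace concrete: for a linear softmax model the trace has a closed form, which can be checked against the Monte-Carlo estimator that samples labels from the model's own output distribution (the estimator typically used in practice). The model, sizes, and seed below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
C, d = 5, 7                                  # classes, input dim (illustrative)
W = rng.normal(scale=0.3, size=(C, d))
x = rng.normal(size=d)

z = W @ x
p = np.exp(z - z.max()); p /= p.sum()        # softmax probabilities

# For a linear softmax model, grad_W log p(y|x) = (e_y - p) x^T, so the Fisher trace
# E_{y~p} ||grad||_F^2 has the closed form ||x||^2 (1 - ||p||^2).
trace_exact = (x @ x) * (1.0 - p @ p)

# Monte-Carlo estimator: sample labels from the model and average squared grad norms,
# using ||e_y - p||^2 = 1 - 2 p_y + ||p||^2 to vectorize the computation.
draws = rng.choice(C, size=50_000, p=p)
sq_norms = (x @ x) * (1.0 - 2.0 * p[draws] + p @ p)
trace_mc = sq_norms.mean()
```

The closed form makes the stabilizing intuition visible: the trace shrinks as the model's output distribution sharpens ($\|p\|^2 \to 1$), which is one way the penalty counteracts the early-training explosion.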
(ii) Average and Unit Jacobian regularizations do not recover SGD
generalization. Observe in Table 1 that we are unable to match SGD
generalization performance with either Jacobian regularization. In Section 2
we showed that each regularization method can be viewed as penalizing the norm
of the Jacobian matrix-vector product with _some_ $C$-dimensional vector.
Crucially, both the gradient norm and Fisher trace regularizers use some form
of loss-output gradient, which is data-dependent and has no constraint on the
weighting of each class, while both Jacobian regularizers use data-independent
and comparatively simpler vectors. Given the noticeable difference in
generalization performance between the regularization methods that weight the
Jacobian with the loss-output gradient and those that do not, we conjecture that
the loss-output gradient may be crucial to applying the beneficial
regularization effect itself and/or stabilizing the training procedure.
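The contrast can be seen on a tiny model: below we compute a numeric output Jacobian and compare the penalty obtained by weighting it with the cross-entropy loss-output gradient (data-dependent, as in the gradient-norm penalty) against a fixed data-independent unit vector, which we use as a stand-in for the Jacobian-style weightings; the exact vectors used by each method are defined in Section 2, so treat the specific choices, sizes, and seed here as illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, h, C = 4, 6, 3                            # input dim, hidden width, classes (illustrative)
P = h * d + C * h                            # total parameter count
theta = rng.normal(scale=0.5, size=P)
x = rng.normal(size=d)
y = 1                                        # "true" label for the loss-output gradient

def logits(t):
    W1, W2 = t[:h * d].reshape(h, d), t[h * d:].reshape(C, h)
    return W2 @ np.tanh(W1 @ x)              # a tiny one-hidden-layer network

# Numeric output Jacobian J = dz/dtheta (C x P); forward differences suffice for a sketch
eps, z0 = 1e-6, logits(theta)
J = np.zeros((C, P))
for i in range(P):
    e = np.zeros(P); e[i] = eps
    J[:, i] = (logits(theta + e) - z0) / eps

p = np.exp(z0 - z0.max()); p /= p.sum()
v_loss = p - np.eye(C)[y]                    # cross-entropy loss-output gradient (data-dependent)
v_unit = np.ones(C) / np.sqrt(C)             # a fixed, data-independent weighting

pen_gn = np.sum((J.T @ v_loss) ** 2)         # gradient-norm-style penalty: ||J^T v_loss||^2
pen_uj = np.sum((J.T @ v_unit) ** 2)         # unit-style penalty: ||J^T v_unit||^2
```

By the chain rule, $J^\top(p - e_y)$ is exactly the loss gradient, so `pen_gn` coincides with the squared loss gradient norm, while `pen_uj` weights every class identically regardless of the data.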
## 4 Shortcomings and Extensions of Gradient Norm Regularization
### 4.1 Generalization Failure at Large Micro-Batch Sizes
In both successful regularization regimes, namely the average micro-batch
gradient norm and average Fisher trace regularizers, there is an implicit
hyperparameter: the size of the micro-batch used to calculate the
regularization term. Note that this hyperparameter is a practical artifact of
modern GPU memory limits, as efficiently calculating higher-order derivatives
for large batch sizes is not feasible in standard autodifferentiation
packages. Consequently, gradient accumulation (and the use of the average
micro-batch regularizer, rather than taking the norm over the entire batch)
must be used on most standard GPUs (more detailed hardware specifications can
be found in Appendix A.5).
This restriction, however, may actually be beneficial, as Geiping et al.
[2022], Barrett and Dherin [2020], Smith et al. [2021] have noted that they
expect the benefits of gradient norm regularizers to break down when the
micro-batch size becomes too large. To test this hypothesis, we return to the
ResNet-18 in CIFAR100 and Tiny-ImageNet settings and increase the micro-batch
size to as large as we could reasonably fit on a single GPU at $|M|=2560$ in
both the gradient norm and Fisher trace experiments. Additionally, we run
experiments using a VGG11 [Simonyan and Zisserman, 2015] on CIFAR10,
interpolating the micro-batch size from the small to large regimes. In both
settings, we separately tune the learning rate $\eta$ and regularization
coefficient $\lambda$ in each experiment to find the best possible
generalization performance in the large micro-batch regimes.
Table 2: Test Performance for ResNet-18 with Increased Micro-Batch Size. Small-batch SGD performances: CIFAR100 = $71.01$, Tiny-ImageNet = $39.64$.

Dataset | GN ($|M|=128$) | FT ($|M|=128$) | GN ($|M|=2560$) | FT ($|M|=2560$)
---|---|---|---|---
CIFAR100 | 70.22 ($\pm 0.27$) | 71.19 ($\pm 0.16$) | 64.23 ($\pm 0.49$) | 65.44 ($\pm 0.76$)
Tiny-ImageNet | 37.78 ($\pm 0.07$) | 40.25 ($\pm 0.02$) | 31.96 ($\pm 0.56$) | 37.71 ($\pm 0.31$)
Figure 2: Average Micro-batch Gradient Norm for varying micro-batch sizes. In all experimental regimes, increasing the micro-batch size leads to a worse reconstruction of the SGD average micro-batch gradient norm behavior, especially in early training.

Table 3: Test Performance for VGG11 (no batch-normalization) in CIFAR10 with Increased Micro-Batch Size

SB SGD | LB SGD | GN ($|M|=100$) | GN ($|M|=1000$) | GN ($|M|=2500$)
---|---|---|---|---
78.19 | 73.90 | 76.89 ($\pm 0.72$) | 75.19 ($\pm 0.10$) | 75.11 ($\pm 0.29$)
Results. The hypotheses of Geiping et al. [2022], Smith et al. [2021], and
Barrett and Dherin [2020] hold true: as the
micro-batch size approaches the mini-batch size, both regularization
mechanisms lose the ability to recover small-batch SGD performance (see Tables
2 and 3). Additionally, we note that using large micro-batch sizes no longer
effectively mimics the average micro-batch gradient norm behavior of small-
batch SGD, thus supporting our claim that matching this quantity throughout
training is of key importance to recovering generalization performance (Figure
2).
### 4.2 Sample Micro-batch Gradient Norm Regularization
One potential practical drawback of these gradient-based regularization terms
is the relatively high computation cost needed to calculate the second-order
gradients for every component micro-batch. Instead of penalizing the average
micro-batch gradient norm, we can penalize a single micro-batch gradient norm. For
some large batch $\mathcal{B}$ and fixed sample micro-batch $S$ from batch
$\mathcal{B}$, we define the modified loss function
$\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta})+\lambda\|\nabla_{\boldsymbol{\theta}}\mathcal{L}_{S}(\boldsymbol{\theta})\|^{2}.$
(7)
Figure 3: Explicitly regularizing the sample loss gradient norm recovers SGD
test accuracy
Results. In Figure 3, we plot the final test accuracy (left column) and the
average gradient norm (right column) as a function of $\lambda$. We observe
that both a larger $\lambda$ and a smaller micro-batch size $|S|$ boost test
accuracy. Furthermore, we find that with the “optimal” $\lambda$ and micro-
batch size $|S|$, the final test accuracy for sample micro-batch gradient norm
regularization is close to (and sometimes better than) the final test accuracy
for SGD. Just as we observed with the _average_ Micro-batch Gradient Norm
regularization, generalization benefits diminish as the sample micro-batch
size approaches the mini-batch size.
## 5 Is mimicking SGD Gradient Norm Behavior necessary for generalization?
As seen in Figure 1, the trajectory of the average micro-batch gradient norm
during training, and its similarity to that of small-batch SGD especially in
the early stages of training, is strongly correlated with generalization
performance. Furthermore, we have observed that models with _poor_
generalization performance tend to exhibit the characteristic “explosion”
during the early phase of training and quickly plummet to average micro-batch
gradient norm values much smaller than seen in small-batch SGD. That being
said, it is not immediately clear whether recreating the micro-batch norm
trajectory of small-batch SGD is _necessary_ for ensuring good generalization
performance (i.e. whether good generalization directly implies gradient norm
behavior similar to SGD).
To test this hypothesis, we empirically evaluate an orthogonal vein of
optimization methods that do not explicitly regularize the micro-batch
gradient norm during training, asking whether they close the small-to-large
batch generalization gap and whether they too mimic the average micro-batch
norm trajectory of small-batch SGD.
### 5.1 External and Iterative Grafting and Normalized Gradient Descent
Inspired by the work of Agarwal et al. [2020], we propose using _gradient
grafting_ in order to control the loss gradient norm behavior during training.
Formally, for any two different optimization algorithms
$\mathcal{M},\mathcal{D}$, the grafted update rule is:
$g_{\mathcal{M}}=\mathcal{M}(\boldsymbol{\theta}_{k}),\quad\quad g_{\mathcal{D}}=\mathcal{D}(\boldsymbol{\theta}_{k}),\qquad\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k}-\|g_{\mathcal{M}}\|\frac{g_{\mathcal{D}}}{\|g_{\mathcal{D}}\|}$
(8)
In this sense, $\mathcal{M}$ controls the _magnitude_ of the update step and
$\mathcal{D}$ controls the _direction_. We first propose Iterative Grafting,
wherein
$\mathcal{M}(\boldsymbol{\theta}_{k})=\eta\nabla\mathcal{L}_{M}(\boldsymbol{\theta}_{k})$
and
$\mathcal{D}(\boldsymbol{\theta}_{k})=\nabla\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta}_{k})$,
where $M\in\mathcal{B}$ is sampled uniformly from the component micro-batches
at every update. In words, at every update step we take the large batch
gradient, normalize it, and then rescale the gradient by the norm of one of
the component micro-batch gradients.
Additionally, we propose External Grafting, where
$\mathcal{M}(\boldsymbol{\theta}_{k})=\eta\nabla\mathcal{L}_{M}(\boldsymbol{\theta}_{k^{\prime}})$
and
$\mathcal{D}(\boldsymbol{\theta}_{k})=\nabla\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta}_{k})$.
Here, $\nabla\mathcal{L}_{M}(\boldsymbol{\theta}_{k^{\prime}})$ denotes the
micro-batch gradient at the corresponding step of a _separate small-batch SGD
training run_. We propose this experiment to make a comparison with the Iterative
Grafting case, since here the implicit step length schedule is independent of
the current run, while with Iterative grafting the schedule depends upon the
current training dynamics.
Aside from grafting algorithms, which define the implicit step length schedule
at every step, we also consider the situation where the step length is fixed
throughout training through normalized gradient descent (NGD) [Hazan et al.,
2015], wherein $\mathcal{M}(\boldsymbol{\theta}_{k})=\eta$ and
$\mathcal{D}(\boldsymbol{\theta}_{k})=\eta\nabla\mathcal{L}_{\mathcal{B}}(\boldsymbol{\theta}_{k})$.
Table 4: Test Performance for Grafting / NGD Experiments

Dataset | Model | SB SGD | LB SGD | EG | IG | NGD
---|---|---|---|---|---|---
CIFAR10 | ResNet18 | 92.33 | 89.99 | 92.12 | 92.16 | 92.10
CIFAR10 | VGG16 w/Batch-Norm | 89.56 | 86.97 | 88.65 | 89.06 | 89.39
CIFAR100 | ResNet18 | 71.21 | 66.17 | 68.3 | 68.4 | 66.83
CIFAR100 | VGG16 w/Batch-Norm | 64.26 | 55.94 | 59.71 | 63.48 | 58.05
Figure 4: Average Micro-batch Gradient Norm in Grafting Experiments for
ResNet-18 (Plots smoothed for clarity). In both scenarios, irrespective of
generalization performance, the grafting experiments do not mimic the SGD
average micro-batch gradient norm behavior.
Results. We find that both forms of grafting and NGD can recover the
generalization performance of SGD _in some model-dataset combinations_ (see
Table 4). Namely, though grafting / NGD seems to work quite well in CIFAR10,
no amount of hyperparameter tuning was able to recover the SGD performance for
either model in CIFAR100. That being said, in the CIFAR10 case we see (in
Figure 4) that the grafting experiments (and NGD, not pictured) _do not_
replicate the average micro-batch gradient norm behavior of small-batch
SGD despite sometimes replicating its performance. This gives us solid
empirical evidence that while controlling the average micro-batch gradient norm
behavior through explicit regularization may aid generalization, it is not
the only mechanism by which large-batch SGD can recover performance.
### 5.2 Wider Implications
The stark disparity in performance between the CIFAR10 and CIFAR100 benchmarks
is of key importance. These differences may be explained by the much larger
disparity between the mid-stage average micro-batch gradient norm behavior in
the CIFAR100 case than in the CIFAR10 case (see Figure 4). This situation
highlights a possible cultural issue within the deep learning community: there
is a concerning trend of papers in the deep learning field that cite desired
performance on _CIFAR10_ , and no harder datasets, as empirical justification
for any posed theoretical results [Orvieto et al., 2022, Geiping et al., 2022,
Agarwal et al., 2020, Smith et al., 2021, Barrett and Dherin, 2020, Cheng et
al., 2020]. Given the continued advancement of state-of-the-art deep learning
models, we argue that it is imperative that baselines like CIFAR100 and
ImageNet are adopted as the main standard for empirical verification, so that
possibly non-generalizable results (as the grafting / NGD results would have
been had we stopped at CIFAR10) do not fall through the cracks in the larger
community (see Appendix A.3 for more information).
## 6 Discussion & Conclusion
In this paper, we provide a holistic account of how the proposed
regularization mechanisms [Geiping et al., 2022, Barrett and Dherin, 2020,
Smith et al., 2021, Lee et al., 2022, Jastrzebski et al., 2020] compare to
each other in performance and gradient norm trajectory, and additionally show
the limitations of this analytical paradigm for explaining the root cause of
generalization. Our results regarding the relatively poor performance of
the Jacobian-based regularizations somewhat conflict with the results of Lee
et al. [2022], which shows positive results on using the unit Jacobian
regularization with respect to improving performance _within the same batch-
size regime_. We attribute this difference to the fact that Lee et al. [2022]
is not concerned with cases where the small-to-large batch generalization gap
exists, which is our main focus.
In light of this prior work, more research should be done to disentangle the
exact effect that implicitly regularizing the loss-output gradient has on
generalization performance. Next, given the success of average micro-batch
gradient norm and average micro-batch Fisher trace regularization (especially
with small micro-batches), future work should leverage these regularization
mechanisms to investigate the possibility of ameliorating generalization,
while improving time efficiency, by taking advantage of high resource,
parallelizable settings. We also show that experimental findings on CIFAR10
may no longer hold in CIFAR100, which sheds light on a wider implication for
the research community. Namely, we urge researchers to adopt the practice of
evaluating empirical hypotheses on a more widespread, complex set of
benchmarks.
We acknowledge that performance in each experiment could possibly be improved
by progressively finer hyperparameter tuning, though we are confident that our
core results would continue to hold in such situations given the extensive
hyperparameter searches performed for each experiment. As a whole, the present
research helps to shed light on the mechanisms behind SGD’s generalization
properties through implicit regularization, and offers robust fixes to the
generalization issue at high batch-sizes.
## References
* Agarwal et al. [2020] Naman Agarwal, Rohan Anil, Elad Hazan, Tomer Koren, and Cyril Zhang. Disentangling adaptive gradient methods from learning rates. _CoRR_ , abs/2002.11803, 2020. URL https://arxiv.org/abs/2002.11803.
* Barrett and Dherin [2020] David G. T. Barrett and Benoit Dherin. Implicit gradient regularization. _CoRR_ , abs/2009.11162, 2020. URL https://arxiv.org/abs/2009.11162.
* Cheng et al. [2020] Xiang Cheng, Dong Yin, Peter L. Bartlett, and Michael I. Jordan. Stochastic gradient and langevin processes. In _International Conference on Machine Learning (ICML)_ , 2020.
* Geiping et al. [2022] Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, and Tom Goldstein. Stochastic training is not necessary for generalization. In _International Conference on Learning Representations (ICLR)_ , 2022.
* Goyal et al. [2017] Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. _CoRR_ , abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.
* Hazan et al. [2015] Elad Hazan, Kfir Y. Levy, and Shai Shalev-Shwartz. Beyond convexity: Stochastic quasi-convex optimization, 2015. URL https://arxiv.org/abs/1507.02030.
* He et al. [2019] Fengxiang He, Tongliang Liu, and Dacheng Tao. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019.
* He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015. URL https://arxiv.org/abs/1512.03385.
* Jastrzebski et al. [2018] Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos J. Storkey. Three factors influencing minima in sgd. In _International Conference on Artificial Neural Networks (ICANN)_ , 2018.
* Jastrzebski et al. [2020] Stanislaw Jastrzebski, Devansh Arpit, Oliver Åstrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof J. Geras. Catastrophic fisher explosion: Early phase fisher matrix impacts generalization. In _International Conference on Machine Learning (ICML)_ , 2020.
* Keskar et al. [2017] Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In _International Conference on Learning Representations (ICLR)_ , 2017.
* Krizhevsky [2009] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.
* Le and Yang [2015] Ya Le and Xuan S. Yang. Tiny imagenet visual recognition challenge, 2015.
* Lee et al. [2022] Sungyoon Lee, Jinseong Park, and Jaewook Lee. Implicit jacobian regularization weighted with impurity of probability output, 2022. URL https://openreview.net/forum?id=RQ3xUXjZWMO.
* Li et al. [2019] Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019.
* Loshchilov and Hutter [2016] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv:1608.03983_ , 2016.
* Mandt et al. [2017] Stephan Mandt, Matthew D. Hoffman, and David M. Blei. Stochastic gradient descent as approximate bayesian inference, 2017. URL https://arxiv.org/abs/1704.04289.
* Masters and Luschi [2018] Dominic Masters and Carlo Luschi. Revisiting small batch training for deep neural networks. _CoRR_ , abs/1804.07612, 2018. URL http://arxiv.org/abs/1804.07612.
* Mori and Ueda [2020] Takashi Mori and Masahito Ueda. Improved generalization by noise enhancement, 2020. URL https://arxiv.org/abs/2009.13094.
* Netzer et al. [2011] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2011.
* Orvieto et al. [2022] Antonio Orvieto, Hans Kersting, Frank Proske, Francis Bach, and Aurelien Lucchi. Anticorrelated noise injection for improved generalization, 2022. URL https://arxiv.org/abs/2202.02831.
* Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In _International Conference on Learning Representations (ICLR)_ , 2015.
* Simsekli et al. [2019] Umut Simsekli, Levent Sagun, and Mert Gurbuzbalaban. A tail-index analysis of stochastic gradient noise in deep neural networks. In _International Conference on Machine Learning (ICML)_ , 2019.
* Smith et al. [2021] Samuel L. Smith, Benoit Dherin, David G. T. Barrett, and Soham De. On the origin of implicit regularization in stochastic gradient descent. In _International Conference on Learning Representations (ICLR)_ , 2021.
* Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In _International conference on machine learning_ , pages 1139–1147. PMLR, 2013.
* Wen et al. [2020] Yeming Wen, Kevin Luk, Maxime Gazeau, Guodong Zhang, Harris Chan, and Jimmy Ba. An empirical study of large-batch stochastic gradient descent with structured covariance noise. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_ , 2020.
* Wu et al. [2020] Jingfeng Wu, Wenqing Hu, Haoyi Xiong, Jun Huan, and Zhanxing Zhu. The multiplicative noise in stochastic gradient descent: Data-dependent regularization, continuous and discrete approximation. In _International Conference on Machine Learning (ICML)_ , 2020.
* Wu et al. [2018] Lei Wu, Chao Ma, and Weinan E. How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective. In _Advances in Neural Information Processing Systems (NeurIPS)_ , 2018.
* Xie et al. [2021] Zeke Xie, Li Yuan, Zhanxing Zhu, and Masashi Sugiyama. Positive-negative momentum: Manipulating stochastic gradient noise to improve generalization. In _International Conference on Machine Learning (ICML)_ , 2021.
* Yang et al. [2022] Liu Yang, Jifan Zhang, Joseph Shenouda, Dimitris Papailiopoulos, Kangwook Lee, and Robert D Nowak. A better way to decay: Proximal gradient training algorithms for neural nets. _arXiv preprint arXiv:2210.03069_ , 2022.
* Zhu et al. [2019] Zhanxing Zhu, Jingfeng Wu, Bing Yu, Lei Wu, and Jinwen Ma. The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. In _International Conference on Machine Learning (ICML)_ , 2019.
## Appendix A Appendix
### A.1 Additional Regularization Experiments
Aside from the main results using a ResNet-18, we additionally ran the
regularization experiments with a VGG11 [Simonyan and Zisserman, 2015] without
batch normalization on CIFAR10. Results are shown below:
Table 5: VGG11 (no batch-normalization) Test Performance for Regularizer Penalties

Dataset | SB SGD | LB SGD | GN | FT | AJ | UJ
---|---|---|---|---|---|---
CIFAR10 | 78.19 | 73.90 | 77.62 | 79.10 | 74.09 | N/A
Consistent with our earlier observations (see Section 3), we find that average
micro-batch gradient norm and average Fisher trace regularization nearly
recover SGD generalization performance, whereas average Jacobian
regularization does not.
### A.2 Sample Micro-batch Gradient Norm Regularization (Continued)
Figure 5: Explicitly regularizing the average loss gradient norm over a sample
recovers SGD test accuracy
We see that switching from using the average micro-batch gradient norm to
using a single sample micro-batch gradient norm as a regularizer does not
impact final generalization performance. However, we do see in the CIFAR100
case that while using the sample-based regularizer is faster in terms of per
iteration wall-clock time, the average-based regularizer converges to an
optimal performance in considerably fewer gradient update steps (Figure 5).
### A.3 Limitations of Anticorrelated Perturbed Gradient Descent
Orvieto et al. [2022] proposes a method for improving generalization by
injecting spherical Gaussian noise (with variance $\sigma^{2}$ as a
hyperparameter) at each gradient update step that is _anticorrelated_ between
concurrent time steps, which they term _Anti-PGD_. They empirically show that
on CIFAR10, training a ResNet18 in the large-batch regime with Anti-PGD and
then shutting off the noise (to allow for convergence) allows them to beat
small-batch SGD generalization performance.
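As a sanity check on what "anticorrelated" means here (on our reading of Orvieto et al. [2022], the perturbation injected at step $k$ is $\xi_{k+1}-\xi_k$ with i.i.d. $\xi_k\sim\mathcal{N}(0,\sigma^2 I)$; treat the exact formulation as an assumption), consecutive injected perturbations then have covariance $-\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2, steps = 0.01, 100_000

# i.i.d. noise xi_k; Anti-PGD injects the increment xi_{k+1} - xi_k at each update,
# so consecutive injected perturbations are anticorrelated rather than independent.
xi = rng.normal(scale=np.sqrt(sigma2), size=steps + 1)
perturb = np.diff(xi)                          # xi_{k+1} - xi_k

var = perturb.var()                            # theory: 2 * sigma^2
cov = np.mean(perturb[:-1] * perturb[1:])      # theory: -sigma^2
```

The negative lag-1 covariance is what distinguishes Anti-PGD from ordinary perturbed gradient descent, whose injected noise is independent across steps.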
However, when we extended their methodology to the CIFAR100 regime (while
removing possible confounding factors such as momentum), no hyperparameter
combination in the large-batch regime was able to recover SGD generalization
performance. This represents just one example of the possible endemic
issue described in Section 5.2. Hyperparameter combinations and final test
accuracies are shown below.
Table 6: ResNet18 w/Anti-PGD on CIFAR100 (SB SGD Test Accuracy = 71.21). No hyperparameter combination comes close to recovering SGD performance.

Learning Rate ($\eta$) | $\sigma^{2}$ | Test Accuracy
---|---|---
0.5 | 0.01 | 67.54
0.5 | 0.001 | 65.44
0.1 | 0.01 | 64.55
0.1 | 0.001 | 64.90
0.05 | 0.01 | 62.52
0.05 | 0.001 | 62.78
### A.4 Explicit Regularization with SOTA Optimization Tools
In the present work, we are concerned with understanding the generalization
performance of explicit regularization mechanisms in SGD _with no other
modifications._ However, in practice many different heuristics are used to
improve SGD, including momentum [Sutskever et al., 2013], weight decay [Yang
et al., 2022], and learning rate scheduling [Loshchilov and Hutter, 2016]. To
verify that the behavior seen throughout the paper holds in more standard
conditions, we return to the set-up of training a ResNet18 on CIFAR100, this
time with momentum, weight decay, and cosine annealed learning rate. Here, we
focus on the two successful regularization algorithms (i.e. Gradient Norm and
Fisher Trace regularization). In Table 7, we see that the behavior shown in
the main paper still holds: with a large gap between small-batch and large-
batch SGD, Fisher Trace regularization is able to recover small-batch
performance, while Gradient Norm regularization is able to beat large-batch
but not fully recover small-batch performance. Specific values for the added
hyperparameters are detailed in Appendix A.5.
Table 7: ResNet18/CIFAR100 Test Accuracy with momentum, weight decay, and cosine annealed learning rate. The relative performance of regularized training is maintained when adding additional optimization tools.

SB SGD | LB SGD | LB + GN | LB + FT
---|---|---|---
72.59 | 67.47 | 70.58 | 72.53
### A.5 Experimental Setup Details
All experiments run for the present paper were performed using the Pytorch
deep learning API, and source code can be found here:
https://github.com/ZacharyNovack/imp-regularizers-arxiv.
Values for our hyperparameters in our main experiments are detailed below:
Table 8: Learning rate ($\eta$) used in main experiments

Model/Dataset | SB SGD | LB SGD | LB + GN | LB + FT | LB + AJ | LB + UJ
---|---|---|---|---|---|---
ResNet-18/CIFAR10 | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$
ResNet-18/CIFAR100 | $\eta=0.1$ | $\eta=0.5$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$
ResNet-18/Tiny-ImageNet | $\eta=0.1$ | $\eta=0.5$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.5$ | $\eta=0.1$
ResNet-18/SVHN | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$
VGG11/CIFAR10 | $\eta=0.15$ | $\eta=0.01$ | $\eta=0.01$ | $\eta=0.01$ | $\eta=0.01$ | N/A
Table 9: Regularization strength ($\lambda$) used in main experiments

Model/Dataset | LB + GN | LB + FT | LB + AJ | LB + UJ
---|---|---|---|---
ResNet-18/CIFAR10 | $\lambda=0.01$ | $\lambda=0.01$ | $\lambda=0.001$ | $\lambda=0.001$
ResNet-18/CIFAR100 | $\lambda=0.01$ | $\lambda=0.01$ | $\lambda=5\times 10^{-5}$ | $\lambda=0.001$
ResNet-18/Tiny-ImageNet | $\lambda=0.01$ | $\lambda=0.01$ | $\lambda=1\times 10^{-5}$ | $\lambda=0.001$
ResNet-18/SVHN | $\lambda=0.01$ | $\lambda=0.01$ | $\lambda=0.0001$ | $\lambda=0.001$
VGG11/CIFAR10 | $\lambda=0.5$ | $\lambda=0.5$ | $\lambda=2\times 10^{-5}$ | N/A
Table 10: Hyperparameters for large micro-batch experiments

Model/Dataset | Experiment | Micro-batch Size | Learning Rate ($\eta$) | Regularization Strength ($\lambda$)
---|---|---|---|---
ResNet-18/CIFAR100 | LB + GN | 2560 | 0.5 | 0.0025
ResNet-18/CIFAR100 | LB + FT | 2560 | 0.1 | 0.01
ResNet-18/Tiny-ImageNet | LB + GN | 2560 | 0.5 | 0.1
ResNet-18/Tiny-ImageNet | LB + FT | 2560 | 0.1 | 0.1
VGG11/CIFAR10 | LB + GN | 1000 | 0.01 | 0.25
VGG11/CIFAR10 | LB + FT | 2500 | 0.01 | 0.25
Table 11: Hyperparameters for sample micro-batch experiments

Model/Dataset | Experiment | Micro-batch Size | Learning Rate ($\eta$) | Regularization Strength ($\lambda$)
---|---|---|---|---
VGG11/CIFAR10 | SB SGD | N/A | 0.01 | N/A
VGG11/CIFAR10 | LB + FT | 50 | 0.01 | 0.25
VGG11/CIFAR10 | LB + FT | 100 | 0.01 | 0.5
VGG11/CIFAR10 | LB + FT | 1000 | 0.01 | 0.5
VGG11/CIFAR10 | LB + FT | 2500 | 0.01 | 0.5
Table 12: Hyperparameters for Grafting Experiments

Model/Dataset | SB SGD | LB SGD | Iterative Grafting | External Grafting | NGD
---|---|---|---|---|---
ResNet-18/CIFAR10 | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.2626$
ResNet-18/CIFAR100 | $\eta=0.1$ | $\eta=0.5$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.3951$
VGG-16/CIFAR10 | $\eta=0.05$ | $\eta=0.1$ | $\eta=0.05$ | $\eta=0.05$ | $\eta=0.2388$
VGG-16/CIFAR100 | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.1$ | $\eta=0.4322$
#### ResNet-18
For all ResNet-18 experiments, we use the standard He initialization [He et
al., 2015], and the default Pytorch batch normalization initialization.
Additionally, we use the standard data augmentations for CIFAR10 and CIFAR100;
that is, random cropping, horizontal flipping, and whitening. For SVHN, we
performed only whitening on the dataset. For Tiny-ImageNet, no data
augmentations were made aside from rescaling the input images (which are
$64\times 64$) to be $32\times 32$. Additionally, for Tiny-ImageNet the sample
regularization penalties were used rather than the normal average
regularization penalties given compute constraints (see Section 4.2 for the
documented similarities between the sample and average regularizations).
All experiments were run for 50000 update iterations. In this case, all models
are trained well past the point of reaching 100% training accuracy. No weight
decay or momentum was used in _any_ of the experiments. We use a large-batch
size of 5120 for all experiments, and thus have 40 micro-batches of 128
examples each for the regularization experiments. We calculate the penalty
term at every update step, which is different from the procedure in
Jastrzebski et al. [2020], which recalculates the penalty term only every 10
update steps. For the external grafting experiments, we use the gradient norm
data from a separate run of small-batch SGD with batch size equal to the same
micro-batch size used for iterative grafting (i.e. 128). All experiments were
run on a single RTX A6000 NVidia GPU.
For the experiments with added optimization tools in Appendix A.4, we take
inspiration from Jastrzebski et al. [2020] and Geiping et al. [2022] and use
momentum $=0.9$, weight decay $=1\times 10^{-4}$, and a cosine annealing
schedule that anneals the initial learning rate to 0 every 300 epochs. This
set-up is used for all experiments in this section.
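The cosine annealing schedule used above can be sketched as follows (a standard half-cosine form; the 300-epoch period matches the set-up described here, while the specific function name is our own):

```python
import math

def cosine_annealed_lr(eta0, step, total_steps):
    """Anneal the learning rate from eta0 down to 0 over total_steps via a half cosine."""
    t = min(step, total_steps) / total_steps   # fraction of the annealing period elapsed
    return 0.5 * eta0 * (1.0 + math.cos(math.pi * t))
```

For example, with `eta0 = 0.1` the rate starts at 0.1, passes 0.05 halfway through the period, and reaches 0 at its end, after which the schedule restarts.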
#### VGG16
The set-up for the VGG16 experiments are identical to the ResNet-18
experiments, including the usage of batch normalization within the
architecture.
#### VGG11 without batch-normalization
For all VGG-11 experiments, we train the network with a fixed learning rate
(and no momentum) until we reach 99% train accuracy. Note that we do not use
any form of data augmentations. We use a small batch size of 100 and a large
batch size of 5000.
# Model-based Reconstruction for Multi-Frequency Collimated Beam Ultrasound
Systems
Abdulrahman M. Alanazi1,4, Singanallur Venkatakrishnan2, Hector Santos-
Villalobos3, Gregery T. Buzzard1, and Charles Bouman1
1Purdue University-Main Campus, West Lafayette, IN 47907.
2Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831.
3Amazon Prime Video, 410 Terry Ave N, Seattle 98109, WA.
4King Saud University (KSU), Riyadh, Saudi Arabia This manuscript has been
supported by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the
U.S. Department of Energy. G. Buzzard was partially supported by NSF
CCF-1763896, and C. Bouman was partially supported by the Showalter Trust. The
United States Government retains and the publisher, by accepting the article
for publication, acknowledges that the United States Government retains a non-
exclusive, paid-up, irrevocable, world-wide license to publish or reproduce
the published form of this manuscript, or allow others to do so, for United
States Government purposes. The Department of Energy will provide public
access to these results of federally sponsored research in accordance with the
DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
###### Abstract
Collimated beam ultrasound systems are a technology for imaging inside multi-
layered structures such as geothermal wells. These systems work by using a
collimated narrow-band ultrasound transmitter that can penetrate through
multiple layers of heterogeneous material. A series of measurements can then
be made at multiple transmit frequencies. However, commonly used
reconstruction algorithms such as Synthetic Aperture Focusing Technique (SAFT)
tend to produce poor quality reconstructions for these systems both because
they do not model collimated beam systems and they do not jointly reconstruct
the multiple frequencies.
In this paper, we propose a multi-frequency ultrasound model-based iterative
reconstruction (UMBIR) algorithm designed for multi-frequency collimated beam
ultrasound systems. The combined system targets reflective imaging of
heterogeneous, multi-layered structures. For each transmitted frequency band,
we introduce a physics-based forward model to accurately account for the
propagation of the collimated narrow-band ultrasonic beam through the multi-
layered media. We then show how the joint multi-frequency UMBIR reconstruction
can be computed by modeling the direct arrival signals, detector noise, and
incorporating a spatially varying image prior. Results using both simulated
and experimental data indicate that multi-frequency UMBIR reconstruction
yields much higher reconstruction quality than either single frequency UMBIR
or SAFT.
###### Index Terms:
Non-destructive testing (NDT), ultrasonic imaging, ultrasonic model-based
iterative reconstruction (UMBIR), multilayered objects, collimated beams,
multi-frequency.
## I Introduction
Non-destructive evaluation (NDE) of multi-layered structures that can be
accessed from only a single side is important in many applications. For
example, this imaging scenario occurs when monitoring the structural integrity
of oil and geothermal wells that lie behind layers of fluid and steel casing.
While ultrasound imaging is widely used in NDE applications, multi-layered
structures present a challenge for ultrasound systems because of the complex
propagation and reverberation of the signal through the material.
Figure LABEL:fig:MultiLayer_strucut_example illustrates an example of a
collimated beam ultrasound system [1, 2, 3, 4] that is designed to image
through multi-layered structures. The system consists of a narrow-band
collimated beam transmitter combined with an array of receivers. The system
can penetrate through heterogeneous layers because the transmitter is
collimated and the center frequency is below 100 kHz. However, since the
systems are narrow-band, they typically are operated at a few different center
frequencies, and then the data from each measurement frequency is processed
separately to image the structure.
The most popular methods to reconstruct data from ultrasound systems use a
delay-and-sum (DAS) approach because of their low computational complexity.
One such approach is the synthetic aperture focusing technique (SAFT), which
produces acceptable ultrasound images for simple objects [5]. SAFT has been
applied to single-layer [6, 7] and multi-layered structures [8, 9] but not to
collimated beam systems. Multi-layer SAFT combines DAS with techniques such as
ray-tracing [10, 11, 12] and root-mean-square velocity [13, 14] to compute the
travel time in multi-layered media. However, SAFT and its variations rely on a
simple model that often leads to artifacts such as multiple reflections and
blur. Methods to counteract these effects include [15] and [16], which use a
linear forward model for single-layer structures and a carefully constructed
sparse deconvolution approach. Finally, the conventional practice when
processing data obtained from multiple transmit frequency bands is to obtain
the reconstruction for each band separately and then visualize the results in
order to identify structures of interest.
More physically realistic inversion methods from seismology include least-
squares reverse time migration (LSRTM) [17, 18, 19] and full wave inversion
(FWI) [20]. LSRTM and FWI are iterative methods that seek the best least-
squares fit between observed and reconstructed data; a reflectivity image for
LSRTM and a velocity image for FWI. These methods have the capability to image
complex structures but rely on an iteration using a non-linear forward model,
making them computationally expensive and impractical for many imaging
applications. Finally, these methods are typically applied separately to data
from each frequency band, which does not allow the end user to obtain a single
image corresponding to the structure to be imaged.
In order to reduce the reconstruction artifacts of SAFT while maintaining
computational efficiency, model-based iterative reconstruction (MBIR) can be
used with a linear propagation model. These methods combine a forward model
for the ultrasound measurement system with a prior-model/regularizer for the
unknown structure and cast the reconstruction as a maximum a-posteriori
estimation problem. In [15, 21], the forward model is designed to handle
plane-wave imaging. Recently, the MBIR approach of [22] used a propagation
model of the ultrasound through the media and combined all the data from the
source-detector pairs to jointly reconstruct a fully 3D image. However,
current MBIR approaches have limitations when imaging through heterogeneous
materials. In a single-layer structure, the time delay can be easily computed
using Snell’s law [22, 23]. However, in structures containing two layers, the
time delay no longer has a closed form that can be easily computed using
Snell’s law [24]. When the number of layers exceeds two, some techniques such
as ray-tracing or marching methods [25, 26, 27] can be used to approximate the
computations but these are computationally complex. In summary, no existing
regularized MBIR method accounts for multi-frequency collimated beam systems
used to image structures which are behind several layers.
In this paper, we propose a MBIR algorithm for multi-frequency collimated beam
ultrasound systems in order to image structures that are behind multiple
layers of materials. Our method, which we refer to as multi-frequency
ultrasound model-based iterative reconstruction (UMBIR), can be used to
accurately reconstruct heterogeneous structures behind multiple layers of
material excited at multiple frequencies. UMBIR does this by combining a novel
physics-based forward model of multi-layered heterogeneous structures with
coherent integration of data from multiple frequencies to form a single
reconstructed cross section. In order to estimate the travel time of the
ultrasound signals through multiple media as part of the forward model, we
introduce an efficient binary search-based method. Finally, the maximum a
posteriori (MAP) estimate is then computed using iterative coordinate descent
(ICD) optimization with a spatially varying prior similar to [23]. We note
that our work builds on preliminary ideas, developed in the context of
single-band measurements, which were presented in a conference article [28]. Using
experimental and simulated data, we demonstrate that the proposed multi-layer
and multi-frequency UMBIR approach yields more accurate reconstructions with
higher spatial resolution and reduced artifacts when compared to both single
frequency UMBIR and SAFT.
## II The UMBIR Forward Model
In order to image the structure which is separated from the source by multiple
layers of known materials, we design an ultrasound model-based iterative
reconstruction (UMBIR) approach. Assuming a linear system for simplicity, we
seek to reconstruct an image $x$ using a measurement model of the form
$y=Ax+Dg+w,$ (1)
where $y\in\mathbb{R}^{MK\times 1}$ is a vector of measurements from $K$
receivers at $M$ timepoints, $A\in\mathbb{R}^{MK\times N}$ is the system
matrix, $x\in\mathbb{R}^{N\times 1}$ is the vectorized version of the desired
image with $N$ total voxels, $D\in\mathbb{R}^{MK\times K}$ is a matrix whose
columns form a basis for the possible direct arrival signals,
$g\in\mathbb{R}^{K\times 1}$ is a scaling coefficient vector for $D$, and $w$
is a Gaussian random vector with distribution $N(0,\sigma^{2}I)$. From (1), we
will be able to formulate the reconstruction as the maximum a posteriori
estimate of $x$ and $g$ given $y$. However, to do this, we will first need to
introduce an acoustic model of propagation through the multi-layered material
that we can use to compute the matrices $A$ and $D$.
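As a concrete illustration, the measurement model (1) can be simulated with small random stand-ins for $A$ and $D$; in this sketch the matrix entries and dimensions are toy placeholders (in the actual system, $A$ and $D$ are built from the propagation model developed in the following sections):

```python
# Minimal sketch of the UMBIR measurement model y = A x + D g + w.
# All matrices are random stand-ins; sizes are toy values.
import numpy as np

rng = np.random.default_rng(0)
M, K, N = 256, 15, 100                   # time samples, receivers, voxels

A = rng.standard_normal((M * K, N))      # system matrix (stand-in)
D = rng.standard_normal((M * K, K))      # direct-arrival basis (stand-in)
x = rng.standard_normal(N)               # vectorized reflectivity image
g = rng.standard_normal(K)               # direct-arrival scaling coefficients
sigma = 0.1

w = sigma * rng.standard_normal(M * K)   # Gaussian noise ~ N(0, sigma^2 I)
y = A @ x + D @ g + w                    # stacked measurements, K receivers

assert y.shape == (M * K,)
```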
### II-A Multi-Layer Acoustic Propagation Model
In this section, we introduce a model of the multi-layer acoustic propagation
based on an extension of the single-layer model used in [22]. Figure
LABEL:fig:MultiLay illustrates the problem. An acoustic signal is transmitted
from location $r^{t}_{o}$, reflected from the voxel, $v$, and then received by
one of $K$ possible microphones at location $r^{r}_{j}$. As the signal
propagates outward, it passes through $L$ different materials, each with its
own acoustic velocity, $c_{\ell}$, in $\frac{m}{s}$, and attenuation coefficient,
$\alpha_{\ell}$, in $\frac{s}{m}$. Let $T^{t}_{\ell}(v)$ and
$T^{r}_{\ell,j}(v)$ denote the outgoing and returning propagation times of
the beam through each layer of the medium. Notice that both times will be
functions of the particular voxel, $v$, and the return time will also be a
function of the particular microphone, $j$. Then we model the frequency domain
transfer function as
$G_{j}(v,f)=\lambda(v)\prod_{\ell=1}^{L}e^{-(c_{\ell}\alpha_{\ell}|f|+2\mathfrak{j}\pi
f)\left[T^{t}_{\ell}(v)+T^{r}_{\ell,j}(v)\right]}\ ,$ (2)
where $\mathfrak{j}^{2}=-1$ and
$\lambda(v)=\phi_{j}(v)\left(\prod_{\ell=2}^{L}\frac{2\zeta_{\ell}}{\zeta_{\ell-1}+\zeta_{\ell}}\right)\left(\prod_{\ell=1}^{L-1}\frac{2\zeta_{\ell}}{\zeta_{\ell}+\zeta_{\ell+1}}\right)\
,$ (3)
where $\zeta_{\ell}$ in $\frac{kg}{m^{2}s}$ is the acoustic impedance of the
$\ell^{\text{th}}$ layer and $\phi_{j}(v)$ is the scalar correction due to
beam collimation discussed in Section II-C. Note that $\lambda(v)$ in (2)
models the effect of beam collimation and of the impedance mismatch between
layers on the amplitude of the received signal: the factor $\phi_{j}(v)$
accounts for the collimation, and the two parenthesized products in (3)
account for the impedance mismatches. We can further simplify the expression
by defining two quantities
$\displaystyle\gamma_{j}(v)$
$\displaystyle=\sum_{\ell=1}^{L}c_{\ell}\alpha_{\ell}\left[T^{t}_{\ell}(v)+T^{r}_{\ell,j}(v)\right]$
(4) $\displaystyle T_{j}(v)$
$\displaystyle=\sum_{\ell=1}^{L}\left[T^{t}_{\ell}(v)+T^{r}_{\ell,j}(v)\right]\
.$ (5)
In this case, the frequency domain transfer function has the simpler form of
$G_{j}(v,f)=\lambda(v)\,e^{-(\gamma_{j}(v)\,|f|+2\mathfrak{j}\pi
f\,T_{j}(v))}\ .$ (6)
In this form, it is clear that $T_{j}(v)$ represents the round-trip group
delay to voxel $v$, and $\gamma_{j}(v)$ represents the signal dispersion. From
this, the Fourier transform of the received signal is given by
$Y_{j}(v,f)=x(v)\lambda(v)S(f)e^{-(\gamma_{j}(v)\,|f|+2\mathfrak{j}\pi
f\,T_{j}(v))}\ ,$ (7)
where $S(f)$ is the Fourier transform of the transmitted signal, and $x(v)$ is
the reflection coefficient for the voxel $v$. Then in the time-domain the
received signal is given by
$y_{j}(v,t)=x(v)\,\lambda(v)\,h(\gamma_{j}(v),t-T_{j}(v)),$ (8)
where
$h(\gamma,t)=\mathcal{F}^{-1}\left\\{S(f)e^{-\gamma|f|}\right\\}$ (9)
and $\mathcal{F}^{-1}$ is the inverse Fourier transform.
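The dispersed pulse $h(\gamma,t)$ of (9) can be computed numerically with an FFT; in the sketch below, the tone-burst transmit signal and sampling rate are illustrative assumptions, not the parameters of the actual system:

```python
import numpy as np

def dispersed_pulse(s, gamma, dt):
    """h(gamma, t) = F^{-1}{ S(f) exp(-gamma |f|) }: the transmitted pulse s
    filtered by the frequency-dependent attenuation exp(-gamma |f|)."""
    n = len(s)
    f = np.fft.fftfreq(n, d=dt)            # frequency grid in Hz
    S = np.fft.fft(s)
    return np.real(np.fft.ifft(S * np.exp(-gamma * np.abs(f))))

# Illustrative example: a 30 kHz tone burst sampled at 2 MHz
dt = 0.5e-6
t = np.arange(400) * dt
s = np.sin(2 * np.pi * 30e3 * t) * np.hanning(400)

h0 = dispersed_pulse(s, gamma=0.0, dt=dt)   # gamma = 0: pulse unchanged
h1 = dispersed_pulse(s, gamma=1e-5, dt=dt)  # dispersion attenuates the pulse
assert np.allclose(h0, s, atol=1e-9)
assert np.max(np.abs(h1)) < np.max(np.abs(s))
```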
Thus, the output at each time $t$ and each receiver $j$ can be expressed as an
inner product between the input $x(v)$ and a row of the forward model’s system
matrix, $A$. This is expressed in the following formula.
$\displaystyle y_{j}(t)$
$\displaystyle=\sum_{v}\left[h(\gamma_{j}(v),t-T_{j}(v))\,\lambda(v)\right]x(v)$
(10)
From this we see that, for the multi-layer case, we first need to compute
$T_{j}(v)$, the acoustic travel time of the signal through the layers, for
each voxel in order to obtain an expression for the received signal. This is
the subject of the next section.
### II-B Time Delay Computation
In single-layer structures, the computation of the time delays at the image
voxels is straightforward. However, in multilayered structures, the acoustic
speed varies with depth, which causes reflections and refractions that result
in a complex wave path. In order to compute the received signal of (10), we
need to know the time delays for each layer, $T^{t}_{\ell}(v)$ and
$T^{r}_{\ell,j}(v)$, which will be dependent on the path of the acoustic
signal through the multiple layers of the material. Figure LABEL:fig:MultiLay
illustrates the acoustic path as it passes through the layers of the medium.
Because each layer has different acoustic velocity, $c_{\ell}$, the signal
will be refracted as it passes between layers. Let $\theta^{t}_{\ell}$
represent the angle of outbound propagation through the $\ell^{th}$ material,
and let $\theta^{r}_{\ell,j}$ represent the angle of return propagation, as in
Figure LABEL:fig:MultiLay. Then by Snell’s law, we know that
$\theta^{t}_{\ell}=\text{sin}^{-1}\left(\text{sin}(\theta^{t}_{\ell-1})\frac{c_{\ell}}{c_{\ell-1}}\right)\
,$ (11)
and the return angles are given by
$\theta^{r}_{\ell-1}=\text{sin}^{-1}\left(\text{sin}(\theta^{r}_{\ell})\frac{c_{\ell-1}}{c_{\ell}}\right)\
.$ (12)
Consequently, if we know $\theta^{t}_{1}$ and $\theta^{r}_{L}$, then we can
use the recursions of (11) and (12) to compute the remaining angles. We denote
these two functions that represent the result of this calculation as
$\displaystyle\theta^{t}_{\ell}=f^{t}_{\ell}\left[\theta^{t}_{1}\right]\ ,\qquad\theta^{r}_{\ell}=f^{r}_{\ell}\left[\theta^{r}_{L}\right]\ .$
Let $\eta_{\ell}$ denote the thickness of the $\ell^{th}$ layer, then we can
express the vertical distance that the outbound acoustic signal travels as
$\displaystyle
Z^{t}\\!\left[\theta^{t}_{1}\right]=\sum_{\ell=1}^{L}\eta_{\ell}\text{tan}\left(f^{t}_{\ell}\left[\theta^{t}_{1}\right]\right)\
,$ (13)
and the vertical distance that the returning signal travels as
$\displaystyle
Z^{r}\\!\left[\theta^{r}_{L}\right]=\sum_{\ell=1}^{L}\eta_{\ell}\text{tan}\left(f^{r}_{\ell}\left[\theta^{r}_{L}\right]\right)\
.$ (14)
As an example, Figure 3 plots $Z^{t}(\theta^{t}_{1})$ versus $\theta^{t}_{1}$
and $Z^{r}(\theta^{r}_{3})$ versus $\theta^{r}_{3}$ for a three-layer medium
with the parameters shown in Table I. It is clear from this example that both
$Z^{t}(\theta^{t}_{1})$ and $Z^{r}(\theta^{r}_{3})$ are monotone increasing
functions of their arguments.
Figure 3: (a) The vertical distance traveled by the outgoing acoustic signal, $Z^{t}(\theta^{t}_{1})$, as a function of $\theta^{t}_{1}$. (b) The vertical distance traveled by the returning signal, $Z^{r}(\theta^{r}_{3})$, as a function of $\theta^{r}_{3}$. Notice that both functions are monotone increasing.
TABLE I: The parameters used to show the relationship between the vertical distance and propagation angle in Figure 3.
| Layer 1 | Layer 2 | Layer 3
---|---|---|---
Thickness ($\eta_{\ell}$) in m | 0.073 | 0.006 | 0.12
Acoustic velocity ($c_{\ell}$) in $\frac{m}{s}$ | 1500 | 2800 | 2620
Next, for each voxel, $v$, we must solve for the unknown angles,
$\theta^{t}_{1}(v)$, the departure angle of the outbound acoustic signal, and
$\theta^{r}_{L}(v)$, the arrival angle of the returning acoustic signal. We
can do this by solving the following two equations.
$\displaystyle Z_{v}-Z_{o}=Z^{t}\left[\theta^{t}_{1}(v)\right]\ ,\qquad Z^{r}_{j}-Z_{v}=Z^{r}\left[\theta^{r}_{L,j}(v)\right]\ ,$
where $Z_{v}-Z_{o}$ is the vertical distance between the transmitter,
$r^{t}_{o}$, and the voxel, $v$; and $Z^{r}_{j}-Z_{v}$ is the vertical
distance between the microphone, $r^{r}_{j}$, and the voxel, $v$. Since both
functions are monotone increasing functions of their arguments, these
equations can be easily solved using half interval search as described in
[29].
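A minimal sketch of this half interval (bisection) search, built on the Snell recursion (11) and the vertical distance (13); the helper names and the target distance below are our own illustrative choices:

```python
import numpy as np

def outbound_angles(theta1, c):
    # Snell recursion (11): angle in each layer from the layer-1 angle.
    angles = [theta1]
    for l in range(1, len(c)):
        angles.append(np.arcsin(np.sin(angles[-1]) * c[l] / c[l - 1]))
    return angles

def vertical_distance(theta1, c, eta):
    # Z^t[theta1] of (13) for layer thicknesses eta.
    return sum(e * np.tan(a) for e, a in zip(eta, outbound_angles(theta1, c)))

def solve_departure_angle(dz, c, eta, tol=1e-10):
    # Half-interval search for theta1 with Z^t[theta1] = dz; valid because
    # Z^t is monotone increasing. The upper bound is kept below the critical
    # angle of the fastest layer so the arcsin stays defined.
    lo, hi = 0.0, np.arcsin(min(1.0, c[0] / max(c))) * (1.0 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if vertical_distance(mid, c, eta) < dz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Three-layer medium from Table I
c = [1500.0, 2800.0, 2620.0]   # acoustic velocities in m/s
eta = [0.073, 0.006, 0.12]     # thicknesses in m
theta1 = solve_departure_angle(0.05, c, eta)
assert abs(vertical_distance(theta1, c, eta) - 0.05) < 1e-8
```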
Once the values of $\theta^{t}_{1}$ and $\theta^{r}_{L}$ are determined, then
the values of the time delay can be computed using the following two
equations.
$\displaystyle T^{t}_{\ell}(v)$ $\displaystyle=$
$\displaystyle\eta_{\ell}\frac{\sqrt{1+\tan^{2}\left(f^{t}_{\ell}[\theta^{t}_{1}(v)]\right)}}{c_{\ell}}$
(15) $\displaystyle T^{r}_{\ell,j}(v)$ $\displaystyle=$
$\displaystyle\eta_{\ell}\frac{\sqrt{1+\tan^{2}\left(f^{r}_{\ell}[\theta^{r}_{L,j}(v)]\right)}}{c_{\ell}}\
.$ (16)
This provides all the values that are needed to compute the solution to (6).
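Given the solved departure angle, the per-layer delays of (15) follow directly. The sketch below (with hypothetical helper names and the Table I medium as an example) uses the identity $\sqrt{1+\tan^{2}\theta}=1/\cos\theta$ and checks the normal-incidence case, where the delay reduces to thickness over velocity:

```python
import numpy as np

def layer_time_delays(theta1, c, eta):
    """T^t_l(v) of (15): traversal time of each layer, using the path length
    eta_l * sqrt(1 + tan^2(theta_l)) = eta_l / cos(theta_l)."""
    delays, theta = [], theta1
    for l, (cl, el) in enumerate(zip(c, eta)):
        if l > 0:                      # Snell recursion (11) for this layer
            theta = np.arcsin(np.sin(theta) * cl / c[l - 1])
        delays.append(el * np.sqrt(1.0 + np.tan(theta) ** 2) / cl)
    return delays

# Table I medium; at normal incidence the delay is thickness / velocity
c = [1500.0, 2800.0, 2620.0]
eta = [0.073, 0.006, 0.12]
d = layer_time_delays(0.0, c, eta)
assert np.allclose(d, [e / v for e, v in zip(eta, c)])
```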
### II-C Collimated Beam Modeling
In this section, we develop a model for the collimated beam generated by the
acoustic source transducer used in our experiments. In order to accurately
model the effects of beam collimation, we use an apodization function inspired
by work of [7] as later adapted by [22]. Adapting this approach to better
model the collimated beam and defining the angle at the receiver as
$\theta^{r}_{1,j}=f_{1}^{r}[\theta^{r}_{L,j}(v)]$ our apodization function is
given by
$\phi_{j}(v)=\text{cos}^{\beta}\left(\theta^{t}_{1}(v)-\theta^{t}_{p}\right)\cos^{2}\left(\theta_{1,j}^{r}(v)\right)\
,$ (17)
where $\beta$ is a parameter that controls the beam apodization, and
$\theta^{t}_{p}$ is the pointing angle of the transmitter as shown in Figure
LABEL:fig:MultiLay. We model the pointing angle of the transmitter because, in
practice, the transmitter is often tilted upward with respect to the axial
direction of the microphone array to increase the ultrasonic illumination
range and ensure reflection of the transmitted signal toward the receivers
[19].
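The apodization function (17) is straightforward to evaluate; the angles below are arbitrary illustrative values:

```python
import numpy as np

def apodization(theta_t1, theta_r1, theta_p, beta):
    """phi_j(v) of (17): cos^beta weighting of the collimated transmit beam
    around its pointing angle theta_p, times a cos^2 receiver term."""
    return np.cos(theta_t1 - theta_p) ** beta * np.cos(theta_r1) ** 2

# The weight is maximal when the transmit path is aligned with the pointing
# angle and the return path is normal to the receiver array.
assert np.isclose(apodization(0.2, 0.0, 0.2, beta=8), 1.0)
# Larger deviation from the pointing angle -> smaller weight.
assert apodization(0.5, 0.0, 0.2, beta=8) < apodization(0.3, 0.0, 0.2, beta=8)
```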
### II-D Single Frequency System Matrix Construction
In this section, we describe how the matrices $A$ and $D$ are constructed from
the multi-layer acoustic model for a single frequency, as described in Section
II. In the following section, we will describe how these
system matrices are combined to form the full system matrix used in the multi-
frequency case.
In order to reduce computation, we window our model of the received signal in
time by replacing $h$ of (8) with
$\tilde{h}(\gamma_{j}(v),t)=h(\gamma_{j}(v),t)\,\text{rect}\left(\frac{t}{t_{0}}-\frac{1}{2}\right),$
(18)
where $t_{0}$ is a constant based on the assumption that
$h(\gamma_{j}(v),t)\approx 0$ for $t>t_{0}$ and $\text{rect}(u)=1$ for
$|u|<\frac{1}{2}$ and is $0$ otherwise. By populating $A$ and $D$ with the
windowed function, $\tilde{h}$, we ensure that the matrices are sparse so that
computation and memory usage are reduced.
Equation (10) can then be used to populate the entries of the system matrix
$A$. For each voxel, $v_{i}$, and receiver location, $r_{j}^{r}$, the
following partial column vector is formed
$\displaystyle
a^{i,j}=\left[\begin{array}[]{c}\tilde{h}(\gamma_{j}(v_{i}),t_{0}-T_{j}(v_{i}))\,\lambda(v_{i})\\\
\vdots\\\ \tilde{h}(\gamma_{j}(v_{i}),t_{M-1}-T_{j}(v_{i}))\,\lambda(v_{i})\end{array}\right]$ (22)
where $t_{m}=m\Delta+T_{o}$ and $\Delta$ is the time sampling period. A full
column vector of the system matrix, $A$, is then formed by concatenating the
partial columns for each receiver.
$\displaystyle A_{*,i}=\left[\begin{array}[]{c}a^{i,1}\\\ \vdots\\\
a^{i,K}\end{array}\right]$ (26)
And then the full system matrix is formed by concatenating the columns.
$\displaystyle A=\left[A_{*,1},\cdots,A_{*,N}\right]$ (27)
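The column-by-column assembly of (22)-(27) can be sketched as follows. Here `h_window` is only a stand-in for the windowed pulse $\tilde{h}$ of (18), and the delays, dispersions, and amplitudes are random placeholders rather than outputs of the propagation model:

```python
import numpy as np

M, K, N = 200, 4, 10            # time samples, receivers, voxels (toy sizes)
dt = 1e-6
t = np.arange(M) * dt

def h_window(gamma, tau):
    """Stand-in for the windowed pulse tilde-h of (18): a short attenuated
    burst delayed by tau and zero outside a 50-microsecond window."""
    u = t - tau
    return np.exp(-gamma * 1e6) * np.sin(2 * np.pi * 3e4 * u) * \
        ((u >= 0) & (u < 50e-6))

rng = np.random.default_rng(1)
gamma = rng.uniform(0, 1e-6, (K, N))     # dispersion per (receiver, voxel)
T = rng.uniform(20e-6, 120e-6, (K, N))   # round-trip delay per (receiver, voxel)
lam = rng.uniform(0.5, 1.0, N)           # amplitude factor lambda(v)

cols = []
for i in range(N):                       # one column A_{*,i} per voxel, (26)
    col = np.concatenate([lam[i] * h_window(gamma[j, i], T[j, i])
                          for j in range(K)])
    cols.append(col)
A = np.stack(cols, axis=1)               # (27); stored sparse (e.g. CSC) in practice
assert A.shape == (M * K, N)
# Windowing keeps each column sparse: few nonzeros per receiver block.
assert np.count_nonzero(A) / A.size < 0.5
```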
Note that Equation (10) can also be used to populate the entries of the matrix
$D$. However, the group delay and signal dispersion formulas introduced above
depend on the voxel $v$, which is not the case for the direct arrival signals.
Hence, we define the time delay from the transmitter, $r^{t}_{o}$, to each
receiver, $r^{r}_{j}$, and the direct arrival signal dispersion, respectively,
as
$\tau_{j}=\frac{\lVert r^{t}_{o}-r^{r}_{j}\rVert}{c_{d}}\ ,$
$\bar{\gamma_{j}}=\alpha_{d}c_{d}\tau_{j}\ ,$
where $\alpha_{d}$ and $c_{d}$ are the attenuation coefficient and acoustic
velocity of the material that the transducers are embedded in. The matrix $D$
is formed from $K$ columns, one for each detector location $r^{r}_{j}$. We
denote the columns of $D$ as $d^{j}$. Then, for the $k^{th}$ receiver, the
partial column vector $d^{k}\in\Re^{M}$ is formed by
$\displaystyle
d^{k}=\left[\begin{array}[]{c}\tilde{h}(\bar{\gamma}_{k},t_{0}-\tau_{k})\\\ \vdots\\\
\tilde{h}(\bar{\gamma}_{k},t_{M-1}-\tau_{k})\end{array}\right]\ .$ (31)
These partial vectors can then be concatenated to form the full matrix given
by
$\displaystyle D=\left[\begin{array}[]{cccc}d^{1}&0&\cdots&0\\\
0&d^{2}&\cdots&0\\\ \vdots&\vdots&\vdots&\vdots\\\
0&0&\cdots&d^{K}\end{array}\right]$ (36)
### II-E Multi-Frequency UMBIR
In order to do multi-frequency reconstruction, we must form a system matrix
that accounts for measurements at all frequencies simultaneously. Let $S$
denote the number of distinct excitation frequencies, let $A^{s}$ and $D^{s}$
denote the associated system matrix and direct arrival signal matrix
constructed using the methods described in Section II-D, and let $y^{s}$
denote the associated measurements. Then we can form the full measurement
vector, $y$, full system matrix, $A$, and direct arrival signal matrix, $D$,
as follows
$\displaystyle y=\left[\begin{array}[]{c}y^{1}\\\ \vdots\\\
y^{S}\end{array}\right]\hskip 10.0ptA=\left[\begin{array}[]{c}A^{1}\\\
\vdots\\\ A^{S}\end{array}\right]\hskip
10.0ptD=\left[\begin{array}[]{cccc}D^{1}&\cdots&0\\\ \vdots&\ddots&\vdots\\\
0&\cdots&D^{S}\end{array}\right].$ (46)
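The stacking in (46) amounts to vertical concatenation of the $A^{s}$ and a block-diagonal arrangement of the $D^{s}$; a minimal sketch with random placeholder blocks:

```python
import numpy as np

def stack_multifrequency(As, Ds, ys):
    """Form y, A, D of (46) from per-frequency blocks A^s, D^s, y^s."""
    y = np.concatenate(ys)
    A = np.vstack(As)
    # Block-diagonal D: each frequency keeps its own direct-arrival basis.
    rows = sum(D.shape[0] for D in Ds)
    cols = sum(D.shape[1] for D in Ds)
    D = np.zeros((rows, cols))
    r = c = 0
    for Db in Ds:
        m, k = Db.shape
        D[r:r + m, c:c + k] = Db
        r, c = r + m, c + k
    return y, A, D

rng = np.random.default_rng(2)
S, MK, N, K = 3, 40, 6, 4        # frequencies, rows per freq, voxels, receivers
As = [rng.standard_normal((MK, N)) for _ in range(S)]
Ds = [rng.standard_normal((MK, K)) for _ in range(S)]
ys = [rng.standard_normal(MK) for _ in range(S)]

y, A, D = stack_multifrequency(As, Ds, ys)
assert A.shape == (S * MK, N) and D.shape == (S * MK, S * K)
```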
## III Prior Model of UMBIR
For the prior model, we adopt the q-generalized Gaussian Markov random field
(QGGMRF) from [30, 22]. With this design, the prior probability is
$\displaystyle p(x)=\frac{1}{z}\exp\left(-\sum_{\\{s,r\\}\in C}b_{s,r}\
\rho(x_{s}-x_{r})\right),$ (47)
where $z$ is a normalizing constant, $C$ is the set of pair-wise cliques,
$\displaystyle\rho(\Delta)=\frac{|\Delta|^{p}}{p\sigma_{s,r}^{p}}\left(\frac{|\frac{\Delta}{T\sigma_{s,r}}|^{q-p}}{1+|\frac{\Delta}{T\sigma_{s,r}}|^{q-p}}\right),$
(48) $\displaystyle\sigma_{s,r}$ $\displaystyle=$
$\displaystyle\sigma_{0}\sqrt{\nu_{s}\nu_{r}},$ (49) $\displaystyle\nu_{s}$
$\displaystyle=$ $\displaystyle
1+(\nu-1)\left(\frac{d_{s}}{d_{max}}\right)^{a},$ (50)
where $\nu>0$, $a>0$, $d_{s}$ is the distance from the sensor assembly to
pixel $s$, and $d_{max}$ is the maximum of $d_{s}$ over all $s$. We use $1<p<q=2$
to ensure convexity and continuity of the first and second derivatives of the
prior model. The parameter $T$ is unit-less and controls the edge threshold.
The QGGMRF parameter $\nu$ is unit-less and can be adjusted to amplify
reflections at deeper regions if needed. Finally, by taking the negative log
of Equation (47), the prior model penalty function is given by
$\displaystyle-\log p(x)=\sum_{\\{s,r\\}\in C}b_{s,r}\
\rho(x_{s}-x_{r})+\text{constant}\ .$ (51)
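The potential function (48) is easy to evaluate directly. This sketch uses illustrative parameter values and checks the basic properties ($\rho(0)=0$, monotone increasing in $|\Delta|$):

```python
import numpy as np

def rho(delta, sigma_sr, p=1.1, q=2.0, T=0.1):
    """QGGMRF potential rho(Delta) of (48): approximately |Delta|^q below
    the edge threshold T*sigma_sr and approximately |Delta|^p above it."""
    u = np.abs(delta / (T * sigma_sr))
    return (np.abs(delta) ** p / (p * sigma_sr ** p)) * \
        (u ** (q - p) / (1.0 + u ** (q - p)))

# With 1 < p < q = 2, rho is convex; large differences (edges) are
# penalized less aggressively than under a quadratic prior.
assert rho(0.0, sigma_sr=1.0) == 0.0
d = np.linspace(0.0, 1.0, 101)
r = rho(d, sigma_sr=1.0)
assert np.all(np.diff(r) > 0)      # strictly increasing for Delta >= 0
```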
## IV MAP Estimation and the UMBIR Algorithm
In order to perform UMBIR reconstruction, we will need to estimate both the
image, $x$, and the direct arrival signals coefficients, $g$. Using the MAP
formulation, the multi-layer UMBIR reconstruction is then given by
$\displaystyle\left(\hat{x},\hat{g}\right)$
$\displaystyle=\arg\min_{(x,g)}\left\\{-\log p(y|x,g)-\log p(x)-\log
p_{g}(g)\right\\}\ .$ (52)
We will use an improper prior distribution for $g$ of the form
$-\log p_{g}(g)=\text{constant}\ ;$
the prior term for $x$ is given by (51) above; and the forward model term is
given by
$-\log p(y|x,g)=\frac{1}{2\sigma^{2}}\left\|y-Ax-
Dg\right\|_{2}^{2}+\text{constant}\ ,$ (53)
where the system matrix, $A$, and the direct arrival matrix, $D$, are
constructed as described in Section II above.
Putting this together results in the optimization problem
$\displaystyle(\hat{x},\hat{g})=\arg\min_{(x,g)}$
$\displaystyle\left\\{\rule{0.0pt}{20.0pt}\frac{1}{2\sigma^{2}}\left\|y-Ax-
Dg\right\|^{2}\right.$ $\displaystyle\left.+\sum_{\\{s,r\\}\in C}b_{s,r}\
\rho(x_{s}-x_{r})\right\\}.$ (54)
In order to compute the MAP estimate (54), we use the Iterative Coordinate
Descent (ICD) algorithm with the majorization technique for the prior model,
as described in [29].
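As a simplified illustration of the joint estimation in (54), the sketch below minimizes the data-fit term plus a quadratic (rather than QGGMRF) prior by plain gradient descent on the stacked unknowns $[x;g]$; the actual algorithm uses ICD with majorization, so this is only a structural analogy, with toy random matrices in place of $A$ and $D$:

```python
import numpy as np

rng = np.random.default_rng(3)
MK, N, K = 60, 8, 3
A = rng.standard_normal((MK, N))         # stand-in system matrix
D = rng.standard_normal((MK, K))         # stand-in direct-arrival basis
x_true = rng.standard_normal(N)
g_true = rng.standard_normal(K)
sigma, reg = 0.05, 1e-3                  # noise level, quadratic prior weight
y = A @ x_true + D @ g_true + sigma * rng.standard_normal(MK)

B = np.hstack([A, D])                    # solve jointly for z = [x; g]
z = np.zeros(N + K)
# Step size 1/L, with L the Lipschitz constant of the gradient.
step = 1.0 / (np.linalg.norm(B, 2) ** 2 / sigma ** 2 + reg)
for _ in range(5000):
    grad = B.T @ (B @ z - y) / sigma ** 2 + reg * z
    z = z - step * grad

x_hat, g_hat = z[:N], z[N:]
assert np.linalg.norm(B @ z - y) < np.linalg.norm(y)   # fit improved over z = 0
```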
Figure 4: Comparison of experimentally measured and simulated apodization functions. A simulated beam profile, $\phi(v)^{(\beta)}e^{-\alpha_{0}r(v)}$, using the apodization function of (17) for (a) $\beta=1$, (b) $\beta=4$, and (c) $\beta=8$. (d) Experimentally measured beam profile for a collimated source. Notice that the value $\beta=8$ provides a reasonably accurate approximation to the true beam profile.
Figure 5: Illustration of experimental set up for concrete cylinder
experiment. (a) Picture of concrete cylinder that contains a central bore with
the acoustic imaging senor. Notice that a 50 mm notch in the concrete is
painted red. (b) Picture of acoustic imaging sensor removed from cylinder. (c)
Diagram of a cross section of the concrete cylinder with detailed
specification of distances. The blue rectangle indicates the region used to
generate synthetic data to evaluate the UMBIR algorithm. The red region
indicates the cross-section that is to be reconstructed. (d) Diagram of the
receiver for the acoustic imaging sensor with detailed specification of sensor
positions.
## V Experimental Results
In this section, we present results using both synthetic and measured data
sets. The synthetic data was generated using the K-Wave simulation package
[31]. Both our synthetic and measured data experiments are designed to
evaluate the performance of the well-bore integrity inspection system designed
at the Los Alamos National Laboratory (LANL) and shown in Figure 5 and Figure
9. This system uses a collimated acoustic transmitter along with an array of
15 receiving transducers. More details about the transducer design can be
found in [32].
### V-A Methods
In order to validate our method, we performed two different experimental
studies: the concrete cylinder (CC) experiment and the granite block (GB)
experiment. To further evaluate our method, we also generated simulated data
similar to the CC experiment using the K-Wave software (referred to in the
results as CC-KWave). Table II provides all parameters used in our
reconstruction experiments for the three cases.
TABLE II: Parameter settings for concrete cylinder and granite block
reconstructions. Forward Model Parameters
---
Parameter | CC-KWave | CC-Exp | GB-Exp | Unit
---|---|---|---|---
Num. rows | 140 | 140 | 110 | -
Num. cols | 70 | 70 | 124 | -
Recon. resolution | 3 | 3 | 3 | mm
Sampling freq. | 2 | 2 | 5 | MHz
FOV height | 420 | 420 | 330 | mm
FOV depth | 210 | 210 | 370 | mm
$\beta$ | 8 | 8 | 8 | -
$\alpha_{\text{water}}$ | 2 | 2 | 2 | $1/\text{Hz m}$
$\alpha_{\text{concrete}}$ | 30 | 30 | 30 | $1/\text{Hz m}$
$\alpha_{\text{granite}}$ | - | - | 88 | $1/\text{Hz m}$
Prior Model Parameters
Parameter | CC-KWave | CC-Exp | GB-Exp | Unit
---|---|---|---|---
Num. iterations | 100 | 100 | 100 | -
$\sigma$ | 0.1 | 0.12 | 0.2 | Pascal
p | 1.1 | 1.1 | $1.1$ | -
q | 2.0 | 2.0 | $2.0$ | -
T | 0.01 | 1 | 0.001 | -
$\sigma_{0}$ | 2 | 2 | 2 | $m^{-2}$
$\nu$ | 10 | 10 | 5 | -
$a$ | 2 | 2 | 3 | -
Our forward model can account for the shape of the collimated beam as
discussed in (17). Therefore, we have to pick the value of $\beta$ in (17)
that best matches the beam profile in the experimental data. In Figure 4, we
visualize the effect of $\beta$ on beam spread and compare it with measured
data. In Figure 4(a-c), we plot $\phi(v)^{(\beta)}e^{-\alpha_{0}r(v)}$ for
$\beta$ = 1, 4, and 8, where $\phi(v)^{(\beta)}$ is the apodization function
from (17), $\alpha_{0}$ is the attenuation coefficient in $m^{-1}$,
$\theta(v)$ is the angle between the beam direction and $v$, and $r(v)$ is the
distance from the source to $v$. Increasing $\beta$ decreases the beam spread
and makes the beam more collimated, with a good perceptual fit to the measured
data in Fig. 4(d) when $\beta=8$.
Figure 6: Results using synthetic data generated with K-Wave for concrete
cylinder experiment. (a) ground truth without notch; (b), (c), and (d) single
frequency SAFT reconstructions at 29 kHz, 42 kHz, and 58 kHz without notch;
(e) ground truth with notch; (f) multi-frequency UMBIR reconstruction of
K-Wave data without notch; (g), (h), and (i) single frequency UMBIR
reconstructions at 29 kHz, 42 kHz, and 58 kHz without notch; (j) multi-
frequency UMBIR reconstruction of K-Wave data with notch. The red and green
dashed lines indicate the notch and backwall locations, respectively. Notice
that the two MF-UMBIR reconstructions of (f) and (j) accurately reconstruct
the location of the notch and the back wall.
### V-B Concrete Cylinder Experiment
Figure 5 illustrates the experimental setup for the concrete cylinder
experiment that was performed at LANL [32]. The concrete cylinder is designed
to represent a concrete well bore with a central hole that contains the
acoustic imaging sensor.
Figure 5(a) shows that on one side of the concrete cylinder there is a notch
marked with red paint that is 50 mm deep and subtends an angle of
approximately $45^{\circ}$. Figure 5(c) is a detailed diagram showing the
dimensions and positions of all the components. Notice that the acoustic
imaging sensor is placed in the center of the cylinder with the receiver array
at the top and the acoustic transmitter at the bottom. The blue box in Figure
5(c) shows the region in which the K-Wave simulation is performed for the
generation of synthetic data, and the red box shows the region in which the
UMBIR reconstruction is computed.
Figure 5(b) shows the receiver array along with the collimated transmitter
hanging below. This entire assembly is positioned inside the bore hole at the
center of the concrete cylinder where it can be rotated by the computer-
controlled rotation system. Figure 5(d) is a diagram showing the position of
the transducers in the sensor array. We note that the transmitters and
receivers are immersed in water to facilitate acoustic transmission into and
out of the concrete. However, there is a vacuum barrier, shown as a green line
in Figure 5(c), positioned between the transmitter and receivers to block the
direct arrival signal.
Table III lists the transmit signal parameters at the three different
excitation frequencies that were used. The data was collected at $5^{\circ}$
increments using a rotational span of $180^{\circ}$, with the sensor assembly
facing the middle of the notch at the rotational position of $90^{\circ}$.
Figure 7: Results of reconstructing experimental data collected from the concrete cylinder. Columns 1 to 3: single frequency UMBIR reconstructions at excitation frequencies of 29 kHz, 42.4 kHz, and 58 kHz, without the notch (top row, $0^{\circ}$) and with the notch (bottom row, $90^{\circ}$). Column 4: multi-frequency UMBIR reconstruction without and with the notch. Notice that the multi-frequency UMBIR reconstruction results in much better localization and greater accuracy of the back wall location.
TABLE III: Transmit signal parameters used for the concrete cylinder experiment.
Excitation Frequency | Duration | Pulse Shape
---|---|---
29 kHz | 200 $\mu$s | Tukey
42.4 kHz | 200 $\mu$s | Tukey
58 kHz | 50 $\mu$s | Tukey
#### V-B1 K-Wave Results
TABLE IV: K-Wave parameters used for the concrete cylinder.
Component | Value | Units
---|---|---
Computational domain height | 855 | mm
Computational domain width | 362 | mm
Pixel pitch in x/y direction | 3 | mm
Perfectly matched layer (PML) size | 20 | samples
Water acoustic speed | 1.5 | km/s
Plexiglas acoustic speed | 2.82 | km/s
Concrete acoustic speed | 2.62 | km/s
Water density | 997 | kg/m$^3$
Plexiglas density | 1180 | kg/m$^3$
Concrete density | 1970 | kg/m$^3$
To better understand the concrete cylinder experiment, we first
perform reconstructions using synthetic data generated with the K-Wave
simulation and the parameters shown in Table IV. Table II lists the
parameter settings used in the K-Wave data reconstructions under CC-KWave, and all
simulations use a perfectly matched layer (PML) that extends in all
directions from the outer boundary of the computational domain. The regions
beyond the top, bottom, and back wall are assumed to be air (yellow region) to
mimic the real data, and an isolating layer (void) was placed between the
source and receivers to partially block direct arrival signals.
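The layered medium used in these simulations can be sketched as property maps on the simulation grid. The sketch below is not the authors' code; the layer boundary positions (`water_cols`, `plexi_cols`) are hypothetical placeholders, while the material values come from Table IV:

```python
import numpy as np

# Sketch (not the authors' code) of the layered medium for the K-Wave
# simulation: 2D sound-speed and density maps built from the Table IV values.
PITCH_MM = 3.0                      # pixel pitch in x/y (Table IV)
H, W = 285, 120                     # ~855 mm x 362 mm domain at 3 mm pitch

def layered_maps(water_cols, plexi_cols):
    """Return (speed [km/s], density [kg/m^3]) maps; columns index depth."""
    speed = np.full((H, W), 2.62)   # concrete acoustic speed
    rho = np.full((H, W), 1970.0)   # concrete density
    speed[:, :water_cols] = 1.5     # water column around the sensor
    rho[:, :water_cols] = 997.0
    speed[:, water_cols:water_cols + plexi_cols] = 2.82   # Plexiglas casing
    rho[:, water_cols:water_cols + plexi_cols] = 1180.0
    return speed, rho

speed, rho = layered_maps(water_cols=15, plexi_cols=4)
```

Maps of this form are what a wave solver such as K-Wave consumes as its heterogeneous medium definition.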
Figure 6 shows a comparison of single frequency UMBIR, multi-frequency UMBIR,
and SAFT when reconstructing synthetically generated concrete cylinder data.
Figure 6(a) and (e) show the ground truth used to generate the data with and
without the notch, respectively. Without the notch, the back wall should be
located at a depth of 18.85 cm (shown with a red dotted line), and with the
notch it should be located at a depth of 23.85 cm (shown with a green dotted
line).
The single frequency SAFT reconstructions in Figure 6(b-d) show multiple
reflections from the back wall and other artifacts at each excitation
frequency, which leads to uncertainty in the estimated location of the back
wall. In contrast, the single frequency UMBIR reconstructions in Figure 6(g-i)
show that even single-frequency UMBIR reconstructions provide fewer artifacts
and more accurate localization of the back wall than SAFT.
The multi-frequency UMBIR reconstruction without the notch in Figure 6(f) is a
substantial improvement over the single frequency UMBIR reconstructions. The
same is true for the multi-frequency UMBIR reconstruction with the notch in
Figure 6(j). This demonstrates that joint processing of low and high
excitation frequencies results in substantially better reconstruction quality
than single-frequency reconstruction.
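The benefit of joint processing can be illustrated with a toy version of the underlying MAP estimation: a single shared image is fit to data from several forward models at once. The matrices below are random stand-ins, not the actual UMBIR system matrices:

```python
import numpy as np

# Toy sketch of joint multi-frequency MAP estimation: one shared image x is
# fit to data from several excitation frequencies at once,
#     min_x  sum_f ||y_f - A_f x||^2 + lam ||x||^2,
# solved here in closed form via the stacked normal equations.
rng = np.random.default_rng(0)
n = 20                                                  # image pixels
A = [rng.standard_normal((40, n)) for _ in range(3)]    # one A_f per frequency
x_true = rng.standard_normal(n)
y = [Af @ x_true for Af in A]                           # noiseless toy data

lam = 1e-3
lhs = lam * np.eye(n) + sum(Af.T @ Af for Af in A)
rhs = sum(Af.T @ yf for Af, yf in zip(A, y))
x_hat = np.linalg.solve(lhs, rhs)                       # joint reconstruction
```

Because the three data-fit terms constrain the same image, ambiguities left by one frequency can be resolved by the others, which is the intuition behind the improved multi-frequency reconstructions.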
#### V-B2 Experimental Data Results
Figure 7 depicts a selection of cross-section reconstruction results using the
measured experimental data. The left 3 columns of Figure 7 show single
frequency UMBIR cross-section reconstructions for the excitation frequencies
29 kHz, 42.4 kHz, and 58 kHz, without and with the notch. Column 4 shows the
multi-frequency UMBIR reconstruction without and with the notch. We again note
that the back wall is located at 18.85 cm without the notch and at 23.85 cm
with the notch. From this we see that joint reconstruction of the multiple
frequencies more accurately localizes the back wall.
The panoramic reconstruction view of the concrete cylinder in Figure
LABEL:fig:Concrete-Cylinder-Real-Panoramic provides a more visually intuitive
interpretation of the multi-frequency UMBIR reconstruction. The panoramic
reconstruction is formed by combining the views from each measured angle (37
equi-spaced angles from $0^{\circ}$ to $180^{\circ}$) to form a 2-dimensional
horizontal cross-section at a fixed height of 27 cm. In this case, the back
wall both with and without the notch is shown as a dotted red line. Notice
that the multi-frequency reconstruction closely follows the true wall
location. There is some error in capturing the transition along the notch, but
the locations are captured accurately over a large range of angles, and there
are few reflection artifacts.
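The panoramic assembly step can be sketched as stacking the per-angle radial profiles into a polar (angle by radius) map. The profiles below are synthetic stand-ins with a single bright back-wall peak, placed deeper in the views facing the notch:

```python
import numpy as np

# Sketch of the panoramic assembly: the 1D radial profile reconstructed at
# each of the 37 rotation angles (0 to 180 deg in 5 deg steps) is stacked
# into a polar (angle x radius) map.
angles = np.linspace(0.0, 180.0, 37)
n_r = 100                                    # radial samples per view

def profile(wall_idx):
    p = np.zeros(n_r)
    p[wall_idx] = 1.0                        # bright back-wall reflection
    return p

# deeper wall (the notch) only in the views facing ~90 deg
polar = np.stack([profile(80 if 75.0 <= a <= 105.0 else 63) for a in angles])
wall_radius = polar.argmax(axis=1)           # estimated wall radius per angle
```

Reading off the peak radius per angle, as in the last line, is how a wall-location curve like the dotted red line in the panoramic figure can be traced.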
Table V gives approximate computation times for 100 iterations of single- and
multi-frequency UMBIR using optimized code running on an Intel(R) Core(TM)
i7 CPU E5-2603 0 @ 1.80 GHz with 32.00 GB RAM. The table lists both the time
required to compute the system matrices, $A$ and $D$, and the time required to
reconstruct one 2D image for the concrete cylinder data set shown in Figure
6(i) and (j).
TABLE V: Computation time to obtain the UMBIR and MF-UMBIR reconstructions in Figure 6(i) and (j).
Computation Time | UMBIR | MF-UMBIR
---|---|---
Time to compute system matrices | 26.6 s | 91.3 s
Time to perform reconstruction | 44.3 s | 205.2 s
### V-C Granite Block Experiment
Figure 9: Setup for the granite block experiment. (a) Top-view photo of the
granite block used in the experiment. The left and right defects are
approximately 21 cm away from the center of the borehole. (b) The
transmitter/receiver system used in the experiment. (c) The dimensions of the
granite block. (d) A diagram of a cross-section. The red box indicates the
area reconstructed by UMBIR.
Figure 9 illustrates the experimental setup for the granite block data set
described in this section. The granite block has a borehole with a steel
casing of 4" inner diameter and 4.5" outer diameter. The casing-cement
interface is located at 4.5" from the borehole center, and the cement-granite
interface is located at 5.25". The granite block has induced defects, as can
be seen in Figure 9(a), located at approximately 21 cm from the borehole
center.
Before imaging, the borehole was filled with water. Similar to the previous
experiment, there is one transmitter and 15 receivers. The transmitter is
aligned with the detectors and has a firing angle of $35^{\circ}$. The
receiver/transmitter system deployed to the granite block is shown in Figure
9(b).
Data were collected from 24 azimuthal scans uniformly separated by
$15^{\circ}$ to cover $360^{\circ}$. Table VI lists the transmission
parameters at the three excitation frequencies that were used for
this experiment. Table II gives the parameter values used in reconstructions
of the granite block data, and the red box in Figure 9(c) shows the area
reconstructed by UMBIR.
TABLE VI: Transmit signal parameters used for the granite block experiment.
Excitation Frequency | Duration | Pulse Shape
---|---|---
103.38 kHz | 150 $\mu$s | Tukey
162.316 kHz | 50 $\mu$s | Tukey
220.554 kHz | 50 $\mu$s | Tukey
Figure 10: UMBIR and multi-frequency UMBIR reconstruction results using
experimentally measured data from the granite block (GB) experiment. Left to
right, the columns correspond to the excitation frequencies 103.38 kHz,
162.316 kHz, and 220.554 kHz, and to the multi-frequency joint reconstruction.
The first row corresponds to $0^{\circ}$ with no defect, and
the second row corresponds to $90^{\circ}$ with a defect. The red dotted line
indicates the location of the defect. Notice that the multi-frequency UMBIR
reconstruction shows substantial improvements over the single-frequency
reconstructions and more accurately localizes the position of the defect and
the wall.
Figure 10 shows cross-section reconstructions from the granite block
experimental data, at rotational positions $0^{\circ}$ and $90^{\circ}$, with
the defect in view at $90^{\circ}$ and not at $0^{\circ}$. Notice that the
multi-frequency UMBIR reconstruction at $0^{\circ}$ has a clear reflection
from the sidewall around 31 cm (the black dashed line), while the
reconstruction at $90^{\circ}$ has a clear reflection from the defect around
21 cm (the red dashed line).
Figure LABEL:fig:LANL_II_realResults_slice shows the multi-frequency UMBIR
panoramic reconstruction at a fixed height of 25.8 cm in polar coordinates.
The two red circles indicate the defect locations, and the black dashed square
shows the wall location. Notice that the multi-frequency UMBIR reconstruction
shows clear reflections from the edges of the specimen around the wall
location along with the defects at $90^{\circ}$ and $270^{\circ}$. The
distance from center to each corner is underestimated, which we speculate is
due to multiple reflections near the corners.
## VI Conclusion
In this paper, we proposed a multi-layer, multi-frequency collimated
ultrasound model-based iterative reconstruction (UMBIR) algorithm. To do this,
we introduced a computationally efficient method for computing an accurate
forward-model system matrix for the multi-layered structures that typically
occur in practical ultrasound imaging scenarios using collimated-beam
ultrasonic transducers. We also formulated the reconstruction as MAP
estimation with a space-varying image prior along with a model of the direct
arrival signal.
We tested our method on both simulated and experimentally measured data using
two different scenarios corresponding to a concrete cylinder and a granite
block. Both scenarios were designed to represent the imaging of defects in a
well-bore.
In all cases, we found that multi-frequency UMBIR reconstructions had
substantially better quality than single-frequency UMBIR reconstructions, and
that single-frequency UMBIR reconstructions had substantially better quality
than SAFT reconstructions. In addition, the multi-frequency UMBIR
reconstructions provided much better localization and accurately detected the
locations of defects and object back walls.
## Acknowledgment
A. M. Alanazi was supported by King Saud University. C. A. Bouman was
partially supported by the Showalter Trust and by the U.S. Department of
Energy. C. A. Bouman and G.T. Buzzard were partially supported by NSF
CCF-1763896. S.V. and Hector Santos-Villalobos were supported by the U.S.
Department of Energy staff office of the Under Secretary for Science and
Energy under the Subsurface Technology and Engineering Research, Development,
and Demonstration (SubTER) Crosscut program, and the office of Nuclear Energy
under the Light Water Reactor Sustainability (LWRS) program.
Preprint
# Peano: Learning Formal Mathematical Reasoning
Gabriel Poesia$^{1}$ and Noah D. Goodman$^{1,2}$
$^{1}$Department of Computer Science, Stanford University
$^{2}$Department of Psychology, Stanford University
<EMAIL_ADDRESS>
###### Abstract
General mathematical reasoning is computationally undecidable, but humans
routinely solve new problems. Moreover, discoveries developed over centuries
are taught to subsequent generations quickly. What structure enables this, and
how might that inform automated mathematical reasoning? We posit that central
to both puzzles is the structure of procedural abstractions underlying
mathematics. We explore this idea in a case study on 5 sections of beginning
algebra on the Khan Academy platform. To define a computational foundation, we
introduce Peano, a theorem-proving environment where the set of valid actions
at any point is finite. We use Peano to formalize introductory algebra
problems and axioms, obtaining well-defined search problems. We observe
existing reinforcement learning methods for symbolic reasoning to be
insufficient to solve harder problems. Adding the ability to induce reusable
abstractions (“tactics”) from its own solutions allows an agent to make steady
progress, solving all problems. Furthermore, these abstractions induce an
order to the problems, seen at random during training. The recovered order has
significant agreement with the expert-designed Khan Academy curriculum, and
second-generation agents trained on the recovered curriculum learn
significantly faster. These results illustrate the synergistic role of
abstractions and curricula in the cultural transmission of mathematics.
###### keywords:
automated theorem proving, mathematical reasoning, reinforcement learning,
curriculum learning, library learning
## 1 Introduction
A wide range of important human questions are formulated in the language of
mathematics: problems from physics to economics to computation. Thus,
computers that are able to reason about mathematical problems could have broad
impact on scientific questions. Such systems could be used directly by
scientists or could potentially aid in training the next generation of human
scientists and engineers. Existing systems, such as Computer Algebra Systems,
implement procedures for solving important classes of well-known mathematical
problems, such as differential equations or matrix computations. A much more
ambitious goal would be to have computers that can reason about novel types of
problems. How could that be done?
There are several computational foundations for expressing mathematics that
can serve as a basis for this endeavour, such as type systems and first-order
or higher-order logics. Given any of these formal systems, we can make new
definitions, state propositions and specify rules of inference. Together,
these components pose a well-defined computational problem: given a
mathematical proposition $p$, determine whether it is provable under the given
assumptions. Unfortunately, no algorithm can decide this for general
mathematical propositions: this is precisely Hilbert’s _Entscheidungsproblem_
, answered in the negative by both Church and Turing even before modern
computers had been invented.
In spite of the impossibility of a general procedure, humans routinely find
solutions to novel mathematical problems — either new to them or to all
mathematicians. Moreover, new generations of mathematicians can reach the
frontiers of mathematical knowledge notably faster than the time it took
previous generations to discover them. These puzzles suggest some structure to
human mathematics that makes new problems approachable and their solutions
teachable, even if we might not hope to solve arbitrary mathematical problems.
To investigate this structure, we propose a case study on learning to solve
educational mathematical problems given minimal prior knowledge. More
concretely, we set out to formalize 5 sections of the algebra curriculum available
on the Khan Academy educational platform. We then aim to automatically solve
exercises from those sections in a system with no prior knowledge on _how_ to
find solutions, aside from the basic axioms from which solutions can be
formed. Focusing on this educational domain serves two purposes: It allows us
to attack the general problem of mathematical reasoning from a starting point
that must be solvable – even a child can do it! And, it means that successes
at learning to generate human-like solutions have immediate applications to
education.
Many modern theorem-proving languages, such as Lean, Coq or Isabelle, could be
used to _represent_ the axioms and problems we aim to study. However, current
languages pose a challenge in setting up the problem of _searching_ for
solutions: they permit an infinite number of valid proof steps at any given
point. To conduct our case study, we start by proposing a theorem-proving
language, called Peano, designed with a companion environment for general
proof search. This environment exposes a space of valid proof steps that is
finite at all times, as we describe in detail in Section 4. Peano uses
dependent types to encode mathematical definitions and proofs in a general
fashion using the paradigm of _propositions as types, proofs as programs_.
Using Peano, we formalize 5 sections of Khan Academy — their axioms and
exercises — obtaining a family of well-defined search problems.
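The kind of environment described above can be sketched as follows. This is illustrative only, not Peano's actual API: a state is a set of derived facts, and the valid actions are the finitely many (axiom, new fact) pairs applicable to that state.

```python
# Minimal sketch (not Peano's actual API) of a proof-search environment with
# a finite action space. A state is a set of derived facts; each action
# applies one named axiom to already-derived facts, so the valid actions at
# any state form a finite, enumerable list.
class ProofEnv:
    def __init__(self, axioms, facts, goal):
        self.axioms = axioms            # name -> function(facts) -> new facts
        self.facts = set(facts)
        self.goal = goal

    def valid_actions(self):
        acts = []
        for name, rule in self.axioms.items():
            for fact in rule(self.facts):
                if fact not in self.facts:
                    acts.append((name, fact))
        return acts                     # always finite

    def apply(self, action):
        self.facts.add(action[1])
        return self.goal in self.facts  # True once the goal is proved

# Tiny example: derive x = 3 from x + 5 = 8 with one "subtract 5" axiom.
axioms = {"sub5": lambda fs: {"x = 3"} if "x + 5 = 8" in fs else set()}
env = ProofEnv(axioms, {"x + 5 = 8"}, goal="x = 3")
```

The key property is that `valid_actions` enumerates a finite list, so an agent can choose among presented options rather than generate arbitrary proof terms.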
How might we solve these problems in a computer agent? Unguided search can
only find the most trivial solutions because the search space grows rapidly
with solution depth, as for most formal systems. Even short solutions,
however, reveal patterns about when axioms are useful, suggesting we can
attempt to learn from past searches to improve our chances of solving harder
problems. More specifically, we can guide search using a _policy_ : a
distribution over actions to take given the current state. A policy can be
used to prioritize taking actions that are more likely to lead to a solution.
Learning a policy from experience is the central problem of _reinforcement
learning_. We can thus use techniques from reinforcement learning to train an
agent to solve problems in our mathematical domain. Indeed, we observe in
section 6 that an agent trained using an existing reinforcement learning
method can learn to solve all problems in the first two Khan Academy sections.
However, it fails to make progress on harder problems beyond the first
sections. Indeed, as we look at later problems in the Khan Academy curriculum,
axiomatic solutions become steadily longer and thus progressively less likely
to be found by exploration. This problem would grow worse were we to continue
on the human curriculum, even if we restricted ourselves to later sections
that do not require additional axioms.
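The policy-guided search described above can be sketched as a best-first search whose frontier is ordered by cumulative log-probability under the policy. The graph and "policy" below are toy stand-ins for a proof-state graph and a learned model:

```python
import heapq

# Sketch of policy-guided best-first search: a policy assigns
# log-probabilities to (state, action) pairs, and the frontier is ordered by
# cumulative negative log-probability, so likely action sequences are
# expanded first.
def guided_search(start, goal, actions, policy, budget=1000):
    frontier = [(0.0, start, [])]        # (cumulative -log p, state, path)
    seen = set()
    while frontier and budget > 0:
        budget -= 1
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for act, nxt in actions(state):
            heapq.heappush(frontier,
                           (cost - policy(state, act), nxt, path + [act]))
    return None

# Toy example: reach 8 from 1; the policy prefers doubling over incrementing.
actions = lambda s: [("inc", s + 1), ("dbl", s * 2)]
policy = lambda s, a: 0.0 if a == "dbl" else -1.0    # log-probabilities
path = guided_search(1, 8, actions, policy)
```

A good policy lets the search reach deep solutions within a fixed budget; a uniform policy degenerates into the unguided search that only finds shallow solutions.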
This difficulty, however, is certainly not a barrier for human students, who
routinely learn this material and much beyond. What structure in these
problems might they use that an agent could leverage to make progress? We show
that adding the ability to abstract patterns in previously found solutions into
new atomic, higher-level actions makes the problem tractable. We present a
simple algorithm based on anti-unification in Section 7 for learning _tactics_
— higher-level solution actions that can invoke axioms or other tactics. Given
actions at the right level of abstraction, all problems we study admit short
formal solutions, which can thus be found within a reasonable search budget.
When attempting combinations of existing tactics, success at new problems
helps reveal which new combinations are useful, further suggesting newer
tactics. This process allows the agent to make progress without being given
any additional information about the problems themselves.
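The core operation behind tactic induction can be illustrated with a toy anti-unifier over expression trees: the least general generalization of two terms keeps their matching structure and replaces mismatched subterms with shared, numbered holes. This is a simplification; the algorithm of Section 7 operates over sequences of proof steps rather than raw tuples.

```python
# Toy sketch of anti-unification over expression trees encoded as tuples.
def antiunify(a, b, holes=None):
    if holes is None:
        holes = {}
    if a == b:
        return a                         # identical subterms are kept
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        return tuple(antiunify(x, y, holes) for x, y in zip(a, b))
    if (a, b) not in holes:              # the same mismatch reuses its hole
        holes[(a, b)] = f"?{len(holes)}"
    return holes[(a, b)]

# The steps ("add", "x", 0) and ("add", "y", 0) generalize to one pattern:
pattern = antiunify(("add", "x", 0), ("add", "y", 0))
```

A pattern such as `("add", "?0", 0)` can then be registered as a new atomic action that applies to any term matching its shape, which is how repeated solution fragments become reusable tactics.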
Furthermore, we posit that the tactics our agents learn reflect some of the
ordering given in the Khan Academy curriculum, _despite that ordering not
having been given to the agent during training_. When we re-order problems
based on dependencies between the learned tactics that appear in their
solutions, as we describe in Section 8, we find that the recovered order
agrees with the order of problems on Khan Academy to a significant level. We
finally observe a virtuous interaction between tactic learning and curricula.
When training a second agent again using tactic learning, but this time seeing
problems as ordered by the curriculum constructed from the first agent, we
find that this second agent learns significantly faster. This experiment
paints a computational account of the role of curricula in cultural
transmission of mathematics: abstractions that might take long to be developed
can be efficiently taught when the right ordering of educational experiences
is put together.
In summary, we make the following contributions:
* •
We introduce Peano, a theorem-proving language based on dependent types along
with an environment for proof search where the action space is finite.
* •
We formalize 5 sections of the Khan Academy algebra curriculum in Peano, and
show that the resulting search problems are challenging for a reinforcement
learning agent.
* •
We show that a _tactic induction_ algorithm, where patterns in previous
successful searches give rise to new atomic actions, enables an agent to make
steady progress through the Khan Academy problems, eventually learning to
solve them all.
* •
We observe that the tactics our agent learns can be used to largely
reconstruct the order of problems in Khan Academy, and that training an agent
on the resulting order enables faster learning.
## 2 Related Work
Automated theorem proving was one of the first targets of artificial
intelligence research. The Logic Theorist, a program developed by Newell, Simon
and Shaw [19, 28], was demonstrated at the Dartmouth Summer Research Project
on Artificial Intelligence in 1956 [18], the meeting that first coined the
name “Artificial Intelligence” for the field. This work pioneered the idea of
formulating mathematical reasoning problems as search in a state graph, where
edges correspond to possible deductions. These ideas were later extended to
other richer formal systems, such as first-order logic and Euclidean geometry
[21]. These programs quickly surfaced the need for search heuristics: even in
a simple domain such as propositional logic, unguided search algorithms can
only find solutions to trivial problems because of the combinatorial explosion
of the search space. While heuristics can push this limit, the need to
manually engineer problem-solving strategies for each problem domain hindered
progress and interest in this research program [27].
These initial efforts on automated reasoning aimed to create programs that
solved problems in general domains by making sequences of human-like deduction
steps. Two deviations from this paradigm led to progress in different
directions. First, focusing on constrained problems — such as satisfiability
(SAT), in propositional logic or other theories — led to algorithms such as DP
[7] and DPLL [6]. While even SAT is NP-Complete, these methods and later
developments were able to solve increasingly large practical instances,
enabling a range of applications in areas such as model checking and program
verification.
Towards using computers in general mathematics, full automation of reasoning —
such as what SAT and SMT solvers aim to provide — has proved much more
difficult [27]. But significant progress has been made in the development of
interactive proof assistants, where humans guide the proof generation process
with potential aid for completing lower-level details (in fact, in modern
proof assistants such as Isabelle/HOL, SMT solvers can be called to find
proofs for sub-problems that are expressible in a theory they support). Modern
proof assistants — such as Coq, Lean, Isabelle and HOL Light — have enabled
the formalization of large bodies of complex mathematical results; examples
include a formal proof of the Kepler conjecture [10] in the HOL Light and
Isabelle proof assistants, and a proof of the independence of the continuum
hypothesis [11] in Lean. Thus, their logical foundations have been shown to
express and verify complex mathematical results. Even if their power also
makes automation difficult in general, these languages allow users to define
_tactics_ : programs that can encode complex proof strategies, and thus
implement domain-specific automation to assist proving theorems in the target
domain.
Concurrently, machine learning methods have advanced substantially over the
last decades, both in supervised learning [16], where a model is trained to
fit a given dataset, and in reinforcement learning (RL), where the learner’s
goal is choose actions that maximize reward signals in a dynamic environment
[20, 1, 29]. In particular, deep neural networks trained via RL emerged as a
tool to learn policies (distributions over actions to take given a state) and
value functions (an estimate of rewards that can be obtained from a given
state) from raw representations, such as strings or pixels. Both policies and
value functions can be used to guide search algorithms, making deep RL
suitable for learning to search in large-scale problems such as finding proofs
[31, 2, 12]. Given the availability of proofs generated by human
mathematicians in large formalization projects, researchers have explored the
use of human-written proofs as supervised training data to guide proof search
[13, 31, 2]. There has been little experience, however, in having
mathematicians use these systems outside of the domains they have been trained
on, as is the case in new formalization projects. To eliminate or mitigate
this dependency on pre-existing training data, research efforts have also
focused on the application of deep reinforcement learning to theorem proving,
as a means to learn from experience by interacting with an environment [14,
32, 26]. Systems that learn without prior training data, however, have only
been demonstrated in more constrained formal systems, such as first-order
logic [14] or a selection of a few tactics in HOL4 [32].
Finally, another related thread of research that has seen significant progress
recently is the field of program synthesis, where the goal is to generate
computer programs that satisfy a given specification. Programs can be mapped
to mathematical proofs by encoding mathematical propositions as types and
proofs as programs. This is known as the Curry-Howard correspondence, which we
explore in more depth in Section 4. An important insight of recent program
synthesis systems, such as DreamCoder [8], is that learning to search for
programs can be interspersed with _library learning_ , whereby the synthesizer
extracts useful reusable patterns — abstractions — from programs that it
managed to synthesize. When solving a family of related problems, discovering
abstractions can drastically reduce the difficulty of harder problems: even
complex programs can have short implementations in terms of the appropriate
library. This insight has a direct interpretation in the land of mathematical
reasoning: the difficulty of a mathematical problem also heavily depends on
the “library” — of existing lemmas or tactics. Given any starting library,
harder problems can require arbitrarily large search depths to be solved. But
if we have a family of related problems of progressive difficulty, learning
abstractions — such as the tactics we induce in Section 7 — is a means to make
steady progress.
## 3 Overview
Figure 1: Illustration of the formalized versions of five sections of the
algebra curriculum on the Khan Academy educational platform.
Consider the five sections from Khan Academy shown in Figure 1. These sections
take a student who knows the basic operations with numbers — addition,
subtraction, multiplication, division — as well as their basic properties —
commutativity, associativity, identity elements, and so on — and teach them
to solve their first equations, such as $x+5=8$. We aim to solve these
problems in a formal system where each deduction corresponds to a complete
solution step, akin to a line that a student could write on paper. We start
this endeavour with two goals in mind. First, this process might give insight
into what ingredients a learning system would need to do human mathematics
(since general mathematical reasoning is undecidable but human students learn
with alacrity). Second, this system can then be used as a foundation for
helping students that routinely need to go through these problems — by
checking their work, providing hints, and creating worked examples, among
other pedagogical actions.
To start, we need a computational basis for representing problems and
solutions. Modern theorem-proving languages are a natural option: their
logical foundations are powerful enough to express mathematics even at the
research frontier, from definitions to propositions and proofs. However, when
seen as environments for automatically finding solutions, these languages pose
several challenges. First, given a solution in progress, there’s often an
infinite number of next steps that are valid (we dissect the reasons for this
in Section 4). This fact implies that agents cannot learn by choosing actions
from a list of valid options – they need to _generate_ their own actions. This
is typically accomplished by first learning a generative model of likely
solution steps from human-written proofs [31, 26]. However, there is little —
if any — data from students writing their solutions in existing theorem-
proving languages222This would also be the case were we formalizing and
exploring a new mathematical domain, a setting where many researchers hope
automated theorem provers could be useful.. Therefore, we wish to proceed
without this dependency.
To enable learning by selecting actions — rather than generating them — and
receiving a termination signal once a solution has been found, we first need
an environment where the set of valid next moves is finite, while maintaining
generality as much as possible. To that end, we propose Peano, a simple
theorem-proving language based on a dependent type system, where the space of
next proof steps is constructed to be finite. We describe Peano in detail in
Section 4. The Peano environment defines states as the sequence of terms
constructed so far. A solution is complete once it constructs a term that
satisfies the goal of each of the sections (e.g., an equality between $x$ and
a constant when solving an equation). Peano allows us to easily implement all
the axioms we need to solve problems from the sections in Figure 1. The
environment can enumerate all the ways in which axioms can be applied to the
current state, thus giving us a set of actions at each state. Finally, in
Section 5, we implement simple problem generators for each of the domains by
taking exercises from Khan Academy and turning them into templates (e.g.,
assuming the constants could be chosen at random), which gives a distribution
over initial states. Together, these components (states, actions, a
termination criterion, and a distribution over initial states) yield a well-
defined search problem.
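Abstractly, these four components form a standard sparse-reward search problem. A minimal sketch of such an interface in Python (the names and the toy instance are illustrative, not Peano's actual API):

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class SearchProblem:
    """Bundles the four components: initial states, actions, transitions, goal."""
    sample_initial_state: Callable[[], Tuple]   # problem generator
    actions: Callable[[Tuple], List[str]]       # finite set of valid next steps
    step: Callable[[Tuple, str], Tuple]         # apply a step, extending the state
    is_solved: Callable[[Tuple], bool]          # section-specific verifier

# Toy instance: the "state" is the sequence of terms constructed so far.
toy = SearchProblem(
    sample_initial_state=lambda: ("(= (+ x 1) 2)",),
    actions=lambda state: ["sub_both_sides_1"],
    step=lambda state, action: state + (action,),
    is_solved=lambda state: "sub_both_sides_1" in state,
)
```

Any search procedure, from breadth-first search to a learned policy, only needs this interface.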
While the set of actions at a given state is finite, the combinatorial search
space hinders unguided search algorithms from solving all but the most trivial
problems. Thus, to make progress, we need to guide search using heuristics. To
maintain generality, instead of manually encoding domain-specific heuristics,
we would like to _learn_ these heuristics from past searches. To that end, we
start by training an agent using Contrastive Policy Learning (ConPoLe) [25], a
method to learn a policy for guiding search that has been shown to work on
similar symbolic reasoning domains with sparse binary rewards. To sample
problems during training, we first choose one of the sections at random, and
then use that section’s problem generator. During learning, we evaluate the
agent on a set of held-out problems from each section.
ConPoLe can learn to solve all problems in the first two sections, where
solutions might have up to 6 formal steps. However, the third section already
poses major challenges: solutions are both longer (requiring up to 9 steps)
and require combining a wider range of axioms. The number of available actions
also grows the more steps we add to the solution, making solutions
exponentially less likely to be found by exploration. As a result, ConPoLe
fails to make meaningful progress on the last three sections, only succeeding
at solving trivial equations ($x+0=k$, $1*x=k$)333 Given a manually written
solution, we can estimate the probability that a random agent would generate
that solution with a given action space by multiplying the probabilities of
picking the next action to match the solution. For the shortest solution to
$x+1=2$, this probability is approximately $10^{-12}$.. This problem would
grow even worse were we to continue progressing throughout the Khan Academy
curriculum further. How would an agent possibly learn to solve problems from
the third section and beyond?
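The estimate in the footnote above can be sketched numerically: under a uniformly random policy, the probability of reproducing a fixed solution is the product of the reciprocal branching factors at each step (the factors below are made up for illustration):

```python
import math

def random_solution_probability(branching_factors):
    """Chance that a uniformly random agent picks one specific action at every
    step of a solution, as in the footnote's estimate."""
    return math.prod(1.0 / b for b in branching_factors)

# Illustration: 9 steps with a hypothetical 20 valid actions each.
p = random_solution_probability([20] * 9)   # (1/20)^9, vanishingly small
```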
A key insight becomes clear once we analyze how students are taught to solve
equations in these sections. When explaining how to solve $x+1=10$, the
instruction begins by observing that one can subtract a constant from both
sides of an equality, using that to get $(x+1)-1=10-1$. From here, the instructor no
longer refers to the base axioms (e.g., commutativity, associativity) as
operations one needs to apply. Instead, they assume that the student can
“simplify” each of the sides to get $x=9$. Many examples of how to “simplify”
have been seen in the previous sections. The key observation is that the
solution would only have 3 conceptual steps, not 9, _if the agent had a high-
level “simplify” action_ (subtract from both sides, simplify one side, then
simplify the other). With that action in the action space, the probability
that the agent would find solutions to learn from would be non-negligible.
How can the agent _learn_ such an action? Since “simplify” is a generalization
of what is learned from exercises in the first sections, one alternative is to
_induce_ that action by abstracting steps from solutions found so far. In
Section 7, we describe a simple method based on anti-unification [22] which
can create high-level actions by finding patterns in Peano solutions.
Following other theorem-proving languages, we call these high-level actions
_tactics_ — procedures that can perform several operations, including calling
other tactics, to manipulate the proof state. Given the ability to induce
tactics from its own solutions, the agent is indeed able to make steady
progress and eventually solve all problems from the 5 sections.
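To illustrate the idea behind such abstraction, here is a toy first-order anti-unification sketch over terms encoded as nested tuples; the actual method of Section 7 operates on Peano solution steps, but the core computation of a least general generalization is similar:

```python
def anti_unify(t1, t2, table=None):
    """Least general generalization of two terms: disagreeing subterms are
    replaced by shared fresh variables (the same pair maps to the same variable)."""
    if table is None:
        table = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        # Same head and arity: generalize argument-wise.
        return tuple(anti_unify(a, b, table) for a, b in zip(t1, t2))
    # Disagreement: map this pair of subterms to a variable.
    if (t1, t2) not in table:
        table[(t1, t2)] = f"?v{len(table)}"
    return table[(t1, t2)]

# ("+", "x", 1) and ("+", "x", 2) generalize to ("+", "x", "?v0")
```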
Human mathematics poses another puzzle: after someone has mastered a
mathematical domain, the next generation of students can get to the same point
much faster, to then go beyond. Indeed, mathematical discoveries such as
calculus and analysis took hundreds of years to be developed and perfected;
yet high-school and college students today often learn the core of these
topics within semester-long courses. What enables us to transmit mathematical
knowledge so efficiently? We propose that part of the explanation has to do
with exploiting the structure of abstractions that underlie the domain. More
specifically, while it may take many sparse experiences to realize that an
abstraction is useful, mathematicians can then collect those experiences so
that the next generation can see it immediately, almost as if the need for a
particular concept was obvious in the first place. The process of selecting
and sorting examples for the next generation is what we understand as
_developing a curriculum_.
We can induce a curriculum from our generated problems by analyzing the
agent’s solutions and what tactics they use. Since tactics can invoke either
axioms or other tactics, we can arrange them in a dependency graph. These
dependencies imply a natural partial order on problems by considering which
tactics are present in their solutions, combined with the dependencies between
those tactics. Sorting problems in topological order — respecting the
dependencies between tactics in their solutions — then gives a curriculum.
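A sketch of this ordering in Python, using the standard library's topological sorter (the tactic names and dependencies below are illustrative):

```python
from graphlib import TopologicalSorter

def induce_curriculum(problem_tactics, tactic_deps):
    """Sort problems so that the tactics their solutions use respect the
    dependency order between tactics."""
    # tactic_deps maps each tactic to the tactics it depends on.
    order = list(TopologicalSorter(tactic_deps).static_order())
    rank = {t: i for i, t in enumerate(order)}
    # Order problems by the "latest" tactic their solution uses.
    return sorted(problem_tactics,
                  key=lambda p: max((rank[t] for t in problem_tactics[p]),
                                    default=-1))

deps = {"simplify": {"eval", "comm"}}          # simplify depends on eval, comm
problems = {"trivial": set(),                  # solution uses no tactics
            "x+0=k": {"eval"},
            "x+1=10": {"simplify"}}
```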
Given an automatically constructed curriculum, two natural questions arise.
First, how similar is that curriculum to the human-designed ordering present
on Khan Academy? Second, can this curriculum serve an analogous purpose of
speeding up learning of a “second generation” of agents? We explore both
questions in Section 8.
Figure 2: Learning curves of (a) a vanilla ConPoLe agent, (b) an agent with
tactic induction, and (c) a ConPoLe agent with tactic induction trained on the
curriculum induced from the previous agent.
## 4 Peano: Theorem Proving with a Finite Action Space
In this section, we aim to define a theorem proving environment that presents
an agent with a finite action space, where each such action corresponds to
adding a solution step. Our goal is to obtain (1) a simple yet expressive
language to specify mathematical domains (their definitions, axioms, problems
and solutions), and simultaneously (2) an environment where an agent can solve
problems formalized in the language by sequentially choosing among a finite
set of valid solution moves.
Many theorem-proving languages, including Coq, Isabelle/HOL and Lean, would
already fulfill our first requirement. These languages are based on a
relatively small core (dependent type systems in Coq or Lean, or higher-order
logic in Isabelle/HOL) and they have been used to fully formalize extremely
high-level mathematical results (e.g., the independence of the Continuum
Hypothesis and the proof of the Kepler Conjecture). But when seen as
environments for agents to find proofs, these languages pose challenges in how
a search space with a finite branching factor might be defined. This is the
main challenge we address here.
To overcome the unbounded space of valid solution steps, several prior works
have used generative models trained on datasets of human-written formal proofs
as a way to sample sensible actions in a given context. This approach works in
the presence of many existing formal proofs for training, but here we would
like to proceed without this assumption. Thus, to circumvent the need for the
agent to _generate_ valid actions, we want to design an action space so that
solution steps can be simply _selected_ out of a finite set of options.
We start by noticing that simply making a finite and complete action space
would be a vacuous goal if actions do not correspond to complete, valid
solution steps. For instance, we could trivially consider character by
character generation of proofs in any existing theorem-proving language, with
a special action to mark the end of the solution. Once that character is
chosen, we can check whether the solution was valid by submitting it to the
language’s verifier. While this space is finite, an untrained agent has
virtually no chance of generating valid solutions in it, making learning from
sparse rewards implausible. Leveraging the language’s grammar improves this situation
by eliminating lexical and syntactical errors. However, since most of the
constraints are imposed by the language’s semantics, sampling from the grammar
will still produce ill-formed solutions with overwhelmingly high probability.
Instead, we would like to completely rule out semantically invalid solutions,
and define a space where all steps are always complete and valid, though they
might not necessarily move towards the goal.
### 4.1 Foundation: $\lambda$-calculus and dependent types
To define solution steps, we first need a logical foundation to represent
definitions, propositions, axioms and solutions. We find the language of
_dependent $\lambda$-calculus_ [17] to be a compelling choice for our purpose.
Dependent $\lambda$-calculus borrows the three basic constructors of terms
from $\lambda$-calculus: a term can be a variable (e.g., $x$), a function
application (e.g., $(f\ t_{1}\ \cdots\ t_{n})$) or a $\lambda$-abstraction
(e.g., $\lambda x:T.t$). Each term is always associated with a _type_ , which
essentially constrains which terms can be used as arguments in function
applications. Most type systems have separate formation rules for types and
for objects of those types. Dependent type systems blur this distinction,
allowing types themselves to be terms that depend on the _values_ of other
terms. As we explain below, this device makes dependent types a rather
succinct language to express many common patterns of mathematical reasoning in
a small foundation. Moreover, having a type system in our environment will let
us drastically narrow down the set of actions available to an agent by
leveraging type constraints. This contrasts to untyped foundations such as
first-order logic or ZFC Set Theory, where the constraints imposed by the
formal system itself are mostly syntactic, and user-defined predicates encode
most of the semantic constraints 444For instance, classical first-order logic
assumes a single underlying universe of objects. In an interpretation where
objects include both numbers and sets, if we introduce a function symbol
$U(s_{1},s_{2})$ for the union between two sets, it is in principle valid to
consider the union of two numbers..
In a language to represent mathematical reasoning, we wish to represent both
typical mathematical objects (e.g., the number 2) and propositions about those
objects (e.g., that 2 is even). Dependent types can encode both notions (of
_kinds_ of objects and _propositions_ about those objects) in a unified
manner. Two base types are provided in Peano, called type and prop. Regular
types, such as the natural numbers, are themselves terms of type type, while
propositions are encoded as terms of type prop555 In our current
implementation of Peano, the type of both type and prop is type itself. This
does lead to Girard’s paradox, but this is not a practical issue for our
experimental investigation. Lean and Coq circumvent this issue by defining an
infinite sequence of types type i where each type i is of type type (i + 1), a
solution we can also apply.. In a dependent type system, terms can themselves
be (part of) types of other terms. For example, suppose we define a type nat
to represent natural numbers. The proposition that a natural number $n$ might
be even can be encoded as a function with input type nat (the number $n$) and
output type prop (the proposition that $n$ is even)666Note that dependent
types allow for more than simple parametric polymorphism, where types can
depend on other types but not on regular objects. A typical example of the
latter would be the polymorphic type List<T> where each other type T yields a
different list type.. Constructors for proposition types, such as the types
produced by even, define how proofs of those propositions can be created. Note
that there is a distinction between creating the _proposition_ “$n$ is even”
and creating an object of type “$n$ is even” — the former does not imply that
the latter is possible. Indeed, we would like the system to never allow the construction of an
object of type “$3$ is even”. In this language, proofs are represented as
programs: to prove a proposition, we must give a program that constructs an
object of corresponding type when given objects of the hypothesis types. For
example, when given a natural number $n$ and an object of type “$n$ is even”,
we might be able to construct an object of type “$n^{2}$ is even”. This
encoding of propositions as types and proofs as programs is the well-known
Curry-Howard correspondence.
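Concretely, the even example could be declared in Peano syntax (introduced in Section 4.2) along the following lines, a sketch whose constructor names are illustrative:

even : [nat -> prop].
even_z : (even z).
even_ss : [(even 'n) -> (even (succ (succ 'n)))].

Since even_z and even_ss would be the only ways to build evenness proofs, no object of type “$3$ is even” could ever be constructed.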
Typically, a type system is designed to enable the implementation of a type-
checker for a formal language: given a program $P$, the type-checker verifies
whether $P$ satisfies the type constraints. When $P$ encodes a proof, this
corresponds to checking the validity of the proof. Our main use of a type
system is different: we’ll use the rules of the system to define an action
space to _generate_ valid programs. This will also let us easily check the
validity of proofs, however, by sequentially checking whether each step could
have been generated from the action space.
### 4.2 Defining a background library
When using interactive theorem proving environments, we usually have a
distinction between a global _background library_ , which has axioms and other
theorems that we might use and is fixed while proving a particular theorem,
and a local, dynamic _state_ which the solution steps can operate on. Both the
library and the state can be represented as sets of typed objects. In Peano,
we can write:
nat : type.
z : nat.
succ : [nat -> nat].
one : nat = (succ z).
This syntax is inspired by Elf [23]. These statements declare a type called
nat, then an object of type nat called z, a function called succ which
receives one nat and gives another nat, and an object called one, again of
type nat, which is defined as an alias for the application of succ to z. At
this point, our library would have 4 objects. Note that succ is an
uninterpreted function: all we know is its type signature, but we do not need
to provide an implementation.
These same constructs can be used to define properties about nats, such as the
“less than or equal to” (leq) relation:
leq : [nat -> nat -> prop].
z_leq_n : [('n : nat) -> (leq z 'n)].
n_leq_sn : [('n : nat) -> (leq 'n (succ 'n))].
Thus, (leq one z) is a proposition (which doesn’t imply it is true, since we
have not constructed any object of that type). z_leq_n is an axiom: if we
apply it to any natural number 'n, it outputs an object of type (leq z 'n).
Here, we named the parameter (as 'n) since the output type depends on it. A
single quote is used to prefix bound variables, in order to easily distinguish
them from variables coming from the background library. Parameter types can
also introduce variables that are defined at the time of function application
by unification:
leq_trans : [(leq 'a 'b) -> (leq 'b 'c) -> (leq 'a 'c)].
Thus, if we have proofs that $a\leq b$ and that $b\leq c$ for some $a,b$ and
$c$, this axiom of transitivity would let us obtain a proof that $a\leq c$. We
could also have written this axiom in a longer form, by taking three nat
parameters separately and then the two mentioned proof objects. This choice is
not merely syntactical: while the two versions will allow the same actions,
the version where $a$, $b$ and $c$ must be inferred will enable much more
efficient action enumeration (Algorithm 1, described below).
In addition to user-defined types and axioms, Peano provides a built-in
dependent equality type and three axioms to handle equality: eq_refl encoding
reflexivity (from any type $T$ and object $t$ of type $T$, we can construct
evidence that $t=t$), eq_symm encoding symmetry (from two objects $a$ and $b$
and evidence that $a=b$, we can construct evidence that $b=a$) and rewrite
encoding congruence (for any object of a proposition type $P$ and an equality
$a=b$, we can construct an object of type $P[a\mapsto b]$, i.e. the result of
substituting one free occurrence of $a$ by $b$ in $P$).
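In the syntax of Section 4.2, the first two could be written with signatures roughly like the following (a sketch; the actual built-in declarations may differ, and rewrite is built in precisely because the substitution $P[a\mapsto b]$ is not expressible as a simple type):

eq_refl : [('t : type) -> ('a : 't) -> (= 'a 'a)].
eq_symm : [(= 'a 'b) -> (= 'b 'a)].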
### 4.3 Problems, states and goals
Having defined a background library, we would like to pose problems and
construct solutions to them. In Peano, the solution state consists of a set of
objects that have been constructed so far — including proofs of intermediate
propositions. A problem is defined as a set of starting objects — for example,
two nats $a$ and $b$ and a hypothesis (leq a b) — and a goal. Our main
deviation here from other proof assistants is in how goals are specified.
In most interactive proof assistants, the prover typically starts by writing
down a proposition to be proven — a type, which is taken as an open goal.
Then, a series of proof steps will follow, each might change the proof state
or the open goals; the prover is done once there are no more open goals.
Essentially, these systems close an open goal once they construct an object of
the goal type. Once no more open goals remain, the proof assistant has enough
information to construct an object of the original proposition type.
Several common classes of educational exercise do not fit into this paradigm
in a straightforward manner. In particular, many exercises, such as CLT in
Figure 1, involve manipulating an expression until a syntactic condition is
met — for instance, until we obtain an equivalent expression that is
simplified, or an equation with a variable on the left-hand side and a
constant on the right side. These problems require checking assertions about
syntactic forms, rather than the objects they represent (e.g., an expression
is simplified, or an equation is solved). To reason about properties of
expressions in the type system, we would need to reify the _expressions_
themselves, and model the relation between expressions and the objects they
_denote_. This is certainly possible within a dependent type system, and
brings conceptual clarity to what exactly the problems at hand are, but
requires adding another layer to the encoding of problems and solutions in the
formal system.
To circumvent this complexity in the context of simple educational domains, we
allow a more flexible approach of simply having a small program, written
outside of the formal system, that checks if the given solution meets the
goal. This program receives the state — all constructed objects and their
types — and returns a binary reward signal. Under this flexible model, the
usual goal of constructing an object of a certain type — a proof of a
proposition — is trivial to check with a program that searches the state for
an object that has the goal type. But syntactic conditions also become easier
to test. For example, to check if “an equation is solved”, a verifier can
check if the type of any of the constructed objects is an equality between $x$
and a constant. If the prover managed to produce such an equality, then we can
say it managed to “solve the equation”, since the formal system will guarantee
that all steps that have been taken leading to this equality were valid.
Thus, in Peano, the state consists of a set of typed objects, a problem
specifies the starting state and the goal of a problem is to modify the state
until it satisfies a known verifier. In other words, the prover has the
information of whether the goal is to “solve the equation” or to “prove a
particular proposition”. We now look at which actions the prover can take to
change the solution state.
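For instance, an “equation solved” verifier could be sketched as follows, assuming the state maps object names to types encoded as nested tuples (an illustrative representation, not Peano's actual one):

```python
def equation_solved(state):
    """True if any constructed object's type is an equality between x and a
    constant, i.e., the syntactic condition for a solved equation."""
    for typ in state.values():
        if (isinstance(typ, tuple) and len(typ) == 3 and typ[0] == "="
                and typ[1] == "x" and isinstance(typ[2], (int, float))):
            return True
    return False
```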
### 4.4 Finite action space
In modern dependently-typed proof assistants, such as Lean or Coq, the proof
writing process typically enables an infinite set of valid proof steps at any
given point. We first identify the sources of infiniteness, and then describe
how we constrain this space to be finite.
When processing a proof, existing proof assistants typically keep a state
consisting of a set $S$ of objects that have been constructed or are assumed
to exist (starting with the hypotheses) and a set $G$ of goals. The goals
$g_{i}\in G$ are propositions — thus, types — and we can complete our proof
once $S$ contains objects of each of the types in $G$. A proof step can
therefore have effects in either $S$ or $G$. Changes to $S$ are _forward
reasoning steps_ : they typically construct a new object, which is added to
$S$. The two essential means of constructing objects come from
$\lambda$-calculus: function application and lambda abstraction.
Proof steps might also change $G$: these are _backward reasoning steps_. These
proof steps also essentially consist of applying an existing function $f_{b}$,
but this time _in reverse_ to a goal $g_{i}$. In this case, $f_{b}$ must
output the goal type, and its argument types then replace $g_{i}$ in $G$ as
new sub-goals to be closed. If we eventually satisfy the sub-goals by
constructing objects of the appropriate types, an object of type $g_{i}$ — to
satisfy the original goal — could be produced by applying $f_{b}$ in the
forward direction.
Both the sets of forward and backward steps might be infinite. The number of
valid forward steps — either function application or lambda abstraction — is
unbounded because a single step can construct arbitrarily deep objects. For
example, if we have the natural number $0$ and the successor function
$S:\mathbb{N}\rightarrow\mathbb{N}$, a single forward step might construct any
natural number777In Lean, one could write let n : nat := (s (s (... z)))..
Similarly, a lambda abstraction can have an arbitrarily complex body. We can
easily constrain function application steps without sacrificing generality by
forcing each step to apply a single function to a combination of already
existing arguments. Deep objects can still be incrementally constructed in
multiple steps. A similar — but more subtle — idea could be applied to allow
the construction of lambda abstractions with a finite set of actions at each
step. Our current version of Peano does not include the additional actions
needed to construct lambda abstractions inline. This limits our current action
space to producing only proofs that do not require auxiliary functions (or lemmas). When
using Peano, one can still prove theorems that require lemmas by stating and
proving the lemmas separately (i.e., not inline in the main proof). This makes
some proofs unnatural to write888For example, in a proof by induction, each of
the branches is a lemma that currently has to be outlined., but does not
impact the educational mathematical domains we study in the present work.
Our relaxation of goals into general verifiers (that determine if a state is
“done” beyond the existence of objects of certain types) complicates the
specification of backward steps. However, though backward steps add
flexibility to how one can construct a proof, they do not fundamentally enable
new proofs — any proof that contains backward steps can be mechanically
translated into a proof that only uses forward steps. Furthermore, human
students typically start using backward steps at a later level of education
than we consider in our algebra domains. Thus, we also limit ourselves to
forward steps. Together with the above constraint on forward steps, this makes
the action space finite. We note that a strategy similar to how we constrained
forward steps could also be used to arrive at a finite set of backward steps
while still retaining generality999Specifically, we could allow a backward
step when all the produced sub-goals can be expressed with already existing
objects. Multiple steps might then be needed to construct the necessary
objects to apply a backward step. This would overcome the fact that, in proof
assistants, backward steps typically allow the immediate, arbitrary
instantiation of existential quantifiers as long as the result type-checks,
thus allowing infinite options..
In summary, the action space in Peano contains all valid forward steps that
apply a function to a combination of existing objects, thereby creating a new
object. The type system determines which objects are allowed as arguments, and
this constraint lets us efficiently enumerate all of the finitely many
available actions. Algorithm 1 describes this procedure more concretely. The
algorithm is essentially a backtracking search for all ways to fill in
arguments of a chosen function (e.g., an axiom) with objects coming from a
given collection $S$. Because of dependent types, choosing a value for an
argument might change the expected type of later arguments (e.g., a function
could first take a number $n$ and then a proof that $n$ is even; filling in a
concrete value for $n$ thus determines a concrete type for the other
argument). To perform one step in a solution, the agent first chooses a
function to apply. Then, we invoke Algorithm 1 to enumerate the results that
can be obtained with the chosen function. Finally, if the result set was not
empty, the agent chooses one of those results to add to its current solution.
Algorithm 1 Action enumeration in Peano
1: procedure EnumerateActions($S,f$) $\triangleright$ Computes all results of applying function $f$ with parameters coming from the set $S$.
2:  return FillNextArgument$(S,f,\langle\rangle,f.\mathrm{param\_types}())$
3: end procedure
4: procedure FillNextArgument($S,f,a,param\_types$)
5:  if $|a|=f$.number_of_arguments() then
6:   return $\{f(a_{1},\cdots,a_{n})\}$
7:  end if
8:  $next\_type\leftarrow param\_types[|a|]$
9:  $results\leftarrow\varnothing$
10:  for all $obj\in S$ do
11:   $mgu\leftarrow unify(obj.type,next\_type)$
12:   if $mgu\neq null$ then $\triangleright$ Check if $obj$ can be used as the next argument.
13:    $a^{\prime}\leftarrow\langle a,obj\rangle$
14:    $new\_param\_types\leftarrow param\_types$
15:    for all $(var,assignment)\in mgu$ do $\triangleright$ Make substitutions in later parameter types.
16:     $new\_param\_types\leftarrow\sigma(new\_param\_types,var\mapsto assignment)$
17:    end for
18:    $results\leftarrow results\cup\text{FillNextArgument}(S,f,a^{\prime},new\_param\_types)$
19:   end if
20:  end for
21:  return $results$
22: end procedure
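For concreteness, the following is a Python transliteration of Algorithm 1 with a toy unifier, where types are encoded as nested tuples and variables are strings prefixed with a single quote; this is a sketch rather than Peano's implementation:

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("'")

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Toy most-general unifier (no occurs-check); returns a dict or None."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def substitute(t, subst):
    t = walk(t, subst)
    if isinstance(t, tuple):
        return tuple(substitute(x, subst) for x in t)
    return t

def enumerate_actions(objects, param_types):
    """All ways to fill a function's parameter types with objects from
    `objects` (a list of (name, type) pairs), as in FillNextArgument."""
    def fill(args, remaining):
        if not remaining:
            yield tuple(args)
            return
        for name, typ in objects:
            mgu = unify(typ, remaining[0])
            if mgu is not None:  # this object fits the next argument
                # Propagate the unifier into the later parameter types.
                rest = [substitute(t, mgu) for t in remaining[1:]]
                yield from fill(args + [name], rest)
    yield from fill([], list(param_types))
```

On the leq_trans example from Section 4.2, picking a proof of (leq z one) for the first parameter forces the second parameter's type to become (leq one 'c), which only a proof of, say, (leq one two) can fill.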
## 5 Case study: solving Khan Academy problems in Peano
The main design goal of Peano is to provide a flexible representation for
educational mathematical domains, both to let us understand how learning in
these domains can take place and to facilitate applications in computer-
assisted education. We now describe our main case study in this direction:
the formalization of the algebra sections of Khan Academy illustrated in
Figure 1. These sections assume a student who starts with the knowledge of how
to evaluate the basic operations with known real numbers, as well as various
properties of these operations (e.g., commutativity and associativity). From
there, they teach this student to solve simple linear equations with one
unknown.
To describe our formulation of these sections in Peano, we must specify the
basic definitions — types and axioms — how we generate problems and finally
how we specify goals.
Figure 3: Axioms in Peano used to formalize the sections of Khan Academy that
we study.
#### Definitions and axioms
Figure 3 shows the full Peano representation of the axioms we use. All axioms
here output equalities relating terms in their inputs. Most parameters to the
axioms are real numbers, but in most cases their values are required to match
a particular syntactic form. For instance, the parameter to +_comm, the
commutativity of addition, is a number of the form (+ a b) for some $a$ and
$b$. When applied to an argument of the necessary form, the output type of
+_comm will be the equality type (= (+ a b) (+ b a)). Note that we can equally
formalize this (and every other) axiom without pattern matching on syntactic
forms, but rather by taking _any_ numbers $a$ and $b$ and returning an
equality (= (+ a b) (+ b a)). While this latter form would more faithfully
represent the property of commutativity of addition, it would also generate
more actions during proof search, since it applies to any real numbers
regardless of whether we’re already considering their sum. Thus, the form in
which we write these axioms reflects both the properties we need and the fact that we
tend to apply these properties once we already have an object that suggests
they will be useful. In practice, this narrows down the action space without
ruling out natural solutions.
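As a concrete illustration of the pattern-matched form, consider a minimal Python sketch (the tuple encoding of terms and the function name are our assumptions, not Peano's syntax):

```python
# Terms as nested tuples, e.g. ('+', 'a', 'b'); an illustrative encoding only.
def plus_comm(term):
    """Pattern-matched commutativity: only applies to terms of the form (+ a b)."""
    if isinstance(term, tuple) and len(term) == 3 and term[0] == '+':
        _, a, b = term
        return ('=', ('+', a, b), ('+', b, a))
    return None  # the axiom does not fire on other terms

print(plus_comm(('+', 1, 2)))  # ('=', ('+', 1, 2), ('+', 2, 1))
print(plus_comm(('*', 1, 2)))  # None: no sum to commute, so no action is generated
```

The fully general form would instead accept any pair of reals, producing an action for every pair of numbers already in scope; the guard above is what narrows the action space.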
#### Evaluating expressions
The four binary operations on reals are declared as uninterpreted functions.
To focus on algebraic reasoning, we add an additional axiom eval, implemented
outside of the formal system, which can be used to “execute” these operations
when their arguments are known. From the agent’s perspective, eval is an axiom
which takes an object of type real as its only parameter and, if that real is
of the form (op a b), where op $\in\\{+,-,*,/\\}$ and both $a$ and $b$ are
constants, eval returns an equality proof between (op a b) and the result of
evaluating the expression (for example: (eval (+ 1 2)) would have output type
(= (+ 1 2) 3)). If we were to fully formalize the content on Khan Academy, it
would be more faithful to not have an atomic evaluation procedure, but rather
break it down into more basic steps, corresponding to how students are taught
arithmetic. For our current case study, however, we assume evaluation is given
as atomic.
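The behavior described for eval can be sketched as follows (again with a hypothetical tuple encoding of terms; the actual eval is implemented outside the formal system):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def eval_axiom(term):
    """Return an equality between (op a b) and its value when a and b are constants."""
    if isinstance(term, tuple) and len(term) == 3 and term[0] in OPS:
        op, a, b = term
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            return ('=', term, OPS[op](a, b))
    return None  # eval only applies when both arguments are known constants

print(eval_axiom(('+', 1, 2)))    # ('=', ('+', 1, 2), 3)
print(eval_axiom(('+', 'x', 2)))  # None: 'x' is not a constant
```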
#### Generating problems
For each of the sections in Figure 1, we create random problem generators by
creating syntactic templates with placeholders that are then replaced with
random numbers. We list these templates in Table 1. Some of these templates
come directly from exercises from Khan Academy — for example, the “one-step
equation” $x+10=27$ turns into the template (= (+ x n1) n2), where $n1$ and
$n2$ can be replaced by any constant. Since the pool of exercises on Khan
Academy is fixed and small (4 to 7 exercises in each practice section), we add
a few templates to increase the diversity of problems generated and ensure
that they employ all axioms. We generate integer constants by rounding samples
from a Gaussian $\mathcal{N}(0,25)$, and ensure that we do not sample
divisions by zero or absurd equations such as $0x=1$. For problems that
involve solving an equation, we declare a real number named x and an object
named equation whose type is the equality type corresponding to the equation
to be solved. For the first two sections, which involve simplifying an
expression in some way, we declare a real number named answer and encode the
problem by assuming an equality between answer and the expression to be
simplified.
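A template-based generator along these lines can be sketched as follows. This is a simplified sketch: we read $\mathcal{N}(0,25)$ as variance 25 (standard deviation 5), which is an assumption, and we re-sample all zero constants rather than only forbidding the specific degenerate cases:

```python
import random

# Two illustrative templates (cf. Table 1); 'n1' and 'n2' are placeholders.
TEMPLATES = [('=', ('+', 'x', 'n1'), 'n2'),   # one-step equation, e.g. x + 10 = 27
             ('=', ('-', 'x', 'n1'), 'n2')]

def sample_constant(rng):
    # Round a Gaussian sample; N(0, 25) is read here as variance 25, i.e. std 5.
    c = round(rng.gauss(0, 5))
    while c == 0:  # re-sample to avoid degenerate constants (e.g. division by 0)
        c = round(rng.gauss(0, 5))
    return c

def instantiate(template, rng):
    """Replace every placeholder in a syntactic template with a random constant."""
    if isinstance(template, tuple):
        return tuple(instantiate(part, rng) for part in template)
    if template in ('n1', 'n2'):
        return sample_constant(rng)
    return template  # operators and the unknown 'x' stay as-is

rng = random.Random(0)
problem = instantiate(rng.choice(TEMPLATES), rng)
```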
Section | Syntactic forms used in problem generator
---|---
SEE | (= x (+ 1 2)), (= x (* (+ 1 2) 3)),
| (= x (+ 1 (* 2 3))), (= x (/ (* 1 2) (- 3 4)))
CLT | (= answer (+ (- x 1) 2)), (= answer (- (+ x 1) 2))
| (= answer (* (/ x 1) 2)), (= answer (/ (* x 1) 2))
OAE | (= (+ x 1) 2), (= (- x 1) 2)
OME | (= (* x 1) 2), (= (* 2 x) 3)
| (= (/ x 2) 4)
TSE | (= (+ (* x 2) 1) 3), (= (- (* x 2) 1) 3),
| (= (+ (/ x 2) 1) 3), (= (- (/ x 2) 1) 3)
Table 1: Syntactic forms used in our problem generators for each of the Khan
Academy sections. Each problem is generated by picking one syntactic form
uniformly at random and then randomizing constants. Furthermore, in
Substitution and Evaluating Expressions (SEE), all operations are resampled
from $\\{+,-,*,/\\}$.
#### Goals
Having problems and axioms, the last step in formalizing the domains we study
is to define what it means for a solution to be complete. For the first two
sections, where the goal is to simplify the expression, our solution checker
searches the state for an equality between answer and an expression in a
simplified form (we enumerate a few forms that cover the exercises we
formalize: c (a constant), x, x + c with $c\neq 0$, and variants where $x$ is
multiplied by a constant that is neither $0$ nor $1$). When the goal is to
solve for $x$, our checker searches for an equality of the form $x=c$ for some
constant $c$. Once the solution state satisfies the check corresponding to the
problem, the solution is taken as complete.
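For the equation-solving sections, the check amounts to scanning the state for a literal answer. A minimal sketch, assuming the state is a list of tuple-encoded objects (our representation, not Peano's):

```python
def is_solved(state, unknown='x'):
    """True iff the state contains an equality of the form x = c for a constant c."""
    for obj in state:
        if (isinstance(obj, tuple) and len(obj) == 3 and obj[0] == '='
                and obj[1] == unknown and isinstance(obj[2], (int, float))):
            return True
    return False

assert not is_solved([('=', ('+', 'x', 1), 3)])            # not yet solved
assert is_solved([('=', ('+', 'x', 1), 3), ('=', 'x', 2)])  # x = 2 found
```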
Figure 4: Example of state and action sequences solving a problem from the
Combining Like Terms section. Here, the goal, represented in the state by a
short line, is to find a simplified form for the variable a. At each state,
the agent needs to either select one axiom from the library or choose one of the
results from that axiom to be added to the state. The solution proceeds by
applying the appropriate associativity rule, then using that to rewrite a,
then evaluating the resulting operation with constants, and doing a final
rewrite to arrive at a solution state.
#### Example
Figure 4 shows an example of solving a problem generated in the Combining Like
Terms section. The solution uses 4 axiom applications: first the applicable
case of associativity, then rewrite (the equality axiom corresponding to
congruence), followed by evaluation and a final rewrite. At that point the
goal is satisfied, since the state contains an equality between the “answer”
variable and a fully simplified expression. The probability that this solution
would be generated by an agent picking a sequence of actions at random is
$9.64\times 10^{-7}$, which we compute by multiplying the inverse of the
number of available choices at each state. Thus, even simple problems have a
non-trivial search space when a modestly-sized library is available.
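The probability computation is just a product of inverse branching factors along the solution path. A sketch with hypothetical branching factors (the actual per-state counts are not listed in the text):

```python
from math import prod

def random_solution_probability(branching_factors):
    """Chance that uniformly random choices reproduce one fixed solution path."""
    return prod(1.0 / b for b in branching_factors)

assert abs(random_solution_probability([2, 4]) - 0.125) < 1e-12

# Hypothetical branching factors for a 4-step solution:
p = random_solution_probability([12, 40, 25, 30])
```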
## 6 Learning to solve mathematical reasoning problems
Given the ability to sample problems, enumerate and apply actions, and to
detect solution states, the problems from Khan Academy yield a family of well-
specified search problems.
Any search algorithm can be applied to attempt to find a solution state given
a problem. But general, domain-agnostic search methods are constrained to
finding extremely short solutions because of the combinatorial explosion of the
search space, as illustrated by the example in Figure 4.
This explosion grows worse the deeper we explore, since each action adds
objects to the state, which in turn enable a growing number of new objects to
be constructed by applying axioms. Thus, to be able to find solutions to
harder problems, we need to leverage heuristics to guide search.
How can we encode problem-solving heuristics? One option is to manually design
some of these search strategies, such as preferences for certain actions given
some features of the state. But this approach involves significant engineering
that must be repeated for every new domain we wish to formalize. Moreover, we
still risk missing cases and developing incomplete strategies: even in simple-
looking domains such as equation solving, expert-written heuristics for
finding solutions in this step-by-step fashion risk reaching failure cases
when tested more widely, as demonstrated in [25].
Instead of relying on domain-specific design, we aim to _learn_ search
heuristics, bootstrapping from easy to hard problems by learning from past
searches. One form of encoding search heuristics is through a _policy_ : a
function that takes states and gives a probability distribution over actions.
A search algorithm can then leverage a policy as a heuristic to prune unlikely
actions during search. To learn a good policy, Expert Iteration (ExIt; [1])
provides a simple general paradigm: we can alternate between running search on
batches of problems using our current policy and training the policy by
imitating decisions made during previous successful searches. When applied to
deterministic problems with a binary reward signal, it is typical to ignore
unsuccessful attempts and train only on data from successful searches [26,
25].
One instance of this paradigm that we can directly apply to our current setup
is Contrastive Policy Learning (ConPoLe; [25]). ConPoLe was introduced as a
method for policy learning to solve symbolic reasoning problems from the
Common Core environments, which include equation solving and fraction
simplification. The Common Core environments include domain-specific
representations for states and actions which make them a less general setting
than Peano. For example, they embed the assumption that, when solving a single
equation, one can forget about all steps except for the last. This assumption
severely reduces the state space, but does not generalize (e.g., even to
systems of equations, where we often change which equation we’re working on).
Nevertheless, these environments also present the challenge of learning from
unstructured text representations and sparse binary rewards, making ConPoLe a
natural choice to try on the Khan Academy problems.
The main idea of ConPoLe is to apply a search algorithm that can use a policy
$\pi(a|s)$ to prioritize search nodes — for instance, beam search — to a
sampled batch of problems. Then, the solutions found (typically few and short
at the beginning) are used to train $\pi$ by a reduction to contrastive
learning: ConPoLe learns a representation $\phi(s)$ for states that attempts
to align each state with a successor leading to a solution, using all other
successor states available during search as negative examples. Thus, ConPoLe
is compatible with our setting, where the state and action spaces are
unbounded, the set of available actions depends on the state, the effects of
actions are deterministic and only a sparse, binary reward signal is given
once a solution state is reached.
To apply ConPoLe to the Peano formalization of Khan Academy problems, we
simply need to represent states and actions as strings, and define a
differentiable neural architecture for the embedding function $\phi$. To
represent $\phi$, we use a character-level, two-layer bidirectional GRU
network [4]. We encode states by first formatting the initial objects given in
the problem (e.g., the given equation), then formatting the objects
constructed by each action taken so far. We truncate the state from the
beginning, bounding the number of characters the state embedding function might
receive at 200. For actions, we found that enumerating and running
all possible applications of axioms through $\phi$ was slow, as deeper
states can have hundreds of such actions available. To mitigate this problem,
we decomposed the solution generation process into pairs of actions: first,
the agent chooses which axiom to apply; then, the Peano runtime enumerates the
valid applications of that axiom in the current state, and the agent chooses
between one of the achievable results to add to the current state.
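This two-stage decomposition can be sketched as follows (the toy axiom and all naming are ours, not Peano's API):

```python
def two_stage_actions(state, axioms):
    """Decompose action choice: first pick an axiom, then pick one of its results."""
    for name, apply_fn in axioms.items():   # stage 1: which axiom to apply
        for result in apply_fn(state):      # stage 2: which result to add
            yield (name, result)

# Toy axiom that "doubles" every number in the state.
axioms = {'double': lambda state: [('=', x, 2 * x)
                                   for x in state if isinstance(x, int)]}
actions = list(two_stage_actions([3, 'x'], axioms))  # [('double', ('=', 3, 6))]
```

Only the chosen axiom's applications then need to be enumerated and scored, rather than embedding every possible axiom application through $\phi$.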
### 6.1 Result: learning to solve by pure policy learning
Figure 2 (red curve, “Bare agent”) shows the results we obtained when applying
ConPoLe to the Khan Academy problems in Peano. Here, the agent is being
trained by sampling and attempting problems in a random order, following the
setup from [25]; after every batch of $500$ problems, we train the policy
using ConPoLe on examples generated from successful training episodes, and
evaluate it on held-out problems from each of the Khan Academy sections.
ConPoLe makes steady progress in learning the first two sections (Substitution
and Evaluating Expressions and Combining Like Terms), eventually learning to
solve all problems in them. However, it stagnates in the equation solving
sections. We note two difficulties that are unique to our current setup when
compared to the Mathematics Common Core environments, in which ConPoLe
succeeds in finding solutions to similar equations [25].
First, the action space in Peano is significantly larger, a problem that
compounds as solutions get longer, since more constructions become possible
given all the objects built so far. To give a sense of scale, the likelihood of
a random policy solving a “one-step addition equation” from Khan Academy in
the Common Core equations environment is $2\times 10^{-5}$. In contrast, in
our Peano formalization, this likelihood drops to $10^{-12}$. This difference
alone severely affects the probability that any given policy will succeed when
solving new problems, since those require exploration in a large,
combinatorial space.
Second, we observe that _representation learning_, to which ConPoLe reduces
policy learning, has an additional challenge in Peano. Since states accumulate
the results of previous steps, it’s much rarer to arrive at the same state
twice. In contrast, in the Common Core environments, since an action operates
directly on the equation at hand, actions might directly bring us to equations
that we have seen before. For example, once we simplify the right-hand side in
$x+1=1+2$ and arrive at $x+1=3$, it might be possible that our existing policy
recognizes this new equation and can lead to a solution. In Peano, this
recognition that we can follow a previously discovered strategy is more
difficult, since the new state will also include the equality $1+2=3$ as well
as the previous equation. Thus, the representation of the new state might not
immediately tell us that we’re essentially at a problem we have seen before.
In short, some domain-specific state abstraction is present in the Common Core
environments that instead has to be learned in Peano, making policy learning
significantly more challenging.
## 7 Tactic Induction
Given any starting set of axioms, the number of deductions that can be made
from them typically grows exponentially as we enumerate longer sequences of
steps. Using higher-level axioms to attempt to make solutions shorter only
delays this challenge, but does not eliminate it. Long chains of reasoning are
also unwieldy for humans, but the deductions that interest us can be typically
factored into higher-level steps, so that even solutions to hard problems are
made succinct in terms of lemmas or procedures at an adequate level of
abstraction. To allow users to encode these abstract actions, interactive
theorem-proving languages often provide a language for _tactics_ : programs
that manipulate the proof state or goals that can encode higher-level proof
steps for a given domain. A tactic can take parameters and invoke axioms,
theorems or other tactics, ultimately generating a potentially long sequence
of actions in the underlying formal system.
To operationalize the idea that agents should learn high-level actions from
experience, we propose a simple tactic language for Peano. A Peano tactic $t$
is a sequence of $n=|t|$ actions $t_{a}^{(i)}$, along with action arguments
$t_{p}^{(i)}$. Each action $t_{a}^{(i)}$ is either an axiom or another tactic
(tactics do not yet support recursion). Each $t_{p}^{(i)}$ is a list of
arguments passed to action $t_{a}^{(i)}$. In turn, each one of
$t_{p}^{(i1)},\cdots,t_{p}^{(ik)}$ might be either a concrete value or a
symbol that references one of the formal parameters of $t$ itself.
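The tactic structure described above can be captured with a small data type (a sketch; the field and class names are ours):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str   # an axiom name or the name of another tactic
    args: list    # each entry: a concrete value or a formal-parameter symbol

@dataclass
class Tactic:
    name: str
    params: list                               # formal parameters, e.g. ['?p0', '?p1']
    steps: list = field(default_factory=list)  # the sequence of Step records

# A two-step tactic: apply the +0 identity axiom, then rewrite using its result.
t = Tactic('tactic_simpl', ['?p0', '?p1'],
           [Step('+0_id', ['?p0']), Step('rewrite', ['?p0', '?p1'])])
```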
A tactic $t$ can be executed given a set of objects $S$, producing a set of
_traces_. Each trace corresponds to one valid combination of arguments for
$t$. To compute the set of traces that $t$ can generate given $S$, we execute
each of the actions in $t$ in sequence and lazily decide which arguments for
$t$ are possible given the objects that each action generates. More precisely,
for each action $a_{i}$, we first compute the set of valid arguments and
results that it can generate — this will either call Algorithm 1 for invoking
axioms or call the tactic execution procedure recursively when $t_{a}^{(i)}$
is itself another tactic. Then, for each choice of list of arguments that we
can invoke $a_{i}$ with, we unify that list with the existing assignments for
parameters of $t$ in the current trace. If we find inconsistencies, we give up
on the current trace. If there are multiple possible assignments, then we
_branch_ the trace and continue execution in each branch. After all actions,
each of the resulting traces will correspond to one way of invoking $t$,
potentially producing multiple possible results.
Figure 5 shows an example of two solutions in Peano invoking a tactic named
tactic004. Here, this tactic represents the simplifying action of applying the
axiom that $0$ is the identity of addition, and then using the resulting
equality, $\texttt{p0}+0=\texttt{p0}$ for some parameter p0, to rewrite
$\texttt{p0}+0$ to p0 in some other object p1. Just like axiomatic actions, a
tactic might generate one, multiple, or no results given the current state:
when the state consists of the inequality $e^{x+0}<y+0$, tactic004 generates
two results; and none in $x+9=4$. When present in the action space, a tactic
can be used by the agent like any other action, as we described in Section 4:
the agent might first decide to invoke the tactic, and then decide which of
the possible results to add to its solution.
One important difference between executing a tactic compared to directly
executing its underlying sequence of actions is that a tactic only exposes a
single result: the result of its last action. This scoping boundary helps
constrain future actions by limiting how many arguments are available for
invoking them in the current solution.
Figure 5: Example of two Peano tactics: tactic003 simplifies an expression by
invoking the axiom that asserts that $0$ is the identity element of addition,
then using that equality to rewrite the original expression; tactic007 is
being induced by generalizing two segments of solutions to problems in
Combining Like Terms.
### 7.1 Inducing tactics from solutions
To learn tactics, we take an inductive approach: we aim to extract useful
tactics by generalizing segments of previous solutions. Our goal is to find
tactics that would have simplified several of those solutions had those
tactics been available to the agent.
Suppose we have a set $\mathcal{S}$ of solutions. In Peano, each solution can
be seen as a straight-line program (more generally, the dependency structure
between the actions, which is implied by their parameters, forms a directed
acyclic graph, but for simplicity we assume actions are fully ordered) that
executes actions until the domain verifier determines that the
solution satisfies the goal. To find candidate tactics, we first extract each
contiguous subsequence $s_{i:j}$ of length at least 2 from each
$s\in\mathcal{S}$. Then, we take all pairs of same-length subsequences and
compute the tactic that is the _least general generalization_ of each pair. In
our restricted tactic language, this is a simple case of anti-unification: if
the two sequences call different actions, then there is no generalization
available in our language (in a more powerful tactic language, for instance
with loops and conditionals, we could aim to encode more complex patterns into
tactics; these extensions are interesting directions for future work, where
several techniques for inductive program synthesis could be applied); if they
call the same actions, then there is always a tactic that generalizes those
sequences, and we can compute the most specific argument structure of their
generalization by introducing new parameters only if strictly necessary (we
can see all sequences that invoke the same actions as forming a bounded
lattice, and this anti-unification procedure as computing the _meet_ of two
sequences, which might introduce new tactic parameters; $\top$ is a sequence
where all arguments in the tactic’s body come from parameters, a structure
analogous to the subsumption lattice of first-order predicate calculus [24]).
This yields one tactic that, had it been
available, could have simplified at least the two input sequences of actions.
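Under stated assumptions (sequences represented as lists of (action, argument-list) pairs), the anti-unification step can be sketched as:

```python
def lgg(seq_a, seq_b):
    """Least general generalization of two same-length action sequences.
    Returns None when the sequences call different actions."""
    if len(seq_a) != len(seq_b):
        return None
    params = {}  # (arg_in_a, arg_in_b) -> shared parameter symbol
    out = []
    for (act_a, args_a), (act_b, args_b) in zip(seq_a, seq_b):
        if act_a != act_b or len(args_a) != len(args_b):
            return None                 # no generalization in this language
        new_args = []
        for a, b in zip(args_a, args_b):
            if a == b:
                new_args.append(a)      # identical arguments stay concrete
            else:
                key = (a, b)
                if key not in params:   # introduce a new parameter only
                    params[key] = f'?p{len(params)}'  # when strictly necessary
                new_args.append(params[key])
        out.append((act_a, new_args))
    return out

# Two solution segments that differ only in which object they simplify:
t = lgg([('+0_id', ['x']), ('rewrite', ['x', 'eq1'])],
        [('+0_id', ['y']), ('rewrite', ['y', 'eq1'])])
# t == [('+0_id', ['?p0']), ('rewrite', ['?p0', 'eq1'])]
```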
The procedure above gives us a set of candidate tactics $\mathcal{T}$ that
generalize solution spans from $\mathcal{S}$. To determine which of these
candidates are worth making into new actions, we first compute the number of
segments in $\mathcal{S}$ that $t_{i}\in\mathcal{T}$ generalizes, denoted by
$m(t_{i},\mathcal{S})$. Multiplying that by $|t_{i}|-1$ gives us how many
actions in $\mathcal{S}$ $t_{i}$ would have saved if we replaced $|t_{i}|$
actions by a single invocation of $t_{i}$. In other words,
$m(t_{i},\mathcal{S})(|t_{i}|-1)$ is a measure of how much $t_{i}$ can
_compress_ the solutions in $\mathcal{S}$. We take that quantity and divide it
by the number of parameters of $t_{i}$ to compute the _utility_ of $t_{i}$:
$u(t_{i})=\frac{m(t_{i},\mathcal{S})(|t_{i}|-1)}{p(t_{i})}\enspace.$
Essentially, we seek tactics that are the least general explanation of the
most solution steps in $\mathcal{S}$.
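The utility computation itself is a one-liner (assuming $p(t_i)\geq 1$; tactics with no parameters would need a separate convention):

```python
def utility(num_segments, tactic_length, num_params):
    """u(t) = m(t, S) * (|t| - 1) / p(t): actions saved per tactic parameter."""
    return num_segments * (tactic_length - 1) / num_params

# A length-3 tactic with 2 parameters that generalizes 6 solution segments
# saves 2 actions per use: utility 6 * 2 / 2 = 6.0.
assert utility(6, 3, 2) == 6.0
```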
During learning, tactic induction can be introduced by alternating between the
standard learning loop of ConPoLe — where it attempts to solve problems and
improve its policy — with learning tactics from existing solutions. At each
round, we add discovered tactics with a utility above a threshold $U_{min}$
to the agent’s action space. Furthermore, before policy training, we rewrite
all of the agent’s solutions found so far using the new tactics, everywhere
they apply. In this way, the policy is always trained on solutions that are
irreducible given the tactics induced up to that moment.
### 7.2 Result: learning with tactic induction
Figure 2 shows the success rate across iterations of the ConPoLe agent trained
with a tactic induction step after each batch of problems. While ConPoLe alone
fails to make progress in the last three sections, only managing to solve
degenerate equations (e.g., $x+0=10$ or $1\times x+0=5$), tactic induction
allows the agent to make progress and eventually solve all problems.
Figure 6: Solution found by the agent to a “One-step Addition Equation”. In
terms of the tactics at this moment of training, the solution can be expressed
with 3 steps. These steps have a probability of $1.28\times 10^{-6}$ to be
generated by an agent taking random actions when the action space contains the
axioms and the 18 tactics learned so far. In contrast, given just the axioms,
the probability of finding the equivalent solution at random is $10^{-12}$.
In addition to enabling the agent to solve the harder problems, we find that
the hierarchy of tactics induced during training reflects how the Khan Academy
sections conceptually build on each other. Figure 6 shows the solution found
to an equation from the One-Step Addition and Subtraction Equations section, the
first section that ConPoLe alone does not manage to solve. Tactics constructed
from previous solutions allow this problem to be solved in 3 steps:
first, applying the axiom introduced in this section of adding a constant to
both sides, and then applying two induced tactics that each simplify one of
the sides of the equation. The left-hand side has an expression involving $x$,
and thus requires “combining like terms”; the right-hand side is a simple
expression with constants which can be fully evaluated.
After enough examples of equations like this are seen, the agent induces a
tactic that solves them in one step. In terms of that tactic, together with
others learned in the “One-step Multiplication and Division Equation”, the
“Two-step Equations” can indeed be solved with two steps, as the section name
suggests. However, that is only true when steps have the appropriate
conceptual level. Tactic induction enables the agent to construct those steps
and successfully exploit how exercises build on previously developed concepts.
## 8 Curriculum Construction from Abstractions
Our results suggest that the ability to construct abstractions is key to the
acquisition of mathematical knowledge: it’s only possible to reason about
advanced results once we have developed the necessary abstractions. But when
learning mathematics, humans do not start from scratch: we follow carefully
constructed sequences of pedagogical experiences (i.e., curricula) developed
by previous generations. Following a well-designed curriculum has an unusual
effectiveness in mathematics: even discoveries that took many generations of
bright individuals to emerge and develop, such as calculus or complex
analysis, later become part of traditional high-school or college-level
classes. What makes a sensible curriculum for a learner, and why can this
structure speed up learning so effectively?
We hypothesize that the abstractions underlying the domains of interest can
shed light on these questions. Abstractions — such as our tactics — can be
built by composition of simpler abstractions. This structure suggests a
partial order $\prec_{\mathcal{T}}$ on tactics $t_{i}\in\mathcal{T}$ where
$t_{1}\prec t_{2}$ if $t_{2}$ depends on $t_{1}$. If abstractions are
_induced_ , i.e. extracted from concrete experiences (such as solutions), then
a teacher that wants to help a learner induce $t_{2}$ would naturally place
the learning experiences leading to $t_{1}$ first. Moreover, clustering the
learning experiences that suggest an abstraction is necessary might catalyze
that process. Two natural questions related to this induced curriculum arise.
How well does it agree with the curriculum designed by human educators? And
how effective would that ordering be in accelerating the learning of a second
agent?
Our setup allows us to empirically explore these questions in the case study
of formalized Khan Academy sections. Let $\mathcal{P}$ be the set of training
problems seen by an agent, and suppose $\mathcal{T}$ is the set of all of the
agent’s induced tactics at the end of training. From $\mathcal{T}$, we can
compute each tactic’s _dependency set_ $DS(t_{i})$ by taking all tactics on
which $t_{i}$ depends directly or indirectly; this corresponds to all tactics
that precede $t_{i}$ in the transitive closure of $\prec_{\mathcal{T}}$. This
partial order on tactics lets us infer a corresponding partial order
$\prec_{\mathcal{P}}$ on problems by comparing the tactics that their
solutions use. More precisely, suppose $(p_{a},T_{a})$ and $(p_{b},T_{b})$ are
two problem/tactic set pairs, where $T_{a}$ (resp. $T_{b}$) is the set of
tactics invoked in the solution found for $p_{a}$ (resp. $p_{b}$). We would
like to decide which of these problems should come first in a curriculum for a
learner. A natural choice is to consider that $p_{a}\prec_{\mathcal{P}}p_{b}$
whenever $p_{a}$ depends on strictly fewer tactics than $p_{b}$, i.e.
$p_{a}\prec_{\mathcal{P}}p_{b}\enspace\iff\enspace\bigcup_{i}DS(T_{a}^{(i)})\subset\bigcup_{i}DS(T_{b}^{(i)})\enspace.$
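A minimal sketch of this ordering test, with the dependency relation given as a dict and each tactic included in its own closure (one reading of $DS$):

```python
def dep_closure(tactics, deps):
    """Union of dependency sets: every tactic reachable from the given ones."""
    seen, stack = set(), list(tactics)
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(deps.get(t, ()))
    return seen

def precedes(tactics_a, tactics_b, deps):
    """p_a < p_b iff p_a's tactic closure is a strict subset of p_b's."""
    return dep_closure(tactics_a, deps) < dep_closure(tactics_b, deps)

deps = {'t2': ['t1'], 't3': ['t2']}     # t3 builds on t2, which builds on t1
assert precedes(['t1'], ['t3'], deps)   # {'t1'} is a strict subset of {'t1','t2','t3'}
assert not precedes(['t3'], ['t1'], deps)
```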
Given $\prec_{\mathcal{P}}$, any topological ordering of the problems
$\mathcal{P}$ would yield a curriculum that is compatible with the structure
of abstractions induced from solving $\mathcal{P}$. Since
$\prec_{\mathcal{P}}$ is _partial_ (two problems are incomparable if each
uses tactics not present in the solution to the other), multiple orders might
agree with $\prec_{\mathcal{P}}$ (it is also possible in principle to find
two solutions to the same problem that use different sets of tactics; this is
rare in our setup in practice, and more generally many consistent ways to
aggregate the sets of tactics associated with a problem would still yield a
valid partial order). We first ask: how do these orders compare to the human-
designed order of Khan Academy sections?
To answer this question, we first sample multiple orderings of $\mathcal{P}$
by running a simple stochastic topological sorting algorithm that respects
$\prec_{\mathcal{P}}$ but otherwise chooses which element to put next at
random from the set of candidate next problems. Then, for each problem in the
resulting order, we take the index of the section of Khan Academy where the
problem originated from (1 to 5), and compute the Kendall tau distance [15] to
the Khan Academy order (which, in this case, is simply the number of
inversions in the generated list of integers).
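A sketch of this sampling procedure, with inversions counted directly (quadratic, which is fine at this scale):

```python
import random

def stochastic_topo_sort(items, precedes, rng):
    """Repeatedly pick, uniformly at random, an item with no remaining predecessor."""
    remaining, order = list(items), []
    while remaining:
        ready = [x for x in remaining
                 if not any(precedes(y, x) for y in remaining if y is not x)]
        pick = rng.choice(ready)
        remaining.remove(pick)
        order.append(pick)
    return order

def inversions(section_indices):
    """Kendall tau distance to the sorted order: count of out-of-order pairs."""
    return sum(1 for i in range(len(section_indices))
               for j in range(i + 1, len(section_indices))
               if section_indices[i] > section_indices[j])

assert inversions([1, 2, 3]) == 0
assert inversions([3, 1, 2]) == 2   # pairs (3,1) and (3,2) are inverted
order = stochastic_topo_sort([3, 1, 2], lambda a, b: a < b, random.Random(0))
assert order == [1, 2, 3]           # a total order admits a single valid sort
```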
Figure 7 compares the curricula we obtain by topological orderings that
respect $\prec_{\mathcal{P}}$ with random orderings of problems. In each case,
we sample 100 curricula and compute the normalized Kendall tau distances to
the Khan Academy ordering, and compute 99% bootstrapped confidence intervals.
Curricula induced from $\prec_{\mathcal{P}}$ are significantly closer to the
Khan Academy ordering, indicating that the underlying abstractions capture
part of what makes a curriculum pedagogically sensible.
Figure 8 shows a sample induced curriculum, comparing it to the order from
Khan Academy. The ordering of Khan Academy sections is partly recovered by the
abstractions, though the induced curricula have a much more fine-grained
dependency structure. For example, the abstractions alone suggest that the
degenerate “one-step equation” $x+0=2$ can be solved directly from the axioms.
After specific cases of combining like terms have been learned, the induced
curriculum already allows the equations that depend on those cases to follow.
Identifying fine conceptual dependencies at a problem level could be useful
for automated tutoring systems, which might use them to suggest problems or
give worked examples to a particular student in a personalized manner. On the
other hand, Khan Academy sections have instructional content in addition to
exercises. Thus, their section structure also needs to take into account which
concepts are most easily taught together, a preference that our model does not
have.
### 8.1 Result: the synergy between tactic induction and curricula
Figure 7: Comparison between the Khan Academy curriculum and the curriculum
inferred from an agent with tactic induction trained on problems seen in a
random order. Bars indicate the average number of inversions — adjacent swaps
needed to make the curricula agree — with 99% confidence intervals (averaged
over a random sample of topological orderings). For comparison, we show the
number of inversions that a random permutation of the problems produces.
Figure 8: Sample induced curriculum from agent’s learned abstractions.
We now evaluate whether this induced curriculum would help a second agent
learn faster. Intuitively, when solving problems in a random order, the first
agent takes many samples to accumulate evidence that a certain abstraction is
useful, and time is wasted attempting problems that are either too hard
(require abstractions several levels above its current tactics) or too easy
(e.g., solved within a single step given its current tactics). Thus, the fact
that the learner is performing tactic induction provides a strong reason why a
curriculum might be helpful.
To train a second agent using a curriculum, we first sample one of the valid
topological orders on all problems solved by the first agent. Then, we split
the resulting sequence of problems into 3 blocks. When sampling problems for
the second agent, we initially only sample problems from the first block,
until training success rate reaches a minimum of $90\%$, at which point we
include the second block in the pool of problems, and again include the third
block once the agent reaches a success rate of $90\%$ when solving the last
batch of problems. This scheme matches a well-known definition of a training
curriculum [3]: a function that assigns weights to training examples at each
iteration in such a way that, at the end of training, sampling from the
reweighted distribution is equivalent to sampling from the original target
distribution, but examples might be gradually introduced throughout training.
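A minimal sketch of this schedule (the block boundaries and the sampling interface are ours):

```python
import random

def make_curriculum_sampler(blocks, threshold=0.9, rng=None):
    """Sample problems from a growing pool; unlock the next block whenever the
    reported success rate on recent problems reaches the threshold."""
    rng = rng or random.Random()
    state = {'unlocked': 1}

    def sample():
        pool = [p for block in blocks[:state['unlocked']] for p in block]
        return rng.choice(pool)

    def report(success_rate):
        if success_rate >= threshold and state['unlocked'] < len(blocks):
            state['unlocked'] += 1

    return sample, report

sample, report = make_curriculum_sampler([['a1', 'a2'], ['b1'], ['c1']])
assert sample() in {'a1', 'a2'}   # only the first block is available initially
report(0.95)                      # success rate crossed 90%: unlock block 2
report(0.95)                      # and later block 3
assert sample() in {'a1', 'a2', 'b1', 'c1'}
```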
With this setup, we observe marked benefits from training using a curriculum.
In Figure 2, we observe that agents trained on a curriculum induced from
abstractions (green) indeed learn faster than the agent that discovered the
abstractions without a curriculum (blue) — in particular, the second agent
learns to solve the last section significantly earlier. Even our simple
curriculum scheme allows the agent to learn useful abstractions much faster,
avoiding problems that are too hard (i.e., would require much higher
abstractions than those constructed so far). We’d expect this effect to get
even more pronounced if we were formalizing a markedly larger domain, where
only a small fraction of all new problems could be reasonably solved given the
available tactics. As this fraction diminishes, so does the likelihood of
randomly selecting a problem from which an agent like ours — trained on its
successful searches — can learn productively.
This result suggests a computational account of a cultural ratchet [30] effect
of mathematics: once one generation makes mathematical discoveries, captured
in a hierarchy of new abstractions and relations between them, the next
generation learns from a carefully constructed order so that it can reach the
same point of understanding much faster. In our case, our agents learned until
they reached ceiling performance in the domains we modelled. But if each of
our agents had a limited budget compared to how long it would take to fully
learn the target domains — much like humans have limited lifetimes — we would
still observe an inter-generational speed-up. In this case, the first agent
would not reach mastery of all domains, but would still be able to construct a
curriculum from its experiences. A second agent would learn to reach the same
performance in much less time, and would be able to productively explore
further in its lifetime.
## 9 Discussion and Conclusion
We introduced Peano, a language for expressing mathematical reasoning and an
associated environment for solving problems formally in a finite action space.
Using Peano, we formalized 5 sections of the algebra curriculum from the Khan
Academy educational platform. Search alone is unable to find solutions to non-
trivial problems because of a combinatorial explosion of the search space.
Reinforcement learning provides a means to learn from past searches and make
progress, but even then, longer solutions are unreachable. But combining
reinforcement learning with abstraction learning — in the form of tactic
induction, where we learn useful reusable components from solutions found so
far — allows an agent to make progress, learning to solve all problems across
our 5 sections of algebra.
In addition to enabling an agent to solve problems more effectively, we have
found abstraction learning to match our intuition about which higher-level
skills students need to master when learning basic algebra. This was reflected
in the fact that reordering problems using the dependencies between
abstractions in their solutions largely recovers the Khan Academy curriculum
ordering. Note that human curricula are designed with more than problem
ordering in mind: they also aim to facilitate instruction, where conceptual
cohesion is important. These aspects are irrelevant for our agents, since they
only learned from attempting problems. Even so, the dependencies implied by
the learned abstractions recovered some of the structure present in the human-
designed curriculum, suggesting that they capture a key aspect of curriculum
design.
Moreover, ordering problems based on abstractions interacts favorably with our
model of inductive learning of abstractions. Such an ordering can focus the
agent on problems that elicit new abstractions which are concisely expressed
in terms of the agent’s already learned abstractions. This focus allows the
agent to quickly accumulate examples from which the new abstractions can be
induced, avoiding both _too easy_ and _too hard_ problems. Indeed, we have
observed this reconstructed curriculum to accelerate learning of a “second-
generation” agent.
Together, these results provide a computational account of the importance of
abstraction learning for human mathematics. On the one hand, tactic induction
dramatically helps an individual learner in leveraging experience gained in
easier problems to solve harder ones. On the other, after a certain domain has
been mastered, the hierarchical structure of the learner’s induced abstractions allows it to structure the learning of future generations, helping
them arrive faster at the point the first learner left off.
We believe these experiments bring an important insight towards the goal of
having artificial agents perform human-like mathematical reasoning. Much of
the past research in this direction has been devoted to making search methods
more effective through search heuristics — either hand-crafted, learned, or a
combination of both. But given any set of initial axioms, many problems will inevitably be out of reach of search. (The analogy to program synthesis makes this point clear: if the base programming language is x86 assembly, synthesizing a simple list-sorting algorithm would be an enormous challenge. One would certainly not expect the synthesizer to stumble upon a correct implementation of merge sort in the search space of sequences of x86 instructions. But in a high-level domain-specific language with convenient operations for list processing, e.g., a function to merge two sorted lists, even merge sort becomes short and much less surprising. This insight carries over to mathematical reasoning when we see solutions, or proofs, as programs.) Abstraction learning provides a means to progress far beyond the limit of search methods.
Several avenues for future work naturally arise. First, the Peano environment
has limitations: it does not allow the prover to create intermediate lambda
terms (that is, sub-lemmas), and does not support backwards reasoning. Both
these capabilities would be needed to allow natural solutions to problems
arising in some educational domains we have not yet tackled. For instance,
proofs by induction essentially require the prover to produce one lemma per
inductive case, which corresponds to a lambda abstraction (e.g., in the case
of natural numbers, the lambda would be a function that takes $n$ and a proof
of the proposition for $n$, and outputs a proof of the proposition for $n+1$).
Similarly, in the case of induction, the first step is typically backwards:
when one realizes that one must prove a proposition for all natural numbers,
it is natural to start by claiming the proof will be by induction; then, the
necessary “lemmas” are proved. One challenge is extending Peano to support
these natural moves while maintaining a finite action space.
Second, the tactics we were able to learn in Section 7 are rather simple,
consisting of short straight-line programs. Our tactic language cannot express
natural high-level actions involving repetitions and conditions, such as
“apply commutativity and associativity until you group $x$ and $-x$” or
“evaluate all operations you can”. Extending our tactic language along with
the tactic induction algorithm will be necessary to extend our method to more
complex domains. Tactic induction becomes more challenging, but potentially
more powerful, as the environment itself becomes more expressive. Many
techniques from inductive program synthesis can be potentially helpful for
that direction.
Third, our approach to library learning involves learning tactics, but not new
theorems. Tactics can give hints at useful auxiliary theorems to be proven.
For example, a tactic that produces proofs that $n^{2}>0$ when given several
integers $n$ suggests that there is a general procedure for producing such
proofs, and that therefore $\forall n,n^{2}>0$ is a theorem. As we move
towards covering more complex mathematical domains, we believe this notion of
library learning — of useful results in addition to solution strategies — will
be important.
Finally, our approach to learning tactics relies on first encountering many
examples of its use: a tactic is deemed useful if it would have been helpful
in solving many past problems. But human mathematicians are often able to perform this inductive leap after just a single example. Consider, for example, Erdős’ lower bound on the Ramsey number $R(s)$; see [9] for a discussion of this result and its significance. It is a result on a purely combinatorial problem that applies a probabilistic argument in a surprising way. This single example is usually enough for human mathematicians to
realize a potentially fruitful new tactic; this has indeed turned into the
Probabilistic Method, now widely used in combinatorics. The fact that this
tactic stands out so clearly when one reads about this result poses several
puzzling questions. What makes such a solution so _surprising_ , and certain
parts of it especially _interesting_? Answers for these questions would help
us understand a core notion in the human practice of mathematics: not only are some statements true and others false, but some are more _interesting_ than
others. Computationally characterizing what _interestingness_ means might be
an important goal towards having computers be able to provide insights into
human mathematics [5]. After all, that would imply not just proving new
results, but also recognizing which ones are significant.
All the code, data and configuration files needed to reproduce the experiments in this paper are available online at https://github.com/gpoesia/peano. This work was supported by an NSF Expeditions Grant, Award Number (FAIN) 1918771, and by the Stanford HAI Hoffman–Yee project “AI Tutors to Help Prepare Students for the 21st Century Workforce”. GP was also supported by a Stanford Interdisciplinary Graduate Fellowship (SIGF).
## References
* [1] Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learning and tree search. Advances in Neural Information Processing Systems, 30, 2017.
* [2] Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. HOList: An environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 454–463. PMLR, 09–15 Jun 2019.
* [3] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009.
* [4] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–decoder approaches. Syntax, Semantics and Structure in Statistical Translation, page 103, 2014.
* [5] Simon Colton, Alan Bundy, and Toby Walsh. On the notion of interestingness in automated mathematical discovery. International Journal of Human-Computer Studies, 53(3):351–375, 2000.
* [6] Martin Davis, George Logemann, and Donald Loveland. A machine program for theorem-proving. Communications of the ACM, 5(7):394–397, 1962.
* [7] Martin Davis and Hilary Putnam. A computing procedure for quantification theory. Journal of the ACM (JACM), 7(3):201–215, 1960.
* [8] Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sablé-Meyer, Lucas Morales, Luke Hewitt, Luc Cary, Armando Solar-Lezama, and Joshua B Tenenbaum. Dreamcoder: Bootstrapping inductive program synthesis with wake-sleep library learning. In Proceedings of the 42nd acm sigplan international conference on programming language design and implementation, pages 835–850, 2021.
* [9] William Timothy Gowers. The two cultures of mathematics. Mathematics: frontiers and perspectives, 65:65, 2000.
* [10] Thomas Hales, Mark Adams, Gertrud Bauer, Tat Dat Dang, John Harrison, Hoang Le Truong, Cezary Kaliszyk, Victor Magron, Sean McLaughlin, Tat Thang Nguyen, et al. A formal proof of the kepler conjecture. In Forum of mathematics, Pi, volume 5. Cambridge University Press, 2017.
* [11] Jesse Michael Han and Floris van Doorn. A formal proof of the independence of the continuum hypothesis. In Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs, pages 353–366, 2020.
* [12] Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. Gamepad: A learning environment for theorem proving. In International Conference on Learning Representations, 2018.
* [13] Geoffrey Irving, Christian Szegedy, Alexander A Alemi, Niklas Eén, François Chollet, and Josef Urban. Deepmath-deep sequence models for premise selection. Advances in neural information processing systems, 29, 2016.
* [14] Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Miroslav Olšák. Reinforcement learning of theorem proving. Advances in Neural Information Processing Systems, 31, 2018.
* [15] Maurice G. Kendall. Rank correlation methods. 1955.
* [16] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
* [17] Per Martin-Löf and Giovanni Sambin. Intuitionistic type theory, volume 9. Bibliopolis Naples, 1984.
* [18] John McCarthy, Marvin L Minsky, Nathaniel Rochester, and Claude E Shannon. A proposal for the dartmouth summer research project on artificial intelligence, august 31, 1955. AI magazine, 27(4):12–12, 2006.
* [19] Pamela McCorduck. Machines who think: A personal inquiry into the history and prospects of artificial intelligence. CRC Press, 2004.
* [20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
* [21] Allen Newell, John C Shaw, and Herbert A Simon. Report on a general problem solving program. In IFIP congress, volume 256, page 64. Pittsburgh, PA, 1959.
* [22] Frank Pfenning. Unification and anti-unification in the calculus of constructions. In LICS, volume 91, pages 74–85, 1991.
* [23] Frank Pfenning. Elf: A meta-language for deductive systems. In International Conference on Automated Deduction, pages 811–815. Springer, 1994.
* [24] Gordon Plotkin. Automatic methods of inductive inference. 1972.
* [25] Gabriel Poesia, WenXin Dong, and Noah Goodman. Contrastive reinforcement learning of symbolic reasoning domains. Advances in Neural Information Processing Systems, 34:15946–15956, 2021.
* [26] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344, 2022.
* [27] John Alan Robinson. Theorem-proving on the computer. Journal of the ACM (JACM), 10(2):163–174, 1963.
* [28] Stuart J Russell. Artificial intelligence a modern approach. Pearson Education, Inc., 2010.
* [29] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419):1140–1144, 2018.
* [30] Michael Tomasello. The cultural origins of human cognition. Harvard university press, 2009.
* [31] Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic. arXiv preprint arXiv:1608.02644, 2016.
* [32] Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Tacticzero: Learning to prove theorems from scratch with deep reinforcement learning. Advances in Neural Information Processing Systems, 34:9330–9342, 2021.
# Traveling wave solutions of the generalized scale-invariant analogue of the
KdV equation by tanh–coth method
O. González-Gaxiola ${}^{1\;\ast}$, J. Ruiz de Chávez 2
1 Applied Mathematics and Systems Department, Universidad Autónoma
Metropolitana-Cuajimalpa,
Vasco de Quiroga 4871, 05348 Mexico City, Mexico.
∗<EMAIL_ADDRESS>
2 Department of Mathematics, Universidad Autónoma Metropolitana-Iztapalapa.
San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, 09340, Mexico City, Mexico
###### Abstract
In this work, the generalized scale-invariant analogue of the Korteweg–de
Vries (gsiaKdV) equation is studied. For the first time, the tanh–coth
methodology is used to find traveling wave solutions for this nonlinear
equation. The considered generalized equation is a connection between the
well-known KdV equation and the recently investigated SIdV equation. The
obtained results show many families of solutions for the model, indicating
that this equation also shares bell-shaped solutions with KdV and SIdV, as
previously documented by other researchers. Finally, through symbolic computation, we demonstrate that the employed technique is a valuable and effective mathematical tool for solving problems that arise across the nonlinear sciences and applied mathematics.
Key words: KdV equation; SIdV equation; The tanh–coth method; Travelling
waves; Symbolic computation
## 1 Introduction
Many fields of science and engineering depend heavily on the mathematical
models presented by nonlinear partial differential equations (NPDE) to explain
complex phenomena. These fields include electromagnetic waves theory, ocean
dynamics, plasma physics, fluid mechanics, field theory, nonlinear optical
fibers, nuclear physics, ion acoustic waves, biological process engineering,
chemical kinetics, climatological phenomena and several other mathematical
physics problems. In 1895, the Dutch mathematicians D. J. Korteweg and G. de Vries formally derived the KdV equation, an NPDE that models the propagation of long waves on shallow water while simultaneously capturing weak advective nonlinearity and dispersion effects, given by [1]
$u_{t}+6uu_{x}+u_{xxx}=0,$ (1)
where $u$ is the perturbation wave function that depends on the spatial
variable $x$ and on time $t$. It is well known that Eq. (1) has bell-shaped
solutions of the type:
$u(x,t)=\frac{c}{2}\operatorname{sech}^{2}\Big{(}\frac{\sqrt{c}}{2}(x-ct)\Big{)}$ (2)
where $c$ is the velocity of the wavefront [2].
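As an illustrative check (not part of the original derivation), one can substitute (2) into (1) with SymPy and confirm that the residual vanishes numerically:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2  # the soliton (2)

# Residual of the KdV equation (1): u_t + 6 u u_x + u_xxx
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# Evaluate at an arbitrary point; an exact solution gives (numerically) zero.
print(abs(float(residual.subs({c: 1, x: 0.3, t: 0.2}))))
```

The same check applies for any positive wavefront velocity $c$.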
Despite its age, the KdV equation remains an active area of study and research, with numerous articles on the topic published in recent years [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13].
Among the numerous studies of the KdV equation published in recent decades, several propose generalizations and/or modifications of it. In 2012, for instance, the authors of [3] presented the modified KdV equation
$u_{t}+\Big{(}\frac{2u_{xx}}{u}\Big{)}u_{x}=u_{xxx}.$ (3)
Eq. (3) is invariant under scaling of the dependent variable and is therefore referred to here as the SIdV equation. It was discovered serendipitously through computational exploration of equations with bell-shaped solutions analogous to those of the KdV equation. Other research on Eq. (3) can be found in [14, 15, 16].
In this article we consider the gsiaKdV equation [17], a nonlinear partial differential equation whose dimensionless form is given by
$u_{t}+\Big{(}3(1-\alpha)u+(1+\alpha)\frac{u_{xx}}{u}\Big{)}u_{x}=\gamma
u_{xxx}$ (4)
where $u_{xxx}$ is a dispersion term, while the term
$\big{(}3(1-\alpha)u+(1+\alpha)\frac{u_{xx}}{u}\big{)}$ can be seen as an
advecting velocity.
First, observe that if $\alpha=-1$ and $\gamma=-1$, then Eq. (4) reduces to the well-known KdV equation (1). Second, if $\alpha=1$ and $\gamma=1$, then Eq. (4) reduces to the SIdV equation (3).
Eq. (4) was investigated in [18], where the authors demonstrated the existence of traveling waves of the bell and valley types. The main objective of this research is to use the tanh-coth method, for the first time, to find new traveling-wave solutions of Eq. (4) with $\alpha\neq\pm 1$ and $\gamma\neq 0$. In addition, some 3D and 2D propagation profiles of the derived solutions will be discussed by selecting various values of the parameters that describe the solution sets obtained with this technique.
## 2 Brief description of the tanh-coth method
The tanh-coth method, originally established in [19, 20], provides a very useful methodology for finding traveling-wave solutions of NPDEs. In the rest of this section we explain how to implement it.
(I) Consider the general nonlinear PDE given by:
$\displaystyle G(u,u_{t},u_{x},u_{xx},u_{xxx},\ldots)=0.$ (5)
Using the traveling-wave change of variables $u(x,t)=u(\xi)$ with $\xi=cx-\omega t$,
Eq. (5) becomes the ordinary differential equation:
$\displaystyle F(u,-\omega
u_{\xi},cu_{\xi},c^{2}u_{\xi\xi},c^{3}u_{\xi\xi\xi},\ldots)=0.$ (6)
(II) The tanh-coth method provides the solutions of Eq. (6) as the finite sum
$\displaystyle
u(\xi)=S(Y)=\sum_{i=0}^{M}a_{i}Y^{i}(\xi)+\sum_{i=1}^{M}b_{i}Y^{-i}(\xi),$ (7)
where the coefficients $a_{i}$ and $b_{i}$ are constants to be determined and $Y$ is a new independent variable introduced by the method, given by
$\displaystyle Y=\tanh(\xi).$ (8)
The introduction of this new variable implies that:
$u_{\xi}=(1-Y^{2})\frac{dS}{dY},$
$u_{\xi\xi}=-2Y(1-Y^{2})\frac{dS}{dY}+(1-Y^{2})^{2}\frac{d^{2}S}{dY^{2}},$ (9)
$u_{\xi\xi\xi}=2(1-Y^{2})(3Y^{2}-1)\frac{dS}{dY}-6Y(1-Y^{2})^{2}\frac{d^{2}S}{dY^{2}}+(1-Y^{2})^{3}\frac{d^{3}S}{dY^{3}}.$
Subsequent derivatives can be computed in a similar way.
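As a sanity check on the first two identities in (9), one can pick an arbitrary test function, say $S(Y)=Y^{3}$ (our choice, purely for illustration), and compare the stated expressions against direct differentiation of $S(\tanh\xi)$ in SymPy:

```python
import sympy as sp

xi, y = sp.symbols('xi y')
Sy = y**3                      # arbitrary test function S(Y)
Y = sp.tanh(xi)                # the method's substitution Y = tanh(xi)
u = Sy.subs(y, Y)              # u(xi) = S(tanh(xi))

# First identity: u_xi = (1 - Y^2) dS/dY
rhs1 = ((1 - y**2) * sp.diff(Sy, y)).subs(y, Y)
diff1 = sp.simplify(sp.diff(u, xi) - rhs1)

# Second identity: u_xixi = -2Y(1 - Y^2) dS/dY + (1 - Y^2)^2 d^2S/dY^2
rhs2 = (-2*y*(1 - y**2)*sp.diff(Sy, y)
        + (1 - y**2)**2 * sp.diff(Sy, y, 2)).subs(y, Y)
diff2 = sp.simplify(sp.diff(u, xi, 2) - rhs2)

print(diff1, diff2)  # both differences simplify to 0
```

The check relies only on $dY/d\xi=1-Y^{2}$, so it works for any smooth $S$.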
(III) To determine the upper limit $M$ of the sum in Eq. (7), the highest-order linear terms in the resulting equation are balanced against the highest-order nonlinear terms.
(IV) We substitute $u(\xi)$ given in (7), together with the necessary derivatives $u_{\xi}$, $u_{\xi\xi}$, $u_{\xi\xi\xi}$, …, calculated as in (9), into the ordinary differential equation (6), obtaining the polynomial equation:
$\displaystyle P[Y]=0.$ (10)
(V) We collect all terms with the same algebraic power of $Y$ in the polynomial equation (10), set each coefficient equal to zero, and obtain a nonlinear system of algebraic equations in the unknown parameters $\{a_{0},\ldots,a_{M},b_{1},\ldots,b_{M},c,\omega\}$. Using software such as Mathematica, we can carry out the symbolic calculations needed to solve the algebraic equations subject to the natural restrictions of the mathematical model.
(VI) Finally, having obtained the coefficients $\{a_{0},\ldots,a_{M},b_{1},\ldots,b_{M},c,\omega\}$ and considering the equality (7), one obtains the exact solutions of Eq. (5).
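To make steps (I)-(VI) concrete, the following SymPy sketch (our own illustration; SymPy's `solve` stands in for Mathematica) applies the pure-tanh version of the method (all $b_{i}=0$) to the KdV equation (1), whose traveling-wave ODE after one integration is $-\omega u+3cu^{2}+c^{3}u_{\xi\xi}=0$, with balance $M=2$:

```python
import sympy as sp

Y, c, w = sp.symbols('Y c omega')
a0, a1, a2 = sp.symbols('a0 a1 a2')

# Step (II): finite tanh expansion with M = 2 (coth part dropped for brevity)
S = a0 + a1*Y + a2*Y**2
S1, S2 = sp.diff(S, Y), sp.diff(S, Y, 2)

# Step (IV): substitute u_xixi via the chain rule for Y = tanh(xi), as in (9)
u_xixi = -2*Y*(1 - Y**2)*S1 + (1 - Y**2)**2*S2
P = sp.expand(-w*S + 3*c*S**2 + c**3*u_xixi)   # the polynomial P[Y] of (10)

# Step (V): set the coefficient of each power of Y to zero and solve
eqs = [sp.Eq(P.coeff(Y, k), 0) for k in range(sp.degree(P, Y) + 1)]
sols = sp.solve(eqs, [a0, a1, a2, w], dict=True)
for s in sols:
    print(s)
```

Among the solution branches is $a_{0}=2c^{2}$, $a_{1}=0$, $a_{2}=-2c^{2}$, $\omega=4c^{3}$, i.e. $u=2c^{2}(1-\tanh^{2}(cx-4c^{3}t))=2c^{2}\operatorname{sech}^{2}(c(x-4c^{2}t))$, which recovers the soliton (2) with wave speed $4c^{2}$.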
## 3 Utilization of the tanh-coth methodology
Using the change of variable $\xi=cx-\omega t$, Eq. (4) is converted into the
ordinary differential equation:
$\displaystyle-\omega
cuu_{\xi}+3c(1-\alpha)u^{2}u_{\xi}+c^{3}(1+\alpha)u_{\xi}u_{\xi\xi}-\gamma
c^{3}uu_{\xi\xi\xi}=0.$ (11)
Integrating once with respect to $\xi$ and considering the constants of
integration as null, we obtain
$\displaystyle-\omega
cu^{2}+2c(1-\alpha)u^{3}+c^{3}(1+\alpha+\gamma)u_{\xi}^{2}-2\gamma
c^{3}uu_{\xi\xi}=0.$ (12)
Then using the characteristic variable change of the method, i.e.,
$Y=\tanh(\xi)$ and considering Eq. (7), the last differential equation is
rewritten as
$-\omega
cS^{2}+2c(1-\alpha)S^{3}+c^{3}(1+\alpha+\gamma)(1-Y^{2})^{2}\Big{(}\frac{dS}{dY}\Big{)}^{2}-2\gamma
c^{3}S\Big{[}(1-Y^{2})^{2}\frac{d^{2}S}{dY^{2}}-2Y(1-Y^{2})\frac{dS}{dY}\Big{]}=0.$
(13)
Balancing $S^{3}$ (of degree $3M$ in $Y$) with $S\cdot(1-Y^{2})^{2}\frac{d^{2}S}{dY^{2}}$ (of degree $2M+2$) gives $3M=2M+2$, i.e., $M=2$. Consequently, the tanh-coth technique enables the use of the finite sum
$\displaystyle u(\xi)=S(Y)=a_{0}+a_{1}Y+a_{2}Y^{2}+b_{1}Y^{-1}+b_{2}Y^{-2}.$
(14)
Substituting (14) with their respective derivatives into (13) and collecting
all terms with equal power of $Y$, after some algebraic simplification, we
obtain the following nonlinear system of algebraic equations:
$\displaystyle 12a_{1}b_{1}\gamma c^{3}+4a_{0}b_{2}\gamma
c^{3}+48a_{2}b_{2}\gamma
c^{3}+4a_{1}b_{1}c^{3}\alpha+16a_{2}b_{2}c^{3}\alpha+4a_{1}b_{1}c^{3}+16a_{2}b_{2}c^{3}-6a_{2}b_{1}^{2}c\alpha$
$\displaystyle-12a_{0}a_{1}b_{1}c\alpha-6a_{1}^{2}b_{2}c\alpha-12a_{0}a_{2}b_{2}c\alpha-2a_{1}b_{1}c\omega-2a_{2}b_{2}c\omega+6a_{2}b_{1}^{2}c+12a_{0}a_{1}b_{1}c+6a_{1}^{2}b_{2}c+12a_{0}a_{2}b_{2}c$
$\displaystyle+a_{1}^{2}\gamma c^{3}-4a_{0}a_{2}\gamma
c^{3}+a_{1}^{2}c^{3}\alpha+a_{1}^{2}c^{3}-2a_{0}^{3}c\alpha-
a_{0}^{2}c\omega+2a_{0}^{3}c+5b_{1}^{2}\gamma c^{3}-8b_{2}^{2}\gamma
c^{3}+b_{1}^{2}c^{3}\alpha+b_{1}^{2}c^{3}=0,$ $\displaystyle-7a_{1}^{2}\gamma
c^{3}+8a_{2}^{2}\gamma c^{3}-20a_{0}a_{2}\gamma
c^{3}+a_{1}^{2}c^{3}\alpha-8a_{2}^{2}c^{3}\alpha+a_{1}^{2}c^{3}-8a_{2}^{2}c^{3}-6a_{0}a_{2}^{2}c\alpha-6a_{1}^{2}a_{2}c\alpha-
a_{2}^{2}c\omega$ $\displaystyle+6a_{0}a_{2}^{2}c+6a_{1}^{2}a_{2}c=0,$
$\displaystyle 4a_{1}^{2}\gamma c^{3}-16a_{2}^{2}\gamma
c^{3}+8a_{0}a_{2}\gamma
c^{3}+4a_{2}^{2}c^{3}\alpha+4a_{2}^{2}c^{3}-2a_{2}^{3}c\alpha+2a_{2}^{3}\gamma=0,$
$\displaystyle 4a_{2}b_{1}\gamma\alpha+4a_{0}a_{1}\gamma
c^{3}-24a_{1}a_{2}\gamma
c^{3}+4a_{1}a_{2}c^{3}\alpha+4a_{1}a_{2}c^{3}-6a_{1}a_{2}^{2}c\alpha+6a_{1}a_{2}^{2}\omega=0,$
$\displaystyle-4a_{0}b_{1}\gamma c^{3}-20a_{2}b_{1}\gamma
c^{3}-4a_{1}b_{2}\gamma
c^{3}-4a_{2}b_{1}c^{3}\alpha-4a_{2}b_{1}c^{3}-6a_{2}^{2}b_{1}c\alpha+6a_{2}^{2}b_{1}c-8a_{0}a_{1}\gamma
c^{3}+12a_{1}a_{2}\gamma c^{3}$
$\displaystyle-8a_{1}a_{2}c^{3}\alpha-8a_{1}a_{2}c^{3}-2a_{1}^{3}c\alpha-12a_{0}a_{1}a_{2}c\alpha-2a_{1}a_{2}c\omega+2a_{1}^{3}c+12a_{0}a_{1}a_{2}c=0,$
$\displaystyle-6a_{1}b_{1}\gamma c^{3}-8a_{0}b_{2}\gamma
c^{3}-24a_{2}b_{2}\gamma
c^{3}-2a_{1}b_{1}c^{3}\alpha-8a_{2}b_{2}c^{3}\alpha-2a_{1}b_{1}c^{3}-8a_{2}b_{2}c^{3}-12a_{1}a_{2}b_{1}c\alpha-6a_{2}^{2}b_{2}c\alpha$
$\displaystyle+12a_{1}a_{2}b_{1}c+6a_{2}^{2}b_{2}c+2a_{1}^{2}\gamma
c^{3}+16a_{0}a_{2}\gamma
c^{3}-2a_{1}^{2}c^{3}\alpha+4a_{2}^{2}c^{3}\alpha-2a_{1}^{2}c^{3}+4a_{2}^{2}c^{3}-6a_{0}a_{1}^{2}c\alpha-6a_{0}^{2}a_{2}c\alpha$
$\displaystyle-
a_{1}^{2}c\omega-2a_{0}a_{2}c\omega+6a_{0}a_{1}^{2}c+6a_{0}^{2}a_{2}c-4b_{1}^{2}\gamma
c^{3}=0,$ $\displaystyle 4a_{0}b_{1}\gamma c^{3}+28a_{2}b_{1}\gamma
c^{3}-8a_{1}b_{2}\gamma
c^{3}+8a_{2}b_{1}c^{3}\alpha-4a_{1}b_{2}c^{3}\alpha+8a_{2}b_{1}c^{3}-4a_{1}b_{2}c^{3}-6a_{1}^{2}b_{1}c\alpha-12a_{0}a_{2}b_{1}c\alpha$
$\displaystyle-12a_{1}a_{2}b_{2}c\alpha-2a_{2}b_{1}c\omega+6a_{1}^{2}b_{1}c+12a_{0}a_{2}b_{1}c+12a_{1}a_{2}b_{2}c+4a_{0}a_{1}\gamma
c^{3}+4a_{1}a_{2}c^{3}\alpha+4a_{1}a_{2}c^{3}-6a_{0}^{2}a_{1}c\alpha$
$\displaystyle-2a_{0}a_{1}c\omega+6a_{0}^{2}a_{1}c-12b_{1}b_{2}\gamma
c^{3}=0,$ $\displaystyle 4a_{0}b_{1}\gamma c^{3}-12a_{2}b_{1}\gamma
c^{3}+28a_{1}b_{2}\gamma
c^{3}-4a_{2}b_{1}c^{3}\alpha+8a_{1}b_{2}c^{3}\alpha-4a_{2}b_{1}c^{3}+8a_{1}b_{2}c^{3}-6a_{1}b_{1}^{2}c\alpha-6a_{0}^{2}b_{1}c\alpha$
$\displaystyle-12a_{0}a_{1}b_{2}c\alpha-12a_{2}b_{1}b_{2}c\alpha-2a_{0}b_{1}c\omega-2a_{1}b_{2}c\omega+6a_{1}b_{1}^{2}c+6a_{0}^{2}b_{1}c+12a_{0}a_{1}b_{2}c+12a_{2}b_{1}b_{2}c$
$\displaystyle+12b_{1}b_{2}\gamma
c^{3}+4b_{1}b_{2}c^{3}\alpha+4b_{1}b_{2}c^{3}=0,$
$\displaystyle-12a_{0}b_{2}\gamma
c^{3}-6a_{0}b_{2}^{2}c\alpha+6a_{0}b_{2}^{2}c-3b_{1}^{2}\gamma
c^{3}+8b_{2}^{2}\gamma
c^{3}+b_{1}^{2}c^{3}\alpha-8b_{2}^{2}c^{3}\alpha+b_{1}^{2}c^{3}-8b_{2}^{2}c^{3}$
$\displaystyle-6b_{1}^{2}b_{2}c\alpha-b_{2}^{2}c\omega+6b_{1}^{2}b_{2}c=0,$
$\displaystyle-4a_{0}b_{1}\gamma c^{3}-16a_{1}b_{2}\gamma
c^{3}-4a_{1}b_{2}c^{3}\alpha-4a_{1}b_{2}c^{3}-6a_{1}b_{2}^{2}c\alpha-12a_{0}b_{1}b_{2}c\alpha+6a_{1}b_{2}^{2}c+12a_{0}b_{1}b_{2}c$
$\displaystyle+12b_{1}b_{2}\gamma
c^{3}-8b_{1}b_{2}c^{3}\alpha-8b_{1}b_{2}c^{3}-2b_{1}^{3}c\alpha-2b_{1}b_{2}c\omega+2b_{1}^{3}c=0,$
$\displaystyle-6a_{1}b_{1}\gamma c^{3}+16a_{0}b_{2}\gamma
c^{3}-24a_{2}b_{2}\gamma
c^{3}-2a_{1}b_{1}c^{3}\alpha-8a_{2}b_{2}c^{3}\alpha-2a_{1}b_{1}c^{3}-8a_{2}b_{2}c^{3}-6a_{0}b_{1}^{2}c\alpha-6a_{2}b_{2}^{2}c\alpha$
$\displaystyle-6a_{0}^{2}b_{2}c\alpha-12a_{1}b_{1}b_{2}c\alpha-2a_{0}b_{2}c\omega+6a_{0}b_{1}^{2}c+6a_{2}b_{2}^{2}c+6a_{0}^{2}b_{2}c+12a_{1}b_{1}b_{2}c+2b_{1}^{2}\gamma
c^{3}+8b_{2}^{2}\gamma c^{3}-2b_{1}^{2}c^{3}\alpha$
$\displaystyle+4b_{2}^{2}c^{3}\alpha-2b_{1}^{2}c^{3}+4b_{2}^{2}c^{3}-b_{1}^{2}c\omega=0,$
$\displaystyle-12b_{1}b_{2}\gamma
c^{3}+4a_{1}b_{1}b_{2}c^{3}\alpha+4b_{1}b_{2}c^{3}-6\omega
b_{1}b_{2}^{2}\alpha+6a_{0}b_{1}b_{2}^{2}c=0,$ $\displaystyle-8b_{2}^{2}\gamma
c^{3}+4a_{0}a_{1}\gamma\omega+4b_{2}^{2}c^{3}\alpha+4a_{2}b_{2}^{2}c^{3}-2b_{2}^{3}c\alpha+2b_{2}^{3}c=0.$
Using the well-known Mathematica software to solve the above system, we find
the following families of solutions:
Family 1: For $\alpha\neq 1$ and $c\neq 0$:
$a_{0}=a_{0},\;\,a_{1}=0,\,\;a_{2}=a_{2},\,\;b_{1}=0,\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega\neq 0.$
Substituting the obtained parameters into the general solution (14), we obtain
the following family of solutions
$u_{1}(x,t)=a_{0}+a_{2}\tanh^{2}(cx-\omega t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (15)
Family 2: For $\alpha\neq\pm 1$ and $c\neq 0$:
$a_{0}=0,\,a_{1}=a_{1},\,a_{2}=-\frac{2c^{2}(4\gamma^{2}-\alpha^{2}-2\alpha-1)}{5(\alpha^{2}-1)},\,b_{1}=0,\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega\neq
0.$
Therefore, proceeding as in the previous case, the set of solutions for this
family is provided by
$u_{2}(x,t)=a_{1}\tanh(cx-\omega
t)-\frac{2c^{2}(4\gamma^{2}-\alpha^{2}-2\alpha-1)}{5(\alpha^{2}-1)}\tanh^{2}(cx-\omega
t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (16)
Family 3: For $\alpha\neq 1$ and $c\neq 0$:
$a_{0}=a_{0}\neq
0,\,a_{1}=a_{1},\,a_{2}=\frac{3}{2}a_{1},\,b_{1}=0,\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},$
$\omega=\frac{8a_{0}\gamma c^{2}\alpha-8a_{0}\gamma
c^{2}+6a_{0}^{2}\alpha-16\gamma^{2}c^{4}+4c^{4}\alpha^{2}+4c^{4}}{a_{0}(\alpha-1)}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{3}(x,t)=a_{0}+a_{1}\tanh(cx-\omega t)+\frac{3}{2}a_{1}\tanh^{2}(cx-\omega
t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (17)
Family 4: For $\alpha\neq\pm 1$ and $c\neq 0$ :
$a_{0}=a_{0},\,a_{1}=a_{1},\,a_{2}=\frac{3}{4}a_{1},\,b_{1}=0,\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega=\frac{3\gamma
c^{2}-3(\alpha+1)-7\gamma}{\alpha^{2}-1}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{4}(x,t)=a_{0}+a_{1}\tanh(cx-\omega t)+\frac{3}{4}a_{1}\tanh^{2}(cx-\omega t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (18)
Family 5: For $\alpha\neq\pm 1$ and $c\neq 0$:
$a_{0}=a_{0},\,a_{1}=a_{1},$ $a_{2}=\frac{16\gamma
c^{2}\alpha^{2}-16a_{0}\gamma
c^{2}-8a_{0}c^{2}\alpha^{3}-8a_{0}c^{2}\alpha^{2}+8a_{0}c^{2}\alpha+8a_{0}c^{2}-3a_{0}^{2}\alpha^{3}}{10c^{2}\left(\alpha^{2}-1\right)},$
$b_{1}=0,\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\;\omega=\frac{\gamma^{2}(a_{0}^{3}-4a_{1}\gamma
c^{2}-2c^{2})}{1-\alpha^{2}}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{5}(x,t)&=\Big{(}\frac{16\gamma c^{2}\alpha^{2}-16a_{0}\gamma c^{2}-8a_{0}c^{2}\alpha^{3}-8a_{0}c^{2}\alpha^{2}+8a_{0}c^{2}\alpha+8a_{0}c^{2}-3a_{0}^{2}\alpha^{3}}{10c^{2}\left(\alpha^{2}-1\right)}\Big{)}\tanh^{2}(cx-\omega t)\\\ &+a_{0}+a_{1}\tanh(cx-\omega t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).\end{split}$ (19)
Family 6: For $\alpha\neq\pm 1$, $c\neq 0$ and $2\gamma-\alpha-1\neq 0$:
$a_{0}=a_{0},\,\;a_{1}=\frac{20a_{0}\gamma
c^{2}(\alpha-1)(2\gamma-\alpha-1)+b_{1}^{2}(\alpha-1)^{2}}{{16c^{4}(2\gamma-\alpha-1)^{2}}},\;a_{2}=a_{2},\;\;b_{1}=b_{1},$
$b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega=\frac{b_{1}^{2}(a_{0}\gamma-3(\alpha+1))}{4c^{2}(\alpha^{2}-1)^{2}}+8c^{2}(\gamma-\alpha-1).$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{6}(x,t)&=a_{0}+\Big{(}\frac{20a_{0}\gamma
c^{2}(\alpha-1)(2\gamma-\alpha-1)+b_{1}^{2}(\alpha-1)^{2}}{{16c^{4}(2\gamma-\alpha-1)^{2}}}\Big{)}\tanh(cx-\omega
t)+a_{2}\tanh^{2}(cx-\omega t)\\\ &+b_{1}\coth(cx-\omega
t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega
t).\end{split}$ (20)
Family 7: For $\alpha\neq 1$ and $c\neq 0$:
$a_{0}=a_{0},\;\,a_{1}=a_{1},\;\;a_{2}=\frac{3}{2}a_{1},\;\,b_{1}=0,\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\,\omega=\left(a_{0}a_{2}+3\gamma-2\alpha^{2}\right).$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{7}(x,t)=a_{0}+a_{1}\tanh(cx-\omega t)+\frac{3}{2}a_{1}\tanh^{2}(cx-\omega
t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (21)
Family 8: For $\alpha\neq-1$, $c\neq 0$ and $3\gamma+1\neq 0$:
$a_{0}=a_{0},\;\,a_{1}=a_{1},\;\;a_{2}=-\frac{3(c\gamma^{2}-\alpha+3)}{2c^{2}(\alpha+1)(3\gamma+1)},\;\;b_{1}=b_{1},\;b_{2}=0,\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{8}(x,t)=a_{0}+a_{1}\tanh(cx-\omega
t)-\frac{3(c\gamma^{2}-\alpha+3)}{2c^{2}(\alpha+1)(3\gamma+1)}\tanh^{2}(cx-\omega
t)+b_{1}\coth(cx-\omega t).$ (22)
Family 9: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=\frac{8a_{2}^{2}c^{2}\gamma-2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+a_{2}^{3}\alpha-
a_{2}^{3}}{4a_{2}c^{2}\gamma},\;\;a_{1}=a_{1},\;\;a_{2}=a_{2}\neq 0,$
$b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{9}(x,t)&=\frac{8a_{2}^{2}c^{2}\gamma-2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+a_{2}^{3}\alpha-
a_{2}^{3}}{4a_{2}c^{2}\gamma}+a_{1}\tanh(cx-\omega t)+a_{2}\tanh^{2}(cx-\omega
t)\\\ &+b_{1}\coth(cx-\omega
t)-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)}\coth^{2}(cx-\omega
t).\end{split}$ (23)
Family 10: For $\alpha\neq 1$, $c\neq 0$ and $12\gamma-5\alpha-5\neq 0$:
$a_{0}=a_{0},\;\;a_{1}=a_{1},\;\;a_{2}=-\frac{2c^{2}(4\gamma^{2}-\alpha^{2}-2\alpha-1)}{(\alpha-1)(12\gamma-5\alpha-5)},\;b_{1}=0,\;b_{2}=-\frac{2c^{2}a_{1}(2\gamma-\alpha-1)}{\alpha-1},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{10}(x,t)&=a_{0}+a_{1}\tanh(cx-\omega
t)-\frac{2c^{2}(4\gamma^{2}-\alpha^{2}-2\alpha-1)}{(\alpha-1)(12\gamma-5\alpha-5)}\tanh^{2}(cx-\omega
t)\\\ &-\frac{2c^{2}a_{1}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega
t).\end{split}$ (24)
Family 11: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=\frac{a_{2}\left(a_{2}\alpha-a_{2}+8\gamma
c^{2}-2c^{2}\alpha-2c^{2}\right)}{4c^{2}\gamma},\;\;a_{1}=0,\;\;a_{2}=a_{2},$
$b_{1}=0,\;b_{2}=-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)},\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{11}(x,t)=\frac{a_{2}\left(a_{2}\alpha-a_{2}+8\gamma
c^{2}-2c^{2}\alpha-2c^{2}\right)}{4c^{2}\gamma}+a_{2}\tanh^{2}(cx-\omega
t)-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)}\coth^{2}(cx-\omega t).$ (25)
Family 12: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=\frac{b_{1}^{2}-b_{1}^{2}\alpha}{2c^{2}\gamma},\;\;a_{1}=-\frac{\sqrt{2a_{2}c^{2}(4\gamma-\alpha-1)+a_{2}^{2}(\alpha-1)}}{\sqrt{2}c\sqrt{\gamma}},$
$a_{2}=a_{2},\;\;b_{1}=b_{1},\;\;b_{2}=0,\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{12}(x,t)&=\frac{b_{1}^{2}-b_{1}^{2}\alpha}{2c^{2}\gamma}-\frac{\sqrt{2a_{2}c^{2}(4\gamma-\alpha-1)+a_{2}^{2}(\alpha-1)}}{\sqrt{2}c\sqrt{\gamma}}\tanh(cx-\omega
t)\\\ &+a_{2}\tanh^{2}(cx-\omega t)+b_{1}\coth(cx-\omega t).\end{split}$ (26)
Family 13: For $\gamma\neq 0$ and $c\neq 0$:
$a_{0}=\frac{8a_{2}^{2}c^{2}\gamma-2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+a_{2}^{3}\alpha-
a_{2}^{3}}{4a_{2}c^{2}\gamma},\;\;a_{1}=a_{1},\;\;a_{2}=a_{2}\neq 0,$
$b_{1}=\frac{a_{1}(16a_{2}^{2}c^{2}\gamma+2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+5a_{2}^{3}\alpha-5a_{2}^{3})}{4a_{2}^{2}c^{2}\gamma},\;\;b_{2}=0,\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{13}(x,t)&=\frac{8a_{2}^{2}c^{2}\gamma-2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+a_{2}^{3}\alpha-
a_{2}^{3}}{4a_{2}c^{2}\gamma}+a_{1}\tanh(cx-\omega t)+a_{2}\tanh^{2}(cx-\omega
t)\\\
&+\frac{a_{1}(16a_{2}^{2}c^{2}\gamma+2a_{1}^{2}c^{2}\gamma-2a_{2}^{2}c^{2}\alpha-2a_{2}^{2}c^{2}+5a_{2}^{3}\alpha-5a_{2}^{3})}{4a_{2}^{2}c^{2}\gamma}\coth(cx-\omega
t).\end{split}$ (27)
Family 14: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=-\frac{3b_{1}\alpha+c^{4}}{c^{2}\gamma(\alpha-1)},\;\;a_{1}=0,\;\;a_{2}=0,\;\;b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{14}(x,t)=-\frac{3b_{1}\alpha+c^{4}}{c^{2}\gamma(\alpha-1)}+b_{1}\coth(cx-\omega
t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (28)
Family 15: For $\alpha\neq 1$, $c\neq 0$ and $9\gamma-7\alpha-7\neq 0$:
$a_{0}=0,\;\;a_{1}=0,\;\;a_{2}=\frac{2c^{2}(6\gamma^{2}-\gamma\alpha-\gamma-\alpha^{2}-2\alpha-1)}{(\alpha-1)(9\gamma-7\alpha-7)},$
$b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{15}(x,t)&=\frac{2c^{2}(6\gamma^{2}-\gamma\alpha-\gamma-\alpha^{2}-2\alpha-1)}{(\alpha-1)(9\gamma-7\alpha-7)}\tanh^{2}(cx-\omega
t)+b_{1}\coth(cx-\omega t)\\\
&-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).\end{split}$
(29)
Family 16: For $\alpha\neq\pm 1$, $c\neq 0$ and $2\gamma-\alpha-1\neq 0$:
$a_{0}=a_{0},\;\;a_{1}=a_{1},\;\;a_{2}=0,\;\;b_{1}=0,\;\;b_{2}=b_{2},\;\omega=\frac{a_{0}c\alpha+3\gamma
b_{2}^{2}}{(1-\alpha^{2})(2\gamma-\alpha-1)}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{16}(x,t)=a_{0}+a_{1}\tanh(cx-\omega t)+b_{2}\coth^{2}(cx-\omega t).$ (30)
Family 17: For $\alpha\neq 1$, $c\neq 0$ and $\gamma-\alpha-1\neq 0$:
$a_{0}=0,\;\;a_{1}=0,\;\;a_{2}=\frac{2c^{2}(9\gamma^{2}-\alpha^{2}-2\alpha-1)}{9(\alpha-1)(\gamma-\alpha-1)},\;b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)},\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{17}(x,t)=\frac{2c^{2}(9\gamma^{2}-\alpha^{2}-2\alpha-1)}{9(\alpha-1)(\gamma-\alpha-1)}\tanh^{2}(cx-\omega t)+b_{1}\coth(cx-\omega t)-\frac{2c^{2}(3\gamma-\alpha-1)}{3(\alpha-1)}\coth^{2}(cx-\omega t).$ (31)
Family 18: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=-\frac{a_{2}(3a_{2}\alpha+10\gamma
c^{2}+2c^{2})}{2c^{2}\gamma},\;\;a_{1}=0,\;\;a_{2}=a_{2},\;b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{18}(x,t)=-\frac{a_{2}(3a_{2}\alpha+10\gamma c^{2}+2c^{2})}{2c^{2}\gamma}+a_{2}\tanh^{2}(cx-\omega t)+b_{1}\coth(cx-\omega t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (32)
Family 19: For $\alpha\neq 1$, $\gamma\neq 0$, $c\neq 0$ and $2a_{1}\neq
b_{1}$:
$a_{0}=-\frac{a_{1}(a_{1}^{2}\alpha^{2}-2a_{1}^{2}\alpha+a_{1}^{2}-8\gamma^{2}c^{4}+4\gamma
c^{4}\alpha+4\gamma
c^{4})}{2c^{2}\gamma(\alpha-1)\left(2a_{1}-b_{1}\right)},\;\;a_{1}=a_{1},\;\;a_{2}=0,$
$b_{1}=b_{1},\;\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{19}(x,t)&=-\frac{a_{1}(a_{1}^{2}\alpha^{2}-2a_{1}^{2}\alpha+a_{1}^{2}-8\gamma^{2}c^{4}+4\gamma
c^{4}\alpha+4\gamma
c^{4})}{2c^{2}\gamma(\alpha-1)\left(2a_{1}-b_{1}\right)}+a_{1}\tanh(cx-\omega
t)+b_{1}\coth(cx-\omega t)\\\
&-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).\end{split}$
(33)
Family 20: For $\alpha\neq\pm 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=0,\;\;a_{1}=a_{1},\;\;a_{2}=0,\;b_{1}=\frac{2c\sqrt{a_{1}(3\gamma-\alpha-1)}}{\sqrt{3}\sqrt{\alpha-1}},\;\;b_{2}=-\frac{c^{4}(3\gamma-5\alpha-1)}{\gamma^{2}(\alpha^{2}-1)},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{20}(x,t)=a_{1}\tanh(cx-\omega t)+\frac{2c\sqrt{a_{1}(3\gamma-\alpha-1)}}{\sqrt{3}\sqrt{\alpha-1}}\coth(cx-\omega t)-\frac{c^{4}(3\gamma-5\alpha-1)}{\gamma^{2}(\alpha^{2}-1)}\coth^{2}(cx-\omega t).$ (34)
Family 21: For $\gamma\neq 0$ and $c\neq 0$:
$a_{0}=0,\;\;a_{1}=0,\;\;a_{2}=a_{2},\;\;b_{1}=\pm\frac{a_{2}\sqrt{\alpha+1}}{\sqrt{\gamma}},\;\;b_{2}=0,\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{21}(x,t)=a_{2}\tanh^{2}(cx-\omega t)\pm\frac{a_{2}\sqrt{\alpha+1}}{\sqrt{\gamma}}\coth(cx-\omega t).$ (35)
Family 22: For $\alpha\neq\pm 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=\frac{3\alpha^{3}-2\gamma
c^{2}}{c\gamma(\alpha^{2}-1)},\;\;a_{1}=0,\;\;a_{2}=-a_{0},\;\;b_{1}=0,\;b_{2}=0,\;\;\omega=\frac{3a_{0}\alpha}{4\gamma
c^{2}(\alpha-1)}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{22}(x,t)=\frac{3\alpha^{3}-2\gamma c^{2}}{c\gamma(\alpha^{2}-1)}-\frac{3\alpha^{3}-2\gamma c^{2}}{c\gamma(\alpha^{2}-1)}\tanh^{2}(cx-\omega t).$ (36)
Family 23: For $\alpha\neq 1$, $c\neq 0$ and $\gamma\neq 0$:
$a_{0}=\frac{3a_{1}^{4}\alpha^{2}-2\gamma
c^{4}}{4c^{2}a_{1}\gamma(\alpha-1)},\;\;a_{1}=a_{1}\neq
0,\;\;a_{2}=0,\;\;b_{1}=0,\;b_{2}=-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1},\;\;\omega=\omega.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$u_{23}(x,t)=\frac{3a_{1}^{4}\alpha^{2}-2\gamma c^{4}}{4c^{2}a_{1}\gamma(\alpha-1)}+a_{1}\tanh(cx-\omega t)-\frac{2c^{2}(2\gamma-\alpha-1)}{\alpha-1}\coth^{2}(cx-\omega t).$ (37)
Family 24: For $\alpha\neq\pm 1$, $\gamma\neq 0$, $c\neq 0$ and
$2\gamma-\alpha-1\neq 0$:
$a_{0}=a_{0},\;\;a_{1}=a_{1},\;\;a_{2}=-\frac{10c^{2}(2\gamma-\alpha-1)}{3(\alpha-1)},\;\;b_{1}=\frac{3a_{1}\sqrt{3a_{0}\gamma-\alpha-1}}{\sqrt{2c\gamma}},$
$b_{2}=-\frac{2a_{1}c^{2}(2\gamma-\alpha-1)}{\alpha^{2}-1},\;\;\omega=\frac{a_{0}c^{3}\alpha}{3(2\gamma-\alpha-1)^{2}}.$
Therefore, proceeding as in the previous cases, the set of solutions for this
family is provided by
$\begin{split}u_{24}(x,t)&=a_{0}+a_{1}\tanh(cx-\omega
t)-\frac{10c^{2}(2\gamma-\alpha-1)}{3(\alpha-1)}\tanh^{2}(cx-\omega t)\\\
&+\frac{3a_{1}\sqrt{3a_{0}\gamma-\alpha-1}}{\sqrt{2c\gamma}}\coth(cx-\omega
t)-\frac{2a_{1}c^{2}(2\gamma-\alpha-1)}{\alpha^{2}-1}\coth^{2}(cx-\omega
t).\end{split}$ (38)
As can be seen in [21, 22, 23, 24, 25] and the references therein, the technique proposed here has been effectively applied by various authors to solve problems involving shallow water waves.
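Every family above instantiates the same tanh-coth ansatz $u(\xi)=a_{0}+a_{1}\tanh\xi+a_{2}\tanh^{2}\xi+b_{1}\coth\xi+b_{2}\coth^{2}\xi$ with $\xi=cx-\omega t$. The following SymPy sketch (our own illustration, not code from the references) checks the mechanical fact the method relies on: since $\partial_{x}\tanh\xi=c(1-\tanh^{2}\xi)$, every derivative of the ansatz remains a rational function of $\tanh\xi$, so substituting it into the PDE reduces the problem to algebraic equations for the coefficients.

```python
import sympy as sp

x, t, c, w = sp.symbols('x t c omega')
a0, a1, a2, b1, b2 = sp.symbols('a0 a1 a2 b1 b2')

xi = c*x - w*t
T = sp.tanh(xi)

# tanh-coth ansatz shared by every family above:
# u = a0 + a1*T + a2*T^2 + b1/T + b2/T^2, with T = tanh(c*x - omega*t)
u = a0 + a1*T + a2*T**2 + b1/T + b2/T**2

# Because d/dx tanh(xi) = c*(1 - tanh(xi)^2), every x-derivative of u is
# again a rational function of T; after substituting tanh(xi) -> s there is
# no explicit x or t left, only polynomial equations in s to balance.
s = sp.symbols('s')
ux = u.diff(x).subs(T, s)
uxx = u.diff(x, 2).subs(T, s)
print(ux.free_symbols & {x, t})   # empty set: derivatives stay inside the T-algebra
```

Balancing the coefficients of each power of $s$ to zero is exactly what produces the parameter families (19)-(38).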
## 4 Graphical presentation of solutions and discussion
In this section, we exhibit 3D and 2D plots of some of the solutions obtained in the previous section in order to study the evolution of the traveling waves and their dependence on $\alpha$, $\gamma$, and the other relevant parameters.
Example 1. Consider Family 8 with the parameters $\alpha=1$, $\gamma=1$, and $c=0.5$, which yields $a_{2}=-1.875$; we take $a_{0}=1.875$ and choose the remaining free coefficients as $a_{1}=0$, $b_{1}=0$, $b_{2}=0$, and $\omega=0.25$. Figure 1 shows the 3D and 2D bell-shaped solution $u_{8}(x,t)$ for these parameters.
Example 2. Consider Family 10 with the parameters $\alpha=-1$, $\gamma=-1$, and $c=3.5$, which yields $a_{2}=-4.08$; we take $a_{0}=4.08$ and choose the remaining free coefficients as $a_{1}=0$, $b_{1}=0$, $b_{2}=0$, and $\omega=0.33$. Figure 2 shows the 3D and 2D bell-shaped solution $u_{10}(x,t)$ for these parameters.
Example 3. Consider Family 22 with the parameters $\alpha=4$, $\gamma=2$, and $c=1.2$, which yields $a_{0}=1.61$, $a_{2}=-1.61$, and $\omega=0.56$. Figure 3 shows the 3D and 2D bell-shaped solution $u_{22}(x,t)$ for these parameters.
Example 4. Consider Family 4 with the parameters $\alpha=2$, $\gamma=1.5$, and $c=2.4$, which yields $a_{2}=3.52$, $b_{2}=0$, and $\omega=2.14$; the free coefficients are chosen as $a_{0}=4.7$ and $a_{1}=4.7$. Figure 4 shows the 3D and 2D kink-shaped solution $u_{4}(x,t)$ for these parameters.
Example 5. Consider Family 13 with the parameters $\alpha=3.01$, $\gamma=2.2$, and $c=2.6$, which yields $a_{0}=8.22$ and $b_{1}=6.51$; the free coefficients are chosen as $a_{1}=5.7$, $a_{2}=2.7$, and $\omega=6.0$. Figure 5 shows the 3D and 2D singular traveling wave solution $u_{13}(x,t)$ for these parameters.
Example 6. Consider Family 5 with the parameters $\alpha=3.4$, $\gamma=2.2$, and $c=-2.4$, which yields $a_{2}=-4.73$ and $b_{2}=0$; the free coefficients are chosen as $a_{0}=5.2$, $a_{1}=2.8$, and $\omega=-5.8$. Figure 6 shows the 3D and 2D anti-kink-shaped solution $u_{5}(x,t)$ for these parameters.
Example 7. Consider Family 16 with the parameters $\alpha=5.5$, $\gamma=2.0$, and $c=6.0$, which yields $\omega=4.15$; the free coefficients are chosen as $a_{0}=23$, $a_{1}=-18.6$, $a_{2}=0$, $b_{1}=0$, and $b_{2}=1.05$. Figure 7 shows the 3D and 2D singular-shaped solution $u_{16}(x,t)$ for these parameters.
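These profiles are easy to reproduce numerically. The sketch below (our own illustration, not code from the paper) evaluates the shared tanh-coth ansatz; $a_{2}$ is computed from the Family 8 formula at the Example 1 parameters, and the resulting profile is the bell shape of Figure 1, peaking at $a_{0}$ where the tanh factor vanishes.

```python
import numpy as np

def tanh_coth_wave(x, t, c, omega, a0=0.0, a1=0.0, a2=0.0, b1=0.0, b2=0.0):
    """Shared ansatz u = a0 + a1*T + a2*T^2 + b1/T + b2/T^2, T = tanh(c*x - omega*t).
    The coth terms (b1, b2) diverge where T = 0, producing the singular profiles."""
    T = np.tanh(c * x - omega * t)
    u = a0 + a1 * T + a2 * T**2
    if b1 or b2:
        u = u + b1 / T + b2 / T**2
    return u

# Family 8 fixes a2 = -3*(c*gamma^2 - alpha + 3) / (2*c^2*(alpha+1)*(3*gamma+1));
# evaluate it at the Example 1 parameters alpha = gamma = 1, c = 0.5.
al, g, c = 1.0, 1.0, 0.5
a2 = -3 * (c * g**2 - al + 3) / (2 * c**2 * (al + 1) * (3 * g + 1))

x = np.linspace(-5.0, 5.0, 201)
u = tanh_coth_wave(x, t=0.0, c=c, omega=0.25, a0=1.875, a2=a2)
print(round(a2, 3), round(u.max(), 3))  # bell peaks at a0 = 1.875 where tanh = 0
```

Passing nonzero `b1`, `b2` instead reproduces the singular shapes of Examples 5 and 7, with the expected divergence along $cx=\omega t$.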
Since $\alpha=1$ and $\gamma=1$, Family 8 studied in Example 1 represents a
set of solutions for the SIdV Eq. (3), whereas Family 10 studied in Example 2
represents a set of solutions for the KdV Eq. (1). In addition, the Family 22
examined in Example 3 is a solution set for the gsiaKdV equation (4), because
$\alpha\neq\pm 1$ and $\gamma\neq 0$. These three examples illustrate that the
proposed technique confirms what has been stated by previous researchers [3,
14, 17, 18], namely that Eqs. (1), (3) and (4) have solutions of the bell-
shaped type (2). Finally, the families considered in Examples 4, 5, 6, and 7
are solution sets for the gsiaKdV equation (4), because $\alpha\neq\pm 1$ and
$\gamma\neq 0$, and it is shown that Eq. (4) admits traveling wave solutions
of the kink, anti-kink, and singular anti-kink varieties [17].
Figure 1: Solution profile $u_{8}(x,t)$ in the interval $-5.0\leq x\leq 5.0$ for the parameters selected in Example 1 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{8}(x,t)$ for $t=0.0$ and $t=4.0$ (right).

Figure 2: Solution profile $u_{10}(x,t)$ in the interval $-1.5\leq x\leq 1.5$ for the parameters selected in Example 2 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{10}(x,t)$ for $t=0.0$ and $t=4.0$ (right).

Figure 3: Solution profile $u_{22}(x,t)$ in the interval $-3.0\leq x\leq 3.0$ for the parameters selected in Example 3 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{22}(x,t)$ for $t=0.0$ and $t=3.0$ (right).

Figure 4: Solution profile $u_{4}(x,t)$ in the interval $-5.0\leq x\leq 5.0$ for the parameters selected in Example 4 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{4}(x,t)$ for $t=0.0$ and $t=3.0$ (right).

Figure 5: Solution profile $u_{13}(x,t)$ in the interval $-3.0\leq x\leq 3.0$ for the parameters selected in Example 5 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{13}(x,t)$ for $t=0.0$ and $t=1.0$ (right).

Figure 6: Solution profile $u_{5}(x,t)$ in the interval $-5.0\leq x\leq 5.0$ for the parameters selected in Example 6 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{5}(x,t)$ for $t=0.0$ and $t=1.0$ (right).

Figure 7: Solution profile $u_{16}(x,t)$ in the interval $-5.0\leq x\leq 5.0$ for the parameters selected in Example 7 (left); wavefront contour plot (center); and 2D plot of the traveling wave solution $u_{16}(x,t)$ for $t=0.0$ (right).
## 5 Conclusions
The gsiaKdV equation is a generalization of both the KdV and SIdV equations; in this mathematical model the term $\big{(}3(1-\alpha)u+(1+\alpha)\frac{u_{xx}}{u}\big{)}$ can be seen as an advecting velocity. In this paper, we have studied the gsiaKdV equation for the first time using the tanh-coth method. The results reveal that this generalization not only shares bell-shaped solutions with both the KdV and SIdV equations, as reported previously by other authors, but also admits other kinds of solutions that could be of interest in the study of shallow water wave motion over the ocean. We believe that the search for novel traveling wave solutions will shed light on new applications of this equation in applied mathematics and engineering, as well as describe the specific behaviour of atmospheric phenomena caused by shallow ocean waves. In future work, other exact solution methods may be applied to the equation studied here.
## Data availability statement
This article has no associated data.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## Funding Information
The authors declare that no funds, grants, or other support were received
during the preparation of this manuscript.
## References
* [1] Korteweg DJ, de Vries G. On the change of form of long waves advancing in a rectangular canal and on a new type of long stationary wave. Philos Mag 1895; 39: 422–443.
* [2] Helal MA, Mehanna MS. A comparative study between two different methods for solving the general Korteweg-de Vries equation (GKdV). Chaos, Solitons and Fractals 2007; 33: 725–739.
* [3] Sen A, Ahalpara DP, Thyagaraja A, Krishnaswami GS. A KdV-like advection–dispersion equation with some remarkable properties. Commun Nonlinear Sci Numer Simul 2012; 17: 4115-4124.
* [4] Wazwaz AM. Integrable $(3+1)-$dimensional Ito equation: variety of lump solutions and multiple-soliton solutions. Nonlinear Dyn 2022; 109; 1929-1934.
* [5] Wazwaz AM. New $(3+1)-$dimensional Painlevé integrable fifth-order equation with third-order temporal dispersion. Nonlinear Dyn 2021; 106; 891–897.
* [6] Wazwaz AM. Multi-soliton solutions for integrable $(3+1)-$dimensional modified seventh-order Ito and seventh-order Ito equations. Nonlinear Dyn 2022; https://doi.org/10.1007/s11071-022-07818-4
* [7] Wazwaz AM. A new integrable nonlocal modified KdV equation: Abundant solutions with distinct physical structures. J Ocean Eng Sci 2017; 2: 1-4.
* [8] Wazwaz AM. New sets of solitary wave solutions to the KdV, mKdV, and the generalized KdV equations. Commun Nonlinear Sci Numer Simul 2008; 13: 331-339.
* [9] Mirzazadeh M, Eslami M, Biswas A. 1-Soliton solution of KdV6 equation. Nonlinear Dyn 2015; 80; 387-396.
* [10] González-Gaxiola O, León-Ramírez A, Chacón-Acosta G. Application of the Kudryashov method for finding exact solutions of the Schamel-Kawahara equation. Russian J of Nonlinear Dynamics 2022; 18: 203-215.
* [11] Kudryashov NA. Lax pair and first integrals of the traveling wave reduction for the KdV hierarchy. Appl Math Comp 2019; 350: 323-330.
* [12] Kudryashov NA. Painlevé analysis and exact solutions of the Korteweg-de Vries equation with a source. Appl Math Lett 2015; 41: 41-45.
* [13] Biswas A. Solitary wave solution for the generalized KdV equation with time-dependent damping and dispersion. Commun Nonlinear Sci Numer Simul 2009; 14: 3503–3506.
* [14] Leal da Silva P, Freire IL, Sampaio JCS. A family of wave equations with some remarkable properties. Proc R Soc A 2018; 474: 20170763.
* [15] Zhang G, He J, Wang L, Mihalache D. Kink-type solutions of the SIdV equation and their properties. R Soc open sci 2019; 6: 191040.
* [16] Qiao Z, Fan E. Negative-order Korteweg-de Vries equations. Phys Rev E 2012; 86: 016601.
* [17] Alzaleq L, Manoranjan V, Alzalg B. Exact traveling waves of a generalized scale-invariant analogue of the Korteweg-de Vries equation. Mathematics 2022; 10: 414.
* [18] Fan X, Yin J. Two types of traveling wave solutions of a KdV-like advection-dispersion equation. Mathematica Aeterna 2012; 2: 273-282.
* [19] Malfliet W, Hereman W. The tanh method: I. Exact solutions of nonlinear evolution and wave equations. Physica Scripta 1996; 54: 563-568.
* [20] Malfliet W, Hereman W. The tanh method: II. Perturbation technique for conservative systems. Physica Scripta 1996; 54: 569-575.
* [21] Gözükızıl Ö F, Akçağıl Ş. The tanh-coth method for some nonlinear pseudoparabolic equations with exact solutions. Adv Differ Equ 2013; 2013: 143.
* [22] Wazwaz AM. The extended tanh method for new solitons solutions for many forms of the fifth-order KdV equations. Appl Math Comput 2007; 184: 1002-1014.
* [23] Wazwaz AM. The Hirota’s direct method and the tanh-coth method for multiple soliton solutions of the Sawada-Kotera-Ito seventh-order equation. Appl Math Comput 2008; 199: 133-138.
* [24] Gomez Sierra CA, Salas AH. The generalized tanh-coth method to special types of the fifth-order KdV equation. Appl Math Comput 2008; 203: 873-880.
* [25] Gomez Sierra CA. On a KdV equation with higher-order nonlinearity: Traveling wave solutions. J Comput Appl Math 2011; 235: 5330-5332.
# Spin-2 dark matter from anisotropic Universe in bigravity
Yusuke Manita<EMAIL_ADDRESS>Department of Physics, Kyoto
University, Kyoto 606-8502, Japan Katsuki Aoki Center for Gravitational
Physics and Quantum Information, Yukawa Institute for Theoretical Physics,
Kyoto University, 606-8502, Kyoto, Japan Tomohiro Fujita Waseda Institute for
Advanced Study, Shinjuku, Tokyo 169-8050, Japan Shinji Mukohyama Center for
Gravitational Physics and Quantum Information, Yukawa Institute for
Theoretical Physics, Kyoto University, 606-8502, Kyoto, Japan Kavli Institute
for the Physics and Mathematics of the Universe (WPI), The University of
Tokyo, 277-8583, Chiba, Japan
###### Abstract
Bigravity is one of the natural extensions of general relativity and contains
an additional massive spin-2 field which can be a good candidate for dark
matter. To discuss the production of spin-2 dark matter, we study fixed point
solutions of the background equations for axisymmetric Bianchi type-I
Universes in two bigravity theories without Boulware-Deser ghost, i.e.,
Hassan-Rosen bigravity and Minimal Theory of Bigravity. We investigate the
local and global stability of the fixed points and classify them. Based on the
general analysis, we propose a new scenario where spin-2 dark matter is
produced by the transition from an anisotropic fixed point solution to
isotropic one. The produced spin-2 dark matter can account for all or a part
of dark matter and can be directly detected by laser interferometers in the
same way as gravitational waves.
††preprint: KUNS-2946, YITP-22-150, IPMU22-0063
## I Introduction
Dark matter is an unknown matter component that accounts for more than 20% of
the total energy density in the current Universe Aghanim _et al._ (2020). Its
true nature is still unknown and has been actively explored from both
theoretical and observational perspectives. On the theoretical side, numerous dark matter models with a broad mass range have been proposed, and various detection methods for each model have been suggested Lin (2019). For example, an ultralight bosonic field is one of the candidates for dark matter. A
scalar field candidate with the lightest mass scale around
$\mathcal{O}(10^{-21})$ eV is called fuzzy dark matter, and it is expected to
solve the small-scale problems such as the core-cusp problem. The QCD axion,
which was originally introduced to solve the strong CP problem Peccei and
Quinn (1977), is also a scalar-type dark matter candidate, especially well
motivated in a very light mass range, $m\ll 1$eV.
Since typical intrinsic characteristics of bosonic particles are mass and
spin, it is natural to consider ultralight dark matter with nonzero spin. One such extension is the dark photon, a vector-type ultralight dark matter candidate. Unlike scalar-type ultralight dark matter, dark photons include
helicity-one modes. Recently, the phenomenology of the dark photon has been
investigated, for example, its production mechanisms Dror _et al._ (2019);
Bastero-Gil _et al._ (2019); Ema _et al._ (2019); Nakai _et al._ (2020);
Salehian _et al._ (2021); Firouzjahi _et al._ (2021), superradiance East and
Pretorius (2017); Cardoso _et al._ (2018); East (2017), etc.
Furthermore, the tensor-type dark matter model called spin-2 dark matter has
been proposed Aoki and Mukohyama (2016); Aoki and Maeda (2018); Babichev _et
al._ (2016a, b). Several production mechanisms of spin-2 dark matter have been investigated so far, for example, generation by primordial magnetic fields Aoki and Maeda (2018), bubble collisions in the preheating era Aoki and Mukohyama (2016), and the misalignment mechanism Marzola _et al._ (2018).
Ultralight dark matter is also interesting from an observational point of
view. Some ultralight dark matter models are expected to give detectable
signals to gravitational wave interferometers. For example, axion-like
particles couple to electromagnetic fields can cause the birefringence of the
laser beams DeRocco and Hook (2018); Obata _et al._ (2018); Nagano _et al._
(2019). The ultralight dark photon can also generates detectable signals by
accelerating the mirrors of the gravitational wave detectors when it couples
to baryonic matters Abbott _et al._ (2022); Michimura _et al._ (2020);
Miller _et al._ (2021); Morisaki _et al._ (2021). Spin-2 dark matter can
also leave detectable signals in the gravitational wave detectors by changing
the effective length of the arm in a similar way to usual gravitational waves
Armaleo _et al._ (2021).
Spin-2 dark matter is closely related to massive gravity since it has a
nonzero mass and couples to the matter fields as a usual graviton. Massive
gravity has a long history, beginning with the pioneering work of linear
massive gravity by Fierz and Pauli in 1939 Fierz and Pauli (1939). This theory
can be generalized to the nonlinear level, but Ref. Boulware and Deser (1972)
found that non-linear massive gravity suffers from a ghost instability, which
is often called the Boulware-Deser ghost. In 2010, the first ghost-free
nonlinear massive gravity (dRGT theory) was proposed de Rham and Gabadadze
(2010); de Rham _et al._ (2011), and it possesses five degrees of freedom.
Motivated by difficulties in the cosmology of massive gravity De Felice _et
al._ (2012), some extensions of dRGT theory have been explored. For example, the Minimal Theory of Massive Gravity reduces the number of degrees of freedom to only two by imposing constraints De Felice and Mukohyama (2016). A more recent development is the extension of Lorentz-invariant massive gravity,
called generalized massive gravity and projected massive gravity De Rham _et
al._ (2014); Gumrukcuoglu _et al._ (2020). They can describe cosmic expansion
without the initial strong coupling problem Kenna-Allison _et al._ (2020);
Manita and Kimura (2022). Massive gravity with a single graviton can be a candidate for the origin of the accelerated expansion of the Universe, but it
is difficult to construct a viable model of spin-2 dark matter based on
massive gravity satisfying the strong mass constraint $m\lesssim 1.2\times
10^{-22}\,\text{eV}$ from the gravitational wave observation of BH-BH merger
Abbott _et al._ (2016). 111The graviton mass bound for single massive gravity
is summarized in de Rham _et al._ (2017) (see also De Felice _et al._
(2021a)). Some of them are stronger than the GW constraint but are model
dependent.
On the other hand, spin-2 dark matter can originate from bigravity, which is a gravity theory with two dynamical metrics. The first proposal of bigravity without the Boulware-Deser ghost, called Hassan-Rosen bigravity Hassan and Rosen (2012), was accomplished by extending dRGT massive gravity de Rham and Gabadadze (2010); de Rham _et al._ (2011). The Hassan-Rosen
bigravity has seven degrees of freedom because it can be regarded as a
nonlinear theory in which massive and massless gravitons are interacting with
each other. On the other hand, Minimal Theory of Bigravity De Felice _et al._
(2021b) is a ghost-free bigravity with only four degrees of freedom, which is
constructed by extending the Minimal Theory of Massive Gravity De Felice and
Mukohyama (2016).
In dRGT theory, Ref. Gumrukcuoglu _et al._ (2012) found that the background
equations in Bianchi type-I Universe possess a fixed point solution and
discussed an anisotropic FLRW Universe, in which each of the physical and
fiducial metrics is homogeneous and isotropic but they do not share the same
rotational Killing vectors and thus the system as a whole breaks the isotropy.
In this paper, by extending the previous work, we find a fixed point with
relatively large anisotropy for both Hassan-Rosen bigravity and the Minimal
Theory of Bigravity. Moreover, by using this fixed point, we discuss a new
scenario to produce spin-2 dark matter from the large anisotropy in the early
universe. Since in bigravity, the anisotropic perturbation of FLRW universe
can be regarded as spin-2 dark matter Maeda and Volkov (2013); Aoki and Maeda
(2018), if the early universe is anisotropic, it may give an initial amplitude
for the spin-2 dark matter.
This paper is organized as follows. In Sec. II, we introduce two bigravity
theories without Boulware-Deser ghost, i.e., Hassan-Rosen bigravity and
Minimal Theory of Bigravity. In Sec. III, we consider the Bianch type-I
Universe as an example of the anisotropic Universe, and show that the
background equations are the same for both bigravity theories. We then find
the anisotropic fixed point solutions, in which each metric is homogeneous and
isotropic but they do not share the same rotational Killing vectors. In Sec.
IV, we classify the fixed points by their local stability and investigate
their global stability by drawing the phase portraits around them. In Sec. V,
as an implication of the anisotropic fixed point in bigravity, we discuss the
production of the spin-2 dark matter and its detectability by gravitational
wave interferometers. Section VI is devoted to conclusions.
## II Bigravity
Bigravity is one of the extensions of general relativity that has two
dynamical metrics $g_{\mu\nu}$ and $f_{\mu\nu}$ interacting with each other.
The action of bigravity is given by
$\displaystyle S_{g}$ $\displaystyle=\frac{1}{2\kappa_{g}^{2}}\int
d^{4}x\sqrt{-g}R^{(g)}+\frac{1}{2\kappa_{f}^{2}}\int d^{4}x\sqrt{-f}R^{(f)}$
$\displaystyle+\frac{m^{2}}{\kappa^{2}}\int d^{4}x{\cal L}_{\rm
int}[g_{\mu\nu},f_{\mu\nu}]\,,$ (1)
where $R^{(g)}$ and $R^{(f)}$ are the Ricci scalars for $g_{\mu\nu}$ and
$f_{\mu\nu}$, respectively. The first and second terms are the Einstein-
Hilbert terms of the $g$-sector and the $f$-sector with the gravitational
constants $\kappa_{g}^{2}$ and $\kappa_{f}^{2}$. The third term represents
interactions between $g$-metric and $f$-metric, $m$ denotes a mass parameter,
and $\kappa^{2}$ is defined by $\kappa^{2}:=\kappa_{g}^{2}+\kappa_{f}^{2}$.
For later convenience, we also introduce the ratio of the gravitational
constants
$\displaystyle\alpha:=\frac{\kappa_{g}}{\kappa_{f}}\,.$ (2)
The interaction term depends on the model. At least two bigravity models
without Boulware-Deser ghost have been proposed so far: the Hassan-Rosen
bigravity (HRBG) Hassan and Rosen (2012) and the Minimal Theory of Bigravity
(MTBG) De Felice _et al._ (2021b). In this section, we will briefly review
those bigravity theories.
### II.1 Hassan-Rosen bigravity
HRBG is the ghost-free bigravity which is constructed by extending the dRGT
massive gravity Hassan and Rosen (2012). The interaction term for HRBG is
given by
$\displaystyle{\cal L}_{\rm int}=\sqrt{-g}\sum_{n=0}^{4}b_{n}e_{n}({{\cal
K}})=\sqrt{-f}\sum_{n=0}^{4}b_{4-n}e_{n}({\tilde{{\cal K}}})\,,$ (3)
with constant parameters $b_{k}$. Here ${\cal K}^{\mu}{}_{\nu}$ is defined as
the root of
$\displaystyle{\cal K}^{\mu}{}_{\alpha}{\cal
K}^{\alpha}{}_{\nu}=g^{\mu\alpha}f_{\alpha\nu}\,,$ (4)
$\tilde{{\cal K}}^{\mu}{}_{\nu}$ is its inverse satisfying
$\displaystyle\tilde{{\cal K}}^{\mu}{}_{\alpha}{\cal
K}^{\alpha}{}_{\nu}=\delta^{\mu}_{\nu}={\cal K}^{\mu}{}_{\alpha}\tilde{{\cal
K}}^{\alpha}{}_{\nu}\,,\quad\tilde{{\cal K}}^{\mu}{}_{\alpha}\tilde{{\cal
K}}^{\alpha}{}_{\nu}=f^{\mu\alpha}g_{\alpha\nu}\,,$ (5)
and $e_{n}({\cal M})$ denote the elementary symmetric polynomials of degree
$n$ in the matrix ${\cal M}^{\mu}{}_{\nu}$,
$\displaystyle e_{0}({\cal M})$ $\displaystyle=1\,,$ (6) $\displaystyle
e_{1}({\cal M})$ $\displaystyle=[{\cal M}]\,,$ (7) $\displaystyle e_{2}({\cal
M})$ $\displaystyle=\frac{1}{2!}([{\cal M}]^{2}-[{\cal M}^{2}])\,,$ (8)
$\displaystyle e_{3}({\cal M})$ $\displaystyle=\frac{1}{3!}([{\cal
M}]^{3}-3[{\cal M}][{\cal M}^{2}]+2[{\cal M}^{3}])\,,$ (9) $\displaystyle
e_{4}({\cal M})$ $\displaystyle=\frac{1}{4!}([{\cal M}]^{4}-6[{\cal
M}]^{2}[{\cal M}^{2}]+8[{\cal M}][{\cal M}^{3}]$ $\displaystyle+3[{\cal
M}^{2}]^{2}-6[{\cal M}^{4}])\,,$ (10)
where the square brackets denote the trace, $[{\cal M}]={\cal M}^{\mu}{}_{\mu},~{}[{\cal M}^{2}]={\cal M}^{\mu}{}_{\nu}{\cal M}^{\nu}{}_{\mu}$, and so on. The interaction term (3) is argued to be the unique one that avoids the Boulware-Deser ghost under Poincaré invariance Hassan and Rosen (2012).
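As a quick numerical cross-check of the trace formulas (6)-(10), the elementary symmetric polynomials of a $4\times 4$ matrix coincide, up to alternating signs, with the coefficients of its characteristic polynomial. A minimal sketch (the random test matrix and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))         # arbitrary test matrix

tr = np.trace
M2, M3, M4 = M @ M, M @ M @ M, M @ M @ M @ M

# Elementary symmetric polynomials from the trace formulas (6)-(10)
e = [1.0,
     tr(M),
     (tr(M)**2 - tr(M2)) / 2,
     (tr(M)**3 - 3*tr(M)*tr(M2) + 2*tr(M3)) / 6,
     (tr(M)**4 - 6*tr(M)**2*tr(M2) + 8*tr(M)*tr(M3)
      + 3*tr(M2)**2 - 6*tr(M4)) / 24]

# det(lam*I - M) = sum_n (-1)^n e_n lam^(4-n), so numpy.poly(M),
# which returns the characteristic-polynomial coefficients,
# should equal (-1)^n e_n term by term.
assert np.allclose(np.poly(M), [(-1)**n * e[n] for n in range(5)])
assert np.isclose(e[4], np.linalg.det(M))   # e_4 = det for a 4x4 matrix
```

In particular, $e_{1}({\cal M})=[{\cal M}]$ is the trace and $e_{4}({\cal M})=\det{\cal M}$ for a $4\times 4$ matrix.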
HRBG possesses $2+5$ physical degrees of freedom, corresponding to the
massless graviton and the massive graviton. The massless graviton has two
degrees of freedom as in general relativity while the massive graviton has
five degrees of freedom corresponding to the helicity modes 0, $\pm$1, and
$\pm$2.
### II.2 Minimal theory of bigravity
Although a massive spin-2 field has five degrees of freedom under Lorentz invariance, the physical degrees of freedom can be reduced to only two by breaking the Lorentz symmetry. The resultant theory is known as the Minimal Theory of Massive Gravity De Felice and Mukohyama (2016), and MTBG is its bigravity extension De Felice _et al._ (2021b). Similarly to HRBG, MTBG possesses one massless and one massive graviton, but in MTBG the massive state carries only two tensorial degrees of freedom.
To construct the action of MTBG, we first adopt the ADM decompositions for
both metrics
$\displaystyle g_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=-N_{g}^{2}dt^{2}+\gamma^{g}_{ij}(N_{g}^{i}dt+dx^{i})(N_{g}^{j}dt+dx^{j})\,,$
(11) $\displaystyle f_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=-N_{f}^{2}dt^{2}+\gamma^{f}_{ij}(N_{f}^{i}dt+dx^{i})(N_{f}^{j}dt+dx^{j})\,,$
(12)
where $N_{g},~{}N_{f}$ are the lapse functions, $N_{g}^{i},~{}N_{f}^{i}$ are the shift vectors, and $\gamma^{g}_{ij},~{}\gamma^{f}_{ij}$ are the induced metrics on the constant-time hypersurfaces, respectively. We define the covariant derivatives on a constant-time hypersurface, ${\cal D}^{g}_{i},~{}{\cal D}^{f}_{i}$, associated with the induced metrics $\gamma^{g}_{ij},~{}\gamma^{f}_{ij}$. The extrinsic curvatures are then
$\displaystyle K^{g}_{ij}=\frac{1}{2N_{g}}(\partial_{t}\gamma^{g}_{ij}-{\cal
D}^{g}_{i}N^{g}_{j}-{\cal D}^{g}_{j}N^{g}_{i})\,,$ (13) $\displaystyle
K^{f}_{ij}=\frac{1}{2N_{f}}(\partial_{t}\gamma^{f}_{ij}-{\cal
D}^{f}_{i}N^{f}_{j}-{\cal D}^{f}_{j}N^{f}_{i})\,.$ (14)
The interaction Lagrangian in MTBG is composed of the precursor part and the
constraint part
$\displaystyle{\cal L}_{\rm int}$ $\displaystyle={\cal L}_{\rm
int,prec}[\gamma^{g}_{ij},\gamma^{f}_{ij},\gamma_{g}^{ij},\gamma_{f}^{ij},K^{g}_{ij},K^{f}_{ij}]$
$\displaystyle+{\cal L}_{\rm
int,const}[\gamma^{g}_{ij},\gamma^{f}_{ij},\gamma_{g}^{ij},\gamma_{f}^{ij},K^{g}_{ij},K^{f}_{ij}]\,.$
(15)
They are explicitly given by
$\displaystyle{\cal L}_{\rm
int,prec}[\gamma^{g}_{ij},\gamma^{f}_{ij},\gamma_{g}^{ij},\gamma_{f}^{ij},K^{g}_{ij},K^{f}_{ij}]$
$\displaystyle=-\frac{1}{2}\left(N_{g}\sqrt{\gamma^{g}}\sum_{n=0}^{3}b_{n}e_{n}(\mathfrak{K})+N_{f}\sqrt{\gamma^{f}}\sum_{n=0}^{3}b_{4-n}e_{n}(\tilde{\mathfrak{K}})\right)\,,$
(16) $\displaystyle{\cal L}_{\rm
int,const}[\gamma^{g}_{ij},\gamma^{f}_{ij},\gamma_{g}^{ij},\gamma_{f}^{ij},K^{g}_{ij},K^{f}_{ij}]$
$\displaystyle=-\frac{1}{2}\Bigg{[}\sqrt{\gamma^{g}}\mathcal{U}^{i}{}_{j}\mathcal{D}^{g}_{i}\lambda^{j}-\beta\sqrt{\gamma^{f}}\tilde{\mathcal{U}}^{i}{}_{j}{\cal
D}^{f}_{i}\lambda^{j}$ $\displaystyle\qquad+\left(\lambda+\gamma_{g}^{ij}{\cal
D}^{g}_{i}{\cal
D}^{g}_{j}\bar{\lambda}\right)\sqrt{\gamma^{g}}\mathcal{U}^{k}{}_{l}\gamma_{g}^{lm}K^{g}_{mk}$
$\displaystyle\qquad-\left(\lambda-\gamma_{f}^{ij}{\cal D}^{f}_{i}{\cal
D}^{f}_{j}\bar{\lambda}\right)\sqrt{\gamma^{f}}\tilde{\mathcal{U}}^{k}{}_{l}\gamma_{f}^{lm}K^{f}_{mk}$
$\displaystyle\qquad+\frac{m_{g}^{2}\left(\lambda+\gamma_{g}^{ij}{\cal
D}^{g}_{i}{\cal
D}^{g}_{j}\bar{\lambda}\right)^{2}}{4N_{g}}\sqrt{\gamma^{g}}\left([\mathcal{U}^{2}]-\frac{1}{2}[{\cal
U}]^{2}\right)$
$\displaystyle\qquad+\frac{m_{f}^{2}\left(\lambda-\gamma_{f}^{ij}{\cal
D}^{f}_{i}{\cal
D}^{f}_{j}\bar{\lambda}\right)^{2}}{4N_{f}}\sqrt{\gamma^{f}}\left([\tilde{\mathcal{U}}^{2}]-\frac{1}{2}[\tilde{\mathcal{U}}]^{2}\right)\Bigg{]}\,,$
(17)
where $\lambda,~{}\lambda^{i},~{}\bar{\lambda},~{}\bar{\lambda}^{i}$ are the
Lagrange multipliers, $\beta$ is a constant parameter, and
$\displaystyle m_{g}:=m\frac{\kappa_{g}}{\kappa}=\frac{\alpha
m}{\sqrt{1+\alpha^{2}}}\,,\quad
m_{f}:=m\frac{\kappa_{f}}{\kappa}=\frac{m}{\sqrt{1+\alpha^{2}}}\,.$ (18)
The matrix $\mathfrak{K}^{i}{}_{j}$ and its inverse
$\tilde{\mathfrak{K}}^{i}{}_{j}$ are the roots of
$\displaystyle\mathfrak{K}^{i}{}_{k}\mathfrak{K}^{k}{}_{j}=\gamma_{g}^{ik}\gamma^{f}_{kj}\,,\quad\tilde{\mathfrak{K}}^{i}{}_{k}\tilde{\mathfrak{K}}^{k}{}_{j}=\gamma_{f}^{ik}\gamma^{g}_{kj}\,,$
(19)
and ${\cal U}^{i}{}_{j},\tilde{{\cal U}}^{i}{}_{j}$ are the derivatives of the
symmetric polynomials
$\displaystyle{\cal U}^{i}{}_{j}$
$\displaystyle:=\frac{1}{2}\sum_{n=0}^{3}b_{n}\left(\frac{\partial
e_{n}(\mathfrak{K})}{\partial\mathfrak{K}^{j}{}_{i}}+\gamma_{g}^{ik}\gamma^{g}_{jl}\frac{\partial
e_{n}(\mathfrak{K})}{\partial\mathfrak{K}^{k}{}_{l}}\right)\,,$ (20)
$\displaystyle\tilde{{\cal U}}^{i}{}_{j}$
$\displaystyle:=\frac{1}{2}\sum_{n=0}^{3}b_{4-n}\left(\frac{\partial
e_{n}(\tilde{\mathfrak{K}})}{\partial\tilde{\mathfrak{K}}^{j}{}_{i}}+\gamma_{f}^{ik}\gamma^{f}_{jl}\frac{\partial
e_{n}(\tilde{\mathfrak{K}})}{\partial\tilde{\mathfrak{K}}^{k}{}_{l}}\right)\,.$
(21)
The precursor part possesses a structure similar to the interaction term of
HRBG while the constraint part is added to eliminate the scalar and vector
modes of the massive graviton. MTBG is constructed in such a way that
background equations for a homogeneous universe coincide with those in HRBG.
In Ref. De Felice _et al._ (2021b), this was checked only for the FLRW case.
In the next section, we will show that the background equations are identical
also for the Bianchi type-I Universe.
## III Anisotropic universe
Let us investigate the homogeneous and anisotropic universe both in HRBG and
MTBG. For simplicity, we study the axisymmetric Bianchi type-I universe in
vacuum. We first show that the background equations are identical in HRBG and
MTBG, meaning that our following analysis can be applied to both bigravity
theories. Then, we describe the generic structure of the equations of motion.
These equations have several fixed points which we discuss in this section.
The stability of the fixed points will be studied in the next section.
### III.1 Equations of motion
For both the $g$\- and $f$-metrics, we take as metric ansatz the homogeneous Bianchi type-I spacetime
$\displaystyle g_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=-N_{g}^{2}dt^{2}+a_{g}^{2}\left[e^{4\beta_{g}}dx^{2}+e^{-2\beta_{g}}\left(dy^{2}+dz^{2}\right)\right]\,,$
(22) $\displaystyle f_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=-N_{f}^{2}dt^{2}+a_{f}^{2}\left[e^{4\beta_{f}}dx^{2}+e^{-2\beta_{f}}\left(dy^{2}+dz^{2}\right)\right]\,,$
(23)
where the lapse functions $\\{N_{g},N_{f}\\}$, the scale factors $\\{a_{g},a_{f}\\}$, and the anisotropies $\\{\beta_{g},\beta_{f}\\}$ are functions of time $t$. The $g$\- and $f$-Hubble expansion rates and the shears
are defined by
$\displaystyle H_{g}$ $\displaystyle:=\frac{\dot{a}_{g}}{a_{g}N_{g}}\,,\quad
H_{f}:=\frac{\dot{a}_{f}}{a_{f}N_{f}}\,,$ (24) $\displaystyle\sigma_{g}$
$\displaystyle:=\frac{\dot{\beta}_{g}}{N_{g}}\,,\quad\sigma_{f}:=\frac{\dot{\beta}_{f}}{N_{f}}\,.$
(25)
For simplicity, we study vacuum solutions in the following.
The mini-superspace action in HRBG is given by Maeda and Volkov (2013)
$\displaystyle S_{\rm mHRBG}=\frac{V}{2\kappa_{g}^{2}}$ $\displaystyle\int
dta_{g}^{3}N_{g}$ $\displaystyle\times\Bigg{\\{}$
$\displaystyle-6(H_{g}^{2}-\sigma_{g}^{2})-6\alpha^{2}r\xi^{4}(H_{f}^{2}-\sigma_{f}^{2})$
$\displaystyle+m_{g}^{2}\Big{[}b_{0}+b_{1}\xi(r+e^{-2\beta}+2e^{\beta})$
$\displaystyle+b_{2}\xi^{2}\left[2e^{-\beta}+e^{2\beta}+r(e^{-2\beta}+2e^{\beta})\right]$
$\displaystyle+b_{3}\xi^{3}\left[1+r(2e^{-\beta}+e^{2\beta})\right]+b_{4}r\xi^{4}\Big{]}\Bigg{\\}}\,,$
(26)
where
$\displaystyle\xi:=\frac{a_{f}}{a_{g}}\,,\quad r:=\frac{a_{g}N_{f}}{a_{f}N_{g}}\,,\quad\beta:=\beta_{g}-\beta_{f}\,,$ (27)
and $V\equiv\int d^{3}x$ formally represents the comoving volume of the
system. Varying the action with respect to
$X=\\{N_{g},~{}N_{f},~{}a_{g},~{}a_{f},~{}\beta_{g},~{}\beta_{f}\\}$, we
obtain the background equations in the form ${\cal E}_{X}=0$:
$\displaystyle{\cal E}_{N_{g}}$
$\displaystyle:=3(H_{g}^{2}-\sigma_{g}^{2})-m_{g}^{2}\Big{[}b_{0}+b_{1}\left(e^{-2\beta}+2e^{\beta}\right)\xi$
$\displaystyle+b_{2}\left(2e^{-\beta}+e^{2\beta}\right)\xi^{2}+b_{3}\xi^{3}\Big{]}\,,$
(28) $\displaystyle{\cal E}_{N_{f}}$
$\displaystyle:=3(H_{f}^{2}-\sigma_{f}^{2})-m_{f}^{2}\Big{[}b_{4}+b_{3}\left(2e^{-\beta}+e^{2\beta}\right)\xi^{-1}$
$\displaystyle+b_{2}\left(e^{-2\beta}+2e^{\beta}\right)\xi^{-2}+b_{1}\xi^{-3}\Big{]}\,,$
(29) $\displaystyle{\cal E}_{a_{g}}$
$\displaystyle:=\frac{2\dot{H}_{g}}{N_{g}}+3(H_{g}^{2}+\sigma_{g}^{2})$
$\displaystyle-\frac{m_{g}^{2}}{3}\big{\\{}3b_{0}+b_{1}\xi\left(3r+2e^{-2\beta}+4e^{\beta}\right)$
$\displaystyle+b_{2}\xi^{2}\left[2r\left(2e^{\beta}+e^{-2\beta}\right)+\left(e^{2\beta}+2e^{-\beta}\right)\right]$
$\displaystyle+b_{3}r\left(e^{2\beta}+2e^{-\beta}\right)\xi^{3}\big{\\}}\,,$
(30) $\displaystyle{\cal E}_{a_{f}}$
$\displaystyle:=\frac{2\dot{H}_{f}}{N_{f}}+3(H_{f}^{2}+\sigma_{f}^{2})-\frac{m_{f}^{2}}{3r\xi^{3}}\big{\\{}b_{1}\left(e^{-2\beta}+2e^{\beta}\right)$
$\displaystyle+b_{2}\xi\left[r\left(e^{-2\beta}+2e^{\beta}\right)+2\left(e^{2\beta}+2e^{-\beta}\right)\right]$
$\displaystyle+b_{3}\xi^{2}\left[2r\left(e^{2\beta}+2e^{-\beta}\right)+3\right]+12b_{4}r\xi^{3}\big{\\}}\,,$
(31) $\displaystyle{\cal E}_{\beta_{g}}$
$\displaystyle:=\frac{1}{a_{g}^{3}}\frac{d}{dt}\left(a_{g}^{3}\sigma_{g}\right)+\kappa_{g}^{2}\frac{\partial
U}{\partial\beta}\,,$ (32) $\displaystyle{\cal E}_{\beta_{f}}$
$\displaystyle:=\frac{1}{a_{g}^{3}}\frac{d}{dt}\left(a_{f}^{3}\sigma_{f}\right)-\kappa_{f}^{2}\frac{\partial
U}{\partial\beta}\,,$ (33)
where we have defined the potential of the anisotropy as
$\displaystyle U:=\frac{m^{2}}{6\kappa^{2}}\big{[}$
$\displaystyle\xi\left(2e^{\beta}+e^{-2\beta}\right)\left(b_{1}N_{g}+b_{2}N_{f}\right)$
$\displaystyle+\xi^{2}\left(e^{2\beta}+2e^{-\beta}\right)\left(b_{2}N_{g}+b_{3}N_{f}\right)\big{]}\,.$
(34)
As shown in the Friedmann equations (28) and (29), the cosmic expansion is
driven by the anisotropic shears $\sigma_{g},\sigma_{f}$ and the graviton mass
term. By using ${\cal E}_{a_{g}}=0$ and ${\cal E}_{\beta_{g}}=0$, we can
eliminate $\dot{H}_{g},\dot{\sigma}_{g}$ from $\dot{{\cal E}}_{N_{g}}=0$ and
then obtain the constraint equation ${\cal C}=0$ with
$\displaystyle{\cal C}$
$\displaystyle:=H_{g}\left[3b_{1}+2b_{2}\xi\left(2e^{\beta}+e^{-2\beta}\right)+b_{3}\xi^{2}\left(e^{2\beta}+2e^{-\beta}\right)\right]$
$\displaystyle-
H_{f}\xi\left[3b_{3}\xi^{2}+2b_{2}\xi\left(e^{2\beta}+2e^{-\beta}\right)+b_{1}\left(2e^{\beta}+e^{-2\beta}\right)\right]$
$\displaystyle-2\xi\left(e^{-\beta}-e^{2\beta}\right)\left[\sigma_{f}\left(b_{1}e^{-\beta}+b_{2}\xi\right)+\sigma_{g}\left(b_{2}e^{-\beta}+b_{3}\xi\right)\right]\,.$
The same constraint equation is obtained by using $\dot{{\cal
E}}_{N_{f}}=0,~{}{\cal E}_{a_{f}}=0$, and ${\cal E}_{\beta_{f}}=0$ instead.
The minisuperspace action of MTBG is composed of the precursor part $S^{\rm
mMTBG}_{\rm pre}$ and the constraint part $S_{\rm mMTBG}^{\rm const}$, where
the precursor part agrees with the minisuperspace action of HRBG (26) in the
Bianchi type-I universe. Spatial homogeneity implies that all spatial derivatives vanish, so the minisuperspace action does not depend on $\bar{\lambda}$ and $\lambda^{i}$. The contribution of the constraint
part to the mini-superspace action is given by a functional of
$X=\\{N_{g},~{}N_{f},~{}a_{g},~{}a_{f},~{}\beta_{g},~{}\beta_{f}\\}$ and
$\lambda(t)$:
$\displaystyle S_{\rm mMTBG}^{\rm const}=\frac{m^{2}V}{2\kappa^{2}}\int
dta_{g}^{3}\left[-\lambda{\cal C}[X]+\frac{1}{2}\lambda^{2}{\cal
D}[X]\right]\,,$ (36)
where ${\cal C}$ is the constraint defined above and ${\cal D}$ is
$\displaystyle{\cal D}$
$\displaystyle:=\frac{m_{f}^{2}e^{-4\beta}}{4N_{g}r\xi^{2}}\left(b_{1}+2b_{2}e^{\beta}\xi+b_{3}e^{2\beta}\xi^{2}\right)$
$\displaystyle\times\Big{[}b_{1}\left(3\alpha^{2}re^{4\beta}\xi^{2}+4e^{3\beta}-1\right)$
$\displaystyle+2b_{2}e^{\beta}\xi\left[\alpha^{2}re^{\beta}\left(e^{3\beta}+2\right)\xi^{2}+2e^{3\beta}+1\right]$
$\displaystyle+b_{3}e^{2\beta}\xi^{2}\left[3-\alpha^{2}re^{\beta}\left(e^{3\beta}-4\right)\xi^{2}\right]\Big{]}\,.$
(37)
The equations of motion are obtained by the variations of the total mini-
superspace action $S_{\rm mMTBG}=S_{\rm mMTBG}^{\rm prec}+S_{\rm mMTBG}^{\rm
const}$ with respect to $X$ and $\lambda$. Since the precursor part is
identical to the minisuperspace action of HRBG, the equations of motion for
$X=\\{N_{g},~{}N_{f},~{}a_{g},~{}a_{f},~{}\beta_{g},~{}\beta_{f}\\}$ take the
form
$\displaystyle{\cal E}_{X}+{\cal E}_{X}^{\rm const}=0\,,$ (38)
where ${\cal E}_{X}^{\rm const}$ is the contribution from the constraint part,
while the equation of motion for $\lambda$ is
$\displaystyle{\cal E}_{\lambda}=\lambda{\cal D}-{\cal C}=0\,.$ (39)
One can easily conclude that $\lambda=0$ is a solution to the equations of
motion. When $\lambda=0$ is substituted, we find ${\cal E}_{X}^{\rm const}=0$
and ${\cal E}_{\lambda}=-{\cal C}$. As we have explained, equations ${\cal
E}_{X}=0$ lead to the constraint ${\cal C}=0$; then, the equation of motion
for $\lambda$ is consistently solved. Note that this analysis does not exclude the existence of other solutions, but the other branch turns out not to be viable, at least in the isotropic universe (see Appendix A). Hence, the background equations of motion in MTBG reduce to those of HRBG.
### III.2 Structure of equations of motion
Using the freedom of time reparametrization, $t\to t^{\prime}(t)$, we impose the gauge condition $N_{g}=1$. The independent equations of motion are
$\displaystyle{\cal E}_{N_{g}}$ $\displaystyle=0\,,\quad{\cal
E}_{N_{f}}=0\,,\quad{\cal C}=0\,,$ (40) $\displaystyle{\cal E}_{\beta_{g}}$
$\displaystyle=0\,,\quad{\cal E}_{\beta_{f}}=0\,,$ (41)
which determine the dynamics of the five variables
$\\{N_{f},a_{g},a_{f},\beta_{g},\beta_{f}\\}$. The equations in (40) are
understood as constraints since they do not contain second derivatives whereas
(41) are the equations of motion for the anisotropies.
To solve the equations (40) and (41), it is convenient to regard
$\\{H_{g},H_{f},\xi,r,\beta_{g},\beta_{f}\\}$ as independent variables. The
equations (40) and (41) are closed within
$\\{H_{g},H_{f},\xi,r,\beta_{g},\beta_{f}\\}$. However, while there are six
variables, only five equations exist and an additional equation is required.
The time derivative of $\xi=a_{f}/a_{g}$ is expressed as
$\displaystyle\dot{\xi}=-\xi H_{g}+r\xi^{2}H_{f}\,.$ (42)
By taking the time derivative of ${\cal C}=0$ and using the equations
(30)-(33) and (42), we obtain
$\displaystyle\dot{{\cal C}}=\dot{{\cal
C}}(H_{g},H_{f},\xi,r,\beta,\dot{\beta}_{g},\dot{\beta}_{f})=0\,.$ (43)
Hence, we have six equations
$\displaystyle{\cal E}_{N_{g}}=0,~{}{\cal E}_{N_{f}}=0,~{}{\cal
C}=0,~{}\dot{{\cal C}}=0,~{}{\cal E}_{\beta_{g}}=0,~{}{\cal
E}_{\beta_{f}}=0\,,$ (44)
which are closed within the six variables
$\\{H_{g},H_{f},\xi,r,\beta_{g},\beta_{f}\\}$. Once the solutions to (44) are
found, the dynamics of $\\{a_{g},a_{f},N_{f}\\}$ can be solved by using
$\dot{a}_{g}=H_{g}a_{g}$ and (27).
The variables $\\{H_{g},H_{f},\xi,r\\}$ are algebraically determined by
$\\{\beta_{g},\beta_{f},\dot{\beta}_{g},\dot{\beta}_{f}\\}$, although explicit
solutions cannot be found due to the nonlinearity of the constraints. We only
consider a branch such that $H_{g}>0$, namely the expanding universe. The
variables $\beta_{g}$ and $\beta_{f}$ obey the pair of coupled second-order differential equations (41). The system requires $2\times 2$ initial conditions for integration, corresponding to the physical degree of freedom of the massless graviton and to the tensor mode of the massive graviton, respectively. The equations (41) give
$\displaystyle\dot{\Sigma}_{0}+3H_{g}\Sigma_{0}=0\,,$ (45)
where
$\displaystyle\Sigma_{0}:=\sigma_{g}+\alpha^{2}\xi^{3}\sigma_{f}\,.$ (46)
The solution to (45) is immediately found to be $\Sigma_{0}\propto
a_{g}^{-3}$, which is the same as the decaying law of the shear in GR. Hence,
$\Sigma_{0}$ can be interpreted as the massless mode of the shear. On the
other hand, $\beta_{g}$ and $\beta_{f}$ always appear in the equations of
motion in the combination $\beta=\beta_{g}-\beta_{f}$ which can be interpreted
as the massive mode. (Hence, the number of physically meaningful initial
conditions is $3$ rather than $4$. The redundant initial condition is the
freedom associated with the global rescaling of the spatial coordinates, $x\to
e^{2c}x,y\to e^{-c}y,z\to e^{-c}z$, with a constant parameter $c$.) However,
the differential equation for $\beta$ cannot be expressed in a simple form.
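The decay law $\Sigma_{0}\propto a_{g}^{-3}$ quoted below (45) can be verified symbolically; a minimal sympy sketch in the $N_{g}=1$ gauge (the symbol names are our own):

```python
import sympy as sp

t, C = sp.symbols('t C', positive=True)
a_g = sp.Function('a_g', positive=True)(t)
H_g = a_g.diff(t) / a_g          # Hubble rate in the N_g = 1 gauge

Sigma0 = C * a_g**-3             # trial solution Sigma_0 ∝ a_g^{-3}
residual = sp.simplify(Sigma0.diff(t) + 3 * H_g * Sigma0)
assert residual == 0             # equation (45) is satisfied identically
```

The check holds for an arbitrary scale factor $a_{g}(t)$, which is exactly the statement that $\Sigma_{0}$ redshifts like the shear in GR.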
### III.3 Fixed points
As explained above, the equations are nonlinear differential equations and
their generic properties are not easily deduced. Therefore, by following Ref.
Gumrukcuoglu _et al._ (2012), we first look for solutions under the condition
$\displaystyle\ddot{\beta}_{g}=\ddot{\beta}_{f}=\dot{\beta}_{g}=\dot{\beta}_{f}=0\,.$
(47)
Since $\\{H_{g},H_{f},\xi,r\\}$ are determined by the algebraic equations, the
above condition (47) implies
$\displaystyle\dot{H}_{g}=\dot{H}_{f}=\dot{\xi}=\dot{r}=0\,,$ (48)
and then all the variables $\\{H_{g},H_{f},\xi,r,\beta_{g},\beta_{f}\\}$
remain constant. Hence, the condition (47) yields fixed-point solutions. At
the fixed points, the $g$\- and the $f$-spacetime themselves are isotropic
because of the absence of the shear while the ratio
$g^{\mu\alpha}f_{\alpha\nu}$ is anisotropic when $\beta\neq 0$. We call
solutions with $\beta=0$ isotropic fixed points and those with $\beta\neq 0$
anisotropic fixed points, respectively.
Under the fixed point conditions (47) and (48), both equations for the
anisotropy (32) and (33) are reduced to the same equation
$\displaystyle(e^{\beta}-e^{-2\beta})\left[b_{1}+b_{2}\left(e^{\beta}+r\right)\xi+b_{3}e^{\beta}r\xi^{2}\right]$
$\displaystyle=0\,,$ (49)
while the Friedmann equation for the $g$-metric (28), that for the $f$-metric
(29), and the constraint equation become respectively
$\displaystyle-3h_{g}^{2}+b_{0}+b_{2}e^{-\beta}\left(e^{3\beta}+2\right)\xi^{2}$
$\displaystyle\quad+b_{1}\left(e^{-2\beta}+2e^{\beta}\right)\xi+b_{3}\xi^{3}=0\,,$
(50) $\displaystyle
b_{1}+\left[b_{2}\left(e^{-2\beta}+2e^{\beta}\right)-3\alpha^{2}r^{-2}h_{g}^{2}\right]\xi$
$\displaystyle\quad+b_{3}\xi^{2}\left(e^{2\beta}+2e^{-\beta}\right)+b_{4}\xi^{3}=0\,,$
(51) $\displaystyle
b_{3}\left[-3+r\left(2e^{-\beta}+e^{2\beta}\right)\right]\xi^{2}$
$\displaystyle\quad-2b_{2}\left[(2e^{-\beta}+e^{2\beta})-r\left(2e^{\beta}+e^{-2\beta}\right)\right]\xi$
$\displaystyle\quad-b_{1}\left(e^{-2\beta}+2e^{\beta}-3r\right)=0\,,$ (52)
where we have defined a dimensionless combination $h_{g}:=H_{g}/m_{g}$ and
have used the relations
$\displaystyle H_{f}=\frac{H_{g}}{r\xi}\,,\quad N_{f}=r\xi\,,$ (53)
following from (27) and (48).
We first consider the isotropic case $\beta=0$. The constraint equation (52)
is reduced to
$\displaystyle(r-1)(b_{1}+2b_{2}\xi+b_{3}\xi^{2})=0\,.$ (54)
This equation has two branches. The first branch $r=1$ is called the normal
branch and it leads to the relation $H_{f}=\xi H_{g}$. In this branch,
eliminating $h_{g}$ from the Friedmann equations (50) and (51), and using
$\beta=0$, we obtain
$\displaystyle-\alpha^{2}b_{3}\xi^{4}+\left(4b_{4}-3\alpha^{2}b_{2}\right)\xi^{3}+3\left(b_{3}-\alpha^{2}b_{1}\right)\xi^{2}$
$\displaystyle+\left(3b_{2}-\alpha^{2}b_{0}\right)\xi+b_{1}=0\,,$ (55)
which is an algebraic equation that can be solved for $\xi$.
Substituting the root $\xi$ into the Friedmann equation (50), the Hubble
parameter $h_{g}$ is fixed in terms of the coupling constants of the theory.
On the other hand, the second branch $b_{1}+2b_{2}\xi+b_{3}\xi^{2}=0$ is
called the self-accelerating branch. By using a root of
$b_{1}+2b_{2}\xi+b_{3}\xi^{2}=0$, the Hubble parameter $h_{g}$ and the ratio
$r$ are determined by (50) and (51). In particular, $r$ is given by
$\displaystyle
r=\alpha\sqrt{\frac{\xi\left(b_{3}\xi^{3}+3b_{2}\xi^{2}+3b_{1}\xi+b_{0}\right)}{b_{4}\xi^{3}+3b_{3}\xi^{2}+3b_{2}\xi+b_{1}}}\,.$
(56)
In the case of HRBG, the self-accelerating branch would suffer from a nonlinear instability, as in the dRGT theory De Felice _et al._ (2012). The
normal branch is stable when the Hubble parameter is sufficiently small while
the scalar mode of the massive graviton becomes a ghost, known as the Higuchi
ghost, when the Hubble parameter exceeds a critical value Higuchi (1987, 1989)
(see also Grisa and Sorbo (2010); Comelli _et al._ (2012); De Felice _et
al._ (2014); Aoki _et al._ (2015)). On the other hand, MTBG can avoid both
instabilities thanks to the absence of the dynamical scalar mode De Felice
_et al._ (2021b).
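For a concrete feel for the normal branch, the quartic (55) can be solved numerically and $h_{g}^{2}$ then follows from the Friedmann equation (50) evaluated at $\beta=0$, i.e. $h_{g}^{2}=(b_{0}+3b_{1}\xi+3b_{2}\xi^{2}+b_{3}\xi^{3})/3$. A short sketch; the numerical values of the couplings are our own illustrative choice, not taken from any specific model:

```python
import numpy as np

# Illustrative coupling constants (our choice, not from the paper)
b0, b1, b2, b3, b4 = 1.0, 1.0, -1.0, 1.0, 1.0
alpha = 1.0

# Quartic (55) for xi in the normal branch (r = 1, beta = 0),
# coefficients listed from the highest power of xi down.
coeffs = [-alpha**2 * b3,
          4*b4 - 3*alpha**2 * b2,
          3*(b3 - alpha**2 * b1),
          3*b2 - alpha**2 * b0,
          b1]
roots = np.roots(coeffs)

# Keep real, positive roots and evaluate h_g^2 from (50) at beta = 0
for xi in roots:
    if abs(xi.imag) < 1e-10 and xi.real > 0:
        xi = xi.real
        hg2 = (b0 + 3*b1*xi + 3*b2*xi**2 + b3*xi**3) / 3
        print(f"xi = {xi:.4f}, h_g^2 = {hg2:.4f}")
```

Only roots with $\xi>0$ and $h_{g}^{2}>0$ correspond to physical expanding solutions.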
Next, we consider the anisotropic fixed points, $\beta\neq 0$. Eliminating
$b_{3}$ from (49) and (52), we obtain
$\displaystyle(1-e^{\beta})\left(b_{2}e^{\beta}\xi+b_{1}\right)\left(r-e^{\beta}\right)\left(r-e^{-2\beta}\right)=0\,,$
(57)
where $-3+r(e^{-2\beta}+2e^{\beta})\neq 0$ is assumed. Note that the isotropic
limit $\beta\to 0$ leads to $-3+r(e^{-2\beta}+2e^{\beta})\to-3(1-r)$ so the
anisotropic extension of the normal branch does not have to satisfy (57).
There are in principle four ways to satisfy (57), defining up to four
different branches. The branch $e^{\beta}=1$ corresponds to the isotropic
self-accelerating branch while the other three branches,
$\displaystyle
e^{\beta}=\left\\{-\frac{b_{1}}{b_{2}\xi}\,,~{}r\,,~{}r^{-1/2}\right\\}\,,$
(58)
may lead to anisotropic fixed points. As in the dRGT theory Gumrukcuoglu _et
al._ (2012), either $e^{\beta}=-b_{1}/(b_{2}\xi)$ or $e^{\beta}=r$ does not
give interesting solutions, and non-trivial anisotropic fixed points can be
found in the third branch $e^{\beta}=r^{-1/2}$. In the following, we discuss
them in order.
Branch 1. Substituting the solution $e^{\beta}=-b_{1}/(b_{2}\xi)$ into (49),
we obtain
$\displaystyle\left(b_{2}^{3}\xi^{3}+b_{1}^{3}\right)\left(b_{2}^{2}-b_{1}b_{3}\right)=0\,.$
(59)
The first solution $\xi=-b_{1}/b_{2}$ gives $\beta=0$ and thus this solution
is not anisotropic. The second solution $b_{2}^{2}-b_{1}b_{3}=0$ requires a
parameter tuning. In this case, the equations of motion yield
$\displaystyle\xi
r=\sqrt{\frac{\alpha^{2}b_{2}(b_{2}^{3}-b_{0}b_{3}^{2})}{b_{3}^{2}(b_{3}^{2}-b_{2}b_{4})}}\,,$
(60) $\displaystyle
h_{g}=\sqrt{\frac{\alpha^{2}(-b_{2}^{3}+b_{0}b_{3}^{2})}{3b_{3}^{2}(1+\alpha^{2})}}\,,$
(61) $\displaystyle
h_{f}=\sqrt{\frac{b_{2}b_{4}-b_{3}^{2}}{b_{2}(3\alpha^{2}+1)}}\,,$ (62)
$\displaystyle\frac{e^{\beta}}{r}=\sqrt{\frac{b_{2}\left(b_{3}^{2}-b_{2}b_{4}\right)}{\alpha^{2}(b_{2}^{3}-b_{0}b_{3}^{2})}}\,,$
(63)
where the variables $r,\xi$ and $\beta$ are not completely determined; that is, the fixed point is not isolated. Therefore, we shall not discuss this branch any further.
Branch 2. We then consider the solution $r=e^{\beta}$. Substituting this into
the equation for anisotropy (49), we obtain
$\displaystyle b_{3}e^{2\beta}\xi^{2}+2b_{2}e^{\beta}\xi+b_{1}=0\,,$ (64)
which is solved by
$\displaystyle\xi=\frac{e^{-\beta}\left(-b_{2}\pm\sqrt{b_{2}^{2}-b_{1}b_{3}}\right)}{b_{3}}\,.$
(65)
Then the Friedmann equations (50) and (51) give
$\displaystyle h_{g}^{2}$
$\displaystyle=\frac{2b_{2}^{3}-3b_{1}b_{2}b_{3}+b_{1}^{2}b_{4}\pm
2(b_{2}^{2}-b_{1}b_{3})^{3/2}}{3b_{3}^{2}}\,,$ (66) $\displaystyle 0$
$\displaystyle=-2b_{2}^{2}b_{4}+b_{2}b_{3}^{2}+b_{1}b_{3}b_{4}+\alpha^{2}\left(2b_{2}^{3}-3b_{1}b_{2}b_{3}+b_{0}b_{3}^{2}\right)$
$\displaystyle\pm
2\sqrt{b_{2}^{2}-b_{1}b_{3}}\left[\left(b_{3}^{2}-b_{2}b_{4}\right)+\alpha^{2}\left(b_{2}^{2}-b_{1}b_{3}\right)\right]\,.$
(67)
The first equation determines the Hubble parameter in terms of the coupling
constants while the second equation imposes a constraint on the coupling
constants rather than determining the value of $e^{\beta}$. Hence, this branch is of no further interest.
Branch 3. Finally, we discuss the third branch $r=e^{-2\beta}$. With this
solution, the anisotropy equation (49) and a combination of (50) and (51)
give algebraic equations for $\xi$ and $e^{\beta}$:
$\displaystyle
b_{3}e^{-\beta}\xi^{2}+b_{2}(e^{-2\beta}+e^{\beta})\xi+b_{1}=0\,,$ (68)
$\displaystyle
b_{3}\alpha^{2}\xi^{4}+\left[b_{2}\alpha^{2}(2e^{-\beta}+e^{2\beta})-b_{4}e^{-4\beta}\right]\xi^{3}$
$\displaystyle+\left[b_{1}\alpha^{2}(e^{-2\beta}+2e^{\beta})-b_{3}(2e^{-5\beta}+e^{-2\beta})\right]\xi^{2}$
$\displaystyle+\left[b_{0}\alpha^{2}-b_{2}(e^{-6\beta}+2e^{-3\beta})\right]\xi-
b_{1}e^{-4\beta}=0\,.$ (69)
We can further combine (68) and (69) to find an expression linear in $\xi$,
$\displaystyle\xi=-\frac{b_{1}\left[b_{3}^{2}-b_{2}b_{4}+\alpha^{2}(b_{2}^{2}-b_{1}b_{3})e^{3\beta}\right](e^{2\beta}+e^{5\beta})}{{\cal
Q}_{0}+{\cal Q}_{1}e^{3\beta}+{\cal Q}_{2}e^{6\beta}+{\cal
Q}_{3}e^{9\beta}}\,,$ (70)
and a quartic-order algebraic equation for $e^{3\beta}$,
$\displaystyle{\cal C}_{0}+{\cal C}_{1}e^{3\beta}+{\cal C}_{2}e^{6\beta}+{\cal
C}_{3}e^{9\beta}+{\cal C}_{4}e^{12\beta}=0\,,$ (71)
where the coefficients are given by
$\displaystyle{\cal Q}_{0}$ $\displaystyle=b_{2}(b_{3}^{2}-b_{2}b_{4})\,,$
(72) $\displaystyle{\cal Q}_{1}$
$\displaystyle=b_{2}b_{3}^{2}-2b_{2}^{2}b_{4}+b_{1}b_{3}b_{4}+\alpha^{2}b_{2}(b_{2}^{2}-b_{1}b_{3})\,,$
(73) $\displaystyle{\cal Q}_{2}$
$\displaystyle=b_{2}(b_{3}^{2}-b_{2}b_{4})+\alpha^{2}(2b_{2}^{3}-3b_{1}b_{2}b_{3}+b_{0}b_{3}^{2})\,,$
(74) $\displaystyle{\cal Q}_{3}$
$\displaystyle=\alpha^{2}b_{2}(b_{2}^{2}-b_{1}b_{3})\,,$ (75)
and
$\displaystyle{\cal C}_{0}$
$\displaystyle=\left(b_{2}^{2}-b_{1}b_{3}\right)\left(b_{3}^{2}-b_{2}b_{4}\right)\,,$
(76) $\displaystyle{\cal C}_{1}$
$\displaystyle=-2b_{4}b_{2}^{3}+b_{3}^{2}b_{2}^{2}+4b_{1}b_{2}b_{3}b_{4}-2b_{1}b_{3}^{3}-b_{1}^{2}b_{4}^{2}$
$\displaystyle+\alpha^{2}(b_{2}^{4}-2b_{1}b_{2}^{2}b_{3}+b_{0}b_{2}^{2}b_{4}-b_{0}b_{2}b_{3}^{2}-b_{1}^{2}b_{2}b_{4}+2b_{1}^{2}b_{3}^{2})\,,$
(77) $\displaystyle{\cal C}_{2}$
$\displaystyle=\left(b_{2}^{2}-b_{1}b_{3}\right)[b_{3}^{2}-b_{2}b_{4}+\alpha^{2}\left(2b_{2}^{2}-4b_{1}b_{3}+2b_{0}b_{4}\right)$
$\displaystyle+\alpha^{4}(b_{1}^{2}-b_{0}b_{2})]\,,$ (78) $\displaystyle{\cal
C}_{3}$
$\displaystyle=\alpha^{2}(b_{2}^{4}-2b_{1}b_{2}^{2}b_{3}+b_{0}b_{2}^{2}b_{4}-b_{0}b_{2}b_{3}^{2}-b_{1}^{2}b_{2}b_{4}+2b_{1}^{2}b_{3}^{2})$
$\displaystyle+\alpha^{4}(-2b_{1}^{3}b_{3}+b_{1}^{2}b_{2}^{2}+4b_{0}b_{1}b_{2}b_{3}-2b_{0}b_{2}^{3}-b_{0}^{2}b_{3}^{2})\,,$
(79) $\displaystyle{\cal C}_{4}$
$\displaystyle=\alpha^{4}\left(b_{1}^{2}-b_{0}b_{2}\right)\left(b_{2}^{2}-b_{1}b_{3}\right)\,.$
(80)
Since (71) is quartic in $e^{3\beta}$, the algebraic equation has four independent roots in general. Once a root is chosen, $\xi$ and
$h_{g}^{2}$ are uniquely determined by (70) and (50). Hence, unlike the other
branches, all the variables $\\{r,\xi,\beta,h_{g}\\}$ are fixed without any
fine-tuning of the coupling constants. We thus focus on this branch in the
following.
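The anisotropic fixed points of this branch can be enumerated numerically: solve the quartic (71) for $X=e^{3\beta}$ using the coefficients (72)-(80), keep real positive roots, and recover $\xi$ from (70). A sketch with illustrative couplings (the numerical values are our own choice); as a consistency check, each candidate is substituted back into the anisotropy equation (68):

```python
import numpy as np

# Illustrative coupling constants (our choice, not from the paper)
b0, b1, b2, b3, b4 = 1.0, 2.0, 1.0, 1.0, 2.0
al2 = 1.0                      # alpha^2

D1 = b2**2 - b1*b3             # recurring combinations
D2 = b3**2 - b2*b4
P = (b2**4 - 2*b1*b2**2*b3 + b0*b2**2*b4
     - b0*b2*b3**2 - b1**2*b2*b4 + 2*b1**2*b3**2)

# Coefficients Q_0..Q_3 of (72)-(75) and C_0..C_4 of (76)-(80)
Q = [b2*D2,
     b2*b3**2 - 2*b2**2*b4 + b1*b3*b4 + al2*b2*D1,
     b2*D2 + al2*(2*b2**3 - 3*b1*b2*b3 + b0*b3**2),
     al2*b2*D1]
C = [D1*D2,
     (-2*b4*b2**3 + b3**2*b2**2 + 4*b1*b2*b3*b4
      - 2*b1*b3**3 - b1**2*b4**2 + al2*P),
     D1*(D2 + al2*(2*b2**2 - 4*b1*b3 + 2*b0*b4) + al2**2*(b1**2 - b0*b2)),
     al2*P + al2**2*(-2*b1**3*b3 + b1**2*b2**2
                     + 4*b0*b1*b2*b3 - 2*b0*b2**3 - b0**2*b3**2),
     al2**2*(b1**2 - b0*b2)*D1]

candidates = []
for X in np.roots(C[::-1]):    # roots of the quartic (71) in X = e^{3 beta}
    if abs(X.imag) < 1e-10 and X.real > 0:
        X = X.real
        beta = np.log(X) / 3
        xi = (-b1 * (D2 + al2*D1*X) * (np.exp(2*beta) + np.exp(5*beta))
              / (Q[0] + Q[1]*X + Q[2]*X**2 + Q[3]*X**3))   # eq. (70)
        candidates.append((beta, xi))
        # consistency check against the anisotropy equation (68)
        res68 = (b3*np.exp(-beta)*xi**2
                 + b2*(np.exp(-2*beta) + np.exp(beta))*xi + b1)
        print(f"beta = {beta:.4f}, xi = {xi:.4f}, |eq.(68)| = {abs(res68):.1e}")
```

Physically acceptable fixed points additionally require $\xi>0$ and $h_{g}^{2}>0$ from (50); for arbitrary couplings a root of (71) need not satisfy these.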
## IV Stability of fixed points
In this section, we study the stability of the fixed points obtained in the
previous section. As we have explained, the system involves one massless
degree of freedom and one massive degree of freedom. In particular, the
massless mode $\Sigma_{0}$ decays as $a_{g}^{-3}$ and can be ignored as the
universe expands. Since our interest is in the dynamics of the massive mode,
we shall assume
$\displaystyle\Sigma_{0}=\sigma_{g}+\alpha^{2}\xi^{3}\sigma_{f}=0\,,$ (81)
which reduces the dimension of the phase space to two. In principle, the
equations of motion can be reduced to a single second-order differential
equation for $\beta=\beta_{g}-\beta_{f}$ (or a couple of first-order
differential equations) when the constraints are solved. In practice, however,
the constraints are nonlinear and cannot be solved explicitly. Hence, we
classify the fixed points based on the stability against small perturbations
by which the equations are linearized. The global stability is then examined
by using two-dimensional phase portraits.
### IV.1 Local stability
The equations (32) and (33) yield
$\displaystyle 0={\cal E}_{\beta}$
$\displaystyle:=\ddot{\beta}+\frac{(r+3\alpha^{2}\xi^{2})H_{g}+2r^{2}\xi
H_{f}-\dot{r}}{r+\alpha^{2}\xi^{2}}\dot{\beta}$
$\displaystyle+\frac{m^{2}}{3(1+\alpha^{2})\xi}(e^{\beta}-e^{-2\beta})(r+\alpha^{2}\xi^{2})$
$\displaystyle\quad\times[b_{1}+b_{2}(e^{\beta}+r)\xi+b_{3}e^{\beta}r\xi^{2}]\,.$
(82)
where $\Sigma_{0}=0$ is used. We consider perturbations around the fixed
points as
$\displaystyle H_{g}$ $\displaystyle=m_{g}(h_{g0}+\epsilon h_{g1}(t))\,,$ (83)
$\displaystyle H_{f}$
$\displaystyle=m_{g}\left(\frac{h_{g0}}{r_{0}\xi_{0}}+\epsilon
h_{f1}(t)\right)\,,$ (84) $\displaystyle\xi$
$\displaystyle=\xi_{0}+\epsilon\xi_{1}(t)\,,$ (85) $\displaystyle r$
$\displaystyle=r_{0}+\epsilon r_{1}(t)\,,$ (86) $\displaystyle\beta$
$\displaystyle=\beta_{0}+\epsilon\beta_{1}(t)\,.$ (87)
Here, the quantities with the subscript $0$ are the fixed-point solutions
which are determined in terms of the coupling constants while
$\\{h_{g1},h_{f1},\xi_{1},r_{1},\beta_{1}\\}$ represent the perturbations and
we have introduced a small parameter $\epsilon$ to keep track of orders of
perturbations. The linearized equation for $\beta$ is given by
$\displaystyle{\cal
E}^{(1)}_{\beta}=\ddot{\beta}_{1}+3H_{g0}\dot{\beta}_{1}+{\cal
E}^{(1)}_{\beta\beta}\beta_{1}+{\cal E}^{(1)}_{\beta r}r_{1}+{\cal
E}^{(1)}_{\beta\xi}\xi_{1}=0\,,$ (88)
where $H_{g0}=m_{g}h_{g0}$ and the coefficients ${\cal
E}^{(1)}_{\beta\beta},{\cal E}^{(1)}_{\beta r},{\cal E}^{(1)}_{\beta\xi}$ are
computed for each fixed-point solution. As we have explained,
$\\{H_{g},H_{f},\xi,r\\}$ are fixed by the constraints. Thanks to the
linearization, the constraints can be explicitly solved for
$h_{g1},h_{f1},\xi_{1},r_{1}$ although the exact expressions are lengthy. We
then obtain a second-order differential equation for $\beta_{1}$.
In the case of the isotropic fixed points, $\beta_{0}=0$, the coefficients
${\cal E}^{(1)}_{\beta r}$ and ${\cal E}^{(1)}_{\beta\xi}$ vanish and then we
do not need to solve the constraints explicitly. The equation for $\beta_{1}$
is given by
$\displaystyle\ddot{\beta}_{1}+3H_{g0}\dot{\beta}_{1}+M^{2}_{I}\beta_{1}=0\,.$
(89)
with
$\displaystyle M_{I}^{2}$
$\displaystyle=\frac{m^{2}\left(r_{0}+\alpha^{2}\xi_{0}^{2}\right)\left[b_{1}+b_{2}\xi_{0}\left(r_{0}+1\right)+b_{3}r_{0}\xi_{0}^{2}\right]}{(1+\alpha^{2})\xi_{0}}\,.$
(90)
The values of $r_{0}$ and $\xi_{0}$ are fixed by choosing the branch:
$\xi_{0}$ is a root of (55) and $r_{0}=1$ in the normal branch while $\xi_{0}$
is a root of $b_{1}+2b_{2}\xi+b_{3}\xi^{2}=0$ and $r_{0}$ is given by (56) in
the case of the self-accelerating branch, respectively.
At the anisotropic fixed points, on the other hand, the coefficients ${\cal
E}^{(1)}_{\beta r}$ and ${\cal E}^{(1)}_{\beta\xi}$ do not vanish and the
constraints need to be solved. We recall that the anisotropic fixed points
satisfy
$\displaystyle r_{0}=e^{-2\beta_{0}}\,,\quad
b_{3}e^{-\beta_{0}}\xi_{0}^{2}+b_{2}(e^{-2\beta_{0}}+e^{\beta_{0}})\xi_{0}+b_{1}=0\,,$
(91)
which can be used to simplify the expressions. Using (91) to eliminate $r_{0}$
and $b_{1}$, we finally obtain
$\displaystyle\ddot{\beta}_{1}+3H_{g0}\dot{\beta}_{1}+M^{2}_{A}\beta_{1}=0\,,$
(92)
where the mass squared is given by
$\displaystyle
M_{A}^{2}=\frac{m^{2}d_{1}d_{2}d_{3}e^{-5\beta_{0}}[-d_{1}d_{2}+6\alpha^{2}e^{6\beta_{0}}h_{g0}^{2}]}{(1+\alpha^{2})[d_{1}d_{2}^{2}+2\alpha^{2}e^{6\beta_{0}}h_{g0}^{2}(3d_{2}+2d_{3}e^{\beta_{0}})]},$
(93)
with
$\displaystyle d_{1}$
$\displaystyle:=\left(e^{3\beta_{0}}-1\right)\left(1+\alpha^{2}e^{2\beta_{0}}\xi_{0}^{2}\right)\,,$
(94) $\displaystyle d_{2}$ $\displaystyle:=e^{\beta_{0}}b_{3}\xi_{0}+b_{2}\,,$
(95) $\displaystyle d_{3}$
$\displaystyle:=b_{2}e^{2\beta_{0}}+b_{3}\xi_{0}\,.$ (96)
 | stable spiral (damped oscillation) | stable node (over-damping) | saddle point (unstable)
$M^{2}$ | $+$ | $+$ | $-$
$9H_{g0}^{2}-4M^{2}$ | $-$ | $+$ | $+$
phase portraits | isotropic: Fig. 1a; anisotropic: Fig. 1d | isotropic: Fig. 1b; anisotropic: Fig. 1e | isotropic: Fig. 1c; anisotropic: Fig. 1f
Table 1: Classification of the fixed points.
Therefore, in either case, the linearized equation for $\beta$ takes the form
$\displaystyle\ddot{\beta}_{1}+3H_{g0}\dot{\beta}_{1}+M^{2}\beta_{1}=0\,,$
(97)
where $M^{2}$ is either $M_{I}^{2}$ (isotropic fixed points) or $M_{A}^{2}$
(anisotropic fixed points). This equation is consistent with the linearized
equation for the tensor modes of the massive graviton once the gradient term
is ignored, at least around the isotropic fixed point. The masses $M$ can
therefore be interpreted as the graviton mass, since (97) is identical to the
superhorizon limit of the linear tensor-mode equation there.
We then split the second-order differential equation (97) into a pair of
first-order differential equations:
$\displaystyle\dot{\bm{v}}=K\bm{v}\,,\quad\bm{v}=\begin{pmatrix}\Sigma_{{\rm
m}1}\\\ \beta_{1}\end{pmatrix}$ (98)
with
$\displaystyle K=\begin{pmatrix}-3H_{g0}&-M^{2}\\\ 1&0\end{pmatrix}\,.$ (99)
The nature of each fixed point is determined by the eigenvalues of the
matrix $K$,
$\displaystyle\lambda_{\pm}:=\frac{1}{2}\left(-3H_{g0}\pm\sqrt{9H_{g0}^{2}-4M^{2}}\right)\,,$
(100)
which we summarize in Table 1. (Strictly speaking, there are other cases such
as non-isolated fixed points at the boundaries of these classifications;
since they require fine-tuning of the coupling constants, we shall not
discuss them in this paper.) Recall that we are interested in the expanding universe
$H_{g0}>0$. In the case of $M^{2}<0$, both eigenvalues are real and satisfy
$\lambda_{+}>0$ and $\lambda_{-}<0$. Therefore, a fixed point with $M^{2}<0$
is a saddle point. When $M^{2}>0$, the fixed point is locally stable because
the real parts of both eigenvalues are always negative. Depending on the sign
of $9H_{g0}^{2}-4M^{2}$, the stable fixed points are divided into stable
spirals $(9H_{g0}^{2}-4M^{2}<0)$ and stable nodes $(9H_{g0}^{2}-4M^{2}>0)$.
Around the stable nodes, the anisotropy $\beta$ is over-damped by the large
Hubble friction; around the stable spirals, the eigenvalues are complex and
the anisotropy exhibits damped oscillations. All the cases can be realized in
both isotropic fixed points and anisotropic fixed points when the coupling
constants are appropriately chosen.
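The classification above can be checked numerically. A short Python sketch (illustrative only) builds the matrix $K$ of Eq. (99), computes its eigenvalues, and reads off the type of the fixed point:

```python
import numpy as np

def classify_fixed_point(H_g0, M2):
    """Classify a fixed point of Eq. (98) from the eigenvalues of K, Eq. (99).
    Boundary cases (M2 = 0 or 9 H_g0^2 = 4 M2) are not handled, matching
    the classification of Table 1."""
    K = np.array([[-3.0 * H_g0, -M2], [1.0, 0.0]])
    lam = np.linalg.eigvals(K)  # equal to lambda_pm of Eq. (100)
    if np.any(np.abs(lam.imag) > 1e-12):
        return "stable spiral"   # complex pair, Re(lambda) = -3 H_g0 / 2 < 0
    if np.all(lam.real < 0):
        return "stable node"     # both eigenvalues real and negative
    return "saddle point"        # real eigenvalues of opposite sign (M2 < 0)
```

For instance, with $H_{g0}=1$, the cases $M^{2}=-1,\ 1,\ 10$ fall in the three columns of Table 1, respectively.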
### IV.2 Global stability
Figure 1: Examples of phase portraits around fixed points: stable spirals
(left), stable nodes (middle), and saddle points (right). The black points in
the top figures represent the isotropic fixed points (self-accelerating
branch) and the red points in the bottom figures are the anisotropic fixed
points. The black curves are the trajectories of numerical solutions.
Parameters:
(a) $b_{0}=9.32,b_{1}=-0.0162,b_{2}=-0.0479,b_{3}=0.0122,b_{4}=0.00549,\alpha=1$;
(b) $b_{0}=6.61,b_{1}=-0.0542,b_{2}=-0.00258,b_{3}=0.00320,b_{4}=0.00357,\alpha=1$;
(c) $b_{0}=50,b_{1}=1,b_{2}=8.15,b_{3}=-12.0,b_{4}=26.6,\alpha=1$;
(d) $b_{0}=-6.8,b_{1}=4,b_{2}=-1.9,b_{3}=0.95,b_{4}=-1,\alpha=1$;
(e) $b_{0}=50,b_{1}=1,b_{2}=8.15,b_{3}=-12.0,b_{4}=26.6,\alpha=1$;
(f) $b_{0}=9.32,b_{1}=-0.0162,b_{2}=-0.0479,b_{3}=0.0122,b_{4}=0.00549,\alpha=1$.
Figure 2: Global structure of phase portraits.
Top (a): anisotropic stable node and isotropic saddle point,
$b_{0}=50,b_{1}=1,b_{2}=8.15,b_{3}=-12.0,b_{4}=26.6,\alpha=1$; the isotropic
fixed point is unstable and the universe evolves towards the anisotropic
fixed point.
Middle (b): anisotropic saddle point and isotropic stable node,
$b_{0}=6.61,b_{1}=-0.0542,b_{2}=-0.00258,b_{3}=0.00320,b_{4}=0.00357,\alpha=1$;
the universe generically approaches the isotropic fixed point without
oscillation.
Bottom (c): anisotropic saddle point and isotropic stable spiral,
$b_{0}=9.32,b_{1}=-0.0162,b_{2}=-0.0479,b_{3}=0.0122,b_{4}=0.00549,\alpha=1$;
the universe moves towards the isotropic fixed point with oscillation.
The current universe has to be around a stable spiral to explain the dark
matter by the coherent oscillation of the massive graviton. On the other hand,
the initial condition is not necessarily in the vicinity of the stable spiral.
Let us then discuss the global structure of the system by using phase
portraits.
The set of independent equations is given in (44). At each point in the phase
space $(\beta,\Sigma_{\rm m})$, where $\Sigma_{\rm m}=\dot{\beta}$, their time
derivatives are computed by solving (44) combined with the condition
$\Sigma_{0}=0$. However, due to the nonlinearity of the equations, there are
multiple branches and we have to choose the correct branch. We first choose a
fixed point and then consider the vicinity of the fixed point. The branch of
the solutions in the vicinity is then chosen so that the solution is
continuously connected to the fixed point, which is numerically achieved by
employing the Newton-Raphson method. Iterating this procedure, we can obtain a
phase portrait around each of the fixed points.
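The branch-tracking procedure can be illustrated on a toy constraint (a hypothetical stand-in for the actual Eq. (44)): follow one root of a multi-branch equation as a parameter varies, seeding each Newton-Raphson solve with the previous solution so the tracked root stays continuously connected to the chosen branch.

```python
def newton(f, df, x0, tol=1e-12, itmax=50):
    """Plain Newton-Raphson iteration for a scalar root."""
    x = x0
    for _ in range(itmax):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

def track_branch(p_values, x_start):
    """Follow one branch of the toy constraint x**2 = p as p varies,
    seeding each solve with the previous solution so the tracked root
    never jumps to the other branch."""
    xs, x = [], x_start
    for p in p_values:
        x = newton(lambda y: y * y - p, lambda y: 2.0 * y, x)
        xs.append(x)
    return xs
```

Starting from $x=+1$ the iteration stays on the positive branch for all $p$, while starting from $x=-1$ it stays on the negative branch, mimicking how the correct branch of (44) is selected near each fixed point.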
Fig. 1 shows the phase portraits around the isotropic fixed points and the
anisotropic fixed points. Although only the phase portraits of the self-
accelerating branch are shown for the isotropic fixed points, similar figures
can be obtained for the normal branch as well. We also integrate the equations
(44) numerically. The trajectories of the numerical solutions are shown as the
black curves in Fig. 1. The solutions indeed behave as classified in the
perturbative analysis even at a finite distance away from the fixed point.
For a given value of the coupling constants, the equations may have several
fixed points which can or cannot be connected through a dynamical evolution.
We find that the anisotropic fixed point can be continuously connected to the
self-accelerating branch of the isotropic fixed point. Fig. 2 shows three
phase portraits which exhibit flows from saddle points to stable fixed points.
In Fig. 2a, the isotropic universe is unstable. Even if the initial condition
is isotropic, the universe typically moves towards the anisotropic fixed point
when $\beta<0$. Hence, those parameters realize a spontaneous growth of the
anisotropy from a tiny anisotropy. On the other hand, Figs. 2b and 2c are the
cases with stable isotropic universes. Although the solutions go away from the
isotropic stable point if the initial value of $\beta$ is largely negative,
the solutions generically approach the isotropic universe under a wide range
of initial conditions. In particular, the anisotropy oscillates with a
decreasing amplitude around the isotropic universe in Fig. 2c and behaves as a
dark matter component of the universe. Therefore, when the coupling constants
are appropriately chosen, the spin-2 dark matter scenario is stably realized
under generic initial conditions.
## V Dark matter production
In the previous section, we have found that the isotropic universe can be
unstable and one of the endpoints of the instability is the anisotropic fixed
point. This solution may be used for a novel production mechanism of spin-2
dark matter which we shall discuss in this section.
So far we have assumed the vacuum configuration, but to discuss a realistic
cosmological scenario, we have to add matter components such as radiation and
inflaton. In general, the graviton mass squared $M^{2}$ is expected to depend
on the matter field through the complicated constraint equations. As a simple
example, let us consider a scalar field $\phi$ as a matter field and promote
the coupling constants $b_{i}$ to be functions of $\phi$. In particular
$b_{0}(\phi)$ (or $b_{4}(\phi)$) is nothing but a potential of the scalar
field minimally coupled to the $g$-metric (or the $f$-metric). The theory with
$\phi$-dependent coupling constants $b_{1},b_{2},b_{3}$ is known as chameleon
bigravity De Felice _et al._ (2018a, b) (see also Aoki (2020) as well as a
similar setup in MTMG Fujita _et al._ (2019, 2020)). As $\phi$ evolves in
time, the coupling constants $b_{i}(\phi)$ also change which may realize a
phase transition from a Fig. 2a-type phase diagram to a Fig. 2c-type phase
diagram. We shall not discuss a concrete realization of this scenario in the
present paper because it would be strongly model dependent. However, we have
confirmed that there indeed exists a one-parameter change of the coupling
constants $b_{i}(\phi)$ that realizes an adiabatic transition from Fig. 2a to
Fig. 2b and then Fig. 2c.
In the first stage (Fig. 2a), the isotropic universe is unstable due to a
tachyonic mass $M_{I}^{2}<0$ and a non-zero value of $\beta$ can be
spontaneously produced (when the Hubble friction is not too large). Then,
$\beta$ eventually reaches the vicinity of the anisotropic fixed point. After
the phase transition from Fig. 2a to Fig. 2b, the anisotropic fixed point
becomes unstable as $M_{A}^{2}$ changes sign, and $\beta$ starts to deviate
from the vicinity of the anisotropic fixed point. As the
graviton mass increases (or the Hubble expansion rate $H_{g}$ decreases), the
phase diagram further changes from Fig. 2b to Fig. 2c. As a result, $\beta$
behaves as a dark matter component of the universe around the isotropic fixed
point.
In this scenario, the dark matter abundance is roughly estimated by the value
of the anisotropic fixed point and the time of phase transition. For
simplicity, we assume that the evolution of $\beta$ in the second stage (Fig.
2b) is negligible and $M_{A}$ does not significantly change after the
transition. Provided that the phase transition occurs at $H_{g}(a_{\rm
tra})\sim M_{A}(a_{\rm tra})\sim m$, the present amount of dark matter is
computed in the same way as the misalignment mechanism Marzola _et al._
(2018); Preskill _et al._ (1983); Abbott and Sikivie (1983); Dine and
Fischler (1983) by replacing the initial amplitude with the fixed-point value.
Here we assume the coupling constants $b_{i}(\phi)$ are of order unity at the
transition time and hence $\beta$ and $\xi$ are also approximately of order
unity.
Figure 3: Constraints on spin-2 dark matter from current and future
experiments. The green, blue, and orange regions represent the estimated
detectability of spin-2 dark matter in terms of $f_{g}\alpha^{2}$ by advanced
LIGO, DECIGO, and LISA, respectively. In this figure, we use the sensitivity
curve in Lisa _et al._ (2018); Yagi and Seto (2011); Robson _et al._ (2019),
and assume 2 years of the observation time (see Appendix B). The black dashed
lines represent the rough estimate of $f_{g}\alpha^{2}$ in our production
mechanism by using (103). They are given by fixing the fraction of spin-2 dark
matter density to the total dark matter density as $f_{g}=1,10^{-3},10^{-6}$
in our scenario. The plotted sensitivity of advanced LIGO is consistent with
“optimised sensitivity” in Armaleo _et al._ (2021).
Assuming the transition occurs in the radiation-dominated era, the scale
factor at the transition time is estimated as
$\displaystyle a_{\rm
tra}\sim\Omega_{r,0}^{1/4}\left(\frac{H_{0}}{H_{g}(a_{\rm
tra})}\right)^{1/2}\sim\Omega_{r,0}^{1/4}\left(\frac{H_{0}}{m}\right)^{1/2}\,,$
(101)
where $\Omega_{r,0}$ is the current density parameter of the radiation
components. The energy density of spin-2 dark matter at the transition time
can be roughly estimated as $\rho_{g}(a_{\rm tra})\sim M_{\rm
Pl}^{2}m_{g}^{2}$. Then the current density parameter of spin-2 dark matter is
$\displaystyle\Omega_{g,0}=\frac{\rho_{g}(a_{\rm tra})}{\rho_{c,0}}a_{\rm
tra}^{3}\sim\frac{\alpha^{2}}{1+\alpha^{2}}\Omega_{r,0}^{3/4}\left(\frac{m}{H_{0}}\right)^{1/2}\,.$
(102)
This is consistent with the result in Marzola _et al._ (2018). The fraction
of the density of spin-2 dark matter to the total dark matter is given by
$\displaystyle f_{g}\equiv\frac{\Omega_{g,0}}{\Omega_{\rm
DM}}\sim\frac{\alpha^{2}}{1+\alpha^{2}}\Omega_{r,0}^{3/4}\left(\frac{m}{H_{0}}\right)^{1/2}\,.$
(103)
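As an order-of-magnitude sketch (not from the paper; the fiducial values of $H_{0}$ and $\Omega_{r,0}$ below are assumed standard inputs), Eq. (103) can be evaluated as:

```python
import numpy as np

# Assumed fiducial inputs (standard rough values, not taken from the text)
H0_EV = 1.44e-33   # Hubble constant in eV
OMEGA_R0 = 9.2e-5  # current radiation density parameter

def f_g(m_eV, alpha):
    """Order-of-magnitude spin-2 dark matter fraction, Eq. (103)."""
    return (alpha**2 / (1.0 + alpha**2)) * OMEGA_R0**0.75 * np.sqrt(m_eV / H0_EV)
```

Since $f_{g}\propto\sqrt{m}$, fixing $f_{g}$ ties the graviton mass to $\alpha$, which is what produces the dashed lines in Fig. 3.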
Since the spin-2 dark matter couples to matter fields in the same way as the
massless graviton, a signal caused by the oscillating spin-2 dark matter can
be probed by the gravitational wave detectors. As detailed in Appendix B, the
signal depends on the combination $f_{g}\alpha^{2}$ and the graviton mass
$M_{I}$. In Fig. 3, we show the detectability of $f_{g}\alpha^{2}$ for spin-2
dark matter by advanced LIGO, DECIGO, and LISA.
In our scenario, when the fraction is fixed, we obtain a relation between the
graviton mass $M_{I}\sim m$ and the ratio of the gravitational constants
$\alpha$ by using (103). The values of $f_{g}\alpha^{2}$ for several fixed
$f_{g}$ are shown as black dashed lines in Fig. 3. The signal of our scenario
is detectable even if the massive spin-2 field only contributes to a small
fraction of the total dark matter density in the mass range $m\lesssim
10^{-10}{\rm eV}$.
## VI Conclusion
In the present paper, we have considered the Bianchi type-I solution in two
bigravity theories free of the Boulware-Deser ghost: Hassan-Rosen bigravity
and the Minimal Theory of Bigravity. First, we have identified the
background equations for the Bianchi type-I Universe, and found that the
background equations are the same in the two theories. Furthermore, we have
found fixed points of the background equations with relatively large
anisotropy and classified them by local stability. We have also investigated
the global stability around the fixed points by showing the phase portraits
for all patterns of the local stability.
One interesting application of the anisotropic fixed point is the production
of spin-2 dark matter, which corresponds to generating the initial anisotropy
$\beta$ in the Universe. One way to generate the initial amplitude of $\beta$
is a phase
transition that changes the stability of anisotropic and isotropic fixed
points. The phase transition can be achieved by introducing a matter field.
Our scenario is somewhat similar to the axion dark matter Preskill _et al._
(1983); Abbott and Sikivie (1983); Dine and Fischler (1983). In the
misalignment mechanism of the axion dark matter, the initial amplitude of
axion is generated by a misalignment away from the bottom of the potential in
the early universe. In our scenario, on the other hand, the “misalignment” is
spontaneously generated by the instability of the isotropic fixed point even
if its initial amplitude is negligibly small, and the size of the
“misalignment” is fixed when the model is given. The rough estimation of the
abundance from this production mechanism shows that spin-2 dark matter can
account for all or a part of dark matter. As shown in Fig. 3, gravitational
wave detectors are expected to be able to search for ultralight spin-2 dark
in a certain range of the graviton mass even if its fraction to all dark
matter is small.
###### Acknowledgements.
We would like to thank Hiroki Takeda for insightful comments. The work of Y.M.
was supported by the establishment of university fellowships towards the
creation of science technology innovation. This work was supported in part by
Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific
Research No. 20K14468 (K.A.), No. 18K13537 (T.F.), No. 20H05854 (T.F.) and No.
17H02890 (S.M.), No. 17H06359 (S.M), and by World Premier International
Research Center Initiative, MEXT, Japan.
## Appendix A Hamiltonian formulation of Minimal Theory of Bigravity
In this section, we derive the background equations in the Bianchi type-I
Universe for the Minimal Theory of Bigravity through the Hamiltonian
formalism. The Minimal Theory of Bigravity was originally constructed in the
Hamiltonian language in order to impose the appropriate constraints, and thus
it takes a relatively simple form in this formalism. We define the canonical
momenta associated with $a_{g},a_{f},\beta_{g},\beta_{f}$ as
$\displaystyle P_{g}$ $\displaystyle=\frac{\partial
L}{\partial\dot{a}_{g}}\,,\quad P_{f}=\frac{\partial
L}{\partial\dot{a}_{f}}\,,$ (104) $\displaystyle Q_{g}$
$\displaystyle=\frac{\partial L}{\partial\dot{\beta}_{g}}\,,\quad
Q_{f}=\frac{\partial L}{\partial\dot{\beta}_{f}}\,.$ (105)
The mini-superspace Hamiltonian in Bianchi type-I Universe is obtained by
Legendre transformation of the Lagrangian in (36) as
$\displaystyle H$
$\displaystyle=P_{g}\dot{a}_{g}+P_{f}\dot{a}_{f}+Q_{g}\dot{\beta}_{g}+Q_{f}\dot{\beta}_{f}-L$
$\displaystyle={\cal C}_{N_{g}}N_{g}+{\cal C}_{N_{f}}N_{f}+{\cal
C}_{\lambda}\lambda\,,$ (106)
where
$\displaystyle{\cal C}_{N_{g}}$ $\displaystyle=-\frac{m^{4}M_{\rm
Pl}^{2}a_{g}^{3}\lambda^{2}}{8N_{g}^{2}}\big{[}-3b_{3}^{2}-4b_{2}b_{3}(e^{-2\beta}+2e^{\beta})\xi-2(2b_{2}^{2}+b_{1}b_{3})(2e^{-\beta}+e^{2\beta})\xi^{2}-12b_{1}b_{2}\xi^{3}+b_{1}^{2}(e^{4\beta}-4e^{\beta})\xi^{4}\big{]}$
$\displaystyle+\frac{m^{2}M_{\rm
Pl}^{2}a_{g}^{3}}{2}\big{[}b_{4}+b_{3}(e^{-2\beta}+2e^{\beta})\xi+b_{2}(2e^{-\beta}+e^{2\beta})\xi^{2}+b_{1}\xi^{3}\big{]}+\frac{-a_{g}^{2}P_{g}^{2}+Q_{g}^{2}}{12M_{\rm
Pl}^{2}a_{g}^{3}}\,,$ (107) $\displaystyle{\cal C}_{N_{f}}$
$\displaystyle=\frac{m^{4}M_{\rm
Pl}^{2}a_{g}^{3}\lambda^{2}}{8\alpha^{2}N_{f}^{2}\xi}\big{[}b_{3}^{2}(-e^{-\beta}+4e^{-\beta})+12b_{2}b_{3}\xi+2(2b_{2}^{2}+b_{1}b_{3})(e^{-2\beta}+2e^{\beta})\xi^{2}+4b_{1}b_{2}(2e^{-\beta}+e^{2\beta})\xi^{3}+3b_{1}^{2}\xi^{4}\big{]}$
$\displaystyle+\frac{m^{2}M_{\rm
Pl}^{2}a_{g}^{3}}{2}\big{[}b_{3}+b_{2}(e^{-\beta}+2e^{\beta})\xi+b_{1}(2e^{-\beta}+e^{2\beta})\xi^{2}+b_{0}\xi^{3}\big{]}+\frac{-a_{g}^{2}\xi^{2}P_{f}^{2}+Q_{f}^{2}}{12\alpha^{2}M_{\rm
Pl}^{2}a_{g}^{3}\xi^{3}}\,,$ (108) $\displaystyle{\cal C}_{\lambda}$
$\displaystyle=-\frac{m^{4}M_{\rm
Pl}^{2}a_{g}^{3}\lambda}{4\alpha^{2}N_{g}N_{f}\xi}(b_{3}+2b_{2}e^{\beta}\xi+b_{1}e^{2\beta}\xi^{2})\big{\{}N_{g}[b_{3}(-e^{-4\beta}+4e^{-\beta})+2b_{2}(e^{-3\beta}+2)\xi+3b_{1}e^{-2\beta}\xi^{2}]$
$\displaystyle-\alpha^{2}N_{f}\xi[-3b_{3}-2b_{2}(2e^{-2\beta}+e^{\beta})\xi+b_{1}(-4e^{-\beta}+e^{2\beta})\xi^{2}]\big{\}}$
$\displaystyle+\frac{m^{2}a_{g}}{12\alpha^{2}\xi}\big{\{}\big{[}b_{3}e^{-2\beta}+2b_{3}e^{\beta}+2b_{2}(2e^{-\beta}+e^{2\beta})\xi+3b_{1}\xi^{2}\big{]}P_{f}-\alpha^{2}\xi\big{[}3b_{3}+2b_{2}(e^{-2\beta}+2e^{\beta})\xi+b_{1}(2e^{-\beta}+e^{2\beta})\xi^{2}\big{]}P_{g}\big{\}}$
$\displaystyle+\frac{m^{2}(e^{\beta}-e^{-2\beta})}{6\alpha^{2}\xi^{2}}\big{[}(b_{3}+b_{2}e^{\beta}\xi)Q_{f}+\alpha^{2}\xi^{3}(b_{2}+b_{1}e^{\beta}\xi)Q_{g}\big{]}\,.$
(109)
Then we immediately get the constraint equations ${\cal C}_{N_{g}}\approx
0,{\cal C}_{N_{f}}\approx 0,{\cal C}_{\lambda}\approx 0$. We also obtain
the canonical equations
$\displaystyle\dot{P}_{g}$ $\displaystyle=-\frac{\partial H}{\partial
a_{g}}\,,$ $\displaystyle\dot{P}_{f}$ $\displaystyle=-\frac{\partial
H}{\partial a_{f}}\,,$ (110) $\displaystyle\dot{a}_{g}$
$\displaystyle=\frac{\partial H}{\partial P_{g}}\,,$
$\displaystyle\dot{a}_{f}$ $\displaystyle=\frac{\partial H}{\partial
P_{f}}\,,$ (111) $\displaystyle\dot{Q}_{g}$ $\displaystyle=-\frac{\partial
H}{\partial\beta_{g}}\,,$ $\displaystyle\dot{Q}_{f}$
$\displaystyle=-\frac{\partial H}{\partial\beta_{f}}\,,$ (112)
$\displaystyle\dot{\beta}_{g}$ $\displaystyle=\frac{\partial H}{\partial
Q_{g}}\,,$ $\displaystyle\dot{\beta}_{f}$ $\displaystyle=\frac{\partial
H}{\partial Q_{f}}\,.$ (113)
Consistency requires that the time derivatives of the constraint equations
vanish. Substituting (109), (111), and (113)
into $\dot{{\cal C}}_{N_{g}}\approx 0$, we obtain
$\displaystyle\lambda
F_{1}\left[\lambda,a_{g},a_{f},\beta_{g},\beta_{f},P_{g},P_{f},Q_{g},Q_{f}\right]=0\,.$
(114)
The left-hand side is proportional to $\lambda$, so we get two branches of
the solution, $\lambda\approx 0$ and $F_{1}\approx 0$. Similarly, $\dot{{\cal
C}}_{N_{f}}\approx 0$ gives
$\displaystyle\lambda
F_{2}\left[\lambda,a_{g},a_{f},\beta_{g},\beta_{f},P_{g},P_{f},Q_{g},Q_{f}\right]=0\,.$
(115)
This again gives two branches of the solution, $\lambda\approx 0$ and
$F_{1}\approx 0\land F_{2}\approx 0$. Although MTBG is intended to give
the same background equations as HRBG in the homogeneous Universe,
$F_{1}\approx 0\land F_{2}\approx 0$ leads to an additional constraint on the
background. Furthermore, it can be shown that the background solution with
$F_{1}\approx 0\land F_{2}\approx 0$ is not viable at least for the
isotropic Universe; thus, we select $\lambda\approx 0$. Since the only
differences from the equations of Hassan-Rosen bigravity are the terms with
$\lambda$, we confirm that the background equations of the Minimal Theory of
Bigravity are identical to those of Hassan-Rosen bigravity.
## Appendix B Probing spin-2 dark matter with gravitational wave detectors
In this section, we briefly show the detectability of spin-2 dark matter. The
main result is shown in Fig. 3. Our analysis is similar to that of Ref. Armaleo
_et al._ (2021).
### B.1 Perturbations around the Minkowski spacetime
We consider the action of bigravity with a matter field $\psi_{\rm m}$ that
couples only to the $g$-metric:
$\displaystyle S=S_{g}+S_{\rm m}[\psi_{\rm m},g_{\mu\nu}]\,,$ (116)
where $S_{g}$ is defined by (1). In order to analyze the responses of the
gravitational wave detector, we define the metric perturbations around the
Minkowski spacetime by
$\displaystyle\delta g_{\mu\nu}$ $\displaystyle:=g_{\mu\nu}-\eta_{\mu\nu}\,,$
(117) $\displaystyle\delta f_{\mu\nu}$
$\displaystyle:=f_{\mu\nu}-\eta_{\mu\nu}.$ (118)
Note that neither $\delta g_{\mu\nu}$ nor $\delta f_{\mu\nu}$ is a mass
eigenstate. At the linear order, the mass eigenstates are given by
$\displaystyle h_{\mu\nu}$
$\displaystyle:=\frac{\kappa_{f}}{\kappa_{g}\kappa}\delta
g_{\mu\nu}+\frac{\kappa_{g}}{\kappa_{f}\kappa}\delta f_{\mu\nu}$ (119)
$\displaystyle\varphi_{\mu\nu}$ $\displaystyle:=\frac{1}{\kappa}\left(\delta
g_{\mu\nu}-\delta f_{\mu\nu}\right)\,.$ (120)
The quadratic-order action is then
$\displaystyle S_{2}$ $\displaystyle=\int d^{4}x\Bigg{[}{\cal L}_{\rm
EH}[h]+{\cal L}_{\rm EH}[\varphi]+{\cal L}_{\rm FP}[\varphi]$
$\displaystyle+\frac{1}{2M_{\rm Pl}}h_{\mu\nu}T_{\rm
m}^{\mu\nu}+\frac{1}{2M_{G}}\varphi_{\mu\nu}T_{\rm m}^{\mu\nu}\Bigg{]}\,,$
(121)
where
$\displaystyle M_{\mathrm{Pl}}:=\frac{\kappa}{\kappa_{g}\kappa_{f}},\quad
M_{G}:=\frac{\kappa}{\kappa_{g}^{2}}=\frac{\kappa_{f}}{\kappa_{g}}M_{\mathrm{Pl}}\,,$
(122)
and for an arbitrary $\chi_{\mu\nu}$, we define
$\displaystyle{\cal L}_{\rm EH}[\chi]$
$\displaystyle:=\frac{1}{8}\big{[}(2\partial_{\nu}\chi_{\mu\rho}-\partial_{\rho}\chi_{\mu\nu})\partial^{\rho}\chi^{\mu\nu}$
$\displaystyle+(\partial^{\mu}\chi-2\partial_{\nu}\chi^{\mu\nu})\partial_{\mu}\chi\big{]}\,,$
(123) $\displaystyle{\cal L}_{\rm FP}[\chi]$
$\displaystyle:=\frac{M^{2}}{8}(\chi^{2}-\chi^{\mu\nu}\chi_{\mu\nu})\,,$ (124)
with the mass of spin-2 dark matter $M$, and we have used the notation
$\chi=\chi^{\mu}{}_{\mu}$. Ultralight spin-2 dark matter in our Galaxy is
modeled by
$\displaystyle\varphi_{ij}=\sum_{\lambda}\varphi_{0,\lambda}e^{\lambda}_{ij}\cos(\omega
t-\bm{k}\cdot\bm{x}+\delta_{\tau}(t))\,,$ (125)
where $\delta_{\tau}(t)$ is a time-dependent phase factor, which evolves on
the coherent timescale $\tau=2\pi/(Mv_{\rm DM}^{2})$. Since the typical dark
matter velocity in our Galaxy is $v_{\mathrm{DM}}\sim 10^{-3}$, we can use the
non-relativistic approximation $\omega\sim M$. In this model, the dark matter
density is given by
$\displaystyle\rho_{g}=\frac{1}{4}\left<\dot{\varphi}_{ij}\dot{\varphi}_{ij}\right>\simeq\frac{M^{2}}{4}\sum_{\lambda}\left<\varphi_{0,\lambda}^{2}\right>\,,$
(126)
where the symbol $\braket{\cdots}$ denotes the spacetime average. We have used
the fact $\braket{\cos^{2}(Mt)}=1/2$ and
$e_{ij}^{\lambda}e_{ij}^{\lambda^{\prime}}=2\delta^{\lambda\lambda^{\prime}}$.
In the following, we assume a massive graviton with only the helicity-2 modes excited:
$\displaystyle\varphi_{0}$
$\displaystyle:=\sqrt{\braket{\varphi_{0,+}^{2}}}=\sqrt{\braket{\varphi_{0,\times}^{2}}}=\frac{\sqrt{2\rho_{g}}}{M}\,,$
(127) $\displaystyle\sqrt{\braket{\varphi_{0,x}^{2}}}$
$\displaystyle=\sqrt{\braket{\varphi_{0,y}^{2}}}=\sqrt{\braket{\varphi_{0,b}^{2}}}=0\,.$
(128)
### B.2 Signal in a gravitational-wave detector
The $g$-metric, which is coupled to the matter fields, is given by
$\displaystyle g_{\mu\nu}=\eta_{\mu\nu}+\frac{h_{\mu\nu}}{M_{\rm
Pl}}+\frac{\varphi_{\mu\nu}}{M_{G}}\,.$ (129)
The signal in the gravitational wave detector from the massive graviton is
obtained by contracting the detector tensor
$D^{ij}=(\hat{x}^{i}\hat{x}^{j}-\hat{y}^{i}\hat{y}^{j})/2$ with the fluctuation,
$\displaystyle h(t)$ $\displaystyle=\frac{1}{M_{G}}D^{ij}\varphi_{ij}$
$\displaystyle=\frac{\alpha\varphi_{0}}{M_{\rm
Pl}}[F_{+}(\theta,\phi,\psi)+F_{\times}(\theta,\phi,\psi)]$
$\displaystyle\times\cos(\omega t-\bm{k}\cdot\bm{x}+\delta_{\tau}(t))\,,$
(130)
where $F_{+},F_{\times},\cdots$ are antenna pattern functions which depend on
the sky location $(\theta,\phi)$ and polarization angle $\psi$. For advanced
LIGO, the antenna pattern functions are given by
$\displaystyle F_{+}(\theta,\phi,\psi)$
$\displaystyle=\frac{1}{2}(1+\cos^{2}\theta)\cos(2\phi)\cos(2\psi)$
$\displaystyle-\cos\theta\sin(2\phi)\sin(2\psi)\,,$ (131) $\displaystyle
F_{\times}(\theta,\phi,\psi)$
$\displaystyle=\frac{1}{2}(1+\cos^{2}\theta)\cos(2\phi)\sin(2\psi)$
$\displaystyle+\cos\theta\sin(2\phi)\cos(2\psi)\,.$ (132)
The sky/polarization averages of the squared antenna pattern functions are given by
$\displaystyle\mathcal{R}=\left<F_{+}^{2}\right>=\left<F_{\times}^{2}\right>=\frac{1}{5}\,,\quad\left<F_{+}F_{\times}\right>=0\,,$
(133)
where the bracket $\left<\cdots\right>$ denotes
$\displaystyle\left<\cdots\right>=\frac{1}{4\pi^{2}}\int_{0}^{\pi}d\psi\int_{0}^{2\pi}d\phi\int_{0}^{\pi}d\theta\sin\theta(\cdots)\,.$
(134)
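The value $\mathcal{R}=\left<F_{+}^{2}\right>=1/5$ can be verified numerically. A self-contained Python check using the measure of Eq. (134) and a midpoint rule:

```python
import numpy as np

def F_plus(theta, phi, psi):
    """Plus-mode antenna pattern of an L-shaped detector, Eq. (131)."""
    return (0.5 * (1 + np.cos(theta)**2) * np.cos(2 * phi) * np.cos(2 * psi)
            - np.cos(theta) * np.sin(2 * phi) * np.sin(2 * psi))

# Midpoint-rule average over psi in [0, pi], phi in [0, 2*pi],
# theta in [0, pi] with the measure of Eq. (134).
n = 120
psi = (np.arange(n) + 0.5) * np.pi / n
phi = (np.arange(n) + 0.5) * 2 * np.pi / n
theta = (np.arange(n) + 0.5) * np.pi / n
P, Ph, Th = np.meshgrid(psi, phi, theta, indexing="ij")
cell = (np.pi / n) * (2 * np.pi / n) * (np.pi / n)
avg = np.sum(F_plus(Th, Ph, P)**2 * np.sin(Th)) * cell / (4 * np.pi**2)
# avg is close to 1/5, confirming <F_+^2> = 1/5
```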
For LISA, the antenna pattern functions depend on the frequency, and their
sky/polarization average $\mathcal{R}$ is given by Robson _et al._ (2019)
$\displaystyle\mathcal{R}=\frac{3}{10}-\frac{507}{5040}\left(\frac{f}{f_{*}}\right)+\cdots$
(135)
where $f_{*}=19.09$ mHz is the transfer frequency of LISA.
The threshold of the detection signal can be estimated by
$\displaystyle\left<h^{2}\right>=\frac{S_{n}(\frac{M}{2\pi})}{T_{\rm eff}}\,,$
(136)
where $S_{n}$ is the one-sided noise spectrum of each detector, and $T_{\rm
eff}$ is the effective observation time that takes into account the coherent
timescale $\tau$ Budker _et al._ (2014)
$\displaystyle T_{\rm eff}=\left\{\begin{array}[]{cc}T_{\rm obs}&(T_{\rm
obs}<\tau)\\\ \sqrt{\tau T_{\rm obs}}&(T_{\rm
obs}\geq\tau)\end{array}\right..$ (139)
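A direct transcription of Eq. (139) (illustrative only; the conversion $\hbar\simeq 6.58\times 10^{-16}\,\mathrm{eV\,s}$ is a standard value supplied here, not from the text):

```python
import math

HBAR_EV_S = 6.582e-16  # hbar in eV*s (assumed standard conversion)

def coherence_time(M_eV, v_dm=1e-3):
    """Coherence timescale tau = 2*pi / (M v_DM^2), in seconds."""
    return 2.0 * math.pi * HBAR_EV_S / (M_eV * v_dm**2)

def T_eff(T_obs, M_eV, v_dm=1e-3):
    """Effective observation time, Eq. (139)."""
    tau = coherence_time(M_eV, v_dm)
    return T_obs if T_obs < tau else math.sqrt(tau * T_obs)
```

For a 2-year run ($T_{\rm obs}\simeq 6.3\times 10^{7}$ s), a mass $M\sim 10^{-12}$ eV gives $\tau\ll T_{\rm obs}$ and $T_{\rm eff}=\sqrt{\tau T_{\rm obs}}$, while for $M\sim 10^{-22}$ eV the field stays coherent over the run and $T_{\rm eff}=T_{\rm obs}$.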
Here, the time-averaged signal is
$\displaystyle\left<h^{2}\right>=\frac{2\alpha^{2}f_{g}\rho_{\rm DM}}{5M_{\rm
Pl}^{2}M^{2}}\,,$ (140)
where $\rho_{\rm DM}\simeq 0.3~{}{\rm GeV/cm^{3}}$ is the local dark matter
density, and $f_{g}=\rho_{g}/\rho_{\rm DM}$ is the spin-2 dark matter fraction
of the total dark matter density. Plugging $T_{\rm obs}=2$ years and the noise
spectra given in Ref. Lisa _et al._ (2018); Yagi and Seto (2011); Robson _et
al._ (2019), we obtain the sensitivity curves for $\alpha^{2}f_{g}$ shown in
Fig. 3.
## References
* Aghanim _et al._ (2020) N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO] .
* Lin (2019) T. Lin, PoS 333, 009 (2019), arXiv:1904.07915 [hep-ph] .
* Peccei and Quinn (1977) R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
* Dror _et al._ (2019) J. A. Dror, K. Harigaya, and V. Narayan, Phys. Rev. D 99, 035036 (2019), arXiv:1810.07195 [hep-ph] .
* Bastero-Gil _et al._ (2019) M. Bastero-Gil, J. Santiago, L. Ubaldi, and R. Vega-Morales, JCAP 04, 015 (2019), arXiv:1810.07208 [hep-ph] .
* Ema _et al._ (2019) Y. Ema, K. Nakayama, and Y. Tang, JHEP 07, 060 (2019), arXiv:1903.10973 [hep-ph] .
* Nakai _et al._ (2020) Y. Nakai, R. Namba, and Z. Wang, JHEP 12, 170 (2020), arXiv:2004.10743 [hep-ph] .
* Salehian _et al._ (2021) B. Salehian, M. A. Gorji, H. Firouzjahi, and S. Mukohyama, Phys. Rev. D 103, 063526 (2021), arXiv:2010.04491 [hep-ph] .
* Firouzjahi _et al._ (2021) H. Firouzjahi, M. A. Gorji, S. Mukohyama, and B. Salehian, JHEP 06, 050 (2021), arXiv:2011.06324 [hep-ph] .
* East and Pretorius (2017) W. E. East and F. Pretorius, Phys. Rev. Lett. 119, 041101 (2017), arXiv:1704.04791 [gr-qc] .
* Cardoso _et al._ (2018) V. Cardoso, O. J. C. Dias, G. S. Hartnett, M. Middleton, P. Pani, and J. E. Santos, JCAP 03, 043 (2018), arXiv:1801.01420 [gr-qc] .
* East (2017) W. E. East, Phys. Rev. D 96, 024004 (2017), arXiv:1705.01544 [gr-qc] .
* Aoki and Mukohyama (2016) K. Aoki and S. Mukohyama, Phys. Rev. D 94, 024001 (2016), arXiv:1604.06704 [hep-th] .
* Aoki and Maeda (2018) K. Aoki and K.-i. Maeda, Phys. Rev. D 97, 044002 (2018), arXiv:1707.05003 [hep-th] .
* Babichev _et al._ (2016a) E. Babichev, L. Marzola, M. Raidal, A. Schmidt-May, F. Urban, H. Veermäe, and M. von Strauss, JCAP 09, 016 (2016a), arXiv:1607.03497 [hep-th] .
* Babichev _et al._ (2016b) E. Babichev, L. Marzola, M. Raidal, A. Schmidt-May, F. Urban, H. Veermäe, and M. von Strauss, Phys. Rev. D 94, 084055 (2016b), arXiv:1604.08564 [hep-ph] .
* Marzola _et al._ (2018) L. Marzola, M. Raidal, and F. R. Urban, Phys. Rev. D 97, 024010 (2018), arXiv:1708.04253 [hep-ph] .
# Holomorphic Anomaly Equations For $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$
Deniz Genlik, Department of Mathematics, Ohio State University, 100 Math Tower, 231 West 18th Ave., Columbus, OH 43210, USA<EMAIL_ADDRESS>
Hsian-Hua Tseng, Department of Mathematics, Ohio State University, 100 Math Tower, 231 West 18th Ave., Columbus, OH 43210, USA<EMAIL_ADDRESS>
###### Abstract.
We prove holomorphic anomaly equations for $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$
based on the work of Lho [8].
To the memory of Bumsig Kim
###### Contents
0. Introduction
  0.1. Acknowledgement
1. On mirror theorem
2. Frobenius Structures
3. Lift of modified R-matrix
  3.1. Canonical Lift
  3.2. Preparations
4. Holomorphic anomaly equations
  4.1. Formula for potentials
    4.1.1. Graphs
    4.1.2. Formula for $\mathcal{F}_{g}$
  4.2. Proof of holomorphic anomaly equations
## 0\. Introduction
The cyclic group $\mathbb{Z}_{5}$ acts naturally on $\mathbb{C}^{5}$ by
letting its generator $1\in\mathbb{Z}_{5}$ act by multiplication by the fifth
root of unity
$e^{\frac{2\pi\sqrt{-1}}{5}}.$
This action commutes with the diagonal action of the torus
$\mathrm{T}=(\mathbb{C}^{*})^{5}$ on $\mathbb{C}^{5}$ and induces a
$\mathrm{T}$-action on $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. Consequently
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ is a toric Deligne-Mumford stack.
This paper is concerned with $\mathrm{T}$-equivariant Gromov-Witten invariants
of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. By definition, these are the following
integrals
(0.1)
$\int_{\left[\overline{M}_{g,n}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)\right]^{vir}}\prod_{i=1}^{n}\mathrm{ev}_{i}^{*}\left(\gamma_{i}\right)\psi_{i}^{k_{i}}.$
Here,
$[\overline{M}_{g,n}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)]^{vir}$
is the ($\mathrm{T}$-equivariant) virtual fundamental class of the moduli
space
$\overline{M}_{g,n}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)$
of stable maps to $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. The classes $\psi_{i}\in
H^{2}(\overline{M}_{g,n}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right),\mathbb{Q})$
are descendant classes. The evaluation maps
$\mathrm{ev}_{i}:\overline{M}_{g,n}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)\to
I[\mathbb{C}^{5}/\mathbb{Z}_{5}]$
take values in the inertia stack $I[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ of
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. The classes $\gamma_{i}\in
H^{*}_{\mathrm{T,Orb}}([\mathbb{C}^{5}/\mathbb{Z}_{5}]):=H^{*}_{\mathrm{T}}(I[\mathbb{C}^{5}/\mathbb{Z}_{5}])$
are classes in the Chen-Ruan cohomology of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$.
Let
$\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\in
H^{*}_{\mathrm{T}}(\text{pt})=H^{*}(B\mathrm{T})$
be the first Chern classes of the tautological line bundles of
$B\mathrm{T}=(B\mathbb{C}^{*})^{5}$. Then (0.1) takes values in
$\mathbb{Q}(\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$.
Foundational treatments of orbifold Gromov-Witten theory can be found in many references. For compact target stacks, the original reference is [1]. For non-compact target stacks admitting torus actions, such as $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$, one can define their Gromov-Witten theory using the virtual localization formula [5]. In this case, their Gromov-Witten theory should be understood as a certain twisted Gromov-Witten theory of stacks. Generalities on twisted Gromov-Witten theory of stacks can be found in [2] and [15].
The main results of this paper concern structures of Gromov-Witten invariants
(0.1), formulated in terms of generating functions. The definition of inertia
stacks implies that
$I[\mathbb{C}^{5}/\mathbb{Z}_{5}]=[\mathbb{C}^{5}/\mathbb{Z}_{5}]\cup\bigcup_{k=1}^{4}B\mathbb{Z}_{5}.$
Let
$\phi_{0}=1\in
H^{0}_{\mathrm{T}}([\mathbb{C}^{5}/\mathbb{Z}_{5}]),\phi_{k}=1\in
H^{0}_{\mathrm{T}}(B\mathbb{Z}_{5}),1\leq k\leq 4.$
Then $\\{\phi_{0},\phi_{1},\phi_{2},\phi_{3},\phi_{4}\\}$ is an additive basis
of $H^{*}_{\mathrm{T,Orb}}([\mathbb{C}^{5}/\mathbb{Z}_{5}])$. The orbifold
Poincaré dual $\\{\phi^{0},\phi^{1},\phi^{2},\phi^{3},\phi^{4}\\}$ of this
basis is given by
$\phi^{0}=5\lambda_{0}\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}\phi_{0},\quad\phi^{1}=5\phi_{4},\quad\phi^{2}=5\phi_{3},\quad\phi^{3}=5\phi_{2},\quad\phi^{4}=5\phi_{1}.$
To simplify notation, in what follows we set
$\phi_{i}\coloneqq\phi_{j}\quad\text{if
}j\equiv{i}\mod{5}\quad\text{and}\quad\phi^{i}\coloneqq\phi^{j}\quad\text{if
}j\equiv{i}\mod{5},$
for all $i\geq{0}$ and $0\leq{j}\leq{4}$.
For $\phi_{c_{1}},\ldots,\phi_{c_{n}}\in
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$,
define the Gromov-Witten potential by
(0.2)
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)=\sum_{d=0}^{\infty}\frac{\Theta^{d}}{d!}\int_{\left[\overline{M}_{g,n+d}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)\right]^{vir}}\prod_{k=1}^{n}\mathrm{ev}_{k}^{*}\left(\phi_{c_{k}}\right)\prod_{i=n+1}^{n+d}\mathrm{ev}_{i}^{*}\left(\phi_{1}\right).$
In the standard double bracket notation, this is
$\left\langle\left\langle\phi_{c_{1}},\ldots,\phi_{c_{n}}\right\rangle\right\rangle_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}=\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right).$
The main results of this paper are differential equations for these generating
functions $\mathcal{F}^{[\mathbb{C}^{5}/\mathbb{Z}_{5}]}_{g}$ for $g\geq{2}$
after the following specializations of equivariant parameters:
(0.3) $\lambda_{i}=e^{\frac{2\pi\sqrt{-1}i}{5}},\quad 0\leq i\leq 4.$
There are two differential equations, given precisely in Theorems 4.6 and 4.7 below. Theorem 4.6 is an analogue of a main result of [10]. To the best of our knowledge, Theorem 4.7 has no analogue in previous studies. Borrowing terminology from string theory, we call these two differential equations the holomorphic anomaly equations for $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$.
Our approach to proving holomorphic anomaly equations is the same as that of
[10] and is based on the fact that genus $0$ Gromov-Witten theory of
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ yields a semisimple Frobenius structure on
$H^{*}_{\mathrm{T,Orb}}([\mathbb{C}^{5}/\mathbb{Z}_{5}])$. Consequently, the
cohomological field theory (in the sense of [6]) associated to the Gromov-
Witten theory of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ is semisimple. The
Givental-Teleman classification [4], [14] of semisimple cohomological field
theories can then be applied to yield an explicit formula for
$\mathcal{F}^{[\mathbb{C}^{5}/\mathbb{Z}_{5}]}_{g}$, which can be used to
prove holomorphic anomaly equations.
The rest of the paper is organized as follows. In Section 1, we state the mirror theorem for $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ and study certain power series arising from the $I$-function. In Section 2, we describe the necessary ingredients of the Frobenius structure coming from the Gromov-Witten theory of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. In Section 3, we study the lift of an important ingredient, the $R$-matrix, to a certain ring of functions. Section 4 contains the main results of this paper. In Section 4.1, we give the formula for the Gromov-Witten potentials of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ arising from the Givental-Teleman classification of semisimple CohFTs. In Section 4.2, we state the two holomorphic anomaly equations and use the formula of Section 4.1 to prove them.
### 0.1. Acknowledgement
D. G. is supported in part by a Special Graduate Assignment fellowship from Ohio State University’s Department of Mathematics, and H.-H. T. is supported in part by a Simons Foundation collaboration grant.
## 1\. On mirror theorem
In this Section we discuss the mirror theorem for the Gromov-Witten theory of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$.
The $I$-function of $\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]$ is defined to be (here, $\langle\bullet\rangle$ denotes the fractional part of $\bullet$)
(1.1)
$I\left(x,z\right)=\sum_{k=0}^{\infty}\frac{x^{k}}{{z^{k}}k!}\prod_{\begin{subarray}{c}b:0\leq
b<\frac{k}{5}\\\ \langle
b\rangle=\langle\frac{k}{5}\rangle\end{subarray}}\left(1-(bz)^{5}\right)\phi_{k}.$
It is easy to see that the $I$-function (1.1) of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ is of the form
(1.2) $I\left(x,z\right)=\sum_{k=0}^{\infty}\frac{I_{k}(x)}{z^{k}}\phi_{k}.$
The small $J$-function for $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ is defined by
$J\left(\Theta,z\right)=\phi_{0}+\frac{\Theta\phi_{1}}{z}+\sum_{i=0}^{4}\phi^{i}\left\langle\left\langle\frac{\phi_{i}}{z(z-\psi)}\right\rangle\right\rangle_{0,1}^{[\mathbb{C}^{5}/\mathbb{Z}_{5}]}.$
The following is a consequence of the main result of [2].
###### Proposition 1.1.
We have the following mirror identity,
(1.3) $J\left(T(x),z\right)=I(x,z),$
with the mirror transformation
(1.4) $T(x)=I_{1}(x)=\sum_{k\geq
0}\frac{(-1)^{5k}x^{5k+1}}{(5k+1)!}\left(\frac{\Gamma\left(k+\frac{1}{5}\right)}{\Gamma\left(\frac{1}{5}\right)}\right)^{5}.$
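As a sanity check on Proposition 1.1, one can verify directly that the $\phi_{1}/z$-coefficient of the $I$-function (1.1) reproduces the closed form (1.4): for $k=5m+1$ the product in (1.1) runs over $b=j+\frac{1}{5}$, $0\leq j\leq m-1$, and only its top $z$-power contributes to the $1/z$ term. The following sympy sketch (our own check, not part of the paper) compares the first few orders:

```python
import sympy as sp

x, z = sp.symbols('x z')

def I_phi1_coeff(m_max):
    """Coefficient of phi_1/z in (1.1): for k = 5m+1 the product runs over
    b = j + 1/5, j = 0..m-1; the z^(5m) term pairs with x^k/z^k to give 1/z."""
    total = sp.Integer(0)
    for m in range(m_max + 1):
        k = 5 * m + 1
        prod = sp.Mul(*[1 - ((j + sp.Rational(1, 5)) * z)**5 for j in range(m)])
        c = sp.Integer(1) if m == 0 else sp.expand(prod).coeff(z, 5 * m)
        total += c * x**k / sp.factorial(k)
    return sp.expand(total)

def T_series(m_max):
    """Closed form (1.4), with the Gamma-quotient simplified by gammasimp."""
    total = sp.Integer(0)
    for m in range(m_max + 1):
        ratio = sp.gammasimp(sp.gamma(m + sp.Rational(1, 5))
                             / sp.gamma(sp.Rational(1, 5)))
        total += (-1)**(5 * m) * x**(5 * m + 1) / sp.factorial(5 * m + 1) * ratio**5
    return sp.expand(total)

assert sp.expand(I_phi1_coeff(3) - T_series(3)) == 0
```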
Define the operator
$D:\mathbb{C}[[x]]\rightarrow x\mathbb{C}[[x]]$
by
$Df(x)=x\frac{df(x)}{dx}.$
Next, we consider the following series in $\mathbb{C}[[x]]$ arising from the $I$-function, which will be useful later (here, our $L$ differs from the $L$ defined in [8] by a sign; although the definitions of $C_{1}$ and $C_{2}$ look different from those in [8], it is easy to check that they match):
(1.5) $\displaystyle L=$ $\displaystyle
x\left(1+\left(\frac{x}{5}\right)^{5}\right)^{-\frac{1}{5}},$ $\displaystyle
C_{1}=$ $\displaystyle DI_{1},$ $\displaystyle C_{2}=$ $\displaystyle
D\left(\frac{DI_{2}}{C_{1}}\right),$ $\displaystyle C_{3}=$ $\displaystyle
D\left(\frac{D\left(\frac{DI_{3}}{C_{1}}\right)}{C_{2}}\right).$
It is easy to verify that
(1.6) $\frac{DL}{L}=1-\frac{L^{5}}{5^{5}}.$
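The verification of (1.6) is a one-line logarithmic-derivative computation; a sympy sketch (ours) confirming it:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# L = x * (1 + (x/5)^5)^(-1/5) as in (1.5); D = x d/dx
L = x * (1 + (x / 5)**5)**sp.Rational(-1, 5)
DL = x * sp.diff(L, x)

# identity (1.6): DL/L = 1 - L^5/5^5
assert sp.simplify(DL / L - (1 - L**5 / 5**5)) == 0
```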
In [8, Proposition 4], the following identity is given:
(1.7) $C_{1}^{2}C_{2}^{2}C_{3}=L^{5}.$
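Identity (1.7) can also be checked order by order in $x$. The sketch below (our own verification; $I_{1},I_{2},I_{3}$ are implemented as the $z$-coefficient extraction described after (1.2), truncating the defining sums at $m\leq 2$) confirms $C_{1}^{2}C_{2}^{2}C_{3}=L^{5}$ through order $x^{11}$:

```python
import sympy as sp

x = sp.symbols('x')
N = 12  # truncation order for all series arithmetic

def I_k(r, m_max=2):
    """x-series of I_r (r = 1, 2, 3): the phi_r/z^r coefficient of (1.1),
    coming from the top z-power of the product over b = j + r/5."""
    s = sp.Integer(0)
    for m in range(m_max + 1):
        k = 5 * m + r
        top = sp.Mul(*[-(j + sp.Rational(r, 5))**5 for j in range(m)])
        s += top * x**k / sp.factorial(k)
    return s

D = lambda f: sp.expand(x * sp.diff(f, x))          # D = x d/dx
ser = lambda f: sp.series(f, x, 0, N).removeO()     # truncate at x^N

I1, I2, I3 = I_k(1), I_k(2), I_k(3)
C1 = D(I1)
C2 = D(ser(D(I2) / C1))
C3 = D(ser(D(ser(D(I3) / C1)) / C2))
L5 = ser(x**5 / (1 + (x / 5)**5))  # L^5 from (1.5)

assert sp.expand(ser(C1**2 * C2**2 * C3) - L5) == 0
```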
The following lemma is a direct consequence of the definition (0.2) of the Gromov-Witten potential and the mirror map $\Theta=T(x)$.
###### Lemma 1.2.
For $k\geq{1}$, we have
$\frac{\partial^{k}\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)}{\partial
T^{k}}=\mathcal{F}_{g,n+k}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(\phi_{c_{1}},\ldots,\phi_{c_{n}},\underbrace{\phi_{1},\ldots,\phi_{1}}_{k-\text{many}}).$
We further define the following series:
(1.8) $\displaystyle X_{1}=$ $\displaystyle\frac{DC_{1}}{C_{1}},$
$\displaystyle X_{2}=$ $\displaystyle\frac{DC_{2}}{C_{2}},$ $\displaystyle
A_{1}=$ $\displaystyle\frac{1}{L}\left(\frac{DL}{L}-X_{1}\right),$
$\displaystyle A_{2}=$
$\displaystyle\frac{1}{L}\left(2\frac{DL}{L}-X_{1}-X_{2}\right),$
$\displaystyle B_{i}=$
$\displaystyle\frac{1}{5^{i}}(D+X_{1})^{i-1}X_{1}\quad\text{for}\quad
1\leq{i}\leq{4}.$
In [8, Section 3], the following equations are given:
(1.9) $\displaystyle B_{4}=$
$\displaystyle\left(1-\frac{L^{5}}{5^{5}}\right)\left(2B_{3}-\frac{7}{5}B_{2}+\frac{2}{5}B_{1}-\frac{24}{625}\right),$
(1.10) $\displaystyle DX_{2}=$
$\displaystyle-10\left(1-\frac{L^{5}}{5^{5}}\right)+10\left(1-\frac{L^{5}}{5^{5}}\right)X_{1}+5\left(1-\frac{L^{5}}{5^{5}}\right)X_{2}-2X_{1}^{2}-4DX_{1}-2X_{1}X_{2}-X_{2}^{2}.$
Since there is a linear relation between $\\{A_{1},A_{2}\\}$ and
$\\{X_{1},X_{2}\\}$ with coefficients from the ring $\mathbb{C}[L^{\pm 1}]$,
we can rewrite these two equations in terms of $A_{i}$’s and their $D$
differentials. For example, equation (1.10) can be rewritten as
(1.11)
$DA_{2}=LA_{1}^{2}+LA_{2}^{2}-DA_{1}-15\left(1-\frac{L^{5}}{5^{5}}\right)\frac{L^{5}}{5^{5}}.$
Moreover, these linear relations show that the differential ring
$\mathbb{C}[L^{\pm
1}][A_{1},A_{2},DA_{1},DA_{2},D^{2}A_{1},D^{2}A_{2},\ldots]$
is a quotient of the free polynomial ring
$\mathbb{F}\coloneqq\mathbb{C}[L^{\pm 1}][A_{1},DA_{1},D^{2}A_{1},A_{2}].$
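Since the text only asserts the existence of this linear relation, it may help to record it explicitly; the following display is a routine inversion of the definitions of $A_{1}$ and $A_{2}$ in (1.8) (our computation, not quoted from [8]):

```latex
A_{1}=\frac{1}{L}\left(\frac{DL}{L}-X_{1}\right)
\;\Longleftrightarrow\;
X_{1}=\frac{DL}{L}-LA_{1},
\qquad
A_{2}=\frac{1}{L}\left(2\frac{DL}{L}-X_{1}-X_{2}\right)
\;\Longleftrightarrow\;
X_{2}=\frac{DL}{L}+LA_{1}-LA_{2}.
```

Substituting $\frac{DL}{L}=1-\frac{L^{5}}{5^{5}}$ from (1.6) shows the coefficients indeed lie in $\mathbb{C}[L^{\pm 1}]$.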
## 2\. Frobenius Structures
In this Section, we spell out details of the Frobenius structure on
$H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$
defined using genus $0$ Gromov-Witten theory of
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$. We refer to [7] for generalities of
Frobenius structures.
Let $\gamma=\sum_{i=0}^{4}t_{i}\phi_{i}\in
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$.
The full genus $0$ Gromov-Witten potential is defined to be
(2.1)
$\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta)=\sum_{n=0}^{\infty}\sum_{d=0}^{\infty}\frac{1}{n!d!}\int_{\left[\overline{M}_{0,n+d}^{\mathrm{orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right],0\right)\right]^{vir}}\prod_{i=1}^{n}\operatorname{ev}_{i}^{*}(\gamma)\prod_{i=n+1}^{n+d}\mathrm{ev}_{i}^{*}\left(\Theta\phi_{1}\right).$
In the basis $\\{\phi_{0},\phi_{1},\phi_{2},\phi_{3},\phi_{4}\\}$ and under
the specialization (0.3), the orbifold Poincaré pairing
$g(-,-):H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)\times
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)\to\mathbb{Q}(\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
has the matrix representation
(2.2) $G=\frac{1}{5}\begin{bmatrix}1&0&0&0&0\\\ 0&0&0&0&1\\\ 0&0&0&1&0\\\
0&0&1&0&0\\\ 0&1&0&0&0\end{bmatrix}.$
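A quick numeric consistency check (ours): under the specialization (0.3) we have $\lambda_{0}\cdots\lambda_{4}=\zeta^{10}=1$, so the Poincaré duals listed in Section 0 are encoded by the matrix $M=5P$, where $P$ is the permutation matrix fixing $0$ and swapping $1\leftrightarrow 4$, $2\leftrightarrow 3$; the duality $g(\phi^{i},\phi_{j})=\delta_{ij}$ then reads $MG=\mathrm{Id}$ (the names $M$, $P$ are ours):

```python
import numpy as np

zeta = np.exp(2j * np.pi / 5)
lam = zeta ** np.arange(5)              # specialization (0.3)
assert abs(np.prod(lam) - 1) < 1e-12    # lambda_0 * ... * lambda_4 = 1

# permutation matrix fixing 0 and swapping 1 <-> 4, 2 <-> 3
P = np.zeros((5, 5))
P[0, 0] = 1
for i in range(1, 5):
    P[i, 5 - i] = 1

G = P / 5    # pairing matrix (2.2)
M = 5 * P    # dual-basis coefficients: phi^0 = 5 phi_0, phi^i = 5 phi_{5-i}

# duality g(phi^i, phi_j) = delta_{ij} reads M @ G = Id
assert np.allclose(M @ G, np.eye(5))
```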
The quantum product $\bullet_{\gamma}$ at $\gamma\in
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$
is a product structure on
$H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$.
It can be defined as follows:
$g(\phi_{i}\bullet_{\gamma}\phi_{j},\phi_{k}):=\frac{\partial^{3}}{\partial
t_{i}\partial t_{j}\partial
t_{k}}\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta).$
In what follows, we focus on the quantum product $\bullet_{\gamma=0}$ at
$\gamma=0\in
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$,
which we denote by $\bullet$. Note that $\bullet$ still depends on $\Theta$.
It is proved in [8, Section 5] that the quantum product at $0\in
H^{\star}_{\mathrm{T,Orb}}\left(\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]\right)$
is semisimple with the idempotent basis $\\{e_{0},e_{1},e_{2},e_{3},e_{4}\\}$,
that is,
$e_{i}\bullet e_{j}=\delta_{i,j}e_{j}.$
The corresponding normalized idempotent basis
$\\{\widetilde{e}_{0},\widetilde{e}_{1},\widetilde{e}_{2},\widetilde{e}_{3},\widetilde{e}_{4}\\}$
is given by
$\widetilde{e}_{i}=5e_{i}\quad\text{for}\quad 0\leq{i}\leq{4}.$
In [8, Section 5], the transition matrix given by
$\Psi_{ij}=g(\widetilde{e}_{i},\phi_{j})$ is calculated to be
$\Psi=\frac{1}{5}\begin{bmatrix}1&\frac{L}{C_{1}}&\frac{L^{2}}{C_{1}C_{2}}&\frac{C_{1}C_{2}}{L^{2}}&\frac{C_{1}}{L}\\\
1&\zeta\frac{L}{C_{1}}&\zeta^{2}\frac{L^{2}}{C_{1}C_{2}}&\zeta^{3}\frac{C_{1}C_{2}}{L^{2}}&\zeta^{4}\frac{C_{1}}{L}\\\
1&\zeta^{2}\frac{L}{C_{1}}&\zeta^{4}\frac{L^{2}}{C_{1}C_{2}}&\zeta\frac{C_{1}C_{2}}{L^{2}}&\zeta^{3}\frac{C_{1}}{L}\\\
1&\zeta^{3}\frac{L}{C_{1}}&\zeta\frac{L^{2}}{C_{1}C_{2}}&\zeta^{4}\frac{C_{1}C_{2}}{L^{2}}&\zeta^{2}\frac{C_{1}}{L}\\\
1&\zeta^{4}\frac{L}{C_{1}}&\zeta^{3}\frac{L^{2}}{C_{1}C_{2}}&\zeta^{2}\frac{C_{1}C_{2}}{L^{2}}&\zeta\frac{C_{1}}{L}\end{bmatrix}.$
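The rows of $\Psi$ represent the normalized idempotents, so $g(\widetilde{e}_{i},\widetilde{e}_{j})=\delta_{ij}$ translates into $\Psi G^{-1}\Psi^{T}=\mathrm{Id}$. The following numeric check (ours; the values of $L$, $C_{1}$, $C_{2}$ are arbitrary nonzero placeholders) confirms the identity holds independently of those values:

```python
import numpy as np

zeta = np.exp(2j * np.pi / 5)
L, C1, C2 = 2.0, 3.0, 7.0   # arbitrary nonzero sample values

# columns of Psi: 1, L/C1, L^2/(C1 C2), C1 C2/L^2, C1/L, scaled by zeta^(i j)
cols = np.array([1, L / C1, L**2 / (C1 * C2), C1 * C2 / L**2, C1 / L])
Psi = np.array([[zeta**(i * j) * cols[j] for j in range(5)]
                for i in range(5)]) / 5

# G^{-1} is 5 times the permutation matrix underlying (2.2)
P = np.zeros((5, 5))
P[0, 0] = 1
for i in range(1, 5):
    P[i, 5 - i] = 1
Ginv = 5 * P

# g(e~_i, e~_j) = delta_{ij}  <=>  Psi Ginv Psi^T = Id
assert np.allclose(Psi @ Ginv @ Psi.T, np.eye(5))
```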
Let $\left\\{u^{0},u^{1},u^{2},u^{3},u^{4}\right\\}$ be canonical coordinates
associated to the idempotent basis
$\left\\{e_{0},e_{1},e_{2},e_{3},e_{4}\right\\}$ which satisfy
$u^{\alpha}\left(t_{i}=0,\Theta=0\right)=0.$
By [8, Lemma 6], we have
(2.3) $\frac{du^{\alpha}}{dx}=\zeta^{\alpha}L\frac{1}{x}$
at $t=0$, for $0\leq\alpha\leq{4}$.
The full genus $0$ Gromov-Witten potential (2.1) satisfies the following property:
$\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta)=\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t|_{t_{1}=0},\Theta+t_{1}),$
that is,
$\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta)$
depends on $t_{1}$ and $\Theta$ through $\Theta+t_{1}$. In particular, the
operator
(2.4) $\frac{\partial}{\partial{t_{1}}}-\frac{\partial}{\partial\Theta}$
annihilates
$\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta)$.
Denote by
$R(z)=\text{Id}+\sum_{k\geq
1}R_{k}z^{k}\in\text{End}(H^{*}_{\mathrm{T,Orb}}([\mathbb{C}^{5}/\mathbb{Z}_{5}]))[[z]]$
the $R$-matrix of the Frobenius structure associated to the
($\mathrm{T}$-equivariant) Gromov-Witten theory of
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ near the semisimple point $0$. The
$R$-matrix plays a central role in the Givental-Teleman classification of
semisimple cohomological field theories. By definition of $R$, the symplectic
condition
$R(z)\cdot R(-z)^{*}=\text{Id}$
holds. The following flatness equation
(2.5) $z(d\Psi^{-1})R+z\Psi^{-1}(dR)+\Psi^{-1}R(dU)-\Psi^{-1}(dU)R=0$
also holds, see [7, Section 4.6] and [3, Proposition 1.1]. Here
$d=\frac{d}{dt}$.
Since $\mathcal{F}_{0}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}(t,\Theta)$
depends on $t_{1}$ and $\Theta$ through $\Theta+t_{1}$, it follows that $\Psi$
and $R(z)$ also depend on $t_{1}$ and $\Theta$ through $\Theta+t_{1}$. So we
have (an argument for this, written for a different target space, from the CohFT viewpoint can be found in [13, Section 3.3])
$\frac{\partial}{\partial
t_{1}}\Psi=\frac{\partial}{\partial\Theta}\Psi,\quad\frac{\partial}{\partial
t_{1}}R(z)=\frac{\partial}{\partial\Theta}R(z).$
In equation (2.5), we set $t_{\neq 1}=0$ and only consider $\frac{d}{dt_{1}}$.
It follows that (2.5) can be written as
$z(\frac{d}{d\Theta}\Psi^{-1})R+z\Psi^{-1}(\frac{d}{d\Theta}R)+\Psi^{-1}R(\frac{d}{d\Theta}U)-\Psi^{-1}(\frac{d}{d\Theta}U)R=0.$
Since
$\frac{d}{d\Theta}=\frac{dx}{d\Theta}\frac{d}{dx},$
after cancelling $\frac{dx}{d\Theta}$ and multiplying by $x$, we rewrite the
above equation as
$z(x\frac{d}{dx}\Psi^{-1})R+z\Psi^{-1}(x\frac{d}{dx}R)+\Psi^{-1}R(x\frac{d}{dx}U)-\Psi^{-1}(x\frac{d}{dx}U)R=0.$
By equating coefficients of $z^{k}$, we see that
(2.6)
$\Psi\left(D\Psi^{-1}\right)R_{k-1}+DR_{k-1}+R_{k}\left(DU\right)-\left(DU\right)R_{k}=0$
or equivalently,
(2.7)
$D\left(\Psi^{-1}R_{k-1}\right)+\left(\Psi^{-1}R_{k}\right)DU-\Psi^{-1}\left(DU\right)\Psi\left(\Psi^{-1}R_{k}\right)=0$
where $D=x\frac{d}{dx}$ as before.
Now set $t_{1}=0$. By equation (2.3), we have
(2.8) $DU=\operatorname{diag}(L,L\zeta,L\zeta^{2},L\zeta^{3},L\zeta^{4}).$
Let $P_{i,j}^{k}$ denote the $(i,j)$-entry of the matrix defined by
$P_{k}=\Psi^{-1}R_{k}$ after being restricted to the semisimple point $0\in
H^{*}_{\mathrm{T,Orb}}\left([\mathbb{C}^{5}/\mathbb{Z}_{5}]\right)$. Set
$\widetilde{P}_{i,j}^{k}=\frac{L^{i}}{K_{i}}P_{i,j}^{k}\zeta^{(k+i)j}$
where $0\leq i,j\leq{4}$ and $k\geq 0$. Then, the flatness equation (2.7)
reads as
(2.9) $\displaystyle\widetilde{P}_{4,j}^{k}=$
$\displaystyle\widetilde{P}_{0,j}^{k}+\frac{1}{L}D\widetilde{P}_{0,j}^{k-1},$
$\displaystyle\widetilde{P}_{3,j}^{k}=$
$\displaystyle\widetilde{P}_{4,j}^{k}+\frac{1}{L}D\widetilde{P}_{4,j}^{k-1}+A_{1}\widetilde{P}_{4,j}^{k-1},$
$\displaystyle\widetilde{P}_{2,j}^{k}=$
$\displaystyle\widetilde{P}_{3,j}^{k}+\frac{1}{L}D\widetilde{P}_{3,j}^{k-1}+A_{2}\widetilde{P}_{3,j}^{k-1},$
$\displaystyle\widetilde{P}_{1,j}^{k}=$
$\displaystyle\widetilde{P}_{2,j}^{k}+\frac{1}{L}D\widetilde{P}_{2,j}^{k-1}-A_{2}\widetilde{P}_{2,j}^{k-1},$
$\displaystyle\widetilde{P}_{0,j}^{k}=$
$\displaystyle\widetilde{P}_{1,j}^{k}+\frac{1}{L}D\widetilde{P}_{1,j}^{k-1}-A_{1}\widetilde{P}_{1,j}^{k-1}.$
We call equations (2.9) the modified flatness equations.
By the methodology of [16], we obtain the following result.
###### Lemma 2.1.
We have $\widetilde{P}_{0,j}^{k}\in\mathbb{C}[L^{\pm 1}]$ for all
$0\leq{j}\leq{4}$ and $k\geq{0}$.
## 3\. Lift of modified R-matrix
### 3.1. Canonical Lift
The functions $\widetilde{P}_{i,j}^{k}\in\mathbb{C}[[x]]$ in the modified flatness equations have canonical lifts to the ring
$\mathbb{F}=\mathbb{C}[L^{\pm 1}][A_{1},DA_{1},D^{2}A_{1},A_{2}]$
via equation (1.6), equation (1.11), and the first four rows of the modified flatness equations (2.9), in descending order. More precisely, we start with Lemma 2.1, that is,
$\widetilde{P}_{0,j}^{k}\in\mathbb{C}[L^{\pm 1}]\subset\mathbb{F}.$
Then, by equation (1.6), we obtain
$\widetilde{P}_{4,j}^{k}=\widetilde{P}_{0,j}^{k}+\frac{1}{L}D\widetilde{P}_{0,j}^{k-1}\in\mathbb{C}[L^{\pm
1}]\subset\mathbb{F}.$
By proceeding in a similar manner, we see that
$\displaystyle\widetilde{P}_{3,j}^{k}$
$\displaystyle=\widetilde{P}_{4,j}^{k}+\frac{1}{L}D\widetilde{P}_{4,j}^{k-1}+A_{1}\widetilde{P}_{4,j}^{k-1}\in\mathbb{C}[L^{\pm
1}][A_{1}]\subset\mathbb{F},$ $\displaystyle\widetilde{P}_{2,j}^{k}$
$\displaystyle=\widetilde{P}_{3,j}^{k}+\frac{1}{L}D\widetilde{P}_{3,j}^{k-1}+A_{2}\widetilde{P}_{3,j}^{k-1}\in\mathbb{C}[L^{\pm
1}][A_{1},DA_{1},A_{2}]\subset\mathbb{F}.$
Lastly, using equation (1.11) we get
$\widetilde{P}_{1,j}^{k}=\widetilde{P}_{2,j}^{k}+\frac{1}{L}D\widetilde{P}_{2,j}^{k-1}-A_{2}\widetilde{P}_{2,j}^{k-1}\in\mathbb{C}[L^{\pm
1}][A_{1},DA_{1},D^{2}A_{1},A_{2}]=\mathbb{F}.$
This procedure gives us a canonical lift of
$\widetilde{P}_{i,j}^{k}\in\mathbb{C}[[x]]$ to the free polynomial ring
$\mathbb{F}$, which we denote again as $\widetilde{P}_{i,j}^{k}$. We state
this result in the following way.
###### Lemma 3.1.
We have $\widetilde{P}_{i,j}^{k}\in\mathbb{F}$ for all $0\leq{i,j}\leq{4}$ and
$k\geq{0}$.
### 3.2. Preparations
In this subsection, we use the lift $\widetilde{P}_{i,j}^{k}\in\mathbb{F}$ and
prove two lemmas which will be used for the proof of holomorphic anomaly
equations.
###### Lemma 3.2.
The following identity holds
$\frac{\partial\widetilde{P}_{i,j}^{k}}{\partial{A_{2}}}=\delta_{i,2}\widetilde{P}_{{3},j}^{k-1}.$
###### Proof.
It is clear that the degrees of $\widetilde{P}_{0,j}^{k}$,
$\widetilde{P}_{4,j}^{k}$ and $\widetilde{P}_{3,j}^{k}$ in $A_{2}$ are all
zero. Hence, we get
(3.1)
$\frac{\partial\widetilde{P}_{0,j}^{k}}{\partial{A_{2}}}=\frac{\partial\widetilde{P}_{4,j}^{k}}{\partial{A_{2}}}=\frac{\partial\widetilde{P}_{3,j}^{k}}{\partial{A_{2}}}=0.$
The quantity $A_{2}$ first appears in the next two equations of (2.9):
(3.2) $\displaystyle\widetilde{P}_{2,j}^{k}=$
$\displaystyle\widetilde{P}_{3,j}^{k}+\frac{1}{L}D\widetilde{P}_{3,j}^{k-1}+A_{2}\widetilde{P}_{3,j}^{k-1},$
(3.3) $\displaystyle\widetilde{P}_{1,j}^{k}=$
$\displaystyle\widetilde{P}_{2,j}^{k}+\frac{1}{L}D\widetilde{P}_{2,j}^{k-1}-A_{2}\widetilde{P}_{2,j}^{k-1}.$
From the first equation (3.2), we see that
$\frac{\partial\widetilde{P}_{2,j}^{k}}{\partial{A_{2}}}=\widetilde{P}_{3,j}^{k-1}.$
Note that equation (1.11) gives
$\frac{\partial\left(DA_{2}\right)}{\partial A_{2}}=2LA_{2}.$
Now, we compute the last derivative. By flatness equations (2.9) and equation
(3.2), we obtain
(3.4) $\displaystyle\frac{\partial\widetilde{P}_{1,j}^{k}}{\partial A_{2}}=$
$\displaystyle\frac{\partial\widetilde{P}_{2,j}^{k}}{\partial
A_{2}}+\frac{1}{L}\frac{\partial\left(D\widetilde{P}_{2,j}^{k-1}\right)}{\partial
A_{2}}-\widetilde{P}_{2,j}^{k-1}-A_{2}\frac{\partial\widetilde{P}_{2,j}^{k-1}}{\partial
A_{2}}$ $\displaystyle=$
$\displaystyle\frac{\partial\widetilde{P}_{2,j}^{k}}{\partial
A_{2}}+\frac{1}{L}\left(2LA_{2}\widetilde{P}_{3,j}^{k-2}+D\widetilde{P}_{3,j}^{k-2}\right)-\widetilde{P}_{2,j}^{k-1}-A_{2}\frac{\partial\widetilde{P}_{2,j}^{k-1}}{\partial
A_{2}}.$
Then, by equation (3.2) and again by flatness equations (2.9), we get
(3.5) $\displaystyle\frac{\partial\widetilde{P}_{1,j}^{k}}{\partial A_{2}}=$
$\displaystyle\widetilde{P}_{3,j}^{k-1}+2A_{2}\widetilde{P}_{3,j}^{k-2}+\frac{1}{L}D\widetilde{P}_{3,j}^{k-2}-\widetilde{P}_{2,j}^{k-1}-A_{2}\widetilde{P}_{3,j}^{k-2}$
$\displaystyle=$
$\displaystyle\widetilde{P}_{3,j}^{k-1}+A_{2}\widetilde{P}_{3,j}^{k-2}+\frac{1}{L}D\widetilde{P}_{3,j}^{k-2}-\widetilde{P}_{2,j}^{k-1}=0.$
This completes the proof. ∎
###### Lemma 3.3.
The following identity holds
$\frac{\partial\widetilde{P}_{i,j}^{k}}{\partial{(D^{2}A_{1})}}=\delta_{i,1}\frac{1}{L^{2}}\widetilde{P}_{{4},j}^{k-3}.$
###### Proof.
It is clear that the only $\widetilde{P}_{i,j}^{k}\in\mathbb{F}$ that has nonzero degree in $D^{2}A_{1}$ is $\widetilde{P}_{1,j}^{k}$, and the degree of $\widetilde{P}_{1,j}^{k}$ in $D^{2}A_{1}$ is $1$. So, we obtain
(3.6)
$\frac{\partial\widetilde{P}_{0,j}^{k}}{\partial{(D^{2}A_{1})}}=\frac{\partial\widetilde{P}_{4,j}^{k}}{\partial{(D^{2}A_{1})}}=\frac{\partial\widetilde{P}_{3,j}^{k}}{\partial{(D^{2}A_{1})}}=\frac{\partial\widetilde{P}_{2,j}^{k}}{\partial{(D^{2}A_{1})}}=0.$
The coefficient of $D^{2}A_{1}$ in $\widetilde{P}_{1,j}^{k}$ descends from the
coefficient of $A_{1}$ in $\widetilde{P}_{3,j}^{k-2}$, which is
$\widetilde{P}_{4,j}^{k-3}$. Keeping track of this term in the procedure of
canonical lifting, we see that the coefficient of $D^{2}A_{1}$ in
$\widetilde{P}_{1,j}^{k}$ is
$\frac{1}{L^{2}}\widetilde{P}_{{4},j}^{k-3}.$
This completes the proof. ∎
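Both lemmas can also be confirmed mechanically by running the canonical lift of Section 3.1 on sample inputs. In the sympy sketch below (ours), the inputs $\widetilde{P}_{0,j}^{k}\in\mathbb{C}[L^{\pm 1}]$ are hypothetical, made up for illustration (real values come from the $R$-matrix), and the operator $D$ acts through (1.6) and (1.11):

```python
import sympy as sp

L, A1, DA1, D2A1, A2 = sp.symbols('L A1 DA1 D2A1 A2')

def D(f):
    """D = x d/dx on the ring F, using DL = L(1 - L^5/5^5) from (1.6),
    DA2 from (1.11), and the generators DA1, D2A1."""
    DL = L * (1 - L**5 / 5**5)
    DA2 = L * A1**2 + L * A2**2 - DA1 - 15 * (1 - L**5 / 5**5) * L**5 / 5**5
    return (sp.diff(f, L) * DL + sp.diff(f, A1) * DA1
            + sp.diff(f, DA1) * D2A1 + sp.diff(f, A2) * DA2)

K = 6
# hypothetical inputs P~_{0,j}^k in C[L^{+-1}], made up for illustration
P0 = {k: L**k + sp.Rational(1, k + 1) / L for k in range(K)}
get = lambda tbl, k: tbl.get(k, 0)

# canonical lift via the rows of the modified flatness equations (2.9)
P4, P3, P2, P1 = {}, {}, {}, {}
for k in range(K):
    P4[k] = get(P0, k) + D(get(P0, k - 1)) / L
    P3[k] = P4[k] + D(get(P4, k - 1)) / L + A1 * get(P4, k - 1)
    P2[k] = P3[k] + D(get(P3, k - 1)) / L + A2 * get(P3, k - 1)
    P1[k] = P2[k] + D(get(P2, k - 1)) / L - A2 * get(P2, k - 1)

k = K - 1
# Lemma 3.2: only P~_2 depends on A2, with derivative P~_3^{k-1}
assert sp.expand(sp.diff(P2[k], A2) - P3[k - 1]) == 0
assert sp.expand(sp.diff(P1[k], A2)) == 0
# Lemma 3.3: only P~_1 depends on D^2 A1, with derivative P~_4^{k-3}/L^2
assert sp.expand(sp.diff(P1[k], D2A1) - P4[k - 3] / L**2) == 0
assert sp.diff(P2[k], D2A1) == 0
```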
## 4\. Holomorphic anomaly equations
### 4.1. Formula for potentials
By general considerations, Gromov-Witten theory of
$[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ has the structure of a cohomological field
theory (CohFT). We refer to [6] and [11] for discussions on CohFTs.
#### 4.1.1. Graphs
We describe the graphs needed in the formula for Gromov-Witten potentials.
Recall that a stable graph $\Gamma$ is a tuple
$\Gamma=\left(\mathrm{V}_{\Gamma},\mathrm{H}_{\Gamma},\mathrm{L}_{\Gamma},\mathrm{g}:\mathrm{V}_{\Gamma}\rightarrow\mathbb{Z}_{\geq
0},\nu:\mathrm{H}_{\Gamma}\cup\mathrm{L}_{\Gamma}\rightarrow\mathrm{V}_{\Gamma},\iota:\mathrm{H}_{\Gamma}\rightarrow\mathrm{H}_{\Gamma},\ell:\mathrm{L}_{\Gamma}\rightarrow\\{1,\ldots,m\\}\right)$
satisfying:
1. (1)
$\mathrm{V}_{\Gamma}$ is the vertex set with a genus assignment
$\mathrm{g}:\mathrm{V}_{\Gamma}\rightarrow\mathbb{Z}_{\geq 0}$,
2. (2)
$\mathrm{H}_{\Gamma}$ is the half-edge set equipped with an involution
$\iota:\mathrm{H}_{\Gamma}\rightarrow\mathrm{H}_{\Gamma}$,
3. (3)
$\mathrm{E}_{\Gamma}$ is the edge set defined by the orbits of
$\iota:\mathrm{H}_{\Gamma}\rightarrow\mathrm{H}_{\Gamma}$ in
$\mathrm{H}_{\Gamma}$ (self-edges are allowed at the vertices) and the tuple
$\left(\mathrm{V}_{\Gamma},\mathrm{E}_{\Gamma}\right)$ defines a connected
graph,
4. (4)
$\mathrm{L}_{\Gamma}$ is the set of legs equipped with an isomorphism
$\ell:\mathrm{L}_{\Gamma}\rightarrow\\{1,\ldots,m\\}$,
5. (5)
The map
$\nu:\mathrm{H}_{\Gamma}\cup\mathrm{L}_{\Gamma}\rightarrow\mathrm{V}_{\Gamma}$
is a vertex assignment,
6. (6)
For each vertex $\mathfrak{v}$, let
$\mathrm{n}(\mathfrak{v})=\mathrm{l}(\mathfrak{v})+\mathrm{h}(\mathfrak{v})$
be the valence of the vertex, where $\mathrm{l}(\mathfrak{v})$ and
$\mathrm{h}(\mathfrak{v})$ are the number of legs and the number of half-edges
attached to the vertex $\mathfrak{v}$, respectively. Then the following
stability condition holds:
$2\mathrm{g}(\mathfrak{v})-2+\mathrm{n}(\mathfrak{v})>0.$
The genus of a stable graph $\Gamma$ is defined by
$\mathrm{g}(\Gamma)=h^{1}(\Gamma)+\sum_{\mathfrak{v}\in\mathrm{V}}\mathrm{g}(\mathfrak{v}).$
A decorated stable graph
$\Gamma\in\mathrm{G}_{g,n}^{\text{Dec}}(5)$
of order $5$ is a stable graph $\Gamma\in\mathrm{G}_{g,n}$ with an extra
assignment $\mathrm{p}:\mathrm{V}_{\Gamma}\rightarrow\\{0,1,2,3,4\\}$ to each
vertex $\mathfrak{v}\in\mathrm{V}_{\Gamma}$.
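As a concrete illustration of the definitions above (the representation is ours), the genus formula amounts to $h^{1}(\Gamma)=|\mathrm{E}_{\Gamma}|-|\mathrm{V}_{\Gamma}|+1$ for a connected graph:

```python
# toy representation of a stable graph: vertices carry a genus, edges join
# vertices (self-edges allowed), legs attach to vertices
def is_stable(genus, legs_at, edges):
    # n(v) = number of legs + number of half-edges at v; require 2g(v)-2+n(v) > 0
    for v, g in genus.items():
        n = legs_at.get(v, 0) + sum((e[0] == v) + (e[1] == v) for e in edges)
        if 2 * g - 2 + n <= 0:
            return False
    return True

def graph_genus(genus, edges):
    # for a connected graph, h^1(Gamma) = |E| - |V| + 1
    h1 = len(edges) - len(genus) + 1
    return h1 + sum(genus.values())

# two genus-1 vertices joined by a single edge, no legs
genus = {'v1': 1, 'v2': 1}
edges = [('v1', 'v2')]
assert is_stable(genus, {}, edges)
assert graph_genus(genus, edges) == 2   # h^1 = 0, genera 1 + 1
```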
#### 4.1.2. Formula for $\mathcal{F}_{g}$
By the results stated in Section 2, the CohFT of the Gromov-Witten theory of $[\mathbb{C}^{5}/\mathbb{Z}_{5}]$ is semisimple. By the Givental-Teleman classification of semisimple CohFTs (see e.g. [11] for a survey), we can write the Gromov-Witten potential as a sum over decorated stable graphs,
(4.1)
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)=\sum_{\Gamma\in\mathrm{G}_{g,n}^{\text{Dec}}(5)}\operatorname{Cont}_{\Gamma}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right).$
Details about how this formula works in general can be found in e.g. [12] and
[10].
In order to state the contributions of graphs to
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)$,
we need to introduce the following series in $\mathbb{C}[[x]]$:
$K_{0}=1,\quad K_{1}=C_{1},\quad K_{2}=C_{1}C_{2},\quad K_{3}=C_{1}C_{2}C_{3},\quad\text{and}\quad K_{4}=C_{1}C_{2}^{2}C_{3},$
and the following involution
$\mathrm{Inv}:\\{0,1,2,3,4\\}\rightarrow\\{0,1,2,3,4\\},$
with $\mathrm{Inv}(0)=0$ and $\mathrm{Inv}(i)=5-i$ for $1\leq i\leq 4$.
###### Proposition 4.1.
The contribution associated to a decorated stable graph
$\Gamma\in\mathrm{G}_{g,n}^{\text{Dec}}(5)$ is
$\operatorname{Cont}_{\Gamma}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)=\frac{1}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\prod_{\mathfrak{e}\in\mathrm{E}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})\prod_{\mathfrak{l}\in\mathrm{L}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l}),$
where
$\mathrm{F}(\Gamma)=\left|\mathrm{H}_{\Gamma}\cup\mathrm{L}_{\Gamma}\right|=n+\left|\mathrm{H}_{\Gamma}\right|$
and $\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})$,
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})$, and
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l})$, the vertex, edge and leg
contributions with flag $\mathrm{A}$-values
$(a_{1},\ldots,a_{n},b_{1},\ldots,b_{\left|\mathrm{H}_{\Gamma}\right|})$
respectively, are given by444Notation: The values
${b_{\mathfrak{v}1}},\ldots,{b_{\mathfrak{v}\mathrm{h}(\mathfrak{v})}}$ and
$b_{\mathfrak{e}1},b_{\mathfrak{e}2}$ are the entries of
$(a_{1},\ldots,a_{n},b_{1},\ldots,b_{\left|\mathrm{H}_{\Gamma}\right|})$
corresponding to $\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})$ and
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})$ respectively.
$\displaystyle\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})=$
$\displaystyle\sum_{k\geq
0}\frac{g({e}_{\mathrm{p}(\mathfrak{v})},{e}_{\mathrm{p}(\mathfrak{v})})^{-\frac{2g-2+\mathrm{n}(\mathfrak{v})+k}{2}}}{k!}$
$\displaystyle\times\int_{\overline{M}_{\mathrm{g}(\mathfrak{v}),\mathrm{n}(\mathfrak{v})+k}}\psi_{1}^{a_{\mathfrak{v}1}}\cdots\psi_{\mathrm{l}(v)}^{a_{\mathfrak{v}\mathrm{l}(\mathfrak{v})}}\psi_{\mathrm{l}(\mathfrak{v})+1}^{b_{\mathfrak{v}1}}\cdots\psi_{\mathrm{n}(\mathfrak{v})}^{b_{\mathfrak{v}\mathrm{h}(\mathfrak{v})}}t_{\mathrm{p}(\mathfrak{v})}(\psi_{\mathrm{n}(\mathfrak{v})+1})\cdots
t_{\mathrm{p}(\mathfrak{v})}(\psi_{\mathrm{n}(\mathfrak{v})+k}),$
$\displaystyle\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=0}^{b_{\mathfrak{e}2}}(-1)^{m}\sum_{r=0}^{4}\frac{\widetilde{P}_{\mathrm{Inv}(r),\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m+1}\widetilde{P}_{r,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}}{\zeta^{(b_{\mathfrak{e}1}+m+1+\mathrm{Inv}(r))\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+r)\mathrm{p}(\mathfrak{v}_{2})}},$
$\displaystyle\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l})=$
$\displaystyle\frac{(-1)^{a_{\ell(\mathfrak{l})}}}{5}\frac{K_{\mathrm{Inv}(c_{\ell(\mathfrak{l})})}}{L^{\mathrm{Inv}(c_{\ell(\mathfrak{l})})}}\frac{\widetilde{P}_{\mathrm{Inv}(c_{\ell(\mathfrak{l})}),\mathrm{p}(\nu(\mathfrak{l}))}^{{a_{\ell(\mathfrak{l})}}}}{\zeta^{({a_{\ell(\mathfrak{l})}}+{\mathrm{Inv}(c_{\ell(\mathfrak{l})})})\mathrm{p}(\nu(\mathfrak{l}))}},$
where
$t_{\mathrm{p}(\mathfrak{v})}(z)=\sum_{k\geq{2}}\mathrm{T}_{\mathrm{p}(\mathfrak{v})k}z^{k}\quad\text{with}\quad\mathrm{T}_{\mathrm{p}(\mathfrak{v})k}=\frac{(-1)^{k}}{n}\widetilde{P}_{0,\mathrm{p}(\mathfrak{v})}^{k}\zeta^{-k\mathrm{p}(\mathfrak{v})}.$
We should emphasize that Proposition 4.1 holds in $\mathbb{C}[[x]]$. Using
Proposition 4.1 and the lifting procedure in Section 3, we obtain the following
lift of the Gromov-Witten potential to certain polynomial rings.
###### Theorem 4.2 (Finite generation property).
Let $\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})$,
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})$, and
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l})$ be as in Proposition 4.1.
We have $\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\in\mathbb{C}[L^{\pm
1}]$, $\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})\in\mathbb{F}$, and
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l})\in\mathbb{F}[C_{1},C_{2},C_{3}]$.
Hence, we have
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{n}}\right)\in\mathbb{F}[C_{1},C_{2},C_{3}]$
and, when there are no insertions, we have
$\mathcal{F}_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\in\mathbb{F}$
where $\mathbb{F}=\mathbb{C}[L^{\pm 1}][A_{1},DA_{1},D^{2}A_{1},A_{2}]$ as
before.
###### Proof.
By Lemma 2.1, we have
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\in\mathbb{C}[L^{\pm 1}]$
since its expression involves only the $\widetilde{P}_{0,j}^{k}$’s. By Lemma 3.1
and the definitions of the $K_{i}$’s, we see that
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})\in\mathbb{F}$ and
$\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{l})\in\mathbb{F}[C_{1},C_{2},C_{3}]$.
Hence the results for the Gromov-Witten potentials follow. ∎
Depending on the insertions, we can give a more precise description of the
polynomial ring containing the Gromov-Witten potentials. For example, by
Proposition 4.1 we have
(4.2)
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{1}}\right)\in\mathbb{F}[C_{1}^{-1}]=\mathbb{C}[L^{\pm
1}][A_{1},DA_{1},D^{2}A_{1},A_{2},C_{1}^{-1}]$
and the degree of $C_{1}^{-1}$ in
$\mathcal{F}_{g,n}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{c_{1}},\ldots,\phi_{c_{1}}\right)$
is $n$. Then, we obtain the following result by Lemma 1.2.
###### Corollary 4.3.
For all $k\geq{1}$, we have
$\frac{\partial^{k}\mathcal{F}_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}}{\partial
T^{k}}\in\mathbb{F}[C_{1}^{-1}]=\mathbb{C}[L^{\pm
1}][A_{1},DA_{1},D^{2}A_{1},A_{2},C_{1}^{-1}]$
and the degree of $C_{1}^{-1}$ in
$\frac{\partial^{k}\mathcal{F}_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}}{\partial
T^{k}}$ is $k$.
### 4.2. Proof of holomorphic anomaly equations
###### Lemma 4.4.
We have
$\frac{\partial}{\partial
A_{2}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}}}{\zeta^{(b_{\mathfrak{e}1}+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}+3)\mathrm{p}(\mathfrak{v}_{2})}}.$
###### Proof.
By Proposition 4.1 and Lemma 3.2, we obtain
$\displaystyle\frac{\partial}{\partial
A_{2}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=0}^{b_{\mathfrak{e}2}}(-1)^{m}\sum_{r=0}^{4}\frac{\frac{\partial}{\partial
A_{2}}\left(\widetilde{P}_{\mathrm{Inv}(r),\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m+1}\widetilde{P}_{r,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}\right)}{\zeta^{(b_{\mathfrak{e}1}+m+1+\mathrm{Inv}(r))\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+r)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=0}^{b_{\mathfrak{e}2}}(-1)^{m}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}}{\zeta^{(b_{\mathfrak{e}1}+m+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+3)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle+\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=0}^{b_{\mathfrak{e}2}}(-1)^{m}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m+1}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m-1}}{\zeta^{(b_{\mathfrak{e}1}+m+4)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+2)\mathrm{p}(\mathfrak{v}_{2})}}.$
Since $\widetilde{P}_{i,j}^{k}$ is defined to be $0$ for $k<0$, the second
summation actually ends at $m=b_{\mathfrak{e}2}-1$. Shifting the index of the
second summation by $1$ and cancelling terms in the total expression, we get
$\displaystyle\frac{\partial}{\partial
A_{2}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=0}^{b_{\mathfrak{e}2}}(-1)^{m}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}}{\zeta^{(b_{\mathfrak{e}1}+m+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+3)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle+\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\sum_{m=1}^{b_{\mathfrak{e}2}}(-1)^{m-1}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}}{\zeta^{(b_{\mathfrak{e}1}+m+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+3)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}}}{\zeta^{(b_{\mathfrak{e}1}+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}+3)\mathrm{p}(\mathfrak{v}_{2})}}.$
∎
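The cancellation in the last step of the proof is the elementary alternating-sum identity $\sum_{m=0}^{B}(-1)^{m}f(m)+\sum_{m=1}^{B}(-1)^{m-1}f(m)=f(0)$, since the $m\geq 1$ terms of the two sums cancel pairwise. A quick numerical check, with arbitrary values of $f$ standing in for the $\widetilde{P}$-products (a sketch, not part of the proof):

```python
import random

# For any function f and any B >= 0:
#   sum_{m=0}^{B} (-1)^m f(m) + sum_{m=1}^{B} (-1)^(m-1) f(m) = f(0),
# because each m >= 1 term appears once with each sign.
random.seed(0)
for B in range(6):
    f = [random.uniform(-1, 1) for _ in range(B + 1)]
    total = sum((-1) ** m * f[m] for m in range(B + 1)) \
          + sum((-1) ** (m - 1) * f[m] for m in range(1, B + 1))
    assert abs(total - f[0]) < 1e-12
```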
###### Lemma 4.5.
We have
$\frac{\partial}{\partial(D^{2}A_{1})}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5L^{2}}\sum_{m=0}^{2}(-1)^{m}\frac{\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}+m-2}\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-m}}{\zeta^{(b_{\mathfrak{e}1}+m+2)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}-m+4)\mathrm{p}(\mathfrak{v}_{2})}}.$
###### Proof.
The strategy of the proof is similar to that of Lemma 4.4. The only
differences are that we use Lemma 3.3 instead of Lemma 3.2 and shift one of
the summations by $3$ rather than by $1$. ∎
###### Theorem 4.6 (The first holomorphic anomaly equation).
For $g\geq{2}$, we have
$\frac{C_{3}}{5L}\frac{\partial\mathcal{F}_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}}{\partial
A_{2}}=\frac{1}{2}\mathcal{F}_{g-1,2}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{2},\phi_{2}\right)+\frac{1}{2}\sum_{i=1}^{g-1}\mathcal{F}_{g-i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{2}\right)\mathcal{F}_{i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{2}\right)$
in $\mathbb{F}[C_{1},C_{2},C_{3}]$.
###### Proof.
Let $\Gamma\in\mathrm{G}_{g,0}^{\text{Dec}}(5)$ be a decorated graph and
$\tilde{\mathfrak{e}}\in\mathrm{E}_{\Gamma}$ be an edge of $\Gamma$ connecting
two vertices $\mathfrak{v}_{1}$ and $\mathfrak{v}_{2}$. After deleting the
edge $\tilde{\mathfrak{e}}$, we obtain a new graph. (By deleting, we mean
breaking the edge $\tilde{\mathfrak{e}}$ into two legs
$\mathfrak{l}_{\tilde{\mathfrak{e}}}$ and
$\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}}$.) There are two possibilities
for the graph resulting from the deletion of the edge $\tilde{\mathfrak{e}}$:
* (i)
If it is connected, then we obtain an element of
$\mathrm{G}_{g-1,2}^{\text{Dec}}(5)$, which we denote as
$\Gamma_{\tilde{\mathfrak{e}}}^{0}$. In this case, note that
$|\mathrm{Aut}(\Gamma)|=|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{0})|$.
Note also that there are two possibilities for labeling the two legs, and
there is no canonical choice between them.
* (ii)
If it is disconnected, then the resulting graph has two connected components,
which we denote as
$\Gamma_{\tilde{\mathfrak{e}}}^{1}\in\mathrm{G}_{g_{1},1}^{\text{Dec}}(5)$ and
$\Gamma_{\tilde{\mathfrak{e}}}^{2}\in\mathrm{G}_{g_{2},1}^{\text{Dec}}(5)$
where we have $g=g_{1}+g_{2}$. In this case, for the cardinality of the
automorphism groups of the decorated stable graphs, we have
$|\mathrm{Aut}(\Gamma)|=|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{1})||\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{2})|$
if $\Gamma_{\tilde{\mathfrak{e}}}^{1}\neq\Gamma_{\tilde{\mathfrak{e}}}^{2}$.
For the special case when
$\Gamma_{\tilde{\mathfrak{e}}}^{1}=\Gamma_{\tilde{\mathfrak{e}}}^{2}$, the
graph $\Gamma$ has a $\mathbb{Z}_{2}$-symmetry given by interchanging
$\Gamma_{\tilde{\mathfrak{e}}}^{1}$ and $\Gamma_{\tilde{\mathfrak{e}}}^{2}$.
Hence
$|\mathrm{Aut}(\Gamma)|=2|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{1})|^{2}$.
Putting these together into a single formula, we get
$|\mathrm{Aut}(\Gamma)|=(1+\delta_{\Gamma_{\tilde{\mathfrak{e}}}^{1},\Gamma_{\tilde{\mathfrak{e}}}^{2}})|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{1})||\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{2})|$.
By Proposition 4.1 and Lemma 4.4, we observe that
$\displaystyle\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial
A_{2}}=$
$\displaystyle\frac{(-1)^{b_{\tilde{\mathfrak{e}}1}+b_{\tilde{\mathfrak{e}}2}}}{5}\frac{\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\tilde{\mathfrak{e}}1}}\widetilde{P}_{3,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\tilde{\mathfrak{e}}2}}}{\zeta^{(b_{\tilde{\mathfrak{e}}1}+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\tilde{\mathfrak{e}}2}+3)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle=$ $\displaystyle
5\left(\frac{L^{3}}{K_{3}}\right)^{2}\begin{cases}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})&\text{for
the case (i)},\\\
\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})&\text{for
the case (ii)}\end{cases}$
with
$\ell(\mathfrak{l}_{\tilde{\mathfrak{e}}})=\ell(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})=2$,
i.e., with insertions $\phi_{2}$.
By definition of $K_{3}$ and equation (1.7), we also note that
$\left(\frac{L^{3}}{K_{3}}\right)^{2}=\frac{L}{C_{3}}.$
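Both this identity and the analogous identity $\left(L^{4}/K_{4}\right)^{2}=L^{3}/(C_{2}^{2}C_{3})$ used in the proof of Theorem 4.7 reduce, after substituting $K_{3}=C_{1}C_{2}C_{3}$ and $K_{4}=C_{1}C_{2}^{2}C_{3}$, to the single relation $C_{1}^{2}C_{2}^{2}C_{3}=L^{5}$; equation (1.7) lies outside this section, so we take this relation to be its content, as the two identities jointly suggest. Assuming it, the verification is immediate:

```latex
\left(\frac{L^{3}}{K_{3}}\right)^{2}
  =\frac{L^{6}}{C_{1}^{2}C_{2}^{2}C_{3}^{2}}
  =\frac{L^{6}}{\left(C_{1}^{2}C_{2}^{2}C_{3}\right)C_{3}}
  =\frac{L^{6}}{L^{5}C_{3}}
  =\frac{L}{C_{3}},
\qquad
\left(\frac{L^{4}}{K_{4}}\right)^{2}
  =\frac{L^{8}}{C_{1}^{2}C_{2}^{4}C_{3}^{2}}
  =\frac{L^{8}}{\left(C_{1}^{2}C_{2}^{2}C_{3}\right)C_{2}^{2}C_{3}}
  =\frac{L^{3}}{C_{2}^{2}C_{3}}.
```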
Then, for case (i), we easily see that we have
(4.3)
$\displaystyle\operatorname{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}\left(\phi_{2},\phi_{2}\right)=$
$\displaystyle\frac{1}{|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{0})|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}({\Gamma_{\tilde{\mathfrak{e}}}^{0}})}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{v})\prod_{\mathfrak{e}\in\mathrm{E}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{e})\prod_{\mathfrak{l}\in\mathrm{L}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l})$
$\displaystyle=$
$\displaystyle\frac{1}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\frac{C_{3}}{5L}\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial
A_{2}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\prod_{\begin{subarray}{c}\mathfrak{e}\in\mathrm{E}_{\Gamma}\\\
\mathfrak{e}\neq\tilde{\mathfrak{e}}\end{subarray}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e}).$
Similarly, for case (ii), we observe the following
(4.4)
$\displaystyle\operatorname{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}\left(\phi_{2}\right)$
$\displaystyle\operatorname{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}\left(\phi_{2}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{1})|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}({\Gamma_{\tilde{\mathfrak{e}}}^{1}})}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{v})\prod_{\mathfrak{e}\in\mathrm{E}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{e})$
$\displaystyle\times$
$\displaystyle\frac{1}{|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{2})|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}({\Gamma_{\tilde{\mathfrak{e}}}^{2}})}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})\prod_{v\in\mathrm{V}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{v})\prod_{\mathfrak{e}\in\mathrm{E}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{e})$
$\displaystyle=$
$\displaystyle\frac{(1+\delta_{\Gamma_{\tilde{\mathfrak{e}}}^{1},\Gamma_{\tilde{\mathfrak{e}}}^{2}})}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\frac{C_{3}}{5L}\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial
A_{2}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\prod_{\begin{subarray}{c}\mathfrak{e}\in\mathrm{E}_{\Gamma}\\\
\mathfrak{e}\neq\tilde{\mathfrak{e}}\end{subarray}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e}).$
By Lemma 2.1 and Theorem 4.2, we have the following vanishing result:
$\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})}{\partial
A_{2}}=0$
for any vertex $\mathfrak{v}\in\mathrm{V}_{\Gamma}$.
Then, this vanishing result gives us the following
$\displaystyle\frac{\partial\mathrm{Cont}_{\Gamma}}{\partial A_{2}}=$
$\displaystyle\frac{1}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\frac{\partial}{\partial
A_{2}}\left(\prod_{\mathfrak{e}\in\mathrm{E}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})\right)$
$\displaystyle=$
$\displaystyle\sum_{\tilde{\mathfrak{e}}\in\mathrm{E}_{\Gamma}}\frac{1}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial
A_{2}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\prod_{\begin{subarray}{c}\mathfrak{e}\in\mathrm{E}_{\Gamma}\\\
\mathfrak{e}\neq\tilde{\mathfrak{e}}\end{subarray}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e}).$
So, we have
(4.5) $\frac{C_{3}}{5L}\frac{\partial\mathrm{Cont}_{\Gamma}}{\partial
A_{2}}=\sum_{\tilde{\mathfrak{e}}\in\mathrm{E}_{\Gamma}}\frac{1}{|\mathrm{Aut}(\Gamma)|}\sum_{\mathrm{A}\in\mathbb{Z}_{\geq
0}^{\mathrm{F}(\Gamma)}}\frac{C_{3}}{5L}\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial
A_{2}}\prod_{\mathfrak{v}\in\mathrm{V}_{\Gamma}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{v})\prod_{\begin{subarray}{c}\mathfrak{e}\in\mathrm{E}_{\Gamma}\\\
\mathfrak{e}\neq\tilde{\mathfrak{e}}\end{subarray}}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e}).$
Then, summing equation (4.3) and equation (4.4) over all decorated stable
graphs $\Gamma_{\tilde{\mathfrak{e}}}^{0}$ and
$(\Gamma_{\tilde{\mathfrak{e}}}^{1},\Gamma_{\tilde{\mathfrak{e}}}^{2})$ we
obtain
$\left\langle\left\langle\phi_{2},\phi_{2}\right\rangle\right\rangle_{g-1,2}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\quad\text{and}\quad\sum_{i=1}^{g-1}\left\langle\left\langle\phi_{2}\right\rangle\right\rangle_{g-i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left\langle\left\langle\phi_{2}\right\rangle\right\rangle_{i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}$
respectively. Then, by equation (4.5) we conclude that we have
$2\frac{C_{3}}{5L}\frac{\partial}{\partial
A_{2}}\left\langle\left\langle\right\rangle\right\rangle_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}=\left\langle\left\langle\phi_{2},\phi_{2}\right\rangle\right\rangle_{g-1,2}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}+\sum_{i=1}^{g-1}\left\langle\left\langle\phi_{2}\right\rangle\right\rangle_{g-i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left\langle\left\langle\phi_{2}\right\rangle\right\rangle_{i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}$
after summing over all decorated stable graphs $\Gamma$. The factor of $2$ on
the left hand side arises because there is no canonical order for the
labelings of the legs $\mathfrak{l}_{\tilde{\mathfrak{e}}}$ and
$\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}}$ in case (i), and because of the
double counting of the genera of the connected components555Note that we count
the special case
$\Gamma_{\tilde{\mathfrak{e}}}^{1}=\Gamma_{\tilde{\mathfrak{e}}}^{2}$ once;
this can only occur when $g$ is even. Although there is no double-counting
issue in this special case, the automorphism group of $\Gamma$ then has
cardinality $2|\mathrm{Aut}(\Gamma_{\tilde{\mathfrak{e}}}^{1})|^{2}$. in case
(ii). This completes the proof. ∎
###### Theorem 4.7 (The second holomorphic anomaly equation).
For $g\geq{2}$, we have
$\frac{C_{2}^{2}C_{3}}{5L^{3}}\frac{\partial\mathcal{F}_{g}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}}{\partial(D^{2}A_{1})}=\frac{1}{2}\mathcal{F}_{g-1,2}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{1},\phi_{1}\right)+\frac{1}{2}\sum_{i=1}^{g-1}\mathcal{F}_{g-i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{1}\right)\mathcal{F}_{i,1}^{\left[\mathbb{C}^{5}/\mathbb{Z}_{5}\right]}\left(\phi_{1}\right)$
in $\mathbb{F}[C_{1},C_{2},C_{3}]$.
###### Proof.
The proof is similar to that of Theorem 4.6, with some technical differences.
Instead of giving full details, we point out these differences as they arise.
Throughout the proof, let $\Gamma$, $\tilde{\mathfrak{e}}$,
$\mathfrak{v}_{1}$, $\mathfrak{v}_{2}$, $\mathfrak{l}_{\tilde{\mathfrak{e}}}$,
$\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}}$,
$\Gamma_{\tilde{\mathfrak{e}}}^{0}$, $\Gamma_{\tilde{\mathfrak{e}}}^{1}$,
$\Gamma_{\tilde{\mathfrak{e}}}^{2}$, “case (i)” and “case (ii)” be as in the
proof of Theorem 4.6.
By Proposition 4.1 and Lemma 4.5, we have
(4.6)
$\displaystyle\frac{\partial}{\partial(D^{2}A_{1})}\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\mathfrak{e})=$
$\displaystyle\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5L^{2}}\frac{\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}-2}\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}}}{\zeta^{(b_{\mathfrak{e}1}+2)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}+4)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle-\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5L^{2}}\frac{\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}-1}\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-1}}{\zeta^{(b_{\mathfrak{e}1}+3)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}+3)\mathrm{p}(\mathfrak{v}_{2})}}$
$\displaystyle+\frac{(-1)^{b_{\mathfrak{e}1}+b_{\mathfrak{e}2}}}{5L^{2}}\frac{\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{1})}^{b_{\mathfrak{e}1}}\widetilde{P}_{4,\mathrm{p}(\mathfrak{v}_{2})}^{b_{\mathfrak{e}2}-2}}{\zeta^{(b_{\mathfrak{e}1}+4)\mathrm{p}(\mathfrak{v}_{1})}\zeta^{(b_{\mathfrak{e}2}+2)\mathrm{p}(\mathfrak{v}_{2})}},$
where the right-hand side of this equation is equal to
$5\left(\frac{L^{4}}{K_{4}}\right)^{2}\left(\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}_{b_{\mathfrak{e}1}-2}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})-\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}_{b_{\mathfrak{e}1}-1}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}_{b_{\mathfrak{e}2}-1}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})+\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}_{b_{\mathfrak{e}2}-2}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})\right)$
for the case (i), and it is equal to
$5\left(\frac{L^{4}}{K_{4}}\right)^{2}\left(\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}_{b_{\mathfrak{e}1}-2}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})-\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}_{b_{\mathfrak{e}1}-1}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}_{b_{\mathfrak{e}2}-1}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})+\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}_{b_{\mathfrak{e}2}-2}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})\right)$
for the case (ii), with
$\ell(\mathfrak{l}_{\tilde{\mathfrak{e}}})=\ell(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})=1$,
i.e., with insertions $\phi_{1}$ for both cases. Here, by
$\mathrm{A}_{b_{\mathfrak{e}s}-r}$ we mean the flag value $b_{\mathfrak{e}s}$
is shifted by $r$ in $\mathrm{A}\in\mathbb{Z}_{\geq 0}^{\mathrm{F}(\bullet)}$
where $\bullet$ is $\Gamma_{\tilde{\mathfrak{e}}}^{0}$,
$\Gamma_{\tilde{\mathfrak{e}}}^{1}$ or $\Gamma_{\tilde{\mathfrak{e}}}^{2}$.
Since $\widetilde{P}_{i,j}^{k}=0$ for $k<0$ and shifting
$b_{\mathfrak{e}1}+b_{\mathfrak{e}2}$ by $2$ does not change signs in equation
(4.6), we can view equation (4.6) as
(4.7)
$\displaystyle\frac{\partial\mathrm{Cont}_{\Gamma}^{\mathrm{A}}(\tilde{\mathfrak{e}})}{\partial(D^{2}A_{1})}=$
$\displaystyle
5\left(\frac{L^{4}}{K_{4}}\right)^{2}\begin{cases}\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{0}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})&\text{for
the case (i)},\\\
\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{1}}^{\mathrm{A}}(\mathfrak{l}_{\tilde{\mathfrak{e}}})\mathrm{Cont}_{\Gamma_{\tilde{\mathfrak{e}}}^{2}}^{\mathrm{A}}(\mathfrak{l}^{\prime}_{\tilde{\mathfrak{e}}})&\text{for
the case (ii)}\end{cases}$
while following the proof strategy of Theorem 4.6.
By definition of $K_{4}$ and equation (1.7), we also note that
$\left(\frac{L^{4}}{K_{4}}\right)^{2}=\frac{L^{3}}{C_{2}^{2}C_{3}}.$
The rest of the proof is a straightforward adaptation of that of Theorem 4.6. ∎
## References
* [1] D. Abramovich, T. Graber, A. Vistoli, Gromov-Witten theory of Deligne-Mumford stacks, Amer. J. Math. 130 (2008), no. 5, 1337–1398.
* [2] T. Coates, A. Corti, H. Iritani, H.-H. Tseng, Computing Genus-Zero Twisted Gromov-Witten Invariants, Duke Math. J. 147 (2009), no.3, 377–438.
* [3] A. Givental, Elliptic Gromov-Witten invariants and the generalized mirror conjecture, in: “Integrable systems and algebraic geometry (Kobe/Kyoto, 1997)”, 107–155, World Sci. Publ., River Edge, NJ, 1998.
* [4] A. Givental, Symplectic geometry of Frobenius structures, in: “Frobenius manifolds”, Aspects Math., E36, 91–112, Friedr. Vieweg, Wiesbaden, 2004.
* [5] T. Graber, R. Pandharipande, Localization of virtual classes, Invent. Math. 135 (1999), 487–518.
* [6] M. Kontsevich, Y. Manin, Gromov-Witten classes, quantum cohomology, and enumerative geometry, Comm. Math. Phys. 164 (1994), 525–562.
* [7] Y.-P. Lee, R. Pandharipande, Frobenius manifolds, Gromov-Witten theory, and Virasoro constraints, manuscript available from the authors’ websites.
* [8] H. Lho, Crepant resolution conjecture for $\mathbb{C}^{5}/\mathbb{Z}_{5}$, arXiv:1707.02910.
* [9] H. Lho, R. Pandharipande, Stable quotients and the holomorphic anomaly equation, Adv. Math. 332 (2018), 349–402.
* [10] H. Lho, R. Pandharipande, Crepant resolution and the holomorphic anomaly equation for $[\mathbb{C}^{3}/\mathbb{Z}_{3}]$, Proc. London Math. Soc. (3) 119 (2019), 781–813.
* [11] R. Pandharipande, Cohomological field theory calculations, Proceedings of the ICM (Rio de Janeiro 2018), Vol 1, 869–898, World Sci. Publications: Hackensack, NJ, 2018.
* [12] R. Pandharipande, A. Pixton, D. Zvonkine, Relations on $\overline{M}_{g,n}$ via $3$-spin structures, J. Amer. Math. Soc. 28 (2015), 279–309.
* [13] R. Pandharipande, H.-H. Tseng, Higher genus Gromov-Witten theory of $\mathsf{Hilb}^{n}(\mathbb{C}^{2})$ and $\mathsf{CohFTs}$ associated to local curves, Forum of Mathematics, Pi (2019), Vol. 7, e4, 63 pages, arXiv:1707.01406.
* [14] C. Teleman, The structure of 2D semi-simple field theories, Invent. Math. 188 (2012), 525–588.
* [15] H.-H. Tseng, Orbifold quantum Riemann-Roch, Lefschetz and Serre, Geom. Topol. 14 (2010), 1–81.
* [16] D. Zagier, A. Zinger, Some properties of hypergeometric series associated with mirror symmetry, In: “Modular forms and string duality”, 163–177, Fields Inst. Commun. 54, AMS 2008.
# Approximating Martingale Process for Variance Reduction
in Deep Reinforcement Learning with Large State Space
Charlie Ruan Department of Computer Science, Cornell University;
School of Operations Research and Information Engineering, Cornell University
(November 2022)
###### Abstract
The Approximating Martingale Process (AMP) has proven effective for variance
reduction in reinforcement learning (RL) in specific settings such as
Multiclass Queueing Networks. In the cases studied so far, however, the state
space is relatively small, and all possible state transitions can be
enumerated. In this paper, we consider systems whose state space is large and
whose state transitions carry uncertainty, thereby extending AMP to a more
general variance-reduction method in RL. Specifically, we investigate the
application of AMP to ride-hailing systems such as Uber, where Proximal Policy
Optimization (PPO) is used to optimize the policy for matching drivers and
customers.
## I Introduction
Ride-hailing services, such as Uber, Lyft, and Didi Chuxing, have become a
popular stochastic-process problem in operations research. Approximating the
optimal policy for matching drivers and customers in real time is especially
difficult due to the large state space and the combinatorial nature of the
problem. In Feng et al. (2021), the authors consider a Markov decision process
(MDP) model of a ride-hailing service system and, to cope with the large
action space, innovatively decompose the MDP actions by assigning tasks to
available drivers sequentially. The reinforcement learning algorithm proximal
policy optimization (PPO) Schulman et al. (2017) is then adopted to optimize
the ride-hailing system's control policy.
On the other hand, Multiclass Queueing Networks (MQNs) are a special class of
stochastic processing networks, a classic problem in operations research. To
find the optimal control policy of such a network, Dai and Gluzman (2022)
formulated the MQN problem with Poisson arrivals and exponential service times
as an MDP, also using the reinforcement learning algorithm PPO from Schulman
et al. (2017) to optimize the network's policy. However, a naive
implementation of PPO does not perform well when the network experiences heavy
traffic, which causes high variance. Dai and Gluzman (2022) thus adopt several
variance-reduction techniques, among which is the Approximating Martingale
Process method from Henderson (1997), demonstrated to be an essential part of
the final optimization algorithm.
In this paper, we generalize the Approximating Martingale Process (AMP) method
as a variance-reduction technique in reinforcement learning, specifically for
systems with a large state space and, hence, uncertainty in state transitions.
We consider the ride-hailing service system as our running example.
Intuitively, if the AMP method reduces variance effectively in the ride-hailing
context, then the number of Monte Carlo episodes that must be rolled out for
each policy iteration will decrease.
## II Formulating AMP in Ride-Hailing
In this section, we formulate the use of AMP in the ride-hailing problem under
the setup of Feng et al. (2021) by imitating the approach in Section 4.2 in
Dai and Gluzman (2022).
In Feng et al. (2021), the authors define a value function
$V_{\pi}:S\rightarrow\mathbb{R}$ of policy $\pi$:
$V_{\pi}(s_{t,i}):=\mathbb{E}_{\pi}\biggl{[}\sum_{k=i}^{I_{t}}c(s_{t,k},a_{t,k})+\sum_{j=t+1}^{H}\sum_{k=1}^{I_{j}}c(s_{j,k},a_{j,k})\biggr{]},$
(1)
for each epoch $t=1,...,H$, step $i=1,...,I_{t}$, and $s_{t,i}\in S$, where
$I_{t}$ is the number of available cars at epoch $t$, $H$ is the number of
minutes in a working day (episode), and $c(s,a)$ is the reward for taking
action $a$ at state $s$, following the scheme in Feng et al. (2021).
Then, with the roll-outs collected by Monte Carlo simulation with policy
$\pi$, we can estimate the value function $V_{\pi}$ as
$\hat{V}_{t,i,k}:=\sum_{j=i}^{I_{t,k}}c(s_{t,j,k},a_{t,j,k})+\sum_{l=t+1}^{H}\sum_{j=1}^{I_{l,k}}c(s_{l,j,k},a_{l,j,k}),$
(2)
which is a one-replication estimate of the value function $V(s_{t,i,k})$ at
state $s_{t,i,k}$, visited at epoch $t$, episode $k$, after $i-1$ steps of the
“sequential decision making process” (SDM process).
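Concretely, for one fixed episode $k$, the estimator (2) is a plain double sum over the stored per-step rewards. A minimal sketch, assuming the rewards are held in a ragged list `c[t][j]` (epoch $t$, SDM step $j$, 0-indexed, whereas the paper indexes epochs $1,\ldots,H$ and steps $1,\ldots,I_t$); the names below are illustrative, not from Feng et al. (2021):

```python
def v_hat(c, t, i):
    """One-replication estimate of the value function at epoch t, step i.

    c : list of lists; c[t][j] is the reward c(s_{t,j}, a_{t,j}) collected
        at epoch t, step j of the sequential-decision-making process.
    """
    # Remaining rewards within the current epoch t ...
    tail_of_epoch = sum(c[t][i:])
    # ... plus all rewards accrued in the later epochs t+1, ..., H-1.
    later_epochs = sum(sum(c[l]) for l in range(t + 1, len(c)))
    return tail_of_epoch + later_epochs

# Toy check: 3 epochs with 2, 3, 1 SDM steps respectively.
c = [[1.0, 2.0], [0.5, 0.5, 1.0], [4.0]]
assert v_hat(c, 0, 0) == 9.0   # all rewards in the episode
assert v_hat(c, 1, 1) == 5.5   # 0.5 + 1.0 from epoch 1, plus 4.0
assert v_hat(c, 2, 0) == 4.0
```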
So far, we have been revisiting the formulation in Feng et al. (2021). We
notice that Equation (2) may suffer from large variance as it is the sum of
many random terms $c(s,a)$, the number of which depends on the step and epoch.
Therefore, we focus on decreasing the magnitude of summands in Equation (2) by
following Section 4.2 in Dai and Gluzman (2022). We first notice that the
value function (1) is a solution to the Bellman Equation
$V_{\pi}(s_{t,i})=\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,i})V_{\pi}(s^{\prime})+\mathbb{E}_{\pi(s_{t,i})}\biggl{[}c(s_{t,i},a_{t,i})\biggr{]},$
(3)
where $P_{\pi}(s^{\prime}|s_{t,i})$ is the probability of transitioning from
state $s_{t,i}$ to state $s^{\prime}$ under policy $\pi$.
Assuming that an episode
$\\{(s_{t,1},a_{t,1}),(s_{t,2},a_{t,2}),...,(s_{t,I_{t}},a_{t,I_{t}})\\}_{t=1}^{H}$
is collected under policy $\pi$, then from the definition of Bellman Equation
(3):
$\mathbb{E}_{\pi(s_{t,i})}\biggl{[}c(s_{t,i},a_{t,i})\biggr{]}=V_{\pi}(s_{t,i})-\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,i})V_{\pi}(s^{\prime})$
Assuming that an approximation $\zeta$ of the value function $V_{\pi}$ is
available and sufficiently close, the correlation between
$\mathbb{E}_{\pi(s_{t,i})}\biggl{[}c(s_{t,i},a_{t,i})\biggr{]}\text{ and
}\zeta(s_{t,i})-\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,i})\zeta(s^{\prime})$
is positive, and the latter can be exploited as a control variate to reduce the
variance. Further following Dai and Gluzman (2022), we consider the martingale
process in Henderson (1997)
$M(s_{t,i,k})=\zeta(s_{t,i,k})+\sum_{j=i}^{I_{t,k}}\biggl{[}\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,j,k})\zeta(s^{\prime})-\zeta(s_{t,j,k})\biggr{]}+\sum_{l=t+1}^{H}\sum_{j=1}^{I_{l,k}}\biggl{[}\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{l,j,k})\zeta(s^{\prime})-\zeta(s_{l,j,k})\biggr{]}.$
(4)
Adding $M$ to estimator (2), we get the AMP estimator of the solution to the
value function:
$\displaystyle\hat{V}^{AMP(\zeta)}(s_{t,i,k}):=\zeta(s_{t,i,k})+$
$\displaystyle\sum_{j=i}^{I_{t,k}}\biggl{[}c(s_{t,j,k},a_{t,j,k})+\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,j,k})\zeta(s^{\prime})-\zeta(s_{t,j,k})\biggr{]}+$
$\displaystyle\sum_{l=t+1}^{H}\sum_{j=1}^{I_{l,k}}\biggl{[}c(s_{l,j,k},a_{l,j,k})+\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{l,j,k})\zeta(s^{\prime})-\zeta(s_{l,j,k})\biggr{]}.$
(5)
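The AMP estimator (5) can be sketched in the same style. Here `exp_zeta_next[l][j]` stands in for $\sum_{s^{\prime}\in S}P_{\pi}(s^{\prime}|s_{l,j})\zeta(s^{\prime})$; all names and the nested-list layout are illustrative assumptions, not the authors' implementation:

```python
def amp_value_estimate(zeta, exp_zeta_next, rewards, t, i):
    """AMP estimate of Eq. (5): zeta(s_{t,i}) plus c + E[zeta(s')] - zeta(s)
    accumulated over every remaining step (l, j) of the episode (0-based)."""
    total = zeta[t][i]
    for l in range(t, len(rewards)):
        start = i if l == t else 0
        for j in range(start, len(rewards[l])):
            total += rewards[l][j] + exp_zeta_next[l][j] - zeta[l][j]
    return total
```

Note that with deterministic transitions and exact one-step lookahead the correction terms telescope, leaving the plain reward sum unchanged; the benefit appears under random transitions, where the corrections cancel transition noise, and the closer $\zeta$ is to $V_{\pi}$, the smaller the summands and hence the variance.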
However, we immediately notice that $\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s)\zeta(s^{\prime})$ is not trivial to compute for
certain states $s$. We first break down the term
$P_{\pi}(s^{\prime}|s)\zeta(s^{\prime})$:
$\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s)\zeta(s^{\prime})=\sum_{s^{\prime}\in S}\sum_{a\in
A}\pi(a|s)P(s^{\prime}|s,a)\zeta(s^{\prime}),$ (6)
where $P(s^{\prime}|s,a)$ depends solely on the system's dynamics. There are a
total of three cases to consider: $(a)$ when $s$ is not the last state in an
SDM, $(b)$ when $s$ is the last state in an SDM and also in the last epoch of
the episode, and $(c)$ when $s$ is the last state in an SDM but not in the last
epoch of the episode.
In the first two cases, $P(s^{\prime}|s,a)$ is trivial because the state
transition is deterministic given the action. In the last case, however, when
$s$ is the last state in the SDM, passenger arrivals are the source of
randomness in the system's transition, and it is computationally infeasible to
iterate through all possible passenger-arrival combinations.
This leads to the problem that this paper explores: whether the Approximating
Martingale Process is still effective when the state space is intractably
large, in particular when the state transitions are impossible to iterate
through.
## III A Sampling-Based Estimated AMP in Queueing Networks
With the problem introduced, one intuitive approach is to approximate the AMP
by sampling $L$ next states $s^{\prime}$, since we cannot iterate through all
possible $s^{\prime}$. In this section, we revisit the Multiclass Queueing
Network (MQN) of Dai and Gluzman (2022) and apply such a sampling-based
estimated AMP. Even though the MQN case does not suffer from an intractable
number of possible transitions, we can test whether such an AMP estimator works
by comparing its performance with that of the original one. We therefore start
by modifying the original AMP estimator into a sampling-based estimated AMP.
We use Algorithm 2 in Section 4.2 of Dai and Gluzman (2022), which does not
involve discounting, since we are not applying discounting in the ride-hailing
case. The MQN version of AMP estimator of the solution to the Poisson equation
(analogous to Bellman equation in ride-hailing) is:
$\hat{h}_{\eta}^{AMP(\zeta)}(x^{(k)}):=\zeta(x^{(k)})+\sum_{t=k}^{\sigma_{k}-1}\biggl{(}g(x^{(t)})-\widehat{\mu_{\eta}^{T}g}+\sum_{y\in\chi}P_{\eta}(y|x^{(t)})\zeta(y)-\zeta(x^{(t)})\biggr{)},$
(7)
for each state $x^{(k)}$, where $k=1,...,\sigma(N)$.
Now, assuming that we do not have access to the exact
$\sum_{y\in\chi}P_{\eta}(y|x^{(t)})\zeta(y)$ because $\chi$ is impossible to
iterate through, we formulate the sampling-based estimated AMP as follows:
$\sum_{y\in\chi}P_{\eta}(y|x^{(t)})\zeta(y)=\sum_{y\in\chi}\sum_{a\in
A}\eta(a|x^{(t)})P(y|x^{(t)},a)\zeta(y)\approx\sum_{a\in
A}\eta(a|x^{(t)})\frac{1}{L}\sum_{l=1}^{L}\zeta(y_{l}),$ (8)
where $y_{l}$ is determined by the current state $x^{(t)}$, the action $a$,
and, most importantly, $activity_{l}$, the sampled state transition of the
system. In the case of the Criss-Cross Network in Dai and Gluzman (2022), there
are a total of 5 possible transitions: the arrival of job 1 or job 2, and the
completion of job 1, job 2, or job 3.
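A sampling-based approximation of Equation (8) can be sketched as follows; `policy`, `sample_next_state`, and the dictionary-based action distribution are illustrative assumptions, not the authors' interfaces:

```python
def sampled_expected_zeta(zeta, policy, sample_next_state, state, L):
    """Estimate sum_y P_eta(y|x) zeta(y) per Eq. (8): for each action,
    average zeta over L sampled next states, then weight by the policy."""
    total = 0.0
    for action, prob in policy(state).items():
        # Monte-Carlo average of zeta over L sampled transitions
        mean_zeta = sum(
            zeta(sample_next_state(state, action)) for _ in range(L)
        ) / L
        total += prob * mean_zeta
    return total
```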
Figure 1: Comparison of learning curves (with $95\%$ confidence intervals) for
different sample sizes $L$ in Equation (8), all using Algorithm 2 without
discounting, on the Criss-Cross Network under different traffic regimes.
(a) Imbalanced low (IL) traffic
(b) Imbalanced medium (IM) traffic
(c) Balanced low (BL) traffic
(d) Balanced medium (BM) traffic
As shown in Figure 1d, the larger the sample size $L$, the faster the
convergence, as expected. Looking at Figure 2d, we observe that the estimated
AMP with a sample size of $L=500$ performs very similarly to the original AMP
(Algorithm 2 in Dai and Gluzman (2022)). We can therefore conclude that a
sampling-based estimated AMP remains effective. However, the Criss-Cross
network has only 5 possible “activities”, while the ride-hailing service
system has many more because of all the possible passenger-arrival
combinations.
Figure 2: Comparison of learning curves (with $95\%$ Confidence Interval)
among Algorithm 1 (no AMP), original Algorithm 2, and Algorithm 2 using
estimated AMP with a sample size of $L=500$ on the Criss-Cross Network with
different traffic regimes.
(a) Imbalanced low (IL) traffic
(b) Imbalanced medium (IM) traffic
(c) Balanced low (BL) traffic
(d) Balanced medium (BM) traffic
## IV Current Results and Future Work
Since the performance of the Approximating Martingale Process on the
ride-hailing system is still inconclusive, this section documents the current
progress and the issues observed, along with suggestions for future efforts.
Following a similar sampling formulation, we expand the $\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s)\zeta(s^{\prime})$ term in Equation (5):
$\sum_{s^{\prime}\in
S}P_{\pi}(s^{\prime}|s_{t,i})\zeta(s^{\prime}):=\begin{cases}\sum_{a\in
A}\pi(a|s_{t,i})\frac{1}{L}\sum_{l=1}^{L}\zeta(s^{\prime}_{l}),\text{ if
}i=I_{t},t\neq H,\\\ \sum_{s^{\prime}\in S}\sum_{a\in
A}\pi(a|s_{t,i})P(s^{\prime}|s_{t,i},a)\zeta(s^{\prime}),\text{
otherwise,}\end{cases}$ (9)
where epoch $t=1,...,H$, step $i=1,...,I_{t}$, and the sampled next state
$s^{\prime}_{l}$ is determined by the current state $s_{t,i}$, the action $a$,
and the sampled passenger arrivals. In addition, following Dai and Gluzman
(2022), we use the value neural network fitted in the previous policy
iteration (i.e., $f_{\psi_{i-1}}$) as $\zeta$.
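The case split in Equation (9) can be sketched as below. The action dependence is elided for brevity, and every name (`next_state_fn`, `sample_arrivals`, and so on) is an illustrative placeholder rather than part of the actual implementation:

```python
def expected_zeta(zeta, state, t, i, I_t, H, next_state_fn, sample_arrivals, L):
    """Piecewise evaluation mirroring Equation (9); names are placeholders."""
    if i == I_t and t != H:
        # Last step of the SDM process but not the last epoch: the next state
        # depends on random passenger arrivals, so average zeta over L samples.
        return sum(
            zeta(next_state_fn(state, arrivals=sample_arrivals()))
            for _ in range(L)
        ) / L
    # Otherwise the transition is deterministic given the action (elided here),
    # so a single evaluation suffices.
    return zeta(next_state_fn(state, arrivals=None))
```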
As shown in Figure 3, this implementation currently performs poorly. For both
$L=50$ and $L=500$, the matching rates do not increase monotonically as they
should, and the value neural network's loss skyrockets after a certain number
of epochs, unlike the no-AMP counterpart.
Based on experimental results, the way we normalize the input and output
before training the value neural network plays a large role in the performance.
In Figure 3, we normalize both the input (the states $s$) and the output
(Equation (5)). However, as shown in Figure 4, if we normalize only the input
and not the output, we observe better performance, though still worse than in
the no-AMP case. The rationale behind this observation is still unclear, and
we cannot yet exclude the possibility of an error hidden somewhere in the
implementation of the estimated AMP. Future work should therefore determine
whether the poor performance indeed signifies that AMP does not work on
ride-hailing systems (possibly due to the large number of possible
transitions, as opposed to only five in the Criss-Cross Network), or whether
an error in the formulation or implementation causes it.
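The two normalization schemes can be sketched with plain z-scoring helpers (names are illustrative, and the stated motivation is our own assumption): standardizing the regression targets forces an extra de-standardization step on the network's predictions before they re-enter Equation (9), whereas input-only normalization leaves the outputs on the raw scale.

```python
def standardize(values):
    """Return z-scored values plus the mean/std needed to invert the mapping."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against zero variance
    return [(v - mean) / std for v in values], mean, std

def destandardize(values, mean, std):
    """Invert `standardize`, mapping predictions back to the raw scale."""
    return [v * std + mean for v in values]

targets = [1.0, 2.0, 3.0]
y_norm, y_mean, y_std = standardize(targets)   # "normalize output" scheme
recovered = destandardize(y_norm, y_mean, y_std)
```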
Figure 3: Comparison between no AMP and estimated AMP with two different
sample sizes, with value neural network loss and car matching rates as the
metrics. The experiments were run with 500 cars, $H=360$ minutes, and $K=300$
episodes. Figure 4: Comparison between normalizing both the input (states $s$)
and the output (Equation (5)) and normalizing the input only, with value
neural network loss and car matching rates as the metrics. The experiments
were run with 500 cars, $H=360$ minutes, and $K=300$ episodes.
## V Experimental Details
In this section, we record relevant details about running the experiments.
While the original paper Feng et al. (2021) runs with 1000 cars and 300
episodes per policy iteration, the results in Section IV in this paper run
with 500 cars and 300 episodes for efficiency, using Amazon Web Service’s EC2
instance r6i.16xlarge.
Table 1 compares the time it takes to run one iteration without AMP and with
AMP with a sample size of $L=50$. In the actual implementation, we sample the
next states during the Monte-Carlo simulation, which explains the time
difference in the Simulation column. The difference in Data Preprocessing is
due to the extra computation required by Equation (9). On the other hand, the
sampling-based estimated AMP does not add overhead during neural network
Training. Note that Data Preprocessing accounts for the computation of
Equation (5) and the advantage function.
Now we discuss the preferred configuration for the cloud instance to carry out
the ride-hailing task. There are three general types presented by Amazon Web
Service’s EC2: memory-optimized instances (e.g. r6i.16xlarge), instances with
high-performance GPUs (e.g. p3.8xlarge), and instances with medium-performance
GPU but more CPU and RAM (e.g. g5.16xlarge). The configurations and prices for
each instance are recorded in Table 2.
Table 1: Comparison of the time it takes for one iteration between no AMP and running sampling-based AMP. Both ran on r6i.16xlarge with 500 cars and 300 episodes.

Task | Simulation (mins) | Data Preprocessing (mins) | NN Training (mins) | Total Time (mins)
---|---|---|---|---
no AMP | 3.39 | 2.58 | 14.97 | 20.94
AMP with $L=50$ | 9.26 | 3.69 | 14.21 | 27.16
Table 2: Comparison of AWS EC2 instances that we consider for the ride-hailing task. While p3 instances use NVIDIA Tesla V100 GPUs and Broadwell E5-2686 v4 CPUs, g5 instances use NVIDIA A10G Tensor Core GPUs and 2nd-generation AMD EPYC CPUs, and r6i uses Ice Lake 8375C CPUs. Prices are recorded as of November 2022.

Instance | vCPUs | Mem (GiB) | GPUs | GPU Memory (GiB) | Price (USD/hr)
---|---|---|---|---|---
r6i.16xlarge | 64 | 512 | N/A | N/A | 4.032
r6i.24xlarge | 96 | 768 | N/A | N/A | 6.048
g5.16xlarge | 64 | 256 | 1 | 24 | 4.096
g5.24xlarge | 96 | 384 | 4 | 96 | 8.144
p3.8xlarge | 32 | 244 | 4 | 64 | 12.24
p3.16xlarge | 64 | 488 | 8 | 128 | 24.48
Table 3: Comparison of the time it takes for one iteration between r6i.16xlarge and g5.16xlarge, both run without AMP. We used 300 cars and 100 episodes since g5.16xlarge does not have enough RAM to finish a 500-car, 300-episode task.

EC2 Instance | Simulation (mins) | Data Preprocessing (mins) | NN Training (mins) | Total Time (mins)
---|---|---|---|---
r6i.16xlarge | 0.63 | 0.40 | 2.32 | 3.35
g5.16xlarge | 5.38 | 0.32 | 6.78 | 12.48
Table 3 compares the time it takes to run one iteration of the no-AMP task on
r6i.16xlarge (CPUs only) versus g5.16xlarge (1 GPU, but less memory).
Surprisingly, with a GPU we not only get slower simulation (since simulation
is a sequential and hence CPU-intensive workload) but also significantly
slower neural network training. This is probably because g5.16xlarge's GPU has
only 24 GiB of memory: tensor computation must take place on the GPU to
benefit from its speed-up, and a small amount of GPU memory can slow things
down. When running with 500 cars and 100 episodes (not even 300), we even run
into out-of-memory (OOM) problems on g5.16xlarge's GPU during neural network
fitting.
One may then think that we should use instances like p3.8xlarge to benefit
from the higher GPU memory. However, even though GPUs may speed up neural
network training, the number of vCPUs and the amount of RAM should be the
priority because of the nature of our task. We need to simulate 300 episodes,
so the more vCPUs, the more parallel we can be. Moreover, 300 episodes of data
is not trivial: we run into out-of-memory (OOM) issues (CPU RAM in this case)
on g5.16xlarge during simulation, which is why Table 3 uses 300 cars and 100
episodes.
Upgrading to g5.24xlarge or p3.16xlarge is another option, but AWS requires
users to have sufficient historical usage of lower-tier instances before
granting access to these. Moreover, we cannot guarantee that the 384 GiB of
memory on g5.24xlarge would be enough for a 500-car, 300-episode task
(r6i.16xlarge has 512 GiB). Note that to run experiments with 1000 cars,
rather than 500, with 300 episodes, we need at least r6i.24xlarge; otherwise
r6i.16xlarge might also run into out-of-memory problems.
The ideal cloud instance for the ride-hailing task would have vCPUs matching
the calibre of r6i.16xlarge, along with GPUs with enough memory on top of that
(perhaps the four from the p3 instances). Unfortunately, no such instance
exists to my knowledge; we therefore use CPU-only instances to carry out the
experiments.
###### Acknowledgements.
First and foremost, I would like to thank Professor Jim Dai for this precious
research opportunity and his exceptional guidance. Furthermore, the current
progress would not have been possible without the help of Jiekun Feng and Mark
Gluzman along the way. Finally, the generous funding from the School of
Operations Research and Information Engineering at Cornell University was
crucial for securing the computational resources.
## References
* Feng et al. (2021) J. Feng, M. Gluzman, and J. G. Dai, IEEE Control Systems Letters 5, 2060 (2021), URL https://doi.org/10.1109/lcsys.2020.3046995.
* Schulman et al. (2017) J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, _Proximal policy optimization algorithms_ (2017), URL https://arxiv.org/abs/1707.06347.
* Dai and Gluzman (2022) J. G. Dai and M. Gluzman, Stochastic Systems 12, 30 (2022), URL https://doi.org/10.1287/stsy.2021.0081.
* Henderson (1997) S. G. Henderson, Ph.D. thesis, Department of Operations Research, Stanford University (1997), http://people.orie.cornell.edu/shane/pubs/thesis.pdf.
# Global Carleman Estimate and State Observation Problem for Ginzburg-Landau
Equation
Fangfang Dou, Zhonghua Liao and Xiaomin Zhu

Fangfang Dou: School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, China. <EMAIL_ADDRESS>

Zhonghua Liao: School of Mathematics, Sichuan University, Chengdu 610064, China. <EMAIL_ADDRESS>

Xiaomin Zhu: School of Mathematics, Sichuan University, Chengdu 610064, China. <EMAIL_ADDRESS>
###### Abstract
In this paper, we prove a global Carleman estimate for the complex Ginzburg-
Landau operator with a cubic nonlinear term in a bounded domain of
${\mathop{\rm l\negthinspace R}}^{n},n=2,3$. As applications, we study state
observation problems for the Ginzburg-Landau equation.
Key Words. Ginzburg-Landau equation, Carleman estimate, state observation
problem, conditional stability.
2000 Mathematics Subject Classification. 93C20, 93B07.
## 1 Introduction
Let $T>0$, $\Omega\subset{\mathop{\rm l\negthinspace R}}^{n}(n=2,3)$ be a
given bounded domain with a $C^{2}$ boundary $\Gamma$. Let
$\omega\subset\Omega$ be a nonempty open subset and $\Gamma_{0}\subset\Gamma$
be a nonempty open subset. Put
$Q\buildrel\triangle\over{=}(0,T)\times\Omega,\quad\Sigma\buildrel\triangle\over{=}(0,T)\times\Gamma,\quad
Q_{\omega}\buildrel\triangle\over{=}(0,T)\times\omega,\quad\Sigma_{0}\buildrel\triangle\over{=}(0,T)\times\Gamma_{0}.$
Consider the following Ginzburg-Landau type equation
$y_{t}-(1+ib)\Delta y+(1+ic)|y|^{2}y=0\qquad\text{ in }Q,\\\ $ (1.1)
where $i=\sqrt{-1}$, $b,c\in{\mathop{\rm l\negthinspace R}}$ characterize
linear and nonlinear dispersion, respectively.
The Ginzburg-Landau equation was introduced as a phenomenological description
of superconductivity in [7]. It is now used to describe many physical
phenomena, from superconductivity, superfluidity, and Bose-Einstein
condensation to liquid crystals, strings in field theory, and instability
waves (e.g., [1, 16]). It is one of the most-studied nonlinear equations in
both the physics and mathematics communities. In particular, we refer the
readers to [8] for a detailed introduction to the well-posedness of
Ginzburg-Landau equations.
In this paper, we focus on establishing Carleman estimates for solutions to
(1.1). Carleman estimates, which serve as important tools in the study of
unique continuation, observability and controllability, and inverse problems
for partial differential equations, have been investigated extensively (see
[5, 9, 10, 12, 14, 13] and the rich references cited therein). Generally
speaking, Carleman estimates are established for linear partial differential
operators. The usual way to handle a semilinear equation is to combine the
Carleman estimate for the linearized equation with a Sobolev embedding theorem
(e.g., [3]). Nevertheless, this technique puts restrictions on the semilinear
term that a cubic term does not satisfy.
In this paper, we endeavor to derive Carleman estimates for the Ginzburg-
Landau equation (1.1). For convenience, we define the Ginzburg-Landau operator
by
${\cal F}y\buildrel\triangle\over{=}y_{t}-(1+ib)\Delta y+(1+ic)|y|^{2}y.$
(1.2)
We first establish a modified pointwise estimate for the Ginzburg-Landau
operator, and then get the desired Carleman estimates.
As an application of the Carleman estimates established in this paper, we
consider a state observation problem for the Ginzburg-Landau equation; that
is, can one determine the solution to (1.1) (with a suitable boundary
condition) from a partial measurement of the solution on $Q_{\omega}$ or
$\Sigma_{0}$? More precisely, we consider the following state observation
problems:
* •
The first one is the Identification Problem: Can the solution $y$ to (1.1) be
determined uniquely by the observation $y|_{Q_{\omega}}$ (resp.
$\displaystyle\frac{\partial y}{\partial\nu}\Big{|}_{\Sigma_{0}}$)?
* •
If the answer to the Identification Problem is positive, then it is natural to
ask the Conditional Stability Problem: Assume that two solutions $y$ and
$\hat{y}$ of equation (1.1) are given, and let $y|_{Q_{\omega}}$ and
$\hat{y}|_{Q_{\omega}}$ be the corresponding observations. Can we find a
positive constant $C$ such that
$|y-\hat{y}|\leq
C(y,\hat{y})|\\!|y|_{Q_{\omega}}-\hat{y}|_{Q_{\omega}}|\\!|,\;\;\Big{(}{\it
resp.}|y-\hat{y}|\leq C(y,\hat{y})\Big{|}\\!\Big{|}\frac{\partial
y}{\partial\nu}\Big{|}_{\Sigma_{0}}-\frac{\partial\hat{y}}{\partial\nu}\Big{|}_{\Sigma_{0}}\Big{|}\\!\Big{|}\Big{)}$
(1.3)
with appropriate norms in both sides?
###### Remark 1.1
In the formulation of the Conditional Stability Problem, the constant
$C(y,\hat{y})$ in (1.3) means that we need some a priori bound on the
solutions $y$ and $\hat{y}$. This is natural since we deal with a semilinear
equation. However, the choice of the a priori bound is subtle. Indeed, if one
assumes that
$|y|_{L^{\infty}(0,T;L^{\infty}(\Omega))}+|\hat{y}|_{L^{\infty}(0,T;L^{\infty}(\Omega))}<M$
for some $M>0$, then the derivation of (1.3) for the semilinear equation with
a cubic term is the same as for the linear equation, since
$|y^{3}|\leq M^{2}|y|,\quad|\hat{y}^{3}|\leq M^{2}|\hat{y}|.$
In this paper, we do not assume such a condition.
State observation problems of the above type have been studied extensively for
linear PDEs (see [2, 4, 5, 17] and the rich references therein). However, the
semilinear case with a cubic term has attracted very little attention. To the
best of our knowledge, there are no published works addressing the state
observation problem for (1.1). The cubic nonlinear term in the Ginzburg-Landau
equation brings many challenges to the proof: we cannot simply imitate the
arguments for linear equations to obtain the desired conditional stability. As
will be seen in Section 4, several technical obstacles must be overcome.
The rest of this paper is organized as follows. In Section 2, we state an
internal and a boundary Carleman estimate for the Ginzburg-Landau operator.
These Carleman estimates are established in Section 3. Finally, as
applications of the Carleman estimates given in Section 2, we solve the
conditional stability problem for the observation problem of the
Ginzburg-Landau equation in Section 4.
## 2 Global Carleman estimate for Ginzburg-Landau operators
We first recall the following known results.
###### Lemma 2.1
([10]) There is a real function $\psi_{1}\in C^{4}({\overline{\Omega}})$ such
that
$\left\\{\begin{array}[]{ll}\displaystyle\psi_{1}>0\ \mbox{ in }\Omega,\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\psi_{1}=0\ \mbox{ on
}\Gamma,\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle|\nabla\psi_{1}(x)|>0\ \mbox{ for all
}x\in{\overline{\Omega}}\setminus{\omega}.\end{array}\right.$
Take a bounded domain $J\subset{\mathop{\rm l\negthinspace R}}^{n}$ such that
$\partial J\cap\overline{\Omega}\subset\Gamma_{0}$ and that
$\widetilde{\Omega}=J\cup\Omega\cup\Gamma$ has a $C^{4}$ boundary
$\partial\widetilde{\Omega}$. Applying Lemma 2.1 to the domain
$\widetilde{\Omega}$, with the critical points of $\psi_{1}$ chosen to lie in
$J$, we get the following result:
###### Lemma 2.2
There is a real function $\psi_{2}\in C^{4}({\overline{\Omega}})$ such that
$\left\\{\begin{array}[]{ll}\displaystyle\psi_{2}>0\ \mbox{ in
}\Omega,\quad|\nabla\psi_{2}(x)|>0\mbox{ in }\Omega,\\\ \vskip 6.0pt plus
2.0pt minus 2.0pt\cr\displaystyle\psi_{2}=0\quad\mbox{on
}{\Gamma\setminus\Gamma_{0}},\quad\frac{\partial\psi_{2}}{\partial\nu}\leq 0,\
\ \forall x\in{\Gamma\setminus\Gamma_{0}}.\end{array}\right.$
Now we introduce the weight functions for the Carleman estimates. Let
$\lambda>1$ and $\mu>1$. For $j=1,2$, put
$\varphi_{j}(t,x)={e^{\mu\psi_{j}(x)}\over{t(T-t)}},\quad\rho_{j}(t,x)={{e^{\mu\psi_{j}(x)}-e^{2\mu|\psi_{j}|_{C(\overline{\Omega};\;{\mathop{\rm
l\negthinspace
R}})}}}\over{t(T-t)}},\quad\ell_{j}=\lambda\rho_{j},\quad\theta_{j}=e^{\ell_{j}}.$
(2.1)
Denote
$\begin{array}[]{ll}\displaystyle\alpha_{1}=-\frac{1}{1+b^{2}},\quad\beta_{1}=\frac{b}{1+b^{2}},\quad\alpha_{2}=\frac{1+bc}{1+b^{2}},\quad\beta_{2}=\frac{c-b}{1+b^{2}},\end{array}$
(2.2) $\begin{array}[]{ll}\displaystyle{\cal
P}y\buildrel\triangle\over{=}(\alpha_{1}+i\beta_{1})y_{t}+\Delta y\end{array}$
and
$\begin{array}[]{ll}\displaystyle{\cal G}y\buildrel\triangle\over{=}{\cal
P}y-(\alpha_{2}+i\beta_{2})|y|^{2}y.\end{array}$ (2.3)
Then ${\cal F}=-(1+ib){\cal G}$. In the following, we consider the operator
${\cal G}$ instead of ${\cal F}$ for simplicity of notation.
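For completeness, the factorization ${\cal F}=-(1+ib){\cal G}$ can be checked directly from the constants in (2.2):

```latex
\begin{aligned}
-(1+ib)(\alpha_{1}+i\beta_{1})
  &= -(1+ib)\,\frac{-1+ib}{1+b^{2}}
   = \frac{1+b^{2}}{1+b^{2}} = 1,\\[2pt]
(1+ib)(\alpha_{2}+i\beta_{2})
  &= (1+ib)\,\frac{(1+bc)+i(c-b)}{1+b^{2}}
   = \frac{(1+b^{2})+ic(1+b^{2})}{1+b^{2}} = 1+ic,
\end{aligned}
```

so that $-(1+ib){\cal G}y=y_{t}-(1+ib)\Delta y+(1+ic)|y|^{2}y={\cal F}y$.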
We assume that
###### Condition 2.1
For some $r_{0}\in(0,1)$ and $\delta_{0}\in(0,1/8)$,
$|b|\leq
r_{0}<1,\qquad\alpha_{2}>0,\qquad|\beta_{2}|\leq\delta_{0}\alpha_{2}.$ (2.4)
###### Remark 2.1
From the first inequality of Condition 2.1 and (2.2), we know that
$\beta_{1}<\frac{r_{0}}{1+r_{0}^{2}}<\frac{1}{2}$. Hence, Condition 2.1 means
that the dispersion coefficients $b$ and $c$ (the imaginary parts of the
coefficients in (1.1)) cannot be large. We believe this is a technical
assumption; however, we do not know how to drop it. On the other hand, the
condition is satisfied in some important cases (e.g., [1]).
We have the following internal and boundary Carleman estimates for the
operator ${\cal G}$.
###### Theorem 2.1
Assume that Condition 2.1 holds. For all $y\in C([0,T];L^{2}(\Omega))\cap
L^{2}(0,T;H^{1}(\Omega))$ such that ${\cal G}y\in L^{2}(0,T;L^{2}(\Omega))$
and $\displaystyle{\partial y}/{\partial\nu}=0$ or $\displaystyle y=0$ on
$\Sigma$, there is a $\mu_{1}>0$ such that for all $\mu\geq\mu_{1}$, one can
find two constants $C=C(\mu_{1})>0$ and $\lambda_{1}=\lambda_{1}(\mu_{1})$ so
that for all $\lambda\geq\lambda_{1}$, there holds
$\begin{array}[]{ll}\displaystyle\int_{Q}(\lambda\varphi_{1})^{-1}\theta_{1}^{2}\big{(}|y_{t}|^{2}+|\Delta
y|^{2}\big{)}dxdt+\int_{Q}\big{(}\theta_{1}^{2}|y|^{6}+\theta_{1}^{2}|y|^{2}|\nabla
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle+\lambda\mu^{2}\int_{Q}\theta_{1}^{2}\varphi_{1}\big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|y|^{2}+|\nabla
y|^{2}+\lambda\varphi_{1}|y|^{4}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq C\left[\int_{Q}\theta_{1}^{2}|{\cal
G}y|^{2}dxdt+\lambda^{2}\mu^{2}\int_{Q_{\omega}}\theta_{1}^{2}\varphi_{1}^{2}\big{(}\lambda\mu^{2}\varphi_{1}|y|^{2}+|y|^{4}\big{)}dxdt\right].\end{array}$
(2.5)
Here and in the rest of this paper, we use $C$ to denote a generic positive
constant depending on $\Omega$, $T$, $\omega$, $b$ and $c$ (unless otherwise
stated), which may change from line to line.
###### Theorem 2.2
Assume that Condition 2.1 holds. For all $y\in C([0,T];L^{2}(\Omega))\cap
L^{2}(0,T;H^{1}(\Omega))$ such that ${\cal G}y\in L^{2}(0,T;L^{2}(\Omega))$
and $y=0$ on $\Sigma$, there is a $\mu_{2}>0$ such that for all
$\mu\geq\mu_{2}$, one can find two constants $C^{*}=C^{*}(\mu_{2})>0$ and
$\lambda_{2}=\lambda_{2}(\mu_{2})$ such that for all $\lambda\geq\lambda_{2}$,
it holds that
$\begin{array}[]{ll}&\displaystyle\int_{Q}(\lambda\varphi_{2})^{-1}\theta_{2}^{2}\big{(}|y_{t}|^{2}+|\Delta
y|^{2}\big{)}dxdt+\int_{Q}\big{(}\theta_{2}^{2}|y|^{6}+\theta_{2}^{2}|y|^{2}|\nabla
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle+\lambda\mu^{2}\int_{Q}\theta_{2}^{2}\varphi_{2}\big{(}\lambda^{2}\mu^{2}\varphi_{2}^{2}|y|^{2}+|\nabla
y|^{2}+\lambda\varphi_{2}|y|^{4}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\leq C^{*}\left(\int_{Q}\theta_{2}^{2}|{\cal
G}y|^{2}dxdt\\!+\lambda\mu\int_{\Sigma_{0}}\varphi_{2}\theta_{2}^{2}\frac{\partial\psi_{2}}{\partial\nu}\Big{|}\frac{\partial
y}{\partial\nu}\Big{|}^{2}d\Gamma dt\right).\end{array}$
###### Remark 2.2
Recalling (2.2) for the definitions of $\alpha_{2}$ and $\beta_{2}$, it is
easy to see that when $b\rightarrow 0$ and $c\rightarrow 0$, all the
assumptions in (2.4) are satisfied. Thus one can get the Carleman estimate for
the semilinear heat operator “$y_{t}-\Delta y+|y|^{2}y$” immediately.
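Concretely, letting $b\to 0$ and $c\to 0$ in (2.2) gives

```latex
\alpha_{1}\to -1,\qquad \beta_{1}\to 0,\qquad
\alpha_{2}=\frac{1+bc}{1+b^{2}}\to 1,\qquad
\beta_{2}=\frac{c-b}{1+b^{2}}\to 0,
```

so (2.4) holds for $b,c$ small, and in this limit ${\cal G}y\to-\big(y_{t}-\Delta y+|y|^{2}y\big)$, recovering the semilinear heat operator up to sign.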
###### Remark 2.3
Carleman estimates for the heat operator $\partial_{t}-\Delta$ and the complex
parabolic operator $\partial_{t}-(1+ib)\Delta$ with Dirichlet boundary
condition have been established in [6] and [4], respectively. We mention that,
following the proofs of the above two theorems in the next section, we can
easily obtain the Carleman inequality for the parabolic operator with complex
coefficient $\partial_{t}-(1+ib)\Delta$ with Neumann boundary condition:
$\begin{array}[]{ll}&\displaystyle\int_{Q}(\lambda\varphi_{1})^{-1}\theta_{1}^{2}\big{(}|y_{t}|^{2}+|\Delta
y|^{2}\big{)}dxdt+\lambda\mu^{2}\int_{Q}\theta_{1}^{2}\varphi_{1}\big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|y|^{2}+|\nabla
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\leq
C\left[\int_{Q}\theta_{1}^{2}\big{|}y_{t}-(1+ib)\Delta
y\big{|}^{2}dxdt+\lambda^{2}\mu^{2}\int_{Q_{\omega}}\varphi_{1}^{3}\theta_{1}^{2}\lambda\mu^{2}|y|^{2}dxdt\right].\end{array}$
and
$\begin{array}[]{ll}\displaystyle\int_{Q}(\lambda\varphi_{2})^{-1}\theta_{2}^{2}\big{(}|y_{t}|^{2}+|\Delta
y|^{2}\big{)}dxdt+\lambda\mu^{2}\int_{Q}\theta_{2}^{2}\varphi_{2}\big{(}\lambda^{2}\mu^{2}\varphi_{2}^{2}|y|^{2}+|\nabla
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C^{*}\Big{[}\int_{Q}\theta_{2}^{2}\big{|}y_{t}-(1+ib)\Delta
y|^{2}dxdt+\lambda\mu\int_{\Sigma_{0}}\varphi_{2}\theta_{2}^{2}\frac{\partial\psi_{2}}{\partial\nu}\Big{|}\frac{\partial
y}{\partial\nu}\Big{|}^{2}d\Gamma dt\Big{]},\end{array}$
respectively.
## 3 Proof of Theorems 2.1 and 2.2
In the following context, we denote by $\overline{z},{\mathop{\rm Re}\,}z$ and
${\mathop{\rm Im}\,}z$ the complex conjugate, real part and imaginary part of
a complex number $z\in{\mathop{\rm l\negthinspace\negthinspace\negthinspace
C}}$, respectively.
### 3.1 A weighted identity for Ginzburg-Landau operator
In this subsection, we establish a pointwise weighted identity for the
operator “${\cal G}$”, which is the key to proving our Carleman estimates.
Fix a weight function $\ell\in C^{2}({\mathop{\rm l\negthinspace
R}}^{1+n};{\mathop{\rm l\negthinspace R}})$, and put
$\theta=e^{\ell},\quad v=\theta y.$
Some elementary calculations yield that
$\begin{array}[]{ll}\displaystyle&\displaystyle\theta{\cal
P}y\buildrel\triangle\over{=}\theta\big{[}(\alpha_{1}+i\beta_{1})y_{t}+\Delta
y\big{]}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle=(\alpha_{1}+i\beta_{1})(v_{t}-\ell_{t}v)+\Delta
v-2\nabla\ell\cdot\nabla v+|\nabla\ell|^{2}v-\Delta\ell v\\\ \vskip 6.0pt plus
2.0pt minus 2.0pt\cr&\displaystyle=I_{1}+I_{2},\end{array}$ (3.1)
where
$\left\\{\begin{array}[]{ll}\displaystyle
I_{1}\buildrel\triangle\over{=}i\beta_{1}v_{t}-\alpha_{1}\ell_{t}v+\Delta
v+|\nabla\ell|^{2}v,\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle
I_{2}\buildrel\triangle\over{=}\alpha_{1}v_{t}-i\beta_{1}\ell_{t}v-2\nabla\ell\cdot\nabla
v-\Delta\ell v.\end{array}\right.$ (3.2)
Therefore,
$\theta{\cal G}y=\theta{\cal
P}y-\theta(\alpha_{2}+i\beta_{2})|y|^{2}y=I_{1}+I_{2}-(\alpha_{2}+i\beta_{2})\theta^{-2}|v|^{2}v={\cal
J}_{1}+{\cal J}_{2},$ (3.3)
where
$\left\\{\begin{array}[]{ll}\displaystyle{\cal
J}_{1}=I_{1}-\frac{3\alpha_{2}}{4}|v|^{2}v\theta^{-2},\\\ \vskip 6.0pt plus
2.0pt minus 2.0pt\cr\displaystyle{\cal
J}_{2}=I_{2}-\frac{\alpha_{2}}{4}|v|^{2}v\theta^{-2}-i\beta_{2}|v|^{2}v\theta^{-2}.\end{array}\right.$
(3.4)
We have the following weighted identity for ${\cal G}y$ defined by (2.3).
###### Theorem 3.1
Let $\alpha_{j},\ \beta_{j}\in{\mathop{\rm l\negthinspace R}}$ ($j=1,2$).
Assume that $y,\;v\in C^{2}({\mathop{\rm l\negthinspace
R}}^{1+n};\;{\mathop{\rm l\negthinspace\negthinspace\negthinspace C}})$ and
$\Psi,\;\Phi,\;\ell\in C^{2}({\mathop{\rm l\negthinspace
R}}^{1+n};{\mathop{\rm l\negthinspace R}})$ satisfying
$\Psi+\Phi=-\Delta\ell.$
Then, we have
$\displaystyle 2{\mathop{\rm Re}\,}(\theta{\cal G}y\overline{\cal
J}_{1})+\Big{(}M+\frac{3}{8}\alpha_{1}\alpha_{2}\theta^{-2}|v|^{4}\Big{)}_{t}+\nabla\cdot{\cal
H}(v)$ $\displaystyle=|{\cal J}_{1}|^{2}+|{\cal J}_{1}+\Phi
v|^{2}+B|v|^{2}+4{\mathop{\rm
Re}\,}\sum_{j,k=1}^{n}\ell_{x_{j}x_{k}}v_{x_{j}}\overline{v}_{x_{k}}+2\Phi|\nabla
v|^{2}+E\theta^{-2}|v|^{4}+\frac{3\alpha_{2}^{2}}{8}\theta^{-4}|v|^{6}$
$\displaystyle\quad+\frac{\alpha_{2}}{4}\theta^{-2}\big{|}\nabla|v|^{2}\big{|}^{2}+\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}|\nabla
v|^{2}+U+2\beta_{1}\Big{(}\Phi+\frac{\alpha_{2}}{4}\theta^{-2}|v|^{2}\Big{)}{\mathop{\rm
Im}\,}(\overline{v}v_{t}),$ (3.5)
where $I_{1}$ and $I_{2}$ are given in (3.2), ${\cal J}_{1}$ and ${\cal
J}_{2}$ are given in (3.4), in addition,
$\left\\{\begin{array}[]{ll}\displaystyle
M\buildrel\triangle\over{=}\left[(\alpha_{1}^{2}+\beta_{1}^{2})\ell_{t}-\alpha_{1}|\nabla\ell|^{2}\right]|v|^{2}+\alpha_{1}|\nabla
v|^{2}-2\beta_{1}{\mathop{\rm Im}\,}(\nabla\ell\cdot\nabla\overline{v}v),\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle
V(v)\buildrel\triangle\over{=}4{\mathop{\rm
Re}\,}(\nabla\ell\cdot\nabla\overline{v})\nabla v-2|\nabla
v|^{2}\nabla\ell-2\alpha_{1}{\mathop{\rm Re}\,}(\overline{v}_{t}\nabla
v)+2\beta_{1}{\mathop{\rm Im}\,}(\overline{v}_{t}v\nabla\ell)\\\ \vskip 6.0pt
plus 2.0pt minus 2.0pt\cr\displaystyle\qquad\quad+2{\mathop{\rm
Im}\,}(\ell_{t}\overline{v}\nabla v)-2\Psi{\mathop{\rm
Re}\,}(\overline{v}\nabla
v)+2(|\nabla\ell|^{2}-2\alpha_{1}\ell_{t})\nabla\ell|v|^{2},\\\ \vskip 6.0pt
plus 2.0pt minus 2.0pt\cr\displaystyle
B\buildrel\triangle\over{=}(\alpha_{1}^{2}+\beta_{1}^{2})\ell_{tt}+2\alpha_{1}\Phi\ell_{t}-4\alpha_{1}\nabla\ell\cdot\nabla\ell_{t}+4\sum_{j,k=1}^{n}\ell_{x_{j}x_{k}}\ell_{x_{j}}\ell_{x_{k}}-2\Phi|\nabla\ell|^{2}-\Phi^{2},\end{array}\right.$
(3.6)
and
$\left\\{\begin{array}[]{ll}\displaystyle{\cal
H}(v)=V(v)-\frac{\alpha_{2}}{4}\theta^{-2}|v|^{4}\nabla\ell+\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}\nabla v),\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle
E=\frac{\alpha_{2}}{2}|\nabla\ell|^{2}+\alpha_{2}\Delta\ell-\frac{\alpha_{1}\alpha_{2}}{4}\ell_{t}+\frac{3\alpha_{2}}{2}\Phi,\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle U=-4\beta_{1}{\mathop{\rm
Im}\,}(\nabla\ell_{t}\cdot\nabla\overline{v}v)-2{\mathop{\rm
Re}\,}(\nabla\Psi\cdot\nabla v\overline{v})-2{\mathop{\rm
Re}\,}(i\beta_{2}\overline{{\cal
J}_{1}}\theta^{-2}|v|^{2}v).\end{array}\right.$ (3.7)
###### Remark 3.1
In Theorem 3.1, $U$ consists of lower-order terms with respect to $v$ and
$\nabla v$; it can eventually be absorbed by the energy terms $|v|^{2}$ and
$|\nabla v|^{2}$. The last term $\displaystyle
2\beta_{1}\Big{(}\Phi+\frac{\alpha_{2}}{4}\theta^{-2}|v|^{2}\Big{)}{\mathop{\rm
Im}\,}(\overline{v}v_{t})$ involves the principal part $v_{t}$. Since it is
related to the choice of $\Phi$, a refined estimate of this term is derived in
Section 3.2.
###### Remark 3.2
As explained in [5], in order to keep more flexibility in the sequel, we also
introduce two auxiliary functions $\Psi\in C^{1}({\mathop{\rm l\negthinspace
R}}^{1+n};{\mathop{\rm l\negthinspace R}})$ and $\Phi\in C({\mathop{\rm
l\negthinspace R}}^{1+n};{\mathop{\rm l\negthinspace R}})$. Their choices will
be specified later.
We recall the following result, which is useful in the proof of Theorem 2.1.
###### Lemma 3.1
([5, Theorem 1.1]) Under the assumptions of Theorem 3.1, it holds that
$\begin{array}[]{ll}&\displaystyle\displaystyle 2{\mathop{\rm
Re}\,}(\theta{\cal P}_{1}y\overline{I_{1}})+M_{t}+\nabla\cdot V(v)\\\ \vskip
6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle=|I_{1}|^{2}+|I_{1}+\Phi
v|^{2}+B|v|^{2}+4{\mathop{\rm
Re}\,}\sum_{j,k=1}^{n}\ell_{x_{j}x_{k}}v_{x_{j}}\overline{v}_{x_{k}}+2\Phi|\nabla
v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\quad-2{\mathop{\rm Re}\,}(\nabla\Psi\cdot\nabla
v\overline{v})-4\beta_{1}{\mathop{\rm
Im}\,}(\nabla\ell_{t}\cdot\nabla\overline{v}v)+2\beta_{1}\Phi{\mathop{\rm
Im}\,}(\overline{v}v_{t}),\end{array}$ (3.8)
where $I_{1}$ is given in (3.2), $M,\ V$ and $B$ are given in (3.6).
Proof of Theorem 3.1. By (3.3) and (3.4), we have
$\begin{array}[]{ll}&\displaystyle 2{\mathop{\rm Re}\,}(\theta{\cal
G}y\overline{{\cal J}_{1}})=2|{\cal J}_{1}|^{2}+2{\mathop{\rm
Re}\,}(\overline{{\cal J}_{1}}{\cal J}_{2})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle=2|{\cal J}_{1}|^{2}+2{\mathop{\rm
Re}\,}(\overline{I_{1}}I_{2})-\frac{3\alpha_{2}}{2}{\mathop{\rm
Re}\,}(|v|^{2}\overline{v}\theta^{-2}I_{2})-\frac{\alpha_{2}}{2}{\mathop{\rm
Re}\,}(\overline{I_{1}}\theta^{-2}|v|^{2}v)\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\quad-2{\mathop{\rm Re}\,}(i\beta_{2}\overline{{\cal
J}_{1}}\theta^{-2}|v|^{2}v)+\frac{3\alpha_{2}^{2}}{8}\theta^{-4}|v|^{6}.\end{array}$
(3.9)
From (3.1), (3.2) and (3.4), we find that
$\begin{array}[]{ll}\displaystyle 2{\mathop{\rm
Re}\,}(\overline{I_{1}}I_{2})=2{\mathop{\rm Re}\,}(\theta{\cal
P}y\overline{I_{1}})-2|I_{1}|^{2},\end{array}$ (3.10)
and
$\begin{array}[]{ll}\displaystyle&\displaystyle|I_{1}+\Phi
v|^{2}=|I_{1}|^{2}+2\Phi{\mathop{\rm
Re}\,}(I_{1}\overline{v})+\Phi^{2}|v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle=|I_{1}|^{2}+2\Phi{\mathop{\rm Re}\,}({\cal
J}_{1}\overline{v})+\Phi^{2}|v|^{2}+\frac{3\alpha_{2}}{2}\Phi\theta^{-2}|v|^{4}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle=|I_{1}|^{2}+|{\cal
J}_{1}+\Phi v|^{2}-|{\cal
J}_{1}|^{2}+\frac{3\alpha_{2}}{2}\Phi\theta^{-2}|v|^{4}.\end{array}$ (3.11)
Combining (3.8)–(3.11), we obtain
$\begin{array}[]{ll}&\displaystyle 2{\mathop{\rm Re}\,}(\theta{\cal
G}y\overline{\cal J}_{1})+M_{t}+\nabla\cdot V\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle=|{\cal J}_{1}|^{2}+|{\cal J}_{1}+\Phi
v|^{2}+B|v|^{2}+4{\mathop{\rm
Re}\,}\sum_{j,k=1}^{n}\ell_{x_{j}x_{k}}v_{x_{j}}\overline{v}_{x_{k}}+2\Phi|\nabla
v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\quad+\frac{3\alpha_{2}^{2}}{8}\theta^{-4}|v|^{6}+\frac{3\alpha_{2}}{2}\Phi\theta^{-2}|v|^{4}-\frac{3\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}I_{2})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\quad-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{I_{1}}v)+U+2\beta_{1}\Phi{\mathop{\rm
Im}\,}(\overline{v}v_{t}).\end{array}$ (3.12)
Comparing (3.12) with (3.5), we still need to deal with
$``\displaystyle-\frac{3\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}I_{2})"$ and
$``\displaystyle-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{I_{1}}v)"$. Clearly,
$\begin{array}[]{ll}\displaystyle-2{\mathop{\rm
Re}\,}(\theta^{-2}|v|^{2}\overline{v}I_{2})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-2{\mathop{\rm
Re}\,}\Big{[}\theta^{-2}|v|^{2}\overline{v}(\alpha_{1}v_{t}-i\beta_{1}\ell_{t}v-2\nabla\ell\cdot\nabla
v-\Delta\ell v)\Big{]}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-2\alpha_{1}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}v_{t})+4\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}\nabla\ell\cdot\nabla
v)+2\Delta\ell\theta^{-2}|v|^{4}.\end{array}$ (3.13)
By integrating by parts, the first two terms on the right-hand side of (3.13)
are respectively
$\begin{array}[]{ll}&\displaystyle-2\alpha_{1}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{v}v_{t})=-\alpha_{1}\theta^{-2}|v|^{2}(|v|^{2})_{t}=-\frac{1}{2}(\alpha_{1}\theta^{-2}|v|^{4})_{t}-\alpha_{1}\ell_{t}\theta^{-2}|v|^{4},\end{array}$
(3.14)
and
$\begin{array}[]{ll}\displaystyle 4|v|^{2}\theta^{-2}{\mathop{\rm
Re}\,}(\overline{v}\nabla\ell\cdot\nabla
v)=\nabla\cdot(\theta^{-2}|v|^{4}\nabla\ell)+\theta^{-2}(2|\nabla\ell|^{2}-\Delta\ell)|v|^{4}.\end{array}$
(3.15)
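For the reader's convenience, we note that (3.14) and (3.15) follow from the elementary identities $2{\mathop{\rm Re}\,}(\overline{v}v_{t})=(|v|^{2})_{t}$ and $4|v|^{2}{\mathop{\rm Re}\,}(\overline{v}\nabla v)=\nabla|v|^{4}$, together with $(\theta^{-2})_{t}=-2\ell_{t}\theta^{-2}$ and $\nabla\theta^{-2}=-2\theta^{-2}\nabla\ell$. For instance,
$\displaystyle 4|v|^{2}\theta^{-2}{\mathop{\rm Re}\,}(\overline{v}\nabla\ell\cdot\nabla v)=\theta^{-2}\nabla\ell\cdot\nabla|v|^{4}=\nabla\cdot(\theta^{-2}|v|^{4}\nabla\ell)+2|\nabla\ell|^{2}\theta^{-2}|v|^{4}-\Delta\ell\theta^{-2}|v|^{4},$
which is exactly (3.15).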
Substituting (3.14) and (3.15) into (3.13) yields
$\begin{array}[]{ll}\displaystyle-\frac{3\alpha_{2}}{2}{\mathop{\rm
Re}\,}(\theta^{-2}|v|^{2}\overline{v}I_{2})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{3\alpha_{1}\alpha_{2}}{8}(\theta^{-2}|v|^{4})_{t}-\frac{3\alpha_{1}\alpha_{2}}{4}\theta^{-2}\ell_{t}|v|^{4}+\frac{3\alpha_{2}}{4}\nabla\cdot(\theta^{-2}|v|^{4}\nabla\ell)\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\frac{3\alpha_{2}}{4}(2|\nabla\ell|^{2}-\Delta\ell)\theta^{-2}|v|^{4}+\frac{3\alpha_{2}}{2}\Delta\ell\theta^{-2}|v|^{4}.\end{array}$
(3.16)
Further, from the definition of $I_{1}$ in (3.2) and by integrating by parts,
we obtain that
$\begin{array}[]{ll}\displaystyle-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(\overline{I_{1}}v)\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}\big{[}v(-i\beta_{1}\overline{v}_{t}-\alpha_{1}\ell_{t}\overline{v}+\Delta\overline{v}+|\nabla\ell|^{2}\overline{v})\big{]}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{\beta_{1}\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Im}\,}(v\overline{v}_{t})-\frac{\alpha_{2}}{2}(|\nabla\ell|^{2}-\alpha_{1}\ell_{t})\theta^{-2}|v|^{4}-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(v\Delta\overline{v}).\end{array}$ (3.17)
Similarly, there holds
$\begin{array}[]{ll}\displaystyle-\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(v\Delta\overline{v})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{\alpha_{2}}{2}\nabla\cdot\big{[}\theta^{-2}|v|^{2}{\mathop{\rm
Re}\,}(v\nabla\overline{v})\big{]}+\frac{\alpha_{2}}{2}\theta^{-2}|v|^{2}|\nabla
v|^{2}+\frac{\alpha_{2}}{4}\theta^{-2}\big{|}\nabla|v|^{2}\big{|}^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad-\frac{\alpha_{2}}{4}\nabla\cdot\big{(}\theta^{-2}\nabla\ell|v|^{4}\big{)}-\frac{\alpha_{2}}{4}\theta^{-2}(2|\nabla\ell|^{2}-\Delta\ell)|v|^{4}.\end{array}$
(3.18)
Finally, by substituting (3.16)–(3.18) into (3.12), we obtain (3.5)
immediately.
### 3.2 Proof of Theorems 2.1 and 2.2
Proof of Theorem 2.1. The proof is long; we divide it into five steps.
Step 1. Put $\ell=\ell_{1}$ and $\theta=\theta_{1}$ in Theorem 3.1. For
$j,k=1,\cdots,n$, we have
$\ell_{1t}=\lambda\rho_{1t},\quad\ell_{1x_{j}}=\lambda\mu\varphi_{1}\psi_{1x_{j}},\quad\ell_{1x_{j}x_{k}}=\lambda\mu^{2}\varphi_{1}\psi_{1x_{j}}\psi_{1x_{k}}+\lambda\mu\varphi_{1}\psi_{1x_{j}x_{k}}.$
(3.19)
In addition, it is easy to get that
$|\rho_{1t}|\leq
Ce^{2\mu|\psi_{1}|_{C(\overline{\Omega})}}\varphi_{1}^{2},\quad|\varphi_{1t}|\leq
C\varphi_{1}^{2},$ (3.20)
and
$\begin{array}[]{ll}\displaystyle|\ell_{1tt}|\negthinspace\negthinspace\negthinspace&\displaystyle=\Big{|}\lambda\frac{e^{\mu\psi_{1}}-e^{2\mu|\psi_{1}|_{C(\bar{\Omega})}}}{t^{2}(T-t)^{2}}\Big{[}2\frac{(2t-T)^{2}}{t(T-t)}+2\Big{]}\Big{|}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\leq
C\lambda\varphi_{1}^{2}e^{2\mu|\psi_{1}|_{C(\bar{\Omega})}}(1+e^{-\mu\psi_{1}}\varphi_{1})\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\leq
C\lambda\varphi_{1}^{3}e^{3\mu|\psi_{1}|_{C(\bar{\Omega})}}.\end{array}$
(3.21)
Recalling that $\Phi+\Psi=-\Delta\ell_{1}$, by choosing
$\Psi=-2\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}$, we obtain
$\Phi=\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}-\lambda\mu\varphi_{1}\Delta\psi_{1}.$
(3.22)
Next, by (2.2), we see that
$-1<\alpha_{1}<0,\ |\beta_{1}|\leq\frac{1}{2}.$ (3.23)
In the sequel, for $k\in{\mathop{\rm l\negthinspace N}}$, we denote by
$O(\mu^{k})$ a function of order $\mu^{k}$ for large $\mu$, and by
$O_{\mu}(\lambda^{k})$ a function of order $\lambda^{k}$ for large $\lambda$,
with $\mu$ being fixed as a parameter. Recalling (3.6) for the definition of
$B$, and by using (3.19)–(3.23), a short calculation shows that
$\begin{array}[]{ll}\displaystyle
B\negthinspace\negthinspace\negthinspace&\displaystyle=(\alpha_{1}^{2}+\beta_{1}^{2})\ell_{1tt}+2\alpha_{1}\Phi\ell_{1t}-4\alpha_{1}\nabla\ell_{1}\cdot\nabla\ell_{1t}+4\sum_{j,k=1}^{n}\ell_{1x_{j}x_{k}}\ell_{1x_{j}}\ell_{1x_{k}}-2\Phi|\nabla\ell_{1}|^{2}-\Phi^{2}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr&\displaystyle\geq
2\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}+\lambda^{3}\varphi_{1}^{3}O(\mu^{3})+\varphi_{1}^{3}O_{\mu}(\lambda^{2}),\end{array}$
(3.24)
and that
$\begin{array}[]{ll}\displaystyle
E=\frac{\alpha_{2}}{2}|\nabla\ell_{1}|^{2}+\alpha_{2}\Delta\ell_{1}-\frac{\alpha_{1}\alpha_{2}}{4}\ell_{1t}+\frac{3\alpha_{2}}{2}\Phi\geq\frac{\alpha_{2}}{2}\lambda^{2}\mu^{2}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}-C\alpha_{2}\lambda\mu^{2}\varphi_{1}.\end{array}$
(3.25)
Next, by choosing $\Phi$ as in (3.22), we have that
$\begin{array}[]{ll}\displaystyle 4{\mathop{\rm
Re}\,}\sum_{j,k=1}^{n}\ell_{1x_{j}x_{k}}v_{x_{j}}\overline{v}_{x_{k}}+2\Phi|\nabla
v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\geq
4\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}\cdot\nabla
v|^{2}+(2\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}-C\lambda\mu\varphi_{1})|\nabla
v|^{2}.\end{array}$ (3.26)
Further, noting that $\alpha_{1}<0$, we obtain that
$\begin{array}[]{ll}\displaystyle
U\negthinspace\negthinspace\negthinspace&\displaystyle=-4\beta_{1}{\mathop{\rm
Im}\,}(\nabla\ell_{1t}\cdot\nabla\overline{v}v)-2{\mathop{\rm Re}\,}(\nabla
v\overline{v})\cdot\nabla\Psi-2{\mathop{\rm Re}\,}(i\beta_{2}\overline{{\cal
J}_{1}}\theta^{-2}|v|^{2}v)\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle\geq-C\lambda^{2}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}-C\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}|\nabla
v|^{2}-\frac{32\beta_{2}^{2}}{\alpha_{2}^{2}}|{\cal
J}_{1}|^{2}-\frac{1}{32}\alpha_{2}^{2}\theta_{1}^{-4}|v|^{6}.\end{array}$
(3.27)
Combining the pointwise identity (3.5) given in Theorem 3.1 with
(3.23)–(3.27), we conclude
$\begin{array}[]{ll}\displaystyle 2|\theta_{1}{\cal
G}y|^{2}+\Big{(}M+\frac{3}{8}\alpha_{1}\alpha_{2}\theta_{1}^{-2}|v|^{4}\Big{)}_{t}+\nabla\cdot{\cal
H}\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\geq
2\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}+4\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}\cdot\nabla
v|^{2}+2\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}|\nabla
v|^{2}+\frac{\alpha_{2}}{4}\theta_{1}^{-2}\big{|}\nabla|v|^{2}\big{|}^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\frac{\alpha_{2}}{2}\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}+\frac{\alpha_{2}}{2}\lambda^{2}\mu^{2}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\theta_{1}^{-2}|v|^{4}+\frac{11}{32}\alpha_{2}^{2}\theta_{1}^{-4}|v|^{6}-\Lambda_{1}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+2\beta_{1}\Big{(}\Phi+\frac{\alpha_{2}}{4}\theta_{1}^{-2}|v|^{2}\Big{)}{\mathop{\rm
Im}\,}(\overline{v}v_{t}),\end{array}$ (3.28)
where
$\begin{array}[]{ll}\displaystyle\Lambda_{1}=C(\lambda^{2}\mu^{4}\varphi_{1}^{3}-\lambda^{3}\varphi_{1}^{3}O(\mu^{3})-\varphi_{1}^{3}O_{\mu}(\lambda^{2}))|v|^{2}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\qquad\
+C(\lambda\mu\varphi_{1}+\mu^{2}\varphi_{1})|\nabla
v|^{2}+C\lambda\mu^{2}\varphi_{1}\theta_{1}^{-2}|v|^{4}.\end{array}$ (3.29)
Step 2. We first estimate “$\displaystyle
2\beta_{1}\Big{(}\Phi+\frac{\alpha_{2}}{4}\theta_{1}^{-2}|v|^{2}\Big{)}{\mathop{\rm
Im}\,}(\overline{v}v_{t})$”.
For simplicity, put
$\gamma_{1}\buildrel\triangle\over{=}\frac{1}{\alpha_{1}+i\beta_{1}}=|\gamma_{1}|^{2}(\alpha_{1}-i\beta_{1}),\quad\gamma_{2}\buildrel\triangle\over{=}\alpha_{2}+i\beta_{2}.$
(3.30)
From (3.2) and (3.3), we have
$v_{t}=\gamma_{1}\left(\theta_{1}\mathcal{G}y+\gamma_{1}^{-1}\ell_{1t}v-\Delta
v+2\nabla\ell_{1}\cdot\nabla
v-|\nabla\ell_{1}|^{2}v+\Delta\ell_{1}v+\gamma_{2}|v|^{2}v\theta_{1}^{-2}\right).$
(3.31)
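In more detail, (3.31) follows by solving (3.2)–(3.3) for $v_{t}$: by (3.2) and (3.30), $I_{1}+I_{2}=\gamma_{1}^{-1}v_{t}-\gamma_{1}^{-1}\ell_{1t}v+\Delta v+|\nabla\ell_{1}|^{2}v-2\nabla\ell_{1}\cdot\nabla v-\Delta\ell_{1}v$, so that (3.3) gives
$\displaystyle\gamma_{1}^{-1}v_{t}=\theta_{1}{\cal G}y+\gamma_{1}^{-1}\ell_{1t}v-\Delta v+2\nabla\ell_{1}\cdot\nabla v-|\nabla\ell_{1}|^{2}v+\Delta\ell_{1}v+\gamma_{2}\theta_{1}^{-2}|v|^{2}v,$
and multiplying by $\gamma_{1}$ yields (3.31).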
Noting that
$\Phi=\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}-\lambda\mu\varphi_{1}\Delta\psi_{1}$,
we get that
$\begin{array}[]{ll}\displaystyle 2\beta_{1}\Phi{\mathop{\rm
Im}\,}(\overline{v}v_{t})=2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}(\overline{v}v_{t})-2\beta_{1}\lambda\mu\varphi_{1}\Delta\psi_{1}{\mathop{\rm
Im}\,}(\overline{v}v_{t})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\geq-C|\gamma_{1}|^{2}\lambda^{2}\mu^{4}\varphi_{1}^{3}|v|^{2}-|\theta_{1}\mathcal{G}y|^{2}-2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}\left(\gamma_{1}\overline{v}\Delta v\right)\\\ \vskip 6.0pt plus 2.0pt
minus
2.0pt\cr\displaystyle\quad+4\beta_{1}\lambda^{2}\mu^{3}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}\left(\gamma_{1}\overline{v}\nabla\psi_{1}\cdot\nabla
v\right)+2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\gamma_{2})\theta_{1}^{-2}|v|^{4}-\beta_{1}^{2}\lambda^{3}\mu^{3}\varphi_{1}^{3}|\Delta\psi_{1}|^{2}|v|^{2}-(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2}.\end{array}$
(3.32)
By some direct calculation, we have
$\begin{array}[]{ll}-2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}\left(\gamma_{1}\overline{v}\Delta v\right)\\\ \vskip 6.0pt plus 2.0pt
minus
2.0pt\cr\displaystyle=-2\beta_{1}\lambda\mu^{2}\nabla\cdot\left(\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla v)\right)+2\beta_{1}{\mathop{\rm
Im}\,}(\gamma_{1})\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}|\nabla
v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+2\beta_{1}\lambda\mu^{2}\nabla(\varphi_{1}|\nabla\psi_{1}|^{2})\cdot{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla v)\\\ \end{array}$ (3.33)
and
$\begin{array}[]{ll}4\beta_{1}\lambda^{2}\mu^{3}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}\left(\gamma_{1}\overline{v}\nabla\psi_{1}\cdot\nabla v\right)\\\ \vskip
6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-2i\beta_{1}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\Big{[}(\alpha_{1}-i\beta_{1})\overline{v}\nabla\psi_{1}\cdot\nabla
v-(\alpha_{1}+i\beta_{1})v\nabla\psi_{1}\cdot\nabla\overline{v}\Big{]}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\nabla\cdot\left(\varphi_{1}^{2}|\nabla\psi_{1}|^{2}|v|^{2}\nabla\psi_{1}\right)+2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\nabla\cdot\left(\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\nabla\psi_{1}\right)|v|^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+4\alpha_{1}\beta_{1}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}\left(\overline{v}\nabla\psi_{1}\cdot\nabla v\right)\\\ \vskip 6.0pt
plus 2.0pt minus
2.0pt\cr\displaystyle\geq-2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\nabla\cdot\left(\varphi_{1}^{2}|\nabla\psi_{1}|^{2}|v|^{2}\nabla\psi_{1}\right)+2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\nabla\cdot\left(\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\nabla\psi_{1}\right)|v|^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad-4\alpha_{1}^{2}|\gamma_{1}|^{2}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}\cdot\nabla
v|^{2}-\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}.\end{array}$
(3.34)
By (3.31), we know that
$\begin{array}[]{ll}\displaystyle\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\overline{v}v_{t})\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}\big{(}\overline{v}\gamma_{1}\theta_{1}\mathcal{G}y-\gamma_{1}\overline{v}\Delta
v+2\gamma_{1}\nabla\ell_{1}\cdot\nabla v\overline{v}\big{)}\\\ \vskip 6.0pt
plus 2.0pt minus
2.0pt\cr\displaystyle\quad-\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{4}{\mathop{\rm
Im}\,}(\gamma_{1})(|\nabla\ell_{1}|^{2}-\Delta\ell_{1})+\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-4}|v|^{6}{\mathop{\rm
Im}\,}(\gamma_{1}\gamma_{2}).\end{array}$ (3.35)
For the first term on the right-hand side of (3.35), we have
$\begin{array}[]{ll}\displaystyle\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\overline{v}\gamma_{1}\theta_{1}\mathcal{G}y)\geq-\frac{1}{32}\alpha_{2}^{2}\theta_{1}^{-4}|v|^{6}-2\beta_{1}^{2}|\gamma_{1}|^{2}|\theta_{1}{\cal
G}y|^{2},\end{array}$ (3.36)
and
$\begin{array}[]{ll}\displaystyle\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}\big{(}-\gamma_{1}\overline{v}\Delta
v+2\gamma_{1}\nabla\ell_{1}\cdot\nabla v\overline{v}\big{)}\\\ \vskip 6.0pt
plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\Delta
v)+\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\nabla\ell_{1}\cdot\nabla v\overline{v})\\\ \vskip 6.0pt plus
2.0pt minus 2.0pt\cr\displaystyle=-\frac{1}{2}\beta_{1}\alpha_{2}\hbox{\rm
div$\,$}\big{[}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla
v)\big{]}+\frac{1}{2}\beta_{1}\alpha_{2}{\mathop{\rm
Im}\,}(\gamma_{1})\theta_{1}^{-2}|v|^{2}|\nabla v|^{2}\\\ \vskip 6.0pt plus
2.0pt minus
2.0pt\cr\displaystyle\quad+\beta_{1}\alpha_{2}\theta_{1}^{-2}{\mathop{\rm
Re}\,}(\overline{v}\nabla v)\cdot{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla v).\end{array}$ (3.37)
Recalling the definition of $\gamma_{1}$ in (3.30), it is easy to see that
$\begin{array}[]{ll}\displaystyle\beta_{1}\alpha_{2}\theta_{1}^{-2}{\mathop{\rm
Re}\,}(\overline{v}\nabla v)\cdot{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla v)\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-\frac{1}{4}\beta_{1}^{2}\alpha_{2}\theta_{1}^{-2}|\gamma_{1}|^{2}\big{|}\nabla|v|^{2}\big{|}^{2}+\frac{1}{2}\beta_{1}\alpha_{2}\alpha_{1}|\gamma_{1}|^{2}\theta_{1}^{-2}{\mathop{\rm
Im}\,}\big{(}(\overline{v}\nabla v)^{2}\big{)}\\\ \vskip 6.0pt plus 2.0pt
minus
2.0pt\cr\displaystyle\geq-\frac{1}{4}\beta_{1}^{2}\alpha_{2}\theta_{1}^{-2}|\gamma_{1}|^{2}\big{|}\nabla|v|^{2}\big{|}^{2}+\frac{1}{2}\beta_{1}\alpha_{2}\alpha_{1}|\gamma_{1}|^{2}\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}.\end{array}$ (3.38)
Now, substituting (3.32)–(3.38) into (3.28), and noting that ${\mathop{\rm
Im}\,}(\gamma_{1})=-\beta_{1}|\gamma_{1}|^{2}$, ${\mathop{\rm
Im}\,}(\gamma_{1}\gamma_{2})=(\alpha_{1}\beta_{2}-\alpha_{2}\beta_{1})|\gamma_{1}|^{2}$
and $\beta_{1}^{2}|\gamma_{1}|^{2}<1$, we obtain that
$\begin{array}[]{ll}\displaystyle(3+2\beta_{1}^{2}|\gamma_{1}|^{2})|\theta_{1}{\cal
G}y|^{2}+\Big{(}M+\frac{3}{8}\alpha_{1}\alpha_{2}\theta_{1}^{-2}|v|^{4}\Big{)}_{t}+\nabla\cdot{\cal
K}(v)\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\geq(2-\beta_{1}^{2}|\gamma_{1}|^{2}+2\beta_{1}^{2}|\gamma_{1}|^{2})\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}+4(1-\alpha_{1}^{2}|\gamma_{1}|^{2})\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}\cdot\nabla
v|^{2}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+2(1-\beta_{1}^{2}|\gamma_{1}|^{2})\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}|\nabla
v|^{2}+\frac{1}{2}\alpha_{2}(1+\beta_{1}^{2}|\gamma_{1}|^{2})\lambda^{2}\mu^{2}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\theta_{1}^{-2}|v|^{4}+{\cal
T}\theta_{1}^{-4}|v|^{6}\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\frac{1}{2}(1-\beta_{1}^{2}|\gamma_{1}|^{2}+\beta_{1}\alpha_{1}|\gamma_{1}|^{2})\alpha_{2}\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}-\tilde{\Lambda}_{1}-(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2},\end{array}$
(3.39)
where
$\displaystyle{\cal K}(v)={\cal
H}(v)+2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla
v)+2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}|v|^{2}\nabla\psi_{1}$
$\displaystyle\qquad\quad\;+\frac{1}{2}\beta_{1}\alpha_{2}\theta_{1}^{-2}|v|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla v),$ (3.40) $\displaystyle{\cal
T}=\Big{(}\frac{5}{16}-\frac{1}{2}\beta_{1}^{2}|\gamma_{1}|^{2}\Big{)}\alpha_{2}^{2}+\frac{1}{2}\beta_{1}\alpha_{2}\alpha_{1}\beta_{2}|\gamma_{1}|^{2},$
(3.41)
and
$\begin{array}[]{ll}\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\tilde{\Lambda}_{1}=\Lambda_{1}+C|\gamma_{1}|^{2}\lambda^{2}\mu^{4}\varphi_{1}^{3}|v|^{2}-2\beta_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}{\mathop{\rm
Im}\,}(\gamma_{1}\gamma_{2})\theta_{1}^{-2}|v|^{4}\\\ \vskip 6.0pt plus 2.0pt
minus 2.0pt\cr\displaystyle\qquad\
-2\beta_{1}\lambda\mu^{2}\nabla(\varphi_{1}|\nabla\psi_{1}|^{2})\cdot{\mathop{\rm
Im}\,}(\gamma_{1}\overline{v}\nabla
v)-2\beta_{1}^{2}|\gamma_{1}|^{2}\lambda^{2}\mu^{3}\nabla\cdot(\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\nabla\psi_{1})|v|^{2}\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\qquad\
+\frac{1}{2}\beta_{1}^{2}|\gamma_{1}|^{2}\alpha_{2}(\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}+\lambda\mu\varphi_{1}\Delta\psi_{1})\theta_{1}^{-2}|v|^{4}+\beta_{1}^{2}\lambda^{3}\mu^{3}\varphi_{1}^{3}|\Delta\psi_{1}|^{2}|v|^{2}.\end{array}$
Furthermore, it is easy to check that
$\begin{array}[]{ll}\displaystyle-\beta_{1}^{2}|\gamma_{1}|^{2}+2\beta_{1}^{2}|\gamma_{1}|^{2}>0,\quad
1-\beta_{1}^{2}|\gamma_{1}|^{2}=\frac{1}{1+b^{2}}=-\alpha_{1},\\\ \vskip 6.0pt
plus 2.0pt minus 2.0pt\cr\displaystyle
1-\alpha_{1}^{2}|\gamma_{1}|^{2}=\frac{b^{2}}{1+b^{2}}=-\alpha_{1}b^{2},\quad
1-\beta_{1}^{2}\geq\frac{1}{1+b^{2}}=-\alpha_{1},\\\ \vskip 6.0pt plus 2.0pt
minus 2.0pt\cr\displaystyle
1-\beta_{1}^{2}|\gamma_{1}|^{2}+\beta_{1}\alpha_{1}|\gamma_{1}|^{2}=\frac{1-b}{1+b^{2}}\geq\frac{1-r_{0}}{(1+b^{2})}=-(1-r_{0})\alpha_{1}.\end{array}$
(3.42)
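The identities in (3.42) can be checked directly under the parametrization $\alpha_{1}=-\frac{1}{1+b^{2}}$, $\beta_{1}=\frac{b}{1+b^{2}}$, which we take here as an assumption on the form of (2.2); it is consistent with (3.23) and with the relation $|\gamma_{1}|^{2}=1+b^{2}$. Indeed, then $\alpha_{1}^{2}+\beta_{1}^{2}=\frac{1}{1+b^{2}}$, hence
$\displaystyle\beta_{1}^{2}|\gamma_{1}|^{2}=\frac{b^{2}}{1+b^{2}},\quad\alpha_{1}^{2}|\gamma_{1}|^{2}=\frac{1}{1+b^{2}},\quad\alpha_{1}\beta_{1}|\gamma_{1}|^{2}=-\frac{b}{1+b^{2}},$
from which each line of (3.42) follows at once.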
By (2.2), (2.4), (3.23) and (3.41) and noting that
$|\gamma_{1}|^{2}=1+b^{2}<2$, we get that
$\begin{array}[]{ll}\displaystyle{\cal
T}\displaystyle\geq\Big{(}\frac{5}{16}-\frac{1}{8}|\gamma_{1}|^{2}\Big{)}\alpha_{2}^{2}-\frac{1}{4}\alpha_{2}|\beta_{2}||\gamma_{1}|^{2}\geq\Big{(}\frac{5}{16}-\frac{1}{8}|\gamma_{1}|^{2}\Big{)}\alpha_{2}^{2}-\frac{1}{32}\alpha_{2}^{2}|\gamma_{1}|^{2}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad=\frac{5}{32}\alpha_{2}^{2}(2-|\gamma_{1}|^{2})>0.\end{array}$
By (3.39) and (3.42), we conclude Step 2 with the following inequality:
$\begin{array}[]{ll}\displaystyle(3+2\beta_{1}^{2}|\gamma_{1}|^{2})|\theta_{1}{\cal
G}y|^{2}+\Big{(}M+\frac{3}{8}\alpha_{1}\alpha_{2}\theta_{1}^{-2}|v|^{4}\Big{)}_{t}+\nabla\cdot{\cal
K}(v)\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\geq
2\lambda^{3}\mu^{4}\varphi_{1}^{3}|\nabla\psi_{1}|^{4}|v|^{2}\\!-\\!2\alpha_{1}\lambda\mu^{2}\varphi_{1}|\nabla\psi_{1}|^{2}|\nabla
v|^{2}\\!+\\!\frac{\alpha_{2}}{2}(1\\!+\\!\beta_{1}^{2}|\gamma_{1}|^{2})\lambda^{2}\mu^{2}\varphi_{1}^{2}|\nabla\psi_{1}|^{2}\theta_{1}^{-2}|v|^{4}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad-\frac{1}{8}(1-b^{2})\alpha_{2}^{2}\theta_{1}^{-4}|v|^{6}-\frac{1}{2}\alpha_{1}\alpha_{2}(1-r_{0})\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}-\tilde{\Lambda}_{1}-(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2}.\end{array}$
(3.43)
Step 3. Recalling the choice of $\psi_{1}$ in Lemma 2.1, we know that
$\mathop{\rm min}_{x\in{\Omega\setminus{\omega_{0}}}}|\nabla\psi_{1}|>0.$
For simplicity, we put
$s_{0}\buildrel\triangle\over{=}\mathop{\rm
min}_{x\in{\Omega\setminus{\omega_{0}}}}\\{|\nabla\psi_{1}|^{2},\
|\nabla\psi_{1}|^{4}\\}>0.$
Noting that $\theta_{1}(0,x)=\theta_{1}(T,x)=0$, $\alpha_{1}<0,\ \alpha_{2}>0$
and $\beta_{1}^{2}|\gamma_{1}|^{2}\leq 1$, by integrating (3.43) over $Q$, we
have
$\begin{array}[]{ll}\displaystyle\lambda\mu^{2}s_{0}\int_{0}^{T}\int_{\Omega\setminus{\omega_{0}}}\varphi_{1}\Big{(}2\lambda^{2}\mu^{2}\varphi_{1}^{2}|v|^{2}-2\alpha_{1}|\nabla
v|^{2}+\frac{1}{2}\alpha_{2}\lambda\varphi_{1}\theta_{1}^{-2}|v|^{4}\Big{)}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle-\frac{1}{8}\alpha_{2}\int_{Q}\Big{(}(1-b^{2})\alpha_{2}\theta_{1}^{-4}|v|^{6}+4(1-r_{0})\alpha_{1}\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}\Big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
5\int_{Q}|\theta_{1}{\cal G}y|^{2}dxdt+\int_{Q}\nabla\cdot{\cal
K}(v)dxdt+\int_{Q}\tilde{\Lambda}_{1}dxdt+\int_{Q}(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2}dxdt.\end{array}$
Thus,
$\begin{array}[]{ll}\displaystyle\quad\lambda\mu^{2}s_{0}\int_{Q}\varphi_{1}\big{(}2\lambda^{2}\mu^{2}\varphi_{1}^{2}|v|^{2}-2\alpha_{1}|\nabla
v|^{2}+\frac{1}{2}\alpha_{2}\lambda\varphi_{1}\theta_{1}^{-2}|v|^{4}\big{)}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad-\frac{1}{8}\alpha_{2}\int_{Q}\Big{(}(1-b^{2})\alpha_{2}\theta_{1}^{-4}|v|^{6}+4(1-r_{0})\alpha_{1}\theta_{1}^{-2}|v|^{2}|\nabla
v|^{2}\Big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
5\int_{Q}|\theta_{1}{\cal G}y|^{2}dxdt+\int_{Q}\nabla\cdot{\cal
K}(v)dxdt+\int_{Q}\tilde{\Lambda}_{1}dxdt+\int_{Q}(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\lambda\mu^{2}s_{0}\int_{Q_{\omega_{0}}}\varphi_{1}\big{(}2\lambda^{2}\mu^{2}\varphi_{1}^{2}|v|^{2}-2\alpha_{1}|\nabla
v|^{2}+\frac{1}{2}\alpha_{2}\lambda\varphi_{1}\theta_{1}^{-2}|v|^{4}\big{)}dxdt.\end{array}$
(3.44)
Step 4. We estimate the boundary term $\int_{Q}\nabla\cdot{\cal K}(v)dxdt$ for
the case that $y=0$ on $\Sigma$. Recalling that $v=\theta_{1}y$, we have
$v|_{\Sigma}=0$ and $v_{x_{j}}=\frac{\partial
v}{\partial\nu}\nu_{j}(j=1,\cdots,n)$. Noting that
$\frac{\partial\psi_{1}}{\partial\nu}\leq 0$ on $\Gamma$, by (3.40), we have
$\begin{array}[]{ll}\displaystyle\int_{Q}\nabla\cdot{\cal
K}(v)dxdt\negthinspace\negthinspace\negthinspace&\displaystyle=\int_{Q}\nabla\cdot
V(v)dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr&\displaystyle=2\lambda\mu\int_{\Sigma}\varphi_{1}\frac{\partial\psi_{1}}{\partial\nu}\Big{|}\frac{\partial
v}{\partial\nu}\Big{|}^{2}\big{(}\sum_{j=1}^{n}\nu_{j}^{2}\big{)}^{2}d\Gamma
dt\leq 0.\end{array}$ (3.45)
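Indeed, since $v=v_{t}=0$ on $\Sigma$, all terms of ${\cal K}(v)$ containing an undifferentiated factor $v$ or $\overline{v}$ vanish there, and only the first two terms of $V(v)$ in (3.6) contribute. With $\nabla v=\frac{\partial v}{\partial\nu}\nu$ and $\nabla\ell_{1}\cdot\nu=\lambda\mu\varphi_{1}\frac{\partial\psi_{1}}{\partial\nu}$ on $\Sigma$, we get
$\displaystyle\big{(}4{\mathop{\rm Re}\,}(\nabla\ell_{1}\cdot\nabla\overline{v})\nabla v-2|\nabla v|^{2}\nabla\ell_{1}\big{)}\cdot\nu=(4-2)\lambda\mu\varphi_{1}\frac{\partial\psi_{1}}{\partial\nu}\Big{|}\frac{\partial v}{\partial\nu}\Big{|}^{2}=2\lambda\mu\varphi_{1}\frac{\partial\psi_{1}}{\partial\nu}\Big{|}\frac{\partial v}{\partial\nu}\Big{|}^{2}\leq 0,$
because $\frac{\partial\psi_{1}}{\partial\nu}\leq 0$ on $\Gamma$.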
Noting that $\omega_{0}\subset\subset\omega$, we can choose a cut-off function
$\xi\in C_{0}^{\infty}(\omega;[0,1])$ so that $\xi\equiv 1$ on $\omega_{0}$.
Then,
$\begin{array}[]{ll}\displaystyle\int_{Q}\xi^{2}\varphi_{1}\theta_{1}^{2}|\nabla
y|^{2}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=-{\mathop{\rm
Re}\,}\int_{Q}\xi^{2}\varphi_{1}\theta_{1}^{2}\overline{y}\Delta
ydxdt-{\mathop{\rm
Re}\,}\int_{Q}\nabla(\xi^{2}\varphi_{1}\theta_{1}^{2})\cdot\nabla
y\overline{y}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq\varepsilon\int_{Q}\frac{1}{\lambda^{2}\mu^{2}\varphi_{1}}\theta_{1}^{2}|\Delta
y|^{2}dxdt+\frac{C}{\varepsilon}\lambda^{2}\mu^{2}\int_{Q_{\omega}}\varphi_{1}^{3}\theta_{1}^{2}|y|^{2}dxdt+\frac{1}{2}\int_{Q}\xi^{2}\varphi_{1}\theta_{1}^{2}|\nabla
y|^{2}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+C\lambda^{2}\mu^{2}\int_{Q_{\omega}}\varphi_{1}^{3}\theta_{1}^{2}|y|^{2}dxdt,\end{array}$
(3.46)
where $\varepsilon>0$ is small enough.
Next, noting $\theta_{1}(0,x)=\theta_{1}(T,x)=0$, we have
$\displaystyle-2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}(\alpha_{1}+i\beta_{1})y_{t}\cdot\Delta\overline{y}dxdt$
$\displaystyle=2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\displaystyle(\alpha_{1}+i\beta_{1})y_{t}\nabla\left(\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}\right)\cdot\nabla\overline{y}dxdt+2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\displaystyle\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}(\alpha_{1}+i\beta_{1})\nabla
y_{t}\cdot\nabla\overline{y}dxdt$ $\displaystyle=2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\displaystyle(\alpha_{1}+i\beta_{1})y_{t}\nabla\left(\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}\right)\cdot\nabla\overline{y}dxdt-\alpha_{1}\displaystyle\int_{Q}\displaystyle\left(\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}\right)_{t}|\nabla
y|^{2}dxdt$ (3.47)
$\displaystyle\leq\displaystyle\frac{1}{2}\displaystyle\int_{Q}\theta_{1}^{2}\displaystyle\frac{1}{\lambda\varphi_{1}}|(\alpha_{1}+i\beta_{1})y_{t}|^{2}dxdt+C\displaystyle\int_{Q}\theta_{1}^{2}\lambda\mu^{2}\varphi_{1}|\nabla
y|^{2}dxdt.$
From (3.2), we conclude that
$\begin{array}[]{ll}\displaystyle\int_{Q}\frac{1}{\lambda\varphi_{1}}\theta_{1}^{2}\Big{(}|(\alpha_{1}+i\beta_{1})y_{t}|^{2}+|\Delta
y|^{2}\Big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=\int_{Q}\frac{1}{\lambda\varphi_{1}}\theta_{1}^{2}\Big{|}{\cal
G}y+(\alpha_{2}+i\beta_{2})|y|^{2}y\Big{|}^{2}dxdt-2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\theta_{1}^{2}\frac{1}{\lambda\varphi_{1}}(\alpha_{1}+i\beta_{1})y_{t}\cdot\Delta\overline{y}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq\int_{Q}\frac{1}{\lambda\varphi_{1}}\left(|\theta_{1}{\cal
G}y|^{2}+|\alpha_{2}+i\beta_{2}|^{2}\theta_{1}^{2}|y|^{6}\right)dxdt+C\displaystyle\int_{Q}\theta_{1}^{2}\lambda\mu^{2}\varphi_{1}|\nabla
y|^{2}dxdt.\end{array}$ (3.48)
Combining (3.23) and (3.44)–(3.48), we find that for $\varepsilon>0$
sufficiently small, there is a $\mu_{0}>0$ such that for all $\mu\geq\mu_{0}$,
there exist two constants $\lambda_{0}=\lambda_{0}(\mu)$ and $C>0$, so that
for all $\lambda\geq\lambda_{0}$, (2.5) holds.
Step 5. In this step, borrowing some ideas from [6, 15], we deal with the case
that $\displaystyle\frac{\partial y}{\partial\nu}=0$ on $\Sigma$. Let
$\begin{array}[]{ll}\displaystyle\tilde{\varphi}_{1}(t,x)={e^{-\mu\psi_{1}(x)}\over{t(T-t)}},\quad\tilde{\rho}_{1}(t,x)={{e^{-\mu\psi_{1}(x)}-e^{2\mu|\psi_{1}|_{C(\overline{\Omega};\;{\mathop{\rm
l\negthinspace
R}})}}}\over{t(T-t)}},\quad\tilde{\ell}_{1}=\lambda\tilde{\rho}_{1},\quad\tilde{\theta}_{1}=e^{\tilde{\ell}_{1}}.\end{array}$
Set $\eta=\tilde{\theta}_{1}y$. Then one can obtain an inequality similar to
(3.44), with $\theta_{1},\ \ell_{1},\ \varphi_{1}$ and ${\cal K}(v)$ replaced by
$\tilde{\theta}_{1},\ \tilde{\ell}_{1},\ \tilde{\varphi}_{1}$ and ${\cal
K}(\eta)$, respectively. Noting that $0<\tilde{\theta}_{1}\leq\theta$ and
$0<\tilde{\varphi}_{1}<\varphi$, we have
$\begin{array}[]{ll}\displaystyle\hbox{The left-hand side of
(\ref{1102-0c})}\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
10\int_{Q}|\theta_{1}{\cal G}y|^{2}dxdt+\int_{Q}\nabla\cdot\Big{(}{\cal
K}(v)+{\cal K}(\eta)\Big{)}dxdt+2\int_{Q}\tilde{\Lambda}_{1}dxdt\\\ \vskip
6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\int_{Q}(\lambda\mu\tilde{\varphi_{1}})^{-1}|\eta_{t}|^{2}dxdt+\int_{Q}(\lambda\mu\varphi_{1})^{-1}|v_{t}|^{2}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad+2\lambda\mu^{2}s_{0}\int_{Q_{\omega_{0}}}\varphi_{1}\Big{(}2\lambda^{2}\mu^{2}\varphi_{1}^{2}|v|^{2}-2\alpha_{1}|\nabla
v|^{2}+\frac{1}{2}\alpha_{2}\lambda\varphi_{1}\theta_{1}^{-2}|v|^{4}\Big{)}dxdt.\end{array}$
Noting that $\psi_{1}=0$ on the boundary $\Sigma$, and that $v=\theta_{1}y$
and $\eta=\tilde{\theta}_{1}y$, we see that
$\left\\{\begin{array}[]{ll}\displaystyle\theta_{1}=\tilde{\theta}_{1},\quad\ell_{1}=\tilde{\ell}_{1},\quad\varphi_{1}=\tilde{\varphi}_{1},\quad
v=\eta&\hbox{ on }\Sigma,\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\ell_{1t}=\tilde{\ell}_{1t},\quad
v_{t}=\eta_{t},\quad\nabla\ell_{1}=-\nabla\tilde{\ell}_{1},\quad\nabla\psi_{1}=-\nabla\tilde{\psi}_{1}&\hbox{
on }\Sigma.\end{array}\right.$ (3.49)
By (3.49), noting that $\displaystyle\frac{\partial y}{\partial\nu}=0$ on
$\Sigma$, $v=\theta_{1}y$ and $\eta=\tilde{\theta}_{1}y$, it is easy to see
that
$\begin{array}[]{ll}\displaystyle\nabla
v\cdot\nu+\nabla\eta\cdot\nu=\theta_{1}(\nabla
y+\nabla\ell_{1}y)\cdot\nu+\tilde{\theta}_{1}(\nabla
y+\nabla\tilde{\ell}_{1}y)\cdot\nu=0&\hbox{ on }\Sigma,\\\ \vskip 6.0pt plus
2.0pt minus 2.0pt\cr\displaystyle\big{(}|\nabla
v|^{2}\nabla\ell_{1}+|\nabla\eta|^{2}\nabla\tilde{\ell}_{1}\big{)}\cdot\nu=2\theta_{1}^{2}|\nabla\ell_{1}|^{2}{\mathop{\rm
Re}\,}(\nabla y\overline{y})\cdot\nu=0&\hbox{ on }\Sigma,\end{array}$ (3.50)
and that
$\begin{array}[]{ll}\displaystyle{\mathop{\rm
Re}\,}\big{[}(\nabla\ell_{1}\cdot\nabla\overline{v})\nabla
v+(\nabla\tilde{\ell}_{1}\cdot\nabla\overline{\eta})\nabla\eta\big{]}\cdot\nu\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=2\theta_{1}^{2}{\mathop{\rm
Re}\,}\big{[}(\nabla\ell_{1}\cdot\nabla\overline{y})\nabla\ell_{1}\cdot\nu+|\nabla\ell_{1}|^{2}(\nabla\overline{y}\cdot\nu)\big{]}y=0\quad\hbox{on
}\Sigma.\end{array}$ (3.51)
Recalling (3.6), (3.7) and (3.40) for the definitions of $V(v)$, ${\cal H}$
and ${\cal K}$, by (3.49)–(3.51), it is easy to see that
$\int_{Q}\nabla\cdot\Big{(}{\cal K}(v)+{\cal K}(\eta)\Big{)}dxdt=0.$
Further, recalling (3.29) for $\tilde{\Lambda}_{1}$, by (3.23), and noting
that $v=\theta_{1}y$, we conclude that there is a $\mu_{0}>0$ such that for
all $\mu\geq\mu_{0}$, there exist two constants $\lambda_{0}=\lambda_{0}(\mu)$
and $C>0$, so that for all $\lambda\geq\lambda_{0}$, it holds that
$\begin{array}[]{ll}\displaystyle\quad\lambda\mu^{2}\int_{Q}\tilde{\theta}_{1}^{2}\tilde{\varphi_{1}}\big{(}\lambda^{2}\mu^{2}\tilde{\varphi_{1}}^{2}|y|^{2}+|\nabla
y|^{2}+\alpha_{2}\lambda\tilde{\varphi_{1}}|y|^{4}\big{)}dxdt\\\ \vskip 6.0pt
plus 2.0pt minus
2.0pt\cr\displaystyle\quad+\int_{Q}\big{(}\tilde{\theta}_{1}^{2}|y|^{6}+\tilde{\theta}_{1}^{2}|y|^{2}|\nabla
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\Big{[}\int_{Q}|\theta_{1}{\cal
G}y|^{2}dxdt+\int_{Q}\frac{1}{\lambda\mu\varphi_{1}}|y_{t}|^{2}dxdt+\int_{Q}\frac{1}{\lambda\mu\tilde{\varphi_{1}}}|y_{t}|^{2}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\quad\quad+\lambda\mu^{2}\int_{Q_{\omega_{0}}}\varphi_{1}\theta_{1}^{2}\big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|y|^{2}+|\nabla
y|^{2}+\alpha_{2}\lambda\varphi_{1}|y|^{4}\big{)}dxdt\Big{]}.\end{array}$
(3.52)
On the other hand, we can obtain a result similar to (3.48), with
$\varphi_{1}$ replaced by $\tilde{\varphi}_{1}$, as follows:
$\begin{array}[]{ll}\displaystyle\int_{Q}\frac{1}{\lambda\tilde{\varphi_{1}}}\tilde{\theta_{1}}^{2}\big{(}|(\alpha_{1}+i\beta_{1})y_{t}|^{2}+|\Delta
y|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle=\int_{Q}\frac{1}{\lambda\tilde{\varphi_{1}}}\tilde{\theta_{1}}^{2}\big{|}{\cal
G}y+(\alpha_{2}+i\beta_{2})|y|^{2}y\big{|}^{2}dxdt-2{\mathop{\rm
Re}\,}\displaystyle\int_{Q}\tilde{\theta_{1}}^{2}\frac{1}{\lambda\tilde{\varphi_{1}}}(\alpha_{1}+i\beta_{1})y_{t}\cdot\Delta\overline{y}dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq\int_{Q}\frac{1}{\lambda\tilde{\varphi_{1}}}\big{[}|\tilde{\theta_{1}}{\cal
G}y|^{2}+|\alpha_{2}+\beta_{2}|^{2}\tilde{\theta_{1}}^{2}|y|^{6}\big{]}dxdt+C\displaystyle\int_{Q}\tilde{\theta_{1}}^{2}\lambda\mu^{2}\tilde{\varphi_{1}}|\nabla
y|^{2}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\int_{Q}\frac{1}{\lambda}\big{[}|\theta_{1}{\cal
G}y|^{2}+|\alpha_{2}+\beta_{2}|^{2}\theta_{1}^{2}|y|^{6}\big{]}dxdt+C\int_{Q}\theta_{1}^{2}\lambda\mu^{2}\varphi_{1}|\nabla
y|^{2}dxdt.\end{array}$ (3.53)
Finally, combining (3.23), (3.46), (3.48), (3.52) and (3.53), for
$\varepsilon>0$ sufficiently small, we can get the desired result immediately.
Proof of Theorem 2.2. For $\theta_{2}$ given by (2.1) with $\psi_{2}$
satisfying Lemma 2.2, noting that $\psi_{2}=0$ on $\Gamma\setminus\Gamma_{0}$,
and by Lemma 3.1, proceeding by almost the same argument as in the proof of
Theorem 2.1, one can get the desired result.
## 4 Conditional stability
In this section, we study the state observation for the following Ginzburg-
Landau equation
$\left\\{\begin{array}[]{ll}\displaystyle y_{t}-(1+ib)\Delta
y+(1+ic)|y|^{2}y=0&\mbox{ in }Q,\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\frac{\partial y}{\partial\nu}=h\mbox{ or }y=g&\mbox{ on
}\Sigma.\end{array}\right.$ (4.1)
Let $\varepsilon\in(0,\frac{T}{2})$ and
$Q_{\varepsilon}\triangleq[\varepsilon,T-\varepsilon]\times\Omega.$ (4.2)
###### Theorem 4.1
Assume that (2.2) and (2.4) are fulfilled. There exists a constant $C>0$ such
that for any solutions $u_{1},u_{2}\in C([0,T];L^{2}(\Omega))\cap
L^{2}(0,T;H^{1}(\Omega))$ of (4.1) with $u_{2}\in
L^{\infty}([0,T];L^{6}(\Omega))$, it holds that
$\int_{Q_{\varepsilon}}\big{(}|u_{1}-u_{2}|^{2}+|\nabla u_{1}-\nabla
u_{2}|^{2}\big{)}dxdt\leq{\cal
C}(u_{2})\int_{Q_{\omega}}(|u_{1}-u_{2}|^{2}+|u_{1}-u_{2}|^{4})dxdt,$ (4.3)
where
${\cal C}(u_{2})=C\|u_{2}\|_{L^{\infty}([0,T];L^{6}(\Omega))}^{8}.$
###### Remark 4.1
As we explained in the introduction, Theorem 4.1 provides a conditional
stability result, that is, if we know a priori that $u_{2}\in
L^{\infty}([0,T];L^{6}(\Omega))$, then we get an estimate for $|u_{1}-u_{2}|$
on $Q_{\varepsilon}$ by $|u_{1}-u_{2}|$ on $Q_{\omega}$. This is reasonable
since we deal with an equation with a cubic term. Further, the condition that
$u_{2}\in L^{\infty}([0,T];L^{6}(\Omega))$ is not very restrictive: by the
well-posedness theory for (4.1), it holds whenever the initial datum of (4.1)
corresponding to $u_{2}$ belongs to $H_{0}^{1}(\Omega)$ (e.g., [8, Chapter 3]).
###### Remark 4.2
From the proof of Theorem 4.1, we see that the constant
$C\|u_{2}\|_{L^{\infty}([0,T];L^{6}(\Omega))}^{8}$ can be replaced by
$C\|u_{1}\|_{L^{\infty}([0,T];L^{6}(\Omega))}^{8}$, that is, we can assume an
a priori bound of $u_{1}$ to get the conditional stability.
###### Remark 4.3
The constant $\varepsilon$ can be chosen to be any number in $(0,\frac{T}{2})$. Therefore,
we know the answer to the Identification Problem is positive. On the other
hand, from the proof of Theorem 4.1, we see that the constant ${\cal
C}(u_{2})$ in (4.3) tends to infinity as $\varepsilon$ tends to $0$. This is
reasonable since (4.1) is time irreversible.
###### Remark 4.4
If we know a priori that the $L^{\infty}([0,T];L^{6}(\Omega))$-norm of the
state of the system (4.1) is smaller than a positive constant $M$, then we see
that the conditional stability result (4.3) can be presented as follows:
$\int_{Q_{\varepsilon}}\big{(}|u_{1}-u_{2}|^{2}+|\nabla u_{1}-\nabla
u_{2}|^{2}\big{)}dxdt\leq
CM^{8}\int_{Q_{\omega}}(|u_{1}-u_{2}|^{2}+|u_{1}-u_{2}|^{4})dxdt.$
Proof of Theorem 4.1. Let $u_{1},u_{2}\in C([0,T];L^{2}(\Omega))\cap
L^{2}(0,T;H^{1}(\Omega))$, $u_{2}\in L^{\infty}([0,T];L^{6}(\Omega))$ and
$u_{1}$, $u_{2}$ be the solutions of (4.1). Let
$z\buildrel\triangle\over{=}u_{1}-u_{2}$. Then we have
$\left\\{\begin{array}[]{ll}\displaystyle\partial_{t}z-(1+ib)\Delta
z+(1+ic)|z|^{2}z=3(1+ic)(|z|^{2}u_{2}+z|u_{2}|^{2})&\mbox{ in $Q$ },\\\ \vskip
6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\frac{\partial
z}{\partial\nu}=0\mbox{ or }z=0&\mbox{ on $\Sigma$ }.\end{array}\right.$ (4.4)
Applying Theorem 2.1 to (4.4), we obtain that
$\begin{array}[]{ll}\displaystyle\lambda\mu^{2}\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}\varphi_{1}\Big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|z|^{2}+|\nabla
z|^{2}+\lambda\varphi_{1}|z|^{4}\Big{)}dxdt+\int_{0}^{T}\int_{\Omega}\Big{(}\theta_{1}^{2}|z|^{6}+\theta_{1}^{2}|z|^{2}|\nabla
z|^{2}\Big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\left[\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}(|z|^{2}u_{2}+z|u_{2}|^{2})^{2}dxdt+\lambda^{2}\mu^{2}\int_{0}^{T}\int_{\omega}\varphi_{1}^{2}\theta_{1}^{2}(\lambda\mu^{2}\varphi_{1}|z|^{2}+|z|^{4})dxdt\right].\end{array}$
(4.5)
It follows from Hölder’s inequality that
$\begin{array}[]{ll}\displaystyle\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}(|z|^{2}u_{2}+z|u_{2}|^{2})^{2}dxdt\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
2\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}(|z|^{4}|u_{2}|^{2}+|z|^{2}|u_{2}|^{4})dxdt\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq\epsilon_{0}\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}|z|^{6}dxdt+C_{\epsilon_{0}}\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}|z|^{2}|u_{2}|^{4}dxdt,\end{array}$
(4.6)
where $\epsilon_{0}$ is a sufficiently small positive constant, chosen so that
the first term on the right-hand side can be absorbed by the left-hand side of (4.5).
By Hölder’s inequality again, we have
$\begin{array}[]{ll}\displaystyle{\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}|z|^{2}|u_{2}|^{4}dxdt}=\displaystyle{\int_{0}^{T}\int_{\Omega}|\theta_{1}z|^{2}|u_{2}|^{4}dxdt}\\\
\vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq\int_{0}^{T}\left[\left(\int_{\Omega}|\theta_{1}z|^{6}dx\right)^{\frac{1}{3}}\left(\int_{\Omega}|u_{2}|^{6}dx\right)^{\frac{2}{3}}\right]dt\\\
\vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\|u_{2}\|^{4}_{L^{\infty}([0,T];L^{6}(\Omega))}\int_{0}^{T}\parallel\theta_{1}z\parallel^{2}_{H^{1}(\Omega)}dt.\end{array}$
(4.7)
Choosing $\lambda>C\|u_{2}\|^{4}_{L^{\infty}([0,T];L^{6}(\Omega))}$, and
gathering together (4.5)–(4.7), we obtain that
$\begin{array}[]{ll}\displaystyle\lambda\mu^{2}\int_{0}^{T}\int_{\Omega}\theta_{1}^{2}\varphi_{1}\big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|z|^{2}+|\nabla
z|^{2}+\lambda\varphi_{1}|z|^{4}\big{)}dxdt+\int_{0}^{T}\int_{\Omega}\big{(}\theta_{1}^{2}|z|^{6}+\theta_{1}^{2}|z|^{2}|\nabla
z|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\lambda^{2}\mu^{2}\int_{0}^{T}\int_{\omega}\varphi_{1}^{2}\theta_{1}^{2}\big{(}\lambda\mu^{2}\varphi_{1}|z|^{2}+|z|^{4}\big{)}dxdt.\end{array}$
On the other hand, noting that
$0<\theta_{1}(\varepsilon,x)\leq\theta_{1}(t,x)\leq\theta_{1}\Big{(}\frac{T}{2},x\Big{)},\quad\quad\forall(x,t)\in
Q_{\varepsilon},$ (4.8)
we deduce that
$\begin{array}[]{ll}\displaystyle\int_{\varepsilon}^{T-\varepsilon}\int_{\Omega}\theta_{1}^{2}\big{(}|z|^{2}+|\nabla
z|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle\leq C\lambda\mu^{2}\int_{0}^{T}\int_{\Omega}\varphi_{1}\theta_{1}^{2}\big{(}\lambda^{2}\mu^{2}\varphi_{1}^{2}|z|^{2}+|\nabla
z|^{2}\big{)}dxdt\\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\leq
C\lambda^{2}\mu^{2}\int_{0}^{T}\int_{\omega}\varphi_{1}^{2}\theta_{1}^{2}\big{(}\lambda\mu^{2}\varphi_{1}|z|^{2}+|z|^{4}\big{)}dxdt.\end{array}$
This, together with (4.8), implies (4.3).
###### Remark 4.5
Similar to the proof of Theorem 4.1, one can give the same analysis to obtain
the conditional stability under boundary observations, which is stated as
follows.
Let $u_{1},u_{2}\in C([0,T];L^{2}(\Omega))\cap L^{2}(0,T;H^{1}(\Omega))$ be
solutions of the following equation
$\left\\{\begin{array}[]{ll}\displaystyle y_{t}-(1+ib)\Delta
y+(1+ic)|y|^{2}y=0&\mbox{ in }Q,\\\ \vskip 6.0pt plus 2.0pt minus
2.0pt\cr\displaystyle y=g&\mbox{ on }\Sigma,\end{array}\right.$ (4.9)
with $u_{2}\in L^{\infty}([0,T];L^{6}(\Omega))$. Then it holds that
$\begin{array}[]{ll}\displaystyle{\int_{Q_{\varepsilon}}\big{(}|u_{1}-u_{2}|^{2}+|\nabla
u_{1}-\nabla u_{2}|^{2}\big{)}dxdt}\leq
C\displaystyle{\int_{\Sigma_{0}}\Big{|}\frac{\partial
u_{1}}{\partial\nu}-\frac{\partial u_{2}}{\partial\nu}\Big{|}^{2}d\Gamma
dt}.\end{array}$
## Acknowledgement
This work is partially supported by the NSF of China (No. 11971333, 11931011,
12071061, 11971093), the Science Fund for Distinguished Young Scholars of
Sichuan Province (No. 2022JDJQ0035), the Applied Fundamental Research Program
of Sichuan Province (No. 2020YJ0264), the Science Development Project of
Sichuan University (No. 2020SCUNL201), and the Fundamental Research Funds for
the Central Universities of UESTC (No. ZYGX2019 J094).
## References
* [1] I. Aranson and L. Kramer. The World of the complex Ginzburg-Landau equation. Rev. Mod. Phys., 74(1)(2002), 99–143.
* [2] M. Bellassoued and M. Yamamoto, Carleman estimates and applications to inverse problems for hyperbolic systems. Springer, Tokyo, 2017.
* [3] T. Duyckaerts, X. Zhang and E. Zuazua, On the optimality of the observability inequalities for parabolic and hyperbolic systems with potentials, Ann. Inst. H. Poincaré Anal. Non Linéaire, 25 (2008), 1–41.
* [4] X. Fu, Null controllability for the parabolic equation with a complex principal part. J. Funct. Anal. 257 (2009), 1333–1354.
* [5] X. Fu, Q. Lü and X. Zhang, Carleman estimates for second order partial differential operators and applications. A unified approach. Springer, 2019.
* [6] A. V. Fursikov and O. Yu. Imanuvilov, Controllability of Evolution Equations, Lecture Notes Series 34, Research Institute of Mathematics, Seoul National University, Seoul, Korea, 1996.
* [7] V. L. Ginzburg and L. D. Landau, On the theory of superconductivity. Zh. Eksp. Teor. Fiz. 20 (1950), 1064–1082.
* [8] B. Guo, M. Jiang and Y. Li, Ginzburg-Landau Equations. Science Press Beijing, 2020.
* [9] L. Hörmander, The analysis of linear partial differential operators. IV. Fourier integral operators. Springer-Verlag, Berlin, 2009.
* [10] O. Yu. Imanuvilov and M. Yamamoto, Lipschitz stability in inverse parabolic problems by the Carleman estimate. Inverse Problems 14 (1998), 1229–1245.
* [11] M. V. Klibanov, Global uniqueness of a multidimensional inverse problem for a nonlinear parabolic equation, Inverse Probl. 22(2004), 495–514.
* [12] M. V. Klibanov and J. Li, Inverse problems and Carleman estimates, global uniqueness, global convergence and experimental data. De Gruyter, Berlin, 2021.
* [13] N. Lerner, Carleman inequalities. An introduction and more. Springer, Cham, 2019.
* [14] J. Le Rousseau, G. Lebeau and L. Robbiano, Elliptic Carleman estimates and applications to stabilization and controllability. Vol. I. Dirichlet boundary conditions on Euclidean space. Birkhäuser/Springer, Cham, 2022.
* [15] Q. Lü, A lower bound on local energy of partial sum of eigenfunctions for Laplace-Beltrami operators. ESAIM Control Optim. Calc. Var. 19 (2013), 255–273.
* [16] B. Rosenstein and D. Li, Ginzburg-Landau theory of type II superconductors in magnetic field, Rev. Mod. Phys. 82(2010), 109–168.
* [17] M. Yamamoto, Carleman estimates for parabolic equations and applications, Inverse Problems, 123013 (2009).
# An Extreme-Adaptive Time Series Prediction Model Based on Probability-
Enhanced LSTM Neural Networks
Yanhong Li, Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA, USA
Jack Xu, Santa Clara Valley Water District, San Jose, CA, USA
David C. Anastasiu (corresponding author; E-mail<EMAIL_ADDRESS>Department of Computer Science and Engineering, Santa Clara University, Santa Clara, CA, USA
###### Abstract
Forecasting time series with extreme events has been a challenging and
prevalent research topic, especially when the time series data are affected by
complicated uncertain factors, such as is the case in hydrologic prediction.
Diverse traditional and deep learning models have been applied to discover the
nonlinear relationships and recognize the complex patterns in these types of
data. However, existing methods usually ignore the negative influence of
imbalanced data, or severe events, on model training. Moreover, methods are
usually evaluated on a small number of generally well-behaved time series,
which does not show their ability to generalize. To tackle these issues, we
propose a novel probability-enhanced neural network model, called NEC+, which
concurrently learns extreme and normal prediction functions and a way to
choose among them via selective back propagation. We evaluate the proposed
model on the difficult 3-day ahead hourly water level prediction task applied
to 9 reservoirs in California. Experimental results demonstrate that the
proposed model significantly outperforms state-of-the-art baselines and
exhibits superior generalization ability on data with diverse distributions.
###### keywords:
multivariate time series analysis | hydrologic prediction | reservoir water level prediction | extreme value theory | deep learning models
## Introduction
Time series forecasting is an important technique for many domains in which
most types of data are stored as time sequences, including traffic (1),
weather forecasting (2), biology (3), stock price forecasting (4), and water
resource management (5). These data usually contain seasonality, long term
trends, and non-stationary characteristics which usually are taken into
account by traditional models during prediction. However, in hydrologic
prediction, the water level of dams and reservoirs are also affected by
complicated uncertain factors like weather, geography, and human activities,
which makes the task of precisely predicting them challenging. Most reservoirs
are large hydraulic constructions that serve multiple purposes, including
power generation, flood control, irrigation, and navigation, making them
critical components in the safety and quality of life of the general
population. Therefore, a large number of studies and architectures have
explored the problem of reservoir water level prediction.
For a long time, water level prediction was mainly based on traditional
machine learning and statistics-based models. However, methods such as
Autoregressive Integrated Moving Average (ARIMA) (6) seem to adjust poorly to
extreme changes in the water level values and cannot easily find the nonlinear
relationships among the data. Recently, deep neural networks (DNNs) have shown
their great advantages in various areas (7, 8, 9). Both conventional Neural
Network (NN) and Recurrent Neural Network (RNN) models have been used to
overcome the disadvantages of traditional methods for time series forecasting
(10), since they can map time series data into latent representations by
capturing the non-linear relationships of data in sequences. In particular,
Long Short-Term Memory (LSTM) models generally outperform other models in
long-term predictions. However, imbalanced data or severe events might hurt
deep learning models when it comes to long-term predictions. In the context of
reservoir water level forecasting, most of the works mentioned above falter
when predicting extreme events. Furthermore, they usually focus on predicting
only one or two sensors, putting in question the generalizability of these
models. To solve these two challenges, we provide an extreme-adaptive solution
for reservoir water level prediction which we evaluate extensively on data
from 9 different reservoirs across more than 30 years.
The fundamental contribution of this research is the proposal of NEC+, a
probability-enhanced neural network framework. We use an unsupervised
clustering approach in NEC+ to dynamically produce distribution indicators,
which improves the model’s robustness to the occurrence of severe events. To
improve training performance, we present a selected backpropagation approach
and a two-level sampling algorithm to accommodate imbalanced extreme data,
along with a customizable weighted loss function for implementing a binary
classifier, which is a crucial component of NEC+.
## Related work
Time series prediction has been studied extensively. Traditionally, there were
several techniques used to effectively forecast future values in the time
series, including univariate Autoregressive (AR), Moving Average (MA), Simple
Exponential Smoothing (SES), Extreme Learning Machine (ELM) (11), and more
notably ARIMA (6) and its many variations. In particular, the ARIMA model has
demonstrated it can outperform even deep learning models in predicting future
stock (12) and dam reservoir inflow levels (13). Prophet (14) is an additive
model that fits nonlinear trends with seasonal and holiday impacts at the
annual, weekly, and daily levels. A number of other classical machine learning
models have been used for the task of water level prediction. Due to lack of
space, we detail them in the additional related work section in the appendix.
With the recent success of deep neural network (DNN) models (15, 16, 17),
hybrid models incorporating different prediction methodologies were also used
for water level prediction. Zhang et al. (5) designed CNNLSTM, a deep learning
hybrid model based on the Convolutional Neural Network (CNN) and LSTM models,
to predict downstream water levels. Du and Liang (18) created an ensemble LSTM
and Prophet model which was shown to outperform any of the single models used
in the ensemble. Le et al. (19) added an attention mechanism (20, 21) to an
encoder-decoder architecture to solve the hydro prediction problem. Even in
more traditional time series prediction tasks, Siami-Namini et al. (22) report
that deep learning-based algorithms such as LSTM outperform traditional
algorithms such as ARIMA given sufficient input data. Ibañez et al. (23)
examined two versions of the LSTM based DNN model exactly for the reservoir
water level forecasting problem, a univariate encoder-decoder model (DNN-U)
and a multivariate version (DNN-M). Both models used trigonometric time series
encoding.
Statistical methods also provide promising solutions when they are combined
with DNN models, especially in the field of sales forecasting. DeepAR (24)
approximates the conditional distribution using a neural network. Deep State
Space Models (DeepState) (25) is a probabilistic forecasting model that fuses
state space models and deep neural networks. By choosing the appropriate
probability distribution, the bias in the objective function becomes further
reduced and the prediction accuracy can be improved. Tyralis and
Papacharalampous showed that the architecture can be simply calibrated using
the quantile (26) or the expectile (27) loss functions for delivering quantile
or expectile hydrologic predictions and forecasts. N-BEATS (28) builds a pure
deep learning solution which outperforms well-established statistical
approaches in more general time series problems. The N-BEATS interpretable
architecture is composed of 2 stacks, namely a trend model and a seasonality
model.
While many recent water level prediction methods showed they can outperform
traditional or simple DNN models, none of them consider the imbalance of
extreme vs. normal events in the time series and hence ignore the negative
influence of extreme values on model training. Generally, these extreme values
could be deemed as outliers and be recognized and even removed during data
preprocessing. In our problem, however, accurate prediction of extreme events
is generally even more important than the prediction of normal ones. We
therefore focus on achieving the best overall prediction performance, without
sacrificing the quality of either normal or extreme predictions.
## Preliminaries
### 0.1 Problem Statement
We take on a challenging univariate time series forecasting problem,
considering that the data contain a majority of normal values that
significantly contribute to the overall prediction performance, along with a
minority of extreme values that must be precisely forecasted to avoid
disastrous events.
The problem can be described as
$[x_{1},x_{2},\ldots,x_{T}]\in\mathbb{R}^{T}\rightarrow[x_{T+1},\ldots,x_{T+H}]\in\mathbb{R}^{H},$
which means predicting the vector of length-$H$ horizon future values, given a
length-$T$ observed series history, where $x_{1}$ to $x_{T}$ are inputs and
$x_{T+1}$ to $x_{T+H}$ are the outputs. Root mean square error (RMSE) and mean
absolute percentage error (MAPE), as standard scale-free metrics, are used to
evaluate forecasting performance.
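As a concrete reference, the two metrics can be computed as follows. This is a minimal sketch with hypothetical arrays, not sensor data; MAPE is reported in percent, which is one common convention.

```python
# Standard definitions of RMSE and MAPE on numpy arrays.
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (assumes y_true has no zeros)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

y_true = np.array([10.0, 20.0, 40.0])
y_pred = np.array([12.0, 18.0, 40.0])
print(rmse(y_true, y_pred))  # sqrt((4 + 4 + 0) / 3) ≈ 1.633
print(mape(y_true, y_pred))  # (0.2 + 0.1 + 0.0) / 3 * 100 = 10.0
```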
For our experiments, we obtained approximately 31 years of hourly reservoir
water level sensor data, along with rain gauge data from a number of sensors
in the same area. The Santa Clara reservoirs were built for water conservation
in the 1930s and 1950s in order to catch storm runoff that would otherwise
drain into the San Francisco Bay. The reservoirs also provide flood protection
by controlling runoff early in the rainy season, recreational opportunities,
and they aid the ecology by storing water to keep rivers flowing.
Our models predict 72 future hourly reservoir water level values, i.e., 3
days ahead. Table 3 in the appendix shows the location and type of sensors
used in this study. In the remainder of the paper, we will refer to the
sensors and their associated time series by their given sensor ID in the
table.
The forecast of reservoir water levels is critical for the management of a
water resources system. Local and state governments need accurate forecasts in
order to allocate water resources for competing uses, such as agriculture,
domestic household usage, hydropower generation, and environmental objectives.
Accurate prediction of extreme events is also essential to prevent reservoir
overfill, which could result in catastrophic flooding. Reservoir level
prediction is especially important in California, which has had to contend
with severe drought for many years in recent decades.
Figure 1: Water level values for 5 reservoirs across 20 years.
### 0.2 Extreme Events
Extreme Value Theory (EVT) tries to explain the stochastic behavior of extreme
events found in the tails of probability distributions, which often follow a
very different distribution than “normal” values. Towards that end, the
Generalized Extreme Value (GEV) distribution is a continuous probability
distribution that generalizes extreme values that follow the Gumbel (Type I),
Fréchet (Type II), or Weibull (Type III) distributions. Its cumulative
distribution function (CDF) is described as
$F(x;\mu,\sigma,\xi)=\exp\left\\{-\left[1+\xi\left(\frac{x-\mu}{\sigma}\right)\right]^{-1/\xi}\right\\},$
(1)
where $\mu\in\mathbb{R}$, $\sigma>0$, and $\xi$ are the location, scale, and
shape parameters, respectively, conditioned on $1+\xi(x-\mu)/\sigma>0$.
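The CDF above can be transcribed directly; the sketch below handles only the case $\xi\neq 0$ (the Gumbel limit $\xi\to 0$ is not covered) and only points inside the stated support.

```python
# Direct transcription of Eq. (1): F(x; mu, sigma, xi) = exp{-[1 + xi(x-mu)/sigma]^(-1/xi)}.
import numpy as np

def gev_cdf(x, mu, sigma, xi):
    """CDF of the GEV distribution with location mu, scale sigma, shape xi != 0."""
    z = 1.0 + xi * (x - mu) / sigma
    if np.any(z <= 0):
        raise ValueError("x outside the support of the GEV distribution")
    return np.exp(-z ** (-1.0 / xi))

# At x = mu, z = 1 and the CDF equals exp(-1) for every xi != 0.
print(gev_cdf(0.0, mu=0.0, sigma=1.0, xi=0.5))  # ≈ 0.3679
```

Note that library implementations may use a different sign convention for the shape parameter, so a hand-rolled version like this one keeps the notation aligned with Eq. (1).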
Figure 1 shows water levels for five of our 9 sensors across a period of 20
years. In order to understand whether extreme events were present in these
data, we fit GEV and Gaussian probability density functions (pdf) to the water
level values and found that the GEV distribution provides a better fit. In
particular, the RMSE of the Gaussian distribution fit is 26.9%, 46.0%, and
37.2% higher than that of the GEV distribution fit for the 4001, 4003, and
4009 reservoirs, respectively, which we also show graphically in the appendix.
However, our data has distinct seasonality (rain in winter will increase water
levels) and trends (reservoirs slowly deplete over the year). A time series
with trends, or with seasonality, is not stationary and will generally lead to
inferior predictions. Therefore, we follow a standard time series analysis
preprocessing approach and obtain a stationary time series by applying first-
order differencing and then standardizing the resulting time series values,
$x_{t}^{\prime}=x_{t}-x_{t-1}$, and
$x^{\prime}=\frac{x^{\prime}-\mu}{\sigma}$, where $\mu$ and $\sigma$ here are
the mean (location parameter) and standard deviation (scale parameter) of the
Gaussian distribution of the time series $x^{\prime}$. After obtaining
predictions for a time series, we use the same location and scale parameters
to inverse the standardization, and the last ground truth value in the time
series to inverse the first-order differencing, obtaining values in the same
range as the original time series. When the mean of the distribution is 0
($\mu=0$) and its standard deviation is 1 ($\sigma=1$), as is the case in our
standardized time series, 68% of the values lie within 1 standard deviation,
95% within 2 standard deviations, and 99.7% within 3 standard deviations from
the mean. Yet our time series show values that are up to 100 standard
deviations away from the mean in both directions. In our work, we define
normal values as those within $\epsilon\times\sigma$ of the mean of the
preprocessed time series, in both directions, where $\epsilon$ is a meta-
parameter we tune for each time series, i.e.,
$x^{\prime}_{n}\in[-\epsilon,\epsilon]$, since $\sigma=1$. The remaining
values in the time series are then labeled as extreme values.
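The preprocessing and labeling steps above can be sketched as follows. The input series and the default threshold are hypothetical; in the paper, $\epsilon$ is tuned per time series.

```python
# First-order differencing, standardization, epsilon-based extreme labeling,
# and the inverse transform back to the original scale.
import numpy as np

def preprocess(x, epsilon=3.0):
    """Difference, standardize, and flag values outside [-epsilon, epsilon]."""
    d = np.diff(x)                    # x'_t = x_t - x_{t-1}
    mu, sigma = d.mean(), d.std()     # location and scale of the differenced series
    z = (d - mu) / sigma              # standardized series: mean 0, std 1
    is_extreme = np.abs(z) > epsilon  # remaining values are labeled extreme
    return z, is_extreme, mu, sigma

def invert(z, mu, sigma, last_value):
    """Undo standardization and differencing using the stored parameters."""
    d = z * sigma + mu
    return last_value + np.cumsum(d)

x = np.array([5.0, 5.1, 5.0, 9.0, 5.2])
z, is_extreme, mu, sigma = preprocess(x)
print(np.allclose(invert(z, mu, sigma, x[0]), x[1:]))  # True: fully invertible
```

Keeping `mu`, `sigma`, and the last ground-truth value around is what makes the round trip back to the original range possible after prediction.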
## Methods
Figure 2: The NEC+ framework; $\bigoplus$ denotes the element-wise addition
and $\bigotimes$ denotes the element-wise product.
### 0.3 NEC
We designed our NEC framework to account for the distribution shift between
normal and extreme values in the time series. NEC is composed of three
separate models, which can be trained in parallel. The Normal (N) model is
trained to best fit normal values in the time series, the Extreme (E) model is
trained to best fit extreme time series values, and a third Classifier (C)
model is trained to detect when a certain value may be categorized as normal
or extreme. The framework is flexible and may use any prediction models for
the 3 components, yet in this work, given the evidence presented in the
related work section, we focus on deep learning models that use a fixed set of
$h$ consecutive past values as input to predict the next $f$ values in the
time series. At prediction time, the C model is used to decide, for each of
the following $f$ time points, whether the value will be normal or extreme,
and the appropriate regression model is then applied to obtain the prediction
for those points.
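The prediction-time dispatch can be sketched as below. The function name, the threshold, and the stand-in arrays are ours for illustration; the real N, E, and C components are the LSTM networks described next.

```python
# For each of the f future points, the C model decides normal vs. extreme and
# the matching regressor's output is kept.
import numpy as np

def nec_combine(pred_normal, pred_extreme, clf_prob, threshold=0.5):
    """Pick the E-model output where the classifier flags an extreme point."""
    return np.where(clf_prob > threshold, pred_extreme, pred_normal)

f = 72                           # 3-day horizon at hourly resolution
pred_n = np.zeros(f)             # stand-in N-model forecast
pred_e = np.ones(f)              # stand-in E-model forecast
prob = np.linspace(0.0, 1.0, f)  # stand-in classifier probabilities
combined = nec_combine(pred_n, pred_e, prob)
print(combined.sum())  # 36.0 -> number of points routed to the E model
```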
The middle section of Figure 2 shows the configuration for our chosen N, E,
and C models. The N and E models each have 4–6 LSTM layers followed by 3 fully
connected (FC) layers that consecutively reduce the width of the layer down to
$f$, which is 72 in our case (3 days). The number of inputs was set to 15
days, i.e., $h=15\times 24=360$. Since there are much fewer extreme values in
the data than normal ones, we set the LSTM layer width to only 512 in the E
model, while we set it to 1024 in the N model. Finally, the C model uses the
same size LSTM layers as the N model, followed by a 72 node FC layer with a
Sigmoid activation function.
### 0.4 GMM Indicator
Figure 3: GMM indicator distribution.
A Gaussian mixture model (GMM) (29) can be described by the equation,
$p(x|\lambda)=\sum_{i=1}^{M}w_{i}\,g(x|\mu_{i},\Sigma_{i}),$
where $x$ is a $D$-dimensional continuous-valued vector, $w_{i}\ \forall
i=1,\ldots,M$ are the mixture weights, and $g(x|\mu_{i},\Sigma_{i})$, are the
component Gaussian densities. Each component density is a $D$-variate Gaussian
function, and the overall GMM model is a weighted sum of M component Gaussian
densities,
$g(x|\mathbf{\mu_{i}},\mathbf{\Sigma_{i}})=\frac{1}{2\pi^{\frac{D}{2}}|\Sigma_{i}|^{\frac{1}{2}}}\exp{\left\\{-\frac{1}{2}{(x-\mu_{i})}^{T}\leavevmode\nobreak\
\Sigma_{i}^{-1}(x-\mu_{i})\right\\}},$
where $\mu_{i}$ is the mean vector and $\Sigma_{i}$ is the covariance matrix
of the $i$th component. The mixture weights are constrained such that
$\sum_{i=1}^{M}w_{i}=1$. The GMM’s capacity to produce smooth approximations
to arbitrarily shaped densities is one of its most impressive features (29).
In our work, we use Expectation-Maximization to fit a GMM model using the time
series training data. Then, each model component can generate a probability
for each point in the time series. Finally, for each value in the time series,
we compute an indicator feature as the weighted sum of all component
probabilities, given the weights learned when fitting the GMM model. In our
framework, the number of components $M$ is a hyper-parameter which we tune for
each time series. As an illustration, Figure 3 shows the indicator values for
GMM with $M=4$ for Sensor 4009 and with $M=3$ for the other 8 sensors. The
x-axis in the figures represents the preprocessed time series input, which we
limited to the $[-10,10]$ range for visibility, while the y-axis represents
the indicator values. It is easy to see that the learned normal and extreme
indicator bounds vary depending on the sensors. We hypothesize that providing
this indicator as an additional input feature for our models will help
differentiate between normal and extreme values and minimize prediction
errors. Therefore, we extend our N, E, and C models by providing both the
water level and its associated GMM indicator as input for each of the $h$
input values.
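The indicator computation can be sketched with scikit-learn's `GaussianMixture` (an assumed stand-in for the authors' EM fit; `fit_gmm_indicator` is a hypothetical helper). The weighted sum of component probabilities is exactly the mixture density $p(x)$:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_indicator(train_values, M=3):
    """Fit an M-component GMM (via EM) to the 1-D training values and return
    a function mapping any value to its indicator: the weighted sum of the
    component densities, i.e. the mixture density p(x)."""
    gmm = GaussianMixture(n_components=M, random_state=0)
    gmm.fit(np.asarray(train_values).reshape(-1, 1))
    # score_samples returns log p(x); exponentiate to get the density.
    return lambda x: np.exp(gmm.score_samples(np.asarray(x).reshape(-1, 1)))
```

The returned function is evaluated at every time point, and the resulting feature is fed alongside the water level as a second input channel.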
### 0.5 Exogenous Variables
For some time series, we may provide additional exogenous inputs which may
help improve the overall prediction of future values. For example, rain fall
in the region around the reservoir is not affected by the reservoir water
level but, when it is raining, it can be a strong indicator that the reservoir
level may increase soon, as water drains into streams and rivers that may flow
into the reservoir. For a given region, a watershed is a land area that
channels rainfall and snowmelt to creeks, streams, and rivers, and eventually
to outflow points such as reservoirs, bays, and the ocean. In our work, as
shown in Table 3 in the appendix, we define several watersheds and use several
rain gauge sensors in those watersheds as exogenous variables to aid in the
predictions associated with several of the reservoirs, which were chosen in
consultation with domain experts. Namely, for reservoir 4005 we used rain
sensor 6017, for 4010 we used 6135, and for 4007 we used both 6044 and 6069.
### 0.6 NEC+
Our NEC+ framework is described in Figure 2. Unlike the base NEC framework,
which relies only on historical time series values for future predictions,
NEC+ adds GMM indicator and watershed exogenous variables (when available) to
create multivariate regression N and E models. Given $k$ watershed variables,
the NEC+ models will be $(k+2)$-variate models, after adding the original input
and the GMM indicator. In addition, to account for the differences between the
distributions of the normal and extreme values during training, we define
custom sampling policies, regression backpropagation, and classification loss
function, which we present in the following.
### 0.7 Sampling Policies
Our models require $h$ values from the time series to predict the following
$f$ values, and $h,f\ll|x|$, the length of the time series. Moreover, while
the number of extreme values differs based on the choice of $\epsilon$, it is
still quite small in comparison to the number of normal values. In our
experiments, $h=360,f=72$, $|x|\sim 276K$, and extreme values ranged from
0.08% to 4.08% of the time series values across our 9 sensors. Therefore,
sampling plays a crucial role during training. However, oversampling alone
cannot mitigate this problem. In an experiment we detail in the appendix (due
to lack of space), we found that, while oversampling extreme events improves
predictions in that area, it leads to worse overall predictions for the rest
of the time series.
When training our NEC+ model, we apply a two-stage sampling policy. First,
given the high cardinality of our time series, we randomly sample subsections
of length $h+f$ from the series as samples to use in training our models,
while avoiding sections included in the test and validation sets.
Specifically, the validation set includes 24 randomly chosen $f$-length
sections from the years 2014 and 2016, the test set includes 24 such sections
from 2017 and 2018, and the training set includes all
other values in the time series. Second, we perform stratified sampling of
regions with and without extreme values, allowing the E and C models to
oversample up to OS% samples with at least 1 extreme value in the prediction
zone.
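A sketch of this two-stage policy, omitting the test/validation exclusion for brevity (`sample_windows` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def sample_windows(series, is_extreme, h, f, n_samples, os_frac, rng=None):
    """Two-stage sampling sketch. Stage 1 draws random (h+f)-length windows;
    stage 2 forces a fraction os_frac of the samples to contain at least one
    extreme value in the f-length prediction zone. `is_extreme` is a boolean
    mask over the series."""
    rng = rng if rng is not None else np.random.default_rng()
    starts = np.arange(len(series) - h - f + 1)
    # Windows whose prediction zone holds at least one extreme value.
    has_ext = np.array([is_extreme[s + h:s + h + f].any() for s in starts])
    n_ext = int(round(n_samples * os_frac))
    picked = np.concatenate([
        rng.choice(starts[has_ext], size=n_ext, replace=True),
        rng.choice(starts, size=n_samples - n_ext, replace=True),
    ])
    return [(series[s:s + h], series[s + h:s + h + f]) for s in picked]
```

Setting `os_frac=0` reproduces plain random sampling (as used for the N model), while `os_frac=1` guarantees every sample's prediction zone contains an extreme value (as used for the E and C models).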
### 0.8 Selected Backpropagation in the N and E models
In addition to a custom sampling policy, one important approach we use to
ensure proper training of the N and E models is selected backpropagation,
which we describe visually in Figure 4. Each prediction sample in our data
contains $f$ values, only a few of which may be extreme. The rarity of extreme
events would cause the E model to be unduly influenced by the loss on normal
values, and vice-versa. As a result, our backpropagation ignores predictions
on normal values in the E model and on extreme values in the N model, forcing
the model to only focus on the values important for the given model.
Specifically, when training the N model, only normal values add to the loss,
and when training the E model, only extreme values add to the loss. This is
equivalent to perfect predictions (predicting the ground truth) for normal
values when training the E model, and perfect predictions for extreme values
when training the N model. In this way, only the positions and values of
appropriate normal or extreme data will affect the hidden parameters in the
network during backpropagation when training the N and E models.
Figure 4: Computational graph for the N and E models. Blue and orange inputs
represent normal and extreme values, respectively.
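One way to realize selected backpropagation is to mask the loss so that ignored positions contribute zero gradient; a sketch in PyTorch, assuming an MSE regression loss (`masked_mse` is a hypothetical helper):

```python
import torch

def masked_mse(pred, target, keep_mask):
    """Selected-backpropagation loss sketch: positions where keep_mask is
    False contribute zero loss and hence zero gradient. For the E model,
    keep_mask marks extreme positions; for the N model, normal positions."""
    diff = (pred - target) * keep_mask.float()
    # Average only over the kept positions (guard against an empty mask).
    return diff.pow(2).sum() / keep_mask.float().sum().clamp(min=1.0)
```

Zeroing the residual at ignored positions is equivalent to pretending the model predicted the ground truth there, which is exactly the behavior described above.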
### 0.9 Parameterized Loss Function in the C Model
The Binary Cross Entropy (BCE) loss is usually the most appropriate loss
function for binary classification tasks. Based on BCE, we propose a tunable
loss function to accommodate the serious imbalance problem in the prediction
of time series with extreme events.
BCE loss compares the target, which in our case is whether the value is normal
(0) or extreme (1), with the prediction, which takes values close to 0 or 1
after the transformation of the Sigmoid function. The loss increases
exponentially when the difference between the prediction and target increases
linearly. It can be defined as
$BCE(t,p)=-(t\times\log{(p)}+(1-t)\times\log{(1-p)}),$ (2)
where $t$ and $p$ are the target and predicted values, respectively. However,
for datasets with a high imbalance between the two classes, such as our time
series, BCE will favor the prominent class. To solve this problem, we propose
a parameterized tunable loss as follows,
$\mathcal{L}=\beta\times BCE(t,p^{\alpha})+(1-\beta)\times{RMSE(t,p)},$ (3)
where $\alpha$ and $\beta$ are parameters that can be tuned. Values $\alpha>1$
cause the model to predict $p$ values that are higher in general in order to
minimize the distance between $t$ and $p^{\alpha}$. The BCE part of the loss
can be thought of as a blunt instrument that grossly exaggerates all
misclassifications in order to more accurately predict the rare class, while
the RMSE part allows for a more gentle penalty based on the distance between
$t$ and $p$. In other words, the higher $\alpha$ is set, the more extreme
(class 1) predictions can be obtained. The $\beta$ meta-parameter controls the
strength of the two components of the loss. For time series that are more
balanced, $\beta$ can be small, or even 0.
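Equation (3) translates directly into a few lines of PyTorch (a sketch; the clamping constant `eps` is an added numerical-stability assumption, not part of the paper's definition):

```python
import torch
import torch.nn.functional as F

def nec_classifier_loss(t, p, alpha=2.0, beta=0.5, eps=1e-7):
    """Parameterized C-model loss from Eq. (3):
    beta * BCE(t, p^alpha) + (1 - beta) * RMSE(t, p).
    alpha > 1 pushes predicted probabilities upward toward the rare extreme
    class; beta trades the BCE term against the gentler RMSE term."""
    p_a = p.clamp(eps, 1 - eps).pow(alpha)   # p^alpha, kept inside (0, 1)
    bce = F.binary_cross_entropy(p_a, t)
    rmse = torch.sqrt(F.mse_loss(p, t))
    return beta * bce + (1 - beta) * rmse
```

With `alpha=1` and `beta=1` this reduces to the standard BCE loss, which makes the two parameters easy to tune starting from that baseline.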
## Evaluation
In this section, we present empirical results for our proposed framework. We
are interested in answering the following research questions with regards to
prediction effectiveness:
1. 1.
What is the effect of adding the GMM indicator to a model?
2. 2.
What is the effect of introducing exogenous features?
3. 3.
How do the loss function parameters affect performance?
4. 4.
How does NEC+ compare against state-of-the-art baselines?
In the following, we will first present the experimental setup and baseline
approaches we compared against, and then answer the proposed research
questions, in order.
### 0.10 Experimental Settings
#### 0.10.1 Dataset
Our dataset includes over 31 years of hourly sensor readings for the water
level in 9 reservoirs and 4 rain gauge sensors in Santa Clara County, CA,
which are described in the appendix and listed in Table 3. After reducing all
time series to a common date range, each reservoir and rain sensor time series
has 276,226 values. When training all baseline models and the N and C models
in NEC+, at least 100,000 random samples were selected, with replacement, from
the training set. However, due to the sparsity of extreme events, only 50,000
random samples were selected, with replacement, when training the E model. The
N model did not use any oversampling ($OS=0$), but we set $OS=1$ for both the
E and C models, ensuring that all training samples had at least 1 extreme
event in the prediction section of the sample. The data for the 9 reservoirs
and 4 rain gauge sensors we used in this study, along with the code for this
work have been made available on GitHub at
https://github.com/davidanastasiu/NECPlus.
#### 0.10.2 Model Parameters
For reservoir 4009, we set $M$ to $4$ and $\epsilon$ to $1.8$. For all other
reservoirs, $M=3$ and $\epsilon=1.5$. For each reservoir, we tested models
with 4 or 6 LSTM layers, and 5 reservoirs use 6 LSTM layers while the rest use
4. We also tested LSTM layer widths of 512 and 1024 nodes and found 1024 node
layers were better suited for the N and C models, while E models performed
better with 512 nodes across all reservoirs. While $f=72$ (3 days) was set by
our problem definition, we tested $h\in\{72,168,360,720\}$, i.e., $\{3,7,15,30\}$
days, and found $h=360$ to work the best for all reservoirs.
All models were trained using PyTorch 1.9.1+cu102 on a Linux server running
CentOS 7.9.2009 equipped with 2x 20-core Intel(R) Xeon(R) Gold 6148 CPUs, 768
GB RAM, and 3 NVIDIA V100 GPUs. Finally, the LSTM layers were trained using an
SGD optimizer with learning rate 1E-3, while the fully connected layers were
trained using an Adam optimizer with learning rate 5E-4.
### 0.11 Baseline Methods
We compared our proposed method, NEC+, against a wide array of traditional and
state-of-the-art time series and reservoir level prediction methods, which are
introduced in the related work section and summarized below.
* •
ARIMA (6) is sometimes referred to as the Box-Jenkins method. The I in the
model refers to integrating the time series values during one or more
differencing or seasonal differencing steps, until the series becomes
stationary with respect to its means. Then, predictions are performed via an
autoregressive moving average model $ARMA(p,q)$ that expresses the future
value of a variable as a linear combination of $p$ past values while
minimizing $q$ past errors.
* •
Prophet (14) is a time series prediction method based on an additive model
that fits nonlinear trends with seasonal and holiday impacts at the annual,
weekly, and daily levels. It works well with time series data with strong
seasonal influences as well as historical data with many seasons, and it can
usually handle outliers.
* •
LSTM (15) was first proposed by Hochreiter and Schmidhuber. The architecture
makes it easier for a neural network to preserve information over many time
steps. It does not guarantee that there is no vanishing/exploding gradient,
but, compared with traditional recurrent neural networks, it does provide an
easier way for the model to learn long-distance dependencies, by using a
forget gate, an input gate, and an output gate. We compare our NEC+ model
against LSTM and also used 4–6 LSTM layers within our model. When comparing
against LSTM, for each sensor, we used the same number of layers for the
baseline LSTM model as for our NEC+ model.
* •
DNN-U (23) is a state-of-the-art univariate LSTM-based encoder-decoder
hydrologic model used to predict reservoir lagged water levels. DNN-U closely
follows an earlier sequence-to-sequence (seq2seq) architecture proposed by
Sutskever et al. (30). The encoder LSTM takes in lagged observations of the
water level $y$ and encodes them into a hidden state $h$ and cell state $c$.
These encoder states are then used as the initial states for the decoder LSTM,
which accepts known future dates $d$ as input, representing the target dates
they wish to forecast. The decoder outputs are then passed to a time
distributed dense layer, which generates the actual forecasts.
* •
Attention-LSTM (19) is a hydrologic prediction model that forecasts the stream
flow values over the next 12 hours, relying on an attention mechanism based on
LSTM networks that consider past stream data. The context vector used as the
decoder input is determined by the last hidden state of the encoder.
* •
N-BEATS (28) is a deep neural architecture which is composed of groups of
residual links and a deep stack of fully-connected layers. It is a state-of-
the-art network on several well-known datasets, including M3, M4 and TOURISM
competition datasets. It was compared with DeepAR (24), and Deep State (25),
the winner of the M4 competition (31), which represents a hybrid approach
between neural network and statistical time series models.
### 0.12 Effect of Adding the GMM Indicator Variable
Figure 5: Effectiveness comparison of NEC and LSTM variants with (LSTM+G and
NEC+G) and without (LSTM and NEC) GMM indicator variables across all 9
sensors.
In the methods section, we hypothesized that adding the GMM indicator variable
as an additional input to our model would help the component N and E models
better distinguish between normal and extreme values. In order to test this
hypothesis, we added GMM indicators to both the baseline LSTM model and our
basic NEC model, and compared the test set effectiveness (RMSE). Figure 5
shows the result of this experiment for our 9 sensors. In the figure, LSTM+G
and NEC+G are the LSTM and NEC models with GMM indicator variables,
respectively. Interestingly, adding the GMM indicator seems detrimental for
the baseline LSTM, producing much worse RMSE scores for sensor 4009, and more
than 5% RMSE improvements only in sensors 4001, 4007, and 4010. The average
RMSE improvement of the LSTM model when adding the GMM indicator across the 9
sensors is only 4%. On the other hand, adding the GMM indicator produces
significantly better results in 8 of the 9 sensors for our NEC model, and only
slightly worse results for sensor 4010. The average RMSE improvement for NEC
is 18%, which is significantly better than the 4% average RMSE improvement of
the LSTM model. This shows that, while the GMM indicator may provide some
limited benefit to a one-shot model like LSTM, it plays a much more important
role in the success of our NEC+ framework.
Table 1: Effectiveness Comparison (RMSE) of NEC+ Against Baselines for 9 Reservoirs
Model/Reservoir | 4001 | 4003 | 4004 | 4005 | 4006 | 4007 | 4009 | 4010 | 4011
---|---|---|---|---|---|---|---|---|---
ARIMA | 1016.32 | 1859.70 | 2501.97 | 9692.87 | 1039.38 | 5854.48 | 1060.05 | 3465.20 | 690.23
Prophet | 8469.74 | 38827.22 | 95279.31 | 181607.50 | 20904.57 | 187603.80 | 28629.44 | 114115.4 | 2829.26
LSTM | 1167.73 | 1514.90 | 2342.71 | 6730.93 | 959.05 | 5035.91 | 954.04 | 3734.53 | 662.48
DNN-U | 1162.01 | 1597.72 | 3989.20 | 9878.41 | 983.27 | 4320.40 | 1411.63 | 4257.58 | 763.73
A-LSTM | 878.71 | 1536.04 | 2548.56 | 8919.33 | 1638.65 | 13529.86 | 1064.15 | 2914.75 | 700.50
N-BEATS | 937.24 | 1926.74 | 2280.83 | 7153.82 | 960.42 | 3153.76 | 1295.90 | 3162.17 | 514.30
NEC+ | 740.19 | 1411.44 | 1783.92 | 4352.74 | 780.46 | 2092.73 | 703.93 | 2275.48 | 632.61
### 0.13 Effect of Adding Exogenous Variables
Table 2: Effectiveness With/Without Exogenous Variables
Model/Reservoir | 4005 | 4007 | 4010
---|---|---|---
LSTM | 6730.93 | 5035.91 | 3734.53
LSTM+W | 7568.68 | 5728.30 | 4145.16
LSTM+G | 6455.90 | 3545.19 | 3004.14
LSTM+G+W | 9760.62 | 4128.37 | 2602.58
NEC+G | 5114.49 | 2924.30 | 2385.77
NEC+G+W (NEC+) | 4352.74 | 2092.73 | 2275.48
An additional benefit may be obtained in NEC+ by including exogenous variables
that may provide an additional signal that the model may use to learn the
proper prediction function. In our experiments, we used watershed rain gauge
time series data from the same times as our primary reservoir water level data
to enhance models for reservoirs 4005, 4007, and 4010, as described in the
methods section. We compared our NEC+ model with (NEC+G+W) and without (NEC+G)
the watershed variables against variations of the baseline LSTM model with
(LSTM+W, LSTM+G+W) and without (LSTM, LSTM+G) those same variables. The letter
G in all model names indicates the presence of the GMM indicator variable.
Table 2 shows the results of our analysis. The watershed model results are
colored green if they are better (lower RMSE) than the same model without
watershed variables, and red if worse. We use bold to denote the best results
across all models. Interestingly, including the watershed variables in the
baseline LSTM and LSTM+G models leads to significantly worse results in most
cases, but significantly better results (5%–28% lower RMSE) in the case of the
NEC+G model. Our model can benefit more by focusing on normal or extreme
prediction individually.
### 0.14 Effect of Loss Function Parameters
We proposed a parameterized loss function that we hypothesized would help
improve the ability of our C model to pick out the rare extreme values and, as
a result, lead to better prediction of future values. As a way to see how the
two parameters of our loss function may affect the prediction, we trained
several models with different values of $\alpha$ (the BCE power) while keeping
$\beta$ (BCE vs. RMSE strength) constant, and several with varying $\beta$
while keeping $\alpha$ constant, the results of which can be seen in Figure 6.
As expected, increasing the $\alpha$ parameter (top figure) leads to more
values being classified as extreme, allowing the E model to play a bigger role
in the NEC+ model. When $\alpha$ is too large (bottom figure), its effect can
be dampened by decreasing the value of $\beta$. Therefore, we suggest keeping
$\beta=1$ while tuning $\alpha$ and then tuning $\beta$ for the best found
$\alpha$.
Figure 6: Loss function parameter choices for sensor 4004.
### 0.15 Effectiveness of NEC+ Against Baselines
Figure 7: Example 3 days ahead predictions for four sensors.
Our main evaluation question was whether our proposed NEC+ model is an
effective method for solving the 3-day ahead prediction problem in time series
with extreme events such as our 9 reservoirs. To answer this question, we
compared NEC+ against a variety of traditional and state-of-the-art methods
and Table 1 presents the RMSE results of all these models. Equivalent MAPE
values are included in Table 5 in the appendix. Values in bold are the best
(lowest) RMSE for each sensor. Our model significantly outperforms all
traditional methods (ARIMA, Prophet, and LSTM) and state-of-the-art methods
DNN-U and A-LSTM for all 9 sensors. On average, the results for NEC+ are 37%
and 33% better than those of DNN-U and A-LSTM, respectively,
across all 9 sensors. Moreover, DNN-U and A-LSTM were unable to outperform the
traditional ARIMA or LSTM baselines for 6 and 3 out of the 9 sensors,
respectively, pointing to their overall instability. The N-BEATS model was the
most competitive, outperforming NEC+ on only one sensor, 4011. However, NEC+
results are significantly better than those of N-BEATS (Wilcoxon T test s=1,
p=0.0078).
Figure 7 shows some example 3-day predictions from our test set for four of
the sensors (due to lack of space). Predicted time series for other sensors
are included in the technical appendix. We did not include Prophet in the
results as the model performed very poorly and would impede visualizing the
performance of the remaining models. Overall, NEC+ is able to more closely
predict the ground truth water level values, both in the presence of extreme
events and during normal conditions. ARIMA often misses the mark and DNN-U,
LSTM, A-LSTM, and N-BEATS sometimes follow the trend of the ground truth and
sometimes do not. Overall, NEC+ shows it can more closely account for extreme
changes in the time series.
## Conclusion
In this work, we presented a novel composite framework and model, NEC+,
designed to better account for rare yet important extreme events in long
single- and multi-variate time series. Our framework learns distinct
regression models for predicting extreme and normal values, along with a
merging classifier that is used to choose the appropriate model for each
future event prediction. NEC+ uses an unsupervised clustering approach to
dynamically produce distribution indicators, which improves the model’s
robustness to the occurrence of severe events. In addition, to improve
training performance, our framework uses a selected backpropagation approach
and a two-level sampling algorithm to accommodate imbalanced extreme data. A
parameterized loss function is also proposed to improve the NEC+ classifier
performance. Extensive experiments using more than 31 years of reservoir water
level data from Santa Clara County, CA, showed that the components of the NEC+
framework are beneficial towards improving its performance and that NEC+
provided significantly better predictions than state-of-the-art baselines
(Wilcoxon T test p-values between 0.0039 and 0.0078).
## Appendix
## 1 Additional Related Work
In this section, we detail additional related work employing traditional
machine learning methods for the task of water level prediction. Castillo-
Botón et al. (32) applied support vector regression (SVR), Gaussian processes,
and artificial neural networks (ANNs) to obtain short-term water level
predictions in a hydroelectric dam reservoir. Several studies by Nayak et al.
(33), Larrea et al. (34), and Tsao et al. (35) use neural network models (NN)
and adaptive neuro-fuzzy inference systems (ANFIS) (36) to forecast the water
level. Some machine learning algorithms, such as Gaussian process regression
(GPR) and quantile regression, can not only predict but also quantify forecast
uncertainty. Tree-based models are computationally inexpensive and have the
advantage that they do not assume any specific distribution in the predictors
(37). Classification and regression trees (CARTs) and random forest (RF) have
also been used to solve hydrologic prediction problems (37). Nguyen et al.
(38) proposed a hybrid model for hourly water level prediction that integrates
the XGBoost model with two evolutionary algorithms and showed that it
outperformed RF and CART in the multistep-ahead prediction of water levels.
While neural networks were used in some works for water level prediction, they
were usually shallow networks that were not able to recognize complex patterns
in the data, so feature engineering and extensive manual tuning based on
domain expertise had to be employed to improve their performance (39).
Table 3: Reservoir and Rain Sensor Data
ID | Type | Name/Location | Start | End
---|---|---|---|---
4001 | Reservoir | Almaden | 10/1973 | 06/2020
4003 | Reservoir | Calero | 06/1974 | 06/2020
4004 | Reservoir | Chesbro | 02/1974 | 06/2020
4005 | Reservoir | Coyote | 10/1973 | 06/2020
4006 | Reservoir | Guadalupe | 10/1973 | 06/2020
4007 | Reservoir | Lexington | 10/1973 | 06/2020
4009 | Reservoir | Stevens Creek | 10/1973 | 06/2020
4010 | Reservoir | Uvas | 10/1973 | 06/2020
4011 | Reservoir | Vasona | 11/1973 | 06/2020
6017 | Rain | Coe Park | 02/1980 | 07/2019
6135 | Rain | Uvas Canyon Park | 07/1991 | 07/2019
6044 | Rain | Loma Prieta | 07/1989 | 07/2019
6069 | Rain | Mt Umunhum | 07/1989 | 07/2019
Figure 8: Guadalupe Reservoir, CA.
## 2 Data and Preprocessing
Table 3 shows the 9 reservoir sensors and 4 rain sensors used in our study,
along with their location and period of time we had data for. As an
illustration, Figure 8 shows Guadalupe Reservoir (ID 4006). Information about
these reservoirs can be found at https://www.valleywater.org/your-water/local-
dams-and-reservoirs. While data for most reservoir sensors was available from
1974, we observed many missing and outlier data points in early years of the
series due to sensor or data storage failures. Therefore, we limited our
analysis to the years 1988–2019 for univariate models involving only reservoir
sensor data, and to the years 1991–2019 for multivariate models using both
reservoir and rain sensor data. Short gaps in the time series during these
periods were filled in via an adaptive polynomial interpolation approach that
learned a polynomial function to best fit $k$ values before and after a gap of
size $2k$ and projected the missing values onto that function. A dataset
including relevant data used in this study will be made available to
researchers upon publication of this paper.
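The gap-filling step might be sketched as follows (`fill_gap`, the polynomial degree, and the default window size are illustrative assumptions about the adaptive procedure):

```python
import numpy as np

def fill_gap(values, gap_start, gap_len, k=None, deg=3):
    """Adaptive polynomial gap-filling sketch: fit a degree-`deg` polynomial
    to the k values before and k values after a gap (the paper uses k values
    on each side of a gap of size 2k) and project the missing values onto
    the fitted function."""
    k = k if k is not None else max(gap_len // 2, deg + 1)
    left = np.arange(gap_start - k, gap_start)
    right = np.arange(gap_start + gap_len, gap_start + gap_len + k)
    idx = np.concatenate([left, right])           # known points around the gap
    coeffs = np.polyfit(idx, values[idx], deg)    # least-squares polynomial fit
    filled = values.copy()
    filled[gap_start:gap_start + gap_len] = np.polyval(
        coeffs, np.arange(gap_start, gap_start + gap_len))
    return filled
```

Because the fit uses context on both sides of the gap, short interruptions are bridged smoothly without distorting the surrounding trend.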
Figure 9: Water level values (top), extreme value distribution pdf fit of
water levels (middle), and Gaussian distribution pdf fit of water level
standardized first difference (bottom) for 3 reservoirs across 20 years.
## 3 Extreme Events
Figure 9 shows water levels for three of our 9 sensors across a period of 20
years (top) and GEV and Gaussian probability density functions (pdf) we fitted
for the same water level values (middle). As can be seen from the middle
charts, the GEV distribution provides a better fit, showing the presence of
extreme values in our data. In particular, the RMSE of the Gaussian
distribution fit is 26.9%, 46.0%, and 37.2% higher than that of the GEV
distribution fit for the 4001, 4003, and 4009 reservoirs, respectively.
We follow a standard time series analysis preprocessing approach and obtain a
stationary time series by applying first-order differencing and then
standardizing the resulting time series values,
$x_{t}^{\prime}=x_{t}-x_{t-1},\qquad x^{\prime}=\frac{x^{\prime}-\mu}{\sigma},$
where $\mu$ and $\sigma$ here are the mean (location parameter) and standard
deviation (scale parameter) of the Gaussian distribution of the time series
$x^{\prime}$. After obtaining predictions for a time series, we use the same
location and scale parameters to inverse the standardization, and the last
ground truth value in the time series to inverse the first-order differencing,
obtaining values in the same range as the original time series. The bottom
charts in Figure 9 show the pdf of the differenced and standardized time
series values for the same sensors as in the top and middle figures, along
with the best fit Gaussian for those same values. The y-axis is limited to the
range $[0,150]$ for visibility but otherwise stretches to $80,000$ for the 0
water level difference bar in the histograms of the three sensors (most of the
time there is no change in water level from one hour to the next), resulting
in very thin and tall Gaussian distributions.
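The differencing/standardization transform and its inversion can be sketched as (hypothetical helper names; the paper applies the same location and scale parameters learned on the training series):

```python
import numpy as np

def preprocess(x):
    """First-order difference, then standardize; return the transformed
    series along with the parameters needed to invert the transform."""
    d = np.diff(x)
    mu, sigma = d.mean(), d.std()
    return (d - mu) / sigma, mu, sigma

def invert(pred, last_value, mu, sigma):
    """Undo standardization, then cumulatively undo differencing starting
    from the last observed ground-truth value."""
    diffs = pred * sigma + mu
    return last_value + np.cumsum(diffs)
```

Round-tripping a series through `preprocess` and `invert` recovers the original values (shifted by one step, since differencing drops the first point).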
## 4 Sampling Policies
Table 4: Oversampling for Sensor 4005
Models | RMSE | OS=0.3 | OS=0.2 | OS=0.04 | OS=0
---|---|---|---|---|---
L+G | Total | 22495.9 | 9558.5 | 7919.8 | 8208.4
Normal | 21788.4 | 8894.5 | 6872.9 | 7162.1
Extreme | 1992.1 | 1558.0 | 2474.4 | 2478.7
L | Total | 68442.4 | 12300.9 | 7769.6 | 8183.0
Normal | 68170.3 | 11556.7 | 6784.9 | 7122.4
Extreme | 917.1 | 1932.6 | 2654.3 | 2512.0
To showcase the importance of sampling in our work, we ran an experiment in
which we trained an LSTM model and one that also used GMM features (L and L+G
in Table 4), and allowed different levels of oversampling; OS=0.04 means that
4% of the training samples had at least one extreme value in the prediction
area of the sample, even though only 0.5% of the values in sensor 4005 were
deemed extreme. Table 4 shows the total RMSE as well as component RMSE scores
computed only for the normal and extreme values. While the extreme values RMSE
continues to decrease as the OS level increases, oversampling led to marginally
improved total RMSE scores at OS=0.04 and markedly worse results for higher
levels of oversampling. This shows that oversampling cannot be used as a
panacea to account for the rarity of the extreme values in the time series.
Instead, our NEC framework separates the task of predicting extreme and normal
values, achieving markedly improved results in the process.
Figure 10: Example 3 days ahead predictions for each of our 9 sensors. Best viewed in color.
Table 5: MAPE of NEC+ vs. Baselines for 9 Reservoirs
Model/Reservoir | 4001 | 4003 | 4004 | 4005 | 4006 | 4007 | 4009 | 4010 | 4011
---|---|---|---|---|---|---|---|---|---
ARIMA | 1.3573 | 0.7626 | 0.8694 | 1.2560 | 1.5401 | 0.8517 | 0.9504 | 1.7871 | 3.2914
Prophet | 16.7877 | 19.8559 | 38.9642 | 35.6662 | 56.0537 | 32.9152 | 31.8069 | 45.2579 | 15.3312
LSTM | 1.6697 | 0.6153 | 0.7450 | 1.0092 | 1.3264 | 0.9253 | 0.9298 | 2.5520 | 3.1282
DNN-U | 1.6509 | 0.6812 | 1.8738 | 1.9394 | 1.4551 | 0.6509 | 1.5604 | 2.1582 | 3.7131
A-LSTM | 1.3533 | 0.6506 | 0.8424 | 1.2060 | 2.8017 | 2.1738 | 0.9705 | 1.3986 | 3.4137
N-BEATS | 1.3346 | 0.7972 | 0.7882 | 1.1405 | 2.0061 | 0.4709 | 1.4580 | 1.7146 | 2.3108
NEC+ | 1.0319 | 0.5687 | 0.6030 | 0.6350 | 1.0662 | 0.3316 | 0.5992 | 1.2894 | 2.9237
## 5 Effectiveness Comparison
Table 5 shows MAPE values for the same test set predictions whose RMSE values
are shown in Table 1 in the main paper. The best results for each sensor are
displayed in bold. As can be seen, and as expected, the results mimic the ones
in the main paper. Our method, NEC+ is able to outperform all standard and
state-of-the-art baselines for all but one sensor.
Figure 10 provides additional example predictions, one for each of our 9
sensors, showcasing typical prediction results for NEC+ and its baselines.
Results show that NEC+ predictions are able to more closely follow the ground
truth values, showcasing its adaptability in the presence of extreme events.
## 6 Metaparameter Choices
In our study, we fixed $f$ to 72 forecasting points (3 days ahead prediction)
and experimented with input length of history $h$ in [72, 120, 240, 360, 720],
extreme threshold $\epsilon$ in [1.2, 1.5, 2.0], GMM indicator components $M$
in [2, 3, 4, 5] and oversampling ratio $OS\%$ in [0, 0.04, 0.2, 0.3, 0.5, 0.8,
1.0]. Table 6 shows the final parameters chosen for each sensor after our
parameter study. We want to emphasize how similar the architecture and even
the hyperparameters are across models for various sensor datasets in our
experiments. This shows that the suggested architecture generalizes
effectively to reservoirs with various distributions. The same design was
successfully applied to other sensors after first being tuned for sensor 4005.
We then performed a second round of tuning for sensor 4009 and successfully
applied the learned parameters to some of the other sensors to improve their
performance. As a result, there are primarily two groups of parameter sets for
the N, E, and C models, each with a different batch size, number of layers,
and volume of training samples, as detailed in Table 6. We used early stopping
in our training with a threshold of 3 iterations for the N model and 4
iterations for the E and C models.
Table 6: Metaparameters
Meta-parameter | 4001 | 4003 | 4004 | 4005 | 4006 | 4007 | 4009 | 4010 | 4011
---|---|---|---|---|---|---|---|---|---
Input length $h$ | 360 | 360 | 360 | 360 | 360 | 360 | 360 | 360 | 360
Extreme threshold $\epsilon$ | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.8 | 1.5 | 1.5
GMM components $M$ | 3 | 3 | 3 | 3 | 3 | 3 | 4 | 3 | 3
N batch size | 32 | 64 | 64 | 64 | 32 | 64 | 32 | 64 | 32
N hidden | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024
N layers | 4 | 6 | 6 | 6 | 4 | 6 | 4 | 6 | 4
N volume | 180000 | 180000 | 180000 | 180000 | 180000 | 180000 | 180000 | 180000 | 180000
E batch size | 8 | 8 | 8 | 32 | 32 | 32 | 8 | 32 | 32
E hidden | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512 | 512
E layer | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
E volume | 25000 | 25000 | 25000 | 50000 | 50000 | 50000 | 30000 | 50000 | 25000
E oversampling $OS\%$ | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
C batch size | 64 | 8 | 64 | 64 | 8 | 64 | 8 | 64 | 64
C hidden | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024
C layers | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
C volume | 100000 | 30000 | 100000 | 100000 | 30000 | 100000 | 30000 | 100000 | 100000
N oversampling $OS\%$ | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
Loss-alpha | 2 | 2 | 3 | 1 | 1 | 1 | 2 | 2 | 1
Loss-beta | 0.5 | 0.5 | 0.45 | 1 | 1 | 1 | 0.5 | 0.5 | 1
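The search space and early-stopping rule described above can be summarized in the following hypothetical sketch (names and structure are illustrative, not the authors' actual code):

```python
# Search space from the parameter study; f is fixed at 72 points (3 days ahead).
SEARCH_SPACE = {
    "h": [72, 120, 240, 360, 720],                    # input history length
    "epsilon": [1.2, 1.5, 2.0],                       # extreme threshold
    "M": [2, 3, 4, 5],                                # GMM indicator components
    "os_ratio": [0, 0.04, 0.2, 0.3, 0.5, 0.8, 1.0],   # oversampling ratio OS%
}
F_POINTS = 72

class EarlyStopping:
    """Stop after `patience` epochs without validation improvement
    (threshold of 3 iterations for the N model, 4 for E and C)."""
    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True => stop training
```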
## 7 Dataset and Code
The data for the 9 reservoirs and 4 rain gauge sensors we used in this study,
along with the code for this work have been made available on GitHub at
https://github.com/davidanastasiu/NECPlus.
## 8 Discussion
While trend-based models attempt to capture the non-linear trend and
seasonality in time series, their effectiveness is limited on datasets such as
reservoir water levels, where our observations show the trends are more
intricate than in typical time series. Furthermore,
as we showed in our work, a single data-driven model struggles to strike a
balance between normal and extreme value forecasting due to the rarity and
magnitude of severe events. By using selective backpropagation, NEC+
effectively isolates and simplifies the signal trends and values scope.
Utilizing RNN and FC layers, NEC+ transforms the problem of predicting
continuous time series points into one of first identifying the type of the
upcoming point and then forecasting its value using a highly-tuned normal or
extreme predictive model. NEC+ can also serve as a framework, in which different
neural network implementations are substituted for the N, E, and C models separately.
A crucial rule to follow is to use the same original data source with a
constant extreme threshold value $\epsilon$. Other statistical methods, such
as quantile (40) or expectile (41) information, or other transformations (42),
can also be used in place of the GMM indicator.
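As a minimal illustration of that last point, a quantile-based extreme indicator could stand in for the GMM one as follows (hypothetical sketch; the 98th-percentile default is illustrative, not a value from the paper):

```python
import numpy as np

def quantile_indicator(series, q=0.98):
    """Label points above the q-th quantile of the series as extreme (1),
    the rest as normal (0); returns the labels and the threshold used."""
    series = np.asarray(series, dtype=float)
    thr = np.quantile(series, q)
    return (series > thr).astype(int), thr
```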
###### Acknowledgements.
Funding for this project was made possible by the Santa Clara Valley Water
District.
## Bibliography
## References
* Hua et al. (2018) Shuai Hua, Manika Kapoor, and David C. Anastasiu. Vehicle tracking and speed estimation from traffic videos. In _2018 IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , volume 1 of _CVPRW’18_ , pages 153–1537, July 2018. IEEE.
* Hewage et al. (2021) Pradeep Hewage, Marcello Trovati, Ella Pereira, and Ardhendu Behera. Deep learning-based effective fine-grained weather forecasting model. _Pattern Analysis and Applications_ , 24(1):343–366, 2021.
* Bose et al. (2022) Bipasa Bose, Taylor Downey, Anand K. Ramasubramanian, and David C. Anastasiu. Identification of distinct characteristics of antibiofilm peptides and prospection of diverse sources for efficacious sequences. _Frontiers in Microbiology_ , 12, 2022. ISSN 1664-302X.
* Mohan et al. (2019) Saloni Mohan, Sahitya Mullapudi, Sudheer Sammeta, Parag Vijayvergia, and David C. Anastasiu. Stock price prediction using news sentiment analysis. In _2019 IEEE Fourth International Conference on Big Data Computing Service and Applications (BigDataService)_ , BDS 2019, pages 205–208, April 2019. IEEE.
* Zhang et al. (2021) Zhendong Zhang, Hui Qin, Liqiang Yao, Yongqi Liu, Zhiqiang Jiang, Zhongkai Feng, Shuo Ouyang, Shaoqian Pei, and Jianzhong Zhou. Downstream water level prediction of reservoir based on convolutional neural network and long short-term memory network. _Journal of Water Resources Planning and Management_ , 147(9):04021060, sep 2021. 10.1061/(asce)wr.1943-5452.0001432.
* Box and Jenkins (1976) George E.P. Box and Gwilym M. Jenkins. _Time Series Analysis: Forecasting and Control_. Holden-Day, 1976.
* Yang et al. (2019) Shuyu Yang, Dawen Yang, Jinsong Chen, and Baoxu Zhao. Real-time reservoir operation using recurrent neural networks and inflow forecast from a distributed hydrological model. _Journal of Hydrology_ , 579:124229, 2019.
* Anastasiu et al. (2020) David C. Anastasiu, Jack Gaul, Maria Vazhaeparambil, Meha Gaba, and Prajval Sharma. Efficient city-wide multi-class multi-movement vehicle counting: A survey. _Journal of Big Data Analytics in Transportation_ , 2(3):235–250, Dec 2020. ISSN 2523-3564.
* Pei et al. (2021) Yifei Pei, Ying Liu, Nam Ling, Lingzhi Liu, and Yongxiong Ren. Class-specific neural network for video compressed sensing. In _2021 IEEE International Symposium on Circuits and Systems (ISCAS)_ , pages 1–5, 2021. 10.1109/ISCAS51556.2021.9401450.
* Qi et al. (2019) Yutao Qi, Zhanao Zhou, Lingling Yang, Yi-Ning Quan, and Qiguang Miao. A decomposition-ensemble learning model based on lstm neural network for daily reservoir inflow forecasting. _Water Resources Management_ , 33:4123 – 4139, 2019.
* Huang et al. (2012) Guang-Bin Huang, Hongming Zhou, Xiaojian Ding, and Rui Zhang. Extreme learning machine for regression and multiclass classification. _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_ , 42(2):513–529, 2012. 10.1109/TSMCB.2011.2168604.
* Ariyo et al. (2014) Adebiyi A Ariyo, Adewumi O Adewumi, and Charles K Ayo. Stock price prediction using the arima model. In _2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation_ , pages 106–112, 2014. IEEE Xplore.
* Valipour et al. (2013) Mohammad Valipour, Mohammad Ebrahim Banihabib, and Seyyed Mahmood Reza Behbahani. Comparison of the arma, arima, and the autoregressive artificial neural network models in forecasting the monthly inflow of dez dam reservoir. _Journal of Hydrology_ , 476:433–441, 2013. ISSN 0022-1694. https://doi.org/10.1016/j.jhydrol.2012.11.017.
* Taylor and Letham (2018) Sean J. Taylor and Benjamin Letham. Forecasting at scale. _The American Statistician_ , 72(1):37–45, 2018. 10.1080/00031305.2017.1380080.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. _Neural computation_ , 9(8):1735–1780, 1997.
* Fischer and Krauss (2018) Thomas Fischer and Christopher Krauss. Deep learning with long short-term memory networks for financial market predictions. _European Journal of Operational Research_ , 270(2):654–669, 2018.
* Gers et al. (2000) Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with lstm. _Neural computation_ , 12(10):2451–2471, 2000.
* Du and Liang (2021) Nannan Du and Xuechun Liang. Short-term water level prediction of hongze lake by prophet-lstm combined model based on lae. In _2021 7th International Conference on Hydraulic and Civil Engineering Smart Water Conservancy and Intelligent Disaster Reduction Forum (ICHCE SWIDR)_ , pages 255–259, 2021. IEEE Xplore. 10.1109/ICHCESWIDR54323.2021.9656315.
* Le et al. (2021) Yan Le, Changwei Chen, Ting Hang, and Youchuan Hu. A stream prediction model based on attention-lstm. _Earth Science Informatics_ , 14:1–11, 06 2021. 10.1007/s12145-021-00571-z.
* Xu et al. (2015) Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In _International conference on machine learning_ , pages 2048–2057. PMLR, 2015.
* Chorowski et al. (2015) Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. _Advances in neural information processing systems_ , 28, 2015.
* Siami-Namini et al. (2018) Sima Siami-Namini, Neda Tavakoli, and Akbar Siami Namin. A comparison of arima and lstm in forecasting time series. In _2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA)_ , pages 1394–1401, 2018. IEEE Xplore. 10.1109/ICMLA.2018.00227.
* Ibañez et al. (2022) Sebastian C. Ibañez, Carlo Vincienzo G. Dajac, Marissa P. Liponhay, Erika Fille T. Legara, Jon Michael H. Esteban, and Christopher P. Monterola. Forecasting reservoir water levels using deep neural networks: A case study of angat dam in the philippines. _Water_ , 14(1), 2022. ISSN 2073-4441. 10.3390/w14010034.
* Salinas et al. (2020) David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. Deepar: Probabilistic forecasting with autoregressive recurrent networks. _International Journal of Forecasting_ , 36(3):1181–1191, 2020.
* Rangapuram et al. (2018) Syama Sundar Rangapuram, Matthias W Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, and Tim Januschowski. Deep state space models for time series forecasting. _Advances in neural information processing systems_ , 31, 2018.
* Tyralis and Papacharalampous (2021a) Hristos Tyralis and Georgia Papacharalampous. Quantile-based hydrological modelling. _Water_ , 13(23):3420, 2021a.
* Waltrup et al. (2015a) Linda Schulze Waltrup, Fabian Sobotka, Thomas Kneib, and Göran Kauermann. Expectile and quantile regression—david and goliath? _Statistical Modelling_ , 15(5):433–456, 2015a.
* Oreshkin et al. (2019) Boris N Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-beats: Neural basis expansion analysis for interpretable time series forecasting. _arXiv preprint arXiv:1905.10437_ , 2019.
* Day (1969) N. E. Day. Estimating the components of a mixture of normal distributions. _Biometrika_ , 56(3):463–474, 12 1969. ISSN 0006-3444. 10.1093/biomet/56.3.463.
* Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, _Advances in Neural Information Processing Systems_ , volume 27, 2014. Curran Associates, Inc.
* Makridakis et al. (2018) Spyros Makridakis, Evangelos Spiliotis, and Vassilios Assimakopoulos. The m4 competition: Results, findings, conclusion and way forward. _International Journal of Forecasting_ , 34(4):802–808, 2018.
* Castillo-Botón et al. (2020) C. Castillo-Botón, David Casillas-Pérez, Carlos Casanova-Mateo, L. M. Moreno-Saavedra, B. Morales-Díaz, Julia Sanz-Justo, Pedro Antonio Gutiérrez, and Sancho Salcedo-Sanz. Analysis and prediction of dammed water level in a hydropower reservoir using machine learning and persistence-based techniques. _Water_ , 12:1528, 2020.
* Nayak et al. (2004) P.C Nayak, K.P Sudheer, D.M Rangan, and K.S Ramasastri. A neuro-fuzzy computing technique for modeling hydrological time series. _Journal of Hydrology_ , 291(1):52–66, 2004. ISSN 0022-1694. https://doi.org/10.1016/j.jhydrol.2003.12.010.
* Páliz Larrea et al. (2021) Pablo Páliz Larrea, Xavier Zapata-Ríos, and Lenin Campozano Parra. Application of neural network models and anfis for water level forecasting of the salve faccha dam in the andean zone in northern ecuador. _Water_ , 13(15), 2021. ISSN 2073-4441. 10.3390/w13152011.
* Tsao et al. (2021) Hao-Han Tsao, Yih-Guang Leu, Li-Fen Chou, and Chao-Yang Tsao. A method of multi-stage reservoir water level forecasting systems: A case study of techi hydropower in taiwan. _Energies_ , 14(12), 2021. ISSN 1996-1073. 10.3390/en14123461.
* Chang and Chang (2006) Fi-John Chang and Ya-Ting Chang. Adaptive neuro-fuzzy inference system for prediction of water level in reservoir. _Advances in Water Resources_ , 29:1–10, 2006.
* Nhu et al. (2020) Viet-Ha Nhu, Himan Shahabi, Ebrahim Nohani, Ataollah Shirzadi, Nadhir Al-Ansari, Sepideh Bahrami, Shaghayegh Miraki, Marten Geertsema, and Hoang Nguyen. Daily water level prediction of zrebar lake (iran): A comparison between m5p, random forest, random tree and reduced error pruning trees algorithms. _ISPRS International Journal of Geo-Information_ , 9(8), 2020. ISSN 2220-9964. 10.3390/ijgi9080479.
* Nguyen et al. (2021) Duc Hai Nguyen, Xuan Hien Le, Jae-Yeong Heo, and Deg-Hyo Bae. Development of an extreme gradient boosting model integrated with evolutionary algorithms for hourly water level prediction. _IEEE Access_ , 9:125853–125867, 2021. 10.1109/access.2021.3111287.
* Li et al. (2016) Chuan Li, Yun Bai, and Bo Zeng. Deep feature learning architectures for daily reservoir inflow forecasting. _Water Resources Management_ , 30, 11 2016. 10.1007/s11269-016-1474-8.
* Tyralis and Papacharalampous (2021b) Hristos Tyralis and Georgia Papacharalampous. Quantile-based hydrological modelling. _Water_ , 13(23):3420, 2021b.
* Waltrup et al. (2015b) Linda Schulze Waltrup, Fabian Sobotka, Thomas Kneib, and Göran Kauermann. Expectile and quantile regression—david and goliath? _Statistical Modelling_ , 15(5):433–456, 2015b.
* Pei et al. (2020) Yifei Pei, Ying Liu, and Nam Ling. Deep learning for block-level compressive video sensing. In _2020 IEEE International Symposium on Circuits and Systems (ISCAS)_ , pages 1–5, 2020. 10.1109/ISCAS45731.2020.9181254.
# Shape resonances in photoionization cross sections and time delay
Anatoli S. Kheifets and Stephen Catsamas
Research School of Physics, The Australian National University, Canberra ACT 2601, Australia
###### Abstract
Shape resonances in photoionization of atoms and molecules arise from a
particular geometry of the ionic potential which traps the receding
photoelectron in a quasi-bound state in a particular partial wave. This
mechanism allows us to connect the photoionization cross section in the
resonant region with the photoelectron scattering phase in this partial wave
by a simple formula $\sigma\propto\sin^{2}\delta_{\ell}$. Due to this
relation, the phase $\delta_{\ell}$ can be extracted from an experimentally
known cross section and then converted to the photoelectron group delay
(Wigner time delay) $\tau_{\rm W}=\partial\delta_{\ell}/\partial E$ which is
measurable by recently developed laser interferometric techniques. Such a
direct connection of the photoionization cross section and the time delay is a
fundamental property of shape resonances which provides a comprehensive test
of novel measurements against a large body of older synchrotron data.
###### pacs:
32.80.Rm 32.80.Fb 42.50.Hz
Shape resonances (SR’s) profoundly affect numerous phenomena in physics,
chemistry and biology (see Introduction of Nandi et al. (2020) for several
fascinating examples). SR’s have long been intensely studied in electron-
molecule scattering Bardsley and Mandl (1968) and molecular photoionization
Dehmer et al. (1988). Similar resonant features are observed in electron-atom
scattering Shimamura (2012) and atomic photoionization Rau and Fano (1968);
Fano and Cooper (1968); Connerade et al. (1986). Formation of SR’s is well
understood Bardsley and Mandl (1968); Carlson et al. (1983); Dehmer et al.
(1988); Child (1996); Shimamura (2012). SR’s are associated with the shape of
some effective potential in an open channel, normally a combination of short-
range attractive and long-range repulsive potentials. Such a combination forms
a barrier holding a large portion of the electron wave function while the
remaining part of this wave function leaks out. This normally occurs at
energies above and usually close to the threshold of that open channel and is
typically associated with a large photoelectron angular momentum $\ell\geq 2$.
A common property of SR’s is that they can be turned into bound states by a
slight modification of the target Hamiltonian Chrysos (1998);
Horáček (2019). In molecules, SR’s can be associated with anti-bonding
vacant orbitals, typically of the $\sigma^{*}$ character Langhoff
(1984); Piancastelli (1999).
A renewed interest in studying SR’s has been prompted by the recent
development of laser interferometric techniques, which have made it possible
to resolve atomic and molecular photoionization in time. One such technique,
known as reconstruction of attosecond beating by interference of two-photon
transitions (RABBITT), has been used to measure the photoelectron group delay
in the SR region of various molecules: N2 Haessler et al. (2009); Nandi et al. (2020); Loriot et al.
(2021), N2O Huppert et al. (2016), CO2 Kamalov et al. (2020), NO Gong et al.
(2022) and CF4 Ahmadi et al. (2022); Heck et al. (2021). A similar SR
measurement in NO Driver et al. (2020) was conducted using an attosecond
angular streaking technique Kheifets et al. (2022). The photoelectron group
delay, also known as the Wigner time delay, was introduced into particle
scattering theory Eisenbud (1948); Wigner (1955); Smith (1960) and then
extended to various applications including photoionization (see reviews de
Carvalho and Nussenzveig (2002); Deshmukh and Banerjee (2021); Deshmukh et al.
(2021)). In the presence of a SR, the photoelectron propagation is naturally
delayed relative to the free space propagation. Thus the Wigner time delay
acquires large positive values in the range of hundreds of attoseconds (1 as =
$10^{-18}$ s).
In general, an accurate determination of the Wigner time delay requires
detailed knowledge of elastic scattering phases and ionization amplitudes in
various photoemission channels. Gaining knowledge of all these quantities
amounts to performing a so-called complete photoemission experiment Cherepkov
et al. (2000); Holzmeier et al. (2021); Rist et al. (2021). However, in a
simple case of an isolated SR in a strongly dominant channel, the Wigner time
delay can be expressed as the energy derivative of the photoelectron
scattering phase in this particular channel $\tau_{\rm
W}=\partial\delta_{\ell}/\partial E$. In this Letter, we demonstrate that the
phase in such a case can be extracted straightforwardly from the measured
photoionization cross section. The latter is connected with the phase by a
simple formula $\sigma\propto\sin^{2}\delta_{\ell}$. We derive this relation
from the integral equation relating the photoionization cross section with the
transition $T$-matrix. The diagonal $T$-matrix allows one to express the unitary
scattering $S$-matrix and to find the elastic scattering phase. We examine
several shape resonances in the $nd\to\varepsilon f,n=3,4$ ionization channels
of the Xe atom and the I- negative ion which demonstrate the $\sigma(\delta)$
relation to a very high precision. Then we examine the SR in the NO molecule
and find a consistency between the measured photoionization cross section
Kosugi et al. (1992) and the resonant Wigner time delay Holzmeier et al.
(2021). In this way, experimental results obtained over a span of 30 years are
seamlessly bridged. The SR analysis in the N2 molecule also supports our
findings.
We start our derivation by expressing the photoionization amplitude as an
integral of the dipole matrix with the transition $T$-matrix:
$D_{i}(k)=d_{i}(k)+\sum_{j}\int p^{2}dp\,d_{j}(p)G_{i,j}(p,k)T_{ij}(p,k)\ .$
(1)
Here the indices $i,j$ label the residual ion state, and $p,k$ denote the
photoelectron momenta. The Green’s function
$G_{ij}(k,p)=(\varepsilon_{i}+k^{2}-\varepsilon_{j}-p^{2}-i\epsilon)^{-1}$ (2)
accounts for the energy non-conservation in the intermediate virtual state. We
used Eq. (1) previously in convergent close-coupling calculations of multiple
atomic photoionization Bray et al. (2002, 2012). This equation is exhibited
graphically in the top row of diagrams shown in Fig. 1. The transition
$T$-matrix is expressed via the Coulomb $V$-matrix by an infinite sequence of
diagrams displayed in the bottom row of this figure. The knowledge of the
$T$-matrix allows to express the unitary scattering $S$-matrix. In the case of
a single-channel scattering, the said matrix is related to the elastic
scattering phase Bransden (1970):
$S(k)=e^{2i\delta(k)}=1-i2\pi kT(k,k)=1+2{\rm Im}\,G(k)\ T(k,k)$ (3)
In the above expression, we dropped the single valued indices $i,j$ defining
the ionic state in the dominant scattering channel. The integral equation for
the ionization amplitude (1) in a single channel approximation is reduced to
$D(k)=d(k)+\int dp\,d(p)G(p)T(p,k)\approx d(k)\,{\rm Im}\,G(k)\,T(k,k)=\frac{1}{2}\,d(k)\Big[e^{2i\delta(k)}-1\Big]\ .$ (4)
Here we assume that the integral term on the right-hand side of Eq. (4) is dominant over
the bare term near the resonance and the Green’s function can be represented
by its on-shell imaginary part. Our numerical examples show that both
assumptions are satisfied to a high accuracy near a SR. By squaring the
modulus of the ionization amplitude (4), we arrive at the cross section
$\sigma=\sigma_{\rm max}\sin^{2}\delta(k)\ .$ (5)
Here $\sigma_{\rm max}$ is the cross-section maximum at the resonance which
corresponds to $\delta(k)=\pi/2$. A similar expression is valid for a SR in
electron scattering Connerade (1984); Horáček (2019):
$\sigma_{\ell}(k)=4\pi k^{-2}(2\ell+1)\sin^{2}\delta_{\ell}(k)\ .$ (6)
The only difference is the
normalization of the cross section to the incident electron flux rather than
the cross-section maximum in Eq. (5).
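Eq. (5) can be inverted numerically: $\delta=\arcsin\sqrt{\sigma/\sigma_{\rm max}}$, with the branch continued through $\pi/2$ past the cross-section maximum, and the Wigner time delay then follows by differentiation. The sketch below (plain numpy applied to a synthetic arctan-type resonance profile, not to data from any of the targets discussed here) illustrates the procedure:

```python
import numpy as np

def phase_from_sigma(sigma):
    """Invert Eq. (5): delta = arcsin(sqrt(sigma/sigma_max)).
    Past the cross-section maximum the resonant phase keeps rising
    through pi/2, so the branch pi - arcsin(...) is taken there."""
    s = np.asarray(sigma, dtype=float)
    delta = np.arcsin(np.sqrt(np.clip(s / s.max(), 0.0, 1.0)))
    imax = int(np.argmax(s))
    delta[imax + 1:] = np.pi - delta[imax + 1:]
    return delta

def wigner_delay(E, delta):
    """tau_W = d(delta)/dE by central finite differences."""
    return np.gradient(delta, E)
```

On a synthetic profile $\delta=\pi/2+\arctan(E/\Gamma)$ with $\sigma=\sin^{2}\delta$, the routine recovers the input phase and a delay peaking at $1/\Gamma$ on resonance.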
Figure 1: Diagrammatic representation of the integrated dipole matrix element
$D(k)$ (top) and the scattering $T$-matrix (bottom). The following graphical
symbols are in use: a straight line with an arrow to the right or left denotes
a photoelectron or an ionic (hole) state, respectively. The dotted line
represents a photon, and the wavy line stands for the Coulomb interaction. The
shaded circle and oval represent the $D$\- and $T$-matrices, respectively. The
black dot stands for the bare dipole matrix element $d(k)$.
In our numerical demonstrations of the validity of Eq. (5), we use several
approximations of decreasing complexity. The most accurate photoionization
calculations account for inter-channel coupling between various atomic shells
(inter-shell correlation). Such calculations are performed using the random
phase approximation (RPA) Amusia (1990). In less accurate calculations, we
switch off the inter-shell correlation and evaluate the ionization amplitude
as a radial integral between the bound and continuum wave functions found in
the Hartree-Fock (HF) approximation Chernysheva et al. (1976, 1979). By
observing a close agreement between the RPA and HF photoionization cross
sections we ensure the SR is indeed a single channel phenomenon. Finally, we
evaluate the cross section from the elastic scattering phase $\delta_{\ell}$
found in the HF approximation. For neutral atomic targets, we subtract the
long-range Coulomb phase and use the phase difference
$\varDelta_{\ell}=\delta_{\ell}-\sigma_{\ell}$. It appears that the smooth
Coulomb phase plays an insignificant role in the SR formation.
We start our numerical demonstrations with the SR in the $nd\to\varepsilon
f,n=3,4$ ionization channels of the I- negative ion. We use the ionic target
to eliminate the long range Coulomb potential which would otherwise dominate
the non-resonant Wigner time delay near the threshold. The top left panel of
Fig. 2 displays the photoionization cross sections of the $3d$ and $4d$ shells
of I- calculated in the HF approximation as well as derived from the
corresponding elastic scattering phases using Eq. (5). For comparison we
display the RPA calculation for the whole $4d$ shell cross-section correlated
with the $5s$ and $5p$ shells. The relativistic RPA (RRPA) calculation for the
$4d$ shell Radojević and Kelly (1992) is also shown. We observe a close
proximity of the RPA and HF cross sections which differ only marginally from
the phase derived cross sections. The relativistic effects are also
insignificant here. These observations support our assumption that the SR is
largely a single-channel phenomenon and the cross section is derived mostly
from the elastic scattering phase in a given partial wave. The bottom left
panel of Fig. 2 displays the time delay in the $nd\to\varepsilon f,n=3,4$
ionization channels. The two sets of differently styled curves exhibit the
time delay in each channel as obtained by energy differentiation of the
corresponding elastic scattering phase $\tau(\delta)$ and as obtained from the
photoionization cross section $\tau(\sigma)$ using Eq. (5). The two methods of
time delay determination produce very close results.
Similar data for Xe are presented in the right set of panels in Fig. 2. The
calculated $4d$ cross sections are compared with the experimental data
Kammerling et al. (1989); Becker et al (1989). For the resonant time delay
calculations, the effect of the Coulomb field is removed by using the reduced
HF phases $\varDelta_{\ell}=\delta_{\ell}-\sigma_{\ell}$ where the
analytically known Coulomb phases $\sigma_{\ell}$ Barata et al. (2011) are
subtracted. Both the phase derived $\tau(\Delta)$ and cross section derived
$\tau(\sigma)$ time delays are even closer in Xe than in the case of I- as
correlation effects are weakened in Xe by the Coulomb field of the ion
remainder.
In both considered targets, I- and Xe, the SR position depends very strongly
on the depth of the $nd$ hole. The Coulomb attractive potential acting on the
departing photoelectron is stronger for a deeper $3d$ hole which is screened
less by outer atomic shells. Such an un-screened Coulomb potential counters
the repulsive centrifugal potential more efficiently. As the result, the lower
energy photoelectrons are trapped into the SR. This effect is somewhat
stronger in the neutral Xe atom in comparison with the negative I- ion. The
photoelectron phase variation in the SR is close to one unit of $\pi$. When
this variation occurs inside a narrow SR, the energy derivative of the phase
and the corresponding Wigner time delay increase proportionally.
Figure 2: Top: photoionization cross sections of the $nd$ shells of I- (left)
and Xe (right). The HF cross sections in the dominant $nd\to\varepsilon f$
channels are compared with the cross sections derived from the corresponding
HF phases using Eq. (5). Also shown is the RPA calculation for the whole $4d$
shell of I- correlated with the $5s$ and $5p$ shells. A similar $4d$ RRPA
calculation by Radojević and Kelly (1992) is marked RK. The $4d$ cross section
of Xe is compared with the experimental data Kammerling et al. (1989); Becker
et al (1989). Bottom: time delay in the $nd\to\varepsilon f,n=3,4$ channels of
I- (left) and Xe (right) as calculated from the corresponding scattering
phases and the photoionization cross sections. In Xe, the scattering phase and
time delay are also derived from the experimental cross-sections (dots) and
the spherical well model Connerade et al. (1986) (black solid lines).
Figure 3: Top: photoionization cross-sections of the core O $1s$ Kosugi et al. (1992)
and valence $4\sigma$ Holzmeier et al. (2021) photoionization of NO. Middle:
the photoelectron phases $\delta(\sigma)$ derived from the cross-sections
exhibited in the top panel. The $4\sigma\to k\sigma$ eigenphase from Holzmeier
et al. (2021) is shown for comparison. Bottom: the Wigner time delay
$\tau_{\rm W}$ obtained by energy differentiation of the phases derived from
the corresponding cross-sections. The $\tau(\sigma)$ time delay is compared
with the Fano formula delays calculated and measured in Holzmeier et al.
(2021).
Connerade (1984) applied a simple spherical well model to describe the SR in
the $4d$ photoionization of Xe and neighboring atoms. In this model, the
photoelectron phase in the $f$-wave is expressed analytically via the
spherical Bessel functions
$\tan\delta_{3}={zj_{3}(z^{\prime})j_{2}(z)-z^{\prime}j_{3}(z)j_{2}(z^{\prime})\over
zj_{3}(z^{\prime})j_{-3}(z)+z^{\prime}j_{-4}(z)j_{2}(z^{\prime})}\ .$ (7)
Here $z=ak$ and $z^{\prime}=a\sqrt{k^{2}+2D}$ are functions of the
photoelectron momentum $k$, depth $D$ and radius $a$ of the spherical
potential well. The phase $\delta_{3}$ thus found is fed into Eq. (6) with
$\ell=3$ to find the cross-section, which is displayed in the bottom panel of
Fig. 1 of Connerade (1984). We retrofit this cross section with Eq. (6) and
display the extracted phase and time delay in the middle and bottom right
panels of Fig. 2. The time delay in this model is markedly different from our
calculations and the time delay obtained by feeding the experimental data
Kammerling et al. (1989); Becker et al (1989) into Eq. (5). More notably, the
spherical well model fails completely for the SR in the $3d$ photoionization.
This indicates a much more complicated structure of this SR.
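Eq. (7) involves spherical Bessel functions of negative order, which follow from $j_{0}$ and $j_{1}$ by the downward three-term recursion $j_{n-1}=(2n+1)z^{-1}j_{n}-j_{n+1}$. A self-contained numerical sketch (plain numpy; the well depth $D$ and radius $a$ in the usage example are illustrative values, not parameters fitted by Connerade (1984)):

```python
import numpy as np

def sph_j(n, z):
    """Spherical Bessel function j_n(z) for any integer n (positive or
    negative) via the recursion j_{n-1} + j_{n+1} = (2n+1)/z * j_n.
    Adequate here since |n| <= 4 and z = O(1); not for large orders."""
    j0 = np.sin(z) / z
    j1 = np.sin(z) / z**2 - np.cos(z) / z
    if n == 0:
        return j0
    if n > 0:
        jm, jc = j0, j1                       # orders k-1, k
        for k in range(1, n):
            jm, jc = jc, (2 * k + 1) / z * jc - jm
        return jc
    jc, jp = j0, j1                           # orders k, k+1
    for k in range(0, n, -1):                 # recur downward to order n
        jc, jp = (2 * k + 1) / z * jc - jp, jc
    return jc

def tan_delta3(k, D, a):
    """Eq. (7): f-wave phase of a spherical well of depth D, radius a."""
    z, zp = a * k, a * np.sqrt(k**2 + 2 * D)
    num = z * sph_j(3, zp) * sph_j(2, z) - zp * sph_j(3, z) * sph_j(2, zp)
    den = z * sph_j(3, zp) * sph_j(-3, z) + zp * sph_j(-4, z) * sph_j(2, zp)
    return num / den
```

For example, `tan_delta3(0.8, 5.0, 2.0)` evaluates the $f$-wave phase for an illustrative well; sweeping $k$ traces the resonant rise of $\delta_{3}$ through $\pi/2$.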
Next, we apply our analysis to the NO molecule. Here, the SR occurs because an
unoccupied anti-bonding $6\sigma(\sigma^{*})$ orbital appears at a positive
energy and merges with the $k\sigma$ final state continua. In the meantime, an
anti-bonding $2\pi(\pi^{*})$ orbital falls into the discrete spectrum and
manifests itself as an isolated peak in the photoabsorption cross section. Due
to this mechanism, the $\sigma^{*}$ resonance is expected to be similar both
in the core and valence shell ionization. We demonstrate this effect in Fig. 3
where we compare the oxygen $1s$ Kosugi et al. (1992) and the valence
$4\sigma$ Holzmeier et al. (2021) photoionization of NO. The corresponding
photoionization cross-sections are displayed in the top panel of the figure.
The absolute $4\sigma$ photoionization cross-section is read from Fig. 1 of
Holzmeier et al. (2021). The relative O $1s$ cross-section is read from Fig. 2
of Kosugi et al. (1992) and scaled arbitrarily. In the middle panel of Fig. 3
we display the photoelectron scattering phases $\delta(\sigma)$ extracted from
the photoionization cross-sections exhibited in the top panel. For the valence
$4\sigma$ ionization, we make a comparison with the photoelectron scattering
eigenphase exhibited in Fig. 5 of Holzmeier et al. (2021). The latter phase is
shifted vertically to match the cross-section derived phase. This shift does
not affect the time delay which is expressed via the phase derivative. The
phase comparison shows their rather similar slopes which are translated into
similar time delays displayed in the bottom panel of Fig. 3. In the case of
the valence $4\sigma$ ionization, the time delay compares very closely with
the Fano derived delays obtained from the calculated and measured data in
Holzmeier et al. (2021). These observations support the validity of the phase
and time delay extraction from the corresponding photoionization cross-
sections. We also observe a rather similar phase variation and time delay in
the core and valence shell photoionization. This is in sharp contrast to the
atomic case illustrated in Fig. 2. Such a profound difference is explained by
different mechanisms of the SR formation. In atoms, it is a competition of the
attractive Coulomb and repulsive centrifugal potentials that leads to trapping
the photoelectron in a SR. In molecules, it is the trapping of the photoelectron
into a vacant anti-bonding $\sigma^{*}$ orbital which is rather insensitive to
the photoelectron birth orbital. The only difference is an insignificant shift
of the resonance energy which is marginal on the scale of the vastly different
core and valence shell ionization potentials.
Figure 4: Top: Photoionization cross-sections of the N2 molecule as calculated
in Nandi et al. (2020) and measured in Hamnett et al. (1976); Plummer et al.
(1977). The experimental cross section is fitted with the spherical well
ansatz (6). Bottom: The time delay derived from the calculated and measured
cross-sections are compared with direct calculations of Nandi et al. (2020)
for the two lowest vibrational states $\nu=0,1$ of the final
$X\,^{2}\Sigma^{+}_{g}$ ionic state.
Finally, we derive the Wigner time delay from the cross-section of the valence
$3\sigma$ photoionization of the N2 molecule. Here the
$(3\sigma_{g}^{-1})X\,^{2}\Sigma^{+}_{g}$ channel contains a
$\sigma\to\sigma^{*}$ shape resonance merging with the $3\sigma_{g}\to
k\sigma_{u}$ continuum. For our analysis, we take the measured and calculated
cross-sections displayed in Fig. 3B of Nandi et al. (2020) and re-plot them in
the top frame of Fig. 4. There is an insignificant energy shift between the
calculation and the experiment due to the simplicity of the theoretical model
used. The experimental data points are scattered, so an analytic fit with Eq.
(6) is applied before feeding them, along with the calculated cross section,
into Eq. (5). The phases thus derived are converted to the Wigner time delay by
energy differentiation, and the results are displayed in the bottom frame of Fig. 4.
These results are compared with the time delays calculated in Nandi et al.
(2020) for the two lowest vibrational states $\nu=0,1$ of the final
$X\,^{2}\Sigma^{+}_{g}$ ionic state which contains the SR. While the fine
details of the cross-section derived and directly calculated time delays
differ, the overall shape and magnitude of both sets are quite similar.
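The final step of this procedure, turning an extracted phase into a time delay by energy differentiation, can be sketched numerically. The snippet below applies the standard relation $\tau_{W}=\hbar\,d\delta/dE$ to a synthetic Breit-Wigner-type phase; the resonance position and width are illustrative choices, not the parameters behind Fig. 4:

```python
import numpy as np

HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s
ATTO = 1e-18                 # one attosecond in seconds

def wigner_delay(energy_ev, phase_rad):
    """Wigner time delay tau_W = hbar * d(delta)/dE, in attoseconds."""
    dphase_dE = np.gradient(phase_rad, energy_ev)  # numerical derivative
    return HBAR_EV_S * dphase_dE / ATTO

# Synthetic example: a resonant phase rising by ~pi across ~2 eV near E0,
# mimicking the arctan-like phase variation of a shape resonance.
energy = np.linspace(25.0, 35.0, 201)       # photoelectron energy grid, eV
E0, gamma = 30.0, 1.0                       # assumed resonance position/width
phase = np.arctan2(gamma / 2, E0 - energy)  # Breit-Wigner-type phase
delay = wigner_delay(energy, phase)

# The delay peaks at the resonance, where the phase varies fastest.
print(delay[np.argmin(abs(energy - E0))])
```

At the resonance the analytic derivative is $d\delta/dE=2/\gamma$, so the peak delay here is roughly $2\hbar/\gamma\approx 1.3$ fs, which the numerical differentiation reproduces.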
In conclusion, we derive and test a fundamental relation between the cross
section and the time delay in the region of a shape resonance in
photoionization of atoms and molecules. While this relation is natural in
electron scattering, here it is demonstrated and rigorously proven in
photoionization for the first time. This relation signifies an intimate link
between photoionization and electron scattering, demonstrated previously in
multiple atomic photoionization studies by Bray et al. (2002, 2012). We
support our findings by considering several examples of atomic SRs in the
$nd$ shells of Xe and I$^-$, and molecular SRs in the $n\sigma$ shells of N2 and
NO. In the latter molecule, the SR in the core O $1s$ photoionization is also
considered. In the atomic cases, the $\delta(\sigma)$ scattering phase
produces the time delay which is almost identical with the directly calculated
value. In molecules, small differences exist between the two sets but,
generally, they agree remarkably well given the simplicity of the model that we offer.
The importance of the present findings is that they help to link a large data
set of synchrotron measurements (see e.g. Gallagher et al. (1988)) with the
recent laser-based interferometric experiments. This link offers a rigorous
test that allows one to examine the consistency of the two data sets. Another
important observation is how the time delay varies when the depth of the
atomic or molecular hole state changes. In atoms, the time delay grows for
inner shells in comparison with their valence counterparts. This finding
supports the SR model in which the Coulomb field of the ionic core
counterbalances the centrifugal potential in a large $\ell$ partial wave. In
molecules, a different mechanism is more relevant: the SR occurs due to trapping
of the photoelectron in a vacant anti-bonding orbital. Such trapping is rather
insensitive to the birthplace of the photoelectron.
## Acknowledgment:
The authors thank James Cryan and Taran Driver for many stimulating
discussions.
## References
* Nandi et al. (2020) S. Nandi, E. Plésiat, S. Zhong, A. Palacios, D. Busto, M. Isinger, L. Neoricic, C. L. Arnold, R. J. Squibb, R. Feifel, et al., _Attosecond timing of electron emission from a molecular shape resonance_ , Science Advances 6(31), eaba7762 (2020).
* Bardsley and Mandl (1968) J. N. Bardsley and F. Mandl, _Resonant scattering of electrons by molecules_ , Rep. Progr. Phys. 31(2), 471 (1968).
* Dehmer et al. (1988) J. L. Dehmer, D. Dill, and A. C. Parr, _Shape resonances in molecular fields_ , in _Fundamental Processes of Atomic Dynamics_ , edited by J. S. Briggs, H. Kleinpoppen, and H. O. Lutz (Springer US, Boston, MA, 1988), pp. 541–563, URL https://doi.org/10.1007/978-1-4684-5544-1_26.
* Shimamura (2012) I. Shimamura, _Quasi-bound states of electronic and positronic few-body systems_ , in _Advances in Quantum Chemistry_ , edited by C. A. Nicolaides, E. Brändas, and J. R. Sabin (Academic Press, 2012), vol. 63 of _Advances in Quantum Chemistry_ , pp. 165–245, URL https://www.sciencedirect.com/science/article/pii/B9780123970091000047.
* Rau and Fano (1968) A. R. P. Rau and U. Fano, _Atomic potential wells and the periodic table_ , Phys. Rev. 167, 7 (1968).
* Fano and Cooper (1968) U. Fano and J. W. Cooper, _Spectral distribution of atomic oscillator strengths_ , Rev. Mod. Phys. 40, 441 (1968).
* Connerade et al. (1986) J. P. Connerade, J. E. Esteva, and R. Karnatak, eds., _Giant Resonance in Atoms, Molecules and Solids_ (Plenum, New York, 1986), no. 151 in Nato Science Series B.
* Carlson et al. (1983) T. A. Carlson, M. O. Krause, J. W. Taylor, P. R. Keller, M. N. Piancastelli, F. A. Grimm, and T. A. Whitley, _Recent developments in photoelectron dynamics using synchrotron radiation_ , IEEE Transactions on Nuclear Science 30(2), 1034 (1983).
* Child (1996) M. Child, _Molecular Collision Theory_ , Dover Books on Chemistry Series (Dover Publications, 1996).
* Chrysos (1998) M. Chrysos, _Simple spectral width estimation formula extracted from real energy shape resonance wavefunctions_ , J. Phys. B 31(7), 1391 (1998).
* Horáček (2019) J. Horáček, _Resonance cross-section formula for low-energy elastic scattering_ , Phys. Rev. A 100, 032709 (2019).
* Langhoff (1984) P. W. Langhoff (American Chemical Society, 1984), vol. 263 of _ACS Symposium Series_ , chap. Molecular Photoionization Resonances. A Theoretical Chemist’s Perspective, pp. 113–138.
* Piancastelli (1999) M. Piancastelli, _The neverending story of shape resonances_ , J. Electr. Spectr. Relat. Phenom. 100(1), 167 (1999).
* Haessler et al. (2009) S. Haessler, B. Fabre, J. Higuet, J. Caillat, T. Ruchon, P. Breger, B. Carré, E. Constant, A. Maquet, E. Mével, et al., _Phase-resolved attosecond near-threshold photoionization of molecular nitrogen_ , Phys. Rev. A 80, 011404 (2009).
* Loriot et al. (2021) V. Loriot, A. Marciniak, S. Nandi, G. Karras, M. Hervé, E. Constant, E. Plésiat, A. Palacios, F. Martín, and F. Lépine, _Attosecond ionization time delay around a shape resonance in nitrogen measured by the RABBIT- $2\omega$ method_, in _2021 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC)_ (2021), pp. 1–1.
* Huppert et al. (2016) M. Huppert, I. Jordan, D. Baykusheva, A. von Conta, and H. J. Wörner, _Attosecond delays in molecular photoionization_ , Phys. Rev. Lett. 117, 093001 (2016).
* Kamalov et al. (2020) A. Kamalov, A. L. Wang, P. H. Bucksbaum, D. J. Haxton, and J. P. Cryan, _Electron correlation effects in attosecond photoionization of CO2_, Phys. Rev. A 102, 023118 (2020).
* Gong et al. (2022) X. Gong, W. Jiang, J. Tong, J. Qiang, P. Lu, H. Ni, R. Lucchese, K. Ueda, and J. Wu, _Asymmetric attosecond photoionization in molecular shape resonance_ , Phys. Rev. X 12, 011002 (2022).
* Ahmadi et al. (2022) H. Ahmadi, E. Plésiat, M. Moioli, F. Frassetto, L. Poletto, P. Decleva, C. D. Schröter, T. Pfeifer, R. Moshammer, A. Palacios, et al., _Attosecond photoionisation time delays reveal the anisotropy of the molecular potential in the recoil frame_ , Nature Communications 13, 1242 (2022).
* Heck et al. (2021) S. Heck, D. Baykusheva, M. Han, J.-B. Ji, C. Perry, X. Gong, and H. J. Wörner, _Attosecond interferometry of shape resonances in the recoil frame of CF4_, Science Advances 7(49), eabj8121 (2021).
* Driver et al. (2020) T. Driver, E. G. Champenois, J. P. Cryan, S. Li, A. Marinelli, P. Rosenberger, M. F. Kling, L. Ortmann, and A. Landsman, _Attosecond Electron Correlation and Molecular Resonance in K-shell Photoexcitation of Nitric Oxide_ in _51st Annual Meeting of the APS Division of Atomic, Molecular and Optical Physics_ (2020).
* Kheifets et al. (2022) A. S. Kheifets, R. Wielian, V. V. Serov, I. A. Ivanov, A. L. Wang, A. Marinelli, and J. P. Cryan, _Ionization phase retrieval by angular streaking from random shots of XUV radiation_ , Phys. Rev. A 106, 033106 (2022).
* Eisenbud (1948) L. Eisenbud, _Formal properties of nuclear collisions_ , Ph.D. thesis, Princeton University (1948).
* Wigner (1955) E. P. Wigner, _Lower limit for the energy derivative of the scattering phase shift_ , Phys. Rev. 98(1), 145 (1955).
* Smith (1960) F. T. Smith, _Lifetime matrix in collision theory_ , Phys. Rev. 118, 349 (1960).
* de Carvalho and Nussenzveig (2002) C. A. A. de Carvalho and H. M. Nussenzveig, _Time delay_ , Phys. Rep. 364(2), 83 (2002).
* Deshmukh and Banerjee (2021) P. C. Deshmukh and S. Banerjee, _Time delay in atomic and molecular collisions and photoionisation/photodetachment_ , Int. Rev. Phys. Chem. 40(1), 127 (2021).
* Deshmukh et al. (2021) P. C. Deshmukh, S. Banerjee, A. Mandal, and S. T. Manson, _Eisenbud-Wigner-Smith time delay in atom-laser interactions_ , Europ. Phys. J. Spec. Topics 230, 4151 (2021).
* Cherepkov et al. (2000) N. A. Cherepkov, G. Raseev, J. Adachi, Y. Hikosaka, K. Ito, S. Motoki, M. Sano, K. Soejima, and A. Yagishita, _K-shell photoionization of CO: II. determination of dipole matrix elements and phase differences_ , J. Phys. B 33(20), 4213 (2000).
* Holzmeier et al. (2021) F. Holzmeier, J. Joseph, J. C. Houver, M. Lebech, D. Dowek, and R. R. Lucchese, _Influence of shape resonances on the angular dependence of molecular photoionization delays_ , Nature Comms. 12, 7343 (2021).
* Rist et al. (2021) J. Rist, K. Klyssek, N. M. Novikovskiy, M. Kircher, I. Vela-Pérez, D. Trabert, S. Grundmann, D. Tsitsonis, J. Siebert, A. Geyer, et al., _Measuring the photoelectron emission delay in the molecular frame_ , Nature Comms. 12, 6657 (2021).
* Kosugi et al. (1992) N. Kosugi, J. Adachi, E. Shigemasa, and A. Yagishita, _High-resolution and symmetry-resolved N and O K-edge absorption spectra of NO_ , J. Chem. Phys. 97(12), 8842 (1992).
* Bray et al. (2002) I. Bray, D. V. Fursa, A. S. Kheifets, and A. T. Stelbovics, _Theory of electrons and photons colliding with atoms_ , J. Phys. B 35(16), R117 (2002).
* Bray et al. (2012) I. Bray, D. Fursa, A. Kadyrov, A. Stelbovics, A. Kheifets, and A. Mukhamedzhanov, _Electron- and photon-impact atomic ionisation_ , Physics Reports 520(3), 135 (2012).
* Bransden (1970) B. Bransden, _Atomic Collision Theory_ , Lecture notes and supplements in physics (W. A. Benjamin, 1970).
* Connerade (1984) J. P. Connerade, _A general formula for the profiles of ’giant resonances’_ , J. Phys. B 17(6), L165 (1984).
* Amusia (1990) M. Y. Amusia, _Atomic photoeffect_ (Plenum Press, New York, 1990).
* Chernysheva et al. (1976) L. V. Chernysheva, N. A. Cherepkov, and V. Radojevic, _Self-consistent field Hartree-Fock program for atoms_ , Comp. Phys. Comm. 11, 57 (1976).
* Chernysheva et al. (1979) L. V. Chernysheva, N. A. Cherepkov, and V. Radojevic, _Frozen core Hartree-Fock programm for atomic discrete and continuous states_ , Comp. Phys. Comm. 18, 87 (1979).
* Radojević and Kelly (1992) V. Radojević and H. P. Kelly, _Photodetachment of the negative iodine ion including relaxation effects_ , Phys. Rev. A 46, 662 (1992).
* Kammerling et al. (1989) B. Kammerling, H. Kossman, and V. Schmidt, _4d photoionisation in xenon: absolute partial cross section and relative strength of 4d many-electron processes_ , J. Phys. B 22(6), 841 (1989).
* Becker et al. (1989) U. Becker et al., _Subshell photoionization of Xe between 40 and 1000 eV_ , Phys. Rev. A 39, 3902 (1989).
* Barata et al. (2011) J. C. A. Barata, L. F. Canto, and M. S. Hussein, _New asymptotic formulae for the point Coulomb phase shift_ , Brazilian J. Phys. 41, 50 (2011).
* Hamnett et al. (1976) A. Hamnett, W. Stoll, and C. Brion, _Photoelectron branching ratios and partial ionization cross-sections for CO and N2 in the energy range 18-50 eV_, J. Electr. Spectr. Relat. Phenom. 8(5), 367 (1976).
* Plummer et al. (1977) E. W. Plummer, T. Gustafsson, W. Gudat, and D. E. Eastman, _Partial photoionization cross sections of N2 and CO using synchrotron radiation_, Phys. Rev. A 15, 2339 (1977).
* Gallagher et al. (1988) J. W. Gallagher, C. E. Brion, J. A. R. Samson, and P. W. Langhoff, _Absolute cross sections for molecular photoabsorption, partial photoionization, and ionic photofragmentation processes_ , J. Phys. Chem. Ref. Data 17(1), 9 (1988).
# Learning Antidote Data to Individual Unfairness
Peizhao Li Ethan Xia Hongfu Liu
###### Abstract
Fairness is essential for machine learning systems deployed in high-stake
applications. Among all fairness notions, individual fairness, deriving from a
consensus that ‘similar individuals should be treated similarly,’ is a vital
notion to describe fair treatment for individual cases. Previous studies
typically characterize individual fairness as a prediction-invariant problem
when perturbing sensitive attributes on samples, and solve it via the
Distributionally Robust Optimization (DRO) paradigm. However, the adversarial
perturbations used in DRO, which follow a direction covering sensitive
information, do not respect inherent feature correlations or innate data
constraints, and can therefore mislead the model into optimizing at
off-manifold, unrealistic samples. In light of this drawback, in this paper we propose to learn and
generate antidote data that approximately follows the data distribution to
remedy individual unfairness. These generated on-manifold antidote data can be
used through a generic optimization procedure along with original training
data, resulting in a pure pre-processing approach to individual unfairness, or
can also fit well with the in-processing DRO paradigm. Through extensive
experiments on multiple tabular datasets, we demonstrate our method resists
individual unfairness at a minimal or zero cost to predictive utility compared
to baselines. Code is available at https://github.com/brandeis-machine-learning/AntiIndivFairness.
Machine Learning, ICML
## 1 Introduction
Unregulated decisions could reflect racism, ageism, and sexism in high-stakes
applications, such as grant assignments (Mervis, 2022), recruitment (Dastin,
2018), policing strategies (Gelman et al., 2007), and lending services
(Bartlett et al., 2022). To avoid societal concerns, fairness, as one of the
fundamental ethical guidelines for AI, has been proposed to encourage
practitioners to adopt AI responsibly and fairly. The unifying idea of
fairness articulates that ML systems should not discriminate against
individuals or any groups distinguished by legally-protected and sensitive
attributes, therefore preventing disparate impact in automated decision-making
(Barocas & Selbst, 2016).
Many notions have been proposed to specify AI Fairness (Dwork et al., 2012;
Kusner et al., 2017; Hashimoto et al., 2018). Group fairness is currently the
most influential notion in the fairness community, driving different groups to
receive equitable outcomes in terms of statistics like true positive rates or
positive rates, regardless of their sensitive attributes (Hardt et al., 2016).
However, these statistics describe group averages, hence lacking
guarantees on the treatment of individual cases. Alternatively, individual
fairness, established upon a consensus that ‘similar individuals should be
treated similarly,’ shifts the focus to reducing the predictive gap between
conceptually similar instances. Here, ‘similar’ usually means two instances
have close profiles regardless of their different sensitive attributes, and
could have customized definitions upon domain knowledge.
Previous studies solve the individual fairness problem mainly by
Distributionally Robust Optimization (DRO) (Yurochkin et al., 2020; Yurochkin
& Sun, 2021; Ruoss et al., 2020; Yeom & Fredrikson, 2021). They convert the
problem to optimize models for invariant predictions towards original data and
their perturbations, where the perturbations are adversarially constructed to
mostly change the sensitive information in samples. However, the primary use
case of DRO in model robustness is to adversarially perturb the data
distribution by a small degree, usually bounded by some divergence (Duchi &
Namkoong, 2018; Levy et al., 2020). In that case, the perturbations can be regarded as
local perturbations, and the adversarial samples are still on the data
manifold. In contrast, perturbing a sample to best convert its sensitive
information, e.g., directly flipping its sensitive attributes like gender from
male to female, cannot be regarded as a local perturbation. These
perturbations may violate inherent feature correlations, e.g., some features
depend on gender without notice, thus driving the adversarial samples
off the data manifold. Additionally, perturbations in a continuous space
could break the innate constraints of tabular data, e.g., that discrete
features should be exactly in a one-hot format. Consequently, these
adversarial samples for fairness are unrealistic and do not match the data
distribution. Training on such data can result in sub-optimal tradeoffs
between utility and individual fairness.
In this work, we address the above limitations and propose an approach to
rectify models to be individually fair from a pure data-centric perspective.
By establishing a concrete and semantically rich setup for ‘similar’ samples
in individual fairness, we learn the data manifold and construct on-manifold
samples with different sensitive attributes as antidote data to mitigate
individual unfairness. We present two ways to use the generated antidote data:
simply inserting the antidote data into the original training set and training
models through generic optimization, or plugging the antidote data into the DRO
pipeline as an in-processing approach. Our approach can handle multiple
sensitive attributes, each with multiple values. We conduct experiments on census,
criminological, and educational datasets. Compared to standard classifiers and
several baseline methods, our method greatly mitigates individual unfairness,
and has minimal or zero side effects on the model’s predictive utility.
## 2 Individual Fairness: Problem Setup
#### Notations
Let ${f_{\theta}}$ denote a parameterized probabilistic classifier,
$\mathcal{X}$ and $\mathcal{Y}$ denote input and output space with instance
$x$ and label $y$, respectively. For tabular datasets, we assume every input
instance $x$ contains three parts of features: sensitive features
$\mathbf{s}=[s_{1},s_{2},\cdots,s_{N_{s}}]$, continuous features
$\mathbf{c}=[c_{1},c_{2},\cdots,c_{N_{c}}]$, and discrete features
$\mathbf{d}=[d_{1},d_{2},\cdots,d_{N_{d}}]$, with $N_{s}$, $N_{c}$, and
$N_{d}$ denoting the number of features in each part. We assume these three parts of features are exclusive,
i.e., $\mathbf{s}$, $\mathbf{c}$, and $\mathbf{d}$ do not share any feature or
column. We use $\mathbf{d}_{x}$ to denote the discrete features of instance
$x$, and similarly for other features. For simplification, we shall
assume discrete features $\mathbf{d}$ contain categorical features before one-
hot encoding, continuous features $\mathbf{c}$ contain features in a unified
range like $[0,1]$ after some scaling operations, and all data have the same
feature dimension. We consider sensitive attributes in a categorical format.
Any continuous sensitive attribute can be binned into discrete intervals to
fit our scope. We use $\oplus$ to denote feature-wise vector-vector or vector-
scalar concatenation.
#### Individual Fairness: Concept and Practical Usage
The concept of individual fairness was first raised in Dwork et al. (2012).
Deriving from a consensus that ‘similar individuals should be treated
similarly,’ the problem is formulated as a Lipschitz mapping problem.
Formally, for arbitrary instances $x$ and $x^{\prime}\in\mathcal{X}$,
individual fairness is defined as a
($D_{\mathcal{X}},D_{\mathcal{Y}}$)-Lipschitz property of a classifier
${f_{\theta}}$:
$D_{\mathcal{Y}}({f_{\theta}}(x),{f_{\theta}}(x^{\prime}))\leq
D_{\mathcal{X}}(x,x^{\prime}),$ (1)
where $D_{\mathcal{X}}(\cdot,\cdot)$ and $D_{\mathcal{Y}}(\cdot,\cdot)$ are
some distance functions respectively defined in the input space $\mathcal{X}$
and output space $\mathcal{Y}$, and shall be customized upon domain knowledge.
However, for a general problem, it could be demanding to carry out a concrete
and interpretable $D_{\mathcal{X}}(\cdot,\cdot)$ and
$D_{\mathcal{Y}}(\cdot,\cdot)$, hence making individual fairness impractical
in many applications. To simplify this problem from a continuous Lipschitz
constraint, some works evaluate individual fairness of models with a binary
distance function: $D_{\mathcal{X}}(x,x^{\prime})=0$ for two different samples
$x$ and $x^{\prime}$ if they are exactly the same except sensitive attributes,
i.e., $\mathbf{c}=\mathbf{c}^{\prime}$, $\mathbf{d}=\mathbf{d}^{\prime}$, and
$\mathbf{s}\neq\mathbf{s}^{\prime}$ (Yurochkin et al., 2020; Yurochkin & Sun,
2021). Despite the interpretability, this constraint can be too harsh to find
sufficient comparable samples since other features may correlate with
sensitive attributes. For empirical evaluation, these works can only simulate
the experiments with semi-synthetic data: flipping one’s sensitive attribute
to construct a fictitious sample and evaluating the predictive gap. Note that
for tabular data, simply discarding the sensitive attributes would appear
perfectly individually fair under this simulation. Other work (Lahoti et al.,
2019) defines $D_{\mathcal{X}}(\cdot,\cdot)$ on representational space with
Euclidean distance, therefore lacking interpretability over the input tabular
data.
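The semi-synthetic probe described above, flipping a binary sensitive attribute and measuring the predictive gap, can be sketched as follows; the linear-logistic classifier and feature layout are purely illustrative:

```python
import numpy as np

def flip_gap(predict_proba, X, sens_col):
    """Semi-synthetic individual-fairness probe: flip a binary sensitive
    column and report the mean absolute predictive gap."""
    X_flip = X.copy()
    X_flip[:, sens_col] = 1 - X_flip[:, sens_col]
    return np.abs(predict_proba(X) - predict_proba(X_flip)).mean()

# Illustrative linear-logistic "classifier" with a hand-set weight vector;
# the last coordinate plays the sensitive column.
w = np.array([1.0, -0.5, 0.8])
predict = lambda X: 1 / (1 + np.exp(-X @ w))

rng = np.random.default_rng(0)
X = np.c_[rng.normal(size=(100, 2)), rng.integers(0, 2, size=100)]
print(flip_gap(predict, X, sens_col=2))   # nonzero: w uses the sensitive column

# Discarding the sensitive column (w2[2] = 0) makes the gap exactly zero,
# which is the degenerate "perfectly fair" behavior noted in the text.
w2 = np.array([1.0, -0.5, 0.0])
predict2 = lambda X: 1 / (1 + np.exp(-X @ w2))
print(flip_gap(predict2, X, sens_col=2))
```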
To have a practical and interpretable setup, we present Definition 2.1 as an
exemplar describing under what conditions we consider two samples to be
comparable for an imperfect classifier. When $x$ and $x^{\prime}$ are
comparable, for the purpose of individual fairness their predictive gap
$|{f_{\theta}}(x)-{f_{\theta}}(x^{\prime})|$ should be minimized by the classifier.
###### Definition 2.1 (comparable samples).
Given thresholds $T_{d},T_{c}\in\mathbb{R}_{\geq 0}$, $x$ and $x^{\prime}$ are
comparable iff all of the following constraints are satisfied:
1. For discrete features, $\sum_{i=1}^{N_{d}}\mathbbm{1}\\{d_{i}\neq d_{i}^{\prime}\\}\leq T_{d}$;
2. For continuous features, $\max\\{|c_{i}-c_{i}^{\prime}|\\}\leq T_{c},\forall\ 1\leq i\leq N_{c}$; and
3. For the ground truth label, $y=y^{\prime}$.
###### Remark 2.1.
For some pre-defined thresholds $T_{d}$ and $T_{c}$, two samples are
considered comparable iff:
1. at most $T_{d}$ of the discrete features differ;
2. the largest disparity among all continuous features is smaller than or equal to $T_{c}$; and
3. the two samples have the same ground truth label.
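A direct translation of Definition 2.1 may help make the thresholds concrete. In the sketch below (pure Python; the feature layout and threshold values are illustrative), the discrete, continuous, and label parts of each sample are passed separately:

```python
def is_comparable(d1, d2, c1, c2, y1, y2, T_d=2, T_c=0.05):
    """Check Definition 2.1: at most T_d differing discrete features,
    max continuous gap at most T_c, and identical ground-truth labels."""
    if y1 != y2:                                   # constraint 3
        return False
    if sum(a != b for a, b in zip(d1, d2)) > T_d:  # constraint 1
        return False
    # constraint 2: largest per-feature disparity among continuous features
    return max(abs(a - b) for a, b in zip(c1, c2)) <= T_c

# Two loan applicants: payment-status history (discrete) and scaled bill
# amounts (continuous). They are comparable even though their sensitive
# attributes (not passed in) may differ arbitrarily.
print(is_comparable([1, 1, 0, 1], [1, 0, 0, 1],
                    [0.40, 0.52], [0.42, 0.50], 1, 1))  # True
```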
Definition 2.1 allows two samples to be slightly different in discrete and
continuous features, and arbitrarily different in sensitive attributes.
Distinct from previous definitions, the constraints in Definition 2.1 are
relaxed so that sufficient real comparable samples can be found, while
remaining highly interpretable and semantically rich compared to definitions
in representation space. As a practical use case, in lending data, to certify
individual fairness for two samples, we can set discrete features to the
history of past payment status (where value 1 indicates a complete payment,
and value 0 indicates a missing payment), and continuous features to the
monthly amount of the bill statement. Two samples are considered comparable
if they have only a limited difference in payment status and bill amounts.
Note that Definition 2.1 is only one exemplar of comparable samples in
individual fairness. Other definitions, for instance involving ordinal
features or enforcing some features to be identical, are also practicable and
can be flexibly extended upon task demand. In this paper, we take Definition
2.1 as a canonical example; this particular formulation does not affect our
model design. We shall evaluate individual fairness with respect to
Definition 2.1, and mostly consider comparable samples with different
sensitive attributes.
## 3 Learning Antidote Data to Individual Unfairness
#### Motivation
Several methods solve the individual fairness problem through Distributionally
Robust Optimization (DRO) (Yurochkin et al., 2020; Yurochkin & Sun, 2021;
Ruoss et al., 2020; Yeom & Fredrikson, 2021). The high-level idea is to
optimize a model at some distribution with perturbations that dramatically
change the sensitive information. The optimization solution can be summarized
as:
$\displaystyle\min_{f_{\theta}}\mathbb{E}_{(x,y)}\ \ell({f_{\theta}}(x),y),$
(2) $\displaystyle\min_{f_{\theta}}\mathbb{E}_{(x,y)}\
\max_{x+\epsilon\sim\mathcal{D}_{\text{Sen}}}\ell({f_{\theta}}(x+\epsilon),y),$
where the first term is standard empirical risk minimization, and the second
term is for loss minimization over adversarial samples. The formulation is
technically relevant to traditional DRO (Duchi & Namkoong, 2018; Levy et al.,
2020), while the difference derives from $\mathcal{D}_{\text{Sen}}$, which is
set to some customized distribution offering perturbations to specifically
change one’s sensitive information. For example, Yurochkin et al. (2020)
characterizes $\mathcal{D}_{\text{Sen}}$ as a subspace learnt from a logistic
regression model, which contains the most predictability of the sensitive
attributes. Ruoss et al. (2020) identifies this distribution via logical
constraints.
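To make the min-max structure of Equation 2 concrete, the sketch below implements a stripped-down version with a logistic model in numpy: the inner maximization perturbs each sample along a fixed "sensitive direction" (a stand-in for $\mathcal{D}_{\text{Sen}}$) by gradient ascent on the loss, and the outer minimization updates the model on the perturbed samples. This illustrates the general paradigm only, not the exact procedure of any cited method:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 2 features; the second coordinate plays the sensitive direction.
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)
v = np.array([0.0, 1.0])        # unit vector spanning the "sensitive subspace"

w = np.zeros(2)
lr, adv_lr, eps_max = 0.5, 0.5, 1.0
for _ in range(200):
    # Inner max: ascend the loss along v, with magnitude clipped to eps_max.
    eps = np.zeros(n)
    for _ in range(5):
        p = sigmoid((X + eps[:, None] * v) @ w)
        grad_eps = (p - y) * (v @ w)        # d loss / d eps, per sample
        eps = np.clip(eps + adv_lr * grad_eps, -eps_max, eps_max)
    # Outer min: gradient step on the perturbed batch.
    Xp = X + eps[:, None] * v
    p = sigmoid(Xp @ w)
    w -= lr * Xp.T @ (p - y) / n

# Worst-case training suppresses reliance on the sensitive direction.
print(w)
```

Because any weight on the sensitive direction can be exploited by the inner maximization, the outer updates keep that coordinate of $w$ small relative to the predictive one.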
Though feasible, we would like to respectfully point out that: (1)
Perturbations that violate feature correlations can push adversarial
samples off the data manifold. An intuitive example is treating age as the
sensitive attribute. Perturbations can change a person’s age arbitrarily to
find an optimal age that encourages the model to predict most differently.
Such perturbations ignore the correlations between the sensitive feature and
other features like education or annual income, resulting in an adversarial
sample with age 5 or 10 that nonetheless holds a doctoral degree or earns an
$80K annual income. (2) Samples with arbitrary perturbations can easily break the nature
of tabular data. In tabular data, categorical variables take only one-hot
discrete values after one-hot encoding, and continuous variables potentially
have a fixed range. Under arbitrary perturbations, an adversarial sample may
end up half bachelor's degree and half doctoral degree. These two drawbacks
lead samples from $\mathcal{D}_{\text{Sen}}$ to be unrealistic and to escape
the data manifold, thus distorting the learning and, as shown in our
experiments, resulting in sub-optimal tradeoffs between fairness and utility.
In this work, we address the above issues related to
$\mathcal{D}_{\text{Sen}}$, and propose to generate on-manifold data for
individual fairness purposes. The philosophy is, given an original training
sample, to generate its comparable samples with different yet plausible
sensitive attributes, such that the generated data fit into the existing data
manifold and obey the inherent feature correlations and innate data
constraints. We name the generated data antidote data. The antidote data
can either be mixed with the original training data as a pre-processing
technique, or serve as $\mathcal{D}_{\text{Sen}}$ in Equation 2 as an
in-processing approach. By training on antidote data, a classifier learns to
give individually fair predictions.
### 3.1 Antidote Data Generator
We start by elaborating on the generator of antidote data. The purpose of
antidote data generator ${g_{\theta}}$ is, given a training sample $x$,
generating its comparable samples with different sensitive attribute(s). To
ensure the generations have different sensitive features, we build
${g_{\theta}}$ as a conditional generative model to generate a sample with
pre-defined sensitive features. Given new sensitive attributes
$\bar{\mathbf{s}}\neq\mathbf{s}_{x}$ (recall $\mathbf{s}_{x}$ is the sensitive
attribute of $x$), the objective is:
$\displaystyle{g_{\theta}}:(x,\bar{\textbf{s}},\mathbf{z})\rightarrow\hat{x},$
(3) $\displaystyle\text{with}\ \mathbf{s}_{\hat{x}}=\bar{\mathbf{s}},\ x\
\text{and}\ \hat{x}\ \text{satisfy \lx@cref{creftype~refnum}{def:comp}},$
where $\mathbf{z}\sim\mathcal{N}(0,\,1)$ is drawn from a standard multivariate
normal distribution as a noise vector. The generation $\hat{x}$ should follow
the data distribution and satisfy some innate constraints from discrete or
continuous features, i.e., the one-hot format for discrete features and a
reasonable range for continuous features. In the following, we shall elaborate
the design and training strategy for ${g_{\theta}}$.
#### Value Encoding
We encode categorical features using one-hot encoding. For continuous
features, we adopt mode-specific normalization (Xu et al., 2019) to encode
every column of continuous values independently, which has been shown to be
effective for modeling tabular data. We use variational Bayesian inference to
estimate a Gaussian mixture for the distribution of each continuous feature.
This approach decomposes the distribution into several modes, where each mode
is a Gaussian distribution with its own parameters. Formally, given a value
$c_{i,j}$ in the $i$-th continuous-feature column and $j$-th row of the
table, the learned Gaussian mixture is
$\mathbb{P}(c_{i,j})=\sum_{k=1}^{K_{i}}w_{i,k}\mathcal{N}(c_{i,j};\mu_{i,k},\sigma_{i,k}^{2})$,
where $w_{i,k}$ is the weight of the $k$-th mode in the $i$-th continuous
feature, and $\mu_{i,k}$ and $\sigma_{i,k}$ are the mean and standard
deviation of the normal distribution of the $k$-th mode. We use the learned
Gaussian mixture to encode
every continuous value. For each value $c_{i,j}$, we estimate the probability
from each mode via
$p_{i,k}(c_{i,j})=w_{i,k}\mathcal{N}(c_{i,j};\mu_{i,k},\sigma_{i,k}^{2})$,
and sample one mode from the discrete probability distribution
$\mathbf{p}_{i}$ with $K_{i}$ values. Having a sampled mode $k$, we represent
the mode of $c_{i,j}$ using a one-hot mode indicator vector, an all-zero
vector $\mathbf{e}_{i,x}$ except the $k$-th entry equal to 1. We use a scalar
to represent the relative value within the $k$-th mode:
$v_{i,x}=(c_{i,j}-\mu_{i,k})/4\sigma_{i,k}$. By encoding all continuous
values, we obtain a re-representation $\tilde{x}$ that substitutes for $x$ as the
input for antidote data generator ${g_{\theta}}$:
$\tilde{x}=(v_{1,x}\oplus\mathbf{e}_{1,x}\oplus\cdots\oplus
v_{N_{c},x}\oplus\mathbf{e}_{N_{c},x})\oplus\mathbf{d}_{x}\oplus\mathbf{s}_{x}.$
(4)
Recall $\oplus$ denotes vector-vector or vector-scalar concatenation. To
construct a comparable sample $\hat{x}$, the task for continuous features is
to classify the mode from latent representations, i.e., to estimate
$\mathbf{e}_{i,x}$, and to predict the relative value $v_{i,x}$. We can
decode $v_{i,x}$ and $\mathbf{e}_{i,x}$ back to a continuous value using the
learned Gaussian mixture.
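With mixture parameters for one column in hand (hard-coded below for illustration; the paper estimates them by variational Bayes, and samples the mode from $\mathbf{p}_{i}$ rather than taking the argmax as done here), the encode/decode round trip can be sketched as:

```python
import numpy as np

# Assumed two-mode mixture for one continuous column (illustrative parameters).
w_k  = np.array([0.6, 0.4])     # mode weights
mu_k = np.array([0.2, 0.8])     # mode means
sd_k = np.array([0.05, 0.10])   # mode standard deviations

def encode(c):
    """Mode-specific normalization: pick the most responsible mode k and
    return (v, e) with v = (c - mu_k) / (4 * sd_k) and one-hot indicator e."""
    dens = w_k * np.exp(-0.5 * ((c - mu_k) / sd_k) ** 2) / sd_k
    k = int(np.argmax(dens))    # deterministic stand-in for sampling the mode
    v = (c - mu_k[k]) / (4.0 * sd_k[k])
    e = np.eye(len(w_k))[k]
    return v, e

def decode(v, e):
    k = int(np.argmax(e))
    return v * 4.0 * sd_k[k] + mu_k[k]

v, e = encode(0.25)
print(v, e, decode(v, e))  # round trip recovers the original value 0.25
```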
#### Structural Design
The whole model is designed in a Generative Adversarial Networks (Goodfellow
et al., 2014) style, consisting of a generator ${g_{\theta}}$ and a
discriminator ${d_{\theta}}$.
The generator ${g_{\theta}}$ takes the re-representation $\tilde{x}$, a
pre-defined sensitive feature $\bar{\mathbf{s}}$, and a noise vector
$\mathbf{z}$ as input. The output from ${g_{\theta}}$ is a vector of the same size as
$\tilde{x}$ including $v_{\hat{x}}$, $\mathbf{e}_{\hat{x}}$,
$\mathbf{d}_{\hat{x}}$, and $\mathbf{s}_{\hat{x}}$. To ensure all discrete
features are in a one-hot manner so that the generations will follow a tabular
distribution, we apply Gumbel softmax (Jang et al., 2017) as the final
activation to each discrete feature and obtain $\mathbf{d}_{\hat{x}}$. Gumbel
softmax is a differentiable operation to encode a continuous distribution over
a simplex and approximate it to a categorical distribution. This function
controls the sharpness of output via a hyperparameter called temperature.
Gumbel softmax is also applied to sensitive features $\mathbf{s}_{\hat{x}}$
and mode indicator vectors $\mathbf{e}_{\hat{x}}$ to ensure the one-hot
format.
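A minimal pure-Python sketch of the Gumbel softmax operation described above (an actual model would use a framework implementation such as PyTorch's; the function name and signature here are ours):

```python
import math, random

def gumbel_softmax(logits, temperature, rng=random):
    """softmax((logits + Gumbel(0,1) noise) / temperature); lower
    temperatures push the output toward a one-hot vector."""
    # Gumbel(0,1) noise via the inverse-CDF transform of a uniform draw
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + n) / temperature for l, n in zip(logits, g)]
    m = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [v / s for v in exps]
```

Lowering the temperature sharpens the output toward a one-hot vector, which is why it is applied to discrete features, sensitive features, and mode indicator vectors.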
The purpose of the discriminator ${d_{\theta}}$ is to distinguish fake
generations from real samples; in addition, we build the discriminator to
judge the comparability of generated samples relative to real comparable
samples. Through the discriminator, the constraints from comparable samples
are implicitly encoded into the adversarial training. We formulate the fake
sample for the discriminator as $\hat{x}\oplus\tilde{x}\oplus(\hat{x}-\tilde{x})$,
and the real sample as
$\tilde{x}^{\prime}\oplus\tilde{x}\oplus(\tilde{x}^{\prime}-\tilde{x})$, where
$\tilde{x}^{\prime}$ is the re-representation of a comparable sample
$x^{\prime}$ to $x$ drawn from the training data. The third term
$\hat{x}-\tilde{x}$ is included to emphasize the difference between two
comparable samples. Implicitly regularizing comparability leaves the generator
full flexibility to fit various definitions of comparable samples, and avoids
adding complicated penalty terms, as long as real comparable samples exist in
the data.
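Both discriminator inputs share one construction, sketched below (pure Python; the helper name is ours): the fake input is `disc_input(hat_x, tilde_x)` and the real input is `disc_input(tilde_x_prime, tilde_x)`.

```python
def disc_input(a, b):
    """Build a ⊕ b ⊕ (a - b); the explicit difference term emphasizes
    how the pair of comparable samples differs."""
    return list(a) + list(b) + [x - y for x, y in zip(a, b)]
```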
#### Training Antidote Data Generator
We train the model iteratively through the following objectives with gradient
penalty (Gulrajani et al., 2017) to ensure stability:
$\displaystyle\min_{g_{\theta}}\mathbb{E}_{x,x^{\prime}\sim\mathcal{D}_{\text{comp}}}\
\ell_{\text{CE}}(\mathbf{s}_{\hat{x}},\mathbf{s}_{x^{\prime}})-{d_{\theta}}({g_{\theta}}(\tilde{x}\oplus\mathbf{s}_{x^{\prime}}\oplus\mathbf{z})),$
(5)
$\displaystyle\min_{d_{\theta}}\mathbb{E}_{x,x^{\prime}\sim\mathcal{D}_{\text{comp}}}\
{d_{\theta}}({g_{\theta}}(\tilde{x}\oplus\mathbf{s}_{x^{\prime}}\oplus\mathbf{z}))-{d_{\theta}}(\tilde{x}^{\prime}),$
where $\mathcal{D}_{\text{comp}}$ is the distribution of real comparable
sample pairs in the data, and $\ell_{\text{CE}}$ is the cross-entropy loss
that penalizes the prediction of every sensitive attribute in
$\mathbf{s}_{\hat{x}}$, with $\mathbf{s}_{x^{\prime}}$ as the ground truth.
### 3.2 Learning with Antidote Data
In practice, ${g_{\theta}}$ is not guaranteed to produce comparable samples
satisfying Definition 2.1, since during generator training we only apply soft
penalties to enforce comparability. Thus, we adopt a post-processing step Post
to select comparable samples from the raw generations. Given a dataset $X$, in
one sampling iteration we feed every $x$ to the generator with each possible
sensitive feature (except $\mathbf{s}_{x}$), collect the raw generations
$\hat{X}$, and apply Post$(\hat{X})$ to obtain the antidote data. The label
$y$ for each antidote sample is copied from the original data. In experiments,
we may run multiple sampling iterations to enlarge the pool of antidote data.
We describe two simple ways to apply the generated antidote data for
individual fairness.
#### Pre-processing
The first way to use antidote data is to simply insert all antidote data into
the original training set:
$\min_{f_{\theta}}\sum\ell(f_{\theta}(x),y),\quad x\in
X+\texttt{Post}(\hat{X}).$ (6)
Since we only add training data, this approach is model-agnostic, compatible
with any model optimization procedure, and fits well with established
data-analysis toolkits such as sklearn (Pedregosa et al., 2011). We consider
this convenience a favorable property for practitioners.
#### In-processing
The second way is to apply antidote data within Distributionally Robust
Optimization. We present the training procedure in Algorithm 1. In every
training iteration, in addition to optimizing on the real data with
$\ell(x,y)$, we add a step that selects, among $x$'s comparable samples in the
antidote data, the one with the highest loss under the current model
parameters, and uses the gradients from $\max_{\hat{x}\in\\{\hat{x}_{i}\\}^{M}\leftarrow
x}\ell(\hat{x},y)$ to update the model. The algorithm resembles DRO with
perturbations along sensitive directions, but we replace the perturbations
with on-manifold generated data. The additional loss term in Algorithm 1 can
be upper bounded by gradient-smoothing regularization terms. Taking a Taylor
expansion, we have:
$\displaystyle\max_{\hat{x}\in\\{\hat{x}_{i}\\}^{M}\leftarrow
x}\ell(\hat{x},y)$ (7)
$\displaystyle=\ell(x,y)+\max_{\hat{x}\in\\{\hat{x}_{i}\\}^{M}\leftarrow
x}[\ell(\hat{x},y)-\ell(x,y)]$
$\displaystyle=\ell(x,y)+\max_{\hat{x}\in\\{\hat{x}_{i}\\}^{M}\leftarrow
x}[\langle\nabla_{x}\ell(x,y),(\hat{x}-x)\rangle]+\mathcal{O}(\delta^{2})$
$\displaystyle\leq\ell(x,y)+T_{d}\max_{i}\nabla_{d_{i}}\ell(x,y)+T_{c}\max_{i}\nabla_{c_{i}}\ell(x,y)$
$\displaystyle+N_{s}\max_{i}\nabla_{s_{i}}\ell(x,y)+\mathcal{O}(\delta^{2}).$
Recall $T_{d}$ and $T_{c}$ are the thresholds for discrete and continuous
features in Definition 2.1. $\mathcal{O}(\delta^{2})$ is the higher-order term
from the Taylor expansion. The last inequality follows from Definition 2.1. The three
gradients on discrete, continuous, and sensitive features serve as gradient
regularization and encourage the model to have invariant loss with regard to
comparable samples. However, the upper bound is a sufficient but not necessary
condition, and our solution encodes the real data distribution into the
gradient regularization, solving individual unfairness with favorable
trade-offs.
Algorithm 1 AntiDRO: DRO with Antidote Data for Individual Fairness
1: Input: Training data $T=\\{(x_{i},y_{i})\\}^{N}$, learning rate $\eta$,
loss function $\ell$
2: Train Antidote Data Generator ${g_{\theta}}$ with $\\{x_{i}\\}^{N}$ and
comparable constraints
3: Sample antidote data $\hat{X}$ using ${g_{\theta}}$
4: repeat
5:
${f_{\theta}}:\theta\leftarrow\theta-\eta\mathbb{E}_{(x,y)}[\nabla_{\theta}(\max_{\hat{x}\in\\{\hat{x}_{i}\\}^{M}\leftarrow
x}\ell(\hat{x},y)+\ell(x,y))]$ // $\\{\hat{x}_{i}\\}^{M}\leftarrow x$ is the
set of $M$ comparable samples of $x$ and
$\\{\hat{x}_{i}\\}^{M}\in\texttt{Post}(\hat{X})$
6: until convergence
7: Return: Individually fair classifier ${f_{\theta}}$
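The per-sample objective inside the expectation of line 5 can be sketched as follows (pure Python; `loss` is any per-example loss and `comps` is the antidote comparable set of $x$, names ours):

```python
def antidro_loss(loss, x, y, comps):
    """Real-data loss plus the loss at the worst-performing comparable
    antidote sample, matching the bracketed term in line 5 of Algorithm 1."""
    worst = max(loss(x_hat, y) for x_hat in comps)  # max over {x_hat}^M <- x
    return loss(x, y) + worst
```

Gradient descent on this quantity, averaged over the batch, recovers the update in line 5 of Algorithm 1.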
## 4 Experiments
### 4.1 Experimental Setup
#### Datasets
We use the census datasets Adult (Kohavi & Becker, 1996) and Dutch (Van der
Laan, 2000), the educational datasets Law School (Wightman, 1998) and Oulad
(Kuzilek et al., 2017), and the criminological dataset Compas (Angwin et al.,
2016) in our experiments. For each dataset, we select one or two
ethics-related attributes as sensitive attributes; these attributes exhibit
significant individual unfairness under regular training. We report dataset
details in Appendix A.
#### Protocol
For all datasets, we transform discrete features into one-hot encodings,
standardize the features by removing the mean and scaling to unit variance,
and transform continuous features into the range [0, 1]. We construct pairs of
comparable samples for both the training and testing sets. We evaluate both
model utility and individual fairness. For utility, we consider the area under
the Receiver Operating Characteristic curve (ROC) and Average Precision (AP),
which characterize the precision of probabilistic outputs in binary
classification. For individual fairness, we consider the gap in probabilistic
scores between comparable samples when both samples have the same positive or
negative label (abbreviated as Pos. Comp. and Neg. Comp.). We evaluate
unfairness for Pos. Comp. and Neg. Comp. in terms of the arithmetic mean
(Mean) and the upper quartile (Q3); the upper quartile reflects the
performance of the worst-performing pairs. For a base model with randomness,
such as NN, we run the experiments five times and report the average results.
As elaborated in Section 3.2, we run several sampling iterations to draw raw
generations $\hat{X}$ from the antidote data generator and apply
Post$(\hat{X})$ to select all comparable samples. This procedure does not
yield a fixed amount of antidote data across datasets, since the generator
differs. We report the relative amount of antidote data used in each
experiment below and in Appendix A.
#### Baselines
We consider two base models: logistic regression (LR) and a three-layer neural
network (NN). We use the logistic regression from Scikit-learn (Pedregosa et
al., 2011); our antidote data is compatible with this mature implementation
since it makes no change to the model. Approaches involving DRO do not
currently support this LR pipeline and are instead validated through neural
networks implemented with PyTorch. We compare against the following five
baselines:
1. Discard sensitive features (Dis), which simply discards the appointed
sensitive features in the input data.
2. Project (Proj) (Yurochkin et al., 2020), which finds a linear projection
via logistic regression that minimizes the predictability of sensitive
attributes in the data; it requires an extra pre-processing step to project
the input data.
3. SenSR (Yurochkin et al., 2020), which is based on DRO: it finds, through
logistic regression, a sensitive subspace that encodes the most sensitive
information, and generates perturbations on this subspace during optimization.
4. SenSeI (Yurochkin & Sun, 2021), which also uses the DRO paradigm but
involves distance penalties on both inputs and model predictions to construct
perturbations.
5. LCIFR (Ruoss et al., 2020), which computes adversarial perturbations with
logical constraints and optimizes representations under attacks from these
perturbations.
We largely follow the default hyperparameter settings from the original
implementations, but fine-tune some parameters to avoid degeneration in
certain cases. For our approaches, we use Anti to denote simply merging the
original and antidote data, Anti+Dis to denote discarding sensitive attributes
in both the original and antidote data, and AntiDRO to denote antidote data
with DRO.
Table 1: Experimental results on Adult dataset. Our methods are highlighted
with green background in the table.
| ROC $\uparrow$ | AP $\uparrow$ | | Pos. Comp. (Mean/Q3) $\downarrow$ | | Neg. Comp. (Mean/Q3) $\downarrow$
---|---|---|---|---|---|---
LR (Base) | 90.04 | 75.72 | | 31.75 / 43.55 | | 10.25 / 18.37
LR+Proj | 81.40 -9.60% | 62.19 -17.87% | | 25.10 -20.95% / 34.83 -20.02% | | 23.29 +127.17% / 33.03 +79.81%
LR+Dis | 89.95 -0.10% | 75.59 -0.17% | | 30.81 -2.94% / 41.10 -5.62% | | 9.40 -8.29% / 17.78 -3.18%
LR+Anti | 89.72 -0.35% | 75.04 -0.90% | | 24.72 -22.13% / 30.84 -29.18% | | 8.66 -15.56% / 14.64 -20.29%
LR+Anti+Dis | 89.56 -0.53% | 74.83 -1.17% | | 23.02 -27.49% / 26.61 -38.90% | | 8.12 -20.76% / 13.91 -24.28%
NN (Base) | 88.18 | 70.09 | | 33.21 / 47.84 | | 13.03 / 23.37
NN+Proj | 87.42 -0.86% | 68.51 -2.25% | | 32.38 -2.52% / 46.45 -2.91% | | 13.59 +4.36% / 23.69 +1.37%
NN+Dis | 88.15 -0.04% | 70.27 +0.26% | | 32.90 -0.93% / 44.79 -6.37% | | 11.83 -9.17% / 23.36 -0.08%
SenSR | 86.01 -2.47% | 66.19 -5.57% | | 28.68 -13.63% / 44.21 -7.59% | | 14.88 +14.20% / 23.07 -1.31%
SenSeI | 86.42 -2.00% | 66.08 -5.72% | | 27.92 -15.94% / 35.92 -24.91% | | 13.22 +1.53% / 26.01 +11.28%
LCIFR | 87.35 -0.94% | 68.52 -2.24% | | 32.51 -2.13% / 44.84 -6.26% | | 12.97 -0.41% / 26.49 +13.35%
NN+Anti | 87.95 -0.26% | 69.51 -0.83% | | 26.05 -21.57% / 35.76 -25.26% | | 10.42 -19.97% / 16.88 -27.80%
NN+Anti+Dis | 87.79 -0.44% | 69.40 -0.98% | | 24.40 -26.53% / 32.12 -32.85% | | 9.56 -26.63% / 15.54 -33.51%
AntiDRO | 87.91 -0.31% | 71.08 +1.41% | | 17.46 -47.44% / 20.04 -58.10% | | 5.48 -57.96% / 6.87 -70.59%
### 4.2 Antidote Data Empirically Mitigate Unfairness
We present our empirical results on Table 1 and Figure 1, and defer more to
Appendix C. From these results we have the following major observations.
#### Antidote Data Show Good Performance
Across all datasets, models with antidote data mostly perform best in terms of
individual fairness, with only a minimal drop, or sometimes even a slight
improvement, in predictive utility. For example, on the Law School dataset in
Table 5, our NN+Anti mitigates individual unfairness by 70.38% and 63.36% in
terms of the Mean for Pos. Comp. and Neg. Comp., respectively, while improving
ROC by 0.47% and AP by 0.07%. On this dataset, other methods typically incur a
0.1%-2.5% drop in utility and deliver less mitigation of individual
unfairness. In specific cases a baseline method does give better individual
fairness, e.g., LCIFR for Neg. Comp., but the good fairness is not consistent
for positive comparable samples and is usually achieved at a significant cost
in utility (up to a 13.03% drop in ROC).
Figure 1: Experimental results on Compas dataset. Experiments in the left
three figures use logistic regression as the base model, and the right three
figures use neural networks. The top two rows plot individual fairness, while
the bottom two rows plot model utility. Since we set two sensitive attributes
for the Compas dataset, we plot three situations for comparable samples based
on the sensitive attributes of the two samples, denoted by logical
expressions: ‘and’ indicates that none of the sensitive attributes is the same
between a pair of comparable samples, ‘or’ indicates that at least one
sensitive attribute differs, and ‘not’ indicates that both attributes are
consistent. The dashed lines in the box plots indicate the arithmetic mean.
Our methods are highlighted with green background in the figure.
#### Additional Improvements from In-processing
Our AntiDRO outperforms NN+Anti, achieving fairer results and slightly better
predictive utility. The reason is that AntiDRO introduces antidote data into
every optimization iteration and selects the worst-performing samples instead
of treating them equally. DRO training typically runs an iterative inner
optimization in every epoch to search for constructive perturbations; in
contrast, AntiDRO omits the inner optimization and only evaluates the antidote
data in each round.
#### Binding Well with Sensitive Feature Removal
Removing sensitive features from the input data (Dis) generally improves
individual fairness to some degree. On the Law School dataset in Table 5,
discarding sensitive features brings up to 44.32% - 63.36% mitigation of
individual unfairness. But when sensitive features are highly correlated with
other features, strong mitigation is not guaranteed: on the Adult dataset in
Table 1, removing sensitive features gives only 0.93% - 2.94% improvements
across LR and NN. Regardless of the varying performance of Dis, our antidote
data combine well with discarding sensitive features. On the Adult dataset,
LR+Anti plus Dis boosts individual fairness in Pos. Comp. by 5.36%, whereas
solely discarding sensitive features yields only a 2.94% improvement. This
pattern is consistent for NN, i.e., 4.96% compared to 0.93%.
#### Fairness-Utility Tradeoffs
In Figure 2 A & B, we show the tradeoffs between utility and fairness, with
two major observations: (1) Models with antidote data achieve better
tradeoffs, i.e., with more antidote data we obtain lower individual unfairness
with a smaller drop in model utility. AntiDRO has the best tradeoffs and
achieves individual fairness with an inconspicuous sacrifice of utility even
as the amount of antidote data grows. (2) Our models enjoy lower variance
across random seeds. For baseline methods, turning up the hyperparameters that
control the tradeoffs leads to instability in the final results and
significant variance. In contrast, since our model is optimized on
approximately real data, with no model adjustments for Anti and only a minimal
change in optimization for AntiDRO, there is no observable variance in the
final results.
#### Generator Convergence
In Figure 2 C, we show how the comparability ratio, i.e., the fraction of
comparable samples among all raw generated samples, changes during the
training of the antidote data generator for different feature types. The
comparability ratio of sensitive features quickly converges to 1 since we have
direct supervision. The ratios for discrete and continuous features converge
around the 500-th iteration due to the implicit supervision from the
discriminator. The ratio for continuous features is lower than for discrete
features due to more complex patterns. The imperfect comparability ratio
prompts us to add the additional step Post() to filter out incomparable
samples.
Figure 2: A & B: The tradeoffs between utility and fairness on Adult dataset.
For SenSeI we iterate the controlling hyperparameter in (1e+3, 5e+3, 1e+4,
5e+4, 1e+5, 2e+5, 5e+5). For LCIFR, we iterate the weight for fairness in
(0.1, 1.0, 10.0, 50.0, 100.0). For Anti, we have the proportion of antidote
ratio at 0%, 45%, 90%, 134%, 180%, 225%, 270%, 316%, 361%, and 406%. For
AntiDRO, we have the proportion of antidote ratio at 45%, 90%, 136%, 180%,
225%. Every point is plotted with variances, and the variance for our models
is too small to observe in this figure. C: The convergence of the
comparability ratio during training (see paragraph Generator Convergence).
### 4.3 Modeling the Data Manifold
#### Random Comparable Samples
Table 2: Comparison to random comparable samples on the Adult dataset (see
paragraph Random Comparable Samples)
| ROC $\uparrow$ | Pos. Comp. (Mean/Q3) $\downarrow$
---|---|---
NN | 88.18 | 33.21 / 47.84
+100.0% Rand. | 88.25 | 31.33 / 44.69
+200.0% Rand. | 88.18 | 30.16 / 42.51
+300.0% Rand. | 88.19 | 29.48 / 39.77
+500.0% Rand. | 88.08 | 27.94 / 39.31
+44.5% Anti | 87.95 | 26.05 / 35.76
In Table 2 we compare against randomly generated comparable samples to
emphasize the benefit of data manifold modeling. We generate the random
comparable samples as follows: (1) uniformly sample discrete features and
perturb each to a random value of the corresponding feature, with an arbitrary
number of perturbed features in [0, $T_{d}$]; (2) uniformly sample values from
[$-T_{c}$, $T_{c}$] and add them as perturbations to continuous features,
clipping the perturbed features to [0, 1]; (3) randomly perturb an arbitrary
number of sensitive features. We add these randomly generated comparable
samples to the original training set. From the results in Table 2, we observe
that with only 44.5% antidote data, the model outperforms the one with 500%
randomly generated comparable samples in terms of individual fairness. With
more than 10x data efficiency, the results demonstrate that modeling
on-manifold comparable samples is highly effective at mitigating individual
unfairness.
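The random baseline above can be sketched as follows (pure Python; argument names are ours, with `disc_vals`/`sens_vals` mapping feature indices to their admissible values):

```python
import random

def random_comparable(x, disc_idx, disc_vals, cont_idx, sens_idx, sens_vals,
                      T_d, T_c, rng=random):
    """Random baseline from Table 2: flip up to T_d discrete features,
    jitter continuous features by Uniform(-T_c, T_c) clipped to [0, 1],
    and perturb an arbitrary subset of sensitive features."""
    z = list(x)
    # (1) perturb an arbitrary number (at most T_d) of discrete features
    for i in rng.sample(disc_idx, rng.randint(0, min(T_d, len(disc_idx)))):
        z[i] = rng.choice(disc_vals[i])
    # (2) jitter every continuous feature, clipped back into [0, 1]
    for i in cont_idx:
        z[i] = min(1.0, max(0.0, z[i] + rng.uniform(-T_c, T_c)))
    # (3) flip an arbitrary subset of sensitive features
    for i in rng.sample(sens_idx, rng.randint(0, len(sens_idx))):
        z[i] = rng.choice(sens_vals[i])
    return z
```

Because the jitter is drawn from $[-T_{c},T_{c}]$ and clipping only moves a value back toward $[0,1]$, each perturbed continuous feature stays within $T_{c}$ of the original.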
#### Good Learning Efficacy from Antidote Data
In Table 3 we study binary classification performance when training only on
generated data, evaluated with Accuracy (Acc.), Balanced Accuracy (Bal.
Acc.), and F1 Score (F1). We construct a synthetic training set with the same
amount of data as the original training set and use two baselines. Random
Data: randomly generated data that fit the basic feature-wise constraints of
tabular data. Pert. in SenSeI: we collect the adversarial perturbations of the
original data in every training iteration of SenSeI and uniformly sample
training data from these perturbations.
Table 3: Learning efficacy on Adult dataset
| Acc. $\uparrow$ | Bal. Acc. $\uparrow$ | F1 $\uparrow$
---|---|---|---
Original Data | 84.64 | 76.16 | 65.55
Random Data | 30.48 | 40.25 | 29.59
Pert. in SenSeI | 53.81 | 67.83 | 50.36
Antidote Data | 78.48 | 74.03 | 59.84
As expected, the results in Table 3 show that our antidote data suffer a
performance drop compared to the original data, because the generator cannot
perfectly fit the data manifold. Even so, the antidote data surpass random
data and the perturbations from SenSeI, indicating that antidote data capture
the manifold and are closer to the original data.
## 5 Related Work
#### Machine Learning Fairness
AI fairness proposes ethical regulations so that algorithms do not
discriminate against any party or individual (Li et al., 2021; Hardt et al.,
2016; Li & Liu, 2022; Li et al., 2020; Song et al., 2021; Chhabra et al.,
2022). To quantify this goal, group fairness asks for equalized algorithmic
outcomes across sensitive groups in terms of statistics such as true positive
rate or positive rate (Hardt et al., 2016). Similarly, minimax fairness
(Hashimoto et al., 2018) characterizes the algorithmic performance of the
worst-performing group. Though appealing, both notions provide weak guarantees
for individuals. To compensate for this deficiency, counterfactual fairness
(Kusner et al., 2017) describes the consistency of an algorithm between an
instance and its counterfactuals when sensitive attributes are changed.
However, this notion and its evaluations rely strongly on the causal structure
(Glymour et al., 2016), which originates from the data-generating process; in
practice, such explicit modeling is usually unavailable. Individual fairness
(Dwork et al., 2012) describes the pair-wise predictive gaps between similar
instances, and it is feasible when the constraints in the input and output
spaces are properly defined.
#### Individual Fairness
Several methods have been proposed for individual fairness. Sharifi-Malvajerdi
et al. (2019) study average individual fairness: they regulate the average
error rate for individuals on a series of classification tasks with different
targets and bound the rate for the worst-performing individual. Yurochkin et
al. (2020); Yurochkin & Sun (2021); Ruoss et al. (2020); Yeom & Fredrikson
(2021) develop models via DRO that iteratively optimize at the samples that
most violate fairness. To overcome the difficulty of choosing distance
functions, Mukherjee et al. (2020) inherit knowledge of similar/dissimilar
input pairs and propose to learn good similarity metrics from data. Ilvento
(2020) learns metrics for individual fairness from human judgements,
constructing an approximation from a limited number of queries to the arbiter.
Petersen et al. (2021) develop a graph smoothing approach to mitigate
individual bias based on a similarity graph. Lahoti et al. (2019) develop a
probabilistic mapping from inputs to low-rank representations that reconcile
individual fairness well. To bring individual fairness to more applications,
Vargo et al. (2021) study individual fairness in gradient boosting, where the
model can work with non-smooth models such as decision trees. Dwork et al.
(2020) study individual fairness in a multi-stage pipeline. Maity et al.
(2021); John et al. (2020) study model auditing with individual fairness.
#### Crafting Adversarial Samples
Beyond regular adversarial training (Madry et al., 2018), using generative
models to craft on-manifold adversarial samples is an attractive technique for
model robustness (Xiao et al., 2018; Zhao et al., 2018; Kos et al., 2018; Song
et al., 2018). Compared to general adversarial samples with few data-dependent
considerations, generative samples are good approximations of the data
distribution and can offer attacks with rich semantics. Experimentally,
crafting such adversarial samples accords with intuition and has been shown to
boost model generalization (Stutz et al., 2019; Raghunathan et al., 2019).
## 6 Conclusion
In this paper we studied individual fairness on tabular datasets, focusing on
an individual fairness definition with rich semantics. We proposed an antidote
data generator to learn on-manifold comparable samples and used the generator
to produce antidote data for individual fairness. We provided two approaches
to equip a regular classification pipeline or a distributionally robust
optimization paradigm with antidote data. By incorporating the generated
antidote data, we showed good individual fairness as well as good tradeoffs
between utility and individual fairness.
## Acknowledgement
EX did this work while working as a research assistant with Hongfu Liu’s group
at Brandeis University. The authors would like to thank all the anonymous
reviewers from ICML’23 and ICLR’23.
## References
* Angwin et al. (2016) Angwin, J., Larson, J., Mattu, S., and Kirchner, L. Machine bias. In _Ethics of Data and Analytics_ , 2016.
* Bao et al. (2021) Bao, M., Zhou, A., Zottola, S. A., Brubach, B., Desmarais, S., Horowitz, A. S., Lum, K., and Venkatasubramanian, S. It’s COMPASlicated: The messy relationship between RAI datasets and algorithmic fairness benchmarks. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)_ , 2021. URL https://openreview.net/forum?id=qeM58whnpXM.
* Barocas & Selbst (2016) Barocas, S. and Selbst, A. D. Big data’s disparate impact. _California Law Review_ , 2016.
* Bartlett et al. (2022) Bartlett, R., Morse, A., Stanton, R., and Wallace, N. Consumer-lending discrimination in the fintech era. _Journal of Financial Economics_ , 2022.
* Chhabra et al. (2022) Chhabra, A., Li, P., Mohapatra, P., and Liu, H. Robust fair clustering: A novel fairness attack and defense framework. _arXiv preprint arXiv:2210.01953_ , 2022.
* Dastin (2018) Dastin, J. Amazon scraps secret ai recruiting tool that showed bias against women, 2018.
* Duchi & Namkoong (2018) Duchi, J. and Namkoong, H. Learning models with uniform performance via distributionally robust optimization. _arXiv preprint arXiv:1810.08750_ , 2018.
* Dwork et al. (2012) Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. Fairness through awareness. In _Proceedings of The 3rd Innovations in Theoretical Computer Science Conference_ , 2012.
* Dwork et al. (2020) Dwork, C., Ilvento, C., and Jagadeesan, M. Individual fairness in pipelines. In _1st Symposium on Foundations of Responsible Computing_ , 2020.
* Gelman et al. (2007) Gelman, A., Fagan, J., and Kiss, A. An analysis of the new york city police department’s “stop-and-frisk” policy in the context of claims of racial bias. _Journal of the American statistical association_ , 2007.
* Glymour et al. (2016) Glymour, M., Pearl, J., and Jewell, N. P. _Causal inference in statistics: A primer_. John Wiley & Sons, 2016.
* Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In _Advances in Neural Information Processing Systems_ , 2014.
* Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of wasserstein gans. In _Advances in Neural Information Processing Systems_ , 2017.
* Hardt et al. (2016) Hardt, M., Price, E., and Srebro, N. Equality of opportunity in supervised learning. In _Advances in Neural Information Processing Systems_ , 2016.
* Hashimoto et al. (2018) Hashimoto, T., Srivastava, M., Namkoong, H., and Liang, P. Fairness without demographics in repeated loss minimization. In _Proceedings of the 35th International Conference on Machine Learning_ , 2018.
* Ilvento (2020) Ilvento, C. Metric learning for individual fairness. In _1st Symposium on Foundations of Responsible Computing_ , 2020.
* Jang et al. (2017) Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. In _International Conference on Learning Representations_ , 2017.
* John et al. (2020) John, P. G., Vijaykeerthy, D., and Saha, D. Verifying individual fairness in machine learning models. In _Conference on Uncertainty in Artificial Intelligence_ , 2020.
* Kohavi & Becker (1996) Kohavi, R. and Becker, B. Adult data set, 1996.
* Kos et al. (2018) Kos, J., Fischer, I., and Song, D. Adversarial examples for generative models. In _IEEE Security and Privacy Workshops_ , 2018.
* Kusner et al. (2017) Kusner, M. J., Loftus, J., Russell, C., and Silva, R. Counterfactual fairness. In _Advances in Neural Information Processing Systems_ , 2017.
* Kuzilek et al. (2017) Kuzilek, J., Hlosta, M., and Zdrahal, Z. Open university learning analytics dataset. _Scientific Data_ , 2017.
* Lahoti et al. (2019) Lahoti, P., Gummadi, K. P., and Weikum, G. ifair: Learning individually fair data representations for algorithmic decision making. In _IEEE 35th International Conference on Data Engineering_ , 2019.
* Levy et al. (2020) Levy, D., Carmon, Y., Duchi, J. C., and Sidford, A. Large-scale methods for distributionally robust optimization. _Advances in Neural Information Processing Systems_ , 33:8847–8860, 2020.
* Li & Liu (2022) Li, P. and Liu, H. Achieving fairness at no utility cost via data reweighing with influence. In _International Conference on Machine Learning_ , pp. 12917–12930. PMLR, 2022.
* Li et al. (2020) Li, P., Zhao, H., and Liu, H. Deep fair clustering for visual learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 9070–9079, 2020.
* Li et al. (2021) Li, P., Wang, Y., Zhao, H., Hong, P., and Liu, H. On dyadic fairness: Exploring and mitigating bias in graph connections. In _International Conference on Learning Representations_ , 2021.
* Madry et al. (2018) Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. In _International Conference on Learning Representations_ , 2018.
* Maity et al. (2021) Maity, S., Xue, S., Yurochkin, M., and Sun, Y. Statistical inference for individual fairness. In _International Conference on Learning Representations_ , 2021.
* Mervis (2022) Mervis, J. Nsf grant decisions reflect systemic racism, study argues, 2022.
* Mukherjee et al. (2020) Mukherjee, D., Yurochkin, M., Banerjee, M., and Sun, Y. Two simple ways to learn individual fairness metrics from data. In _International Conference on Machine Learning_ , 2020.
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al. Scikit-learn: Machine learning in python. _Journal of machine learning research_ , 2011.
* Petersen et al. (2021) Petersen, F., Mukherjee, D., Sun, Y., and Yurochkin, M. Post-processing for individual fairness. In _Advances in Neural Information Processing Systems_ , 2021.
* Raghunathan et al. (2019) Raghunathan, A., Xie, S. M., Yang, F., Duchi, J. C., and Liang, P. Adversarial training can hurt generalization. _arXiv preprint arXiv:1906.06032_ , 2019.
* Ruoss et al. (2020) Ruoss, A., Balunovic, M., Fischer, M., and Vechev, M. Learning certified individually fair representations. In _Advances in Neural Information Processing Systems_ , 2020.
* Sharifi-Malvajerdi et al. (2019) Sharifi-Malvajerdi, S., Kearns, M., and Roth, A. Average individual fairness: Algorithms, generalization and experiments. In _Advances in Neural Information Processing Systems_ , 2019.
* Song et al. (2021) Song, H., Li, P., and Liu, H. Deep clustering based fair outlier detection. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, pp. 1481–1489, 2021.
* Song et al. (2018) Song, Y., Shu, R., Kushman, N., and Ermon, S. Constructing unrestricted adversarial examples with generative models. In _Advances in Neural Information Processing Systems_ , 2018.
* Stutz et al. (2019) Stutz, D., Hein, M., and Schiele, B. Disentangling adversarial robustness and generalization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019.
* Van der Laan (2000) Van der Laan, P. The 2001 census in the netherlands. In _Conference The Census of Population_ , 2000.
* Vargo et al. (2021) Vargo, A., Zhang, F., Yurochkin, M., and Sun, Y. Individually fair gradient boosting. In _International Conference on Learning Representations_ , 2021.
* Wightman (1998) Wightman, L. F. Lsac national longitudinal bar passage study. _LSCA Research Report Series_ , 1998.
* Xiao et al. (2018) Xiao, C., Li, B., Zhu, J.-Y., He, W., Liu, M., and Song, D. Generating adversarial examples with adversarial networks. In _Proceedings of the 27th International Joint Conference on Artificial Intelligence_ , 2018.
* Xu et al. (2019) Xu, L., Skoularidou, M., Cuesta-Infante, A., and Veeramachaneni, K. Modeling tabular data using conditional gan. In _Advances in Neural Information Processing Systems_ , 2019.
* Yeom & Fredrikson (2021) Yeom, S. and Fredrikson, M. Individual fairness revisited: transferring techniques from adversarial robustness. In _Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence_ , 2021.
* Yurochkin & Sun (2021) Yurochkin, M. and Sun, Y. SenSeI: Sensitive set invariance for enforcing individual fairness. In _International Conference on Learning Representations_ , 2021.
* Yurochkin et al. (2020) Yurochkin, M., Bower, A., and Sun, Y. Training individually fair ml models with sensitive subspace robustness. In _International Conference on Learning Representations_ , 2020.
* Zhao et al. (2018) Zhao, Z., Dua, D., and Singh, S. Generating natural adversarial examples. In _International Conference on Learning Representations_ , 2018.
## Appendix A Dataset and Relevant Details
We introduce the datasets and related experimental details in this section.
The thresholds $T_{d}$ and $T_{c}$ in Definition 2.1 are chosen to provide a
sufficient number of comparable samples in the experiments. These two values
are consistent across all datasets rather than dataset-specific. The only case
in which we enlarge $T_{c}$ is the Law School dataset, because the initial
value does not yield enough comparable samples for learning.
#### Adult dataset
The Adult dataset contains census personal records with attributes such as
age, education, and race. The task is to determine whether a person makes over
$50K a year. We use 45.25% antidote data for Anti and 225.97% antidote data
for AntiDRO in Table 1. We set $T_{d}=1$ and $T_{c}=0.025$ for the
comparable-sample constraints.
#### Compas dataset
The Compas dataset is a criminological dataset recording prisoners’
information such as criminal history, jail and prison time, demographics, and
sex. The task is to predict a recidivism risk score for defendants. We use
148.55% antidote data for Anti and 184.89% antidote data for AntiDRO in
Figure 1. We set $T_{d}=1$ and $T_{c}=0.025$. Note that, according to Bao et
al. (2021), the Compas dataset may not be an ideal dataset for demonstrating
algorithmic fairness.
#### Law School dataset
The Law School dataset contains law school admission records. The goal is to
predict whether a candidate would pass the bar exam, given features such as
sex, race, and the student’s decile. We use 56.18% antidote data for Anti and
338.50% antidote data for AntiDRO in Table 5. We set $T_{d}=1$ and
$T_{c}=0.1$.
#### Oulad dataset
The Open University Learning Analytics (Oulad) dataset contains information
about students and their activities in the virtual learning environment for
seven courses. It offers students’ gender, region, age, and academic
information to predict students’ final results in a module presentation. We
use 523.23% antidote data for Anti and 747.85% antidote data for AntiDRO in
Table 7. We set $T_{d}=1$ and $T_{c}=0.025$.
#### Dutch dataset
The Dutch dataset contains personal profiles from the Netherlands in 2001. It
provides information such as sex, age, household, and citizenship, and the
task is to predict a person’s occupation. We remove 8,549 duplicates in the
test set, reducing its size to 6,556. We use 205.44% antidote data for Anti
and 770.65% antidote data for AntiDRO in Table 6. We set $T_{d}=1$ and
$T_{c}=0.025$.
Table 4: Dataset statistics. We report sample size, feature dimension, sensitive attribute, and the number of positive and negative comparable samples in the training / testing sets, respectively.

Dataset | #Sample | #Dim. | Sensitive Attribute | #Pos. Comp. | #Neg. Comp.
---|---|---|---|---|---
Adult | 30,162 / 15,060 | 103 | marital-status | 739 / 193 | 38,826 / 10,412
Compas | 4,626 / 1,541 | 354 | race + sex | 24,292 / 2,571 | 8,116 / 1,020
Law School | 15,598 / 5,200 | 23 | race | 13,425 / 1,530 | 1,068 / 118
Oulad | 16,177 / 5,385 | 48 | age_band | 33,747 / 3,927 | 5,869 / 608
Dutch | 45,315 / 6,556 | 61 | sex | 1,460,028 / 6,727 | 1,301,376 / 9,390
## Appendix B Implementation Details
#### Model Architecture
We elaborate on the architecture of our model in detail, using $h$ to denote
the hidden representations.
${g_{\theta}}=\begin{cases}h_{1}=\texttt{ReLU}(\texttt{BatchNorm1d}(\texttt{Linear}_{\rightarrow 256}(\tilde{x}\oplus\tilde{s}\oplus\mathbf{z})))\oplus\tilde{x}\oplus\tilde{s}\oplus\mathbf{z}\\\ h_{2}=\texttt{ReLU}(\texttt{BatchNorm1d}(\texttt{Linear}_{\rightarrow 256}(h_{1})))\oplus h_{1}\\\ h_{3}=\texttt{ReLU}(\texttt{BatchNorm1d}(\texttt{Linear}_{\rightarrow\text{Dim}(\tilde{x})}(h_{2})))\\\ \hat{v}_{i}=\texttt{tanh}(\texttt{Linear}_{\rightarrow 1}(h_{3}[\text{index for}\ v_{i}]))\quad\forall\ 0\leq i\leq N_{c}\\\ \hat{\mathbf{e}}_{i}=\texttt{gumbel}_{{0.2}}(\texttt{Linear}_{\rightarrow|d_{i}|}(h_{3}[\text{index for}\ K_{i}]))\quad\forall\ 0\leq i\leq N_{c}\\\ \hat{\mathbf{d}}_{i}=\texttt{gumbel}_{{0.2}}(\texttt{Linear}_{\rightarrow|d_{i}|}(h_{3}[\text{index for}\ d_{i}]))\quad\forall\ 0\leq i\leq N_{d}\\\ \end{cases}$
${d_{\theta}}=\begin{cases}h_{1}=\texttt{Dropout}_{0.5}(\texttt{LeakyReLU}_{0.2}(\texttt{Linear}_{\rightarrow 256}(\hat{x}\oplus\tilde{x}\oplus(\hat{x}-\tilde{x}))))\\\ h_{2}=\texttt{Dropout}_{0.5}(\texttt{LeakyReLU}_{0.2}(\texttt{Linear}_{\rightarrow 256}(h_{1})))\\\ \text{score}=\texttt{Linear}_{\rightarrow 1}(h_{2})\\\ \end{cases}$
#### Hyperparameter Setting
We use the Adam optimizer to train the generator. We set the learning rate for
the generator ${g_{\theta}}$ to 2e-4 and for the discriminator ${d_{\theta}}$
to 2e-4, with weight decay 1e-6 for ${g_{\theta}}$ and 0 for ${d_{\theta}}$.
We set the batch size to 4096 and the number of training epochs to 500. These
hyperparameters are inherited from (Xu et al., 2019). For logistic regression,
we set the strength of the $\ell_{2}$ penalty to 1 and the maximum number of
iterations to 2,048. For neural networks, we set the number of optimization
iterations to 10,000, the initial learning rate to 1e-1, and the $\ell_{2}$
penalty strength to 1e-2, using the SGD optimizer and decreasing the learning
rate by 50$\%$ every 2,500 iterations. In Table 8, we use the default
XGBClassifier from https://xgboost.readthedocs.io/en/stable/.
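As a quick sanity check on the neural-network schedule above, the 50% step decay over 10,000 iterations can be written as a tiny helper; the function name and structure are our own sketch, not the paper's code:

```python
def step_decay_lr(iteration, initial_lr=0.1, drop_every=2500, drop_factor=0.5):
    """Learning rate after `iteration` SGD steps, halved every `drop_every` steps."""
    return initial_lr * drop_factor ** (iteration // drop_every)

# Over the 10,000 optimization iterations used for the NN models:
# iterations [0, 2500) -> 0.1, [2500, 5000) -> 0.05, [5000, 7500) -> 0.025, ...
print(step_decay_lr(0), step_decay_lr(2500), step_decay_lr(5000))
```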
## Appendix C Additional Results
We present experimental results on the Dutch dataset in Table 6 and the Oulad
dataset in Table 7, a tradeoff study in Figure 3, and results with an XGBoost
classifier in Table 8. Conclusions similar to those in Section 4.2 can be
drawn: with antidote data, our models Anti and AntiDRO achieve good individual
fairness and favorable tradeoffs between fairness and model predictive
utility.
Table 5: Experimental results on the Law School dataset. Our methods are
highlighted with a green background in the table.
| ROC $\uparrow$ | AP $\uparrow$ | | Pos. Comp. (Mean/Q3) $\downarrow$ | | Neg. Comp. (Mean/Q3) $\downarrow$
---|---|---|---|---|---|---
LR (Base) | 86.14 | 97.80 | | 3.67 / 5.39 | | 11.70 / 15.21
LR+Proj | 85.84 -0.35% | 97.74 -0.06% | | 2.23 -39.35% / 2.48 -54.07% | | 8.66 -25.99% / 11.40 -25.05%
LR+Dis | 86.18 +0.04% | 97.79 -0.01% | | 2.04 -44.32% / 2.32 -56.90% | | 7.33 -37.36% / 11.25 -26.04%
LR+Anti | 86.22 +0.08% | 97.80 +0.00% | | 1.79 -51.20% / 2.20 -59.14% | | 6.56 -43.98% / 8.64 -43.21%
LR+Anti+Dis | 86.20 +0.06% | 97.80 -0.00% | | 1.76 -52.08% / 2.16 -59.96% | | 6.28 -46.33% / 8.36 -45.03%
NN (Base) | 85.70 | 97.72 | | 5.38 / 8.22 | | 12.55 / 16.47
NN+Proj | 85.89 +0.22% | 97.76 +0.04% | | 2.04 -62.07% / 2.27 -72.39% | | 5.52 -56.00% / 6.46 -60.77%
NN+Dis | 85.99 +0.34% | 97.78 +0.06% | | 1.97 -63.36% / 2.22 -72.98% | | 5.34 -57.42% / 6.45 -60.81%
SenSR | 84.49 -1.41% | 97.55 -0.18% | | 2.58 -51.99% / 3.23 -60.67% | | 5.56 -55.68% / 7.81 -52.57%
SenSeI | 84.59 -1.30% | 97.49 -0.24% | | 7.01 +30.33% / 10.83 +31.64% | | 18.22 +45.16% / 24.99 +51.72%
LCIFR | 74.53 -13.03% | 95.28 -2.50% | | 2.63 -51.05% / 3.06 -62.79% | | 3.35 -73.28% / 3.78 -77.07%
NN+Anti | 86.11 +0.47% | 97.79 +0.07% | | 1.59 -70.38% / 1.94 -76.44% | | 4.60 -63.36% / 6.31 -61.69%
NN+Anti+Dis | 86.07 +0.43% | 97.79 +0.06% | | 1.54 -71.31% / 1.80 -78.05% | | 4.44 -64.66% / 5.47 -66.78%
AntiDRO | 86.56 +1.00% | 97.88 +0.16% | | 1.52 -71.75% / 1.82 -77.82% | | 4.10 -67.34% / 5.54 -66.33%
Table 6: Experimental results on the Dutch dataset. Our methods are
highlighted with a green background in the table.
| ROC $\uparrow$ | AP $\uparrow$ | | Pos. Comp. (Mean/Q3) $\downarrow$ | | Neg. Comp. (Mean/Q3) $\downarrow$
---|---|---|---|---|---|---
LR (Base) | 89.55 | 87.87 | | 17.84 / 24.15 | | 21.81 / 30.29
LR+Proj | 86.62 -3.28% | 85.13 -3.13% | | 7.74 -56.60% / 8.21 -65.99% | | 8.01 -63.30% / 8.55 -71.79%
LR+Dis | 87.51 -2.28% | 85.71 -2.47% | | 8.44 -52.67% / 9.03 -62.63% | | 9.11 -58.22% / 11.29 -62.72%
LR+Anti | 85.41 -4.63% | 83.38 -5.11% | | 9.55 -46.47% / 10.37 -57.05% | | 10.74 -50.77% / 12.70 -58.09%
LR+Anti+Dis | 87.40 -2.40% | 85.51 -2.69% | | 7.08 -60.32% / 7.06 -70.76% | | 7.10 -67.44% / 7.48 -75.31%
NN (Base) | 90.22 | 88.93 | | 15.88 / 20.85 | | 21.42 / 31.68
NN+Proj | 88.18 -2.26% | 86.94 -2.23% | | 8.11 -48.95% / 9.44 -54.75% | | 7.65 -64.29% / 9.73 -69.28%
NN+Dis | 88.21 -2.23% | 86.92 -2.25% | | 8.18 -48.51% / 9.41 -54.87% | | 8.18 -61.80% / 10.53 -66.76%
SenSR | 87.78 -2.70% | 86.68 -2.52% | | 8.54 -46.20% / 9.71 -53.46% | | 7.72 -63.94% / 8.61 -72.83%
SenSeI | 89.91 -0.34% | 88.34 -0.65% | | 16.21 +2.07% / 21.12 +1.29% | | 21.98 +2.65% / 31.25 -1.35%
LCIFR | 88.04 -2.42% | 86.54 -2.68% | | 8.12 -48.84% / 9.30 -55.42% | | 8.41 -60.73% / 10.61 -66.50%
NN+Anti | 87.05 -3.51% | 85.59 -3.75% | | 8.71 -45.13% / 10.50 -49.67% | | 9.49 -55.70% / 13.23 -58.23%
NN+Anti+Dis | 87.80 -2.68% | 86.37 -2.87% | | 6.78 -57.30% / 7.37 -64.65% | | 6.32 -70.48% / 7.15 -77.44%
AntiDRO | 88.00 -2.46% | 87.13 -2.02% | | 6.34 -60.04% / 6.06 -70.93% | | 4.91 -77.08% / 5.41 -82.93%
Table 7: Experimental results on the Oulad dataset. Our methods are
highlighted with a green background in the table.
| ROC $\uparrow$ | AP $\uparrow$ | | Pos. Comp. (Mean/Q3) $\downarrow$ | | Neg. Comp. (Mean/Q3) $\downarrow$
---|---|---|---|---|---|---
LR (Base) | 63.04 | 76.73 | | 8.41 / 12.09 | | 9.08 / 12.97
LR+Proj | 65.20 +3.44% | 79.29 +3.34% | | 5.33 -36.61% / 7.50 -37.99% | | 5.46 -39.88% / 7.69 -40.71%
LR+Dis | 62.52 -0.83% | 76.39 -0.45% | | 5.42 -35.50% / 7.89 -34.77% | | 5.89 -35.14% / 8.74 -32.63%
LR+Anti | 62.17 -1.38% | 76.24 -0.64% | | 6.42 -23.61% / 9.19 -24.02% | | 7.10 -21.76% / 9.78 -24.63%
LR+Anti+Dis | 60.82 -3.52% | 75.07 -2.17% | | 5.10 -39.31% / 6.95 -42.53% | | 5.81 -36.04% / 8.67 -33.17%
NN (Base) | 65.80 | 79.72 | | 6.63 / 9.59 | | 6.81 / 9.73
NN+Proj | 65.42 -0.57% | 79.49 -0.29% | | 4.76 -28.26% / 6.70 -30.11% | | 4.65 -31.68% / 6.71 -31.00%
NN+Dis | 65.51 -0.43% | 79.59 -0.16% | | 4.78 -27.94% / 6.84 -28.75% | | 4.75 -30.27% / 6.94 -28.66%
SenSR | 65.58 -0.34% | 79.57 -0.19% | | 4.96 -25.17% / 7.00 -27.08% | | 4.23 -37.85% / 6.16 -36.67%
SenSeI | 64.14 -2.52% | 78.68 -1.31% | | 5.53 -16.49% / 8.13 -15.22% | | 5.50 -19.18% / 7.99 -17.90%
LCIFR | 65.21 -0.89% | 79.40 -0.40% | | 4.13 -37.61% / 5.66 -41.01% | | 3.70 -45.70% / 5.26 -45.94%
NN+Anti | 64.75 -1.59% | 79.08 -0.80% | | 4.09 -38.32% / 5.82 -39.36% | | 4.51 -33.69% / 6.30 -35.27%
NN+Anti+Dis | 64.97 -1.26% | 79.20 -0.65% | | 4.00 -39.70% / 5.53 -42.33% | | 4.18 -38.68% / 5.97 -38.61%
AntiDRO | 64.38 -2.16% | 78.62 -1.38% | | 2.86 -56.80% / 3.71 -61.34% | | 3.97 -41.64% / 5.22 -46.31%
Table 8: Experimental results on the Adult dataset with the XGBoost classifier.
| ROC $\uparrow$ | AP $\uparrow$ | | Pos. Comp. (Mean/Q3) $\downarrow$ | | Neg. Comp. (Mean/Q3) $\downarrow$
---|---|---|---|---|---|---
XGBoost | 92.63 | 82.78 | | 10.29 / 15.47 | | 11.62 / 18.39
XGBoost+Anti | 92.57 -0.06% | 83.01 +0.28% | | 8.89 -13.61% / 15.27 -1.29% | | 10.52 -9.47% / 17.51 -4.79%
Figure 3: The tradeoffs between utility and fairness on the Compas dataset.
For SenSeI, we iterate over the controlling hyperparameter in (1e+3, 5e+3,
1e+4, 5e+4, 1e+5, 2e+5, 5e+5). For LCIFR, we iterate over the fairness weight
in (0.1, 1.0, 10.0, 50.0, 100.0). For Anti, we set the antidote data
proportion to 110%, 130%, 150%, 167%, 185%, 206%. For AntiDRO, we set the
antidote data proportion to 129%, 146%, 167%, 184%, 201%, 222%. Every point is
plotted with its variance.
# FakeEdge: Alleviate Dataset Shift in Link Prediction
Kaiwen Dong
University of Notre Dame
<EMAIL_ADDRESS>

Tian
University of Notre Dame
<EMAIL_ADDRESS>

Guo
University of Notre Dame
<EMAIL_ADDRESS>

Yang
University of Notre Dame
<EMAIL_ADDRESS>

V. Chawla
University of Notre Dame
<EMAIL_ADDRESS>
###### Abstract
Link prediction is a crucial problem in graph-structured data. Due to the
recent success of graph neural networks (GNNs), a variety of GNN-based models
were proposed to tackle the link prediction task. Specifically, GNNs leverage
the message passing paradigm to obtain node representation, which relies on
link connectivity. However, in a link prediction task, links in the training
set are always present while ones in the testing set are not yet formed,
resulting in a discrepancy of the connectivity pattern and bias of the learned
representation. It leads to a problem of dataset shift which degrades the
model performance. In this paper, we first identify the dataset shift problem
in the link prediction task and provide theoretical analyses on how existing
link prediction methods are vulnerable to it. We then propose FakeEdge, a
model-agnostic technique, to address the problem by mitigating the graph
topological gap between training and testing sets. Extensive experiments
demonstrate the applicability and superiority of FakeEdge on multiple datasets
across various domains.
## 1 Introduction
Graph structured data is ubiquitous across a variety of domains, including
social networks [1], protein-protein interactions [2], movie recommendations
[3], and citation networks [4]. It provides a non-Euclidean structure to
describe the relations among entities. The link prediction task is to predict
missing links or newly forming links in an observed network [5]. Recently, with
the success of graph neural networks (GNNs) for graph representation learning
[6, 7, 8, 9], several GNN-based methods have been developed [10, 11, 12, 13,
14] to solve link prediction tasks. These methods encode the representation of
target links with the topological structures and node/edge attributes in their
local neighborhood. After recognizing the pattern of observed links (training
sets), they predict the likelihood of forming new links between node pairs
(testing sets) where no link is yet observed.
Nevertheless, existing methods exhibit a discrepancy in the target link
representation between training and testing sets. Because the target link is
never observed in the testing set, by the nature of the task, it has a
different local topological structure than its counterpart from the training
set. The corrupted topological structure thus shifts the target link
representation in the testing set, which we recognize as a dataset shift
problem [15, 16] in link prediction. Note that some existing work [11] applies
edge masking to moderate this problem, similar to our treatment. However, it
tends to regard edge masking as an empirical trick and fails to identify the
fundamental cause as a problem of dataset shift.
Figure 1: The 1-WL test is performed to exhibit the learning process of GNNs.
Two node pairs (denoted by bold black circles) and their surrounding subgraphs
are sampled from the graph as a training (top) and a testing (bottom) instance,
respectively. The two subgraphs are isomorphic when we omit the focal links.
One iteration of 1-WL assigns them different colors, indicating the occurrence
of dataset shift.
We give a concrete example to illustrate how dataset shift can happen in the
link prediction task, especially for GNN-based models with the message passing
paradigm [17] simulating the 1-dimensional Weisfeiler-Lehman (1-WL) test [18].
In Figure 1, we have two local neighborhoods sampled as subgraphs from the
training (top) and testing (bottom) set respectively. The node pairs of
interest, which we call focal node pairs, are denoted by black bold circles.
From a bird’s-eye viewpoint, these two subgraphs are isomorphic when we
consider the existence of the positive test link (dashed line), even though
the test link has not been observed. Ideally, two isomorphic graphs should
have the same representation encoded by GNNs, leading to the same link
prediction outcome. However, one iteration of 1-WL in Figure 1 produces
different colors for the focal node pairs between the training and testing
sets, which indicates that a one-layer GNN can encode different representations
for these two isomorphic subgraphs, giving rise to the dataset shift issue.
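The effect in Figure 1 can be reproduced with a minimal one-iteration 1-WL refinement on a toy pair of subgraphs that differ only in the presence of the focal edge; the graphs below are our own illustration, not the paper's exact example:

```python
def wl_iteration(edges, nodes):
    """One round of 1-WL color refinement, starting from uniform node colors."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = {v: 0 for v in nodes}  # uniform initial coloring
    # New color = (own color, sorted multiset of neighbor colors).
    return {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in nodes}

nodes = [0, 1, 2, 3]
shared = [(0, 2), (1, 2), (0, 3), (1, 3)]              # structure common to both subgraphs
train_colors = wl_iteration(shared + [(0, 1)], nodes)  # focal link (0, 1) observed
test_colors = wl_iteration(shared, nodes)              # focal link not yet formed

# The subgraphs are isomorphic once the unobserved test link is counted, yet the
# focal nodes receive different colors, i.e. different message-passing encodings.
print(train_colors[0] != test_colors[0])  # True
```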
Dataset shift can substantially degrade model performance since it violates
the common assumption that the joint distribution of inputs and outputs stays
the same in both the training and testing set. The root cause of this
phenomenon in link prediction is the unique characteristic of the target link:
the link always plays a dual role in the problem setting, determining both
the input and the output of the link prediction task. The existence of the
link directly decides whether it is a positive or negative sample (output).
Simultaneously, the presence of the link influences how the representation is
learned, through the different topological structures it introduces around the
link (input). Thus, it entangles representation learning and labels in the
link prediction problem.
To decouple the dual role of the link, we advocate a framework, namely
subgraph link prediction, which disentangles the label of the link and its
topological structure. As most practical link prediction methods make a
prediction by capturing the local neighborhood of the link [11, 19, 12, 1,
20], we unify them all into this framework, where the input is the extracted
subgraph around the focal node pair and the output is the likelihood of
forming a link incident with the focal node pair in the subgraph. From the
perspective of the framework, we find that the dataset shift issue is mainly
caused by the presence/absence of the focal link in the subgraph from the
training/testing set. This motivates us to propose a simple but effective
technique, FakeEdge, to deliberately add or remove the focal link in the
subgraph so that the subgraph can stay consistent across training and testing.
FakeEdge is a model-agnostic technique, allowing it to be applied to any
subgraph link prediction model. It ensures that the model learns the same
subgraph representation regardless of the existence of the focal link. Lastly,
empirical experiments show that diminishing the dataset shift issue can
significantly boost link prediction performance across different baseline
models.
We summarize our contributions as follows. We first unify most link prediction
methods into a common framework named subgraph link prediction, which treats
link prediction as a subgraph classification task. From the viewpoint of this
framework, we theoretically investigate the dataset shift issue in link
prediction tasks, which motivates us to propose FakeEdge, a model-agnostic
augmentation technique, to close the distribution gap between training and
testing. We further conduct extensive experiments on a variety of baseline
models, revealing the performance improvement brought by FakeEdge and its
capability of alleviating the dataset shift issue on a broad range of
benchmarks.
## 2 Related work
##### Dataset Shift.
Dataset shift is a fundamental issue in machine learning. Within the
collection of dataset shift issues, there are several specific problems,
depending on which part of the data experiences the distributional shift,
including covariate shift, concept shift, and prior probability shift. [16]
gives a rigorous definition of the different dataset shift situations. In the
context of GNNs, [21] investigates the generalization ability of GNN models
and proposes a self-supervised task to improve size generalization. [22]
studies the problem that node labels in the training set are not uniformly
sampled and suggests applying a regularizer to reduce the distributional gap
between training and testing. [23] proposes a risk minimization method that
explores multiple contexts of the observed graph to enable GNNs to generalize
to out-of-distribution data. [24] demonstrates that existing link prediction
models can fail to generalize to testing sets with larger graphs and designs a
structural pairwise embedding to achieve size stability. [25, 26, 27] study
the dataset shift problem for graph-level tasks, especially focusing on
training and testing graphs of varying sizes.
##### Graph Data Augmentation.
Several data augmentation methods modify the graph connectivity by adding or
removing edges [28]. DropEdge [29] acts as a message passing reducer to tackle
over-smoothing and overfitting problems [30]. Topping et al. [31] modify the
graph’s topological structure by removing negatively curved edges to address
the bottleneck issue [32] of message passing. GDC [33] applies graph diffusion
methods to the observed graph to generate a diffused counterpart as the
computation graph. For the link prediction task, CFLP [34] generates
counterfactual links to augment the original graph. Edge Proposal Set [35]
injects edges recognized by other link predictors into the training graph in
order to improve performance.
## 3 A proposed unified framework for link prediction
In this section, we formally introduce the link prediction task and formulate
several existing GNN-based methods into a common general framework.
### 3.1 Preliminary
Let ${\mathcal{G}}=(V,E,{\bm{x}}^{V},{\bm{x}}^{E})$ be an undirected graph.
$V$ is the set of nodes with size $n$, which can be indexed as
$\\{i\\}_{i=1}^{n}$. $E\subseteq V\times V$ is the observed set of edges.
${\bm{x}}^{V}_{i}\in\mathcal{X}^{V}$ represents the feature of node $i$.
${\bm{x}}^{E}_{i,j}\in\mathcal{X}^{E}$ represents the feature of the edge
$(i,j)$ if $(i,j)\in E$. The unobserved set of edges is $E_{c}\subseteq
(V\times V)\backslash E$; these edges are either missing or will form in the
future in the original graph ${\mathcal{G}}$. $d(i,j)$ denotes the shortest
path distance between nodes $i$ and $j$. The $r$-hop _enclosing subgraph_
${\mathcal{G}}_{i,j}^{r}$ for nodes $i,j$ is the subgraph induced from
${\mathcal{G}}$ by the node set $V_{i,j}^{r}=\\{v|v\in V,d(v,i)\leq r\text{ or
}d(v,j)\leq r\\}$. The edge set of ${\mathcal{G}}_{i,j}^{r}$ is
$E_{i,j}^{r}=\\{(p,q)|(p,q)\in E\text{ and }p,q\in V_{i,j}^{r}\\}$. An
enclosing subgraph
${\mathcal{G}}_{i,j}^{r}=(V_{i,j}^{r},E_{i,j}^{r},{\bm{x}}^{V}_{V_{i,j}^{r}},{\bm{x}}^{E}_{E_{i,j}^{r}})$
contains all the information in the neighborhood of nodes $i,j$. The node set
$\\{i,j\\}$ is called the _focal node pair_, where we are interested in
whether an edge exists (observed) or should exist (unobserved) between nodes
$i,j$. In the context of link prediction, we use the term subgraph to denote
an _enclosing subgraph_ in the following sections.
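The $r$-hop enclosing subgraph defined above can be extracted with a simple breadth-first search over the observed edge set. The sketch below is our own rendering of the definition, not the authors' implementation:

```python
from collections import deque

def enclosing_subgraph(edges, i, j, r):
    """Return (V_ij^r, E_ij^r): nodes within distance r of i or j, plus induced edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def ball(src):
        # BFS out to radius r from src.
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            if dist[u] == r:
                continue
            for w in adj.get(u, ()):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return set(dist)

    nodes = ball(i) | ball(j)
    induced = {(u, v) for u, v in edges if u in nodes and v in nodes}
    return nodes, induced

E = [(0, 1), (1, 2), (2, 3), (3, 4)]
V1, E1 = enclosing_subgraph(E, 1, 2, 1)
print(sorted(V1))  # [0, 1, 2, 3]
```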
### 3.2 Subgraph link prediction
In this section, we discuss the definition of Subgraph Link Prediction and
investigate how current link prediction methods can be unified in this
framework. We mainly focus on link prediction methods based on GNNs, which
propagate the message to each node’s neighbors in order to learn the
representation. We start by giving the definition of the subgraph’s
properties:
###### Definition 1.
Given a graph ${\mathcal{G}}=(V,E,{\bm{x}}^{V},{\bm{x}}^{E})$ and the
unobserved edge set $E_{c}$, a subgraph ${\mathcal{G}}_{i,j}^{r}$ has the
following properties:
1. a label ${\textnormal{y}}\in\\{0,1\\}$ of the subgraph indicates whether there exists, or will form, an edge incident with the focal node pair $\\{i,j\\}$. That is, ${\mathcal{G}}_{i,j}^{r}$ has label ${\textnormal{y}}=1$ if and only if $(i,j)\in E\cup E_{c}$. Otherwise, the label is ${\textnormal{y}}=0$.
2. the existence ${\textnormal{e}}\in\\{0,1\\}$ of an edge in the subgraph indicates whether there is an edge observed at the focal node pair $\\{i,j\\}$. If $(i,j)\in E$, ${\textnormal{e}}=1$. Otherwise ${\textnormal{e}}=0$.
3. a phase ${\textnormal{c}}\in\\{\text{train},\text{test}\\}$ denotes whether the subgraph belongs to the training or testing stage. In particular, for a positive subgraph (${\textnormal{y}}=1$), if $(i,j)\in E$, then ${\textnormal{c}}=\text{train}$; if $(i,j)\in E_{c}$, then ${\textnormal{c}}=\text{test}$.
Note that the _label_ ${\textnormal{y}}=1$ does not necessarily indicate the
observation of the edge at the focal node pair $\\{i,j\\}$. A subgraph in the
testing set may have the label ${\textnormal{y}}=1$ but the edge may not be
present. The _existence_ ${\textnormal{e}}=1$ only when the edge is observed
at the focal node pair.
###### Definition 2.
Given a subgraph ${\mathcal{G}}_{i,j}^{r}$, Subgraph Link Prediction is the
task of learning a feature ${\mathbf{h}}$ of the subgraph
${\mathcal{G}}_{i,j}^{r}$ and using it to predict the label
${\textnormal{y}}\in\\{0,1\\}$ of the subgraph.
Generally, subgraph link prediction regards the link prediction task as a
subgraph classification task. The pipeline starts by extracting the subgraph
${\mathcal{G}}_{i,j}^{r}$ around the focal node pair $\\{i,j\\}$, then applies
GNNs to encode the node representations ${\bm{Z}}$. The latent feature
${\mathbf{h}}$ of the subgraph is obtained by a pooling method on ${\bm{Z}}$.
In the end, the subgraph feature ${\mathbf{h}}$ is fed into a classifier. In
summary, the whole pipeline entails:
1. 1.
Subgraph Extraction: Extract the subgraph ${\mathcal{G}}_{i,j}^{r}$ around the
focal node pair $\\{i,j\\}$;
2. 2.
Node Representation Learning:
${\bm{Z}}=\mathtt{GNN}({\mathcal{G}}_{i,j}^{r})$, where
${\bm{Z}}\in\mathbb{R}^{|V_{i,j}^{r}|\times F_{\textnormal{hidden}}}$ is the
node embedding matrix learned by the GNN encoder;
3. 3.
Pooling: ${\mathbf{h}}=\mathtt{Pooling}({\bm{Z}};{\mathcal{G}}_{i,j}^{r})$,
where ${\mathbf{h}}\in\mathbb{R}^{F_{\textnormal{pooled}}}$ is the latent
feature of the subgraph ${\mathcal{G}}_{i,j}^{r}$;
4. 4.
Classification: ${\textnormal{y}}=\mathtt{Classifier}({\mathbf{h}})$.
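The four steps above can be strung together in a toy end-to-end sketch, with the GNN replaced by one round of degree-feature message passing, center pooling on the focal pair, and a fixed-threshold classifier; every component here is a placeholder of our own, chosen only to make the pipeline runnable:

```python
def toy_pipeline(edges, i, j, threshold=1.0):
    # 1. Subgraph extraction: 1-hop ball around the focal pair {i, j}.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = {i, j} | adj.get(i, set()) | adj.get(j, set())
    sub = {v: adj.get(v, set()) & nodes for v in nodes}
    # 2. "Node representation learning": degree feature + mean of neighbor degrees.
    deg = {v: len(sub[v]) for v in nodes}
    z = {v: (deg[v], sum(deg[u] for u in sub[v]) / max(len(sub[v]), 1)) for v in nodes}
    # 3. Pooling: center pooling on the focal pair (Hadamard-style product).
    h = (z[i][0] * z[j][0], z[i][1] * z[j][1])
    # 4. Classification: threshold on a linear score of h.
    score = 0.5 * h[0] + 0.5 * h[1]
    return 1 if score >= threshold else 0

# Focal pair with two common neighbors scores high; a sparse one does not.
print(toy_pipeline([(0, 2), (1, 2), (0, 3), (1, 3)], 0, 1))  # 1
print(toy_pipeline([(0, 2)], 0, 1))                          # 0
```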
There are two main streams of GNN-based link prediction models. Models like
SEAL [11] and WalkPool [12] fall naturally into the subgraph link prediction
framework, as they follow the pipeline in full. In SEAL, SortPooling [36]
serves as a readout to aggregate the nodes’ features in the subgraph. WalkPool
designs a random-walk based pooling method to extract the subgraph feature
${\mathbf{h}}$. Both methods take advantage of the node representations from
the entire subgraph.
In addition, there is another stream of link prediction models, such as GAE
[10] and PLNLP [14], which learn node representations and then apply a score
function to the representations of the focal node pair to estimate the
likelihood of forming a link. We find that these GNN-based methods with the
message passing paradigm also fit the subgraph link prediction framework.
Considering a GAE with $l$ layers, each node $v$ essentially learns its
embedding from its $l$-hop neighbors $\\{i|i\in V,d(i,v)\leq l\\}$. The score
function can be then regarded as a center pooling on the subgraph, which only
aggregates the features of the focal node pair as ${\mathbf{h}}$ to represent
the subgraph. For a focal node pair $\\{i,j\\}$ and GAE with $l$ layers, an
$l$-hop subgraph ${\mathcal{G}}_{i,j}^{l}$ sufficiently contains all the
information needed to learn the representation of nodes in the subgraph and
score the focal node pair $\\{i,j\\}$. Thus, the GNN-based models can also be
seen as a citizen of subgraph link prediction. In terms of the score function,
there are plenty of options depending on the predictive power in practice. In
general, the common choices are: (1) Hadamard product:
${\mathbf{h}}=z_{i}\circ z_{j}$; (2) MLP:
${\mathbf{h}}=\mathtt{MLP}(z_{i}\circ z_{j})$ where $\mathtt{MLP}$ is the
Multi-Layer Perceptron; (3) BiLinear: ${\mathbf{h}}=z_{i}{\bm{W}}z_{j}$ where
${\bm{W}}$ is a learnable matrix; (4) BiLinearMLP:
${\mathbf{h}}=\mathtt{MLP}(z_{i})\circ\mathtt{MLP}(z_{j})$.
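Two of the score functions listed above can be written down directly in plain Python (the helper names are ours; a real implementation would operate on learned embedding tensors):

```python
def hadamard(z_i, z_j):
    # (1) Hadamard product: elementwise z_i ∘ z_j.
    return [a * b for a, b in zip(z_i, z_j)]

def bilinear(z_i, W, z_j):
    # (3) BiLinear: z_i W z_j = sum over p, q of z_i[p] * W[p][q] * z_j[q],
    # where W is a learnable matrix.
    return sum(z_i[p] * W[p][q] * z_j[q]
               for p in range(len(z_i)) for q in range(len(z_j)))

print(hadamard([1.0, 2.0], [3.0, 4.0]))                             # [3.0, 8.0]
print(bilinear([1.0, 0.0], [[1.0, 2.0], [3.0, 4.0]], [0.0, 1.0]))   # 2.0
```

Options (2) and (4) simply wrap these in an MLP before or after the product.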
In addition to GNN-based methods, the concept of subgraph link prediction can
be extended to low-order heuristic link predictors, such as Common Neighbors
[1], the Adamic–Adar index [20], Preferential Attachment [37], the Jaccard
index [38], and Resource Allocation [39]. A predictor of order $r$ can be
computed from the subgraph ${\mathcal{G}}_{i,j}^{r}$, and its scalar value can
be seen as the latent feature ${\mathbf{h}}$.
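For reference, several of these low-order heuristics can be computed directly from the edge list of a small enclosing subgraph; a pure-Python sketch with our own helper names:

```python
import math

def neighbors(edges, v):
    """All nodes sharing an edge with v."""
    return {u for e in edges for u in e if v in e} - {v}

def common_neighbors(edges, i, j):
    return len(neighbors(edges, i) & neighbors(edges, j))

def adamic_adar(edges, i, j):
    cn = neighbors(edges, i) & neighbors(edges, j)
    return sum(1.0 / math.log(len(neighbors(edges, w))) for w in cn)

def preferential_attachment(edges, i, j):
    return len(neighbors(edges, i)) * len(neighbors(edges, j))

E = [(0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
print(common_neighbors(E, 0, 1))         # 2
print(preferential_attachment(E, 0, 1))  # 4
```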
## 4 FakeEdge: Mitigating dataset shift in subgraph link prediction
In this section, we first define dataset shift in the general case, and then
formally discuss how dataset shift arises in subgraph link prediction. We then
propose FakeEdge, a graph augmentation technique that closes the distribution
gap of the subgraph representation between the training and testing sets.
Lastly, we discuss how FakeEdge can enhance the expressive power of any
GNN-based subgraph link prediction model.
### 4.1 Dataset shift
###### Definition 3.
Dataset Shift happens when the joint distribution between train and test is
different. That is,
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{c}}=\text{train})\neq
p({\mathbf{h}},{\textnormal{y}}|{\textnormal{c}}=\text{test})$.
A simple example of dataset shift is an object detection system: if the system
is designed and trained only under good weather conditions, it may fail to
detect objects in bad weather. In general, dataset shift is often caused by
some unknown latent variable, like the weather condition in the example above.
The unknown variable is not observable during the training phase, so the model
cannot fully capture the conditions encountered during testing. Similarly, the
edge existence ${\textnormal{e}}\in\\{0,1\\}$ in the subgraph poses as an
"unknown" variable in the subgraph link prediction task. Most current
GNN-based models neglect the effect of edge existence on encoding the
subgraph’s feature.
###### Definition 4.
A subgraph’s feature ${\mathbf{h}}$ is called Edge Invariant if
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})=p({\mathbf{h}},{\textnormal{y}})$.
To explain, an Edge Invariant subgraph embedding stays the same whether or not
the edge is present at the focal node pair. It disentangles the edge’s
existence from the subgraph representation learning. For example, the common
neighbor predictor is Edge Invariant because the existence of an edge at the
focal node pair does not affect the number of common neighbors that the two
nodes have. However, Preferential Attachment, another widely used heuristic
link predictor, is not Edge Invariant because the node degrees vary depending
on the existence of the edge.
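This contrast between Common Neighbors and Preferential Attachment is easy to verify numerically on a toy subgraph; the helpers below are our own illustration:

```python
def nbrs(edges, v):
    return {u for a, b in edges for u in (a, b) if v in (a, b)} - {v}

def cn(edges, i, j):   # Common Neighbors heuristic
    return len(nbrs(edges, i) & nbrs(edges, j))

def pa(edges, i, j):   # Preferential Attachment heuristic
    return len(nbrs(edges, i)) * len(nbrs(edges, j))

without_focal = [(0, 2), (1, 2), (0, 3), (1, 3)]
with_focal = without_focal + [(0, 1)]  # same subgraph with the focal edge added

# CN is unchanged by the focal edge (Edge Invariant); PA is not (4 vs 9).
print(cn(without_focal, 0, 1) == cn(with_focal, 0, 1))  # True
print(pa(without_focal, 0, 1), pa(with_focal, 0, 1))    # 4 9
```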
###### Theorem 1.
$\mathtt{GNN}$ cannot learn the subgraph feature ${\mathbf{h}}$ to be Edge
Invariant.
Recall that the subgraphs in Figure 1 are encoded differently between the
training and testing sets because of the presence/absence of the focal link.
Thus, a vanilla GNN cannot learn an Edge Invariant subgraph feature. Learning
an Edge Invariant subgraph feature is crucial to mitigating the dataset shift
problem. Here, we give our main theorem about this issue in the link
prediction task:
###### Theorem 2.
Given
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}},{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})$,
there is no Dataset Shift in the link prediction if the subgraph embedding is
Edge Invariant. That is,
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})=p({\mathbf{h}},{\textnormal{y}})\Longrightarrow
p({\mathbf{h}},{\textnormal{y}}|{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}})$.
The assumption
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}},{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})$
states that, once the edge at the focal node pair is taken into account, the
joint distribution stays the same across the training and testing stages; that
is, there is no other underlying unobserved latent variable shifting the
distribution. The theorem shows that an Edge Invariant subgraph embedding will
not cause a dataset shift phenomenon.
Theorem 2 gives us the motivation to design the subgraph embedding to be Edge
Invariant. When it comes to GNNs, the practical GNN is essentially a message
passing neural network [17]. The existence of the edge incident at the focal
node pair can determine the computational graph for message passing when
learning the node representation.
### 4.2 Proposed methods
Figure 2: The proposed four FakeEdge methods. In general, FakeEdge encourages
the link prediction model to learn the subgraph representation by always
deliberately adding or removing the edge at the focal node pair in each
subgraph. In this way, FakeEdge can reduce the distribution gap of the learned
subgraph representation between the training and testing sets.
Having established the conditions under which dataset shift arises in link
prediction, we next introduce a collection of subgraph augmentation
techniques, named FakeEdge (Figure 2), which satisfy the conditions in Theorem
2. The motivation is to mitigate the distribution shift of the subgraph
embedding by eliminating the differing patterns of target link existence
between the training and testing sets. All of the strategies follow the same
principle: align the topological structure around the focal node pair in the
training and testing datasets, especially for isomorphic subgraphs. Therefore,
we expect comparable performance improvements across the different strategies.
Compared to vanilla GNN-based subgraph link prediction methods, FakeEdge
augments the computation graph in the node representation learning and
subgraph pooling steps to obtain an Edge Invariant embedding for the entire
subgraph.
Edge Plus A simple strategy is to always make the edge present at the focal
node pair for all training and testing samples. Namely, we add an edge to the
edge set of the subgraph by $E_{i,j}^{r+}=E_{i,j}^{r}\cup\\{(i,j)\\}$, and use
this edge set to compute the representation ${\mathbf{h}}^{plus}$ of the
subgraph ${\mathcal{G}}_{i,j}^{r+}$.
Edge Minus Another straightforward modification is to remove the edge at the
focal node pair if it exists. That is, we remove the edge from the edge set of
the subgraph by $E_{i,j}^{r-}=E_{i,j}^{r}\backslash\\{(i,j)\\}$, and obtain the
representation ${\mathbf{h}}^{minus}$ from ${\mathcal{G}}_{i,j}^{r-}$.
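As a minimal sketch (edge sets represented as Python sets of ordered tuples; the example graph is ours), the two augmentations and the alignment they produce look like this:

```python
def edge_plus(edge_set, focal):
    """E+ = E union {(i, j)}: force the focal edge to be present."""
    return set(edge_set) | {focal}

def edge_minus(edge_set, focal):
    """E- = E minus {(i, j)}: force the focal edge to be absent."""
    return set(edge_set) - {focal}

# A positive training sample carries the focal edge; an isomorphic test
# candidate does not. After either augmentation the encoder sees the same graph.
train_sample = {(0, 1), (1, 2), (0, 2)}  # focal pair (0, 1) observed
test_sample = {(1, 2), (0, 2)}           # focal pair (0, 1) held out

assert edge_plus(train_sample, (0, 1)) == edge_plus(test_sample, (0, 1))
assert edge_minus(train_sample, (0, 1)) == edge_minus(test_sample, (0, 1))
```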
For GNN-based models, adding or removing edges at the focal node pair can
amplify or reduce message propagation along the subgraph, and may also change
the subgraph's connectivity. We are interested in whether it is beneficial to
take both situations into account by combining them. Based on _Edge Plus_ and
_Edge Minus_, we develop two more Edge Invariant methods:
Edge Mean To combine _Edge Plus_ and _Edge Minus_, one can extract the two
features and fuse them into one view, for example by taking the average of the
two latent features:
${\mathbf{h}}^{mean}=\frac{{\mathbf{h}}^{plus}+{\mathbf{h}}^{minus}}{2}$.
Edge Att _Edge Mean_ weighs ${\mathcal{G}}_{i,j}^{r+}$ and
${\mathcal{G}}_{i,j}^{r-}$ equally on all subgraphs. To vary the importance of
the two modified subgraphs, we can instead apply an adaptive weighted sum.
Similar to common practice in text translation [40], we apply an attention
mechanism to fuse ${\mathbf{h}}^{plus}$ and ${\mathbf{h}}^{minus}$:
$\displaystyle{\mathbf{h}}^{att}=w^{plus}*{\mathbf{h}}^{plus}+w^{minus}*{\mathbf{h}}^{minus},$ (1)
$\displaystyle\text{where }w^{\cdot}=\text{SoftMax}({\bm{q}}^{\intercal}\cdot\tanh({\bm{W}}\cdot{\mathbf{h}}^{\cdot}+{\bm{b}}))$ (2)
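For concreteness, here is a dependency-free sketch of the two fusion rules, with toy hand-set parameters standing in for the learned ${\bm{W}}$, ${\bm{b}}$, ${\bm{q}}$ (in the actual model these are trained end to end):

```python
import math

def mean_fuse(h_plus, h_minus):
    # Edge Mean: elementwise average of the two views
    return [(a + b) / 2 for a, b in zip(h_plus, h_minus)]

def att_fuse(h_plus, h_minus, W, b, q):
    # Edge Att: weights from a softmax over scores q^T tanh(W h + b)
    def score(h):
        z = [math.tanh(sum(W[r][c] * h[c] for c in range(len(h))) + b[r])
             for r in range(len(b))]
        return sum(qr * zr for qr, zr in zip(q, z))
    s_p, s_m = score(h_plus), score(h_minus)
    m = max(s_p, s_m)  # stabilize the softmax
    w_p = math.exp(s_p - m) / (math.exp(s_p - m) + math.exp(s_m - m))
    w_m = 1.0 - w_p
    return [w_p * a + w_m * b_ for a, b_ in zip(h_plus, h_minus)]

h_plus, h_minus = [1.0, 0.0], [0.0, 1.0]
W, b, q = [[0.5, -0.5], [0.2, 0.3]], [0.1, -0.1], [1.0, 0.5]
print(mean_fuse(h_plus, h_minus))        # [0.5, 0.5]
print(att_fuse(h_plus, h_minus, W, b, q))  # a convex combination of the two views
```

Since the attention weights sum to one, the fused embedding always lies between the _Edge Plus_ and _Edge Minus_ views, with the trade-off learned per subgraph.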
### 4.3 Expressive power of structural representation
Figure 3: Given two isomorphic but non-overlapping subgraphs $A$ and $B$, GNNs
learn the same representation for the nodes $u$ and $v$. Hence, GNN-based
methods cannot distinguish the focal node pairs $\\{u,w\\}$ and $\\{v,w\\}$.
However, adding a FakeEdge at $\\{u,w\\}$ (shown as the dashed line in the
figure) breaks the tie between the representations of $u$ and $v$, thanks to
$u$'s modified neighborhood.
In addition to solving the dataset shift issue, FakeEdge tackles another
problem that impedes the expressive power of link prediction methods on
structural representations [41]. In general, a powerful model is expected to
discriminate most non-isomorphic focal node pairs. For instance, in Figure 3
we have two isomorphic subgraphs $A$ and $B$ that do not share any nodes.
Suppose the focal node pairs of interest are $\\{u,w\\}$ and $\\{v,w\\}$.
Clearly, these two focal node pairs play different structural roles in the
graph, so we expect different structural representations for them. With
GNN-based methods like GAE, the node representations of $u$ and $v$ will be
identical, $z_{u}=z_{v}$, because the two nodes have isomorphic neighborhoods.
GAE applies a score function on the focal node pair to pool the subgraph's
feature, so the structural representations of the node sets $\\{u,w\\}$ and
$\\{v,w\\}$ would be the same, leaving them inseparable in the embedding
space. This issue stems from the limitation of GNNs, whose expressive power is
bounded by the 1-WL test [42].
Zhang et al. address this problem by assigning distinct labels to the focal
node pair and the rest of the nodes in the subgraph [19]. FakeEdge resolves
the issue by augmenting the neighborhoods of the two isomorphic nodes. For
instance, we can use the _Edge Plus_ strategy to deliberately add an edge
between nodes $u$ and $w$ (shown as the dashed line in Figure 3). Note that
the edge between $v$ and $w$ already exists, so there is no need to add one
there. The nodes $u$ and $v$ therefore have different neighborhoods ($u$ has 4
neighbors and $v$ has 3), resulting in different node representations for $u$
and $v$ after the first iteration of message propagation with the GNN. In the
end, we obtain different representations for the two focal node pairs. Other
FakeEdge methods such as _Edge Minus_ tackle the issue in a similar way.
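This failure mode and its fix can be reproduced with a 1-WL-style color refinement on a toy instance: two disjoint triangles, a simplified stand-in for Figure 3 with node names of our choosing (here $v$ and $w$ sit in the same triangle, so the edge $(v,w)$ exists, while $u$ sits in the other):

```python
def wl_colors(adj, rounds=3):
    """1-WL color refinement with uniform initial colors (no node attributes)."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # Each node's new color = (old color, multiset of neighbor colors)
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: palette[sig[v]] for v in adj}
    return colors

def make_adj(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

base = [("u", "x1"), ("u", "x2"), ("x1", "x2"),
        ("v", "w"), ("v", "y"), ("w", "y")]
c = wl_colors(make_adj(base))
assert c["u"] == c["v"]  # GNN colors cannot separate pairs {u,w} and {v,w}

# Edge Plus adds a fake edge at the focal pair {u, w}, breaking the symmetry.
c_fake = wl_colors(make_adj(base + [("u", "w")]))
assert c_fake["u"] != c_fake["v"]
```

Before the fake edge every node has degree 2 and refinement never separates them; afterwards, $u$ and $w$ receive a distinct color, so any score function over the pair embeddings can tell $\\{u,w\\}$ from $\\{v,w\\}$.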
According to Theorem 2 in [19], such non-isomorphic focal node pairs
$\\{u,w\\}$, $\\{v,w\\}$ are not sporadic cases. Given a graph with $n$ nodes
whose node degrees are $\mathcal{O}(\log^{\frac{1-\epsilon}{2r}}n)$ for any
constant $\epsilon>0$, there exist $\omega(n^{2\epsilon})$ such pairs
$\\{u,w\\}$ and $\\{v,w\\}$ that cannot be distinguished by GNN-based models
like GAE. FakeEdge, however, enhances the expressive power of link prediction
methods by modifying the subgraph's local connectivity.
## 5 Experiments
In this section, we conduct extensive experiments to evaluate how FakeEdge
mitigates the dataset shift issue for various baseline models in the link
prediction task. We then empirically show the distribution gap of the subgraph
representation between training and testing, and discuss how the dataset shift
issue worsens with deeper GNNs. The code for the experiments can be found at
https://github.com/Barcavin/FakeEdge.
Table 1: Comparison with and without FakeEdge (AUC). The best results are
highlighted in bold.
| Model | FakeEdge | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GCN | _Original_ | 84.92±1.95 | 77.05±2.18 | 81.58±4.62 | 94.07±1.50 | 96.92±0.73 | 93.17±0.45 | 93.76±0.65 | 88.78±1.85 | 76.32±4.65 | 60.72±5.88 | 95.35±0.36 |
| | _Edge Plus_ | 91.94±0.90 | 89.54±1.17 | 97.91±0.14 | 97.10±1.01 | 98.03±0.72 | 95.48±0.42 | 97.86±0.27 | 89.65±1.74 | 85.42±0.91 | 95.96±0.41 | 98.05±0.30 |
| | _Edge Minus_ | 92.01±0.94 | 90.29±0.88 | 97.87±0.15 | 97.16±0.97 | 98.14±0.66 | 95.50±0.43 | 97.90±0.29 | 89.47±1.86 | 85.39±1.08 | 96.05±0.37 | 97.97±0.31 |
| | _Edge Mean_ | 91.86±0.76 | 89.61±0.96 | 97.94±0.13 | 97.19±1.00 | 98.08±0.66 | 95.52±0.43 | 97.70±0.36 | 89.62±1.82 | 85.23±1.00 | 96.08±0.35 | 98.07±0.27 |
| | _Edge Att_ | 92.06±0.85 | 88.96±1.05 | 97.96±0.12 | 97.20±0.69 | 97.96±0.39 | 95.46±0.45 | 97.65±0.17 | 89.76±2.06 | 85.26±1.32 | 95.90±0.47 | 98.04±0.16 |
| SAGE | _Original_ | 89.12±0.90 | 87.76±0.97 | 94.95±0.44 | 96.57±0.57 | 98.11±0.48 | 94.12±0.45 | 97.11±0.31 | 87.62±1.63 | 79.35±1.66 | 88.37±1.46 | 95.70±0.44 |
| | _Edge Plus_ | 93.21±0.82 | 90.88±0.80 | 97.91±0.14 | 97.64±0.73 | 98.72±0.59 | 95.68±0.39 | 98.20±0.13 | 90.94±1.48 | 86.36±0.97 | 96.46±0.38 | 98.41±0.19 |
| | _Edge Minus_ | 92.45±0.78 | 90.14±1.04 | 97.93±0.14 | 97.50±0.67 | 98.66±0.55 | 95.57±0.39 | 98.13±0.10 | 90.83±1.59 | 85.62±1.17 | 92.91±1.09 | 98.34±0.26 |
| | _Edge Mean_ | 92.77±0.69 | 90.60±0.94 | 97.96±0.13 | 97.67±0.70 | 98.62±0.61 | 95.69±0.37 | 98.20±0.13 | 90.86±1.51 | 86.24±1.01 | 96.22±0.38 | 98.41±0.21 |
| | _Edge Att_ | 93.31±1.02 | 91.01±1.14 | 98.01±0.13 | 97.40±0.94 | 98.70±0.59 | 95.49±0.49 | 98.22±0.24 | 90.64±1.88 | 86.46±0.91 | 96.31±0.59 | 98.43±0.13 |
| GIN | _Original_ | 82.70±1.93 | 77.85±2.64 | 91.32±1.13 | 94.89±0.89 | 96.05±1.10 | 92.95±0.51 | 94.50±0.65 | 85.23±2.56 | 73.29±3.88 | 84.29±1.20 | 94.34±0.57 |
| | _Edge Plus_ | 90.72±1.11 | 89.54±1.19 | 97.63±0.14 | 96.03±1.37 | 98.51±0.55 | 95.38±0.35 | 97.84±0.40 | 89.71±2.06 | 86.61±0.87 | 95.79±0.48 | 97.67±0.23 |
| | _Edge Minus_ | 89.88±1.26 | 89.30±1.08 | 97.27±0.17 | 96.36±0.83 | 98.62±0.45 | 95.35±0.35 | 97.80±0.41 | 89.40±1.91 | 86.55±0.83 | 95.72±0.45 | 97.33±0.36 |
| | _Edge Mean_ | 90.30±1.22 | 89.47±1.13 | 97.53±0.19 | 96.45±0.90 | 98.66±0.45 | 95.39±0.37 | 97.78±0.40 | 89.66±2.00 | 86.51±0.92 | 95.73±0.43 | 97.57±0.32 |
| | _Edge Att_ | 90.76±0.88 | 89.55±0.61 | 97.50±0.15 | 96.34±0.82 | 98.35±0.54 | 95.29±0.29 | 97.66±0.33 | 89.39±1.61 | 86.21±0.67 | 95.78±0.52 | 97.74±0.33 |
| PLNLP | _Original_ | 82.37±1.70 | 82.93±1.73 | 87.36±4.90 | 95.37±0.87 | 97.86±0.93 | 92.99±0.71 | 95.09±1.47 | 88.31±2.21 | 81.59±4.31 | 86.41±1.63 | 90.63±1.68 |
| | _Edge Plus_ | 91.62±0.87 | 89.88±1.19 | 98.31±0.21 | 98.09±0.73 | 98.77±0.39 | 95.33±0.39 | 98.10±0.33 | 91.77±2.16 | 90.04±0.57 | 96.45±0.40 | 98.03±0.23 |
| | _Edge Minus_ | 91.84±1.42 | 88.99±1.48 | 98.44±0.14 | 97.92±0.52 | 98.59±0.44 | 95.20±0.34 | 98.01±0.38 | 91.60±2.23 | 89.26±0.58 | 95.01±0.47 | 97.80±0.16 |
| | _Edge Mean_ | 91.77±1.49 | 89.45±1.50 | 98.36±0.16 | 98.17±0.60 | 98.66±0.56 | 95.30±0.37 | 98.10±0.39 | 91.70±2.18 | 90.05±0.52 | 96.29±0.47 | 98.02±0.20 |
| | _Edge Att_ | 91.22±1.34 | 88.75±1.70 | 98.41±0.17 | 98.13±0.61 | 98.70±0.40 | 95.32±0.38 | 98.06±0.37 | 91.72±2.12 | 90.08±0.54 | 96.40±0.40 | 98.01±0.18 |
| SEAL | _Original_ | 90.13±1.94 | 87.59±1.57 | 95.79±0.78 | 97.26±0.58 | 97.44±1.07 | 95.06±0.46 | 96.91±0.45 | 88.75±1.90 | 78.14±3.14 | 92.35±1.21 | 97.33±0.28 |
| | _Edge Plus_ | 90.01±1.95 | 89.65±1.22 | 97.30±0.34 | 97.34±0.59 | 98.35±0.63 | 95.35±0.38 | 97.67±0.32 | 89.20±1.86 | 85.25±0.80 | 95.47±0.58 | 97.84±0.25 |
| | _Edge Minus_ | 91.04±1.91 | 89.74±1.16 | 97.50±0.33 | 97.27±0.63 | 98.17±0.74 | 95.36±0.37 | 97.64±0.30 | 89.35±1.98 | 85.30±0.91 | 95.77±0.79 | 97.79±0.30 |
| | _Edge Mean_ | 90.36±2.17 | 89.87±1.14 | 97.52±0.34 | 97.38±0.68 | 98.23±0.70 | 95.30±0.34 | 97.68±0.33 | 89.19±1.85 | 85.30±0.87 | 95.61±0.64 | 97.83±0.23 |
| | _Edge Att_ | 91.08±1.67 | 89.35±1.43 | 97.26±0.45 | 97.04±0.79 | 98.52±0.57 | 95.19±0.43 | 97.70±0.40 | 89.37±1.40 | 85.24±1.39 | 95.14±0.62 | 97.90±0.33 |
| WalkPool | _Original_ | 92.00±0.79 | 89.64±1.01 | 97.70±0.19 | 97.83±0.97 | 99.00±0.45 | 94.53±0.44 | 96.81±0.92 | 93.71±1.11 | 82.43±3.57 | 87.46±7.45 | 95.00±0.90 |
| | _Edge Plus_ | 91.96±0.79 | 89.49±0.96 | 98.36±0.13 | 97.97±0.96 | 98.99±0.58 | 95.47±0.32 | 98.28±0.24 | 93.79±1.11 | 91.24±0.84 | 97.31±0.26 | 98.65±0.17 |
| | _Edge Minus_ | 91.97±0.80 | 89.61±1.04 | 98.43±0.10 | 98.03±0.95 | 99.02±0.54 | 95.47±0.32 | 98.30±0.23 | 93.83±1.13 | 91.28±0.90 | 97.35±0.28 | 98.66±0.17 |
| | _Edge Mean_ | 91.77±0.74 | 89.55±1.09 | 98.39±0.11 | 98.01±0.89 | 99.02±0.56 | 95.47±0.29 | 98.30±0.24 | 93.70±1.12 | 91.26±0.81 | 97.27±0.29 | 98.65±0.19 |
| | _Edge Att_ | 91.98±0.80 | 89.36±0.74 | 98.37±0.19 | 98.12±0.81 | 99.03±0.50 | 95.47±0.27 | 98.28±0.24 | 93.63±1.11 | 91.25±0.60 | 97.27±0.27 | 98.70±0.14 |
### 5.1 Experimental setup
##### Baseline methods.
We show how FakeEdge techniques can improve existing link prediction methods,
including GAE-like models [10], PLNLP [14], SEAL [11], and WalkPool [12]. To
examine the effectiveness of FakeEdge, we compare model performance with the
subgraph representation learned on the original, unmodified subgraphs against
that learned on the FakeEdge-augmented ones. For GAE-like models, we apply
different GNN encoders: GCN [9], SAGE [43], and GIN [42]. SEAL and WalkPool
are already implemented as subgraph link prediction methods, whereas GAE and
PLNLP require a subgraph extraction preprocessing step, since they were not
originally implemented for subgraph link prediction. GCN, SAGE, and PLNLP use
a score function to pool the subgraph: GCN and SAGE use the Hadamard product
as the score function, while PLNLP uses an MLP (see Section 3.2 for a
discussion of score functions). GIN instead applies a subgraph-level pooling
strategy, "mean readout" [42], which pools over the entire subgraph.
Similarly, SEAL and WalkPool also pool over the entire subgraph to aggregate
the representation. More details about the model implementations can be found
in Appendix D.
##### Benchmark datasets.
For the experiment, we use 3 datasets with node attributes and 8 without
attributes. The graph datasets with node attributes are three citation
networks: Cora [44], Citeseer [45], and Pubmed [46]. The graph datasets
without node attributes are eight graphs in a variety of domains: USAir [47],
NS [48], PB [49], Yeast [50], C.ele [51], Power [51], Router [52], and E.coli
[53]. More details about the benchmark datasets can be found in Appendix E.
##### Evaluation protocols.
Following the same experimental setting as [11, 12], the links are split into
3 parts: $85\%$ for training, $5\%$ for validation, and $10\%$ for testing.
The links in the validation and testing sets are unobserved during the
training phase. We implement a universal data pipeline for the different
methods to eliminate data perturbation caused by the train/test split, and
perform $10$ random data splits to reduce performance variance. Area under the
curve (AUC) [54] is used as the evaluation metric and is reported at the epoch
with the highest score on the validation set.
### 5.2 Results
##### FakeEdge on GAE-like models.
The results of models with (_Edge Plus_ , _Edge Minus_ , _Edge Mean_ , and
_Edge Att_) and without (_Original_) FakeEdge are shown in Table 1. We observe
that FakeEdge is a vital component for all of the methods: with FakeEdge, the
link prediction models obtain significant performance improvements on all
datasets. GAE-like models and PLNLP achieve the most remarkable improvements
once FakeEdge alleviates the dataset shift issue, with gains of $2\%$-$11\%$
on different datasets. GCN, SAGE, and PLNLP all use a score function as the
pooling method, which is based solely on the focal node pair. The focal node
pair is incident to the target link, which determines how messages pass
around it; therefore, the most severe dataset shift occurs in the embedding
of the focal node pair during the node representation learning step, and
FakeEdge is expected to bring a notable improvement in these situations.
##### Encoder matters.
In addition, the choice of encoder plays an important role when GAE is applied
to the _Original_ subgraphs: SAGE shows the best performance without FakeEdge
among the three encoders. After applying FakeEdge, however, all GAE-like
methods achieve comparably good results regardless of the encoder. We
hypothesize that plain SAGE already leverages the idea of FakeEdge to
partially mitigate the dataset shift issue. Each node's neighborhood in SAGE
is a fixed-size set of nodes uniformly sampled from the full neighborhood.
Thus, when learning the node representations of the focal node pair in the
positive training samples, one node of the pair may not be selected as a
neighbor of the other during neighborhood sampling; in that case, the _Edge
Minus_ technique is effectively applied to the subgraph.
##### FakeEdge on subgraph-based models.
For SEAL and WalkPool, FakeEdge still robustly enhances model performance
across datasets. For datasets like Power and Router in particular, FakeEdge
increases the AUC by over $10\%$ for both methods. Both methods achieve better
results across the different datasets, except for WalkPool on Cora and
Citeseer. One of the crucial components of WalkPool is its walk-based pooling,
which already operates on both the _Edge Plus_ and _Edge Minus_ graphs; unlike
FakeEdge, WalkPool tackles the dataset shift problem mainly at the subgraph
pooling stage, so it shows similar performance between the _Original_ and
FakeEdge-augmented graphs. Moreover, SEAL and WalkPool each use one of the
FakeEdge techniques as a trick in their original implementations, but their
papers do not explicitly connect such tricks to fixing the dataset shift
issue.
##### Different FakeEdge techniques.
When comparing the different FakeEdge techniques, _Edge Att_ appears to be the
most stable, with slightly better overall performance and smaller variance.
However, there is no significant difference among the techniques. This
observation matches our expectation, since all FakeEdge techniques follow the
same principle to fix the dataset shift issue.
### 5.3 Further discussions
In this section, we conduct experiments to study more thoroughly why FakeEdge
improves the performance of link prediction methods. We first present an
empirical experiment showing how severe the distribution gap between training
and testing can be. Then, we discuss the dataset shift issue with deeper GNNs.
Last but not least, we explore how FakeEdge can even improve the performance
of heuristic predictors.
#### 5.3.1 Distribution gap between the training and testing
Figure 4: Distribution gap (AUC) of the positive samples between the training
and testing set.
FakeEdge aims to produce Edge Invariant subgraph embeddings across the
training and testing phases of the link prediction task, especially for the
positive samples $p({\mathbf{h}}|{\textnormal{y}}=1)$. That is, the subgraph
representations of positive samples from training and testing should be
difficult, if not impossible, to distinguish from each other. Formally, we ask
whether
$p({\mathbf{h}}|{\textnormal{y}}=1,{\textnormal{c}}=\text{train})=p({\mathbf{h}}|{\textnormal{y}}=1,{\textnormal{c}}=\text{test})$,
by conducting an empirical experiment on the subgraph embedding.
We retrieve the subgraph embeddings of the positive samples from both the
training and testing stages and randomly shuffle them. We then classify
whether each sample comes from training (${\textnormal{c}}=\text{train}$) or
testing (${\textnormal{c}}=\text{test}$). The shuffled positive samples are
split 80%/20% into train and inference sets; note that both sets contain
shuffled positive samples from the training and testing stages of the link
prediction task. We feed the subgraph embeddings into a 2-layer MLP classifier
to investigate whether it can differentiate the training samples
(${\textnormal{c}}=\text{train}$) from the testing samples
(${\textnormal{c}}=\text{test}$). In general, the classifier will struggle at
this classification if the embeddings of the training and testing samples are
drawn from the same underlying distribution, which indicates no significant
dataset shift.
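The logic of this probe can be illustrated with a dependency-free 1-D stand-in: synthetic Gaussian "embeddings" replace the real subgraph vectors, and a rank-based AUC replaces the MLP classifier (all names and parameters here are ours, purely for illustration).

```python
import random

def auc(scores_a, scores_b):
    # Probability that a random sample from group A ranks above one from group B
    wins = ties = 0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_a) * len(scores_b))

random.seed(0)
# Without FakeEdge: positive training subgraphs are encoded with the focal
# edge present, test candidates without it -> shifted embedding distributions.
train_no_fix = [random.gauss(1.0, 0.3) for _ in range(200)]
test_no_fix = [random.gauss(0.0, 0.3) for _ in range(200)]
# With FakeEdge: both stages see the same augmented topology.
train_fix = [random.gauss(0.0, 0.3) for _ in range(200)]
test_fix = [random.gauss(0.0, 0.3) for _ in range(200)]

print(auc(train_no_fix, test_no_fix))  # well above 0.5: train/test separable
print(auc(train_fix, test_fix))        # near 0.5: indistinguishable
```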
We use GAE with a GCN encoder to run the experiment, and AUC to measure the
discriminating power of the classifier. The results are shown in Figure 4.
Without FakeEdge, the classifier shows a significant ability to separate
positive samples between training and testing. With FakeEdge, the classifier
stumbles in distinguishing the samples. The comparison clearly reveals how
different the subgraph embeddings can be between training and testing, and
that FakeEdge both provably and empirically diminishes the distribution gap.
#### 5.3.2 Dataset shift with deeper GNNs
Table 2: GIN’s performance improvement by _Edge Att_ compared to _Original_
with a different number of layers. GIN utilizes mean-pooling as the subgraph-
level readout.
| Layers | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ↑2.80% | ↑3.65% | ↑4.53% | ↑0.29% | ↑1.30% | ↑1.02% | ↑1.54% | ↑2.13% | ↑5.24% | ↑11.19% | ↑1.67% |
| 2 | ↑4.66% | ↑14.53% | ↑6.64% | ↑0.73% | ↑1.55% | ↑2.16% | ↑3.40% | ↑5.41% | ↑25.32% | ↑14.73% | ↑2.59% |
| 3 | ↑9.78% | ↑15.19% | ↑6.57% | ↑0.98% | ↑2.49% | ↑2.43% | ↑3.60% | ↑4.48% | ↑20.46% | ↑13.38% | ↑3.14% |
Given two graphs with $n$ nodes each, the 1-WL test may take up to $n$
iterations to determine whether they are isomorphic [55]. Thus GNNs, which
mimic the 1-WL test, tend to discriminate more non-isomorphic graphs as the
number of layers increases. SEAL [19] has empirically observed stronger
representation power and more expressive link representations with deeper
GNNs. However, we notice that the dataset shift issue in subgraph link
prediction becomes more severe when GNNs try to capture long-range information
with more layers.
We reproduce the experiments on GIN using $l=1,2,3$ message passing layers and
compare the AUC scores with and without FakeEdge; here we apply only _Edge
Att_ as the FakeEdge technique. We report the relative AUC improvement of
_Edge Att_, namely
${(AUC_{EdgeAtt}-AUC_{Original})}/{AUC_{Original}}$. The results are shown in
Table 2. As we can observe, the relative performance improvement of _Edge Att_
over _Original_ becomes more significant with more layers, indicating that the
dataset shift issue can become more critical as we turn to deeper GNNs for
greater predictive power.
To explain this phenomenon, we hypothesize that GNNs with more layers involve
more nodes in the subgraph whose computation graphs depend on the existence of
the edge at the focal node pair. For example, select a node $v$ from the
subgraph ${\mathcal{G}}_{i,j}^{r}$ that is exactly $l$ hops away from the
focal node pair $\\{i,j\\}$, namely $l=\min(d(i,v),d(j,v))$. If the GNN has
only $l$ layers, $v$ will not include the edge $(i,j)$ in its computation
graph, but with $l+1$ layers the edge $(i,j)$ will affect $v$'s computation
graph. We leave the validation of this hypothesis to future work.
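The hypothesis can be made concrete with a breadth-first computation of which edges enter a node's $l$-layer computation graph. This is a sketch (function names ours) under the standard message passing assumption that an edge $(a,b)$ influences $v$'s embedding after $l$ layers iff $\min(d(v,a),d(v,b))\le l-1$:

```python
from collections import deque

def edges_in_receptive_field(adj, v, layers):
    """Edges that can influence node v's embedding after `layers` rounds of
    message passing: edge (a, b) is included iff its nearer endpoint lies
    within layers - 1 hops of v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:  # BFS to get hop distances from v
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    inf = float("inf")
    return {(a, b) for a in adj for b in adj[a] if a < b
            and min(dist.get(a, inf), dist.get(b, inf)) <= layers - 1}

# Path 0-1-2-3 with focal pair {0, 1}; node 3 is l = min(3, 2) = 2 hops away.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print((0, 1) in edges_in_receptive_field(path, 3, 2))  # False: l layers miss the focal edge
print((0, 1) in edges_in_receptive_field(path, 3, 3))  # True: l + 1 layers include it
```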
#### 5.3.3 Heuristic methods with FakeEdge
FakeEdge, as a model-agnostic technique, can alleviate the dataset shift issue
not only for GNN-based models but also for heuristic methods. Heuristic link
predictors assign a score to each focal node pair indicating the likelihood of
forming a new edge. Some conventional heuristic predictors, such as Common
Neighbor [1], the Adamic–Adar index [20], and Resource Allocation [39], are
Edge Invariant because they are independent of the existence of the target
link. Others, including Preferential Attachment (PA) [37] and the Jaccard
index (Jac) [38], are not Edge Invariant: the existence or absence of the
target link changes the values of the predictors, which in turn changes the
ranking of focal node pairs. The original PA for a focal node pair $i,j$ is
$PA(i,j)=|\mathcal{N}(i)||\mathcal{N}(j)|$, where $\mathcal{N}(i)$ is the set
of neighbors of node $i$. After applying _Edge Plus_ ,
$PA^{plus}(i,j)=|\mathcal{N}(i)\cup\\{j\\}||\mathcal{N}(j)\cup\\{i\\}|$.
Similarly,
$Jac^{plus}(i,j)=|\mathcal{N}(i)\cap\mathcal{N}(j)|/|\mathcal{N}(i)\cup\mathcal{N}(j)\cup\\{i,j\\}|$.
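A small check of these Edge Plus variants on a hypothetical 4-node graph (`neigh` maps each node to its neighbor set; the example is ours): the scores come out identical whether or not the candidate edge is present.

```python
def pa_plus(neigh, i, j):
    # PA^plus: score as if the focal edge (i, j) were always present
    return len(neigh[i] | {j}) * len(neigh[j] | {i})

def jac_plus(neigh, i, j):
    # Jac^plus: intersection unchanged, union forced to contain i and j
    return len(neigh[i] & neigh[j]) / len(neigh[i] | neigh[j] | {i, j})

# The same 4-node graph without and with the candidate edge (0, 3)
absent = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
present = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}

for score in (pa_plus, jac_plus):
    assert score(absent, 0, 3) == score(present, 0, 3)  # Edge Invariant after Edge Plus
```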
Table 3: Heuristic methods with/without FakeEdge (AUC). The best results are
highlighted in bold.
| Model | FakeEdge | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PA | _Original_ | 63.15±1.38 | 58.20±2.18 | 71.72±0.36 | 88.84±1.41 | 66.19±1.82 | 90.05±0.52 | 82.10±1.15 | 75.72±2.20 | 44.47±1.58 | 48.20±0.83 | 91.99±0.78 |
| | _Edge Plus_ | 65.05±1.31 | 61.05±1.96 | 84.04±0.37 | 90.36±1.45 | 65.29±1.97 | 90.47±0.49 | 82.66±0.98 | 75.98±2.31 | 46.83±1.61 | 74.03±1.05 | 91.98±0.78 |
| Jac | _Original_ | 71.76±0.85 | 66.33±1.23 | 64.41±0.20 | 88.89±1.55 | 92.19±0.80 | 86.82±0.60 | 88.49±0.53 | 78.77±1.94 | 58.18±0.50 | 55.77±0.55 | 81.43±0.92 |
| | _Edge Plus_ | 71.77±0.85 | 66.33±1.23 | 64.42±0.20 | 89.65±1.45 | 92.19±0.80 | 87.20±0.58 | 88.52±0.53 | 79.33±1.88 | 58.18±0.50 | 55.77±0.55 | 81.79±0.90 |
We follow the same protocol as in the previous experiment. As shown in Table
3, _Edge Plus_ significantly improves the performance of the PA predictor on
several datasets: with FakeEdge, PA performs over 10% better on Pubmed.
Remarkably, while the original PA fails to predict links on the Router dataset
(AUC below 50%), PA with _Edge Plus_ achieves a 74% AUC and becomes a
functional link predictor. For Jac, FakeEdge yields only marginal improvement.
This is because, even though Jac depends on the existence of the target link,
the change in the Jaccard index is relatively small when the existence of the
target link flips.
## 6 Conclusion
Dataset shift is arguably one of the most challenging problems in machine
learning. To the best of our knowledge, however, no previous study has shed
light on this notable phenomenon in link prediction. In this paper, we studied
the issue of dataset shift in link prediction tasks with GNN-based models. We
first unified several existing models into a framework of subgraph link
prediction. Then, we theoretically investigated the phenomenon of dataset
shift in subgraph link prediction and proposed FakeEdge, a model-agnostic
technique, to remedy the issue. Experiments with different models over a wide
range of datasets verified the effectiveness of FakeEdge.
## References
* Liben-Nowell and Kleinberg [2003] David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In _Proceedings of the twelfth international conference on Information and knowledge management_ , CIKM ’03, pages 556–559, New York, NY, USA, November 2003. Association for Computing Machinery. ISBN 978-1-58113-723-1. doi: 10.1145/956863.956972. URL http://doi.org/10.1145/956863.956972.
* Szklarczyk et al. [2019] Damian Szklarczyk, Annika L. Gable, David Lyon, Alexander Junge, Stefan Wyder, Jaime Huerta-Cepas, Milan Simonovic, Nadezhda T. Doncheva, John H. Morris, Peer Bork, Lars J. Jensen, and Christian von Mering. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. _Nucleic Acids Research_ , 47(D1):D607–D613, January 2019. ISSN 1362-4962. doi: 10.1093/nar/gky1131.
* Koren et al. [2009] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. _Computer_ , 42(8):30–37, 2009. Publisher: IEEE.
* Yang et al. [2016] Zhilin Yang, William Cohen, and Ruslan Salakhudinov. Revisiting semi-supervised learning with graph embeddings. In _International conference on machine learning_ , pages 40–48. PMLR, 2016.
* Yang et al. [2015] Yang Yang, Ryan N. Lichtenwalter, and Nitesh V. Chawla. Evaluating link prediction methods. _Knowledge and Information Systems_ , 45(3):751–782, December 2015. ISSN 0219-3116. doi: 10.1007/s10115-014-0789-0. URL https://doi.org/10.1007/s10115-014-0789-0.
* Defferrard et al. [2016] Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 29. Curran Associates, Inc., 2016. URL https://proceedings.neurips.cc/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf.
* Duvenaud et al. [2015] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/f9be311e65d81a9ad8150a60844bb94c-Paper.pdf.
* Bruna et al. [2014] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral Networks and Locally Connected Networks on Graphs. _arXiv:1312.6203 [cs]_ , May 2014. URL http://arxiv.org/abs/1312.6203. arXiv: 1312.6203.
* Kipf and Welling [2017] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. _arXiv:1609.02907 [cs, stat]_ , February 2017. URL http://arxiv.org/abs/1609.02907. arXiv: 1609.02907.
* Kipf and Welling [2016] Thomas N. Kipf and Max Welling. Variational Graph Auto-Encoders, 2016. arXiv:1611.07308.
* Zhang and Chen [2018] Muhan Zhang and Yixin Chen. Link Prediction Based on Graph Neural Networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf.
* Pan et al. [2022] Liming Pan, Cheng Shi, and Ivan Dokmanić. Neural Link Prediction with Walk Pooling. In _International Conference on Learning Representations_ , 2022. URL https://openreview.net/forum?id=CCu6RcUMwK0.
* Li et al. [2020] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_ , volume 33, pages 4465–4478. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/2f73168bf3656f697507752ec592c437-Paper.pdf.
* Wang et al. [2022] Zhitao Wang, Yong Zhou, Litao Hong, Yuanhang Zou, Hanjing Su, and Shouzhi Chen. Pairwise Learning for Neural Link Prediction. _arXiv:2112.02936 [cs]_ , January 2022. URL http://arxiv.org/abs/2112.02936. arXiv: 2112.02936.
* Quiñonero-Candela et al. [2008] Joaquin Quiñonero-Candela, Masashi Sugiyama, Anton Schwaighofer, Neil D. Lawrence, Michael I. Jordan, and Thomas G. Dietterich, editors. _Dataset Shift in Machine Learning_. Neural Information Processing series. MIT Press, Cambridge, MA, USA, December 2008. ISBN 978-0-262-17005-5.
* Moreno-Torres et al. [2012] Jose G. Moreno-Torres, Troy Raeder, Rocío Alaiz-Rodríguez, Nitesh V. Chawla, and Francisco Herrera. A unifying view on dataset shift in classification. _Pattern Recognition_ , 45(1):521–530, January 2012. ISSN 0031-3203. doi: 10.1016/j.patcog.2011.06.019. URL http://doi.org/10.1016/j.patcog.2011.06.019.
* Gilmer et al. [2017] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural Message Passing for Quantum Chemistry. _CoRR_ , abs/1704.01212, 2017. URL http://arxiv.org/abs/1704.01212. arXiv: 1704.01212.
* Weisfeiler and Leman [1968] Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. _NTI, Series_ , 2(9):12–16, 1968.
* Zhang et al. [2021] Muhan Zhang, Pan Li, Yinglong Xia, Kai Wang, and Long Jin. Labeling Trick: A Theory of Using Graph Neural Networks for Multi-Node Representation Learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_ , volume 34, pages 9061–9073. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/4be49c79f233b4f4070794825c323733-Paper.pdf.
* Adamic and Adar [2003] Lada A. Adamic and Eytan Adar. Friends and neighbors on the Web. _Social Networks_ , 25(3):211–230, 2003. ISSN 0378-8733. doi: https://doi.org/10.1016/S0378-8733(03)00009-1. URL https://www.sciencedirect.com/science/article/pii/S0378873303000091.
* Yehudai et al. [2021] Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, and Haggai Maron. From Local Structures to Size Generalization in Graph Neural Networks, July 2021. URL http://arxiv.org/abs/2010.08853. arXiv:2010.08853 [cs, stat].
* Zhu et al. [2021] Qi Zhu, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_ , volume 34, pages 27965–27977. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/eb55e369affa90f77dd7dc9e2cd33b16-Paper.pdf.
* Wu et al. [2022] Qitian Wu, Hengrui Zhang, Junchi Yan, and David Wipf. Handling Distribution Shifts on Graphs: An Invariance Perspective. May 2022. URL https://openreview.net/forum?id=FQOC5u-1egI.
* Zhou et al. [2022] Yangze Zhou, Gitta Kutyniok, and Bruno Ribeiro. OOD Link Prediction Generalization Capabilities of Message-Passing GNNs in Larger Test Graphs, 2022. URL https://arxiv.org/abs/2205.15117.
* Bevilacqua et al. [2021] Beatrice Bevilacqua, Yangze Zhou, and Bruno Ribeiro. Size-Invariant Graph Representations for Graph Classification Extrapolations, 2021. URL https://arxiv.org/abs/2103.05045.
* Buffelli et al. [2022] Davide Buffelli, Pietro Liò, and Fabio Vandin. SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks, 2022. URL https://arxiv.org/abs/2207.07888.
* Maskey et al. [2022] Sohir Maskey, Ron Levie, Yunseok Lee, and Gitta Kutyniok. Generalization Analysis of Message Passing Neural Networks on Large Random Graphs, 2022. URL https://arxiv.org/abs/2202.00645.
* Zhao et al. [2022a] Tong Zhao, Gang Liu, Stephan Günnemann, and Meng Jiang. Graph Data Augmentation for Graph Machine Learning: A Survey. _arXiv e-prints_ , page arXiv:2202.08871, February 2022a.
* Rong et al. [2020] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr.
* Chen et al. [2020] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View. In _AAAI_ , 2020.
* Topping et al. [2021] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. _arXiv:2111.14522 [cs, stat]_ , November 2021. URL http://arxiv.org/abs/2111.14522. arXiv: 2111.14522.
* Alon and Yahav [2021] Uri Alon and Eran Yahav. On the Bottleneck of Graph Neural Networks and its Practical Implications. _arXiv:2006.05205 [cs, stat]_ , March 2021. URL http://arxiv.org/abs/2006.05205. arXiv: 2006.05205.
* Klicpera et al. [2019] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion Improves Graph Learning. _arXiv:1911.05485 [cs, stat]_ , December 2019. URL http://arxiv.org/abs/1911.05485. arXiv: 1911.05485.
* Zhao et al. [2022b] Tong Zhao, Gang Liu, Daheng Wang, Wenhao Yu, and Meng Jiang. Learning from Counterfactual Links for Link Prediction. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 26911–26926. PMLR, July 2022b. URL https://proceedings.mlr.press/v162/zhao22e.html.
* Singh et al. [2021] Abhay Singh, Qian Huang, Sijia Linda Huang, Omkar Bhalerao, Horace He, Ser-Nam Lim, and Austin R. Benson. Edge Proposal Sets for Link Prediction. Technical Report arXiv:2106.15810, arXiv, June 2021. URL http://arxiv.org/abs/2106.15810. arXiv:2106.15810 [cs].
* Zhang et al. [2018a] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An End-to-End Deep Learning Architecture for Graph Classification. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 32(1), April 2018a. ISSN 2374-3468. doi: 10.1609/aaai.v32i1.11782. URL https://ojs.aaai.org/index.php/AAAI/article/view/11782. Number: 1.
* Barabási and Albert [1999] Albert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. _Science_ , 286(5439):509–512, 1999. doi: 10.1126/science.286.5439.509. URL https://www.science.org/doi/abs/10.1126/science.286.5439.509.
* Jaccard [1912] Paul Jaccard. The Distribution of the Flora in the Alpine Zone.1. _New Phytologist_ , 11(2):37–50, 1912. ISSN 1469-8137. doi: 10.1111/j.1469-8137.1912.tb05611.x. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1469-8137.1912.tb05611.x.
* Zhou et al. [2009] Tao Zhou, Linyuan Lü, and Yi-Cheng Zhang. Predicting missing links via local information. _The European Physical Journal B_ , 71(4):623–630, 2009. Publisher: Springer.
* Luong et al. [2015] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based Neural Machine Translation, 2015. URL https://arxiv.org/abs/1508.04025.
* Srinivasan and Ribeiro [2020] Balasubramaniam Srinivasan and Bruno Ribeiro. On the Equivalence between Positional Node Embeddings and Structural Graph Representations. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=SJxzFySKwH.
* Xu et al. [2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How Powerful are Graph Neural Networks? _CoRR_ , abs/1810.00826, 2018. URL http://arxiv.org/abs/1810.00826. arXiv: 1810.00826.
* Hamilton et al. [2018] William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. _arXiv:1706.02216 [cs, stat]_ , September 2018. URL http://arxiv.org/abs/1706.02216. arXiv: 1706.02216.
* McCallum et al. [2000] Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. Automating the construction of internet portals with machine learning. _Information Retrieval_ , 3(2):127–163, 2000. Publisher: Springer.
* Giles et al. [1998] C Lee Giles, Kurt D Bollacker, and Steve Lawrence. CiteSeer: An automatic citation indexing system. In _Proceedings of the third ACM conference on Digital libraries_ , pages 89–98, 1998.
* Namata et al. [2012] Galileo Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In _10th International Workshop on Mining and Learning with Graphs_ , volume 8, page 1, 2012.
* Batagelj and Mrvar [2006] Vladimir Batagelj and Andrej Mrvar. Pajek datasets website, 2006. URL http://vlado.fmf.uni-lj.si/pub/networks/data/.
* Newman [2006] Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. _Physical review E_ , 74(3):036104, 2006. Publisher: APS.
* Ackland and others [2005] Robert Ackland and others. Mapping the US political blogosphere: Are conservative bloggers more prominent? In _BlogTalk Downunder 2005 Conference, Sydney_ , 2005.
* Von Mering et al. [2002] Christian Von Mering, Roland Krause, Berend Snel, Michael Cornell, Stephen G Oliver, Stanley Fields, and Peer Bork. Comparative assessment of large-scale data sets of protein–protein interactions. _Nature_ , 417(6887):399–403, 2002. Publisher: Nature Publishing Group.
* Watts and Strogatz [1998] Duncan J Watts and Steven H Strogatz. Collective dynamics of ‘small-world’ networks. _Nature_ , 393(6684):440–442, 1998. Publisher: Nature Publishing Group.
* Spring et al. [2002] Neil Spring, Ratul Mahajan, and David Wetherall. Measuring ISP topologies with Rocketfuel. _ACM SIGCOMM Computer Communication Review_ , 32(4):133–145, 2002. Publisher: ACM New York, NY, USA.
* Zhang et al. [2018b] Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. Beyond link prediction: Predicting hyperlinks in adjacency space. In _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018b.
* Bradley [1997] Andrew P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. _Pattern Recognition_ , 30(7):1145–1159, July 1997. ISSN 0031-3203. doi: 10.1016/S0031-3203(96)00142-2. URL https://www.sciencedirect.com/science/article/pii/S0031320396001422.
* Shervashidze et al. [2011] Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. _Journal of Machine Learning Research_ , 12(9), 2011.
* Zhang and Chen [2017] Muhan Zhang and Yixin Chen. Weisfeiler-Lehman Neural Machine for Link Prediction. In _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 575–583, Halifax NS Canada, August 2017. ACM. ISBN 978-1-4503-4887-4. doi: 10.1145/3097983.3097996. URL https://dl.acm.org/doi/10.1145/3097983.3097996.
* Milo et al. [2004] Ron Milo, Shalev Itzkovitz, Nadav Kashtan, Reuven Levitt, Shai Shen-Orr, Inbal Ayzenshtat, Michal Sheffer, and Uri Alon. Superfamilies of Evolved and Designed Networks. _Science_ , 303(5663):1538–1542, 2004. doi: 10.1126/science.1089167. URL https://www.science.org/doi/abs/10.1126/science.1089167.
* Hu et al. [2022] Yang Hu, Xiyuan Wang, Zhouchen Lin, Pan Li, and Muhan Zhang. Two-Dimensional Weisfeiler-Lehman Graph Neural Networks for Link Prediction, June 2022. URL http://arxiv.org/abs/2206.09567. arXiv:2206.09567 [cs].
* Guo et al. [2022] Zhichun Guo, William Shiao, Shichang Zhang, Yozen Liu, Nitesh Chawla, Neil Shah, and Tong Zhao. Linkless Link Prediction via Relational Distillation. _arXiv preprint arXiv:2210.05801_ , 2022.
* Hu et al. [2021] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open Graph Benchmark: Datasets for Machine Learning on Graphs. _arXiv:2005.00687 [cs, stat]_ , February 2021. URL http://arxiv.org/abs/2005.00687. arXiv: 2005.00687.
* Wang and Zhang [2022] Xiyuan Wang and Muhan Zhang. How Powerful are Spectral Graph Neural Networks. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 23341–23362. PMLR, July 2022. URL https://proceedings.mlr.press/v162/wang22am.html.
## Appendix A Related work of link predictions
Early studies on link prediction mainly focused on heuristic methods, which
require expertise about the underlying traits of the network or hand-crafted
features, including Common Neighbors [1], the Adamic–Adar index [20], and
Preferential Attachment [37]. WLNM [56] encodes the induced subgraph of the
target link as an adjacency matrix to represent the link. With the success of
GNNs [9], GNN-based link prediction methods have become dominant across
different areas. The Graph Auto-Encoder (GAE) and Variational Graph
Auto-Encoder (VGAE) [10] perform link prediction by reconstructing the graph
structure. SEAL [11] and DE [13] label the nodes according to their distance
to the focal node pair. To better exploit the structural motifs [57] in
distinct graphs, a walk-based pooling method (WalkPool) [12] was designed to
extract a representation of the local neighborhood. PLNLP [14] sheds light on
pairwise learning for ranking the node pairs of interest. Based on the
two-dimensional Weisfeiler-Lehman test, Hu et al. [58] propose a link
prediction method that directly obtains node-pair representations. To
accelerate inference, LLP [59] performs link prediction by distilling
knowledge from GNNs into MLPs.
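For concreteness, the classical heuristics above can be computed directly from node neighborhoods. The sketch below (plain Python, with the adjacency list stored as a dict of neighbor sets; function names are our own) illustrates Common Neighbors, the Adamic–Adar index, and Preferential Attachment for a candidate node pair:

```python
import math

def common_neighbors(adj, i, j):
    """Number of neighbors shared by nodes i and j."""
    return len(adj[i] & adj[j])

def adamic_adar(adj, i, j):
    """Shared neighbors weighted inversely by the log of their degree."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[i] & adj[j])

def preferential_attachment(adj, i, j):
    """Product of the two node degrees."""
    return len(adj[i]) * len(adj[j])

# Toy undirected graph with edges 1-2, 1-3, 2-3, 3-4.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(common_neighbors(adj, 1, 4))         # -> 1 (node 3 is the only shared neighbor)
print(preferential_attachment(adj, 1, 4))  # -> 2 (deg(1) * deg(4) = 2 * 1)
```

Note that a shared neighbor of $i$ and $j$ always has degree at least 2, so the logarithm in the Adamic–Adar sum is never zero.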
## Appendix B Proof of Theorem 1
We restate Theorem 1: a $\mathtt{GNN}$ cannot learn the subgraph feature
${\mathbf{h}}$ to be Edge Invariant.
###### Proof.
Recall that the computation of the subgraph feature ${\mathbf{h}}$ involves the following steps:
1. Subgraph Extraction: extract the subgraph ${\mathcal{G}}_{i,j}^{r}$ around the focal node pair $\\{i,j\\}$;
2. Node Representation Learning: ${\bm{Z}}=\mathtt{GNN}({\mathcal{G}}_{i,j}^{r})$, where ${\bm{Z}}\in\mathbb{R}^{|V_{i,j}^{r}|\times F_{\textnormal{hidden}}}$ is the node embedding matrix learned by the GNN encoder;
3. Pooling: ${\mathbf{h}}=\mathtt{Pooling}({\bm{Z}};{\mathcal{G}}_{i,j}^{r})$, where ${\mathbf{h}}\in\mathbb{R}^{F_{\textnormal{pooled}}}$ is the latent feature of the subgraph ${\mathcal{G}}_{i,j}^{r}$.
Here, $\mathtt{GNN}$ denotes a message passing neural network [17]. Given a subgraph
${\mathcal{G}}=(V,E,{\bm{x}}^{V},{\bm{x}}^{E})$, a $\mathtt{GNN}$ with $T$
layers applies the following rule to update the representation of each node $i\in V$:
$\displaystyle
h_{i}^{(t+1)}=U_{t}(h_{i}^{(t)},\sum_{w\in\mathcal{N}(i)}M_{t}(h_{i}^{(t)},h_{w}^{(t)},{\bm{x}}^{E}_{i,w})),$
(3)
where $\mathcal{N}(i)$ is the neighborhood of node $i$ in ${\mathcal{G}}$,
$M_{t}$ is the message passing function at layer $t$ and $U_{t}$ is the node
update function at layer $t$. The hidden states at the first layer are set as
$h_{i}^{(0)}={\bm{x}}^{V}_{i}$. The hidden states at the last layer are the
outputs ${\bm{Z}}_{i}=h_{i}^{(T)}$.
Given any subgraph
${\mathcal{G}}_{i,j}^{r}=(V_{i,j}^{r},E_{i,j}^{r},{\bm{x}}^{V}_{V_{i,j}^{r}},{\bm{x}}^{E}_{E_{i,j}^{r}})$
with the edge present at the focal node pair, $(i,j)\in E_{i,j}^{r}$, we
construct an isomorphic subgraph
${\mathcal{G}}_{\bar{i},\bar{j}}^{r}=(V_{\bar{i},\bar{j}}^{r},E_{\bar{i},\bar{j}}^{r},{\bm{x}}^{V}_{V_{\bar{i},\bar{j}}^{r}},{\bm{x}}^{E}_{E_{\bar{i},\bar{j}}^{r}})$
and then remove the edge $(\bar{i},\bar{j})$ from its edge set
$E_{\bar{i},\bar{j}}^{r}$. The resulting ${\mathcal{G}}_{\bar{i},\bar{j}}^{r}$
can be seen as the counterpart of ${\mathcal{G}}_{i,j}^{r}$ in the testing set.
Thus, for the first iteration of node updates $t=1$:
$\displaystyle
h_{i}^{(1)}=U_{t}(h_{i}^{(0)},\sum_{w\in\mathcal{N}(i)}M_{t}(h_{i}^{(0)},h_{w}^{(0)},{\bm{x}}^{E}_{i,w})),$
(4) $\displaystyle
h_{\bar{i}}^{(1)}=U_{t}(h_{\bar{i}}^{(0)},\sum_{w\in\mathcal{N}({\bar{i}})}M_{t}(h_{\bar{i}}^{(0)},h_{w}^{(0)},{\bm{x}}^{E}_{\bar{i},w})),$
(5)
Note that, identifying nodes through the isomorphism, $\mathcal{N}({\bar{i}})\cup\\{\bar{j}\\}=\mathcal{N}(i)$. We have:
$\displaystyle h_{i}^{(1)}$
$\displaystyle=U_{t}(h_{i}^{(0)},\sum_{w\in\mathcal{N}(i)\backslash\\{j\\}}M_{t}(h_{i}^{(0)},h_{w}^{(0)},{\bm{x}}^{E}_{i,w})+M_{t}(h_{i}^{(0)},h_{j}^{(0)},{\bm{x}}^{E}_{i,j}))$
(6)
$\displaystyle=U_{t}(h_{\bar{i}}^{(0)},\sum_{w\in\mathcal{N}({\bar{i}})}M_{t}(h_{\bar{i}}^{(0)},h_{w}^{(0)},{\bm{x}}^{E}_{\bar{i},w})+M_{t}(h_{\bar{i}}^{(0)},h_{\bar{j}}^{(0)},{\bm{x}}^{E}_{\bar{i},{\bar{j}}})),$
(7)
As $U_{t}$ is injective,
$p(h_{i}^{(1)},{\textnormal{y}}=1|{\textnormal{e}}=1)\neq
p(h_{\bar{i}}^{(1)},{\textnormal{y}}=1)=p(h_{i}^{(1)},{\textnormal{y}}=1|{\textnormal{e}}=0)$.
Similarly, we can conclude that
$p(h_{i}^{(T)},{\textnormal{y}}=1|{\textnormal{e}}=1)\neq
p(h_{i}^{(T)},{\textnormal{y}}=1|{\textnormal{e}}=0)$.
As we use the last iteration of node updates $h_{i}^{(T)}$ as the final node
representation ${\bm{Z}}$, we have
$p({\bm{Z}},{\textnormal{y}}|{\textnormal{e}}=1)\neq
p({\bm{Z}},{\textnormal{y}}|{\textnormal{e}}=0)$, which leads to
$p({\textnormal{h}},{\textnormal{y}}|{\textnormal{e}}=1)\neq
p({\textnormal{h}},{\textnormal{y}}|{\textnormal{e}}=0)$ and concludes the
proof. ∎
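To make the intuition concrete, the following toy computation (our own simplified instance of Eq. (3), with identity message and update functions and scalar features) shows that a single sum-aggregation round already assigns the focal node a different state depending on whether the focal edge is present:

```python
def one_round_sum_mpnn(neighbors, features):
    """One message-passing round: h_i' = h_i + sum of neighbor features.
    A toy instance of Eq. (3) with identity message/update functions."""
    return {i: features[i] + sum(features[w] for w in nbrs)
            for i, nbrs in neighbors.items()}

# Subgraph around the focal pair (0, 1), plus a common neighbor 2.
feats = {0: 1.0, 1: 1.0, 2: 1.0}
with_edge    = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # focal edge (0, 1) present
without_edge = {0: [2],    1: [2],    2: [0, 1]}  # focal edge removed

h_obs = one_round_sum_mpnn(with_edge, feats)
h_cf  = one_round_sum_mpnn(without_edge, feats)
print(h_obs[0], h_cf[0])  # focal node's state differs: 3.0 vs 2.0
```

Since the extra message along the focal edge shifts the focal node's embedding, the distribution of ${\bm{Z}}$ (and hence of ${\mathbf{h}}$) necessarily depends on ${\textnormal{e}}$.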
## Appendix C Proof of Theorem 2
We restate Theorem 2: given
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}},{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})$,
there is no dataset shift in link prediction if the subgraph embedding is
Edge Invariant. That is,
$p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})=p({\mathbf{h}},{\textnormal{y}})\Longrightarrow
p({\mathbf{h}},{\textnormal{y}}|{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}})$.
###### Proof.
$\displaystyle p({\mathbf{h}}={\bm{h}},{\textnormal{y}}=y|{\textnormal{c}}=c)$
(8)
$\displaystyle=\sum_{e}p({\mathbf{h}}={\bm{h}},{\textnormal{y}}=y|{\textnormal{c}}=c,{\textnormal{e}}=e)\,p({\textnormal{e}}=e|{\textnormal{c}}=c)$
(9)
$\displaystyle=\sum_{e}p({\mathbf{h}}={\bm{h}},{\textnormal{y}}=y)\,p({\textnormal{e}}=e|{\textnormal{c}}=c)$
(10) $\displaystyle=p({\mathbf{h}}={\bm{h}},{\textnormal{y}}=y),$ (11)
where (10) uses the assumption $p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}},{\textnormal{c}})=p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})$ followed by Edge Invariance $p({\mathbf{h}},{\textnormal{y}}|{\textnormal{e}})=p({\mathbf{h}},{\textnormal{y}})$, and (11) follows because $\sum_{e}p({\textnormal{e}}=e|{\textnormal{c}}=c)=1$.
∎
## Appendix D Details about the baseline methods
To verify the effectiveness of FakeEdge, we introduce minimal modifications to
the baseline models to make them compatible with the FakeEdge techniques. The
baseline models in our experiments come mainly from two streams of link
prediction models. One stream consists of GAE-like models, including GCN [9],
SAGE [43], GIN [42] and PLNLP [14]. The other includes SEAL [11] and WalkPool
[12]. GCN, SAGE and PLNLP learn node representations and apply a score
function to the focal node pair to represent the link. Since GAE-like models
are not originally implemented for subgraph link prediction, a subgraph
extraction step is required for them as preprocessing. We follow the code from
the labeling trick [19], which implements the GAE models for the subgraph link
prediction task. In particular, GIN concatenates the node embeddings from
different layers to learn the node representation and applies a subgraph-level
readout to aggregate them into the subgraph representation. As suggested by
[13, 19], we always inject distance information via Double-Radius Node
Labeling (DRNL) [11] to enhance the performance of GAE-like models. Regarding
the selection of hyperparameters, we use the same configuration as [19] on the
datasets Cora, Citeseer and Pubmed. As they do not report experiments on the
other eight attribute-free networks, we set the subgraph hop number to 2 and
leave the remaining hyperparameters at their defaults. For PLNLP, we also add
a subgraph extraction step without modifying the core pairwise learning
strategy. We find that the performance of PLNLP under the subgraph setting is
very unstable across different train/test splits; in particular, the standard
deviation of its performance exceeds $10\%$ in each experiment. Therefore, we
also apply DRNL to stabilize the model.
SEAL and WalkPool already apply one of the FakeEdge techniques in their
original implementations. SEAL uses an _Edge Minus_ strategy to remove all
edges at the focal node pair as a preprocessing step, while WalkPool applies
_Edge Plus_ to always inject edges into the subgraph for node representation
learning. Additionally, WalkPool's walk-based pooling method operates on both
the _Edge Plus_ and _Edge Minus_ graphs; this design is kept in our
experiments. Thus, our FakeEdge technique only takes effect on the node
representation step for WalkPool. From the results in Section 5.2, we can
conclude that dataset shift in the node representation step alone
significantly impacts the model performance. We also use the same
hyperparameter settings as originally reported in their papers.
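For reference, DRNL labels each subgraph node by a hash of its shortest-path distances $(d_x, d_y)$ to the two focal nodes. Below is a sketch of the labeling function following the formula given in the SEAL paper (distance computation omitted; in SEAL, $d_x$ is computed with the other focal node temporarily removed):

```python
def drnl_label(dx, dy):
    """Double-Radius Node Labeling hash from distances (dx, dy) to the
    two focal nodes. The focal nodes themselves get label 1; nodes
    unreachable from either focal node get label 0."""
    if dx == 0 or dy == 0:        # a focal node itself
        return 1
    if dx is None or dy is None:  # unreachable from one focal node
        return 0
    d = dx + dy
    return 1 + min(dx, dy) + (d // 2) * (d // 2 + d % 2 - 1)

# Nodes closer to the focal pair receive smaller labels.
print([drnl_label(1, 1), drnl_label(1, 2), drnl_label(1, 3), drnl_label(2, 2)])
# -> [2, 3, 4, 5]
```

The resulting integer labels are typically one-hot encoded and concatenated to the node attributes before message passing.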
## Appendix E Benchmark dataset descriptions
The graph datasets with node attributes are three citation networks: Cora
[44], Citeseer [45] and Pubmed [46]. Nodes represent publications and edges
represent citation links. The graph datasets without node attributes are: (1)
USAir [47]: a network of US airlines; (2) NS [48]: a collaboration network of
network science researchers; (3) PB [49]: a network of hyperlinks between US
political blog pages; (4) Yeast [50]: a protein-protein interaction network in
yeast; (5) C.ele [51]: the neural network of Caenorhabditis elegans; (6) Power
[51]: the power grid of the western United States; (7) Router [52]: a
router-level topology of the Internet; (8) E.coli [53]: the reaction network
of metabolites in Escherichia coli. Detailed statistics of the datasets can be
found in Table 4.
Table 4: Statistics of link prediction datasets.
Dataset | #Nodes | #Edges | Avg. node deg. | Density | Attr. Dimension
---|---|---|---|---|---
Cora | 2708 | 10556 | 3.90 | 0.2880% | 1433
Citeseer | 3327 | 9104 | 2.74 | 0.1645% | 3703
Pubmed | 19717 | 88648 | 4.50 | 0.0456% | 500
USAir | 332 | 4252 | 12.81 | 7.7385% | -
NS | 1589 | 5484 | 3.45 | 0.4347% | -
PB | 1222 | 33428 | 27.36 | 4.4808% | -
Yeast | 2375 | 23386 | 9.85 | 0.8295% | -
C.ele | 297 | 4296 | 14.46 | 9.7734% | -
Power | 4941 | 13188 | 2.67 | 0.1081% | -
Router | 5022 | 12516 | 2.49 | 0.0993% | -
E.coli | 1805 | 29320 | 16.24 | 1.8009% | -
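The degree and density columns above can be reproduced from the node and edge counts alone. A small sketch (our own), using the table's apparent conventions: average degree $=$ #Edges/#Nodes and density $=$ #Edges$/\binom{N}{2}\times 100\%$, where #Edges appears to count each undirected edge in both directions:

```python
def graph_stats(num_nodes, num_edges):
    """Average node degree and density (%) under the table's conventions."""
    avg_deg = num_edges / num_nodes
    density_pct = num_edges / (num_nodes * (num_nodes - 1) / 2) * 100
    return round(avg_deg, 2), round(density_pct, 4)

print(graph_stats(2708, 10556))  # Cora row:  (3.9, 0.288)
print(graph_stats(332, 4252))    # USAir row: (12.81, 7.7385)
```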
## Appendix F Results measured by Hits@20 and statistical significance of
results
Table 5: Comparison with and without FakeEdge (Hits@20).

Model | FakeEdge | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli
---|---|---|---|---|---|---|---|---|---|---|---|---
GCN | _Original_ | 65.35±3.64 | 61.71±2.60 | 48.97±1.92 | 87.69±3.92 | 92.77±1.72 | 41.60±2.52 | 85.26±1.90 | 65.33±7.55 | 39.64±5.47 | 39.41±2.38 | 82.21±2.02
GCN | _Edge Plus_ | 68.31±2.89 | 65.80±3.28 | 55.70±3.07 | 89.34±4.09 | 93.28±1.69 | 43.98±6.25 | 87.19±2.13 | 66.68±5.25 | 46.92±3.78 | 72.03±2.85 | 86.03±1.40
GCN | _Edge Minus_ | 67.97±2.62 | 66.13±3.30 | 54.29±2.66 | 90.57±3.30 | 93.61±1.68 | 43.92±5.82 | 86.66±2.18 | 66.07±6.14 | 47.97±2.58 | 72.34±2.58 | 85.68±1.84
GCN | _Edge Mean_ | 67.76±3.02 | 66.11±2.48 | 54.55±2.88 | 89.48±3.52 | 92.77±1.99 | 44.64±6.93 | 86.64±2.03 | 65.28±6.33 | 47.54±2.95 | 72.26±2.68 | 85.62±1.71
GCN | _Edge Att_ | 68.43±3.72 | 67.65±4.11 | 55.55±2.70 | 90.80±4.50 | 92.88±2.27 | 44.80±6.60 | 87.83±0.92 | 65.93±11.06 | 48.50±2.20 | 70.96±2.85 | 86.56±1.69
SAGE | _Original_ | 61.67±3.68 | 61.10±1.54 | 45.29±3.99 | 89.20±2.80 | 91.93±2.74 | 39.51±4.44 | 84.11±1.47 | 58.55±7.17 | 42.97±5.34 | 30.02±2.75 | 75.30±2.77
SAGE | _Edge Plus_ | 68.58±2.77 | 65.47±3.58 | 55.23±2.81 | 92.59±3.71 | 93.83±2.54 | 49.10±5.38 | 89.36±0.72 | 69.72±6.02 | 49.70±2.57 | 74.90±3.73 | 88.16±1.29
SAGE | _Edge Minus_ | 66.26±2.54 | 62.97±3.50 | 53.43±3.52 | 91.32±3.42 | 93.54±1.96 | 48.72±4.90 | 88.27±1.00 | 69.81±5.34 | 47.63±1.87 | 56.67±7.20 | 87.89±1.59
SAGE | _Edge Mean_ | 66.74±2.71 | 65.96±2.62 | 55.21±2.84 | 91.51±3.49 | 93.25±2.88 | 48.89±6.14 | 89.30±0.72 | 69.21±7.17 | 47.54±3.52 | 73.89±3.50 | 88.05±1.62
SAGE | _Edge Att_ | 68.80±2.65 | 66.62±3.67 | 55.18±2.99 | 92.92±3.11 | 94.09±1.60 | 48.53±5.15 | 89.10±1.17 | 69.30±7.53 | 47.06±2.21 | 73.60±4.68 | 87.63±1.66
GIN | _Original_ | 55.71±4.38 | 51.71±4.31 | 40.14±3.98 | 86.08±3.14 | 90.51±3.45 | 38.79±5.32 | 79.57±1.74 | 54.95±5.91 | 41.56±1.47 | 55.47±4.37 | 77.37±2.84
GIN | _Edge Plus_ | 64.42±2.67 | 63.56±2.92 | 49.75±4.50 | 88.68±4.10 | 94.85±1.90 | 46.17±6.12 | 87.58±2.22 | 64.49±6.52 | 48.59±3.33 | 70.67±3.58 | 84.13±2.12
GIN | _Edge Minus_ | 63.17±2.96 | 63.65±4.63 | 50.37±4.01 | 89.81±1.80 | 94.53±2.09 | 45.93±6.09 | 88.37±2.00 | 67.06±11.03 | 47.56±1.88 | 71.10±1.90 | 83.23±2.62
GIN | _Edge Mean_ | 61.46±4.64 | 63.74±4.20 | 46.97±6.49 | 89.86±2.62 | 93.98±2.88 | 43.48±7.74 | 88.16±2.11 | 66.73±6.79 | 47.66±2.91 | 71.09±2.68 | 82.48±1.99
GIN | _Edge Att_ | 63.26±3.33 | 60.64±4.29 | 49.71±4.40 | 88.87±4.71 | 94.49±1.51 | 44.94±5.37 | 87.92±1.45 | 65.93±8.55 | 48.19±2.70 | 70.03±3.05 | 84.38±2.54
PLNLP | _Original_ | 58.77±2.59 | 57.21±3.91 | 40.03±3.46 | 88.87±2.75 | 93.76±1.65 | 38.90±4.38 | 81.17±3.54 | 66.36±5.65 | 43.52±6.47 | 34.61±11.29 | 65.68±1.56
PLNLP | _Edge Plus_ | 66.79±2.77 | 67.69±4.13 | 44.44±14.29 | 95.19±1.60 | 95.84±1.09 | 45.18±4.87 | 88.04±2.42 | 71.21±8.04 | 52.37±3.95 | 75.01±1.83 | 84.73±1.70
PLNLP | _Edge Minus_ | 67.40±3.53 | 62.84±2.88 | 47.80±11.11 | 94.10±2.42 | 95.22±1.60 | 45.40±6.29 | 87.94±1.64 | 69.91±6.80 | 52.19±4.23 | 68.24±4.01 | 83.59±1.56
PLNLP | _Edge Mean_ | 68.61±3.40 | 64.81±3.57 | 51.92±13.30 | 95.24±2.09 | 95.95±0.78 | 45.37±5.07 | 88.08±2.30 | 71.26±8.05 | 51.97±3.41 | 74.42±2.33 | 84.78±1.82
PLNLP | _Edge Att_ | 67.82±3.58 | 64.37±3.73 | 48.47±12.01 | 95.38±2.02 | 95.62±0.81 | 45.28±5.11 | 88.57±1.80 | 70.65±8.11 | 51.79±4.07 | 74.99±1.92 | 85.10±1.88
SEAL | _Original_ | 60.95±8.00 | 61.56±2.12 | 48.80±3.33 | 91.27±2.53 | 91.72±2.01 | 43.44±6.82 | 85.33±1.76 | 64.21±5.86 | 39.30±3.79 | 59.47±6.66 | 84.15±2.16
SEAL | _Edge Plus_ | 60.51±7.70 | 65.12±2.18 | 50.90±3.96 | 90.85±4.12 | 93.61±1.87 | 46.77±4.80 | 86.66±1.59 | 65.47±7.68 | 45.90±2.85 | 70.06±3.57 | 85.76±2.04
SEAL | _Edge Minus_ | 60.74±6.60 | 65.14±2.93 | 51.23±3.82 | 90.66±3.49 | 92.19±2.03 | 47.21±4.73 | 86.49±2.08 | 63.64±6.93 | 46.42±3.42 | 70.43±4.40 | 85.50±2.06
SEAL | _Edge Mean_ | 62.94±5.78 | 64.99±4.36 | 51.83±3.66 | 91.84±2.93 | 92.92±2.12 | 46.02±4.22 | 86.25±2.17 | 65.93±6.87 | 46.57±3.22 | 70.08±3.85 | 85.85±1.81
SEAL | _Edge Att_ | 62.03±4.95 | 63.52±4.39 | 48.42±5.69 | 91.42±3.31 | 94.64±1.49 | 44.73±5.29 | 86.83±1.63 | 65.93±4.74 | 47.91±3.45 | 67.46±3.49 | 86.02±1.58
WalkPool | _Original_ | 69.98±3.37 | 64.22±2.84 | 57.30±2.56 | 95.09±2.78 | 96.02±1.64 | 47.74±5.81 | 88.24±1.33 | 78.55±5.83 | 43.58±4.40 | 56.21±13.92 | 83.41±1.72
WalkPool | _Edge Plus_ | 69.13±2.31 | 64.51±2.25 | 59.23±3.09 | 95.00±3.09 | 96.06±1.65 | 46.18±5.40 | 89.79±0.70 | 78.36±5.30 | 56.27±4.17 | 77.65±2.83 | 86.44±1.52
WalkPool | _Edge Minus_ | 69.34±2.45 | 64.26±1.93 | 59.44±3.10 | 95.14±2.93 | 95.99±1.67 | 46.79±4.88 | 89.57±0.85 | 77.90±4.49 | 55.72±3.63 | 77.62±2.64 | 87.24±0.77
WalkPool | _Edge Mean_ | 70.27±2.96 | 62.84±4.79 | 59.85±3.84 | 95.24±2.45 | 96.17±1.63 | 46.27±5.00 | 89.58±0.91 | 77.94±4.55 | 56.18±3.74 | 76.88±2.76 | 86.89±0.84
WalkPool | _Edge Att_ | 69.60±4.11 | 64.35±3.64 | 59.63±3.28 | 95.61±2.53 | 96.06±1.62 | 46.77±5.36 | 89.84±0.71 | 77.94±4.89 | 56.46±3.55 | 76.90±2.82 | 87.02±1.64
Table 6: $p$-values from comparing AUC scores of _Original_ and _Edge Att_. Significant differences are highlighted in bold.

Models | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli
---|---|---|---|---|---|---|---|---|---|---|---
GCN | $\bm{3.50\cdot 10^{-09}}$ | $\bm{6.92\cdot 10^{-12}}$ | $\bm{1.52\cdot 10^{-09}}$ | $\bm{1.10\cdot 10^{-05}}$ | $\bm{9.89\cdot 10^{-04}}$ | $\bm{1.21\cdot 10^{-09}}$ | $\bm{4.95\cdot 10^{-13}}$ | $2.76\cdot 10^{-01}$ | $\bm{1.55\cdot 10^{-05}}$ | $\bm{2.62\cdot 10^{-13}}$ | $\bm{2.44\cdot 10^{-14}}$
SAGE | $\bm{1.32\cdot 10^{-08}}$ | $\bm{2.04\cdot 10^{-06}}$ | $\bm{3.48\cdot 10^{-14}}$ | $\bm{2.78\cdot 10^{-02}}$ | $\bm{2.33\cdot 10^{-02}}$ | $\bm{4.13\cdot 10^{-06}}$ | $\bm{4.87\cdot 10^{-08}}$ | $\bm{1.23\cdot 10^{-03}}$ | $\bm{6.12\cdot 10^{-10}}$ | $\bm{4.40\cdot 10^{-12}}$ | $\bm{3.54\cdot 10^{-13}}$
GIN | $\bm{4.86\cdot 10^{-10}}$ | $\bm{6.09\cdot 10^{-11}}$ | $\bm{1.46\cdot 10^{-12}}$ | $\bm{1.27\cdot 10^{-03}}$ | $\bm{1.29\cdot 10^{-05}}$ | $\bm{2.47\cdot 10^{-10}}$ | $\bm{5.34\cdot 10^{-11}}$ | $\bm{3.84\cdot 10^{-04}}$ | $\bm{5.10\cdot 10^{-09}}$ | $\bm{3.11\cdot 10^{-16}}$ | $\bm{3.04\cdot 10^{-12}}$
PLNLP | $\bm{1.47\cdot 10^{-10}}$ | $\bm{5.30\cdot 10^{-07}}$ | $\bm{1.22\cdot 10^{-06}}$ | $\bm{1.66\cdot 10^{-07}}$ | $\bm{1.70\cdot 10^{-02}}$ | $\bm{3.40\cdot 10^{-08}}$ | $\bm{7.69\cdot 10^{-06}}$ | $\bm{2.46\cdot 10^{-03}}$ | $\bm{7.84\cdot 10^{-06}}$ | $\bm{2.68\cdot 10^{-13}}$ | $\bm{5.27\cdot 10^{-11}}$
SEAL | $2.59\cdot 10^{-01}$ | $\bm{1.72\cdot 10^{-02}}$ | $\bm{6.45\cdot 10^{-05}}$ | $4.82\cdot 10^{-01}$ | $\bm{1.15\cdot 10^{-02}}$ | $5.20\cdot 10^{-01}$ | $\bm{5.91\cdot 10^{-04}}$ | $4.12\cdot 10^{-01}$ | $\bm{3.78\cdot 10^{-06}}$ | $\bm{3.91\cdot 10^{-06}}$ | $\bm{5.67\cdot 10^{-04}}$
WalkPool | $9.52\cdot 10^{-01}$ | $4.96\cdot 10^{-01}$ | $\bm{2.83\cdot 10^{-07}}$ | $4.77\cdot 10^{-01}$ | $8.91\cdot 10^{-01}$ | $\bm{1.84\cdot 10^{-05}}$ | $\bm{1.07\cdot 10^{-04}}$ | $8.74\cdot 10^{-01}$ | $\bm{4.15\cdot 10^{-07}}$ | $\bm{5.89\cdot 10^{-04}}$ | $\bm{1.83\cdot 10^{-10}}$
We adopt another widely used metric for link prediction [60], Hits@20, to
evaluate the model performance with and without FakeEdge. The results are
shown in Table 5. FakeEdge boosts the predictive power of all models across
the different datasets.

Note that the AUC scores on several datasets are almost saturated in Section
5. To further verify the statistical significance of the improvement, a two-
sided $t$-test is conducted with the null hypothesis that the augmented _Edge
Att_ and the _Original_ representation learning achieve identical average
scores. The $p$-values for the different methods can be found in Table 6. A
$p$-value smaller than $0.05$ is considered statistically significant.
GAE-like methods obtain significant improvements on almost all of the
datasets, the only exception being GCN on C.ele. SEAL shows significant
improvement with _Edge Att_ on 7 out of 11 datasets. For WalkPool, the
improvement is significant on more than half of the datasets.
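For reference, Hits@K counts the fraction of positive test edges that are scored above the $K$-th highest-scored negative edge. A minimal sketch (our own) of the metric:

```python
def hits_at_k(pos_scores, neg_scores, k=20):
    """Fraction of positive edges whose score exceeds the k-th highest
    negative score (positives tied with the threshold do not count)."""
    if len(neg_scores) < k:
        return 1.0  # fewer than k negatives: every positive ranks in the top k
    threshold = sorted(neg_scores, reverse=True)[k - 1]
    return sum(s > threshold for s in pos_scores) / len(pos_scores)

# Two of the three positives outrank the 2nd-best negative (0.5).
print(hits_at_k([0.9, 0.8, 0.1], [0.7, 0.5, 0.3], k=2))  # -> 0.666...
```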
## Appendix G FakeEdge with extremely sparse graphs
In real applications, the testing set often outnumbers the training set. For a
link prediction task, this makes the observed graph sparser because of the
large number of unseen links. We are interested in how FakeEdge handles
situations where the training ratio is low and many “true” links are missing
from the training graph.

We reset the train/valid/test split to 20%/30%/50% and reevaluate the model
performance. The results can be found in Table 7. As shown in the table,
FakeEdge still consistently improves the model performance under such an
extreme setting. This shows that dataset shift in link prediction is a
pervasive issue and that FakeEdge can alleviate it in various settings.

However, we still observe a significant performance drop compared to the
85/5/10 evaluation setting. This degradation may be caused by a more
fundamental dataset shift problem in link prediction: the nodes in a graph are
not sampled independently. Existing link prediction models often assume that
the likelihood of forming a link depends on its local neighborhood. However,
an intentionally sparsified graph omits many links held out for testing,
leaving corrupted local neighborhoods that do not reflect the true surrounding
environment. FakeEdge cannot alleviate this kind of dataset shift; we leave it
as future work.
Table 7: Model performance with only 20% training data (AUC). The best results
are highlighted in bold.
| Model | FakeEdge | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GCN | _Original_ | 55.35±1.07 | 56.02±0.94 | 57.09±1.46 | 80.44±1.44 | 61.14±1.29 | 88.54±0.34 | 81.09±0.53 | 66.67±2.33 | 49.18±0.53 | 62.98±8.98 | 81.79±0.76 |
| | _Edge Plus_ | 64.19±0.60 | 62.20±1.07 | 85.65±0.26 | 88.34±0.66 | 62.97±1.56 | 92.40±0.26 | 85.56±0.26 | 71.21±1.26 | 52.77±1.78 | 77.86±0.98 | 91.68±0.27 |
| | _Edge Minus_ | 62.89±0.82 | 61.47±0.87 | 85.48±0.28 | 86.46±1.53 | 62.17±1.28 | 92.63±0.26 | 85.44±0.64 | 70.15±1.14 | 52.66±1.26 | 77.68±0.81 | 91.86±0.22 |
| | _Edge Mean_ | 63.36±0.75 | 61.76±1.00 | 85.64±0.29 | 87.47±1.18 | 62.91±1.29 | 92.52±0.25 | 84.99±0.47 | 70.84±1.36 | 51.96±1.30 | 77.96±0.88 | 91.81±0.20 |
| | _Edge Att_ | 63.30±1.17 | 61.89±0.90 | 85.55±0.29 | 88.19±1.26 | 61.84±1.77 | 92.57±0.24 | 85.51±0.40 | 70.28±1.42 | 52.38±1.53 | 77.44±0.64 | 91.60±0.33 |
| SAGE | _Original_ | 51.47±1.68 | 54.02±1.63 | 57.00±7.69 | 84.38±1.14 | 62.54±1.48 | 89.48±0.46 | 77.99±1.05 | 66.06±2.01 | 51.73±0.80 | 61.14±8.58 | 85.01±0.67 |
| | _Edge Plus_ | 65.01±0.61 | 63.10±0.63 | 86.90±0.25 | 89.17±0.80 | 65.19±1.75 | 92.71±0.27 | 86.74±0.38 | 72.10±1.81 | 53.99±0.70 | 78.72±0.65 | 91.92±0.15 |
| | _Edge Minus_ | 62.71±0.79 | 58.70±1.00 | 85.51±0.21 | 87.24±0.83 | 63.64±1.52 | 91.95±0.30 | 85.61±0.49 | 70.45±1.98 | 52.28±0.68 | 70.66±0.95 | 89.56±0.32 |
| | _Edge Mean_ | 64.44±0.82 | 62.63±0.63 | 86.54±0.13 | 88.95±0.72 | 64.33±1.48 | 92.67±0.26 | 86.60±0.30 | 71.78±1.80 | 52.38±0.70 | 78.54±0.70 | 91.86±0.19 |
| | _Edge Att_ | 65.31±0.63 | 62.81±0.94 | 86.62±0.24 | 88.73±0.62 | 63.90±1.77 | 92.70±0.22 | 86.64±0.34 | 71.40±1.54 | 53.07±1.32 | 79.08±0.61 | 92.14±0.20 |
| GIN | _Original_ | 61.93±0.93 | 61.27±0.95 | 74.32±1.30 | 87.39±0.71 | 62.70±0.81 | 89.52±0.29 | 81.70±0.95 | 68.25±1.65 | 52.02±1.20 | 76.50±0.95 | 90.07±0.51 |
| | _Edge Plus_ | 63.30±0.84 | 62.64±1.17 | 86.53±1.08 | 87.63±0.78 | 65.28±1.44 | 91.69±0.35 | 86.30±0.51 | 69.82±1.47 | 53.55±0.92 | 78.08±0.98 | 91.44±0.35 |
| | _Edge Minus_ | 62.66±1.08 | 62.11±1.17 | 85.12±0.29 | 87.31±0.85 | 65.01±1.91 | 91.75±0.33 | 86.25±0.58 | 69.72±1.27 | 53.59±0.92 | 78.09±0.93 | 91.39±0.22 |
| | _Edge Mean_ | 62.82±1.33 | 62.40±1.10 | 85.67±0.66 | 87.46±0.75 | 65.02±1.65 | 91.72±0.31 | 86.29±0.53 | 69.94±1.53 | 53.54±0.84 | 78.09±1.08 | 91.36±0.26 |
| | _Edge Att_ | 63.25±0.70 | 62.07±0.65 | 85.37±0.64 | 87.75±0.92 | 65.54±1.23 | 91.62±0.11 | 86.31±0.44 | 71.47±1.39 | 54.05±0.85 | 78.79±0.66 | 91.49±0.17 |
| PLNLP | _Original_ | 63.08±1.01 | 65.23±1.41 | 73.97±1.58 | 84.14±1.94 | 63.06±1.30 | 88.34±1.22 | 81.15±1.52 | 68.33±1.70 | 52.27±0.83 | 68.90±1.08 | 84.80±1.72 |
| | _Edge Plus_ | 70.81±0.70 | 72.30±1.59 | 94.57±0.15 | 89.47±0.87 | 65.61±0.80 | 92.66±0.22 | 86.80±0.37 | 73.35±1.44 | 54.19±0.74 | 79.07±0.60 | 92.00±0.29 |
| | _Edge Minus_ | 67.21±1.56 | 68.79±1.12 | 93.77±0.13 | 87.49±1.29 | 64.13±1.79 | 92.22±0.24 | 85.78±0.46 | 71.22±1.60 | 52.92±0.70 | 75.97±0.60 | 91.08±0.23 |
| | _Edge Mean_ | 72.01±0.96 | 72.79±1.72 | 94.23±0.22 | 89.58±0.89 | 65.33±0.68 | 92.63±0.22 | 86.80±0.43 | 73.07±1.39 | 54.13±0.50 | 80.38±1.10 | 91.93±0.29 |
| | _Edge Att_ | 71.62±1.28 | 72.72±0.90 | 94.27±0.22 | 89.49±0.81 | 65.58±0.84 | 92.63±0.21 | 86.80±0.40 | 73.08±1.41 | 54.06±0.54 | 80.37±0.71 | 91.98±0.26 |
| SEAL | _Original_ | 58.94±1.72 | 59.12±0.76 | 78.00±1.94 | 87.27±1.30 | 62.59±1.06 | 91.32±0.30 | 83.13±0.83 | 69.25±1.30 | 51.02±0.83 | 71.88±1.43 | 90.89±0.41 |
| | _Edge Plus_ | 62.62±1.16 | 62.38±0.91 | 85.31±0.36 | 87.62±0.84 | 63.31±1.41 | 91.67±0.18 | 85.74±0.62 | 69.29±0.97 | 52.86±0.62 | 77.79±0.66 | 91.36±0.43 |
| | _Edge Minus_ | 60.75±1.88 | 61.58±0.76 | 86.03±1.31 | 87.54±1.23 | 63.84±0.89 | 91.64±0.21 | 85.43±0.63 | 69.16±1.01 | 52.77±0.86 | 78.06±0.80 | 91.31±0.45 |
| | _Edge Mean_ | 61.55±2.03 | 62.19±0.56 | 85.50±0.48 | 87.52±1.00 | 63.27±1.18 | 91.68±0.21 | 85.64±0.56 | 69.30±1.22 | 52.40±0.61 | 77.57±0.90 | 91.41±0.43 |
| | _Edge Att_ | 61.77±1.89 | 62.15±0.53 | 85.09±0.53 | 87.76±0.73 | 64.01±1.69 | 91.91±0.36 | 85.56±0.54 | 68.97±1.24 | 53.11±0.95 | 77.82±0.54 | 91.01±0.32 |
| WalkPool | _Original_ | 64.05±0.74 | 63.51±1.35 | 86.85±0.83 | 94.54±0.87 | 90.70±0.78 | 93.45±0.36 | 94.26±0.63 | 87.18±0.79 | 65.89±0.76 | 83.63±4.29 | 91.95±1.48 |
| | _Edge Plus_ | 64.15±0.87 | 63.49±1.08 | 91.73±0.38 | 94.55±0.92 | 90.57±0.86 | 94.39±0.14 | 94.81±0.24 | 87.22±0.77 | 66.74±0.54 | 87.53±0.40 | 95.30±0.16 |
| | _Edge Minus_ | 64.21±0.97 | 63.53±1.04 | 91.81±0.87 | 94.65±0.89 | 90.69±0.86 | 94.39±0.14 | 94.83±0.22 | 87.20±0.76 | 66.77±0.65 | 87.60±0.38 | 95.30±0.15 |
| | _Edge Mean_ | 64.13±0.89 | 63.47±1.59 | 91.46±0.86 | 94.65±0.96 | 90.71±0.86 | 94.40±0.13 | 94.83±0.21 | 87.13±0.77 | 66.79±0.61 | 87.57±0.39 | 95.32±0.16 |
| | _Edge Att_ | 63.90±1.21 | 63.62±1.39 | 90.97±1.62 | 94.71±0.80 | 90.69±0.81 | 94.40±0.14 | 94.85±0.24 | 87.17±0.82 | 66.78±0.54 | 87.66±0.29 | 95.32±0.13 |
## Appendix H Concatenation as another valid Edge Invariant subgraph
embedding
_Edge Concat._ Another simple and intuitive way to fuse the features from _Edge Plus_ and _Edge Minus_ is to concatenate the two embeddings into one representation, namely ${\mathbf{h}}^{concat}=[{\mathbf{h}}^{plus};{\mathbf{h}}^{minus}]$, where $[\cdot;\cdot]$ is the concatenation operation. ${\mathbf{h}}^{concat}$ is also an Edge Invariant subgraph embedding. In Table 8, we observe that _Edge Concat_ yields performance improvements similar to the other FakeEdge methods on all the backbone models.
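As a minimal illustration, the fusion rule above is plain vector concatenation; the embedding values below are placeholders, not real model outputs:

```python
# Minimal sketch of the Edge Concat fusion: h_concat = [h_plus; h_minus].
# The embeddings below are illustrative placeholders, not real model outputs.
def edge_concat(h_plus, h_minus):
    """Concatenate the Edge Plus and Edge Minus subgraph embeddings."""
    return list(h_plus) + list(h_minus)

h_plus = [0.1, 0.2, 0.3]   # subgraph embedding with the focal edge added
h_minus = [0.4, 0.5, 0.6]  # subgraph embedding with the focal edge removed
h_concat = edge_concat(h_plus, h_minus)  # dimension doubles: 3 + 3 = 6
```

Because both inputs are Edge Invariant, a deterministic fusion such as concatenation preserves Edge Invariance, at the cost of doubling the embedding dimension.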
Table 8: Comparison for concatenation operation (AUC). The best results are
highlighted in bold.
| Model | FakeEdge | Cora | Citeseer | Pubmed | USAir | NS | PB | Yeast | C.ele | Power | Router | E.coli |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GCN | _Original_ | 84.92±1.95 | 77.05±2.18 | 81.58±4.62 | 94.07±1.50 | 96.92±0.73 | 93.17±0.45 | 93.76±0.65 | 88.78±1.85 | 76.32±4.65 | 60.72±5.88 | 95.35±0.36 |
| | _Edge Att_ | 92.06±0.85 | 88.96±1.05 | 97.96±0.12 | 97.20±0.69 | 97.96±0.39 | 95.46±0.45 | 97.65±0.17 | 89.76±2.06 | 85.26±1.32 | 95.90±0.47 | 98.04±0.16 |
| | _Edge Concat_ | 92.63±1.00 | 89.88±1.00 | 97.96±0.11 | 97.27±0.95 | 98.07±0.78 | 95.39±0.44 | 97.55±0.46 | 89.78±1.59 | 85.71±0.75 | 96.19±0.59 | 98.06±0.23 |
| SAGE | _Original_ | 89.12±0.90 | 87.76±0.97 | 94.95±0.44 | 96.57±0.57 | 98.11±0.48 | 94.12±0.45 | 97.11±0.31 | 87.62±1.63 | 79.35±1.66 | 88.37±1.46 | 95.70±0.44 |
| | _Edge Att_ | 93.31±1.02 | 91.01±1.14 | 98.01±0.13 | 97.40±0.94 | 98.70±0.59 | 95.49±0.49 | 98.22±0.24 | 90.64±1.88 | 86.46±0.91 | 96.31±0.59 | 98.43±0.13 |
| | _Edge Concat_ | 93.03±0.57 | 91.14±1.46 | 98.08±0.07 | 97.54±0.70 | 98.59±0.26 | 95.66±0.39 | 98.04±0.37 | 91.14±1.19 | 86.46±1.04 | 96.19±0.54 | 98.40±0.22 |
| GIN | _Original_ | 82.70±1.93 | 77.85±2.64 | 91.32±1.13 | 94.89±0.89 | 96.05±1.10 | 92.95±0.51 | 94.50±0.65 | 85.23±2.56 | 73.29±3.88 | 84.29±1.20 | 94.34±0.57 |
| | _Edge Att_ | 90.76±0.88 | 89.55±0.61 | 97.50±0.15 | 96.34±0.82 | 98.35±0.54 | 95.29±0.29 | 97.66±0.33 | 89.39±1.61 | 86.21±0.67 | 95.78±0.52 | 97.74±0.33 |
| | _Edge Concat_ | 90.90±0.92 | 89.94±0.89 | 97.48±0.16 | 96.17±0.64 | 98.41±0.73 | 95.45±0.39 | 97.71±0.38 | 88.81±1.41 | 86.77±0.99 | 95.72±0.47 | 97.72±0.18 |
| PLNLP | _Original_ | 82.37±1.70 | 82.93±1.73 | 87.36±4.90 | 95.37±0.87 | 97.86±0.93 | 92.99±0.71 | 95.09±1.47 | 88.31±2.21 | 81.59±4.31 | 86.41±1.63 | 90.63±1.68 |
| | _Edge Att_ | 91.22±1.34 | 88.75±1.70 | 98.41±0.17 | 98.13±0.61 | 98.70±0.40 | 95.32±0.38 | 98.06±0.37 | 91.72±2.12 | 90.08±0.54 | 96.40±0.40 | 98.01±0.18 |
| | _Edge Concat_ | 93.01±1.16 | 91.19±1.52 | 98.45±0.12 | 97.86±0.37 | 98.81±0.33 | 95.18±0.24 | 98.04±0.21 | 91.79±1.79 | 89.16±1.01 | 96.31±0.36 | 98.13±0.18 |
| SEAL | _Original_ | 90.13±1.94 | 87.59±1.57 | 95.79±0.78 | 97.26±0.58 | 97.44±1.07 | 95.06±0.46 | 96.91±0.45 | 88.75±1.90 | 78.14±3.14 | 92.35±1.21 | 97.33±0.28 |
| | _Edge Att_ | 91.08±1.67 | 89.35±1.43 | 97.26±0.45 | 97.04±0.79 | 98.52±0.57 | 95.19±0.43 | 97.70±0.40 | 89.37±1.40 | 85.24±1.39 | 95.14±0.62 | 97.90±0.33 |
| | _Edge Concat_ | 90.22±1.60 | 89.93±1.31 | 97.40±0.24 | 96.83±1.01 | 98.23±0.49 | 95.29±0.43 | 97.68±0.34 | 88.99±1.13 | 85.60±1.03 | 95.76±0.74 | 97.72±0.25 |
| WalkPool | _Original_ | 92.00±0.79 | 89.64±1.01 | 97.70±0.19 | 97.83±0.97 | 99.00±0.45 | 94.53±0.44 | 96.81±0.92 | 93.71±1.11 | 82.43±3.57 | 87.46±7.45 | 95.00±0.90 |
| | _Edge Att_ | 91.98±0.80 | 89.36±0.74 | 98.37±0.19 | 98.12±0.81 | 99.03±0.50 | 95.47±0.27 | 98.28±0.24 | 93.63±1.11 | 91.25±0.60 | 97.27±0.27 | 98.70±0.14 |
| | _Edge Concat_ | 91.77±1.06 | 89.79±0.87 | 98.48±0.09 | 98.07±0.86 | 99.05±0.44 | 95.46±0.35 | 98.30±0.25 | 93.82±1.09 | 91.29±0.77 | 97.31±0.27 | 98.70±0.17 |
## Appendix I Dataset shift vs expressiveness: which contributes more with
FakeEdge?
In Section 4.3, we discussed how FakeEdge can enhance the expressive power of GNN-based models on non-isomorphic focal node pairs. Meanwhile, we have witnessed the boost in model performance brought by FakeEdge in the experiments. One natural question is whether resolving the dataset shift issue or the lift in expressiveness is the major contributor to the improved model performance.
To answer this question, we first revisit the condition for achieving greater expressiveness. FakeEdge lifts the expressive power when there exist two isomorphic nodes in the graph, from which we can construct a pair of non-isomorphic focal node pairs that GNNs cannot distinguish. Therefore, how often such isomorphic nodes exist in a graph determines how much improvement FakeEdge can make through greater expressiveness. Even though isomorphic nodes are common in specific types of graphs, such as regular graphs, they can be rare in real-world datasets [61]. Thus, we tend to conclude that resolving the dataset shift issue contributes more to the performance improvement than the greater expressive power does, though fully answering the question requires a further rigorous study.
## Appendix J Limitation
FakeEdge aligns the embeddings of isomorphic subgraphs in the training and testing sets. However, this poses a limitation that hinders one aspect of the GNN-based model’s expressive power. Figure 1 gives an example where the subgraphs are from the training and testing phases, respectively. Now consider that those two subgraphs are both from the training set (${\textnormal{c}}=\text{train}$). Still, the top subgraph has an edge observed at the focal node pair (${\textnormal{y}}=1$), while the other does not (${\textnormal{y}}=0$). With FakeEdge, the two subgraphs will be modified to be isomorphic, yielding the same representation, even though they are non-isomorphic before the modification. To the best of our knowledge, no existing method can simultaneously achieve the most expressive power and eliminate the dataset shift issue, because the edge at the focal node pair in the testing set can never be observed under a practical problem setting.
# Simultaneous Spatial and Temporal Assignment for Fast UAV Trajectory
Optimization using Bilevel Optimization
Qianzhong Chen, Sheng Cheng, and Naira Hovakimyan Manuscript received:
December 1, 2022; Revised March 10, 2023; Accepted April 8, 2023.This paper
was recommended for publication by Editor Hanna Kurniawati upon evaluation of
the Associate Editor and Reviewers’ comments. This work was supported by
National Aeronautics and Space Administration (NASA) grant 80NSSC22M0070,
National Science Foundation (NSF) under the RI grant #2133656, and Air Force
Office of Scientific Research (AFOSR) grant FA9550-21-1-0411.All the authors
are with the Department of Mechanical Science and Engineering, University of
Illinois Urbana-Champaign, USA. {qc19, chengs<EMAIL_ADDRESS>Digital Object Identifier (DOI): see top of this page.
###### Abstract
In this paper, we propose a framework for fast trajectory planning for
unmanned aerial vehicles (UAVs). Our framework is reformulated from an
existing bilevel optimization, in which the lower-level problem solves for the
optimal trajectory with a fixed time allocation, whereas the upper-level
problem updates the time allocation using analytical gradients. The lower-
level problem incorporates the safety-set constraints (in the form of
inequality constraints) and is cast as a convex quadratic program (QP). Our
formulation modifies the lower-level QP by excluding the inequality
constraints for the safety sets, which significantly reduces the computation
time. The safety-set constraints are moved to the upper-level problem, where
the feasible waypoints are updated together with the time allocation using
analytical gradients enabled by the OptNet. We validate our approach in
simulations, where our method’s computation time scales linearly with respect
to the number of safety sets, in contrast to the state-of-the-art that scales
exponentially.
###### Index Terms:
Constrained Motion Planning; Aerial Systems: Applications; Optimization and
Optimal Control
## I Introduction
Trajectory planning has long been a critical problem in robotics. Because trajectory planning largely determines the quality of robots’ motion, it has been investigated for various kinds of robots [1]. Pioneered by Mellinger et al. [2], the minimum-snap method has been dominant in trajectory planning for UAVs with differentially flat dynamics (e.g., quadrotors). This method takes the snap of the whole trajectory as the cost and formulates the planning problem as a quadratic program (QP). Meanwhile, various equality or inequality constraints encoding different requirements on the trajectory can be added to the QP, making it convenient for addressing more complex tasks.
An important requirement for a trajectory planning algorithm is that it quickly generates a feasible trajectory for the given UAV. Computational efficiency is not a significant issue if the optimization problem is convex, e.g., the minimum-snap formulation [2] with a fixed time allocation to the adjustable waypoints.
However, the optimization problem normally becomes non-convex when the time
allocation is included as an optimization variable. Since time allocation can
largely affect the quality of the planned trajectory, researchers are
searching for more efficient ways to optimize time allocation for trajectory
planning. Early trials have used heuristics [3] and decoupling methods [4],
although they both can lead to inefficient trajectories and burdensome
computation time. Mellinger et al. [2] propose to solve this problem by gradient descent, which can be seen as a prototype of the bilevel optimization method, though a finite difference is used to approximate the gradient. Sun et al. [5] formalize the idea of using bilevel optimization to determine the time allocation using analytical gradients. In their design, the lower-level problem optimizes the trajectory with a fixed time allocation, which is generated by the upper-level problem that optimizes the time allocation using analytical gradients.
Another feature that burdens computational efficiency is the requirement of collision avoidance. To fulfill the collision-free requirement, a flight corridor, i.e., a region with a safety guarantee, must be specified in advance. Ways to plan a trajectory within flight corridors vary, e.g., the sampling method [2], a two-staged planning strategy [6], and Bernstein polynomials [7, 5]. Notwithstanding, all of the methods mentioned above encode the corridors as inequality constraints, which burden the QP’s computation time, as a QP can be solved much faster when only equality constraints are involved.
We propose a bilevel optimization framework to plan a trajectory subject to
variable time allocation and safety-set constraints. Our formulation is
similar to that of [5]: the lower-level problem is convex and solved for the
global minimum; the upper-level problem is non-convex, and a new “solution” is
obtained by one-step gradient descent to reduce the optimization cost. The
uniqueness of our approach is that we exclude the inequality constraint for
the safety set in the lower-level problem, making it a QP with equality
constraints only. Specifically, the feasible waypoints in the safety set
originally characterized by the inequality constraints are fixed in the lower-
level problem. Such a formulation can significantly reduce the computation
time on the lower-level QP since it can be turned into an unconstrained
optimization problem. Correspondingly, the upper-level problem will adjust the
time allocation and the feasible waypoints using analytical gradients that are
efficiently obtained via OptNet [8]. Compared with the upper-level problem formulation in [5], where only the time allocation is updated via analytical gradients, the extra computation time to obtain the gradient for the spatial update (of the feasible waypoints) is negligible. This is the major reason why our formulation can significantly reduce the computation time relative to [5], which we validate in simulations with a scalability test. While both methods minimize the cost to a similar level, our approach’s computation time scales linearly with the number of waypoints, whereas that of [5] scales exponentially.
Our contributions are summarized as follows: 1. We reformulate the bilevel
optimization in [5] by excluding the inequality constraints in the lower-level
problem, which leads to drastically reduced computation time for an optimal
solution. 2. We use analytical gradients in the upper-level problem to
simultaneously update the spatial and temporal assignment, which is more
accurate and efficient than using finite-difference methods or other numerical
approximations.
The remainder of the paper is organized as follows: Section II reviews the
related work. Section III introduces the background of trajectory planning and
the existing bilevel optimization formulation. Section IV shows our
formulation of the bilevel optimization, with simulation results given in
Section V showing the advantage of our method in drastically reducing the
computation time. Finally, Section VI summarizes the paper and discusses
future work.
## II Related Work
### II-A Trajectory Planning
A general introduction to trajectory planning is provided in [1]. The authors of [2] first introduce the minimum-snap method, which leverages quadrotors’ differential flatness and applies a monomial polynomial parameterization to the planned trajectory. The method uses the snap to build the cost function and formulates the optimization problem as a convex QP. The problem can be solved efficiently with off-the-shelf solvers, making real-time onboard motion planning possible. Moreover, the QP formulation permits more trajectory-planning features to be integrated via equality or inequality constraints. Based on this formulation, the authors of [7] replace monomial polynomials with Bernstein polynomials to confine the whole trajectory within a convex hull with safety assurance. Another variant uses B-splines instead of monomial polynomials to include curvature constraints on the planned trajectory [9].
### II-B Time Allocation
As mentioned above, trajectory planning in the form of piecewise polynomials is a well-studied problem when the temporal assignment is fixed. However, since the temporal assignment determines the coefficients in the cost function, it strongly affects the resulting optimal trajectory. It has long been difficult to efficiently obtain an optimal time allocation, especially when planning a trajectory in a complex environment. The authors of [10] formulate the problem as a QP with a time penalty and conduct gradient descent to iteratively find a better time allocation. However, the method’s computation time is the major obstacle to its wide use in real-time trajectory planning. Heuristic methods [3] have also been used
to achieve better time allocation. Notwithstanding, such methods do not guarantee an optimal time allocation and can lead to suboptimal trajectories. Another strategy for finding the optimal time allocation is to decouple trajectory planning into a spatial problem and a temporal one, and then use a convex second-order cone program (SOCP) reformulation to find the optimal time allocation [4]. However, this method may be infeasible when the initial and final states are not static.
### II-C Bilevel Optimization
A bilevel optimization is a type of mathematical program where one
optimization problem (the lower-level optimization problem) is embedded inside
another optimization problem (the upper-level optimization problem), and the
lower-level optimization problem is constrained by the upper-level
optimization problem [11]. The hierarchical nature of bilevel or multi-level
optimization makes it rather appropriate for solving complex optimization
problems with more than one level, i.e., the coefficients of the lower level’s
optimization problem are dependent on the upper level’s optimization results.
In robotics, bilevel optimization has been deployed in optimal control [12]
and various kinds of robots’ motion planning [13, 14, 15]. Sun et al. [5]
present a method using bilevel optimization to conduct trajectory optimization
with optimal time allocation. The time allocation is solved in the upper-level
problem and passed to the lower-level one as a parameter. The analytical
gradients of the optimal cost with respect to the time allocation are obtained
with Karush–Kuhn–Tucker (KKT) conditions of the lower-level problem. Using the
analytical gradient with line search, the cost in the upper-level problem is
reduced, and the updated time allocation updates the parameters in the lower-
level problem to generate the trajectory. In the bilevel optimization problem,
the derivation of gradients is crucial. One method of obtaining gradients is
by analyzing the sensitivity of the optimization problem [16]. A similar
approach for obtaining the analytical gradient of the solution of a QP with
respect to the QP’s parameters via implicit differentiation over the KKT
condition is shown in [8].
## III Background
We use piecewise polynomials of time to parameterize the UAV trajectories.
Since the differential flatness of UAVs’ dynamics removes the need to explicitly enforce the dynamics (e.g., for quadrotors [2]), we can characterize the
trajectories as a smooth curve in the space of flat outputs
$\bm{\sigma}:[t_{0},t_{m}]\rightarrow\mathbb{R}^{3}\times SO(2)$ as
$\bm{\sigma}(t)=[x(t),y(t),z(t),\psi(t)]^{\top}$, which contains the
coordinates of the vehicle’s center of mass in the world coordinate system
$\bm{r}=[x,y,z]^{\top}$ and yaw angle $\psi$. Consider $m$ piecewise
polynomials that characterize the entire trajectory, where each piece is an
$n$-th order polynomial, i.e., for $i\in\\{1,2,\dots,m\\}$,
$\bm{\sigma}_{i}(t)=\sum_{k=0}^{n}\bm{\sigma}_{ik}t^{k},\quad
t\in[t_{i-1},t_{i}),\ \bm{\sigma}_{ik}\in\mathbb{R}^{4}.$ (1)
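As a concrete illustration of (1), a piecewise polynomial can be evaluated by locating the active segment and summing its monomials. The sketch below uses hypothetical knots and coefficients, with a scalar output standing in for the vector-valued $\bm{\sigma}$:

```python
import bisect

# Sketch of evaluating a piecewise polynomial trajectory as in (1):
# segment i is active on [t_{i-1}, t_i) and holds monomial coefficients sigma_{ik}.
def eval_piecewise(knots, coeffs, t):
    """knots: [t_0, ..., t_m]; coeffs[i]: monomial coefficients of segment i+1."""
    i = min(bisect.bisect_right(knots, t) - 1, len(coeffs) - 1)
    return sum(c * t**k for k, c in enumerate(coeffs[i]))

knots = [0.0, 1.0, 2.0]
# segment 1: sigma(t) = t; segment 2: sigma(t) = 0.5 + 0.5 t^2 (continuous at t = 1)
coeffs = [[0.0, 1.0], [0.5, 0.0, 0.5]]
```

In the paper's setting, each coefficient would be a vector in $\mathbb{R}^{4}$ (position and yaw), and the continuity constraints below tie adjacent segments together up to the required derivative order.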
The problem is to find the polynomial coefficients
$\\{\bm{\sigma}_{ik}\\}_{i\in\mathcal{I},k\in\mathcal{K}}$ and the temporal
assignment $\bm{T}=[t_{1},t_{2},\dots,t_{m-1}]^{\top}$ such that the following
cost function is minimized
$\sum_{i=0}^{m-1}\int_{t_{i}}^{t_{i+1}}\mu_{r}\lVert\frac{\text{d}^{k_{r}}\bm{r}(t)}{\text{d}t^{k_{r}}}\rVert^{2}+\mu_{\psi}\lVert\frac{\text{d}^{k_{\psi}}\psi(t)}{\text{d}t^{k_{\psi}}}\rVert^{2}\text{d}t,$
(2)
where $k_{r}$ and $k_{\psi}$ refer to the order of derivative of the
coordinate $\bm{r}$ and yaw angle $\psi$, respectively. For the minimum-snap
cost in [2], $k_{r}=4$ and $k_{\psi}=2$ are used. The constraints include
convex set $\mathcal{C}_{i}\subseteq\mathbb{R}^{3}$ of adjustable waypoints at
time instances, i.e.,
$\bm{r}_{1}(t_{0})\in\mathcal{C}_{0},\ \bm{r}_{i}(t_{i})\in\mathcal{C}_{i},\
i\in\\{1,2,\dots,m\\}.$ (3)
The sets $\\{\mathcal{C}_{i}\\}_{i=0:m}$ serve as one type of safety
constraint, e.g., each set can be a corner connecting two corridors such that
the UAV must pass through the set to ensure safety. (The usage of Bernstein
polynomials can characterize the corridors as safety sets [7, 5], which is out
of the scope of this paper and will not be discussed.)
Without loss of generality, the feasible sets $\mathcal{C}_{i}$’s are
polytopes, yielding simple characterization by linear inequalities. The
continuity between adjacent polynomials is enforced by
$\frac{\text{d}^{k_{1}}\bm{r}_{i}(t_{i})}{\text{d}t^{k_{1}}}=\frac{\text{d}^{k_{1}}\bm{r}_{i+1}(t_{i})}{\text{d}t^{k_{1}}},\
\frac{\text{d}^{k_{2}}\psi_{i}(t_{i})}{\text{d}t^{k_{2}}}=\frac{\text{d}^{k_{2}}\psi_{i+1}(t_{i})}{\text{d}t^{k_{2}}},$
(4)
for $i\in\\{1,2,\dots,m-1\\}$, $k_{1}\in\\{0,1,\dots,k_{r}\\}$, and
$k_{2}\in\\{0,1,\dots,k_{\psi}\\}$. Finally, the temporal constraint requires
the time allocation to satisfy
$\bm{T}\in\mathcal{T}=\\{[t_{1},\dots,t_{m-1}]^{\top}|t_{0}<t_{1}<\dots<t_{m-1}<t_{m}\\}.$
(5)
The optimization problem seeks to find the polynomial coefficients
$\bm{\sigma}$ and temporal assignment $\bm{T}$ that minimize the objective
function in (2) subject to the constraints in (3)–(5). Since polynomials are
highly structured basis functions with coefficients captured by $\bm{\sigma}$,
the problem can be conveniently presented as
$\displaystyle\underset{\bm{\sigma},\bm{T}\in\mathcal{T}}{\text{minimize}}$
$\displaystyle
J(\bm{\sigma},\bm{T})=\bm{\sigma}^{\top}P(\bm{T})\bm{\sigma}+\bm{\sigma}^{\top}\bm{q}(\bm{T})$
(P) subject to $\displaystyle G(\bm{T})\bm{\sigma}\preceq\bm{h},$
$\displaystyle A(\bm{T})\bm{\sigma}=\bm{b},$
where $\bm{\sigma}$ denotes a permutation of all polynomial coefficients
$\\{\bm{\sigma}_{ik}\\}_{i\in\mathcal{I},k\in\mathcal{K}}$. The inequality
constraint $G(\bm{T})\bm{\sigma}\preceq\bm{h}$ associates with the safety-set
constraint (3), whereas the equality constraint $A(\bm{T})\bm{\sigma}=\bm{b}$ associates
with the continuity constraints (4). All the coefficients $P(\bm{T})$,
$\bm{q}(\bm{T})$, $G(\bm{T})$, $\bm{h}$, $A(\bm{T})$, and $\bm{b}$ are derived
based on the objective function (2) and the constraints (3) and (4). The
matrix $P(\bm{T})$ is positive semi-definite.
In general, the problem (P) is a non-convex problem. Therefore, bilevel
optimization has been widely applied to solve the problem (P), e.g., [2, 10,
5]. The bilevel optimization has an upper-level problem:
$\displaystyle\underset{\bm{T}\in\mathcal{T}}{\text{minimize}}$ $\displaystyle
J(\bm{\sigma}^{*},\bm{T})$ (6) subject to
$\displaystyle\bm{\sigma}^{*}(\bm{T})\in\underset{\bm{\sigma}}{\text{argmin}}\\{J(\bm{\sigma},\bm{T}):\bm{\sigma}\in\mathcal{F}\\},$
where $\mathcal{F}$ denotes the feasible set of $\bm{\sigma}$ such that
$\mathcal{F}=\\{\bm{\sigma}:G(\bm{T})\bm{\sigma}\preceq\bm{h},A(\bm{T})\bm{\sigma}=\bm{b}\\}$.
Embedded in the upper-level problem, the lower-level problem is
$\displaystyle\underset{\bm{\sigma}}{\text{minimize}}$ $\displaystyle
J(\bm{\sigma},\bm{T})$ (7) subject to $\displaystyle
G(\bm{T})\bm{\sigma}\preceq\bm{h},$ $\displaystyle
A(\bm{T})\bm{\sigma}=\bm{b}.$
Note that in the lower-level problem, the time assignment $\bm{T}$ is a
parameter of the problem with a known value (obtained via solving (6)). Hence,
(7) reduces to the minimum-snap formulation in [2] and turns into a quadratic
program, which is convex and efficiently solvable. The challenges come from
the upper-level optimization (6), which is non-convex. A typical approach in
solving (6) is to use gradient descent, where the temporal assignment $\bm{T}$
is iteratively updated. In [2], the descent direction is obtained by computing
the directional derivative along unit vectors. In [5], the descent direction
is the gradient of the optimal lower-level cost
$\nabla_{\bm{T}}J^{*}(\bm{\sigma}^{*}(\bm{T}),\bm{T})$, which is computed as
an analytical gradient by implicit differentiation of the KKT condition of the
lower-level problem (7).
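The upper-level gradient-descent loop can be sketched on a toy one-dimensional lower-level problem. The quadratic cost and the finite-difference surrogate below are purely illustrative; the actual methods in [2] and [5] use directional derivatives and analytical KKT gradients on the full QP:

```python
# Toy sketch of the bilevel loop in (6)-(7): the lower level returns the
# optimal cost for a fixed parameter t, and the upper level descends on t.
def lower_level_cost(t):
    # Toy lower level: min_x (x - t)^2 + 0.5 * t^2 has x* = t, so J*(t) = 0.5 t^2.
    return 0.5 * t * t

def descend(t, lr=0.1, steps=200, eps=1e-6):
    for _ in range(steps):
        # finite-difference surrogate for the gradient dJ*/dt of the optimal cost
        g = (lower_level_cost(t + eps) - lower_level_cost(t - eps)) / (2 * eps)
        t -= lr * g
    return t

t_star = descend(5.0)  # converges toward the minimizer t = 0
```

In the real problem, $t$ is replaced by the time-allocation vector $\bm{T}$, each cost evaluation requires solving the lower-level QP, and the finite-difference surrogate is what the analytical KKT gradient of [5] avoids.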
## IV Method
In this section, we describe our reformulation of the bilevel optimization problem in [5]. Our reformulation drastically reduces the computation time relative to [5] by cutting the cost of the lower-level QP, which is reported in [5] to dominate the computation time despite being convex. We keep only equality constraints in the lower-level problem, which can then be reduced to an unconstrained problem and solved in a shorter time.
Our formulation starts by introducing a new optimization variable, which
allows decomposing the inequality constraint
$G(\bm{T})\bm{\sigma}\preceq\bm{h}$ into an equality constraint and an
inequality constraint. Specifically, let the adjustable waypoint
$\bm{\xi}_{i}\in\mathbb{R}^{3}$ be the new optimization variable such that
$\bm{\xi}_{i}\in\mathcal{C}_{i}$ for $i\in\\{0,1,\dots,m\\}$. Then the
constraint (3) breaks down to
$\bm{\xi}_{0}=\bm{r}_{1}(t_{0}),\quad\bm{\xi}_{0}\in\mathcal{C}_{0},\quad\bm{\xi}_{i}=\bm{r}_{i}(t_{i}),\quad\bm{\xi}_{i}\in\mathcal{C}_{i},$
(8)
for $i\in\\{1,\dots,m\\}$. Correspondingly, the inequality constraint
$G(\bm{T})\bm{\sigma}\preceq\bm{h}$ turns into
$C(\bm{T})\bm{\sigma}=\bm{\xi},\quad R\bm{\xi}\preceq\bm{s},$ (9)
where $\bm{\xi}^{\top}=[\bm{\xi}_{0}^{\top}\ \bm{\xi}_{1}^{\top}\ \dots\
\bm{\xi}_{m}^{\top}]$. The matrix $C(\bm{T})$ evaluates the positions of the
trajectory on the allocated time instances in $\bm{T}$, whereas the inequality
$R\bm{\xi}\preceq\bm{s}$ is the half-space representation of the polytopes
$\\{\mathcal{C}_{i}\\}_{i=0:m}$. With the newly introduced variable and
constraints, problem (P) turns into
$\displaystyle\underset{\bm{\sigma},\bm{\xi},\bm{T}\in\mathcal{T}}{\text{minimize}}$
$\displaystyle
J(\bm{\sigma},\bm{T})=\bm{\sigma}^{\top}P(\bm{T})\bm{\sigma}+\bm{\sigma}^{\top}\bm{q}(\bm{T})$
(AP) subject to $\displaystyle R\bm{\xi}\preceq\bm{s},$ $\displaystyle
C(\bm{T})\bm{\sigma}=\bm{\xi},$ $\displaystyle A(\bm{T})\bm{\sigma}=\bm{b}.$
The introduction of the new variable $\bm{\xi}$ and additional constraints
preserves the equivalence between (P) and (AP): from a solution of one
problem, a solution of the other is readily found, and vice versa [17].
Similar to (P), problem (AP) also can be solved using bilevel optimization.
However, the formulation of (AP) permits only keeping the equality constraints
in the lower-level optimization:
$\displaystyle\underset{\bm{\sigma}}{\text{minimize}}$ $\displaystyle
J(\bm{\sigma},\bm{T})$ (LLP) subject to $\displaystyle
C(\bm{T})\bm{\sigma}=\bm{\xi},$ $\displaystyle A(\bm{T})\bm{\sigma}=\bm{b},$
where the spatial and temporal assignments, $\bm{\xi}$ and $\bm{T}$, are
generated from the upper-level optimization:
$\displaystyle\underset{\bm{\xi}\in\mathcal{X},\bm{T}\in\mathcal{T}}{\text{minimize}}$
$\displaystyle J(\bm{\sigma}^{*},\bm{T})$ (ULP) subject to
$\displaystyle\bm{\sigma}^{*}(\bm{\xi},\bm{T})\in\underset{\bm{\sigma}}{\text{argmin}}\\{J(\bm{\sigma},\bm{T}):\bm{\sigma}\in\mathcal{F}(\bm{\xi},\bm{T})\\}.$
Here, the set $\mathcal{F}(\bm{\xi},\bm{T})$ denotes the feasible set of
$\bm{\sigma}$ such that
$\mathcal{F}(\bm{\xi},\bm{T})=\\{\bm{\sigma}:C(\bm{T})\bm{\sigma}=\bm{\xi},A(\bm{T})\bm{\sigma}=\bm{b}\\}$
and $\mathcal{X}$ denotes the feasible waypoints such that
$\mathcal{X}=\\{\bm{\xi}:R\bm{\xi}\preceq\bm{s}\\}$.
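To make the structure of (LLP) concrete, the following minimal sketch (Python/NumPy — an assumption, since the paper's simulations use MATLAB's quadprog) solves an equality-constrained QP of the form of (LLP) by a single solve of its KKT linear system; the symbols mirror $P$, $\bm{q}$, $C$, $\bm{\xi}$, $A$, $\bm{b}$ in (LLP), and the function name is ours.

```python
import numpy as np

def solve_llp(P, q, C, xi, A, b):
    """Solve the equality-constrained QP
        minimize   s^T P s + s^T q
        subject to C s = xi,  A s = b
    via its KKT system -- a single linear solve, no iterations."""
    E = np.vstack([C, A])                 # stacked equality constraints
    d = np.concatenate([xi, b])
    n, m = P.shape[0], E.shape[0]
    # Stationarity: 2 P s + q + E^T lam = 0; primal feasibility: E s = d.
    K = np.block([[2.0 * P, E.T],
                  [E, np.zeros((m, m))]])
    rhs = np.concatenate([-q, d])
    sol = np.linalg.solve(K, rhs)
    s, lam = sol[:n], sol[n:]
    J = s @ P @ s + s @ q                 # optimal cost J*(xi, T)
    return s, lam, J
```

Because only equality constraints appear, one lower-level solve costs a single dense linear solve, which is the source of the speedup analyzed later in the paper.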
###### Remark 1
Our formulation can be extended to include dynamic constraints of a UAV, e.g.,
velocity and acceleration limits. Such constraints can be cast as convex sets
$\mathcal{C}_{i}^{v}$ and $\mathcal{C}_{i}^{a}$ for feasible velocities and
accelerations, respectively. In other words, we have
$\displaystyle\bm{v}_{1}(t_{0})\in\mathcal{C}_{0}^{v},\
\bm{v}_{i}(t_{i})\in\mathcal{C}_{i}^{v},\
\bm{a}_{1}(t_{0})\in\mathcal{C}_{0}^{a},\
\bm{a}_{i}(t_{i})\in\mathcal{C}_{i}^{a},$ (10)
for $i\in\\{1,2,\dots,m\\}$ (analogous to (3) for the feasible waypoints),
which can be augmented to the inequality constraints
$G(\bm{T})\bm{\sigma}\preceq\bm{h}$ in (P). Subsequently, the decomposition in
(9) will include the augmented velocity and acceleration assignments.
Therefore, our design of lower- and upper-level problems also applies to the
dynamic constraints, where the upper-level one determines the temporal
allocation and augmented spatial assignments (for position, velocity, and
acceleration), and the lower-level one determines the polynomial coefficients
to meet these assignments.
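To illustrate how per-waypoint boxes (and, per the remark above, velocity and acceleration boxes) enter the half-space representation $R\bm{\xi}\preceq\bm{s}$ in (9), the sketch below (Python/NumPy — an assumption; both function names are ours) builds the half-space form of an axis-aligned cube and block-diagonally stacks several such sets:

```python
import numpy as np

def box_halfspace(center, half_side):
    """Half-space form of the axis-aligned cube
    {x : |x - center|_inf <= half_side}, returned as (R_i, s_i)
    with R_i x <= s_i (2d rows for a d-dimensional box)."""
    d = center.size
    R = np.vstack([np.eye(d), -np.eye(d)])
    s = np.concatenate([center + half_side, -(center - half_side)])
    return R, s

def stack_blocks(blocks):
    """Block-diagonally stack per-waypoint (R_i, s_i) pairs into a
    single pair (R, s) with R xi <= s, as in the decomposition (9)."""
    Rs, ss = zip(*blocks)
    rows = sum(R.shape[0] for R in Rs)
    cols = sum(R.shape[1] for R in Rs)
    R, r, c = np.zeros((rows, cols)), 0, 0
    for Ri in Rs:
        R[r:r + Ri.shape[0], c:c + Ri.shape[1]] = Ri
        r += Ri.shape[0]
        c += Ri.shape[1]
    return R, np.concatenate(ss)
```

With a 0.6 m cube (half side 0.3 m, as in the simulations), each 3-D waypoint contributes six inequality rows; augmenting velocity and acceleration boxes simply appends more blocks.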
A few remarks on our construction of the lower- and upper-level problems are in
order. First, the lower-level optimization is still a convex problem: a
quadratic program with equality constraints only. The insight here is to
significantly reduce the computation time of the quadratic program by keeping
only the equality constraints. Such a problem can be reduced to an equivalent
unconstrained problem by introducing Lagrange multipliers, which amounts to
solving a system of linear equations [17]. Second, the upper-level problem,
despite taking a different form than those in [2, 10, 5], is generally hard to
solve due to its non-convexity. Coordinate descent could be applied to solve
(LLP) and (ULP) alternately, but its computation time can be prohibitive for
real-time planning. Instead, we use gradient descent on (ULP) to update
$\bm{\xi}$ and $\bm{T}$ only once each time (LLP) is solved, and the updated
values of $\bm{\xi}$ and $\bm{T}$ are then plugged into (LLP) as parameters.
The update requires the partial derivatives $\partial J^{*}/\partial\bm{\xi}$
and $\partial J^{*}/\partial\bm{T}$:
$\bm{\xi}\leftarrow P_{\mathcal{X}}\left(\bm{\xi}-\alpha_{1}(\frac{\partial
J^{*}}{\partial\bm{\xi}})^{\top}\right),\ \bm{T}\leftarrow
P_{\mathcal{T}}\left(\bm{T}-\alpha_{2}(\frac{\partial
J^{*}}{\partial\bm{T}})^{\top}\right),$ (11)
where $\alpha_{1},\alpha_{2}>0$ are user-selected step sizes and
$P_{\mathcal{Y}}(y)$ denotes the projection operator [18] that projects $y$
onto the set $\mathcal{Y}$. The step sizes $\alpha_{1}$ and $\alpha_{2}$
govern the updates of the spatial assignment and the time allocation,
respectively. Since the two variables $\bm{\xi}$ and $\bm{T}$ have entirely
different physical units in space and time, individual step sizes are needed
for efficient parameter updates. For the gradient descent, the key is to
obtain the partial derivatives $\partial J^{*}/\partial\bm{\xi}$ and $\partial
J^{*}/\partial\bm{T}$, which represent the sensitivity of the optimal cost of
(LLP) to $\bm{\xi}$ and $\bm{T}$, respectively. We use OptNet [8] to obtain
these two gradients. Specifically, OptNet computes the analytical
gradient of the optimal cost with respect to the coefficients of a QP (e.g.,
$P(\bm{T})$, $\bm{q}(\bm{T})$, $C(\bm{T})$, $A(\bm{T})$, $\bm{\xi}$, and
$\bm{b}$ in (LLP)) by implicitly differentiating the KKT conditions. Using
this method, we can directly obtain
$\frac{\partial J^{*}}{\partial\bm{\xi}},\frac{\partial J^{*}}{\partial
P(\bm{T})},\frac{\partial J^{*}}{\partial\bm{q}(\bm{T})},\frac{\partial
J^{*}}{\partial C(\bm{T})},\frac{\partial J^{*}}{\partial A(\bm{T})}.$ (12)
Using the chain rule, we obtain
$\displaystyle\frac{\partial J^{*}}{\partial\bm{T}}=$
$\displaystyle\frac{\partial J^{*}}{\partial P(\bm{T})}\frac{\partial
P(\bm{T})}{\partial\bm{T}}+\frac{\partial
J^{*}}{\partial\bm{q}(\bm{T})}\frac{\partial\bm{q}(\bm{T})}{\partial\bm{T}}$
$\displaystyle+\frac{\partial J^{*}}{\partial C(\bm{T})}\frac{\partial
C(\bm{T})}{\partial\bm{T}}+\frac{\partial J^{*}}{\partial
A(\bm{T})}\frac{\partial A(\bm{T})}{\partial\bm{T}},$ (13)
where $\frac{\partial P(\bm{T})}{\partial\bm{T}}$,
$\frac{\partial\bm{q}(\bm{T})}{\partial\bm{T}}$, $\frac{\partial
C(\bm{T})}{\partial\bm{T}}$, $\frac{\partial A(\bm{T})}{\partial\bm{T}}$ are
easy to compute since $P(\bm{T})$, $\bm{q}(\bm{T})$, $C(\bm{T})$, $A(\bm{T})$
are explicit functions of $\bm{T}$. We summarize our solution method to (AP)
in Alg. 1.
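The assembly in (13) can be written compactly: each matrix-valued partial from OptNet is contracted with the corresponding derivative tensor of the QP data. A minimal sketch (Python/NumPy — an assumption; the function name is ours, and the tensors' last axis indexes the entries of $\bm{T}$):

```python
import numpy as np

def dJ_dT(dJ_dP, dP_dT, dJ_dq, dq_dT, dJ_dC, dC_dT, dJ_dA, dA_dT):
    """Assemble dJ*/dT via the chain rule (13). For each time entry j,
    matrix-valued partials are contracted with a Frobenius inner
    product; the vector partial dJ*/dq contributes an ordinary dot."""
    m = dP_dT.shape[-1]                    # number of time variables
    g = np.zeros(m)
    for j in range(m):
        g[j] = (np.sum(dJ_dP * dP_dT[..., j])
                + dJ_dq @ dq_dT[:, j]
                + np.sum(dJ_dC * dC_dT[..., j])
                + np.sum(dJ_dA * dA_dT[..., j]))
    return g
```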
Algorithm 1 Solution method of (AP)
Input: initial temporal assignment $\bm{T}_{0}$, initial spatial assignment $\bm{\xi}_{0}$, spatial step size $\alpha_{1}$, temporal step size $\alpha_{2}$, and termination condition $\mathcal{C}$.
Output: optimal spatial and temporal assignments $(\bm{\xi}^{*},\bm{T}^{*})$ and polynomial coefficients $\bm{\sigma}^{*}$.
1: $(\bm{\xi},\bm{T})\leftarrow(\bm{\xi}_{0},\bm{T}_{0})$
2: while $\mathcal{C}$ is FALSE do
3:  Solve (LLP) using a QP solver (with the parameters $P(\bm{T})$, $\bm{q}(\bm{T})$, $A(\bm{T})$, $\bm{b}$, $C(\bm{T})$, $\bm{\xi}$) to obtain the optimal solution $\bm{\sigma}^{*}$ and optimal value $J^{*}$
4:  Obtain the gradients $\partial J^{*}/\partial\bm{\xi}$ and $\partial J^{*}/\partial\bm{T}$ using OptNet [8] and the chain rule in (13)
5:  Use the projected gradient descent in (11) to update $\bm{\xi}$ and $\bm{T}$
6:  $(\bm{\xi}^{*},\bm{T}^{*})\leftarrow(\bm{\xi},\bm{T})$
7: end while
8: return $(\bm{\xi}^{*},\bm{T}^{*})$ and $\bm{\sigma}^{*}$
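The loop above can be sketched end-to-end as follows (Python — an assumption: we pass the lower-level optimal cost $J^{*}(\bm{\xi},\bm{T})$ as a callable, use finite-difference gradients as a stand-in for OptNet's analytical gradients, and all function names are ours):

```python
import numpy as np

def fd_grad(f, x, eps=1e-6):
    """Forward-difference gradient of a scalar function f at x
    (a stand-in for OptNet's analytical gradients)."""
    fx = f(x)
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp = x.astype(float).copy()
        xp[i] += eps
        g[i] = (f(xp) - fx) / eps
    return g

def solve_ap(xi0, T0, llp_cost, proj_xi, proj_T,
             a1=0.2, a2=0.08, tol=0.03, max_iter=50):
    """Sketch of Alg. 1: alternate one lower-level cost evaluation
    J*(xi, T) with one projected gradient step (11) on the spatial
    assignment xi and the time allocation T; stop when the relative
    cost reduction falls below tol."""
    xi, T = np.asarray(xi0, float), np.asarray(T0, float)
    J_prev = llp_cost(xi, T)
    for _ in range(max_iter):
        g_xi = fd_grad(lambda z: llp_cost(z, T), xi)
        g_T = fd_grad(lambda z: llp_cost(xi, z), T)
        xi = proj_xi(xi - a1 * g_xi)       # P_X: clamp into safety sets
        T = proj_T(T - a2 * g_T)           # P_T: keep times feasible
        J = llp_cost(xi, T)
        if abs(J_prev - J) <= tol * abs(J_prev):
            break
        J_prev = J
    return xi, T, llp_cost(xi, T)
```

For cubic safety sets and simple time bounds, both projections reduce to elementwise clamping, so each upper-level step is essentially free compared with the lower-level solve.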
## V Simulation results
All the simulations shown in this section are implemented in MATLAB R2021b
on a computer with a 2.20 GHz Intel i7-8750H CPU. We use quadprog as the QP
solver and choose its interior-point method over the active-set method
because the former provided better solution quality with shorter computation
time in our simulations. This observation
is consistent with the solver comparison shown in [5]. We compare our method
with that of [5] (referred to as “compared method” in this section) in three
experiments. In the first one, we fix the number of adjustable waypoints and
show a breakdown of the computation time. In the second one, we conduct a
scalability experiment to show how the computation time scales with the number
of adjustable waypoints. In the third one, we fix the number of adjustable
waypoints and include the dynamic constraints. Note that the number of
adjustable waypoints equals the number of safety sets as indicated in (8).
### V-A Trajectory Planning with Two Adjustable Waypoints
We randomly select four waypoints, forming a path with a total length of 10 m.
Initially, we allocate 2 s to each trajectory segment. The start and end
points are fixed with velocity, acceleration, jerk, and snap set to 0. Two
adjustable waypoints are constrained in safety sets $\mathcal{C}_{1}$ and
$\mathcal{C}_{2}$, which are cubes with 0.6 m side lengths centered at the
initial waypoints. The scenario is illustrated in Fig. 3. Our method optimizes
the time allocation and spatial assignment for the two adjustable waypoints,
whereas the compared method only updates the time allocation (the adjustable
waypoints are solved in their lower-level problem). The iterations terminate
when the relative cost reduction between consecutive iterations is within 3%.
The choice of the spatial and temporal step sizes, $\alpha_{1}$ and
$\alpha_{2}$, largely determines the performance and computation time of
the proposed method. We first investigate the optimal cost and total
computation time subject to different combinations of $\alpha_{1}$ and
$\alpha_{2}$, where the results are shown in Figs. 1 and 2. It can be observed
that when $\alpha_{1}\geq 0.2$, increasing $\alpha_{1}$ does not lower
computation time, and when $\alpha_{2}\leq 0.08$, decreasing $\alpha_{2}$ does
not reduce the optimal cost significantly. Therefore, we choose
$\alpha_{1}=0.2$ and $\alpha_{2}=0.08$ to balance optimality and
computation time, and we use this combination of step sizes in the
subsequent simulations.
Figure 1: Optimal costs of the trajectories planned by the proposed method under different combinations of the step sizes $\alpha_{1}$ (spatial) and $\alpha_{2}$ (temporal).
Figure 2: Total computation time of the trajectories planned by the proposed method under different combinations of the step sizes $\alpha_{1}$ (spatial) and $\alpha_{2}$ (temporal).
Figure 3: An example with four random waypoints, where the two middle ones are adjustable.
Figure 3 shows the gradually updated trajectory in the first 20 iterations.
Compared to the stretched initial trajectory, the final trajectory in
iteration 20 is better shaped, indicating the benefit of simultaneously
conducting spatial and temporal assignments. Meanwhile, the adjustable
waypoints are contained within the cubic safety sets and close to the initial
waypoints. The cost reduction is shown in Fig. 4, where both the proposed
method and the compared method are initialized with identical temporal
assignments and terminate at the 13th iteration according to the 3% relative
cost reduction criterion. Both methods achieve similar optimal costs in a
similar number of iterations.
Figure 4: Cost reduction through iterations in the case with two adjustable
waypoints
The optimal cost and computation time of the two methods are shown in Table I.
The comparison shows that the proposed method reaches a slightly smaller cost
with a significant time reduction compared to [5] over the same number of
iterations. We dissect the computation in each iteration into four parts and
measure the individual computation times. The average computation times and
their standard deviations are shown in Table II. In the compared method, the
computation time of solving the QP fluctuates due to the inequality
constraints in the QPs. Specifically, the change in the time coefficients
dramatically affects the parameters of the QP problem, causing the number of
solver iterations and the computation time for solving the QP to vary over a
wide range. Since the compared method updates the adjustable waypoints in the
lower-level QP, its spatial update time is not applicable. The large gap in
total computation time mainly comes from the discrepancy in the time for
solving the QP in the two methods. In addition, we observe that the QP solver
consistently reports a single iteration when running the proposed method,
whereas the number varies from 5 to 20 when running the compared method.
These observations are consistent with our earlier analysis that the merits
of the proposed method mainly lie in the exclusion of inequality constraints
when solving the QP.
TABLE I: Comparison on the optimal costs and computation time till convergence
| Compared | Proposed
---|---|---
Optimal Cost | 135.69 | 132.54
Total Computation Time [ms] | 361.14 | 141.31
TABLE II: Breakdown of the average computation time and its standard deviation for one iteration. (U) and (L) associate with the upper- and lower-level problems, respectively.
Computation Time [ms] | Compared | Proposed
---|---|---
Gradients from OptNet (U) | 5.21$\pm$0.45 | 4.04$\pm$0.03
Update QP Parameters (U) | 1.75$\pm$0.08 | 1.72$\pm$0.12
Update Waypoints (U) | N/A | 0.34$\pm$0.03
Solving QP (L) | 20.82$\pm$17.39 | 4.77$\pm$0.29
Total | 27.78$\pm$17.92 | 10.87$\pm$0.47
We also study how the size of the safety sets affects the performance of the
two methods. Keeping all other conditions fixed, we expand the side length of
the two cubic safety sets to 0.9 m and 1.2 m. With new safety sets, we conduct
the simulations and record both methods’ optimal cost and computation time,
which are shown in Table III. The results show that for both methods, the
optimal cost and total computation time decline remarkably when the side
length of the safety sets expands from 0.6 m to 0.9 m. However, the values
remain almost unchanged when the side length of safety sets expands from 0.9 m
to 1.2 m. This occurs because the optimal waypoints lie on the boundary of the
cubes with 0.6 m side length, in contrast to those in the interior of the
cubes with 0.9 m side length. In other words, the optimization constraints
change from active to inactive when the side length of the safety sets
expands from 0.6 m to 0.9 m. Therefore, expanding from 0.6 m to 0.9 m lowers
the cost and computation time dramatically, whereas expanding from 0.9 m to
1.2 m does not have the same effect since the optimal waypoints are all in
the interior of the safety sets. Regardless of the size of the safety sets,
the proposed method has the advantage of shorter computation time while
achieving almost the same planning quality as the compared method in all
three cases.
TABLE III: Comparison on optimal cost and computation time for safety sets with different sizes
Side Length of Safety Sets [m] | | Compared | Proposed
---|---|---|---
0.6 | Optimal Cost | 135.69 | 132.54
Comp. Time [ms] | 361.14 | 141.31
0.9 | Optimal Cost | 98.53 | 99.16
Comp. Time [ms] | 261.30 | 132.30
1.2 | Optimal Cost | 96.44 | 95.25
Comp. Time [ms] | 283.62 | 159.48
### V-B Scalability Experiment with Multiple Adjustable Waypoints
We conduct a scalability experiment to test how the proposed method scales
and to demonstrate its merit in computation time when planning more complex
trajectories. In this experiment, we conduct a
series of tests with the number of adjustable waypoints spanning from one to
ten. The start and end waypoints are fixed at (0, 0, 0) and (10, 10, 10),
respectively, with velocity, acceleration, jerk, and snap set to 0 at both
waypoints. The adjustable waypoints are initialized as the vertices of a
slalom path and are evenly distributed in the $z$-axis (see Fig. 5 for an
illustration with ten adjustable waypoints). Therefore, the planned
trajectories twist and turn around the line connecting the start and end
waypoints, causing a dramatic change in the cost as the number of adjustable
waypoints increases (the cost increases from 20 to 833,536 as the waypoints
increase from one to ten). The total time of the trajectory is 8 s. The
initial time allocated to the adjustable waypoints is equally distributed in
the 8 s interval. For both methods, the adjustable waypoints are constrained
within a cube with 0.6 m side length centered at the initially selected
waypoints. For all cases, the termination criterion is set to 5% relative
reduction (which is met within 20 iterations for both methods). Figure 5 shows
an example with ten adjustable waypoints. Similar to what has been observed in
Fig. 3, the final trajectory in iteration 20 is better shaped and less
stretched than the initial trajectory.
Figure 5: An example with twelve random waypoints, where ten middle ones are
adjustable.
The total computation time (when the 5% relative cost reduction is reached) of
the two methods scales similarly to that of a single iteration shown in Table
II. The time comparison is shown in Fig. 6. It is clear that the
proposed method has a great advantage compared with [5]: The computation time
of the proposed method scales linearly with the number of adjustable
waypoints, whereas the compared method in [5] scales exponentially.
Specifically, we conduct a regression analysis of the computation time of the
two methods with a safety set size of 0.6 m. Denote the computation time by
$t$. Then for $N\in\{1,2,\dots,10\}$ safety sets, the proposed
method's computation time fits $t=7.52N-3.42$ with an R-square score of 0.99
(the exponential regression $t=11.45\exp{(0.19N)}$ gives an R-square score of
0.97, which has less fidelity than the linear fit). The compared method's
computation time fits $t=72.06\exp{(0.31N)}$ with an R-square score of 0.98
(the linear regression $t=170.32N-370.94$ gives an R-square score of 0.93,
which has less fidelity than the exponential fit). Furthermore, the standard
deviations are shown in Fig. 6 by error bars, which indicate that the
proposed method's computation time is more consistent than the compared
method's.
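The linear and exponential fits above can be reproduced with ordinary least squares; the exponential model is fit on $\log t$ and its R-square is then evaluated on the original scale so the two fits are comparable. A sketch in Python/NumPy (an assumption; the function names are ours, and any data passed in would come from timing measurements):

```python
import numpy as np

def fit_linear(N, t):
    """Least-squares fit t ~ a*N + b; returns (a, b, R^2)."""
    A = np.vstack([N, np.ones_like(N)]).T
    (a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
    pred = a * N + b
    ss_res = np.sum((t - pred) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

def fit_exponential(N, t):
    """Fit t ~ c*exp(k*N) by least squares on log t; R^2 is computed
    on the original scale so it is comparable with the linear fit."""
    A = np.vstack([N, np.ones_like(N)]).T
    (k, logc), *_ = np.linalg.lstsq(A, np.log(t), rcond=None)
    c = np.exp(logc)
    pred = c * np.exp(k * N)
    ss_res = np.sum((t - pred) ** 2)
    ss_tot = np.sum((t - t.mean()) ** 2)
    return c, k, 1.0 - ss_res / ss_tot
```

Fitting both models to each method's timings and comparing the two R-square scores is exactly the comparison reported above.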
Figure 6: Average computation time in one iteration with error bars showing
the standard deviations.
The optimal costs of the two methods are shown in Fig. 7. The proposed method
either reaches a smaller cost or is within 20% of the cost obtained by the
compared method. Summarizing the comparison in Fig. 6 and Fig. 7, we conclude
that the proposed method can deliver similar performance in the planned
trajectory with the benefit of significantly reducing the computation time.
We further compare the two methods subject to larger safety sets. By expanding
the side length of safety sets from 0.6 m to 1.2 m and keeping all other
conditions unchanged as above, we display the average computation time in one
iteration and optimal cost of both methods in Fig. 6 and Fig. 7. The results
show that the computation time and the optimal cost of both methods decrease
with the expansion of the safety sets. Meanwhile, under larger safety sets,
the proposed method still generates trajectories of similar quality while
requiring significantly less computation time than the compared method.
Figure 7: Comparison of the optimal cost as the number of adjustable waypoints
increases.
### V-C Trajectory Optimization with Dynamic Constraints
We randomly select five waypoints, forming a path with a total length of 20 m.
Initially, we allocate 2 s to each trajectory segment. The start and end
points are fixed with velocity, acceleration, jerk, and snap set to 0. Three
adjustable waypoints are constrained in the cubic safety sets with 0.6 m side
lengths centered at the initial waypoints. Meanwhile, the maximum velocity and
acceleration are set to 5 m/s and 5 m/s$^2$, respectively, on the three
adjustable waypoints. The scenario is illustrated in Fig. 8.
Our method optimizes the time allocation and the augmented spatial assignment
for the three adjustable waypoints in the upper-level problem, whereas the
compared method updates only the time allocation in its upper-level problem
and solves for the adjustable waypoints' velocities and accelerations,
subject to the limits, in its lower-level problem. The cost reduction is
shown in Fig. 9, where both the proposed method and the compared method are
initialized with identical temporal assignments. Under the termination
condition of 3% relative cost reduction, the compared method terminates at
the 6th iteration, and the proposed method terminates at the 25th iteration.
Meanwhile, the proposed method's optimal cost is within 3.5% of the cost
obtained by the compared method.
Table IV shows the optimization results of the two methods. Although the
proposed method takes more iterations to terminate, its remarkable
per-iteration advantage in computation time still lets it plan a trajectory
in much less time than the compared method, while both methods achieve
similar optimality. The breakdown of the average computation time for one
iteration and the standard deviations are shown in Table V. It is clear that
the major gap between the two methods in computation time is caused by the
gap in solving the lower-level QP. For each adjustable waypoint, the compared
method must handle 18 inequality constraints, while the proposed method
delegates them to the upper-level problem and solves the lower-level problem
with equality constraints only, reducing the time consumption significantly.
Figure 8: An example with five random waypoints subject to dynamic constraints, where the three middle ones are adjustable.
Figure 9: Cost reduction through iterations with three adjustable waypoints subject to dynamic constraints.
TABLE IV: Comparison on the optimal costs and computation time till convergence when the dynamic constraints are present.
| Compared | Proposed
---|---|---
Optimal Cost | 1609.34 | 1662.36
Total Computation Time [ms] | 4342.73 | 362.75
TABLE V: Breakdown of the average computation time and its standard deviation for one iteration when the dynamic constraints are present. (U) and (L) associate with the upper- and lower-level problems, respectively.
Computation Time [ms] | Compared | Proposed
---|---|---
Gradients from OptNet (U) | 9.82$\pm$1.49 | 5.56$\pm$0.88
Update QP Parameters (U) | 1.73$\pm$0.21 | 2.41$\pm$0.61
Update Waypoints (U) | N/A | 0.35$\pm$0.03
Update Dynamic Constraints(U) | N/A | 0.38$\pm$0.04
Solving QP (L) | 608.84$\pm$19.54 | 5.81$\pm$0.29
Total | 620.39$\pm$21.24 | 14.51$\pm$1.85
## VI Conclusion
We present a novel method for UAV trajectory planning based on bilevel
optimization that simultaneously conducts spatial and temporal assignments.
Our bilevel optimization is composed of a lower-level problem that is a QP
with equality constraints only (the main contributor to the reduced
computation time) and an upper-level problem that uses analytical gradients
to update the spatial and temporal assignments of the adjustable waypoints.
Simulation results show that our method has a great advantage in computation
time over the existing bilevel optimization method: the former scales
linearly and the latter exponentially with the number of adjustable
waypoints. Future work will deploy our method on a real quadrotor to
demonstrate its capability for onboard and real-time trajectory planning in
complex environments.
## References
* [1] A. Gasparetto, P. Boscariol, A. Lanzutti, and R. Vidoni, “Path planning and trajectory planning algorithms: A general overview,” _Motion and Operation Planning of Robotic Systems_ , pp. 3–27, 2015.
* [2] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in _Proceedings of the IEEE International Conference on Robotics and Automation_ , 2011, pp. 2520–2525.
* [3] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-D complex environments,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 3, pp. 1688–1695, 2017.
* [4] F. Gao, W. Wu, J. Pan, B. Zhou, and S. Shen, “Optimal time allocation for quadrotor trajectory generation,” in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems_ , 2018, pp. 4715–4722.
* [5] W. Sun, G. Tang, and K. Hauser, “Fast UAV trajectory optimization using bilevel optimization with analytical gradients,” _IEEE Transactions on Robotics_ , vol. 37, no. 6, pp. 2010–2024, 2021.
* [6] J. Chen, T. Liu, and S. Shen, “Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments,” in _Proceedings of the IEEE International Conference on Robotics and Automation_. IEEE, 2016, pp. 1476–1483.
* [7] F. Gao, W. Wu, Y. Lin, and S. Shen, “Online safe trajectory generation for quadrotors using fast marching method and Bernstein basis polynomial,” in _Proceedings of the IEEE International Conference on Robotics and Automation_ , 2018, pp. 344–351.
* [8] B. Amos and J. Z. Kolter, “OptNet: Differentiable optimization as a layer in neural networks,” in _Proceedings of the International Conference on Machine Learning_. PMLR, 2017, pp. 136–145.
* [9] H. Kano and H. Fujioka, “B-spline trajectory planning with curvature constraint,” in _Proceedings of the American Control Conference_ , 2018, pp. 1963–1968.
* [10] C. Richter, A. Bry, and N. Roy, “Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments,” in _Robotics Research_. Springer, 2016, pp. 649–666.
* [11] A. Sinha, P. Malo, and K. Deb, “A review on bilevel optimization: From classical to evolutionary approaches and applications,” _IEEE Transactions on Evolutionary Computation_ , vol. 22, no. 2, pp. 276–295, 2018.
* [12] B. Landry, Z. Manchester, and M. Pavone, “A differentiable augmented Lagrangian method for bilevel nonlinear optimization,” in _Proceedings of Robotics: Science and Systems_ , 2019.
* [13] T. Stouraitis, I. Chatzinikolaidis, M. Gienger, and S. Vijayakumar, “Online hybrid motion planning for dyadic collaborative manipulation via bilevel optimization,” _IEEE Transactions on Robotics_ , vol. 36, no. 5, pp. 1452–1471, 2020.
* [14] R. Menasri, A. Nakib, B. Daachi, H. Oulhadj, and P. Siarry, “A trajectory planning of redundant manipulators based on bilevel optimization,” _Applied Mathematics and Computation_ , vol. 250, pp. 934–947, 2015.
* [15] F. Farshidian, M. Neunert, A. W. Winkler, G. Rey, and J. Buchli, “An efficient optimal planning and control framework for quadrupedal locomotion,” in _Proceedings of the IEEE International Conference on Robotics and Automation_. IEEE, 2017, pp. 93–100.
* [16] H. Pirnay, R. López-Negrete, and L. T. Biegler, “Optimal sensitivity based on IPOPT,” _Mathematical Programming Computation_ , vol. 4, no. 4, pp. 307–331, 2012.
* [17] S. Boyd, S. P. Boyd, and L. Vandenberghe, _Convex Optimization_. Cambridge University Press, 2004.
* [18] N. Parikh, S. Boyd _et al._ , “Proximal algorithms,” _Foundations and Trends® in Optimization_ , vol. 1, no. 3, pp. 127–239, 2014.
# Performance Evaluation, Optimization and Dynamic Decision in Blockchain
Systems: A Recent Overview
Quan-Lin Li1, Yan-Xia Chang1 and Qing Wang2
1School of Economics and Management
Beijing University of Technology, Beijing 100124, China
2Monash Business School, Monash University
900 Dandenong Road, Caulfield East, VIC, 3145, Australia Corresponding author:
Y. X. Chang<EMAIL_ADDRESS>
###### Abstract
With the rapid development of blockchain technology and its integration into
various application areas, performance evaluation, performance optimization,
and dynamic decision in blockchain systems are playing an increasingly
important role in developing new blockchain technology. This paper provides a
recent systematic overview of this class of research, and especially of
developments in the mathematical modeling and basic theory of blockchain
systems. Important examples include (a) performance evaluation: Markov
processes, queueing theory, Markov reward processes, random walks, fluid and
diffusion approximations, and martingale theory; (b) performance
optimization: linear programming, nonlinear programming, integer programming,
and multi-objective programming; (c) optimal control and dynamic decision:
Markov decision processes and stochastic optimal control; and (d) artificial
intelligence: machine learning, deep reinforcement learning, and federated
learning. So far, little research has focused on these lines of research. We
believe that the basic theory, together with the mathematical methods,
algorithms, and simulations of blockchain systems discussed in this paper,
will strongly support the future development and continuous innovation of
blockchain technology.
Keywords: Blockchain; Performance evaluation; Performance optimization;
Optimal control; Dynamic decision.
## 1 Introduction
Since Bitcoin was proposed by Nakamoto [93] in 2008, blockchain technology has
received tremendous attention from both practitioners and academics. So far,
blockchain has made remarkable progress by means of many interesting and
creative combinations of multiple key computer technologies, such as
distributed systems, consensus mechanism, network and information security,
privacy protection, encryption technology, peer-to-peer networks, edge
computing, Internet of Things, and artificial intelligence. At the same time,
some effective scalable frameworks and security designs of blockchain have
been further developed, for example, off-chain, side-chain, cross-chain,
shard, fault tolerant, and attack detection. However, compared with rapid
development of blockchain technology, mathematical modeling and analysis of
blockchain systems is relatively backward, thus it is clear that developing
blockchain technology extremely needs such important basic theory and
necessary mathematical methods.
In this paper, we review mathematical modeling and analysis methods for
several aspects (though not exhaustively) of blockchain technology, including
some important progress that can further drive potential new blockchain
technologies. To this end, our overview covers: (1) mining processes and
management; (2) consensus mechanisms; (3) performance evaluation; (4)
performance optimization; (5) optimal control and dynamic decision; (6)
machine learning; (7) the blockchain economy and market; and (8) blockchain
ecology. Note that the eight survey points aim at setting up stochastic
models and associated mathematical methods to theoretically improve
blockchain's performance, scalability, security, privacy protection, work
efficiency, and economic benefit. In what follows, we use Figures 1 to 8 to
briefly describe and analyze the eight survey points (1) to (8).
(1) Mining processes and management
For mathematical modeling and analysis in this research direction, we need to
discuss the key system factors or parameters that largely influence
performance, scalability, security, and privacy protection of blockchain
systems. For example, the miners, the mining pools, the difficulty of solving
the cryptographic puzzle, the transaction fee, the blockchain reward, the
competitive behavior, the tree with forked structure, the work efficiency, the
economic benefit; the attack strategies, the security, the vulnerability, the
fault tolerance, and privacy protection. See Figure 1 for more details.
Figure 1: The mining processes and management
(2) Consensus mechanism
For mathematical modeling and analysis in this research direction, we need to
discuss the random times to reach consensus under different consensus
protocols (or algorithms), such as PoW, PoS, DPoS, BFT, PBFT, and Raft.
Furthermore, we need to analyze blockchain systems under different consensus
protocols and to study their throughput, security, privacy protection, and
scalability. Our main concerns include a set of basic factors, such as
consensus types, efficiency, convergence, consistency, network delay, and
energy consumption. See Figure 2 for more details.
Figure 2: The consensus mechanism
(3) Performance evaluation
In this class of mathematical modeling and analysis, we need to set up
performance models of blockchain systems when considering different consensus
mechanisms or protocols or algorithms (PoW, PoS, DPoS, PBFT, DAG and so on),
different blockchain types (Bitcoin, Ethereum, side-chain, cross-chain, off-
chain and so on), and innovation and new network architectures of blockchain
systems. See Figure 3 for more details.
Figure 3: Performance modeling and analysis of blockchain systems
(4) to (6) Performance optimization, dynamic decision, and machine learning
In this class of mathematical modeling and analysis (4), we need to optimize
the performance measures of a blockchain system by means of linear
programming, nonlinear programming, integer programming, multi-objective
programming and so on.
In this class of mathematical modeling and analysis (5), we need to realize
optimal control and dynamic decision of a blockchain system by using the
Markov decision processes, sensitivity-based optimization, and stochastic
optimal control. See Figure 4 for more details.
For machine learning (6), we need to develop machine learning, deep
reinforcement learning, and federated learning. See Figure 4 for more details.
Figure 4: Performance optimization, dynamic decision, and machine learning
(7) Blockchain economy and market, and (8) blockchain ecology
For the blockchain economy and market (7) as well as the blockchain ecology
(8), readers may refer to Figures 5 and 6 for a simple introduction,
respectively.
Figure 5: The blockchain economy and market Figure 6: The blockchain ecology
With the fast development of blockchain, new blockchain technologies continue to
emerge. Thus, performance evaluation, performance optimization, optimal
control, and dynamic decision of blockchain systems become progressively more
important. At the same time, performance modeling and analysis methods remain
scarce and insufficient, especially for dealing with newly developed
blockchain technologies. Blockchain is a hierarchical, comprehensive database
that operates under a consensus mechanism of distributed systems in a
peer-to-peer network. In addition, blockchain is an interesting and creative
combination of multiple computer technologies, such as encryption techniques,
consensus mechanisms, security, privacy protection, and scalability, together
with wireless, mobility, cloud computing, edge computing, Internet of Things,
and quantum technologies. Therefore, a blockchain is always a complicated
stochastic system operating in a strongly practical environment. In this
situation, performance evaluation, performance optimization, and dynamic
decision of blockchain systems are interesting and challenging topics of
theoretical study.
So far, a few survey papers have discussed blockchain technology with a simple
introduction to performance analysis of blockchain systems; see Table 1 for
more details. From Table 1, it is easy to see that those surveys focus on
several key perspectives: performance, scalability, security, and privacy
protection.
Table 1: Survey papers for performance evaluation of blockchain systems Year | Surveys or reviews | Research scope
---|---|---
2018 | Kim et al. [61] | Scalability solutions
2019 | Rouhani and Deters [110] | Security, performance, and applications of smart contracts
2019 | Wang et al. [132] | Performance benchmarking tools; optimization methods
2019 | Zheng et al. [145] | Challenges and progresses in blockchain from a performance and security perspective
2020 | Smetanin et al. [118] | Effective simulation and modeling approaches
2020 | Singh et al. [117] | Side-chains for improving scalability, privacy protection, security of blockchain
2020 | Zhou et al. [146] | Scalability of blockchain
2020 | Yu et al. [143] | Sharding for blockchain scalability
2020 | Fan et al. [28] | Stochastic models for blockchain systems: Game theory, performance optimization, machine learning, etc.
2021 | Cao et al. [13] | Mathematical models for blockchain such as stochastic process, game theory, optimization, and machine learning
2021 | Huang et al. [52] | Performance models, and analysis tools of blockchain systems or networks
To broaden the scope of our survey research on performance evaluation,
performance optimization, optimal control, and dynamic decision of blockchain
systems, this paper draws on a collection of research materials from major
scientific journals, international conferences, and preprint sites, including
IEEE Xplore, the ACM Digital Library, Elsevier, SpringerLink, MDPI, arXiv, HAL,
and so on. Based on these materials, we provide a detailed review and
analysis of the literature on performance evaluation, performance
optimization, and dynamic decision of multiple blockchain systems, including
consensus mechanisms, protocols, and algorithms (PoW, PoS, DPoS, PBFT, DAG,
and so on), blockchain types (Bitcoin, Ethereum, side-chain, cross-chain,
off-chain, and so on), and new network architectures of blockchain. At the
same time, we show how to set up stochastic models and develop effective
methods and algorithms for performance evaluation, performance optimization,
optimal control, and dynamic decision. Note that such a study of blockchain
technology is interesting and challenging in both its basic theory and its
many practical applications.
Based on the above analysis, we summarize the main contributions of this paper
as follows:
* 1.
We provide a basic overview of the available mathematical methods (in
particular, stochastic analysis), which strongly support performance modeling
and computation in performance evaluation, performance optimization, optimal
control, and dynamic decision.
* 2.
We provide a clear outline and structure for performance evaluation and
performance optimization of blockchain systems. Important mathematical methods
and techniques include (a) performance evaluation: Markov processes, queueing
theory, Markov reward processes, random walks, fluid and diffusion
approximations, and martingale theory; and (b) performance optimization:
linear programming, nonlinear programming, integer programming, and
multi-objective programming.
* 3.
We summarize optimal control and dynamic decision of blockchain systems by
means of, for example, (c) Markov decision processes, sensitivity-based
optimization, and stochastic optimal control; and (d) machine learning, deep
reinforcement learning, and federated learning. These issues are interesting
and challenging, with great potential for future study.
The remainder of this paper is organized as follows. Section 2 reviews the
recent literature on performance evaluation of blockchain systems by means of
queueing theory, Markov processes, and Markov reward processes. Complementing
Section 2, Section 3 provides further methods for performance evaluation of
blockchain systems, for example, random walks, the fluid approximation, the
diffusion approximation, and martingale theory. Section 4 reviews performance
optimization of blockchain systems by means of linear programming, nonlinear
programming, integer programming, and multi-objective programming. Section 5
reviews applications of Markov decision processes to finding optimal dynamic
strategies of blockchain systems. Section 6 summarizes applications of machine
learning (e.g., deep reinforcement learning and federated learning) to
performance optimization and dynamic decision of blockchain systems. Section 7
offers some concluding remarks.
## 2 Performance Evaluation
In this section, we summarize performance evaluation models of blockchain
systems by means of queueing theory and Markov processes. Some other
mathematical methods for performance evaluation are left to the next section.
### 2.1 Queueing theory
Queueing theory is a key mathematical tool for defining performance measures
and carrying out performance evaluation of blockchain systems. Applying
queueing theory to performance analysis of blockchain systems is interesting
but challenging, since each blockchain system not only is a complicated
stochastic system but also involves multiple key factors and a physical
structure with different levels. Specifically, the key factors include (1)
transaction arrivals, (2) transaction fees, (3) block size, (4) network delay,
(5) the block-generation process (e.g., mining and voting processes), (6) the
pegging process of a block (or sub-chain), (7) mining competition among
multiple mining pools (e.g., a tree structure), (8) mining rewards, and (9)
the computing power distribution. The physical structure contains (1) the
consensus mechanism (e.g., PoW, PoS, DPoS, PBFT, and DAG) and (2) the scalable
structure (e.g., side-chain, cross-chain, and off-chain). The research
objectives for blockchain systems include, for example, (a) performance:
throughput and confirmation time; (b) security; (c) privacy protection; and
(d) scalability. These specific examples show that it is useful and necessary
to apply queueing theory to set up performance models and to analyze
performance measures in the study of blockchain systems.
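To give a concrete flavor of the models this section surveys, the sketch below simulates a toy batch-service queue: transactions arrive as a Poisson process, and each block, generated at exponentially distributed intervals, confirms up to a fixed number of pending transactions in FIFO order. All rates and the block size are illustrative assumptions, not parameters taken from any cited model.

```python
import random

def mean_confirmation_time(arrival_rate=2.0, block_rate=0.5,
                           block_size=10, horizon=50_000.0, seed=1):
    """Toy event-driven simulation of a batch-service blockchain queue:
    Poisson transaction arrivals, exponential block-generation times,
    and FIFO confirmation of up to block_size transactions per block.
    Returns the mean transaction-confirmation delay."""
    rng = random.Random(seed)
    now = 0.0
    next_arrival = rng.expovariate(arrival_rate)
    next_block = rng.expovariate(block_rate)
    pool, delays = [], []
    while now < horizon:
        if next_arrival < next_block:        # a transaction arrives
            now = next_arrival
            pool.append(now)                 # record its arrival time
            next_arrival = now + rng.expovariate(arrival_rate)
        else:                                # a block is generated
            now = next_block
            confirmed, pool = pool[:block_size], pool[block_size:]
            delays.extend(now - t for t in confirmed)
            next_block = now + rng.expovariate(block_rate)
    return sum(delays) / len(delays)
```

With the default parameters the offered load (2.0 transactions per time unit) is well below the confirmation capacity (block_rate × block_size = 5), so the queue is stable and the mean delay is dominated by the wait for the next block.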
Understanding a blockchain system and its physical structure is not always
simple. Li et al. [71] may have been the first to provide a simple diagram of
the physical structure of a PoW blockchain system with one miner (or mining
pool); see Figure 7 for more details.
Figure 7: A simple physical structure of the PoW blockchain system
For other blockchain systems (e.g., PoS, DPoS, BFT, PBFT, and Raft), Chang
et al. [17] provided a queueing platform to evaluate their performance
measures once the voting processes are determined by means of Markov modeling
techniques. Based on this, the first step is to study the voting processes,
and the second step is to set up a queueing platform in which the voting
processes are regarded as the service processes; see Figure 8 for more
details. In this queueing platform, one first needs to determine two random
variables, the block-generated time and the orphan-block-generated time, which
can be related to the arrival and service times in a queueing model
$\mathrm{M}\oplus\mathrm{M}^{\mathrm{b}}/\mathrm{M}^{\mathrm{b}}/1$ or
$\mathrm{M}\oplus\mathrm{PH}^{\mathrm{b}}/\mathrm{PH}^{\mathrm{b}}/1$.
Figure 8: A queueing platform of different blockchain systems
Kawase and Kasahara [59] may have been the first to apply queueing theory to
the PoW blockchain system with a single miner, and a further paper by Kasahara
and Kawahara [58] considered a single-server queue with batch service and a
priority mechanism to analyze the transaction-confirmation time. Because the
block-generation time (which also includes the block-pegged time) follows a
general probability distribution, the system of differential-difference
equations obtained in those two papers via the supplementary variable method
is unsolvable. For this reason, Li et al. [71] proposed a Markov queue with
two stages (the block-generated time and the block-pegged time) to analyze the
PoW blockchain system with a single miner. Li et al. [71] may be the first
paper to clearly describe and express the physical structure, with its
multiple key factors, of the PoW blockchain system, as seen in Figure 7. For
this two-stage queue, the matrix-geometric solution was applied to give a
complete solution of the system, so that performance evaluation of the PoW
blockchain system was established in a simple form and analyzed through
detailed numerical experiments. In later studies, Li et al. [72] relaxed the
model assumptions of Li et al. [71] to a more general case in which the
transaction arrivals form a Markovian arrival process (MAP) and the
block-generated and block-pegged times are of phase type (PH). As Li et al.
[72] showed, computing the mean transaction-confirmation time then becomes
very difficult because of the complicated blockchain structure.
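To convey the matrix-geometric machinery used in these papers, the sketch below computes the rate matrix R of a generic level-independent quasi-birth-and-death (QBD) process by fixed-point iteration. The 1x1 blocks at the end are a textbook M/M/1 sanity check, not the actual block structure of Li et al. [71].

```python
import numpy as np

def qbd_rate_matrix(A0, A1, A2, tol=1e-12, max_iter=100_000):
    """Solve A0 + R A1 + R^2 A2 = 0 for the minimal nonnegative rate
    matrix R of a level-independent QBD process, where A0, A1, A2 are
    the level-up, local, and level-down blocks of the generator."""
    R = np.zeros_like(A0, dtype=float)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_new = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_new - R)) < tol:
            return R_new
        R = R_new
    raise RuntimeError("fixed-point iteration did not converge")

# Sanity check: an M/M/1 queue is a QBD with 1x1 blocks, and R
# collapses to the scalar utilization rho = lam / mu.
lam, mu = 2.0, 5.0
R = qbd_rate_matrix(np.array([[lam]]),           # one level up (arrival)
                    np.array([[-(lam + mu)]]),   # local transitions
                    np.array([[mu]]))            # one level down (service)
```

In the stationary regime of a QBD, the level probability vectors then satisfy the geometric relation pi_{k+1} = pi_k R, which is what gives the "matrix-geometric solution" its name.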
Kawase and Kasahara [59] and Li et al. [71] have inspired a substantial stream
of later literature that uses queueing theory in performance evaluation of
blockchain systems. We list some of this literature as follows:
Geissler et al. [35] neglected the information propagation delays and assumed
the immediate distribution of transactions and blocks to all the peers in the
network. They developed a discrete-time queueing model that allows performance
evaluation of a blockchain system, such as the transaction waiting time
distribution.
Zhao et al. [144] regarded the mining process as a vacation and the block-
verification process as a service. Specifically, they established a non-
exhaustive queueing model with limited batch service and a possible zero-
transaction service, and derived the average number of transactions and the
average confirmation time of a transaction in the blockchain system.
Krieger et al. [64] proposed a Markovian non-purging $(n,k)$ fork-join
queueing model to analyze the delay time of the synchronization process among
the miners, where a vote-based consensus procedure is used.
Ahmad et al. [2] presented an end-to-end blockchain system for dealing with
the audit trail applications, and analyzed the time, space, consensus, search
complexity, and security of this blockchain system by using the queueing
theory.
Mišić et al. [91] applied the Jackson network model to the entire network, in
which each individual node operates as a priority M/G/1 queue, and developed
an analytical model for analyzing the Bitcoin’s blockchain network.
Frolkova and Mandjes [33] proposed a G/M/$\infty$-like Bitcoin queueing model
to consider the propagation delay between two individual users. Fralix [32]
provided a further discussion for the infinite-server queue introduced in
Frolkova and Mandjes [33].
Seol et al. [114] proposed an embedded Markov chain to analyze a blockchain
system with a specific interest in Ethereum.
He et al. [49] introduced a queueing model with priority to incorporate the
operational feature of blockchain, the interplay between miners and users, and
the security issue associated with the decentralized nature of the blockchain
system.
Gopalan et al. [39] analyzed the stability and scalability of the DAG-based
blockchain system by using queueing networks.
Fang and Liu [29] proposed a dynamic mining resources allocation algorithm
(DMRA) to reduce the mining cost in the PoW blockchain networks through using
the logical queueing-based analytical model.
Meng et al. [90] proposed a queueing model for studying the three stages of
the consortium blockchain consensus, analyzed the consistency properties of
consortium blockchain protocols, and provided performance evaluation for the
main stages of the blockchain consensus.
Sun et al. [126] proposed a queueing system with three service stages, which
expresses the three-stage consensus process of the RC-chain and the building
of a new block. Using this queueing model, they obtained three key performance
measures: the average number of transactions in the system, the average
transaction confirmation time, and the average transaction throughput.
Altarawneh et al. [3] set up a queueing model to compute the average waiting
time for the victim client transactions, and evaluated the security and
reliability of the blockchain system.
Wilhelmi et al. [135] proposed a batch-service queue model for evaluating the
network delay in a blockchain system. Furthermore, they provided some
simulations to assess the performance of the synchronous and asynchronous
mechanisms.
Ricci et al. [108] proposed a framework encompassing machine learning and a
queueing model M/G/1 to identify which transactions will be confirmed, and
characterized the confirmation time of confirmed transactions.
Li et al. [67] discussed a queueing game with a non-preemptive priority of a
blockchain system and considered both the miners’ mining rewards and the
users’ time costs.
Sukhwani et al. [125] presented a performance method of Hyperledger Fabric
v1.0+ by using a stochastic Petri net modeling (stochastic reward nets) to
compute the throughput, utilization, mean queue length at each peer, and the
critical processing stages within a peer.
For ease of reading, we summarize the queueing models of blockchain systems in
Table 2.
Table 2: The queueing models of blockchain systems Paper | Year | Queue type | Research scope
---|---|---|---
[125] | 2018 | Petri Nets model | Throughput; utilization; mean queue length at each peer; critical processing stages within a peer
[67] | 2018 | A queueing game | The miners’ mining rewards; the users’ time cost
[35] | 2019 | GI/GX/1 | Queue size; waiting time of transactions
[144] | 2019 | M/G${}^{X}\oplus$ G/1 | The average number of transactions; the average confirmation time of transactions
[64] | 2019 | Fork-join queue | The delay performance of the synchronization process among the miners
[2] | 2019 | M/D/c | The time, space, consensus, and search complexity; security
[91] | 2019 | Jackson network model; M/G/1 | Probability distributions of block and transaction distribution time; node response time; forking probabilities; network partition sizes; duration of ledger’s inconsistency period.
[108] | 2019 | M/G/1 | Identify which transactions will be confirmed; the confirmation time of confirmed transactions
[33] | 2019 | GI/M/$\infty$ | Propagation delay between two individual users
[32] | 2020 | Infinite-server queue | A further study of the infinite-server queue studied in [33]; related infinite-server queues have similar dynamics
[114] | 2020 | MX/MX/1 | The average number of slots; the average waiting time per slot; throughput
[49] | 2020 | M/MX/1 with priority | Users’ equilibrium behavior; total fee rate; confirmation latency; system equilibria
[39] | 2020 | Monotone separable queuing models | Stability and scalability of the DAG network
[29] | 2020 | Logical queueing-based analytical model | Mining resources allocation; mining cost; stability
[90] | 2021 | M/H2/1; M/M/1; M/Er/1 | The consistency and security of consortium blockchain protocols
[3] | 2021 | M/M/1; M/M/$\infty$ | The average waiting time for the victim client transactions; security; reliability
[126] | 2021 | Three-phase service queuing process | The average number of transactions; the average transaction confirmation time; the average transaction throughput.
[135] | 2021 | A novel batch-service queue model | The learning completion delay of blockchain-enabled federated learning; performance of synchronous and asynchronous mechanisms
[17] | 2022 | $\mathrm{M}\oplus\mathrm{M}^{\mathrm{b}}/\mathrm{M}^{\mathrm{b}}/1$ | Throughput of the dynamic PBFT blockchain system; the stationary rate (or probability) that a block is pegged on the blockchain; the stationary rate (or probability) that an orphan block is returned to the transaction pool
By means of queueing theory, some papers have conducted simulation and
empirical studies of blockchain systems. Important examples include Memon et
al. [89] and Spirkina et al. [122], who proposed queueing-theory-based
simulation models to understand the performance measures of blockchain
systems.
In the queueing models of blockchain systems, Bowden et al. [10] is a key work
because the block-generation time is related to the service time. They showed
that the generation time of a new block has some key statistical properties;
for example, it is non-exponential and can be affected by many physical
factors.
So far, many classes of blockchain systems still lack performance-evaluation
research based on queueing theory: for example, the PoW blockchain system with
multiple mining pools, the PBFT blockchain system with dynamic nodes,
DAG-based blockchain systems, Ethereum, and large-scale blockchain systems
with cross-chains, side-chains, or off-chains. Therefore, queueing models of
blockchain systems remain interesting and challenging for future study of
blockchain technology.
### 2.2 Markov processes and Markov reward processes
In performance evaluation of blockchain systems, Markov processes and Markov
reward processes are two effective mathematical methods; see Li [68] for a set
of Markov models and computational methods based on the RG-factorizations.
Note that Markov processes are used to evaluate the throughput, confirmation
time, security, and privacy protection of blockchain systems, while Markov
reward processes are applied to analyze the work efficiency, economic
benefits, and cost control of blockchain systems.
For the vulnerability and forked structure of PoW blockchain systems with
two mining pools (honest and dishonest), Eyal and Sirer [27] proposed a
selfish mining strategy for the competitive mining process between the two
pools, and set up a simple Markov process with a special reward structure to
discuss their competitive behavior. By means of an intuitive reward analysis,
they indicated that the selfish miner can win a higher mining reward by
violating the honest agreement of the blockchain system. However, Li et al.
[69] showed that the Markov process with rewards given in Eyal and Sirer [27]
is incorrect from the standpoint of the ordinary theory of Markov processes.
For a PoW blockchain system with two mining pools (honest and dishonest), Li
et al. [69] showed the competitive behavior between the two mining pools by
means of Figure 9.
Figure 9: The competitive behavior between the two mining pools
When the two block branches fork at a common tree root, let $I(t)$ and
$J(t)$ be the numbers of blocks mined by the honest and dishonest mining pools
at time $t$, respectively. It follows from Li et al. [69] that
$\left\{(I(t),J(t)):t\geq 0\right\}$ is a continuous-time Markov
process whose infinitesimal generator is given by
$Q=\left(\begin{array}{ccccccc}
Q_{\widetilde{\mathbf{0}},\widetilde{\mathbf{0}}} & Q_{\widetilde{\mathbf{0}},0} & Q_{\widetilde{\mathbf{0}},1} & & & & \\
Q_{0,\widetilde{\mathbf{0}}} & Q_{0,0} & Q_{0,1} & & & & \\
Q_{1,\widetilde{\mathbf{0}}} & & Q_{1,1} & Q_{1,2} & & & \\
B & & & A & C & & \\
B & & & & A & C & \\
\vdots & & & & & \ddots & \ddots
\end{array}\right).$
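Once such a generator is truncated to finitely many levels, its stationary distribution $\pi$ solves $\pi Q = 0$ subject to $\pi$ summing to one. The generic solver below illustrates this step on a small two-state generator; the blocks of the pyramidal process in [69] are of course more elaborate, but the same linear-algebra step applies to any finite truncation.

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution of a finite CTMC generator Q: solve
    pi Q = 0 with sum(pi) = 1, replacing one (redundant) balance
    equation with the normalization condition."""
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])  # balance eqs + normalization
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two-state example: rate 1 from state 0 to 1, rate 2 back,
# whose stationary distribution is (2/3, 1/3).
pi = stationary_distribution(np.array([[-1.0, 1.0],
                                       [2.0, -2.0]]))
```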
For the PoW blockchain system with two mining pools (honest and dishonest),
Gőbel et al. [37] set up a two-dimensional Markov process with network
propagation delay and provided performance evaluation of the PoW blockchain
system. Javier and Fralix [55] further discussed the two-dimensional Markov
process given by Gőbel et al. [37] and developed a new computational method.
Li et al. [69] set up a new two-dimensional pyramidal Markov (reward) process
of the blockchain system, which leads to a novel theoretical framework for
performance evaluation of a PoW blockchain system with adding new random
factors by means of a new class of matrix geometric solutions.
Using the Markov process approach of Eyal and Sirer [27], Nayak et al. [94]
introduced a new type of mining strategy, the stubborn mining strategy, and
established two extended forms: the equal-fork stubborn mining strategy and
the path stubborn mining strategy. Further important examples include Wang et
al. [133] and Liu et al. [80]. In addition, inspired by the Markov process
approach of Eyal and Sirer [27], the selfish mining strategy was extended to
the Ethereum system; see Grunspan and Pérez-Marco [46] and Niu and Feng [97]
for more details. The impact of selfish mining behavior by multiple mining
pools on the blockchain system has also received widespread attention; e.g.,
see Bai et al. [7], Bai et al. [8], Chang [16], Liu et al. [78],
Marmolejo-Cossío et al. [88], and Xia et al. [139].
Following the ordinary theory of Markov processes, we summarize some works
that use Markov processes or Markov reward processes to study other
interesting issues of blockchain systems as follows.
Song et al. [121] provided a Markov process theory for network growth
processes of DAG-based blockchain systems.
Chang et al. [17] applied a large-scale Markov process to study the dynamic-
PBFT blockchain system.
Carlsten [14] applied the Markov process to study the impact of transaction
fees on the selfish mining strategy of the blockchain.
Shi et al. [116] developed a new consensus protocol (Proof-of-Age, PoA) and
employed a continuous time Markov chain to show that the consensus protocol
can disincentivize the pooled mining.
Kiffer et al. [60] set up a Markov chain to analyze the consistency properties
of blockchain protocols.
Huang et al. [51] established a Markov process with an absorbing state to give
performance analysis of the raft consensus algorithm in private blockchains.
Ma et al. [86] established a two-dimensional Markov process to provide
performance evaluation of PBFT blockchain systems.
Srivastava [124] computed the transaction confirmation time of blockchain by
using a Markov model.
Li et al. [76] established a Markov process to analyze performance and
security of the IoT ledgers with a directed acyclic graph.
Li et al. [75] established the Markov process to study the block access
control mechanism in the wireless blockchain network.
Piriou and Dumas [99] constructed a Markov process to analyze the blockchain
system and developed a simulation model of blockchain technology.
Nguyen et al. [95] applied the Markov process and deep reinforcement learning
to study the task offloading problem in the mobile blockchain with privacy
protection.
Jofré et al. [56] established a Markov process to study the convergence rate
of blockchain mining games.
Together, these studies outline the critical role of Markov processes and
Markov reward processes in the performance evaluation of blockchain systems.
This remains a promising area for future study.
## 3 Further Methods for Performance Evaluation
In this section, we summarize further methods for performance evaluation of
blockchain systems, including the random walk, the fluid approximation, the
diffusion approximation, and the martingale theory.
### 3.1 The random walk
The random walk is a key mathematical method for analyzing many stochastic
models, such as queueing systems and information and communication technology
(ICT) systems. See Spitzer [123], Prabhu [102], and Xia et al. [138] for more
details.
Recently, a few papers have studied blockchain systems by using random walks,
especially for analyzing double-spending attacks on blockchain.
Goffard [38] refined a random walk model underlying the double-spending
problem and provided a fraud risk assessment of the blockchain system.
In contrast with Goffard’s model [38], Jang and Lee [54] proposed a new random
walk model to further study the probability distribution of the catch-up time
needed for the fraudulent chain to catch up with the honest chain, taking
block confirmation into account. They discussed the profitability of
double-spending attacks that manipulate a previously mined transaction in a
blockchain system.
Brown et al. [11] studied the duration and probability of success of a double-
spend attack in terms of the random walk.
Grunspan and Pérez-Marco [45] used a random walk to determine the minimal
number of confirmations the recipient should request so that the double-spend
strategy is non-profitable.
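The random-walk treatment of double spending goes back to the gambler's-ruin calculation in the Bitcoin whitepaper, which the sketch below reproduces: it returns the probability that an attacker controlling a fraction q of the hash rate ever overtakes the honest chain once the recipient has waited for z confirmations. This is the classical baseline that papers such as [38], [54], and [45] refine.

```python
from math import exp, factorial

def double_spend_probability(q, z):
    """Nakamoto's gambler's-ruin estimate of double-spend success:
    q is the attacker's share of the hash rate and z the number of
    confirmations the recipient waits for."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # the attacker's random walk drifts in her favor
    # Expected number of attacker blocks mined while the honest
    # chain produces its z confirmation blocks.
    lam = z * q / p
    p_fail = 0.0
    for k in range(z + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        # With k attacker blocks already mined, the remaining deficit
        # of z - k is ever closed with probability (q/p)^(z - k).
        p_fail += poisson * (1.0 - (q / p) ** (z - k))
    return 1.0 - p_fail
```

For example, with q = 0.1, waiting six confirmations already pushes the attacker's success probability below 0.1%, which is why z = 6 is the customary Bitcoin rule of thumb.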
### 3.2 The fluid and diffusion approximations
The fluid and diffusion approximations are two key mathematical methods for
analyzing stochastic models with general random variables, such as queueing
systems, inventory models, supply chains, and communication networks. They
describe deterministic processes that approximate the evolution of stochastic
processes; that is, they analyze the evolution of general stochastic processes
through weak limits. Recently, fluid and diffusion approximations have been
widely used to analyze large-scale complex networks characterized by expanding
scale, complex structure, and dynamic state. See Chen and Yao [20], Whitt
[134], Dai et al. [24], Büke and Chen [12], and Chen and Shanthikumar [18]
for more details.
So far, fluid and diffusion approximations have been applied to the analysis
of blockchain systems. Important examples include Frolkova and Mandjes [33],
who developed a Bitcoin-inspired infinite-server model by means of a random
fluid limit; King [62], who proposed a fluid approximation of a random graph
model and discussed the related technologies of shared and distributed ledgers
in blockchain systems; Ferraro et al. [31], who studied the stability of
unverified transaction systems in DAG-based distributed ledgers by means of
the fluid approximation; and Koops [63], who applied the diffusion
approximation to predict the confirmation time of Bitcoin transactions.
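As a minimal illustration of the diffusion-approximation flavor (not the specific model of Koops [63]), Kingman's heavy-traffic formula approximates the mean waiting time of a G/G/1 queue from just the first two moments of the interarrival and service times; for an M/M/1 queue it recovers the exact value.

```python
def kingman_mean_wait(rho, mean_service, ca2, cs2):
    """Kingman's heavy-traffic (diffusion) approximation of the mean
    waiting time in a G/G/1 queue: rho is the utilization, and ca2,
    cs2 are the squared coefficients of variation of the interarrival
    and service times, respectively."""
    return rho / (1.0 - rho) * (ca2 + cs2) / 2.0 * mean_service

# For M/M/1 (ca2 = cs2 = 1) the approximation is exact:
# E[W] = rho / (mu * (1 - rho)).
lam, mu = 3.0, 5.0
w = kingman_mean_wait(lam / mu, 1.0 / mu, 1.0, 1.0)
```

The attraction of such formulas for blockchain models is that they need only two moments of the block-generation time, which, as noted in Section 2, is typically non-exponential.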
A few blockchain works analyze the evolution of general stochastic processes
through weak limits. For example, Corcino et al. [23] discussed the mean
square displacement of the daily fluctuations of Bitcoin unit prices over time
by applying Brownian motion and Gaussian white noise analysis. Chevallier et
al. [21] used a Lévy jump-diffusion Markov switching model to study the price
fluctuation characteristics of Bitcoin.
For fluid and diffusion approximations of blockchain systems, it is
interesting and challenging to study PoW blockchain systems with multiple
mining pools; see Li et al. [70] for a general tree representation of
complicated mining competition among multiple mining pools. Since fluid and
diffusion approximations can also support performance evaluation of blockchain
systems, there is great potential for innovation in the future research of
many blockchain systems (e.g., PoS, DPoS, PBFT, and DAG).
### 3.3 The martingale theory
Martingale theory not only enriches the contents of probability theory but
also provides a powerful method for studying stochastic processes and
stochastic models; it is widely applied in economics, networks, decision, and
control. Since 2018, Grunspan and Pérez-Marco have applied martingale theory
to study the profits of miners under different attacks on blockchain systems.
Research on common attacks on blockchain systems using martingale theory is
summarized in Table 3.
Table 3: Research on attacks of blockchain by using martingale theory Year | Attack type | Research scope | Method or theory
---|---|---|---
2018 | Selfish mining [40] | Expected duration of attack cycles; the profitability model by using repetition games; improvement of Bitcoin protocol; the miner’s attraction to the selfish mining pools | Martingale theory; Doob stopping time theorem
2018 | Stubborn mining [41] | The profitabilities of stubborn mining strategies | Martingale theory; Catalan numbers and Catalan distributions
2018 | Trailing mining [42] | The revenue ratio of the trail stubborn mining strategy in the Bitcoin network; the profitability of other block-withholding strategies | Martingale theory; classical analysis of hiker problems
2020 | SM; LSM; EFSM and so on [43] | The profitabilities of various mining strategies | Martingale theory; Markov chains; Dyck words
2020 | SM, intermittent SM and smart mining [44] | The closed forms for the profit lag; the revenue ratio for the strategies “selfish mining” and “intermittent selfish mining” | Martingale theory; foundational set-up from previous companion article
2021 | Nakamoto double spend [45] | The exact profitability of the Nakamoto double-spend strategy; the minimal number of confirmations to be requested by the recipient such that this double-spend strategy is non-profitable | Martingale theory; gambler’s ruin; random walk
## 4 Performance Optimization
In this section, we provide an overview of performance optimization of
blockchain systems using different optimization methods.
Performance optimization means optimizing the performance measures of
blockchain systems by means of mathematical programming (e.g., linear
programming, nonlinear programming, integer programming, and multi-objective
programming). It comprises four elements: the optimization problem, the
optimization variables, the objective functions, and the constraints. The
optimization process must accomplish the following task: subject to the
constraints, adjust the optimization variables so that the objective functions
attain a maximum or a minimum.
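A stylized example of the linear-programming step, assuming SciPy is available: we choose a hypothetical block size b and propagation delay d to maximize a toy throughput objective 10b - 2d, subject to an invented propagation constraint d >= 0.5b and simple bounds. All coefficients are illustrative and not taken from any cited model.

```python
from scipy.optimize import linprog

# Decision variables: x = (b, d) = (block size, propagation delay).
# Maximize 10*b - 2*d, i.e., minimize -10*b + 2*d.
c = [-10.0, 2.0]
# Invented coupling: larger blocks propagate slower, d >= 0.5*b,
# written as 0.5*b - d <= 0 in linprog's A_ub @ x <= b_ub form.
A_ub = [[0.5, -1.0]]
b_ub = [0.0]
bounds = [(0.0, 8.0),   # block size capped at 8 units
          (0.0, 5.0)]   # delay capped at 5 units
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
# Optimum: b = 8, d = 4, toy throughput = 72.
```

The example shows the four elements side by side: the problem (maximize throughput), the variables (b, d), the objective function (10b - 2d), and the constraints (the coupling inequality and the bounds).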
Performance optimization is necessary and important in the study of blockchain
systems, including their design, organization, control, and management. Such a
study strongly supports the overall development of both theoretical research
and practical applications of blockchain technology.
So far, performance optimization of blockchain systems has been studied in at
least three aspects as follows:
(1) From consensus mechanism and network architecture of blockchain systems,
it is interesting to optimize performance (e.g., throughput and confirmation
time), work efficiency, economic benefit; improve scalability, security,
privacy protection and degree of decentralization; and balancing operations
costs and efficiency, and allocation of profits. Important examples include
Lundbaek and D’Iddio [83], Liang [77], Nguyen et al. [96], Wang et al. [131],
Saad et al. [111], Reddy and Sharma [107], Leonardos et al. [65], Liu et al.
[79], Varma and Maguluri [128], and Li et al. [66].
(2) Starting from some key factors (e.g., operations costs, pricing, computing power, transaction fees, and network delay) of PoW blockchain systems, it is necessary to consider the optimal strategies of dishonest miners, for example: How to pack a transaction package from a transaction pool? How to attract honest miners into the dishonest mining pool? How to incentivize the dishonest miners to keep mining in a round of competition? How to maximize miners’ economic benefit or work efficiency? Important examples include Kang et al. [57], Aggarwal et al. [1], Ramezan et al. [106], and Liu et al. [79].
Table 4: Performance optimization of blockchain systems
Proposed for | Optimization scope | Optimization factors | Methods
---|---|---|---
Governed blockchains [83] | Solving the MINLP optimization problems for computing optimal Proof of Work configuration parameters that trade off potentially conflicting aspects such as availability, resiliency, security, and cost | Expected availability; resiliency; security; cost | Mixed integer nonlinear programming
A new system [77] | Re-innovating all the core elements of the blockchain technology to achieve the best balance among scalability, security and decentralization | Transaction confirmation time; information propagation latency | Min-max optimization
Users, miners, and verifiers [57] | Considering the tradeoff between the network delay of block propagation process and offered transaction fee from the blockchain user to jointly maximize utility of the blockchain user and individual profit of the miners | Network delay of block propagation process; offered transaction fee from the blockchain user | Nonlinear programming
A new sharding paradigm [96] | Proposing OptChain that can minimize transactions and maintain a temporal balance among shards to improve the confirmation time and throughput | Confirmation time; transaction throughput; cross-shard transactions minimization; temporal balancing | Nonlinear programming
A new dynamic routing solution [131] | Proposing a new dynamic routing solution Flash to strike a better tradeoff between path optimality and probing overhead | Payment size; transaction fees; probing overhead; transaction throughput | Linear programming
Miners [1] | Demonstrating BTC’s robust stability and finding that the implemented design of emergency difficulty adjustment resulted in maximal miners’ profits | Coinbase reward; competition cost reward; transaction fees; competition cost fees; mining cost; waiting cost; switching incentive; miners’ profits | Mixed integer nonlinear programming
A new form of attacks [111] | Studying a new form of attacks that can be carried out on the memory pools and proposing countermeasures that optimize the mempool size and help in countering the effects of DDoS attacks | Attack cost; relay fee; mining fee; mempool size | Nonlinear programming
PoW blockchain and blockDAG [107] | Proposing two models to scale the transaction throughput | Block creation rate; transaction throughput; main chain block growth rate; propagation delay; risk | Nonlinear programming
PoS protocols [65] | Leveraging weighted majority voting rules that optimize collective decision making to improve the efficiency and robustness of the consensus mechanism | Validators’ voting behavior; blockchain rewards; collective decision; collective welfare | Mixed integer nonlinear programming
A new pricing mechanism [109] | Presenting a pricing mechanism that aligns incentives of agents who exchange resources on a decentralized ledger to greatly increase transaction throughput with minimal loss of security | Transaction pricing; expected transaction efficiency; block assembly; transaction throughput; security | Integer linear programming
Miners [106] | Determining how miners should pick transactions from a transaction pool to minimize the average waiting time per transaction | Average waiting time per transaction | Mixed integer nonlinear programming
Enterprises and users [147] | Choosing the most effective platform from many blockchains to control costs and share data | Technical, market and popularity indicators; improved global DEA-Malmquist measure | Nonlinear programming
Lightning and Spider network [128] | Setting up a two-sided queue model and propose a throughput optimal algorithm that stabilizes the system under any load within the capacity region | Transaction throughput; arrival rate; capacity region; payment requests | Linear programming
A new protocol [66] | Proposing EntrapNet protocol and optimize EntrapNet to deal with the fundamental tradeoff between security and efficiency | Security; efficiency | Nonlinear programming
Protocol designer, users, and miners [79] | Proposing a Fee and Waiting Tax (FWT) mechanism to improve the incentives for the miners’ participation and blockchain security, and to mitigate blockchain insufficient fee issue | Storage costs of miners; users’ transaction fee; fee choices and waiting tax for users; transaction waiting time | Multi-objective programming
(3) For users or enterprises dealing with pricing, costs, transaction fees, and platform selection, how should user (or enterprise) utility be maximized? Important examples include Kang et al. [57], Riehl and Ward [109], Zhou et al. [147], Varma and Maguluri [128], and Liu et al. [79].
Based on the above analysis, we summarize performance optimization of blockchain systems in Table 4. It can be seen from Table 4 that most research on performance optimization of blockchain systems focuses on the following issues:
(i) Does there exist a better network architecture or consensus mechanism that makes the blockchain system more efficient, secure, and scalable?
(ii) Is there a better application scenario that makes blockchain more consistent and less wasteful of resources?
(iii) Is there a more effective economic incentive mechanism that makes blockchain more profitable and lowers the costs of operations, verification, and communication?
(iv) Is there a better trading platform and a more favorable market environment that make blockchain more usable and more credible to its users?
In a word, performance optimization of blockchain systems is an interesting and active frontier research topic, with large scope for innovation across a broad range of blockchain systems (e.g., consensus mechanisms and network architectures), for example: cross-chain, side-chain, off-chain, and interoperability of information and assets among different chains; data synchronization and data security; pricing, cost, economic benefit, and work efficiency; and scalability, security, and privacy protection.
## 5 Markov Decision Processes
In this section, we apply Markov decision processes (MDPs) to the study of blockchain systems and provide some algorithms for computing the optimal dynamic policy of such processes. For background on Markov decision processes, readers may refer to Puterman [103] and Li et al. [73].
Markov decision processes are widely applied to selfish mining attacks in PoW blockchain systems, because the selfish mining process must choose a sequence of mining policies in order to maximize its reward or minimize its cost.
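As a concrete (and deliberately over-simplified) illustration, the sketch below runs value iteration on a toy discounted MDP loosely inspired by selfish mining: the state is the attacker's private lead (capped at 2), and the actions are to keep withholding ("wait") or to publish the private chain ("publish"). The attacker's success probability alpha = 0.35, the rewards, and the discount factor are all made-up numbers; real selfish-mining MDPs (e.g., Sapirshtein et al. [113]) use an average-reward criterion and a much richer state space.

```python
# Toy discounted MDP (all numbers hypothetical). State = attacker's
# private lead in {0, 1, 2}; "publish" releases the private chain for a
# reward equal to the lead and resets to 0; "wait" keeps withholding:
# with prob. ALPHA the attacker extends the lead (capped at 2), otherwise
# the honest chain catches up and the withheld blocks are lost.
ALPHA, GAMMA = 0.35, 0.9
STATES, ACTIONS = (0, 1, 2), ("wait", "publish")

def transitions(state, action):
    """Return a list of (probability, next_state, reward) triples."""
    if action == "publish":
        return [(1.0, 0, float(state))]
    return [(ALPHA, min(state + 1, 2), 0.0), (1.0 - ALPHA, 0, 0.0)]

def q_value(V, state, action):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions(state, action))

# Standard value iteration: repeatedly apply the Bellman optimality operator.
V = {s: 0.0 for s in STATES}
for _ in range(500):
    V = {s: max(q_value(V, s, a) for a in ACTIONS) for s in STATES}

policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
print(policy)  # in this toy model it is optimal to publish whenever the lead >= 1
```

The same fixed-point computation, with an average-reward criterion and a state space tracking both chains, is what the selfish-mining papers cited above solve at scale.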
Consider a PoW blockchain system in which two different miners or mining pools (honest and dishonest) compete for mining rewards, and the dishonest miner may adopt different mining policies under the longest-chain rule. The dishonest miner can control the fork structure of the block tree by releasing some of his mined blocks so as to maximize his benefit. Accordingly, an interesting topic is how the dishonest miner finds an optimal mining policy (i.e., how many mined blocks to release in a round of competition). Important examples include Sapirshtein et al. [113], Sompolinsky and Zohar [120], and Gervais et al. [36], who introduced four actions for selfish miners (adopt, override, match, and wait) and determined the optimal selfish mining policy.
Zur et al. [148] studied the optimal selfish mining policy of the PoW
blockchain system by using the Markov decision process and proposed a new
method to solve the Markov decision process with an average reward criterion.
Bai et al. [8] applied Markov processes to study a PoW blockchain system with multiple miners and used a Markov decision process with observable information to find the optimal selfish mining policy in a special case with two different miners.
Li et al. [74] discussed the PoW blockchain system by using the hidden Markov
decision process and proposed an improved selfish mining policy.
Ma and Li [87] analyzed the optimal selfish mining policy of a PoW blockchain system with two mining pools by using sensitivity-based optimization theory.
In addition, the Markov decision processes are also applied to deal with other
blockchain control issues as follows:
Niu et al. [98] provided an incentive analysis of the Bitcoin-NG protocol by using a Markov decision process and showed that the Bitcoin-NG protocol remains incentive-compatible under mining attacks.
Wüst [137] used the Markov decision process to study the data security in the
blockchain system.
Chicarino et al. [22] discussed the selfish mining inspection and tracking
attacks in the PoW blockchain network by means of the Markov decision
processes.
## 6 Machine Learning
In this section, we summarize the applications of machine learning (e.g., deep
reinforcement learning and federated learning) to performance optimization and
dynamic decision of blockchain systems.
Recently, machine learning (e.g., deep reinforcement learning and federated learning) has been applied to study performance optimization and dynamic decision of blockchain systems. Since the Markov decision process of a blockchain system is often quite complicated, it is difficult and challenging to find its optimal policy, whereas machine learning can provide an approximate solution for such an optimal policy. Therefore, it is of interest to develop approximate methods or algorithms that find the optimal policy by using artificial intelligence, machine learning, deep reinforcement learning, and federated learning.
Survey papers: Liu et al. [81] surveyed the recent literature in which blockchain technology is analyzed by means of machine learning, and discussed several interesting directions along this research line. Ekramifard et al. [26] provided a systematic overview of applications of artificial intelligence to the study of blockchain systems, including Markov decision processes and machine learning. Chen et al. [19] applied machine learning to performance optimization and dynamic decision of blockchain systems and proposed several interesting topics for future research. Shafay et al. [115] reviewed the recent literature on applications of deep reinforcement learning to the development of blockchain technology.
In what follows, we summarize recent research on applications of machine learning to the study of blockchain systems from several different aspects: the mining policy, mobile-edge computing, and the Internet of Things (including the Industrial Internet of Things).
The mining policy: Considering the optimal policy of selfish mining attacks in Bitcoin as well as the Nash equilibrium in block withholding attacks, Hou et al. [50] proposed the SquirRL framework, which applies deep reinforcement learning to analyze the impact of attacks on the incentive mechanism of PoW blockchains. Bar-Zur [9] used reinforcement learning to find the optimal policy for miners of different sizes by solving a Markov decision process with an average-reward criterion. Wang et al. [130] applied reinforcement learning to find the optimal mining policy in Bitcoin-like blockchains and designed a new multi-dimensional reinforcement learning algorithm to solve the mining MDP problem with a nonlinear objective function (rather than the linear objective function of standard MDP problems).
When the growth of a PoW blockchain is modeled as a Markov decision process, a learning agent needs to make optimal decisions over all states of the Markov environment at every moment. To track the generation of new blocks and their verification process (i.e., solving the mathematical puzzles), You [141] formulated the PoW consensus protocol as a reinforcement learning problem, in which the verification and generation of new blocks are designed as a deep reinforcement learning iterative process.
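To make the connection between such mining MDPs and reinforcement learning concrete, the sketch below applies tabular Q-learning (the simplest model-free method, not the deep or multi-dimensional algorithms of the papers above) to a toy withholding problem: the state is the attacker's private lead capped at 2, "publish" yields a reward equal to the lead and resets it, and "wait" either extends the lead with probability 0.35 or loses it. All numbers are hypothetical; the point is only that the agent learns a policy from sampled transitions without knowing the transition probabilities.

```python
import random

# Toy withholding environment (hypothetical numbers): state = private
# lead in {0, 1, 2}; "publish" pays the lead and resets it; "wait"
# extends the lead with prob. ALPHA, otherwise the lead is lost.
ALPHA, GAMMA, LR, EPS = 0.35, 0.9, 0.05, 0.3
ACTIONS = ("wait", "publish")

def step(state, action, rng):
    if action == "publish":
        return 0, float(state)            # next state, reward
    if rng.random() < ALPHA:
        return min(state + 1, 2), 0.0     # the private lead grows
    return 0, 0.0                         # the honest chain catches up

rng = random.Random(0)
Q = {(s, a): 0.0 for s in (0, 1, 2) for a in ACTIONS}
state = 0
for _ in range(200_000):
    # epsilon-greedy behavior: explore with prob. EPS, otherwise act greedily
    if rng.random() < EPS:
        action = rng.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action, rng)
    # standard Q-learning update toward the sampled Bellman target
    target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += LR * (target - Q[(state, action)])
    state = next_state

learned = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (0, 1, 2)}
print(learned)
```

The deep reinforcement learning approaches surveyed above replace this small Q table with a neural network so that much larger mining state spaces become tractable.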
Mobile-edge computing: Nguyen et al. [96] applied the Markov processes and
deep reinforcement learning to study the task offloading problem of mobile
blockchain under privacy protection. Qiu et al. [105] formulated the online
offloading problem as a Markov decision process and proposed a new model-free
deep reinforcement learning-based online computation offloading approach for
the blockchain-empowered mobile edge computing, in which both the mining tasks
and the data processing tasks are considered. Feng et al. [30] developed a
cooperative computation offloading and resource allocation framework for the
blockchain-enabled mobile-edge computing systems and designed a multi-
objective function to maximize the computation rate of mobile-edge computing
systems and the transaction throughput of the blockchain systems by means of
the Markov decision processes.
Asheralieva and Niyato [4] developed a hierarchical learning framework by
means of the Markov decision processes with the service provider and the
miners and studied resource management of edge computing to support the public
blockchain networks. By applying the Markov decision process, Asheralieva and
Niyato [5] presented a novel game-theoretic, Bayesian reinforcement learning
and deep learning framework to represent the interactions among the miners for
the public and consortium blockchains with mobile edge computing. Yuan et al.
[142] applied the Markov decision processes and deep reinforcement learning to
study the sharding technology for the blockchain-based mobile edge computing.
Internet of Things: Waheed et al. [129] provided a summary of the security and
privacy protection of blockchain technology in the Internet of Things by using
machine learning algorithms. Gao et al. [34] studied the task scheduling of
the mobile blockchain supporting applications of the Internet of Things by
means of deep reinforcement learning and Markov decision processes.
Industrial Internet of Things: Qiu et al. [104] and Luo et al. [84] studied
the blockchain-based software-defined Industrial Internet of Things by means
of a dueling deep Q-learning approach and the Markov decision processes. Yang
et al. [140] studied the energy-efficient resource allocation for the
blockchain-enabled Industrial Internet of Things by deep reinforcement
learning and Markov decision processes. Wu et al. [136] provided a review for
the deep reinforcement learning applied to the blockchain systems in the
Industrial Internet of Things.
## 7 Concluding Remarks
Since Nakamoto [93] proposed Bitcoin in 2008, research on blockchain has attracted tremendous attention from both theoretical research and engineering applications. With the fast development of blockchain technology, many imaginative application modes have emerged through a series of innovative combinations of distributed data storage, point-to-point transmission, consensus mechanisms, encryption techniques, network and data security, privacy protection, and other computer technologies. Their subversive and imaginative features can further inspire endless technological innovations in blockchain. Among them, the most representative technologies, such as the timestamp-based chain block structure, the DAG-based network data structure, distributed consensus mechanisms, consensus-based economic incentives, and flexible and programmable smart contracts, have greatly enriched various practical applications. Important examples include the digital economy [15], Fintech [92], cloud services [47], reputation systems [25], social security [127], e-commerce supply chains [82], artificial intelligence [53], the sharing economy [48], and supply chain management [112].
Performance evaluation, performance optimization, and dynamic decision are among the most basic theoretical research topics for blockchain systems, and they play a key role in the design, control, stability, improvement, and applications of blockchain systems. So far, some blockchain pitfalls (e.g., low performance and scalability, weak security and privacy protection, and inconvenient interoperability among blockchain subsystems) have severely limited the wide application of blockchain technology. To resolve these pitfalls, a number of technologies and methods have been proposed and developed, e.g., off-chain [100], side-chain and cross-chain [6], sharding [85], and DAG [101]. However, whether these novel technologies and methods can effectively remedy the pitfalls of blockchain systems is a key question that still needs to be studied sufficiently by means of rigorous mathematical analysis. On the other hand, it is an interesting topic to establish useful mathematical relationships among performance, scalability, security, privacy protection, and so forth. Some intuitive examples: increased security may result in lower throughput; increased scalability may result in higher throughput; increased security may result in stronger privacy protection. Note that such mathematical relationships can be established through research on performance evaluation, performance optimization, and dynamic decision of blockchain systems.
It is easy to understand that practical applications push forward the innovation boundary of blockchain technology; that is, blockchain applications are a main driving force of blockchain technology development. When a new application of blockchain technology is launched, the interface between technology and application is not always friendly, the performance and stability are not always high, and there are also deficiencies in security and privacy protection. Note that all the necessary improvements and increasing maturity require plentiful research on performance evaluation, performance optimization, and dynamic decision of blockchain systems. In addition, for the current blockchain technology, we need to actively create a social atmosphere and ecological environment for both theoretical research and practical applications of blockchain. This can also powerfully promote deep integration between blockchain technology and key information technologies (such as artificial intelligence, big data, and the Internet of Things).
For a large-scale blockchain system or a new blockchain technology, it is key to find the bottleneck by analyzing the vulnerability and fault tolerance of the network architecture by means of the new mathematical theory and methods developed in research on performance evaluation, performance optimization, and dynamic decision of blockchain systems. This motivated us to provide in this paper a recent systematic overview of performance evaluation, performance optimization, and dynamic decision of blockchain systems, which involves mathematical modeling and the basic theory of blockchain systems. Important examples include (a) performance evaluation: Markov processes, queueing theory, Markov reward processes, random walks, fluid and diffusion approximations, and martingale theory; (b) performance optimization: linear programming, nonlinear programming, integer programming, and multi-objective programming; (c) optimal control and dynamic decision: Markov decision processes and stochastic optimal control; and (d) machine learning: deep reinforcement learning and federated learning. We believe that the new basic theory, together with the mathematical methods, algorithms, and simulations discussed in this paper, will strongly support the future development and continuous innovation of blockchain technology.
Based on the above analysis, we believe that there are still many interesting research directions to be explored, such as smart contracts; DAG-based blockchain; cross-chain, side-chain, off-chain, and other network architectures; and some basic or new consensus protocols. Our future research includes:
– Developing effective methods to compute and improve performance, stability,
and scalability of blockchain systems.
– Setting up a mathematical theoretical framework for security and privacy
protection of blockchain systems.
– Providing effective methods to optimize and dynamically control performance,
security and privacy protection of large-scale blockchain systems.
– Developing machine learning for performance optimization and dynamic
decision of blockchain systems.
– Developing a healthy ecological environment and reasonable operations
management in the blockchain community by means of research on performance
evaluation, performance optimization, and dynamic decision of blockchain
systems.
## References
* [1] Aggarwal V., Tan Y. (2019). A structural analysis of bitcoin cash’s emergency difficulty adjustment algorithm. Available at SSRN 3383739, pp. 1-36.
* [2] Ahmad A., Saad M., Njilla L., Kamhoua C., Bassiouni M., Mohaisen A. (2019). Blocktrail: A scalable multichain solution for blockchain-based audit trails. In The 2019 IEEE International Conference on Communications, IEEE, pp. 1-6.
* [3] Altarawneh A., Sun F., Brooks R. R., Hambolu O., Yu L., Skjellum A. (2021). Availability analysis of a permissioned blockchain with a lightweight consensus protocol. Computers & Security, 102: 102098.
* [4] Asheralieva A., Niyato D. (2019). Learning-based mobile edge computing resource management to support public blockchain networks. IEEE Transactions on Mobile Computing, 20(3): 1092-1109.
* [5] Asheralieva A., Niyato D. (2020). Bayesian reinforcement learning and bayesian deep learning for blockchains with mobile edge computing. IEEE Transactions on Cognitive Communications and Networking, 7(1): 319-335.
* [6] Back A., Corallo M., Dashjr L., Friedenbach M., Maxwell G., Miller A., Poelstra A., Timón J., Wuille P. (2014). Enabling blockchain innovations with pegged sidechains. 72, pp. 201-224. Online, available: http://www.opensciencereview.com/papers/123/enablingblockchain-innovations-with-pegged-sidechains
* [7] Bai Q., Zhou X., Wang X., Xu Y., Wang X., Kong Q. (2019). A deep dive into blockchain selfish mining. In The 2019 International Conference on Communications, pp. 1-6.
* [8] Bai Q., Xu Y., Liu N., Wang X. (2021). Blockchain mining with multiple selfish miners. arXiv preprint arXiv:2112.10454, pp. 1-22.
* [9] Bar-Zur R. (2020). Finding Optimal Strategies in Blockchain Protocols with Reinforcement Learning. Master’s Thesis, Israel Institute of Technology, Haifa.
* [10] Bowden R., Keeler H. P., Krzesinski A. E., Taylor P. G. (2020). Modeling and analysis of block arrival times in the Bitcoin blockchain. Stochastic Models, 36(4): 602-637.
* [11] Brown M., Peköz E., Ross S. (2021). Blockchain double-spend attack duration. Probability in the Engineering and Informational Sciences, 35(4): 858-866.
* [12] Büke B., Chen H. (2017). Fluid and diffusion approximations of probabilistic matching systems. Queueing Systems, 86(1): 1-33.
* [13] Cao B., Wang Z., Zhang L., Feng D., Peng M., Zhang L. (2021). Blockchain systems, technologies and applications: A methodology perspective. arXiv preprint arXiv:2105.03572, pp. 1-26.
* [14] Carlsten M. (2016). The impact of transaction fees on bitcoin mining strategies. Master’s Thesis, Department of Computer Science, Princeton University.
* [15] Catalini C. (2017). How blockchain technology will impact the digital economy. Blockchains Smart Contracts Internet Things, 4: 2292-2303.
* [16] Chang D. (2019). Revenue generation strategy through selfish mining focusing multiple pools of honest miners. Bachelor of Technology, Computer Science and Applied Mathematics, Indraprastha Institute of Information Technology, New Delhi, India.
* [17] Chang Y. X., Li Q. L., Wang Q., Song X. S. (2022). Dynamic practical Byzantine fault tolerance and its blockchain system: A large-scale Markov modeling. arXiv preprint arXiv:2210.14003, pp. 1-46.
* [18] Chen H., Shanthikumar J. G. (1994). Fluid limits and diffusion approximations for networks of multi-server queues in heavy traffic. Discrete Event Dynamic Systems, 4(3): 269-291.
* [19] Chen F., Wan H., Cai H., Cheng G. (2021). Machine learning in/for blockchain: Future and challenges. Canadian Journal of Statistics, 49(4): 1364-1382.
* [20] Chen H., Yao D. D. (2001). Fundamentals of Queueing Networks: Performance, Asymptotics, and Optimization. Springer.
* [21] Chevallier J., Goutte S., Guesmi K., Saadi S. (2019). On the Bitcoin price dynamics: An augmented Markov-switching model with Lévy jumps. HAL Id: halshs-02120636, pp. 1-17.
* [22] Chicarino V., Albuquerque C., Jesus E., Rocha A. (2020). On the detection of selfish mining and stalker attacks in blockchain networks. Annals of Telecommunications, 75(3): 143-152.
* [23] Corcino R., Casas K. P., Elnar A. R. (2018). Bitcoin: A Non-markovian stochastic process. Journal of Science, Engineering and Technology, 6: 287-298.
* [24] Dai J. G., He S. (2012). Many-server queues with customer abandonment: A survey of diffusion and fluid approximations. Journal of Systems Science and Systems Engineering, 21(1): 1-36.
* [25] Dennis R., Owen G. (2015). Rep on the block: A next generation reputation system based on the blockchain. In 2015 10th International Conference for Internet Technology and Secured Transactions, IEEE, pp. 131-138.
* [26] Ekramifard A., Amintoosi H., Seno A. H., Dehghantanha A., Parizi R. M. (2020). A systematic literature review of integration of blockchain and artificial intelligence. Blockchain Cybersecurity, Trust and Privacy, pp. 147-160.
* [27] Eyal I., Sirer E. G. (2018). Majority is not enough: Bitcoin mining is vulnerable. Communications of the ACM , 61(7): 95-102.
* [28] Fan C., Ghaemi S., Khazaei H., Musilek P. (2020). Performance evaluation of blockchain systems: A systematic survey. IEEE Access, 8: 126927-126950.
* [29] Fang M., Liu J. (2020). Toward low-cost and stable blockchain networks. In The 2020 IEEE International Conference on Communications, IEEE, pp. 1-6.
* [30] Feng J., Yu F. R., Pei Q., Chu X., Du J., Zhu L. (2019). Cooperative computation offloading and resource allocation for blockchain-enabled mobile-edge computing: A deep reinforcement learning approach. IEEE Internet of Things Journal, 7(7): 6214-6228.
* [31] Ferraro P., King C., Shorten R. (2019). On the stability of unverified transactions in a DAG-based distributed ledger. IEEE Transactions on Automatic Control, 65(9): 3772-3783.
* [32] Fralix B. (2020). On classes of Bitcoin-inspired infinite-server queueing systems. Queueing Systems, 95(1): 29-52.
* [33] Frolkova M., Mandjes M. (2019). A Bitcoin-inspired infinite-server model with a random fluid limit. Stochastic Models, 35(1): 1-32.
* [34] Gao Y., Wu W., Nan H., Sun Y., Si P. (2020). Deep reinforcement learning based task scheduling in mobile blockchain for iot applications. In The 2020 IEEE International Conference on Communications, IEEE, pp. 1-7.
* [35] Geissler S., Prantl T., Lange S., Wamser F., Hossfeld T. (2019). Discrete-time analysis of the blockchain distributed ledger technology. In 2019 31st International Teletraffic Congress, IEEE, pp. 130-137.
* [36] Gervais A., Karame G. O., Wüst K., Glykantzis V., Ritzdorf H., Capkun S. (2016). On the security and performance of proof of work blockchains. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 3-16.
* [37] Gőbel J., Keeler H. P., Krzesinski A. E., Taylor P. G. (2016). Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay. Performance Evaluation, 104: 23-41.
* [38] Goffard P. O. (2019). Fraud risk assessment within blockchain transactions. Advances in Applied Probability, 51(2): 443-467.
* [39] Gopalan A., Sankararaman A., Walid A., Vishwanath S. (2020). Stability and scalability of blockchain systems. In Proceedings of the ACM on Measurement and Analysis of Computing Systems, 4(2), pp. 1-35.
* [40] Grunspan C., Pérez-Marco R. (2018). On profitability of selfish mining. arXiv preprint arXiv:1805.08281, pp. 1-20.
* [41] Grunspan C., Pérez-Marco R. (2018). On profitability of stubborn mining. arXiv preprint arXiv:1808.01041, pp. 1-16.
* [42] Grunspan C., Pérez-Marco R. (2018). On profitability of trailing mining. arXiv preprint arXiv:1811.09322, pp. 1-19.
* [43] Grunspan C., Pérez-Marco R. (2020). The mathematics of Bitcoin. European Mathematical Society Magazine, 115: 31-37.
* [44] Grunspan C., Pérez-Marco R. (2020). Profit lag and alternate network mining. arXiv preprint arXiv:2010.02671, pp. 1-19.
* [45] Grunspan C., Pérez-Marco R. (2021). On profitability of Nakamoto double spend. Probability in the Engineering and Informational Sciences, 36(3): 732-746.
* [46] Grunspan C., Pérez-Marco R. (2020). Selfish mining in Ethereum. In Mathematical Research for Blockchain Economy, Springer, pp. 65-90.
* [47] Gupta A., Siddiqui S. T., Alam S., Shuaib M. (2019). Cloud computing security using blockchain. Journal of Emerging Technologies and Innovative Research, 6(6): 791-794.
* [48] Hawlitschek F., Notheisen B., Teubner T. (2018). The limits of trust-free systems: A literature review on blockchain technology and trust in the sharing economy. Electronic commerce research and applications, 29: 50-63.
* [49] He J., Zhang G., Zhang J., Zhang R. (2020). An economic model of blockchain: The interplay between transaction fees and security. Available at SSRN 3616869, pp. 1-41.
* [50] Hou C., Zhou M., Ji Y., Daian P., Tramer F., Fanti G., Juels A. (2019). SquirRL: Automating attack analysis on blockchain incentive mechanisms with deep reinforcement learning. arXiv preprint arXiv:1912.01798, pp. 1-20.
* [51] Huang D., Ma X., Zhang S. (2019). Performance analysis of the raft consensus algorithm for private blockchains. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 50(1): 172-181.
* [52] Huang H., Kong W., Zhou S., Zheng Z., Guo S. (2021). A survey of state-of-the-art on blockchains: Theories, modelings, and tools. ACM Computing Surveys, 54(2): 1-42.
* [53] Hussain A. A., Al-Turjman F. (2021). Artificial intelligence and blockchain: A review. Transactions on Emerging Telecommunications Technologies, 32(9): e4268.
* [54] Jang J., Lee H. N. (2020). Profitable double-spending attacks. Applied Sciences, 10(23): 8477.
* [55] Javier K., Fralix B. (2020). A further study of some Markovian Bitcoin models from Gőbel et al. Stochastic Models, 36(2): 223-250.
* [56] Jofré A., Pardo A., Salas D., Verdugo V., Verschae J. (2021). The convergence rates of blockchain mining games: A Markovian approach. arXiv preprint arXiv:2107.08077, pp. 1-25.
* [57] Kang J., Xiong Z., Niyato D., Wang P., Ye D., Kim D. I. (2018). Incentivizing consensus propagation in proof-of-stake based consortium blockchain networks. IEEE Wireless Communications Letters, 8(1): 157-160.
* [58] Kasahara S., Kawahara J. (2019). Effect of Bitcoin fee on transaction-confirmation process. Journal of Industrial & Management Optimization, 15(1): 365.
* [59] Kawase Y., Kasahara S. (2017). Transaction-confirmation time for bitcoin: A queueing analytical approach to blockchain mechanism. In International Conference on Queueing Theory and Network Applications, Springer, pp. 75-88.
* [60] Kiffer L., Rajaraman R., Shelat A. (2018). A better method to analyze blockchain consistency. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 729-744.
* [61] Kim S., Kwon Y., Cho S. (2018). A survey of scalability solutions on blockchain. In 2018 International Conference on Information and Communication Technology Convergence, IEEE, pp. 1204-1207.
* [62] King C. (2021). The fluid limit of a random graph model for a shared ledger. Advances in Applied Probability, 53(1): 81-106.
* [63] Koops D. (2018). Predicting the confirmation time of bitcoin transactions. arXiv preprint arXiv:1809.10596, pp. 1-14.
* [64] Krieger U. R., Ziegler M. H., Cech H. L. (2019). Performance modeling of the consensus mechanism in a permissioned blockchain. In International Conference on Computer Networks, Springer, pp. 3-17.
* [65] Leonardos S., Reijsbergen D., Piliouras G. (2020). Weighted voting on the blockchain: Improving consensus in proof of stake protocols. International Journal of Network Management, 30(5): e2093.
* [66] Li C., Zhang L., Fang S. (2021). EntrapNet: A blockchain-based verification protocol for trustless computing. IEEE Internet of Things Journal, 9(11): 8024-8035.
* [67] Li J., Yuan Y., Wang S., Wang F. (2018). Transaction queuing game in bitcoin blockchain. In 2018 IEEE Intelligent Vehicles Symposium, IEEE, pp. 114-119.
* [68] Li Q. L. (2010). Constructive Computation in Stochastic Models with Applications: The RG-factorizations. Springer.
* [69] Li Q. L., Chang Y. X., Wu X., Zhang G. (2021). A new theoretical framework of pyramid markov processes for blockchain selfish mining. Journal of Systems Science and Systems Engineering, 30(6): 667-711.
* [70] Li Q. L., Chang Y. X. , Zhang C. (2022). Tree representation, growth rate of Blockchain and reward allocation in Ethereum with multiple mining pools. IEEE Transactions on Network and Service Management, online publication, pp. 1-19.
* [71] Li Q. L., Ma J. Y., Chang Y. X. (2018). Blockchain queue theory. In International Conference on Computational Social Networks, Springer, pp. 25-40.
* [72] Li Q. L., Ma J. Y., Chang Y. X., Ma, F. Q., Yu H. (2019). Markov processes in blockchain systems. Computational Social Networks, 6(1): 1-28.
* [73] Li Q. L., Ma J. Y., Fan R. N., Xia L. (2019). An overview for Markov decision processes in queues and networks. In International Conference of Celebrating Professor Jinhua Cao’s 80th Birthday, Springer, Singapore, pp. 44-71.
* [74] Li T., Wang Z., Yang G., Cui Y., Chen Y., Yu X. (2021). Semi-selfish mining based on hidden Markov decision process. International Journal of Intelligent Systems, 36(7): 3596-3612.
* [75] Li Y., Cao B., Liang L., Mao D., Zhang L. (2021). Block access control in wireless blockchain network: Design, modeling and analysis. IEEE Transactions on Vehicular Technology, 70(9): 9258-9272.
* [76] Li Y., Cao B., Peng M., Zhang L., Zhang L., Feng D., Yu J. (2020). Direct acyclic graph-based ledger for Internet of Things: Performance and security analysis. IEEE/ACM Transactions on Networking, 28(4): 1643-1656.
* [77] Liang K. (2018). Fission: A provably fast, scalable, and secure permissionless blockchain. arXiv preprint arXiv:1812.05032, pp. 1-18.
* [78] Liu H., Ruan N., Du R., Jia W. (2018). On the strategy and behavior of Bitcoin mining with $N$-attackers. In Proceedings of Asia Conference on Computer and Communications Security, pp. 357-368.
* [79] Liu Y., Fang Z., Cheung M. H., Cai W., Huang J. (2022). An incentive mechanism for sustainable blockchain storage. IEEE/ACM Transactions on Networking, 1-14.
* [80] Liu Y., Hei Y., Xu T., Liu J. (2020). An evaluation of uncle block mechanism effect on ethereum selfish and stubborn mining combined with an eclipse attack. IEEE Access, 8: 17489-17499.
* [81] Liu Y., Yu F. R., Li X., Ji H., Leung V. C. (2020). Blockchain and machine learning for communications and networking systems. IEEE communications surveys $\&$ tutorials, 22(2): 1392-1431.
* [82] Liu Z., Li Z. (2020). A blockchain-based framework of cross-border e-commerce supply chain. International Journal of Information Management, 52: 102059.
* [83] Lundbaek L. N., D’Iddio A. C., Huth M. (2016). Optimizing governed blockchains for financial process authentications. arXiv preprint arXiv:1612.00407, pp. 1-30.
* [84] Luo J., Chen Q., Yu F. R., Tang L. (2020). Blockchain-enabled software-defined industrial internet of things with deep reinforcement learning. IEEE Internet of Things Journal, 7(6): 5466-5480.
* [85] Luu L., Narayanan V., Zheng C., Baweja K., Gilbert S., Saxena P. (2016). A secure sharding protocol for open blockchains. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 17-30.
* [86] Ma F. Q., Li Q. L., Liu Y. H., Chang Y. X. (2021). Stochastic performance modeling for practical Byzantine fault tolerance consensus in blockchain. Peer-to-Peer Networking and Applications, 15(6): 2516-2528.
* [87] Ma J. Y., Li Q. L. (2021). Sensitivity-based optimization for blockchain selfish mining. Journal of Combinatorial Optimization, online publication, pp. 1-38.
* [88] Marmolejo-Cossío F. J., Brigham E., Sela B., Katz J. (2019). Competing (semi-) selfish miners in Bitcoin. In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, pp. 89-109.
* [89] Memon R. A., Li J. P., Ahmed J. (2019). Simulation model for blockchain systems using queuing theory. Electronics, 8(2): 234.
* [90] Meng T., Zhao Y., Wolter K., Xu C. Z. (2021). On consortium blockchain consistency: A queueing network model approach. IEEE Transactions on Parallel and Distributed Systems, 32(6): 1369-1382.
* [91] Mišić J., Mišić V. B., Chang X., Motlagh S. G., Ali M. Z. (2019). Modeling of bitcoin’s blockchain delivery network. IEEE Transactions on Network Science and Engineering, 7(3): 1368-1381.
* [92] Mori T. (2016). Financial technology: Blockchain and securities settlement. Journal of Securities Operations $\&$ Custody, 8(3): 208-227.
* [93] Nakamoto S. (2008). Bitcoin: A peer-to-peer electronic cash system. pp. 1-9. Online, available: http://bitcoin.org/bitcoin.pdf.
* [94] Nayak K., Kumar S., Miller A., Shi E. (2016). Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In 2016 IEEE European Symposium on Security and Privacy, pp. 305-320.
* [95] Nguyen D. C., Pathirana P. N., Ding M., Seneviratne A. (2020). Privacy-preserved task offloading in mobile blockchain with deep reinforcement learning. IEEE Transactions on Network and Service Management, 17(4): 2536-2549.
* [96] Nguyen L. N., Nguyen T. D., Dinh T. N., Thai M. T. (2019). Optchain: optimal transactions placement for scalable blockchain sharding. In 2019 IEEE 39th International Conference on Distributed Computing Systems, IEEE, pp. 525-535.
* [97] Niu J., Feng C. (2019). Selfish mining in ethereum. In 2019 IEEE 39th International Conference on Distributed Computing Systems, IEEE, pp. 1306-1316.
* [98] Niu J., Wang Z., Gai F., Feng C. (2021). Incentive analysis of Bitcoin-NG, revisited. ACM SIGMETRICS Performance Evaluation Review, 48(3): 59-60.
* [99] Piriou P. Y., Dumas J. F. (2018). Simulation of stochastic blockchain models. In 2018 14th European Dependable Computing Conference, IEEE, pp. 150-157.
* [100] Poon J., Dryja T. (2016). The bitcoin lightning network: Scalable off-chain instant payments, ver. 0.5.9.2, pp. 1-59. Online, available: https://www.bitcoinlightning.com/wp-content/uploads/2018/03/lightning-network-paper.pdf
* [101] Popov S. (2016). The Tangle, vol. 1. IOTA Foundation Technical Report, 131-156.
* [102] Prabhu N. U. (1998). Stochastic Storage Processes: Queues, Insurance Risk, and Dams, and Data Communication. Springer.
* [103] Puterman, M. L. (2014). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley $\&$ Sons.
* [104] Qiu C., Yu F. R., Yao H., Jiang C., Xu F., Zhao C. (2018). Blockchain-based software-defined industrial Internet of Things: A dueling deep ${Q}$-learning approach. IEEE Internet of Things Journal, 6(3): 4627-4639.
* [105] Qiu X., Liu L., Chen W., Hong Z., Zheng Z. (2019). Online deep reinforcement learning for computation offloading in blockchain-empowered mobile edge computing. IEEE Transactions on Vehicular Technology, 68(8): 8050-8062.
* [106] Ramezan G., Leung C., Miao C. (2020). Optimal transaction queue waiting in blockchain mining. arXiv preprint arXiv:2011.10886, pp. 1-4.
* [107] Reddy B. S., Sharma G. V. V. (2020). Scalable consensus protocols for poW based blockchain and blockDAG. arXiv preprint arXiv:2010.05447, pp. 1-14.
* [108] Ricci S., Ferreira E., Menasche D. S., Ziviani A., Souza, J. E., Vieira A. B. (2019). Learning blockchain delays: A queueing theory approach. ACM SIGMETRICS Performance Evaluation Review, 46(3): 122-125.
* [109] Riehl J. R., Ward J. (2020). Transaction pricing for maximizing throughput in a sharded blockchain ledger. In 2020 Crypto Valley Conference on Blockchain Technology, IEEE, pp. 36-42.
* [110] Rouhani S., Deters R. (2019). Security, performance, and applications of smart contracts: A systematic survey. IEEE Access, (7): 50759-50779.
* [111] Saad M., Njilla L., Kamhoua C., Kim J., Nyang D., Mohaisen A. (2019). Mempool optimization for defending against DDoS attacks in PoW-based blockchain systems. In 2019 IEEE international conference on blockchain and cryptocurrency, IEEE, pp. 285-292.
* [112] Saberi S., Kouhizadeh M., Sarkis J., Shen L. (2019). Blockchain technology and its relationships to sustainable supply chain management. International Journal of Production Research, 57(7): 2117-2135.
* [113] Sapirshtein A., Sompolinsky Y., Zohar A. (2016). Optimal selfish mining strategies in bitcoin. In International Conference on Financial Cryptography and Data Security, Springer, Berlin, Heidelberg, pp. 515-532.
* [114] Seol J., Kancharla A., Ke Z., Kim H., Park N. (2020). A variable bulk arrival and static bulk service queueing model for blockchain. In Proceedings of the 2nd ACM International Symposium on Blockchain and Secure Critical Infrastructure, pp. 63-72.
* [115] Shafay M., Ahmad R. W., Salah K., Yaqoob I., Jayaraman R., Omar M. (2022). Blockchain for deep learning: review and open challenges. Cluster Computing, 1-25. Online, available: https://doi.org/10.1007/s10586-022-03582-7
* [116] Shi L., Wang T., Li J., Zhang S. (2021). Pooling is not favorable: Decentralize mining power of PoW blockchain using age-of-work. arXiv preprint arXiv:2104.01918, pp. 1-13.
* [117] Singh A., Click K., Parizi R. M., Zhang Q., Dehghantanha A., Choo K. (2020). Sidechain technologies in blockchain networks: An examination and state-of-the-art review. Journal of Network and Computer Applications, 149: 102471.
* [118] Smetanin S., Ometov A., Komarov M., Masek P., Koucheryavy Y. (2020). Blockchain evaluation approaches: State-of-the-art and future perspective. Sensors, 20(12): 3358.
* [119] Sompolinsky Y., Zohar A. (2015). Secure high-rate transaction processing in bitcoin. In International Conference on Financial Cryptography and Data Security, Springer, pp. 507-527.
* [120] Sompolinsky Y., Zohar A. (2016). Bitcoin’s security model revisited. arXiv preprint arXiv:1605.09193, pp. 1-26.
* [121] Song X. S., Li Q. L., Chang Y. X., Zhang, C. (2022). A markov process theory for network growth processes of DAG-based blockchain systems. arXiv preprint arXiv:2209.01458, pp. 1-49.
* [122] Spirkina A. V., Aptrieva E. A., Elagin V. S., Shvidkiy A. A., Savelieva A. A. (2020). Approaches to modeling blockchain systems. In 2020 12th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops, IEEE, pp. 242-247.
* [123] Spitzer F. (2001). Principles of Random Walk. Springer.
* [124] Srivastava R. (2019). Mathematical assessment of blocks acceptance in blockchain using Markov model. International Journal of Blockchains and Cryptocurrencies, 1(1): 42-53.
* [125] Sukhwani H., Wang N., Trivedi K. S., Rindos A. (2018). Performance modeling of hyperledger fabric (permissioned blockchain network). In 2018 IEEE 17th International Symposium on Network Computing and Applications , IEEE, pp. 1-8.
* [126] Sun L., Yang Q., Chen X., Chen Z. (2021). RC-chain: Reputation-based crowdsourcing blockchain for vehicular networks. Journal of Network and Computer Applications, 176: 102956.
* [127] Tang S., Wang Z., Dong J., Ma Y. (2022). Blockchain-enabled social security services using smart contracts. IEEE Access, 10: 73857-73870.
* [128] Varma S. M., Maguluri S. T. (2021). Throughput optimal routing in blockchain-based payment systems. IEEE Transactions on Control of Network Systems, 8(4): 1859-1868.
* [129] Waheed N., He X., Ikram M., Usman M., Hashmi S. S., Usman M. (2020). Security and privacy in IoT using machine learning and blockchain: Threats and countermeasures. ACM Computing Surveys , 53(6): 1-37.
* [130] Wang T., Liew S. C., Zhang S. (2021). When blockchain meets AI: Optimal mining strategy achieved by machine learning. International Journal of Intelligent Systems, 36(5): 2183-2207.
* [131] Wang P., Xu H., Jin X., Wang T. (2019). Flash: efficient dynamic routing for offchain networks. In Proceedings of the 15th International Conference on Emerging Networking Experiments And Technologies, pp. 370-381.
* [132] Wang R., Ye K., Xu C. Z. (2019). Performance benchmarking and optimization for blockchain systems: A survey. In International Conference on Blockchain, Springer, pp. 171-185.
* [133] Wang Z., Liu J., Wu Q., Zhang Y., Zhou Z. (2019). An analytic evaluation for the impact of uncle blocks by selfish and stubborn mining in an imperfect ethereum network. Computers $\&$ Security, 87(101581): 1-10.
* [134] Whitt W. (2002). Stochastic-process Limits: An Introduction to Stochastic-process Limits and Their Application to Queues. Springer.
* [135] Wilhelmi F., Giupponi L., Dini P. (2021). Blockchain-enabled server-less federated learning. arXiv preprint arXiv:2112.07938, pp. 1-14.
* [136] Wu Y., Wang Z., Ma Y., Leung V. C. (2021). Deep reinforcement learning for blockchain in industrial IoT: A survey. Computer Networks, 191: 108004.
* [137] Wüst K. (2016). Security of blockchain technologies. Master’s Thesis, ETH Zürich.
* [138] Xia F., Liu J., Nie H., Fu Y., Wan L., Kong X. (2020). Random walks: A review of algorithms and applications. IEEE Transactions on Emerging Topics in Computational Intelligence, 4(2): 95-107.
* [139] Xia Q., Dou W., Xi T., Zeng J., Zhang F., Wei J., Liang G. (2021). The impact analysis of multiple miners and propagation delay on selfish mining. In 2021 IEEE 45th Annual Computers, Software, and Applications Conference, IEEE, pp. 694-703.
* [140] Yang L., Li M., Si P., Yang R., Sun E., Zhang Y. (2020). Energy-efficient resource allocation for blockchain-enabled industrial Internet of Things with deep reinforcement learning. IEEE Internet of Things Journal, 8(4): 2318-2329.
* [141] You J. (2022). Blockchain framework for artificial intelligence computation. arXiv preprint arXiv:2202.11264, pp. 1-10.
* [142] Yuan S., Li J., Liang J., Zhu Y., Yu X., Chen J., Wu C. (2021). Sharding for blockchain based mobile edge computing system: A deep reinforcement learning approach. In 2021 IEEE Global Communications Conference, IEEE, pp. 1-6.
* [143] Yu G., Wang X., Yu K., Ni W., Zhang J., Liu R. (2020). Survey: sharding in blockchains. IEEE Access, 8: 14155-14181.
* [144] Zhao W., Jin S., Yue W. (2019). Analysis of the average confirmation time of transactions in a blockchain system. In International Conference on Queueing Theory and Network Applications, Springer, pp. 379-388.
* [145] Zheng X., Zhu Y., Si X. (2019). A survey on challenges and progresses in blockchain technologies: A performance and security perspective. Applied Sciences, 9(22): 4731.
* [146] Zhou Q., Huang H., Zheng Z., Bian J. (2020). Solutions to scalability of blockchain: A survey. IEEE Access, 8: 16440-16455.
* [147] Zhou Z., Li R., Cao Y., Zheng L., Xiao H. (2020). Dynamic performance evaluation of blockchain technologies. IEEE Access, 8: 217762-217772.
* [148] Zur R. B., Eyal I., Tamar A. (2020). Efficient MDP analysis for selfish-mining in blockchains. In Proceedings of the 2nd ACM Conference on Advances in Financial Technologies, pp. 113-131.
|
# Satellite Constellation Avoidance with the Rubin Observatory Legacy Survey
of Space and Time
Jinghan Alina Hu Harvey Mudd College, Claremont, CA, USA Meredith L. Rawls
Department of Astronomy / DiRAC / Vera C. Rubin Observatory, University of
Washington, Seattle, WA, USA Peter Yoachim Department of Astronomy / DiRAC /
Vera C. Rubin Observatory, University of Washington, Seattle, WA, USA Željko
Ivezić Department of Astronomy / DiRAC / Vera C. Rubin Observatory, University
of Washington, Seattle, WA, USA
###### Abstract
We investigate a novel satellite avoidance strategy to mitigate the impact of
large commercial satellite constellations in low-Earth orbit on the Vera C.
Rubin Observatory Legacy Survey of Space and Time (LSST). We simulate the
orbits of currently planned Starlink and OneWeb constellations ($\sim$40,000
satellites) to test how effectively an upgraded Rubin scheduler algorithm can
avoid them, and assess how the overall survey is affected. Given a reasonably
accurate satellite orbit forecast, we find it is possible to adjust the
scheduler algorithm to effectively avoid some satellites. Overall, sacrificing
10% of LSST observing time to avoid satellites reduces the fraction of LSST
visits with streaks by a factor of two. Whether such a mitigation will be
required depends on the overall impact of streaks on science, which is not yet
well quantified. This is due to a lack of adequate information about satellite
brightness distributions as well as the impact of glints and low surface
brightness residuals on alert purity and systematic errors in cosmological
parameter estimation. A significant increase in the number of satellites or
their brightness during Rubin Operations may make implementing this satellite
avoidance strategy worthwhile.
Ground-based astronomy, Light pollution, Sky surveys, Artificial Satellites
Software: Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Healpy and HEALPix (http://healpix.sourceforge.net; Górski et al., 2005; Zonca et al., 2019), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), rubin_sim (Yoachim et al., 2022b), SciPy (Virtanen et al., 2020), Skyfield (Rhodes, 2019), Shapely (Gillies et al., 2007–)
## 1 Introduction
Rubin Observatory’s Legacy Survey of Space and Time (LSST) is a ten-year
astronomical imaging survey that will begin in 2024 from a new telescope under
construction in Chile. Instead of soliciting individual requests for what the
telescope should observe, the LSST will uniformly survey the sky every few
nights using six color filters to create a decade-long high-resolution survey
of the entire southern sky, and share massive quantities of data products with
the astronomy community (Ivezić et al., 2019). To accomplish this, the LSST
will employ a scheduling algorithm that uses a modified Markov Decision
Process which can generate lists of desirable observations in real time
(Naghib et al., 2019). The LSST scheduler balances the desire to minimize slew
time, optimize signal to noise in individual images, and to maintain survey
footprint uniformity.
One challenge for the LSST is that increasing numbers of bright low-Earth
orbit (LEO) satellites (e.g., Starlink) are being launched, which may leave
streaks in astronomical pointings. LEO satellites are visible from Earth
because they reflect sunlight, especially during twilight. As the Sun-
illuminated satellites move across the field of view of an astronomical
pointing, they leave a streak in the image. While the flux from satellite
streaks can in many cases be identified and removed, the resulting pixels have
much lower signal-to-noise. For a thorough discussion of the scientific
utility of residual light after masking satellite trails, see Hasan et al.
(2022). Over the last three years, many astronomers have raised concerns about
the impact of the proliferation of commercial satellites on the LEO ecosystem
and astronomical surveys (Lawrence et al., 2022; Tyson et al., 2020). In
addition, astronomers have come together with satellite operators and other
stakeholders to create recommendations and strategies to mitigate impacts to
observational astronomy and beyond (Walker et al., 2020a; Hall et al., 2021;
Walker et al., 2020b, 2021).
Tyson et al. (2020) used a very simple algorithm to see if the LSST could
avoid imaging satellite streaks. They concluded that attempting to naively
dodge of order 48,000 LEO satellites is useless because it is operationally
inefficient. In this paper, rather than try to avoid all satellite streaks, we
incorporate satellite avoidance as a component of the LSST scheduler’s Markov
Decision Process. This allows us to avoid a significant fraction of satellite
streaks and investigate what level of avoidance might be acceptable because it
does not drastically impact the overall performance of the LSST.
There are other efforts underway to mitigate the impact of satellite streaks
in astronomical images. For example, satellite companies like SpaceX have
worked on darkening the exterior of satellites so they will be less
visible (https://api.starlink.com/public-files/BrightnessMitigationBestPracticesSatelliteOperators.pdf). However, even
with the most effective darkening mitigations to date, satellites still appear
bright to the LSST Camera, and are likely to cause effects like non-linear
crosstalk or glints that are challenging to correct with the LSST Science
Pipelines software and may introduce systematic biases or spurious detections.
This is discussed in more detail in Tyson et al. (2020) and on the Rubin
Observatory LSST Project website (https://ls.st/satcon). Astronomers have also
developed algorithms for masking satellite trails in images, but covering the
outer wings of the trails without losing extra pixels remains a challenge
(Hasan et al., 2022). The rapid increase in population of LEO satellites
threatens to compromise the quality and scientific value of LSST images and
also requires extra human and computer resources to effectively mask trails.
Thus, we explore an additional option: incorporating the orbits of known
commercial satellites into the LSST scheduler so the worst of them may be
avoided.
In this paper, we first create realistic simulated forecasts of satellite
orbits in Section 2. We then build a tool that uses that data to create new
scheduler constraints, and test the impact of the modified scheduler algorithm
on LSST observing programs in Section 3. Finally, we discuss the resulting
trade-off between number of streaks and reduced survey depth that results from
avoiding satellites in Section 4, and lay out possibilities for the future as
the satellite population changes during Rubin Operations. We also make
available a GitHub repository with the data and software necessary to
reproduce the paper’s figures (https://github.com/lsst-sims/satellite-dodging-ApJL).
## 2 Methods
We begin by creating realistic forecasts of three commercial satellite
constellations, which are illustrated in Figure 1. These are Starlink Gen1
(4,408 satellites, altitude $540-570$ km), OneWeb (6,372 satellites, altitude
1200 km), and Starlink Gen2 (29,988 satellites, altitude $340-614$ km). Each
constellation uses orbital inclinations and number of satellite planes
matching current plans (https://ls.st/x1o). To date, OneWeb has launched and
deployed several hundred satellites, while the number of Starlink satellites
is in the thousands.
Figure 1: Three simulated satellite constellations, one per column. Starlink
Gen1 is 4,408 satellites, Starlink Gen2 is 29,988 satellites, and OneWeb is
6,372 satellites, for a grand total of 40,768. The top row shows the 3D
distribution of each constellation around Earth. The middle row shows an
instantaneous Hammer projection of the altitude and azimuth positions of each
constellation as seen from Rubin Observatory on October 1, 2023 during
twilight (Sun altitude $-18$ degrees). Blue points are satellites illuminated
by the Sun at this time, red points are satellites not illuminated by the Sun,
and black points are satellites that are both illuminated and above the Rubin
20 degree altitude pointing limit. The bottom row is the same Hammer
projections six hours later in the middle of the night (Sun altitude $-50$
degrees). Because Starlink satellites orbit at 550 km, none are illuminated in
the middle of the night at this time of year. The OneWeb constellation at 1200
km has only a single illuminated satellite above the Rubin altitude limit at
this particular time.
To simulate LSST observations, we start with the baseline observing strategy
in Yoachim et al. (2022a). LSST observations are scheduled in visits, where a
$u$ visit is one 30s exposure and visits in all other filters ($grizy$) are
back-to-back 15s exposures. The baseline strategy attempts to take most
observations in mixed filter pairs (e.g., an $r$ visit followed by an $i$
visit 33 minutes later), and completes 215,000 visits in the first year.
The baseline LSST observing strategy uses three primary basis functions which
reward (1) minimizing slewtime, (2) maximizing the depth of images (e.g., by
avoiding the Moon and high airmass), and (3) maintaining a uniform survey
footprint. To this, we add a fourth basis function which penalizes observing
areas of the sky which will have high concentrations of illuminated
satellites. Figure 2 shows an example of this new basis function for the three
simulated satellite constellations. The positions of the illuminated
satellites are computed in 10-second intervals and marked on the sky. These
maps are then summed over 90-minute blocks to generate the basis function
maps. Thus our modified scheduler with a satellite avoidance strategy does not
try to avoid individual satellite streaks, but rather has a parameterized
method for avoiding regions of the sky where satellite streaks are more
likely. This has the additional benefit of not requiring high precision
satellite orbit forecasts.
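The map construction described above (illuminated-satellite positions sampled every 10 seconds, accumulated over a 90-minute block, then used as a negative scheduler weight) can be sketched as follows. The actual scheduler works with HEALPix sky maps via rubin_sim; the simple alt/az grid and the function name `avoidance_map` here are illustrative assumptions, not the paper's code.

```python
import numpy as np

ALT_LIMIT = 20.0  # Rubin's minimum pointing altitude, degrees

def avoidance_map(sat_positions, n_alt=20, n_az=40):
    """Accumulate counts of illuminated satellites on an alt/az grid.

    sat_positions: iterable of (alt_deg, az_deg) array pairs, one pair
    per 10-second time step across a 90-minute block.  Returns the
    negated count map, usable as a scheduler penalty: regions crossed
    by many satellites get strongly negative weight.
    """
    counts = np.zeros((n_alt, n_az))
    for alt, az in sat_positions:
        alt = np.asarray(alt, float)
        az = np.asarray(az, float)
        above = alt > ALT_LIMIT  # unreachable satellites are ignored
        i = ((alt[above] - ALT_LIMIT) / (90.0 - ALT_LIMIT) * n_alt).astype(int)
        j = ((az[above] % 360.0) / 360.0 * n_az).astype(int)
        np.add.at(counts, (np.clip(i, 0, n_alt - 1), np.clip(j, 0, n_az - 1)), 1)
    return -counts  # more illuminated satellites -> more negative weight
```

Because the map only records where satellites concentrate over the block, not where each one is at a given instant, it tolerates the coarse orbit forecasts discussed above.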
Figure 2: Satellite avoidance maps constructed for the Rubin scheduler for
each simulated constellation. Each is for a twilight observation period of 90
minutes (beginning after sunset with a Sun altitude of $-17.1$ degrees). The
map projections are rotated so zenith is in the center of the image. Darker
regions have more illuminated satellites and therefore more negative
weighting. By varying the dodging weight placed on these maps, the scheduler
will more actively avoid regions of the sky where satellites could streak
images.
We show three example satellite avoidance maps ready for use by the LSST
scheduler in Figure 2. It is apparent from Figure 2 that the simulated OneWeb
constellation has more negative area — regions that should be avoided due to
large numbers of illuminated satellites — than the other two constellations.
Although OneWeb has fewer satellites than Starlink Gen2, the OneWeb satellites
orbit at a higher altitude (1200 km compared to 340-614 km for Starlink),
meaning that they will be illuminated for a longer portion of the night, and
also have a larger impact close to twilight. This is why one of the
recommendations from Walker et al. (2020a) is to keep LEO satellites below 600
km altitude.
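The altitude dependence of illumination can be checked with a first-order calculation: treating Earth's shadow as a cylinder of radius $R_{\oplus}$, its height above an observer's zenith at solar depression angle $\theta$ is $R_{\oplus}(1/\cos\theta-1)$. This is our own rough sketch (the cylindrical approximation ignores the umbra's taper and refraction), not a computation from the paper.

```python
import math

R_EARTH = 6371.0  # km

def shadow_height_km(sun_depression_deg):
    """Height of Earth's cylindrical shadow above an observer's zenith.

    A satellite directly overhead at altitude h is sunlit only while
    h exceeds this value (first-order approximation).
    """
    return R_EARTH * (1.0 / math.cos(math.radians(sun_depression_deg)) - 1.0)

h18 = shadow_height_km(18.0)  # ~330 km: 550 km Starlink overhead still sunlit
h50 = shadow_height_km(50.0)  # ~3500 km: even 1200 km OneWeb shadowed at zenith
```

These numbers are consistent with Figure 1: at astronomical twilight the whole Starlink shell is illuminated, while deep in the night only high-altitude satellites near the horizon remain sunlit.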
To investigate whether the scheduler behaves how we expect with the new
satellite avoidance strategy, we create a testing function that measures the
length of satellite streaks in the simulated fields of view. To ensure
efficiency, only satellites that are above the altitude limit and illuminated
by the Sun are considered. Satellites below the altitude limit (indicated in
the gray region in Figure 2) cannot be reached by the Simonyi Survey Telescope
and are therefore not included. For each satellite, we first determine whether
it is in the field of view for a given pointing by calculating its distance
from the center of the field of view. If this distance is less than the radius
of the field of view, the satellite has crossed through the pointing. To
quantify the impact of the satellite on the pointing, we then project both the
satellite location and the pointing to a 2D x,y plane. In this plane, the field of view is roughly circular, the start and end locations of the satellite crossing are two points, and we can calculate the total
intersection length. Therefore, given a simulated satellite constellation and
a schedule of observations, we are able to record the number of satellites in
each pointing and measure the total streak length. We assume that the impact
of streaks on science is proportional to their total length, which allows us
to quantify the efficiency of the satellite avoidance strategy.
To estimate the fraction of pixels affected by streaks, following Hasan et al.
(2022) we adopt a fiducial width of 300 pixels (equivalent to 1 arcminute
given the plate scale of 0.2 arcseconds per pixel). The 3.5 degree diameter
Rubin focal plane is populated with 189 4kx4k CCDs. Assuming a length of 15
CCDs (with a CCD side of 4096 pixels), a single streak corresponds to 0.6% of
all the pixels in the focal plane. On the other hand, if a streak is so bright
that entire CCDs are rendered scientifically useless, a single streak would
wipe out 8% of all pixels in the focal plane.
We simulate observations for only the first year of the planned ten-year LSST,
as the survey strategy does not significantly change in later years. We
acknowledge this does not account for the likely satellite population increase
beyond the three simulated constellations; however, our results should scale
linearly to larger future constellations in similar orbital distributions. We
do not consider the effects of satellites launching or de-orbiting, and for
simplicity we assume each satellite’s orbital parameters are constant. This
should be an acceptable approximation as long as actual satellite orbital
parameters are available $\sim 1$ day in advance so our avoidance basis
functions can be constructed. If there is no timely information publicly
available on LEO satellite constellation orbits, or it is highly inaccurate,
the satellite avoidance strategy would be impossible to implement. We estimate
that satellite orbital solutions correct to within about a degree in space and
to within a few minutes in time would be sufficient to effectively avoid some
regions of the sky with more satellites in large constellations. Observing
current Starlink satellites, Halferty et al. (2022) find they can construct
TLEs which predict positions with sub-degree spatial precision and sub-second
temporal precision, which is more than adequate for our proposed satellite
avoidance methods.
## 3 Results
We find that higher dodging weights reduce the number of pixels lost to satellite streaks,
and that the satellite avoidance algorithm is able to effectively avoid
satellite streaks in simulated pointings. This is shown in the top two panels
of Figure 3. We also find that smaller constellations at lower orbital
altitudes (Starlink Gen1, for example) inherently cause less pixel loss per
pointing, nearly independent of the dodging weight.
Figure 3: Illustration of changes in the mean streak length in all visits
including those with no streak (top left), the fraction of visits with streaks
(top right), the number of acquired visits in year 1 (bottom left) and coadded
depth in the $g$ band (bottom right) as a function of dodging basis function
weight (starting with essentially no dodging on the left). Note that to reduce
the fraction of visits with streaks by about a factor of two, satellite
avoidance will require 10% of total observing time.
Next, we investigate the relationship between the number of exposures the
scheduler is able to complete as a function of the dodging weight. As shown in
the bottom two panels of Figure 3, higher dodging weight results in fewer
visits, most likely due to longer slew times. With a higher dodging weight,
the telescope may be prompted to slew to a location other than the most
desirable nearby pointings, resulting in fewer overall exposures. We also find
that a larger constellation (Starlink Gen2) tends to decrease the number of
exposures slightly more than a smaller constellation (Starlink Gen1), which is
expected. More satellites or satellites at higher orbital altitudes result in
larger areas of avoidance on the sky, which leads to more slewing required to
avoid the affected areas, which subsequently reduces the total number of
visits. In addition to forcing longer slews, avoiding satellite dense areas
pushes the scheduler to observe pointings with lower signal to noise (e.g.,
higher airmass, brighter sky background areas) than it normally would.
One important LSST survey goal is to collect a large number of exposures of
the whole southern sky so these may be co-added to reveal faint structures
that are not visible in individual visits. As a result, survey depth is
crucial to LSST science, and the trade-off between pixel loss from satellite
streaks versus survey depth reduction from fewer total visits must be
evaluated. With the satellite avoidance algorithm, the scheduler is prompted
to avoid regions with illuminated satellites, which sometimes results in
longer slew times or less desirable pointing conditions and contributes to a
loss in survey depth. Therefore, Figure 4 explores the trade-off between
survey depth loss and satellite avoidance. As evident from the figure, the
fraction of LSST visits with streaks can be decreased by a factor of two with
an investment of 10% of LSST observing time, corresponding to a loss of
coadded depth of 0.05 mag.
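The quoted depth cost is consistent with the standard scaling of coadded depth with the number of visits: in the photon-noise-limited approximation, coadded depth improves as $1.25\log_{10}N$ mag, so acquiring 10% fewer visits costs about $1.25\log_{10}(0.9)\approx-0.057$ mag. A quick check (the scaling law is the usual approximation, not a number taken from our simulations):

```python
import math

def coadd_depth_change(visit_fraction):
    """Change in coadded limiting magnitude when only `visit_fraction`
    of the nominal visits are acquired, assuming depth scales as
    1.25 * log10(N) in the photon-noise-limited regime."""
    return 1.25 * math.log10(visit_fraction)

print(round(coadd_depth_change(0.90), 3))  # -0.057 mag for a 10% visit loss
```

This agrees with the ~0.05 mag loss measured from the full simulations, suggesting the depth penalty is dominated by the reduced visit count rather than by degraded per-visit conditions.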
Figure 4: The trade-off between the mean streak length and final median co-
added depth in $g$ band for one year of the LSST (controlled by dodging basis
function weight). A negative change in co-added depth indicates the survey is
shallower.
So far, we have primarily considered impacts on the overall LSST. However,
another important LSST science goal involves using twilight images to search
for Near Earth Objects (NEOs) and solar system objects interior to the orbits
of Earth and Venus. These observations must be taken in the direction of the
rising or setting sun at high airmass. Because a small potential area is
targeted, our proposed satellite avoidance scheme is ineffective. Figure 5
shows how regular survey observations and twilight NEO observations would be
affected by satellite constellations using the LSST scheduler with no
satellite avoidance. While the majority of twilight NEO observations could
include a satellite streak with a Starlink Gen2 size constellation, we find
this would only result in a $\sim$0.36% loss of science pixels (for a 1
arcminute wide mask).
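The per-image pixel loss can be sanity-checked with simple geometry: a streak masked to width $w$ in a field of view of area $A$ removes roughly (streak length × $w$)/$A$ of the pixels. The numbers below (a streak crossing the full ~3.5 deg field, the 9.6 deg² LSST field of view, a 1 arcmin mask) are illustrative assumptions:

```python
# Fraction of science pixels lost to a single masked satellite streak,
# approximated as a rectangle of (streak length x mask width) over the
# field-of-view area. All numbers here are illustrative assumptions.
FOV_AREA_DEG2 = 9.6          # LSST field-of-view area
MASK_WIDTH_DEG = 1.0 / 60.0  # 1 arcminute mask width

def streak_pixel_loss(streak_length_deg):
    return streak_length_deg * MASK_WIDTH_DEG / FOV_AREA_DEG2

# Even a streak crossing the full ~3.5 deg field removes well under 1%:
print(f"{streak_pixel_loss(3.5):.2%}")  # ~0.61%
```

Averaged over all images (many streaks are shorter than a full field crossing, and not every image contains one), the mean loss drops further, consistent with the sub-0.5% figure quoted above.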
Figure 5: Impacts of satellite streaks on simulated LSST observations without
any satellite avoidance. Compared to standard LSST observations (top),
twilight NEO observations (bottom) cannot easily be shifted to avoid
satellites. The left panels show the altitude and azimuth distribution of
observations on the sky (zenith at the center of the plots), and the right
panels show how many streaks would result from the three simulated satellite
constellations as a function of how high above the horizon the telescope is
pointing (observation altitude). Note that most twilight NEO observations
would contain a satellite streak.
## 4 Discussion
We have demonstrated that adding a weighted term in the scheduler algorithm
for illuminated satellites can effectively reduce the amount of satellite
streaks in observations, and subsequently reduce mean pixel loss per visit and
other impacts on science, as illustrated in Figure 3. However, with the new
added priority on satellite avoidance, the telescope can be pushed to take an
observation path that does not optimize slew time, which subsequently reduces
the number of exposures and overall survey depth. The trade-off essentially
comes down to the relationship between total streak length reduction and
survey depth reduction. The final decision will depend on the overall impact
of streaks on science which is not well quantified yet due to a lack of
adequate information about the satellite brightness distribution and the
impact of glints and low surface brightness residuals on alert purity and
systematic errors in cosmological parameter estimation. In other words, when
evaluating whether to implement a weighted satellite avoidance strategy to
effectively reduce satellite streak density, it is necessary to evaluate the
trade-off between pixel loss together with non-linear crosstalk, time-domain
glint effects, and any other relevant systematics versus loss in observing
time.
We note that earlier publications (Lawrence et al., 2022; Tyson et al., 2020)
stated that the majority of LSST images are likely to contain a satellite
streak. They also included some higher-altitude Starlink orbits that are no
longer planned. Our study finds that about 10% of all LSST images will have a
streak from the three simulated constellations (Starlink Gen1, Starlink Gen2,
and OneWeb, totaling 40,768 satellites as currently planned). It is true,
however, that twilight observing campaigns at high airmass — like those
necessary to perform NEO searches — will have streaks in the majority of
images.
There is a concerning possibility of sharp increases in satellite population
in the next $5–10$ years, overlapping the LSST operations period
($2024–2035$). With a dramatic increase in satellite population, the ability
to avoid satellites might become more relevant. It is possible to linearly
extrapolate our results to consider a possible LEO satellite population in the
hundreds of thousands circa 2030, assuming the orbital distribution in LEO is
similar to that of Starlink and OneWeb. A future with 400,000 LEO satellites
rather than 40,000 — the stated goal of various companies intending to launch
very large constellations given their present
filings (https://planet4589.org/space/stats/conlist.html) — could render the
trade-off of 10% of LSST observing time in order to cut the number of visits
with streaks in half worthwhile. Tyson et al. (2020) find that satellites with
AB magnitudes of $g\sim 3.2$ to $y\sim 1.5$ would saturate LSST images.
Satellite streaks from Starlink and OneWeb as presently designed are not
expected to saturate the LSST Camera’s CCD detectors as they have magnitudes
of $\sim 5$ in their final orbits (Halferty et al., 2022). While a star of
$g=10$ would be heavily saturated in an LSST image, LEO satellites move
fast enough that their effective exposure time is much shorter than that of
astronomical targets. Satellites typically only leave streaks in images when they are both
illuminated by the Sun and visible from the observatory, and LEO satellites
spend most of the night in Earth’s shadow. Satellites from other operators may
be significantly brighter than present-day Starlink and OneWeb satellites, and
may saturate the LSST Camera’s detectors or cause overwhelming levels of non-
linear crosstalk. In particular, the BlueWalker 3 satellite is the first of a
proposed 100-satellite constellation and has a $V$ magnitude between 0 and 1.
Such a bright satellite would saturate LSST images, potentially causing much
higher pixel losses than satellites which have been launched to date.
Therefore, one future work direction involves adding brightness weighting to
the satellite avoidance algorithm. The idea is to only avoid satellites
brighter than a certain brightness threshold. This could potentially reduce
the region of avoidance, therefore reducing the loss in observing time and co-
added depth. It may be possible to compute optimal starting locations for a
series of observations based on satellite forecasts to further optimize
satellite avoidance. Finally, since faint trail detection and masking is not
perfect, no satellite avoidance strategy will effectively mitigate faint
glints and the resulting bogus alerts.
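The brightness-weighted idea above amounts to filtering the predicted satellite list before building avoidance regions. A minimal sketch, where the magnitude threshold and per-satellite magnitudes are hypothetical inputs (a real implementation would need a satellite brightness model):

```python
def satellites_to_avoid(predicted_sats, mag_threshold=4.0):
    """Keep only satellites bright enough to matter for avoidance.
    `predicted_sats` is a list of (name, predicted_magnitude) pairs;
    lower magnitude means brighter, so we avoid mag <= threshold.
    The threshold value here is a hypothetical placeholder."""
    return [name for name, mag in predicted_sats if mag <= mag_threshold]

sats = [("sat-A", 2.5), ("sat-B", 5.5), ("sat-C", 7.0)]
print(satellites_to_avoid(sats))  # ['sat-A']: only the bright one is avoided
```

Because the avoidance footprint shrinks with the number of satellites kept, a well-chosen threshold could preserve most of the streak mitigation for bright (saturating) satellites while reclaiming much of the lost observing time.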
We thank the anonymous referee who provided helpful prompt feedback that
improved the clarity of the paper. JH acknowledges support from the Computing
Research Association-Widening Participation (CRA-WP) Distributed Research
Experiences for Undergraduates (DREU) program. JH and MLR are grateful for
LSST Corporation travel support for JH to attend the 2022 Rubin Project and
Community Workshop and present a poster. MLR acknowledges support from Bob
Blum, Leanne Guy, and all of Rubin Operations to spend a fraction of her time
on satellite mitigation work; this study would not have been possible without
formal recognition that the proliferation of bright commercial LEO satellites
poses a threat to LSST science. The authors all wish to thank Tony Tyson for
valuable discussions and feedback that helped place this work in context. This
work was facilitated through the use of advanced computational, storage, and
networking infrastructure provided by the Hyak supercomputer system at the
University of Washington.
## References
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167, doi: 10.3847/1538-4357/ac7c74
* Gillies et al. (2007–) Gillies, S., et al. 2007–, Shapely: manipulation and analysis of geometric objects. https://github.com/Toblerity/Shapely
* Górski et al. (2005) Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759, doi: 10.1086/427976
* Halferty et al. (2022) Halferty, G., Reddy, V., Campbell, T., Battle, A., & Furfaro, R. 2022, MNRAS, 516, 1502, doi: 10.1093/mnras/stac2080
* Hall et al. (2021) Hall, J., Walker, C., Rawls, M., et al. 2021, in Bulletin of the American Astronomical Society, 2.205, doi: 10.3847/25c2cfeb.4554c01f
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hasan et al. (2022) Hasan, I., Tyson, J. A., Saunders, C., & Xin, B. 2022, Astronomy and Computing, 39, 100584, doi: 10.1016/j.ascom.2022.100584
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Ivezić et al. (2019) Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, ApJ, 873, 111, doi: 10.3847/1538-4357/ab042c
* Lawrence et al. (2022) Lawrence, A., Rawls, M. L., Jah, M., et al. 2022, Nature Astronomy, 6, 428, doi: 10.1038/s41550-022-01655-6
* Naghib et al. (2019) Naghib, E., Yoachim, P., Vanderbei, R. J., Connolly, A. J., & Jones, R. L. 2019, AJ, 157, 151, doi: 10.3847/1538-3881/aafece
* Rhodes (2019) Rhodes, B. 2019, Skyfield: High precision research-grade positions for planets and Earth satellites generator, Astrophysics Source Code Library, record ascl:1907.024. http://ascl.net/1907.024
* Tyson et al. (2020) Tyson, J. A., Ivezić, Ž., Bradshaw, A., et al. 2020, AJ, 160, 226, doi: 10.3847/1538-3881/abba3e
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
* Walker et al. (2020a) Walker, C., Hall, J., Allen, L., et al. 2020a, in Bulletin of the American Astronomical Society, Vol. 52, 0206, doi: 10.3847/25c2cfeb.346793b8
* Walker et al. (2020b) Walker, C., Di Pippo, S., Aubé, M., et al. 2020b, Dark & Quiet Skies I (2020), Dark & Quiet Skies I (2020), Report of the conference held 5-9 October, 2020., doi: 10.5281/zenodo.5898785
* Walker et al. (2021) —. 2021, Dark & Quiet Skies II (2021), Dark & Quiet Skies II (2021), Report of the conference held 3-7 October, 2021., doi: 10.5281/zenodo.5874725
* Yoachim et al. (2022a) Yoachim, P., Jones, R. L., Eric H. Neilsen, J., & Ribeiro, T. 2022a, lsst-sims/sims_featureScheduler_runs2.2: Initial release, v1.0.0, Zenodo, doi: 10.5281/zenodo.7150784
* Yoachim et al. (2022b) Yoachim, P., Jones, L., Eric H. Neilsen, J., et al. 2022b, lsst/rubin_sim: 0.12.1, 0.12.1, Zenodo, doi: 10.5281/zenodo.7087823
* Zonca et al. (2019) Zonca, A., Singer, L., Lenz, D., et al. 2019, Journal of Open Source Software, 4, 1298, doi: 10.21105/joss.01298
# Optimizing Stock Option Forecasting
with the Assembly of Machine Learning Models
and Improved Trading Strategies
Zheng Cao1, Raymond Guo2, Wenyu Du3, Jiayi Gao4, and Kirill V. Golubnichiy5
1,2,3,4 University of Washington, Seattle, USA: 1 Department of Mathematics <EMAIL_ADDRESS>; 2 Department of Computer Science & Math <EMAIL_ADDRESS>; 3 Department of Computer Science <EMAIL_ADDRESS>; 4 Academy for Young Scholars <EMAIL_ADDRESS>
5 University of Calgary, Alberta, Canada, Department of Mathematics and Statistics <EMAIL_ADDRESS>
###### Abstract
This paper introduced key aspects of applying Machine Learning (ML) models,
improved trading strategies, and the Quasi-Reversibility Method (QRM) to
optimize stock option forecasting and trading results. It follows up on the
findings from Application of Convolutional Neural Networks with Quasi-
Reversibility Method Results for Option Forecasting[8]. First, the project
included an application of Recurrent Neural Networks (RNN) and Long Short-Term
Memory (LSTM) networks to provide a novel way of predicting stock option
trends. Additionally, it examined the dependence of the ML models by
evaluating the experimental method of combining multiple ML models to improve
prediction results and decision-making. Lastly, two improved trading
strategies and simulated investing results were presented. The Binomial Asset
Pricing Model with discrete time stochastic process analysis and portfolio
hedging was applied and suggested an optimized investment expectation. These
results can be utilized in real-world trading strategies to optimize stock
option investment results based on historical data.
Keywords: Recurrent Neural Network, Binomial Asset Pricing Model, Stochastic
Process, Discrete Optimization, Machine Learning, Quasi-Reversibility Method
## 1 Introduction
In their previous work, Cao, Du, and Golubnichiy [8] developed a five-layer
convolutional neural network that achieved $51.49\%$ accuracy and $57.14\%$
precision in testing. Here, the team expanded upon these results by using an
LSTM for the same purposes and combining various neural networks to improve
the stock option forecasting results. Updated trading strategies and simulated
invested results were also illustrated for further analysis. The limitations
and challenges were listed for future developments.
To produce a more precise forecast of stock option pricing, Klibanov,
Kuzhuget, and Golubnichiy created a new empirical mathematical modeling method
[4]. This could be accomplished using the Black-Scholes (BS) equation with
initial and boundary conditions. The present value was calculated for a
specific period of time in the past using the BS equation [2]. In financial
mathematics, the BS equation is a parabolic partial differential equation used
to price European-style options [6]. In this study, we employed the BS equation
to forecast future option prices. Because forecasting requires solving the
equation in the unstable direction, the problem belongs to the class of
ill-posed problems: those for which a solution either does not exist or is
unstable.
In order to apply regularization to solve the ill-posed problem, the system
must be transformed into linear forms of functional equations.
In our case, $u$ is the minimizer, making it our prediction. We have a vector
$X$, which contains 13 elements, including the previous minimizer $u$ and the
volatility coefficient ${\sigma}$ [7, Chapter 7, Theorem 7.7]:
$\begin{split}&\frac{\partial u(s,\tau)}{\partial\tau}=\frac{{\sigma}^{2}}{2}s^{2}\frac{\partial^{2}u(s,\tau)}{\partial s^{2}},\\ &u(s,0)=f(s).\end{split}$ (1.1)
The payoff function is $f(s)=\max\left(s-K,0\right)$ at $t=T$, where $K$ is
the strike price [7] and $s>0$. The time to maturity at a given time $t$ is
$\tau=T-t.$ (1.2)
The option price function is defined by the Black-Scholes formula:
$u(s,\tau)=s\Phi(\theta_{+}(s,\tau))-e^{-r\tau}K\Phi(\theta_{-}(s,\tau)),$
(1.3)
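Equation (1.3) can be evaluated directly once $\theta_{\pm}$ are specified. The paper does not write them out, so the sketch below assumes the standard Black-Scholes definitions $\theta_{\pm}=\left[\ln(s/K)+(r\pm\sigma^{2}/2)\tau\right]/(\sigma\sqrt{\tau})$:

```python
import math

def norm_cdf(x):
    """Standard normal CDF Phi via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, K, r, sigma, tau):
    """European call price u(s, tau) from equation (1.3), using the
    standard definitions of theta_+/- (an assumption, since the paper
    does not spell them out)."""
    theta_plus = (math.log(s / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    theta_minus = theta_plus - sigma * math.sqrt(tau)
    return s * norm_cdf(theta_plus) - math.exp(-r * tau) * K * norm_cdf(theta_minus)

# At-the-money example: s = K = 100, r = 0, sigma = 0.2, tau = 1 year.
print(round(bs_call(100, 100, 0.0, 0.2, 1.0), 2))  # 7.97
```

With $r=0$ and $s=K$, the price reduces to $s\left(2\Phi(\sigma\sqrt{\tau}/2)-1\right)$, a useful quick check on any implementation.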
Based on the Itô formula, we have:
$du=(-\frac{\partial
u(s,T-t)}{\partial\tau}+\frac{{\sigma}^{2}}{2}s^{2}\frac{\partial^{2}u(s,T-t)}{\partial
s^{2}})dt+\sigma s\frac{\partial u(s,T-t)}{\partial s}dW.$ (1.4)
If equation (1.1) is solved forwards in time to forecast prices of stock
options, it is an ill-posed inverse problem. By solving this equation,
we obtain the minimizer and apply it in the trading strategy to
generate prediction results. After testing on real market data, we proved all
these new mathematical approaches suggest better prediction results which may
help traders make better investment decisions.
The QRM, Binary Classification, and Regression Neural Network Machine Learning
outcomes are summarized in the table below. The percentage of lucrative
alternatives throughout model generations was shown in the Precision column of
the table.
Table 1. Previous Models’ Results
Method | Accuracy | Precision | Recall
---|---|---|---
QRM | 49.77% | 55.77% | 52.43%
Binary Classification | 56.36% | 59.56% | 70.22%
Regression NN | 55.42% | 60.32% | 61.29%
## 2 RNNs and LSTMs
A Recurrent Neural Network (RNN) is a neural network that outputs both a
prediction (the attempted guess for a label) and a hidden state [9]. This
hidden state is passed along to the RNN’s next iteration, where it takes in
both another input and the hidden state generated by the previous iteration.
These hidden states allow RNNs to send themselves data containing information
from previous predictions in order to determine the predictions that should be
made in later iterations. In short, RNNs read through a series of tokens and
have the benefit of being able to "remember past events" in order to make
predictions.
This allows RNNs to be particularly effective when the input comes as a stream
of data, where each datapoint in the stream has a label whose value is
intuitively dependent on previous datapoints and labels. As a result, one of
the main uses of RNNs is in predicting "time-series data," or sequences of
datapoints that come in temporal order. Examples include predicting the
weather, wins or losses of sports teams, or (for our applications) increases
and decreases in stock prices.
In practice, traditional RNNs often have trouble using their hidden state to
remember information over more than a few iterations. The Long Short-Term
Memory network, or LSTM, was created to combat this issue. In every iteration,
it passes itself two different hidden states instead of just one. One of these
hidden states essentially undergoes the traditional processing of a hidden
state in an RNN, and thus serves the role of the short-term memory. On the
other hand, the second hidden state undergoes very few simple alterations in
each iteration, and thus serves as the long-term memory. The creation of this
second hidden state makes LSTMs significantly more likely to have the ability
to remember information for more iterations. Since both short- and long-term
information is necessary to make accurate predictions for option prices, we
use LSTMs to attempt to make more accurate predictions.
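The two-hidden-state mechanism described above can be made concrete with a scalar LSTM cell: the cell state $c$ (long-term memory) is only rescaled and incremented at each step, while the hidden state $h$ (short-term memory) is fully recomputed through the gates. A minimal sketch with hand-picked scalar weights; real models learn vector-valued weight matrices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h, c, W):
    """One scalar LSTM step. W maps each of the input (i), forget (f),
    output (o) gates and the candidate (g) to (weight_x, weight_h, bias)."""
    i = sigmoid(W["i"][0] * x + W["i"][1] * h + W["i"][2])
    f = sigmoid(W["f"][0] * x + W["f"][1] * h + W["f"][2])
    o = sigmoid(W["o"][0] * x + W["o"][1] * h + W["o"][2])
    g = math.tanh(W["g"][0] * x + W["g"][1] * h + W["g"][2])
    c_new = f * c + i * g         # long-term memory: gated copy plus gated write
    h_new = o * math.tanh(c_new)  # short-term memory: fully recomputed
    return h_new, c_new

# With all-zero weights every gate is sigmoid(0) = 0.5 and the candidate is 0,
# so the cell state is simply halved each step: the "slow" long-term channel.
W0 = {k: (0.0, 0.0, 0.0) for k in "ifog"}
h, c = lstm_cell(x=1.0, h=0.0, c=1.0, W=W0)
print(c)  # 0.5
```

Because $c$ only passes through the multiplicative forget gate rather than a squashing nonlinearity, gradients along it decay far more slowly, which is what lets LSTMs retain information over many more iterations than a plain RNN.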
Our architecture in this approach consisted of a two-layer LSTM followed by a
fully connected layer and a sigmoid for classification. Our model was trained
on sequences of 10 data points (10 consecutive days of stock information), fed
in one at a time, with a batch size of 8.
This approach resulted in a $52.08\%$ accuracy in validation.
## 3 ML Models Examination
Based on previous modeling utilizing various machine learning techniques to
help forecast stock options, this section explored experimenting with
combining ML models for better results. The goal was to examine the dependence
and correlation among the architectures of the different ML models.
### 3.1 Combining Predictions from Previous ML Models
The motivation for improving the precision of the stock option predicting
model was to combine the results of all previous models. Under the assumption
that the models were executing independent decisions, the final results were
expected to reach a higher degree of precision when all given models made the
same decision.
Previous machine learning models had produced high precision and a high rate
of profitable stock options, with the percentages for QRM, Binary
Classification, Regression NN, and CNN models being 55.77, 59.56, 60.32, and 57.14
percent, respectively.
In addition, a base-case test was performed by determining the precision when
combining the results from the classification NN and the 10K CNN model. With this
combination, we were able to obtain a precision of 53.9 percent. Although we
reached a precision that was lower than both original approaches, these
results were biased due to the 13th factor, where both models generated
results based on a common feature: the QRM model’s results. This led to the
models lacking independence from each other, which could explain the decrease
in precision.
The table below lists one example of fusing Classification NN and CNN with the
10K model, which involved 10 thousand data rows of stock options.
Table 2. Previous Models and Combined Model Results
Method | Precision
---|---
QRM | 55.77%
Binary Classification | 59.56%
Regression NN | 60.32%
Classification (CNN) | 57.14%
Classification NN + Model10k CNN | 53.9%
### 3.2 Experimental Analysis
This approach was ultimately abandoned because the ML models were
dependent on each other. The combined precision of 53.9% was worse than that of
any of the models separately.
$\begin{split}&P_{1}:=0.56,\\ &P_{2}:=0.59,\\ &P:=\text{Joint Precision},\\
&P=\frac{0.56\cdot 0.59}{0.56\cdot 0.59+(1-0.56)(1-0.59)}\\ &P\approx 0.647\end{split}$
(3.1)
A more general formula is shown below:
$\begin{split}&P_{1}:=\text{Precision of Model 1},\\ &P_{2}:=\text{Precision of Model 2},\\
&P:=\text{Joint Precision},\\
&P=\frac{P_{1}\cdot P_{2}}{P_{1}\cdot P_{2}+(1-P_{1})\cdot(1-P_{2})}\\ \end{split}$ (3.2)
This confirmed the hypothesis that there existed nontrivial correlation among
the ML models and dependence of the prediction results. Given two 100%
independent forecasting results, with 0.56 and 0.59 precision, the expected
combined model should return a value of 0.647.
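Formula (3.2) is just the Bayesian combination of two independent binary signals, and the 0.647 figure is easy to verify numerically:

```python
def joint_precision(p1, p2):
    """Expected precision when two independent models agree, per (3.2)."""
    return (p1 * p2) / (p1 * p2 + (1 - p1) * (1 - p2))

# Two independent models at 0.56 and 0.59 precision:
print(round(joint_precision(0.56, 0.59), 3))  # 0.647
```

The observed 53.9% falls well below this independence benchmark, which is the quantitative basis for concluding the models are dependent.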
## 4 Improved Trading Model
To utilize the ML forecasting results into real-world application, we must
develop trading strategies to help execute investing decisions and simulate
expected returns.
### 4.1 Previous Trading Strategy
Cao, Du, and Golubnichiy introduced a simple and straightforward trading
method in their previous paper [8]. Let us denote $s$ as the stock price, $t$ as
the time, and $\sigma(t)$ as the volatility of the option. We use
the historical implied volatility based on the market data of [2].
Here, we assume that $\sigma=\sigma(t)$ in order to avoid other historical
data for the volatility. Let $u_b(t)$ and $u_a(t)$ denote the bid and ask prices
of the option at time $t$, and $s_b(t)$ and $s_a(t)$ the bid and
ask prices of the stock at time $t$. We buy an
option when the following holds: $EST(\tau)\geq REAL(0)$.
### 4.2 Trading Strategy Advancements
This paper presented new trading strategy advancements in addition to the
previous efforts, including the investing methods and simulated results.
#### 4.2.1 Trading Strategy Simulation
This first segment initiated trading decisions based solely on machine
learning prediction results. The simulated trading applied the previous trading
strategy together with the precision of our estimators, and the resulting
profits are shown in the accompanying graphs.
#### 4.2.2 Binomial Trading Model
This trading strategy is an improved version of the one previously
developed by Dr. Golubnichiy and his colleagues. It was inspired by the
Binomial Asset Pricing Model as introduced by Dr. Shreve in his book,
"Stochastic Calculus for Finance I: The Binomial Asset Pricing Model" [5]. The
goal of this modeling was to compute the expectation of how a given portfolio
of stock options would perform at a certain day, when executing the ML
predicting results.
To begin with, let's define some variables as follows:
* •
ROR: Rate of return
* •
ROL: Rate of loss
* •
p: precision, in $\%$
* •
$M_{k}$: The expected asset we get on day $k$.
* •
$M_{k,x}$: The expected asset for the $x$th scenario on day $k$.
For rate determination, we determined the rates based on the probability and
the magnitude to which the portfolio went up. For the convenience of
programming, we will be setting a fixed probability P, defined by the model
precision.
This analysis begins with the assumption that we have infinite money, and uses
$C_{i}$ as the variable for initial assets. Note that here we utilized the
current CNN precision percentage, $56\%$ precision, as an example to help
demonstrate the process.
After each trading day, each possibility of our assets would turn out to have
two new possible outcomes, $C_{n,1}$ and $C_{n,2}$.
These two values could be reached with probability equal to the precision and
(1 minus the precision) of our ML models, respectively. Although bias and error rates for prediction
accuracy across each company might be different, with enough data, these minor
variances could be balanced out. Furthermore, we could treat all of the
companies as a single entity, as a portfolio, with a 56% chance of increasing
(from CNN), for example.
Therefore, with $56\%$ chance that our prediction to be accurate, we would
have a $44\%$ chance that when we invested into a company or portfolio we
predicted would be profitable, it would instead lead to a loss.
As the binomial tree grows, an increasing number of outcomes becomes
possible after each day: starting from our initial capital $M_{0}$, on the
$k$th day we would have a capital of $M_{k}$. This value was calculated by
multiplying the probabilities $p$ or $1-p$ along the path from the initial day
by the expected capital $m_{k,x}$. Here, $m_{k,x}$ was obtained
by multiplying the value of the preceding node by the ROR.
The ROR, or Rate of Return, would be calculated through the Black-Scholes
model, which predicted the next day’s prices. To get ROR, we summed up all of
the worth of the capital as predicted by the Black-Scholes the next day and
divided by the total capital for today.
For the ROL, it would hence be natural to define it analogously to the ROR.
After the full number of trading days, the final result would be a large list
of possible capitals and their respective probabilities across all the days. We
then multiplied each probability by the corresponding capital and summed up all
of these values. This returns the final expected value of return after a set
number of trading days.
For example, in just one trading day,
the expected value of return would be $C_{n,1}\cdot 56\%+C_{n,2}\cdot 44\%$.
To better see how the steps work across trading days, suppose we
had a predicted precision $P$ of 2/3, an ROR of 2, and an ROL of 0.5. This
was possible given that, when we were merging different combinations, the BS
model would output different values. The entire binomial model was a
martingale, so its expectation could be computed by applying Wald's Theorem.
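The tree expectation described above can be computed either by enumerating all up/down paths or, since the daily factor is i.i.d., via the closed form $C_i\,(p\cdot\mathrm{ROR}+(1-p)\cdot\mathrm{ROL})^{n}$. The sketch below uses the example values from the text ($P=2/3$, ROR $=2$, ROL $=0.5$); the path enumeration itself is our illustration, not the authors' code.

```python
from itertools import product

def expected_capital_tree(c0, p, ror, rol, days):
    """Enumerate every up/down path in the binomial tree and sum the
    probability-weighted terminal capital."""
    total = 0.0
    for path in product((True, False), repeat=days):
        prob, cap = 1.0, c0
        for up in path:
            prob *= p if up else (1 - p)
            cap *= ror if up else rol
        total += prob * cap
    return total

def expected_capital_closed_form(c0, p, ror, rol, days):
    """Same expectation via the i.i.d. daily growth factor."""
    return c0 * (p * ror + (1 - p) * rol) ** days

# Example from the text: P = 2/3, ROR = 2, ROL = 0.5, three trading days.
print(round(expected_capital_tree(1.0, 2/3, 2.0, 0.5, 3), 6))         # 3.375
print(round(expected_capital_closed_form(1.0, 2/3, 2.0, 0.5, 3), 6))  # 3.375
```

With these values the expected daily growth factor is $\tfrac{2}{3}\cdot 2+\tfrac{1}{3}\cdot 0.5=1.5$, so the expectation grows as $1.5^{n}$; note that this particular parameter choice is not a martingale (the factor exceeds 1).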
## 5 Limitations and Future Developments
This section addresses several challenges that occurred during the research.
First, the LSTM was limited in its use case since it was trained solely on
sequences of fixed length (10). This meant the LSTM might perform
unpredictably if it was programmed to take in significantly longer sequences
of tokens.
Additionally, the independent models might be optimized by manipulating the
data so that the models will be independently trained. When independent ML
models are used and combined to predict future trends of stocks and options,
we can expect an increase in precision.
Also, the trading strategies had great potential for development. For each
node in the binomial trading tree, we did not apply advanced techniques such
as the Hamilton–Jacobi–Bellman (HJB) equation to achieve asset hedging based
on their historical data. Instead, we chose to apply a relatively simple and
straightforward method, as introduced earlier. Even though this model returned
a comparatively lower approximation accuracy, it indeed functioned as a more
efficient algorithm for users to calculate expected profit returns based on
the previous models introduced in each section. For future developments, we
would consider applying various hedging models and statistical equations to
program a more accurate model, as suggested by Professor Zhen-Qing Chen.
## 6 Acknowledgement
We are very grateful to Professors Zhen-Qing Chen and Fang Han from the
University of Washington for providing this study with various modeling
techniques and strategy building inspiration and direction. We are also
delighted for the development of QRM and other modeling techniques by
Professor Michael V. Klibanov from University of North Carolina at Charlotte,
Charlotte.
## 7 Summary
In conclusion, the LSTM outperformed the QRM in terms of stock price prediction;
however, its precision lagged behind some earlier models. Combining multiple ML
techniques was shown to be ineffective, suggesting that there existed
nontrivial dependence between the various models. The findings of ML
simulated trading and the Binomial Asset Pricing Model were presented,
demonstrating the advances made during this research stage and the viability
of the investing strategies in the actual U.S. stock option market.
## References
* [1] Bloomberg, Stock and Option Data from late 2019 to early 2021. https://bloomberg.com
* [2] J. C. Hull, Options, Futures, and Other Derivatives, Pearson Education Limited, 2022.
* [3] M. V. Klibanov, A. A. Shananin, K. V. Golubnichiy, and S. M. Kravchenko, Forecasting Stock Options Prices via the Solution of an Ill-Posed Problem for the Black-Scholes Equation, arXiv preprint, arXiv:2202.07174.
* [4] M. V. Klibanov, A. V. Kuzhuget, and K. V. Golubnichiy, An ill-posed problem for the Black-Scholes equation for a profitable forecast of prices of stock options on real market data, Inverse Problems, 32(1), 2016.
* [5] S. E. Shreve, Stochastic Calculus for Finance I: The Binomial Asset Pricing Model, Springer, 2003.
* [6] S. E. Shreve, Stochastic Calculus for Finance II. Continuous - Time Models, Springer, 2003.
* [7] T. Bjork, Arbitrage Theory in Continuous Time, Oxford University Press, 1999.
* [8] Z. Cao, W. Du, K.V. Golubnichiy, Application of Convolutional Neural Networks with Quasi-Reversibility Method Results for Option Forecasting, Computing Conference, 2023.
* [9] Ian Goodfellow and Yoshua Bengio and Aaron Courville, Deep Learning, MIT Press, 2016.
# Branch-Well-Structured Transition Systems and Extensions
Benedikt Bollig (0000-0003-0985-6115), Alain Finkel (0000-0003-0702-3232) and
Amrita Suresh (0000-0001-6819-9093)
Université Paris-Saclay, ENS Paris-Saclay, CNRS, LMF, 91190, Gif-sur-Yvette, France {benedikt.bollig, alain.finkel, <EMAIL_ADDRESS>
Institut Universitaire de France
University of Oxford, UK <EMAIL_ADDRESS>
###### Abstract.
We propose a relaxation to the definition of well-structured transition
systems (WSTS) while retaining the decidability of boundedness and non-
termination. In our class, we ease the well-quasi-ordered (wqo) condition to
be applicable only between states that are reachable one from another.
Furthermore, we also relax the monotony condition in the same way. While this
retains the decidability of non-termination and boundedness, it appears that
the coverability problem is undecidable. To this end, we define a new notion
of monotony, called cover-monotony, which is strictly more general than the
usual monotony and still allows us to decide a restricted form of the
coverability problem.
###### Key words and phrases:
verification and decidability and coverability and termination and well-quasi-
ordering
## 1\. Introduction
A well-structured transition system (WSTS), initially called structured
transition system in [Fin87, Fin90], is a mathematical framework used to model
and analyse the behaviour of concurrent and reactive systems. It provides a
formal representation of the system’s states, and the transitions between
them. The key characteristic of a WSTS is that it exhibits certain properties
that enable algorithmic verification of important system characteristics, such
as non-termination, boundedness, and coverability.
In a WSTS, the system is represented as a (potentially infinite) set of
_states_ , denoted by $X$ equipped with a _transition relation_
${\rightarrow}\subseteq X\times X$. $X$ may represent different
configurations, states of a computation, or states of a concurrent program,
depending on the specific context of the system being modelled. Furthermore,
the states in $X$ are partially ordered by a quasi-ordering relation, denoted
by $\leq$, and $\rightarrow$ fulfils one of various possible monotonies with
respect to $\leq$. To be considered a well-structured transition system, the
partial order $\leq$ on the set of states $X$ must be _well_ , i.e. well-
founded with no infinite antichains (Section 2 formally defines these
notions).
The theory of WSTS, although initially formulated in the late ’80s [Fin87], is
still being widely explored (refer to [FS01, ACJT00] for surveys). Some
notable recent developments in the theory include handling infinite branching
[BFM18], ideal abstraction [ZWH12], extensions over tree behaviours [Sch21,
LS15], etc. Because they provide a powerful framework for reasoning about
infinite-state systems and enable the development of effective algorithms for
automatic verification, WSTS have found applications in various areas,
including, classically, Petri nets and lossy channel systems, but also recent
explorations in graph transformation systems [Öz22], program verification
[FP20], $\mu$-calculus [SG07, KSS04, BS13], cryptographic protocols [DS20],
Presburger counter machines [FG19], among others.
Since its development, a number of relaxations and extensions of the two core
assumptions of WSTS have been studied. Notably, with regards to the monotony,
the notion of transitive, stuttering monotonies has been studied [FS01]. With
regards to the well-quasi-order assumption, [BFM17] has shown that
coverability is still decidable without the assumption of wellness, with only
the absence of infinite antichains. They refer to this class as _well behaved
transition systems_ (WBTS). However, in the same paper, they also show that
this relaxation is too broad for the decidability of boundedness and non-
termination, and show classes of WBTS with undecidable boundedness and non-
termination properties.
In this work, we look at relaxing both the well-quasi-order and the monotony
assumptions, while still ensuring decidable non-termination and boundedness.
More precisely, we introduce the notion of _branch-well-quasi-order_ , which is only required to hold between states that are reachable from one another. Interestingly, branch-wqos share many of the closure properties that wqos enjoy.
Similarly, we also relax the notion of monotony to _branch-monotony_ , which
requires only that the same sequence of actions can be repeated from two
states ordered by $\leq$. We call the systems which satisfy these two
conditions _branch-WSTS_. Branch-WSTS, though a strict superclass of WSTS,
still has decidable boundedness and non-termination, under the same
effectivity conditions.
We show examples of classes of systems studied in the literature, which are
not WSTS, but are branch-WSTS. Notably, non-trivial subclasses of both counter machines and FIFO automata, two classes well known to be able to simulate Turing machines, are branch-WSTS. Moreover, we show that coverability is,
in fact, undecidable for general branch-WSTS. This shows that WBTS and branch-
WSTS are incomparable. Furthermore, we show that branch-monotony can be verified along a single run in order to establish non-termination and
unboundedness. This provides an algorithm to verify these properties for a
large subclass of counter machines and branch-wqo FIFO machines. Moreover, we
see that for FIFO machines and the prefix-ordering, the conditions that relate
to validity of branch-monotony along a single run imply infinite iterability,
as defined in [FP20].
The other major contribution of this work deals with coverability. Contrary to
[BFM17], we relax the notion of monotony, which we call _cover-monotony_ ,
while maintaining the well-quasi-order assumption, such that the resulting
class of systems have decidable coverability. For this class, which we refer
to as _cover-WSTS_ , we show that a restricted version of the coverability
problem is decidable, even in the absence of strong (or strict or transitive
or reflexive) monotony.
A preliminary version of this work has been presented at the 42nd
International Conference on Formal Techniques for Distributed Objects,
Components, and Systems (FORTE 2022). The contributions in this work extend
the conference version as follows:
This work further investigates the classes of branch-WSTS. We explicitly prove
that counter machines with restricted zero tests are branch-monotone.
Furthermore, in this work, we prove the conjecture we had stated in the
conference version claiming normalized input-bounded FIFO machines are branch-
WSTS. We also provide a sufficient condition for boundedness and non-
termination for a large subclass of counter machines and FIFO machines, which
strictly include branch-WSTS. Apart from this, we also provide explicit proofs
and examples of systems which we did not include in the conference version,
along with some additional properties, notably for branch-wqos.
Outline. Section 2 introduces terminology and some well-known results
concerning well-quasi-orderings and well-structured transition systems.
Section 3 defines branch-WSTS, and shows that both the boundedness and the
non-termination problems are decidable for such systems. Section 4 provides
some examples of branch-WSTS as well as provides a sufficient condition for
unboundedness and non-termination for a large class of counter and FIFO
machines (strictly including branch-WSTS). Section 5 investigates the
coverability problem for WSTS with relaxed conditions. We conclude in Section
6.
## 2\. Preliminaries
We denote the set of natural numbers by $\mathbb{N}$ and the standard ordering on $\mathbb{N}$ by $\leq$. Given a set $A$, we denote by $|A|$ the number of distinct elements in $A$.
Let $\Sigma$ be a finite alphabet, and $\Sigma^{*}$ be the set of finite words
over $\Sigma$. We denote the empty word by $\varepsilon$. The concatenation of two words $u,v\in\Sigma^{*}$ is denoted by $u\cdot v$, or $u.v$, or simply $uv$.
Given $a\in\Sigma$ and $w\in\Sigma^{*}$, we let $|w|_{a}$ denote the number of
occurrences of $a$ in $w$. With this, we let
$\mathit{Alph}(w)=\\{a\in\Sigma\mid|w|_{a}\geq 1\\}$. A word $u\in\Sigma^{*}$
is a prefix (resp. suffix) of $w\in\Sigma^{*}$ if $w=u\cdot v$ (resp.
$w=v\cdot u$) for some $v\in\Sigma^{*}$. The sets of prefixes and suffixes of
$w\in\Sigma^{*}$ are denoted by $\mathit{Pref}(w)$ and $\mathit{Suf}(w)$
respectively. We denote the prefix ordering on words over an alphabet $\Sigma$
by $\preceq$. More specifically, for two words $u,w\in\Sigma^{*}$, we say
$u\preceq w$ if $u$ is a prefix of $w$.
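The word notations above can be sketched directly, assuming words are plain Python strings (the helper names `count`, `alph`, `is_prefix`, and `prefixes` are ours, not from the paper):

```python
def count(w: str, a: str) -> int:
    """|w|_a: the number of occurrences of letter a in w."""
    return w.count(a)

def alph(w: str) -> set:
    """Alph(w): the set of letters occurring at least once in w."""
    return {a for a in w if count(w, a) >= 1}

def is_prefix(u: str, w: str) -> bool:
    """u <= w in the prefix ordering iff w = u.v for some word v."""
    return w.startswith(u)

def prefixes(w: str) -> list:
    """Pref(w), from the empty word up to w itself."""
    return [w[:i] for i in range(len(w) + 1)]
```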
### 2.1. Well-structured transition systems
We define some preliminary notions associated with well-structured transition
systems.
Quasi-orderings. Let $X$ be a set and ${\leq}\subseteq X\times X$ be a binary
relation over $X$, which we also write as $(X,\leq)$. We call $\leq$ a _quasi
ordering (qo)_ if it is reflexive and transitive. As usual, we call $\leq$ a
_partial ordering_ if it is a quasi-ordering and anti-symmetric (if $x\leq y$
and $y\leq x$, then $x=y$). We write $x<y$ if $x\leq y$ and $y\not\leq x$. If
$\leq$ is a partial ordering, $x<y$ is then equivalent to $x\leq y$ and $x\neq
y$.
To any $x\in X$, we associate the sets
$\mathord{\uparrow}x\stackrel{{\scriptstyle\scriptscriptstyle\text{def}}}{{=}}\\{y\mid
x\leq y\\}$ and
$\mathord{\downarrow}x\stackrel{{\scriptstyle\scriptscriptstyle\text{def}}}{{=}}\\{y\mid
y\leq x\\}$. Moreover, for $A\subseteq X$, we let
$\mathord{\uparrow}A\stackrel{{\scriptstyle\scriptscriptstyle\text{def}}}{{=}}\bigcup_{x\in
A}\mathord{\uparrow}x$ and
$\mathord{\downarrow}A\stackrel{{\scriptstyle\scriptscriptstyle\text{def}}}{{=}}\bigcup_{x\in
A}\mathord{\downarrow}x$. We say that a set $A$ is _upward closed_ if
$A=\mathord{\uparrow}A$. Similarly, $A$ is _downward closed_ if
$A=\mathord{\downarrow}A$. A _basis_ of an upward-closed set $A$ is a set
$B\subseteq X$ such that $A=\mathord{\uparrow}B$.
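Over a finite quasi-ordered universe, the closure operators can be computed naively; the sketch below (helper names are ours) takes the ordering `le` as a parameter:

```python
def up(A, X, le):
    """The upward closure of A in X: all y with x <= y for some x in A."""
    return {y for y in X for x in A if le(x, y)}

def down(A, X, le):
    """The downward closure of A in X: all y with y <= x for some x in A."""
    return {y for y in X for x in A if le(y, x)}

def is_upward_closed(A, X, le):
    """A is upward closed iff A equals its own upward closure."""
    return up(A, X, le) == set(A)
```

For instance, with divisibility as the quasi-ordering on $\{1,\ldots,10\}$, the set $\{3\}$ has upward closure $\{3,6,9\}$, which is a basis of that upward-closed set.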
We say $(X,\leq)$ is _well-founded_ if there is no infinite strictly
decreasing sequence of elements of $X$. An _antichain_ is a subset $A\subseteq
X$ such that any two distinct elements in the subset are incomparable, i.e.,
for every distinct $x,y\in A$, we have $x\not\leq y$ and $y\not\leq x$. For
example, consider the alphabet $\Sigma=\\{a,b\\}$. There exists an infinite
antichain $\\{b,ab,aab,...\\}$ with respect to the prefix ordering over
$\Sigma^{\ast}$.
An _ideal_ is a downward-closed set $I\subseteq X$ that is also _directed_ ,
i.e., it is non-empty and, for every $x,y\in I$, there exists $z\in I$ such
that $x\leq z$ and $y\leq z$. The set of ideals is denoted by
$\textsf{Ideals}(X)$.
Well-quasi-orderings. For the following definitions, we fix a qo $(X,\leq)$.
When a qo satisfies some additional property, we call it a well-quasi-
ordering:
A _well-quasi-ordering (wqo)_ is a qo $(X,\leq)$ such that every infinite
sequence $x_{0},x_{1},x_{2},\ldots$ over $X$ contains an _increasing pair_ ,
i.e., there are $i<j$ such that $x_{i}\leq x_{j}$.
For example, the set of natural numbers $\mathbb{N}$, together with the standard ordering $\leq$, is a wqo. Moreover, $(\mathbb{N}^{k},\leq)$, i.e. the set of vectors of $k\geq 1$ natural numbers with component-wise ordering, is a wqo [Dic13]. On the other hand, the prefix ordering on words over an alphabet $\Sigma$ is not a wqo (if $|\Sigma|>1$) since it contains infinite antichains: in the infinite sequence $b,ab,a^{2}b,a^{3}b,\ldots,a^{n}b,\ldots$, we have $a^{i}b\not\preceq a^{j}b$ for all $i<j$ with $i,j\in\mathbb{N}$.
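The increasing-pair condition can be tested on any finite prefix of a sequence; such a check can only refute wqo-ness by exhibiting an antichain, never certify it. A sketch, with orderings written as plain predicates (names ours):

```python
# Scan a finite sequence for indices i < j with x_i <= x_j.
def increasing_pair(seq, le):
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if le(seq[i], seq[j]):
                return (i, j)
    return None

# Component-wise ordering on N^2: a wqo by Dickson's lemma.
leq_vec = lambda u, v: all(a <= b for a, b in zip(u, v))
# Prefix ordering on words: not a wqo once the alphabet has two letters.
leq_pref = lambda u, w: w.startswith(u)
```

On the sequence `["b", "ab", "aab"]` the prefix ordering yields no increasing pair, reflecting the antichain $b,ab,a^{2}b,\ldots$ from the text.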
In general, for a qo, upward-closed sets do not necessarily have a _finite_
basis. However, from [Hig52], we know that every upward-closed set in a wqo
has a finite basis.
We have the following equivalent characterization of wqos.
###### Proposition 1 ([ER56, SS12]).
The following are equivalent, given a qo $(X,\leq)$:
1. (1)
$(X,\leq)$ is a wqo.
2. (2)
Every infinite sequence in $X$ has an infinite increasing subsequence with respect to $\leq$.
3. (3)
Every upward-closed non-empty subset in $X$ has a finite basis.
4. (4)
$(X,\leq)$ is well-founded and contains no infinite antichain.
On the other hand, downward-closed subsets of qos enjoy the following
property:
###### Proposition 2 ([ER56]).
A qo $(X,\leq)$ contains no infinite antichain iff every downward-closed set
decomposes into a finite union of ideals.
The above proposition is useful to design the forward coverability algorithm
that will be described in Section 2.2.
Next, we introduce transition systems:
Transition systems. A _transition system_ is a pair
$\mathcal{S}=(X,\xrightarrow{})$ where $X$ is the set of states and
${\xrightarrow{}}\subseteq X\times X$ is the transition relation. We write
$x\xrightarrow{}{}y$ for $(x,y)\in{\xrightarrow{}}$. Moreover, we let
$\xrightarrow{*}$ be the transitive and reflexive closure of the relation
$\xrightarrow{}$, and $\xrightarrow{+}$ be the transitive closure of
$\xrightarrow{}$.
Given a state $x\in X$, we write $\textsf{Post}_{\mathcal{S}}(x)=\\{y\in X\mid
x\xrightarrow{}{}y\\}$ for the set of immediate successors of $x$. Similarly,
$\textsf{Pre}_{\mathcal{S}}(x)=\\{y\in X\mid y\xrightarrow{}{}x\\}$ denotes
the set of its immediate predecessors.
We call $\mathcal{S}$ _finitely branching_ if, for all $x\in X$, the set
$\textsf{Post}_{\mathcal{S}}(x)$ is finite. The _reachability set_ of
$\mathcal{S}$ from $x\in X$ is defined as
$\textsf{Post}_{\mathcal{S}}^{*}(x)=\\{y\in X\mid x\xrightarrow{*}y\\}$. Note
that, when $\mathcal{S}$ is clear from the context, we may drop the subscript
and write, e.g., $\textsf{Post}^{*}(x)$. We say that a state $y$ is reachable
from $x$ if $y\in\textsf{Post}^{*}(x)$.
We recall that the _reachability tree_ from an initial state $x_{0}$ in
$\mathcal{S}$ is a tree with a root node labelled by $x_{0}$. Then, for all
$y$ such that $x_{0}\xrightarrow{}y$, we add a vertex labelled with $y$ and
add an edge from the root node to the node labelled with $y$. We then compute
$\textsf{Post}(y)$ and once again add vertices labelled with each state in
$\textsf{Post}(y)$. We repeat this for every vertex along the tree. Note that
we can have multiple vertices labelled with the same state, and moreover, the
reachability tree can be infinite even if the reachability set from the
initial state is not.
A _(well-)ordered transition system_ is a triple
$\mathcal{S}=(X,\xrightarrow{},\leq)$ consisting of a transition system
$(X,\xrightarrow{})$ equipped with a qo (resp., wqo) $(X,\leq)$. An ordered
transition system $\mathcal{S}=(X,\xrightarrow{},\leq)$ is _effective_ if
$\leq$ and $\xrightarrow{}$ are decidable. We say that a state $y$ is
_coverable_ from $x$ if $y\in\mathord{\downarrow}\textsf{Post}^{*}(x)$.
###### Definition ([Fin90]).
A _well-structured transition system (WSTS)_ is a well-ordered transition system $\mathcal{S}=(X,\xrightarrow{},\leq)$ that satisfies (general) _monotony_ : for all $x,y,x^{\prime}\in X$, we have: $x\leq y\land x\xrightarrow{}x^{\prime}\implies\exists y^{\prime}\in X\textup{: }x^{\prime}\leq y^{\prime}\land y\xrightarrow{*}y^{\prime}.$
We define other types of monotony. We say that a well-ordered transition
system $\mathcal{S}=(X,\xrightarrow{},\leq)$ satisfies _strong monotony_
(resp., _transitive monotony_) if, for all $x,y,x^{\prime}\in X$ such that
$x\leq y$ and $x\xrightarrow{}x^{\prime}$, there is $y^{\prime}\in X$ such
that $x^{\prime}\leq y^{\prime}$ and $y\xrightarrow{}y^{\prime}$ (resp.,
$y\xrightarrow{+}y^{\prime}$). The transition system $\mathcal{S}$ satisfies
_strict monotony_ if, for all $x,y,x^{\prime}\in X$ such that $x<y$ and
$x\xrightarrow{}x^{\prime}$, there is $y^{\prime}\in X$ such that
$x^{\prime}<y^{\prime}$ and $y\xrightarrow{}y^{\prime}$.
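On a finite ordered transition system, strong monotony can be checked by brute force; the following sketch (names ours) quantifies exactly as in the definition:

```python
def strong_monotone(X, steps, le):
    """For every step x -> x' and every y with x <= y,
    some single step y -> y' must exist with x' <= y'."""
    return all(
        any((y, yp) in steps and le(xp, yp) for yp in X)
        for (x, xp) in steps   # every transition x -> x'
        for y in X             # every y dominating x
        if le(x, y)
    )
```

On $X=\{0,1,2,3\}$ with steps $\{(0,1),(1,2),(2,3),(3,3)\}$ and the usual ordering the check succeeds; dropping the self-loop at $3$ makes it fail, since the step $2\rightarrow 3$ cannot be matched from $y=3$.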
### 2.2. Decision problems for transition systems.
We recall the following well-known decision problems.
Given an ordered transition system $\mathcal{S}=(X,\xrightarrow{},\leq)$ and
an initial state $x_{0}\in X$:
* •
_The non-termination problem_ : Is there an infinite sequence of states
$x_{1},x_{2},\ldots$ such that
$x_{0}\xrightarrow{}x_{1}\xrightarrow{}x_{2}\xrightarrow{}\ldots$ ?
* •
_The boundedness problem_ : Is $\textsf{Post}^{*}_{\mathcal{S}}(x_{0})$
finite?
* •
_The coverability problem_ : Given a state $y\in X$, is $y$ coverable from
$x_{0}$?
It is folklore [Fin90, FS01] that non-termination is decidable for finitely
branching WSTS with transitive monotony and that boundedness is decidable for
finitely branching WSTS $\mathcal{S}=(X,\xrightarrow{},\leq)$ where $\leq$ is
a partial ordering and $\xrightarrow{}$ is strictly monotone. In both cases,
we suppose that the WSTS are effective and that $\textsf{Post}(x)$ is
computable, for all $x\in X$.
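These folklore results rest on exploring the reachability tree and cutting a branch as soon as the current state dominates an ancestor; for a finitely branching WSTS, the wqo property and König's lemma make the pruned tree finite. A sketch of the subsumption search (the depth bound is our simplification for the demo; the actual algorithms need no bound):

```python
# Subsumption search in the reachability tree: stop a branch when the
# current state dominates an ancestor. With transitive monotony, finding
# ancestor <= state witnesses an infinite run (non-termination); with
# strict monotony, ancestor < state witnesses unboundedness.

def find_subsumption(post, le, x0, depth):
    """DFS up to `depth`; return a pair (ancestor, state) with ancestor <= state."""
    stack = [(x0, [])]                 # (state, list of its ancestors)
    while stack:
        x, anc = stack.pop()
        for y in anc:
            if le(y, x):
                return (y, x)
        if len(anc) < depth:
            for x2 in post(x):
                stack.append((x2, anc + [x]))
    return None
```

For the one-counter system with `post(n) = [n + 1]` on $\mathbb{N}$, the very first step already subsumes the root, witnessing an infinite run.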
Coverability problem. We recall that coverability is decidable for a large
class of WSTS:
###### Theorem 3 ([FS01, ACJT00]).
The coverability problem is decidable for effective WSTS
$\mathcal{S}=(X,\xrightarrow{},\leq)$ equipped with an algorithm that, for all
finite subsets $I\subseteq X$, computes a finite basis $\textsf{pb}(I)$ of
$\mathord{\uparrow}\textsf{Pre}(\mathord{\uparrow}I)$.
Assume $\mathcal{S}=(X,\xrightarrow{},\leq)$ is a WSTS and $x\in X$ is a
state. The _backward coverability algorithm_ involves computing (a finite
basis of) $\textsf{Pre}^{*}(\mathord{\uparrow}x)$ as the limit of the infinite
increasing sequence
$\mathord{\uparrow}I_{0}\subseteq\mathord{\uparrow}I_{1}\subseteq\ldots$ where
$I_{0}=\\{x\\}$ and
$I_{n+1}\stackrel{{\scriptstyle\scriptscriptstyle\text{def}}}{{=}}I_{n}\cup\textsf{pb}(I_{n})$.
Since there exists an integer $k$ such that
$\mathord{\uparrow}I_{k+1}=\mathord{\uparrow}I_{k}$, the finite set $I_{k}$ is
computable (one may test, for all $n$, whether
$\mathord{\uparrow}I_{n+1}=\mathord{\uparrow}I_{n}$) and $I_{k}$ is then a
finite basis of $\textsf{Pre}^{*}(\mathord{\uparrow}x)$ so one deduces that
coverability is decidable.
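The backward algorithm can be instantiated, as a sketch, on a toy vector addition system, a classical WSTS: states are vectors in $\mathbb{N}^{2}$, each transition adds a fixed vector $t$, and the single vector $\max(m-t,\bar{0})$ (component-wise) is a finite basis of the predecessors of $\mathord{\uparrow}m$ under $t$. This choice of pb is specific to our toy model:

```python
def coverable(trans, x0, target):
    le = lambda u, v: all(a <= b for a, b in zip(u, v))
    basis = {target}                       # finite basis of the current up-set
    while True:
        # pb(basis): one basis element max(m - t, 0) per (m, transition t)
        new = {tuple(max(m - t, 0) for m, t in zip(b, tr))
               for b in basis for tr in trans}
        nxt = basis | new
        # keep only minimal elements: a smaller basis of the same up-set
        nxt = {b for b in nxt if not any(le(c, b) and c != b for c in nxt)}
        if nxt == basis:                   # fixpoint: basis of Pre*(up(target))
            return any(le(b, x0) for b in basis)
        basis = nxt
```

With the transitions $\{(-1,1),(1,-1)\}$, which preserve the sum of the two components, $(0,1)$ is coverable from $(1,0)$ but $(2,0)$ is not.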
Coverability can be also decided by using the _forward coverability algorithm_
that relies on two semi-decision procedures (as described below). It relies on
Proposition 2, and enumerates finite unions of ideals composing inductive
invariants. It shows that the “no infinite antichain” property is sufficient
to decide coverability, hence the wqo hypothesis is not necessary. It applies
to the class of _well behaved transition systems_ , which are more general
than WSTS. A well behaved transition system (WBTS) is an ordered transition
system $\mathcal{S}=(X,\xrightarrow{},\leq)$ with monotony such that
$(X,\leq)$ contains no infinite antichain. We describe effectiveness
hypotheses that allow manipulating downward-closed sets in WBTS.
###### Definition ([BFM17]).
A class $C$ of WBTS is _ideally effective_ if, given $\mathcal{S}=(X,\xrightarrow{}{},\leq)\in C$,
* •
the set of encodings of $\textsf{Ideals}(X)$ is recursive;
* •
the function mapping the encoding of a state $x\in X$ to the encoding of the
ideal $\mathord{\downarrow}{x}\in\textsf{Ideals}(X)$ is computable;
* •
inclusion of ideals of $X$ is decidable;
* •
the downward closure $\mathord{\downarrow}{\textsf{Post}(I)}$ expressed as a
finite union of ideals is computable from the ideal $I\in\textsf{Ideals}(X)$.
###### Theorem 4 ([BFM17]).
The coverability problem is decidable for ideally effective WBTS.
The result is derived from the design of two semi-decision procedures where
downward-closed sets are represented by their finite decomposition in ideals
and this is effective. Procedure 1 checks for coverability of $y$ from
$x_{0}$, by recursively computing $\mathord{\downarrow}x_{0}$,
$\mathord{\downarrow}(\mathord{\downarrow}x_{0}\cup\textsf{Post}(\mathord{\downarrow}x_{0}))$
and so on. This procedure terminates only if $y$ belongs to one of these sets,
hence it terminates if $y$ is coverable.
Procedure 1 : Checks for coverability
input: $\mathcal{S}=(X,\xrightarrow{},\leq)$ and $x_{0},y$
$D:=\mathord{\downarrow}x_{0}$
while $y\notin D$ do
$D:=\mathord{\downarrow}(D\cup\textsf{Post}_{\mathcal{S}}(D))$
end while
return “$y$ is coverable from $x_{0}$”
Hence, we deduce:
###### Proposition 5 ([BFM17]).
For an ideally effective WBTS $\mathcal{S}=(X,\xrightarrow{},\leq)$, an
initial state $x_{0}$, and a state $y$, Procedure 1 terminates iff $y$ is
coverable from $x_{0}$.
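Procedure 1 becomes very concrete on the state space $(\mathbb{N},\leq)$, where every proper downward-closed set is an interval $[0,m]$ and can be represented by its maximum $m$. The sketch below assumes a deterministic, monotone successor function `post_max` on maxima, so that $\mathord{\downarrow}(D\cup\textsf{Post}(D))$ is again an interval; the fuel bound is ours, standing in for the fact that the procedure need not terminate when $y$ is not coverable:

```python
def procedure1(post_max, x0, y, fuel=1000):
    m = x0                        # D = [0, m], initially down(x0)
    while y > m:                  # loop while y is not yet in D
        m = max(m, post_max(m))   # D := down(D + Post(D))
        fuel -= 1
        if fuel == 0:
            return None           # gave up: Procedure 1 need not terminate
    return True                   # "y is coverable from x0"
```

With `post_max = lambda m: m + 1` (a counter that always increments), any target is eventually covered; with `post_max = lambda m: m` (a self-loop), no progress is made and the fuel runs out, matching Proposition 5: termination exactly in the coverable case.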
Procedure 2 enumerates all downward-closed subsets (by means of their finite
decomposition in ideals) in some fixed order $D_{1},D_{2},\ldots$ We remark
that this enumeration is effective since $\mathcal{S}$ is ideally effective.
Furthermore, it checks for all $i$, $D_{i}\subseteq X$ and
$\mathord{\downarrow}\textsf{Post}(D_{i})\subseteq D_{i}$, and if such a set
$D_{i}$ contains $x_{0}$ but not $y$. If it does contain $x_{0}$, it is an
over-approximation of $\textsf{Post}^{*}(x_{0})$. Hence, if there is such a
set $D_{i}$ with $x_{0}\in D_{i}$ but $y\notin D_{i}$, it is a certificate of
non-coverability. Moreover, this procedure terminates if $y$ is non-coverable
because $\mathord{\downarrow}\textsf{Post}^{*}(x_{0})$ is such a set, and
hence, will eventually be found.
Procedure 2 : Checks for non-coverability
input: $\mathcal{S}=(X,\xrightarrow{},\leq)$ and $x_{0},y$
$\textbf{enumerate}\text{ downward-closed sets }D_{1},D_{2},\ldots$
$i:=1$
while $\neg(\mathord{\downarrow}\textsf{Post}(D_{i})\subseteq D_{i}\wedge x_{0}\in D_{i}\wedge y\notin D_{i})$ do
$i:=i+1$
end while
return false
Therefore, we have:
###### Proposition 6 ([BFM17]).
For a WBTS $\mathcal{S}=(X,\xrightarrow{},\leq)$, an initial state $x_{0}$ and
a state $y$, Procedure 2 terminates iff $y$ is not coverable from $x_{0}$.
### 2.3. Labelled transition systems
Next, we define labelled transition systems, which are transition systems
where the transitions are equipped with labels.
A _labelled transition system (LTS)_ is a tuple
$\mathcal{S}={(X,\Sigma,\xrightarrow{},x_{0})}$ where $X$ is the set of
states, $\Sigma$ is the finite action alphabet,
${\xrightarrow{}}\subseteq{X\times\Sigma\times X}$ is the transition relation,
and $x_{0}\in X$ is the initial state.
A _quasi-ordered labelled transition system (OLTS)_ is defined as a tuple
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ where
$(X,\Sigma,\xrightarrow{},x_{0})$ is an LTS and $(X,\leq)$ is a qo.
In the case of an LTS or OLTS, we write $x\xrightarrow{a}x^{\prime}$ instead
of $(x,a,x^{\prime})\in{\to}$. For $\sigma\in\Sigma^{\ast}$,
$x\xrightarrow{\sigma}x^{\prime}$ is defined as expected. We also let
$x\xrightarrow{}x^{\prime}$ if $(x,a,x^{\prime})\in{\to}$ for some
$a\in\Sigma$, with closures $\xrightarrow{*}$ and $\xrightarrow{+}$. We let $\textsf{Traces}(\mathcal{S})=\\{w\in\Sigma^{*}\mid x_{0}\xrightarrow{w}x\text{ for some }x\in X\\}$ be the set of traces.
We call an OLTS $\mathcal{S}$ _effective_ if $\leq$ and, for all $a\in\Sigma$,
$\xrightarrow{a}$ are decidable.
###### Remark 7.
We can similarly define a labelled WSTS as an OLTS such that the ordering is
well and it satisfies the general monotony condition (canonically adapted to
take care of the transition labels). Moreover, we lift the decision problems
from Definition 2.2 to OLTS in the obvious way.
### 2.4. Classes of labelled transition systems
In this paper, we mainly study subclasses of two well-known classes of
transition systems, namely counter machines, and FIFO machines. Both these
classes are known to simulate Turing machines, and hence, have undecidable
boundedness and non-termination.
Counter machines. Counter machines, also known as Minsky machines, are finite-state machines that manipulate counters, which are variables that store non-negative integers. A transition of a counter machine, besides changing the control-state, performs an operation on a counter (increment by one, decrement by one, or no operation), possibly together with a set of counters that are tested for zero. We formally define a counter machine below.
A _counter machine_ (with zero tests) is a tuple $\mathcal{C}=(Q,V,T,q_{0})$.
Here, $Q$ is the finite set of _control-states_ and $q_{0}\in Q$ is the
_initial control-state_. Moreover, $V$ is a finite set of _counters_ and
$T\subseteq Q\times\textsf{Act}_{\mathcal{C}}\times Q$ is the transition
relation where
$\textsf{Act}_{\mathcal{C}}=\\{\textsf{inc}(v),\textsf{dec}(v),\textsf{noop}\mid
v\in V\\}\times 2^{V}$ (an element of $2^{V}$ indicates the set of counters to be tested for $0$).
The counter machine $\mathcal{C}$ induces an LTS
$\mathcal{S}_{\mathcal{C}}=(X_{\mathcal{C}},\textsf{Act}_{\mathcal{C}},\xrightarrow{}_{\mathcal{C}},x_{0})$
with set of states $X_{\mathcal{C}}=Q\times\mathbb{N}^{V}$. In $(q,\ell)\in
X_{\mathcal{C}}$, the first component $q$ is the current control-state and
$\ell=(\ell_{v})_{v\in V}$ represents the counter values. The initial state is
then $x_{0}=(q_{0},\ell_{0})$ with $\ell_{0}=(0,0,\ldots,0)$. For
$op\in\\{\mathsf{inc},\mathsf{dec}\\}$, $v\in V$, and $Z\subseteq V$ ($Z$ is
the set of counters tested for zero), there is a transition
$(q,\ell)\xrightarrow{op(v),Z}_{\mathcal{C}}(q^{\prime},m)$ if
$(q,(op(v),Z),q^{\prime})\in T$, $\ell_{v^{\prime}}=0$ for all $v^{\prime}\in
Z$, $m_{v}=\ell_{v}+1$ if $op=\mathsf{inc}$ and $m_{v}=\ell_{v}-1$ if
$op=\mathsf{dec}$, and $m_{v^{\prime}}=\ell_{v^{\prime}}$ for all
$v^{\prime}\in V\setminus\\{v\\}$.
For $op=\textsf{noop}$, and $Z\subseteq V$, there is a transition
$(q,\ell)\xrightarrow{op,Z}_{\mathcal{C}}(q^{\prime},m)$ if
$(q,(op,Z),q^{\prime})\in T$, $\ell_{v^{\prime}}=0$ for all $v^{\prime}\in Z$
(applies the zero tests), and $m_{v}=\ell_{v}$ for all $v\in V$. We sometimes
omit writing noop and label the transition with only the set of counters to be tested for zero, or we write $\textsf{zero}(Z)$. Similarly, we omit $Z$ if $Z=\emptyset$.
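The induced LTS can be sketched as a successor function. The encoding below is our own (operations as tagged tuples, $Z$ as a list, counter valuations as dictionaries), and we read the $\mathbb{N}^{V}$ state space as blocking decrements at zero:

```python
def step(transitions, state):
    """All immediate successors (Post) of `state` = (q, counters)."""
    q, ell = state
    out = []
    for (p, (op, Z), p2) in transitions:
        if p != q or any(ell[c] != 0 for c in Z):
            continue  # wrong control-state, or a zero test fails
        m = dict(ell)
        if op[0] == "inc":
            m[op[1]] += 1
        elif op[0] == "dec":
            if m[op[1]] == 0:
                continue  # states live in N^V: decrement blocks at zero
            m[op[1]] -= 1
        # op == ("noop",): counters unchanged
        out.append((p2, m))
    return out
```

For a machine that increments `v` in a self-loop and moves to `q1` on a zero test of `v`, the state `("q0", {"v": 0})` has both successors, while `("q0", {"v": 1})` can only keep incrementing.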
FIFO machines. Next, we study FIFO machines. A FIFO machine can be viewed as a finite-state machine equipped with one or more (potentially unbounded) channels to which messages can be sent and from which they can be received. [BZ83] showed that
general FIFO machines can simulate Turing machines, and hence, it is
undecidable to verify boundedness. However, it is still a widely used model to
represent asynchronously communicating systems.
A FIFO machine $\mathcal{M}$ over the set of channels Ch is a tuple
$\mathcal{M}=(Q,A,T,q_{0})$ where $Q$ is a finite set of control-states, $A$
is the finite message alphabet, and $q_{0}\in Q$ is an initial control-state.
Moreover, $T\subseteq Q\times\textsf{Ch}\times\\{!,?\\}\times A\times Q$ is
the transition relation, where $\textsf{Ch}\times\\{!\\}\times A$ and
$\textsf{Ch}\times\\{?\\}\times A$ are the set of send and receive actions,
respectively.
The FIFO machine $\mathcal{M}$ induces the LTS
$\mathcal{S}_{\mathcal{M}}=(X_{\mathcal{M}},\Sigma_{\mathcal{M}},\xrightarrow{}_{\mathcal{M}},x_{0})$.
Its set of states is $X_{\mathcal{M}}=Q\times(A^{*})^{\textsf{Ch}}$. In
$(q,\mathbf{w})\in X_{\mathcal{M}}$, the first component $q$ denotes the
current control-state, and
$\mathbf{w}=(\mathbf{w}_{\textsf{c}})_{\textsf{c}\in\textsf{Ch}}$ denotes the
contents $\mathbf{w}_{c}\in A^{*}$ for every channel
$\textsf{c}\in\textsf{Ch}$. The initial state is
$x_{0}=(q_{0},\mathbf{\varepsilon})$, where $\mathbf{\varepsilon}$ denotes
that every channel is empty. Moreover,
$\Sigma_{\mathcal{M}}=\textsf{Ch}\times\\{!,?\\}\times A$. The transitions are
given as follows:
* •
$(q,\mathbf{w})\xrightarrow{\textsf{c}!a}_{\mathcal{M}}(q^{\prime},\mathbf{w^{\prime}})$
if $(q,\textsf{c}!a,q^{\prime})\in T$,
$\mathbf{w^{\prime}}_{\textsf{c}}=\mathbf{w}_{\textsf{c}}\cdot a$, and
$\mathbf{w^{\prime}}_{\textsf{d}}=\mathbf{w}_{\textsf{d}}$ for all
$\textsf{d}\in\textsf{Ch}\setminus\\{\textsf{c}\\}$.
* •
$(q,w)\xrightarrow{\textsf{c}?a}_{\mathcal{M}}(q^{\prime},w^{\prime})$ if
$(q,\textsf{c}?a,q^{\prime})\in T$,
$\mathbf{w}_{\textsf{c}}=a\cdot\mathbf{w^{\prime}}_{\textsf{c}}$, and
$\mathbf{w^{\prime}}_{\textsf{d}}=\mathbf{w}_{\textsf{d}}$ for all
$\textsf{d}\in\textsf{Ch}\setminus\\{\textsf{c}\\}$.
The index $\mathcal{M}$ may be omitted whenever $\mathcal{M}$ is clear from
the context. When there is no ambiguity, we identify machines with their associated LTS.
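For a single channel, the induced LTS is easy to sketch: the channel content is a word $w$, a send appends a letter, and a receive fires only if $w$ starts with the expected letter. The transition encoding and the toy machine below are our own:

```python
def fifo_step(transitions, state):
    """All immediate successors (Post) of `state` = (q, w)."""
    q, w = state
    out = []
    for (p, act, a, p2) in transitions:
        if p != q:
            continue
        if act == "!":                        # send: append to the channel
            out.append((p2, w + a))
        elif act == "?" and w.startswith(a):  # receive: consume the head
            out.append((p2, w[len(a):]))
    return out

# A toy machine: loop sending a in q0, or send b once and move to q1.
M = [("q0", "!", "a", "q0"), ("q0", "!", "b", "q1")]
```

From `("q0", "aa")` the machine can reach `("q0", "aaa")` or `("q1", "aab")`, and `q1` is a dead end.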
Note that, in general, FIFO machines with a single channel are as powerful as
FIFO machines with multiple channels. When we denote FIFO machines with a
single channel, we omit specifying the set Ch, and hence, we denote the set of
labels as $\Sigma_{\mathcal{M}}=\\{!,?\\}\times A$.
For FIFO machines, we define the extended prefix ordering, denoted by
$\leq_{p}$ as follows:
$(q,\mathbf{w})\leq_{p}(q^{\prime},\mathbf{w^{\prime}})$ if $q=q^{\prime}$ and
for all $\textsf{c}\in\textsf{Ch}$, $\mathbf{w}_{\textsf{c}}$ is a prefix of
$\mathbf{w^{\prime}}_{\textsf{c}}$, i.e.,
$\mathbf{w}_{\textsf{c}}\preceq\mathbf{w^{\prime}}_{\textsf{c}}$.
## 3\. Branch-well-structured transition systems
In this section, we generalize wqo and monotony such that these properties
only consider states along a branch in the reachability tree. To define these
notions, we use labels on the transitions, hence, we consider labelled
transition systems.
### 3.1. Branch-wqo
Consider an OLTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$.
A _run_ (or _branch_) of $\mathcal{S}$ is a finite or infinite sequence
$\rho=(x_{0}\xrightarrow{}x_{1})(x_{1}\xrightarrow{}x_{2})...$ simply written
$\rho=x_{0}\xrightarrow{}x_{1}\xrightarrow{}x_{2}\ldots$. We denote the set of
states visited along $\rho$ as $X_{\rho}=\\{x_{0},x_{1},x_{2},\ldots\\}$. We
say that $\rho$ is _branch-wqo_ if $X_{\rho}$ is wqo w.r.t. $\leq$.
An OLTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ is _branch-wqo_ if
every run of $\mathcal{S}$ is branch-wqo.
Consider the FIFO machine $\mathcal{M}_{1}$ in Figure 1 with one FIFO channel.
In control-state $q_{0}$, it loops by sending letter $a$ to the channel. Then,
we may go, non-deterministically, to control-state $q_{1}$ by sending letter
$b$ once, and then we stop. Let us consider the set of states
$X_{1}=\\{q_{0},q_{1}\\}\times\\{a,b\\}^{*}$ together with the extended prefix
ordering $\leq_{p}$, i.e. $(q,u)\leq_{p}(q^{\prime},u^{\prime})$ if
$q=q^{\prime}$ and $u\preceq u^{\prime}$. The reachability set of
$\mathcal{M}_{1}$ from $(q_{0},\varepsilon)$ is equal to
$\textsf{Post}^{*}(q_{0},\varepsilon)=\\{(q_{0},w)\mid w\in
a^{*}\\}\cup\\{(q_{1},w^{\prime})\mid w^{\prime}\in a^{*}b\\}$. Note that
$\leq_{p}$ is not a wqo since elements of the set $\\{(q_{1},w^{\prime})\mid
w^{\prime}\in a^{*}b\\}$ form an infinite antichain for $\leq_{p}$. However,
the reachability tree of $\mathcal{M}_{1}$ is branch-wqo for the initial state
$(q_{0},\varepsilon)$ (every branch is either finite or branch-wqo). Hence,
there exist branch-wqo OLTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$
such that $(X,\leq)$ is not a wqo.
$q_{0}$$q_{1}$$!a$$!b$
$(q_{0},\varepsilon)$$(q_{1},b)$$(q_{0},a)$$(q_{0},aa)$$(q_{1},ab)$$...$$(q_{1},aab)$
Figure 1. The FIFO machine $\mathcal{M}_{1}$ (left) with initial state
$(q_{0},\varepsilon)$, and its corresponding (incomplete) infinite
reachability tree (right). We see that the induced transition system is
branch-wqo.
However, unlike wqo, the notion of branch-wqo depends on the initial state
considered. We will see below that there exists a system
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ and $x_{0}^{\prime}\in X$
such that $\mathcal{S}$ is branch-wqo but
$(X,\Sigma,\xrightarrow{},\leq,x_{0}^{\prime})$ is not branch-wqo.
$q_{0}$$q_{1}$$q_{2}$$!a$$!b$$?c$$!b$$?b$$?c$$!c$
$(q_{2},\varepsilon)$$(q_{2},c)$$(q_{2},cc)$$(q_{1},cb)$$...$$(q_{1},ccb)$$(q_{1},b)$$(q_{2},b)$$...$$...$$...$$...$$(q_{2},\varepsilon)$$(q_{2},c)$$(q_{2},cc)$$(q_{1},ccb)$$...$$...$$...$$...$$...$
Figure 2. The FIFO machine $\mathcal{M}_{2}$ (left) and the incomplete
reachability tree from $(q_{2},\varepsilon)$ (right).
Consider the FIFO machine $\mathcal{M}_{2}$ in Figure 2 with one FIFO channel.
If we start from $(q_{0},\varepsilon)$, it behaves exactly as the FIFO machine
$\mathcal{M}_{1}$. However, if we change the initial state to
$(q_{2},\varepsilon)$, then it could either loop by sending $c$, or non-
deterministically send a $b$ and go to control-state $q_{1}$. Then, if at
least one $c$ has been sent, it can either loop again receiving the letters
$c$, or once again non-deterministically come back to $q_{2}$ upon receiving
$c$.
There exists an infinite run $\rho$ with the prefix:
$(q_{2},\varepsilon)\xrightarrow{!c}(q_{2},c)\xrightarrow{!b}(q_{1},cb)\xrightarrow{?c}(q_{2},b)\xrightarrow{?b}(q_{2},\varepsilon)\xrightarrow{!c.!c.!b}(q_{1},ccb)\xrightarrow{}...$
such that all the elements of the infinite set
$B=\\{{(q_{1},cb)},{(q_{1},ccb)},\allowbreak\ldots,\allowbreak{(q_{1},c^{n}b)},\ldots\\}$
are visited along $\rho$, i.e. $B\subseteq X_{\rho}$. Hence, we have an
infinite sequence of incomparable elements along $\rho$, i.e. $X_{\rho}$ is
not wqo, and therefore, the system is not branch-wqo.
We now show that branch-wqos enjoy some of the good properties of wqos. Let
$\mathcal{S}_{1}=(X_{1},\Sigma,\xrightarrow{}_{1},\leq_{1},x_{0,1})$ and
$\mathcal{S}_{2}=(X_{2},\Sigma,\xrightarrow{}_{2},\leq_{2},x_{0,2})$ be two
branch-wqos. We consider their product to be
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, where $X=X_{1}\times
X_{2}$, and
$((x_{1},x_{2}),a,(x^{\prime}_{1},x^{\prime}_{2}))\in\xrightarrow{}$ if
$(x_{1},a,x^{\prime}_{1})\in\xrightarrow{}_{1}$ and
$(x_{2},a,x^{\prime}_{2})\in\xrightarrow{}_{2}$. Moreover, we consider the
ordering to be component-wise, i.e. we have
$(x_{1},x_{2})\leq(x^{\prime}_{1},x^{\prime}_{2})$ if
$x_{1}\leq_{1}x^{\prime}_{1}$ and $x_{2}\leq_{2}x^{\prime}_{2}$; finally
$x_{0}=(x_{0,1},x_{0,2})$.
###### Proposition 8.
The product of finitely many branch-wqos is branch-wqo.
###### Proof 3.1.
We show this for two branch-wqos, but this proof can be extended to finitely
many branch-wqos.
Let us consider an infinite run $\rho$ in the product $\mathcal{S}$; it
projects onto infinite runs of $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$. If we
consider the set $X_{\rho}$ along with the order $\leq_{1}$, we can extract an
infinite subsequence $B=(x_{n},y_{n})_{n\in\mathbb{N}}\subseteq X_{\rho}$ such
that $(x_{n})_{n\in\mathbb{N}}$ forms an increasing sequence w.r.t. $\leq_{1}$
(since $\mathcal{S}_{1}$ is branch-wqo). Now, within this subsequence $B$,
there exists an increasing subsequence $(y_{n_{m}})_{m\in\mathbb{N}}$ (since
the branch is also wqo w.r.t. $\leq_{2}$). Hence, we obtain an infinite
sequence $(x_{n_{m_{k}}},y_{n_{m_{k}}})_{k\in\mathbb{N}}$ that is increasing
for the component-wise ordering $\leq$, so $X_{\rho}$ is wqo.
We can similarly prove that disjoint unions and intersections of branch-wqos
are branch-wqo.
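As a quick illustration of the product construction, the synchronized transition relation and the component-wise ordering can be sketched as follows (the dictionary encoding of the transition relations is an assumption made for this sketch):

```python
# Successor-map encoding (assumed for this sketch): post[(state, label)] is
# the set of successors of `state` under `label`.
def product_post(post1, post2, state, label):
    """Synchronized product: (x1, x2) -a-> (y1, y2) iff x1 -a->1 y1 and
    x2 -a->2 y2."""
    x1, x2 = state
    return {(y1, y2)
            for y1 in post1.get((x1, label), set())
            for y2 in post2.get((x2, label), set())}

def leq_product(leq1, leq2, s, t):
    """Component-wise ordering: (x1,x2) <= (y1,y2) iff x1 <=1 y1, x2 <=2 y2."""
    return leq1(s[0], t[0]) and leq2(s[1], t[1])

# Two toy counter systems that both increment on label 'a'.
post1 = {(0, "a"): {1}}
post2 = {(10, "a"): {11}}
print(product_post(post1, post2, (0, 10), "a"))  # {(1, 11)}
```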
### 3.2. Branch-monotony
Now that we have relaxed the notion of branch-wqo, we shall look at a
generalization of strong monotony, which we will refer to as branch-monotony.
An OLTS $\mathcal{S}=(X,\Sigma,$ $\xrightarrow{},\leq,x_{0})$ is _branch-
monotone_ if, for all $x,x^{\prime}\in X$, $\sigma\in\Sigma^{*}$ such that
$x\xrightarrow{\sigma}x^{\prime}$ and $x\leq x^{\prime}$, there exists a state
$y$ such that $x^{\prime}\xrightarrow{\sigma}y$ and $x^{\prime}\leq y$.
###### Proposition 9.
Let $\mathcal{S}$ be a branch-monotone OLTS and let there be states
$x,x^{\prime}$ such that $x\xrightarrow{\sigma}x^{\prime}$ and $x\leq
x^{\prime}$, with $\sigma\in\Sigma^{*}$. Then, for any $n\geq 1$, there exists
$y_{n}\in X$ such that $x\xrightarrow{\sigma^{n}}y_{n}$ with $x\leq y_{n}$.
###### Proof 3.2.
We prove this by induction on $n$. Base case: let $n=1$. From the hypothesis,
we immediately have a run $x\xrightarrow{\sigma}x^{\prime}$ such that $x\leq
x^{\prime}$, so the property holds with $y_{1}=x^{\prime}$.
Let us assume that the property holds for $n$; we show that it also holds for
$n+1$. Let $x\xrightarrow{\sigma}x^{\prime}$ be a finite run satisfying $x\leq
x^{\prime}$. By the definition of branch-monotony, there exists a state $y$
such that $x^{\prime}\xrightarrow{\sigma}y$ and $x^{\prime}\leq y$. Applying
the induction hypothesis to the run $x^{\prime}\xrightarrow{\sigma}y$, there
exists a state $y^{\prime}_{n}$ such that
$x^{\prime}\xrightarrow{\sigma^{n}}y^{\prime}_{n}$ and $x^{\prime}\leq
y^{\prime}_{n}$. Hence, there exists a run
$x\xrightarrow{\sigma}x^{\prime}\xrightarrow{\sigma^{n}}y^{\prime}_{n}$, i.e.
$x\xrightarrow{\sigma^{n+1}}y^{\prime}_{n}$, and $x\leq x^{\prime}\leq
y^{\prime}_{n}$, so by transitivity $x\leq y^{\prime}_{n}$. Setting
$y_{n+1}=y^{\prime}_{n}$ completes the proof.
As in the case of general monotony, _strict_ branch-monotony is defined by
replacing both inequalities with strict ones.
Consider $\mathcal{M}_{1}$ from Figure 1 once again. Note that
$\mathcal{M}_{1}$ induces an OLTS by taking the actions on the edges as
labels. Moreover, $\mathcal{M}_{1}$ is branch-monotone: for every
$x\xrightarrow{\sigma}x^{\prime}$ such that $x\leq_{p}x^{\prime}$ and
$\sigma\in\Sigma^{*}$, the only possibility is $x=(q_{0},a^{n})$ and
$x^{\prime}=(q_{0},a^{n+k})$ for some $n,k\in\mathbb{N}$, with
$\sigma\in(!a)^{*}$. Hence, we can always repeat $\sigma$ from $x^{\prime}$,
obtaining $x^{\prime}\xrightarrow{\sigma}y=(q_{0},a^{n+2k})$ with
$x^{\prime}\leq_{p}y$. We deduce that $\mathcal{M}_{1}$ is branch-monotone.
### 3.3. Branch-WSTS
We are now ready to extend the definition of WSTS.
A _branch-WSTS_ is an OLTS $\mathcal{S}=(X,\Sigma,$
$\xrightarrow{},\leq,x_{0})$ that is finitely branching, branch-monotone, and
branch-wqo.
When we say, without ambiguity, that a machine $\mathcal{M}$ is branch-wqo,
WSTS, or branch-WSTS, we mean that the ordered transition system
$\mathcal{S}_{\mathcal{M}}$, induced by machine $\mathcal{M}$, is branch-wqo,
WSTS, or branch-WSTS, respectively. We will explicitly define the transition
system induced by FIFO machines in Section 4.2.
###### Remark 10.
Branch-WSTS is a strict superclass of labelled WSTS. For example, machine
$\mathcal{M}_{1}$ (seen in Figure 1) with initial state $(q_{0},\varepsilon)$
is branch-WSTS for the ordering $\leq_{p}$ but it is not WSTS for $\leq_{p}$
since $\leq_{p}$ is not a wqo on $\\{q_{0},q_{1}\\}\times\\{a,b\\}^{*}$ or on
the subset $\\{(q_{1},w)\mid w\in a^{*}b\\}$.
Let us recall the _Reduced Reachability Tree_ ($\mathit{RRT}$), which was
defined as the Finite Reachability Tree in [Fin90, FS01]. Suppose that
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ is an OLTS. The _Reduced
Reachability Tree_ from $x_{0}$, denoted by $\mathit{RRT}(\mathcal{S},x_{0})$,
is a tree with root $n_{0}$ labelled $x_{0}$, built as follows. We repeatedly
pick an unmarked vertex $n$ labelled with some $x$:
* •
if $n$ has an ancestor $n^{\prime}$ labelled with $x^{\prime}$ such that
$x^{\prime}\leq x$, we mark the vertex $n$ as _dead_ , and say $n^{\prime}$
_subsumes_ $n$.
* •
otherwise, we mark $n$ as _live_ , and for every $y\in\textsf{Post}(x)$, we
add a child labelled $y$ to $n$.
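The construction above admits a direct implementation. The following Python sketch (the callables `post` and `leq`, supplied by the caller, and the flat node encoding are our assumptions) builds the RRT; it terminates whenever the OLTS is finitely branching and branch-wqo:

```python
def build_rrt(x0, post, leq):
    """Build the Reduced Reachability Tree from x0.

    `post(x)` returns the finite set of successors of x; `leq` is the
    decidable ordering. Nodes are triples (label, parent_index, status),
    with status "live" or "dead" once the node has been marked.
    """
    nodes = [(x0, None, None)]
    unmarked = [0]
    while unmarked:
        n = unmarked.pop()
        x, parent, _ = nodes[n]
        # Walk up the ancestors, looking for a label that subsumes x.
        a, subsumed = parent, False
        while a is not None:
            if leq(nodes[a][0], x):
                subsumed = True
                break
            a = nodes[a][1]
        if subsumed:
            nodes[n] = (x, parent, "dead")
        else:
            nodes[n] = (x, parent, "live")
            for y in post(x):
                nodes.append((y, n, None))
                unmarked.append(len(nodes) - 1)
    return nodes

# Toy example: a counter that can only be incremented, ordered by <=.
rrt = build_rrt(0, lambda n: {n + 1}, lambda s, t: s <= t)
print(rrt)  # [(0, None, 'live'), (1, 0, 'dead')]: 1 is subsumed by the root
```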
Figure 3. The Reduced Reachability Tree of $\mathcal{M}_{1}$ from
$(q_{0},\varepsilon)$. Note that $(q_{0},a)$ is dead because it is subsumed by
the state $(q_{0},\varepsilon)$: indeed, we have
$(q_{0},\varepsilon)\xrightarrow{*}(q_{0},a)$ and
$(q_{0},\varepsilon)\leq_{p}(q_{0},a)$. State $(q_{1},b)$ is also dead but it
is not subsumed.
###### Proposition.
Let $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ be an OLTS that is
finitely branching and branch-wqo. Then, $\mathit{RRT}(\mathcal{S},x_{0})$ is
finite.
###### Proof 3.3.
Let us assume to the contrary that $\mathit{RRT}(\mathcal{S},x_{0})$ is
infinite. Since $\mathcal{S}$ is finitely branching, the tree is finitely
branching, so by König's lemma it contains an infinite branch. The labels of
this branch are the states visited along an infinite run of $\mathcal{S}$,
and since $\mathcal{S}$ is branch-wqo, this set of states is wqo. Hence,
there exist two nodes $n_{1}(x_{1})$ and $n_{2}(x_{2})$ on the branch, with
$n_{1}$ a proper ancestor of $n_{2}$, such that $x_{1}\leq x_{2}$. But then
$n_{2}(x_{2})$ is marked as dead during the construction, and the branch is
not explored beyond $n_{2}$, contradicting the fact that the branch is
infinite. Hence, $\mathit{RRT}(\mathcal{S},x_{0})$ is finite.
###### Proposition.
Let $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ be a branch-WSTS,
equipped with strict branch-monotony and such that $\leq$ is a partial
ordering. The reachability set $\textsf{Post}^{*}_{\mathcal{S}}(x_{0})$ is
infinite iff there exists a branch
$n_{0}(x_{0})\xrightarrow{*}n_{1}(x_{1})\xrightarrow{+}n_{2}(x_{2})$ in
$\mathit{RRT}(\mathcal{S},x_{0})$ such that $x_{1}<x_{2}$.
###### Proof 3.4.
The following proof is an adaptation of the proof of boundedness for WSTS in
[FS01] to branch-WSTS.
First, let us assume $\mathcal{S}$ is unbounded, i.e.
$\textsf{Post}_{\mathcal{S}}^{*}(x_{0})$ is infinite. We will show that there
exists a branch
$n_{0}(x_{0})\xrightarrow{*}n_{1}(x_{1})\xrightarrow{+}n_{2}(x_{2})$ in
$\mathit{RRT}(\mathcal{S},x_{0})$ such that $x_{1}<x_{2}$. Since $\mathcal{S}$
is unbounded, infinitely many distinct states are reachable from $x_{0}$. We
first show that there exists an infinite run starting from $x_{0}$ without
any loop, i.e. in which all states are distinct. Consider the finitely
branching tree of all prefixes of runs, and prune this tree by removing the
prefixes that contain a loop. Because any reachable state can be reached
without a loop, the pruned tree still contains infinitely many prefixes, so by
König's lemma there exists an infinite run with no loop. Any run starting
from $x_{0}$ has a finite prefix labelling a maximal path in
$\mathit{RRT}(\mathcal{S},x_{0})$; for our loop-free run, this path must end
in a node $n_{2}(x_{2})$ which is subsumed by a node $n_{1}(x_{1})$ with
$x_{1}\neq x_{2}$ (since all states along the run are distinct). Since we
assumed $\leq$ to be a partial ordering, we deduce from $x_{1}\neq x_{2}$ and
$x_{1}\leq x_{2}$ that $x_{1}<x_{2}$.
Conversely, let us assume that there exist two vertices $n_{1},n_{2}$,
labelled by $x_{1}$ and $x_{2}$ respectively, such that
$n_{0}(x_{0})\xrightarrow{*}n_{1}(x_{1})\xrightarrow{+}n_{2}(x_{2})$ in
$\mathit{RRT}(\mathcal{S},x_{0})$ and $x_{1}<x_{2}$. Let
$x_{1}\xrightarrow{a_{1}a_{2}\ldots a_{n}}x_{2}$, so there exist
$y_{1},y_{2},\ldots,y_{n}\in X$ such that
$x_{1}\xrightarrow{a_{1}}y_{1}\xrightarrow{a_{2}}y_{2}\xrightarrow{a_{3}}\ldots\xrightarrow{a_{n}}y_{n}=x_{2}$.
Since the system is strictly branch-monotone, we can repeat this sequence:
there exist $n$ states $u_{1},u_{2},\ldots,u_{n}$ such that
$x_{2}\xrightarrow{a_{1}}u_{1}\xrightarrow{a_{2}}u_{2}\ldots\xrightarrow{a_{n}}u_{n}=x_{3}$
with $y_{n}<u_{n}$, i.e. $x_{2}<x_{3}$. By iterating this process, we
construct an infinite sequence of states $(x_{k})_{k\geq 0}$ such that for all
$k\geq 1$, one has $x_{k}\xrightarrow{a_{1}a_{2}\ldots a_{n}}x_{k+1}$ and
$x_{k}<x_{k+1}$. Then, all the $x_{k}$ are pairwise distinct. Hence,
$\textsf{Post}_{\mathcal{S}}^{*}(x_{0})$ is infinite, and $\mathcal{S}$ is
unbounded.
We now need a notion of effectivity adapted to branch-WSTS.
A branch-WSTS $\mathcal{S}=(X,\Sigma,$ $\xrightarrow{},\leq,x_{0})$ is
_branch-effective_ if $\mathcal{S}$ is effective and
$\textsf{Post}_{\mathcal{S}}(x)$ is a (finite) computable set, for all $x\in
X$.
###### Theorem 11.
Boundedness is decidable for branch-effective branch-WSTS
$\mathcal{S}=(X,\Sigma,$ $\xrightarrow{},\leq,x_{0})$ with strict branch-
monotony such that $\leq$ is a partial ordering.
###### Proof 3.5.
Suppose $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ satisfies the
above conditions. Since $\mathcal{S}$ is finitely branching and branch-wqo,
$\mathit{RRT}(\mathcal{S},x_{0})$ is finite. By hypothesis, $\mathcal{S}$ is
branch-effective; in particular, for all $x$,
$\textsf{Post}_{\mathcal{S}}(x)$ is a finite computable set. As $\leq$ is
decidable, we deduce that $\mathit{RRT}(\mathcal{S},x_{0})$ is effectively
computable. By the preceding proposition,
$\textsf{Post}^{*}_{\mathcal{S}}(x_{0})$ is infinite iff there exists a finite
branch $n_{0}(x_{0})\xrightarrow{*}n_{1}(x_{1})\xrightarrow{+}n_{2}(x_{2})$
such that $x_{1}<x_{2}$. This last property can be decided on the finite tree
$\mathit{RRT}(\mathcal{S},x_{0})$, and so the boundedness property can be
decided, too.
We also generalize the decidability of non-termination for WSTS [FS01] to
branch-WSTS.
###### Proposition.
A branch-WSTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ does not
terminate from state $x_{0}$ iff there exists a subsumed node in
$\mathit{RRT}(\mathcal{S},x_{0})$.
###### Theorem 12.
Non-termination is decidable for branch-effective branch-WSTS.
###### Proof 3.6.
Given a branch-WSTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, by
the preceding proposition it is sufficient to build
$\mathit{RRT}(\mathcal{S},x_{0})$ and check whether there exists a subsumed
node. Since $\mathcal{S}$ is branch-effective, we can effectively construct
$\mathit{RRT}(\mathcal{S},x_{0})$ and verify the existence of a subsumed node.
Note that the non-termination and boundedness problems for the example machine
$\mathcal{M}_{1}$ in Figure 1 are, therefore, decidable. Since there exist
nodes $n_{0}(x_{0})$ and $n_{1}(x_{1})$ in the $\mathit{RRT}$, with
$x_{0}=(q_{0},\varepsilon)$ and $x_{1}=(q_{0},a)$, such that $x_{0}<x_{1}$ and
$x_{0}\xrightarrow{+}x_{1}$, the machine $\mathcal{M}_{1}$ is unbounded.
Furthermore, since every unbounded machine is non-terminating,
$\mathcal{M}_{1}$ is non-terminating.
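Both decision procedures reduce to simple scans of the finite RRT. The sketch below (the node encoding as (label, parent index, index of a subsuming ancestor or None) is our assumption, not the paper's) mirrors the checks carried out for $\mathcal{M}_{1}$:

```python
def ancestors(nodes, n):
    """Yield the indices of the proper ancestors of node n."""
    a = nodes[n][1]
    while a is not None:
        yield a
        a = nodes[a][1]

def is_unbounded(nodes, lt):
    """Post*(x0) is infinite iff some node's label strictly dominates a
    proper ancestor's label (valid under strict branch-monotony, with <=
    a partial ordering)."""
    return any(lt(nodes[a][0], nodes[n][0])
               for n in range(len(nodes)) for a in ancestors(nodes, n))

def is_nonterminating(nodes):
    """An infinite run from x0 exists iff the RRT has a subsumed node."""
    return any(node[2] is not None for node in nodes)

# The RRT of M1 from Figure 3: (q0, a) is subsumed by the root (q0, eps).
nodes = [(("q0", ""), None, None), (("q1", "b"), 0, None), (("q0", "a"), 0, 0)]
strictly_less = lambda s, t: s != t and s[0] == t[0] and t[1].startswith(s[1])
print(is_unbounded(nodes, strictly_less), is_nonterminating(nodes))  # True True
```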
On the other hand, boundedness becomes undecidable if we relax strict
branch-monotony to general branch-monotony (even when we strengthen the order
to be a wqo). This is because boundedness is undecidable for Reset Petri nets
[DFS98]. Reset Petri nets are effective WSTS
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, hence branch-effective
branch-WSTS, where $\leq$ is the wqo on vectors of natural numbers. Hence, we
deduce:
###### Proposition 13.
Boundedness is undecidable for branch-effective branch-WSTS
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ where $\leq$ is a wqo.
## 4\. Classes of branch-WSTS
We now study some classes of systems, namely counter machines and FIFO
machines. The states of counter machines are well-quasi-ordered under the
extended natural ordering; however, due to the presence of zero-tests,
general monotony fails, and hence they are not WSTS. We look at a subclass of
counter machines, called counter machines with restricted zero tests (CMRZ),
in which zero tests are present but their behaviour is controlled.
In [BZ83], the authors showed that general FIFO machines can simulate Turing
machines, and hence boundedness is undecidable for them. Nevertheless, they
remain a widely used model of asynchronously communicating systems. Moreover,
FIFO machines are neither wqo nor monotone for the prefix ordering. We show
in this section that a subclass of FIFO machines, namely input-bounded FIFO
machines, is branch-wqo under the prefix ordering. Finally, we study more
general counter machines and FIFO machines, and show that we can decide
non-termination and boundedness for a large subclass using ideas from
branch-well-structured behaviours.
### 4.1. Counter machines with restricted zero tests
We define _counter machines with restricted zero tests (CMRZ)_ by imposing
the following requirement: once a counter has been tested for zero, it can no
longer be incremented or decremented. Formally, we say that $\mathcal{C}$ is a
counter machine with restricted zero tests if, for every transition sequence
of the form
$q_{0}\xrightarrow{op(v_{1}),Z_{1}}q_{1}\xrightarrow{op(v_{2}),Z_{2}}\ldots\xrightarrow{op(v_{n-1}),Z_{n-1}}q_{n-1}\xrightarrow{op(v_{n}),Z_{n}}q_{n}$
and every two positions $1\leq i\leq j\leq n$, we have $v_{j}\not\in Z_{i}$.
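The CMRZ condition is purely syntactic on transition sequences and is easy to check. In the sketch below (the run encoding as one (operated counter, set of zero-tested counters) pair per transition, with `None` for no-op steps, is hypothetical), a run violates the condition iff it operates on a counter that was zero-tested at the same or an earlier step:

```python
def is_cmrz_run(run):
    """Check v_j not in Z_i for all positions i <= j along the run."""
    tested = set()  # counters zero-tested so far
    for counter, zero_tests in run:
        tested |= set(zero_tests)  # Z_j applies to the current step too (i = j)
        if counter is not None and counter in tested:
            return False  # counter incremented/decremented after a zero test
    return True

# 'y' is operated on before ever being zero-tested: allowed.
print(is_cmrz_run([("x", []), ("y", ["x"]), ("y", [])]))  # True
# 'x' is operated on after having been zero-tested: forbidden.
print(is_cmrz_run([("x", []), (None, ["x"]), ("x", [])]))  # False
```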
In [BFS20], it was shown that non-termination and boundedness are decidable
for this class of systems by reducing them to the (decidable) reachability
problem. Using the alternative approach of branch-WSTS, we show that
non-termination and boundedness are decidable for this class without reducing
these problems to reachability.
Given a CMRZ $\mathcal{C}=(Q,V,T,q_{0})$, we consider the associated
transition system
$\mathcal{S}_{\mathcal{C}}=(X_{\mathcal{C}},\textsf{Act}_{\mathcal{C}},\xrightarrow{}_{\mathcal{C}},x_{0})$.
From this system, we construct an OLTS over the extended ordering $\leq$ such
that $(q,\mathbf{v})\leq(q^{\prime},\mathbf{v}^{\prime})$ iff $q=q^{\prime}$
and $\mathbf{v}\leq\mathbf{v}^{\prime}$ (component-wise). Note that
$(X_{\mathcal{C}},\leq)$ is a wqo. Moreover, this ordering is a partial
ordering.
We now show that CMRZ are branch-monotone for $\leq$. We drop the subscript
while denoting $X_{\mathcal{C}}$ for the remainder of this section, as it is
clear from context.
###### Proposition 14.
CMRZ are branch-monotone and strictly branch-monotone for the extended
ordering $\leq$.
###### Proof 4.1.
Consider the OLTS $\mathcal{S}=(X,\textsf{Act},\xrightarrow{},\leq,x_{0})$
associated to a CMRZ with states $x,x^{\prime}\in X$ such that $x\leq
x^{\prime}$ (resp. $x<x^{\prime}$) and
$x_{0}\xrightarrow{*}x\xrightarrow{\sigma}x^{\prime}$. We need to show that
there exists a state $y$ such that $x^{\prime}\xrightarrow{\sigma}y$ and
$x^{\prime}\leq y$ (resp. $x^{\prime}<y$).
We first prove the following claim:
_Claim:_ For states $x,x^{\prime}\in X$ such that
$x_{0}\xrightarrow{*}x\xrightarrow{\sigma}x^{\prime}$, where $x\leq x^{\prime}$
(resp. $x<x^{\prime}$) and $|\sigma|=n$, the following property holds: for all
$\nu\preceq\sigma$, there exist $z,z^{\prime}\in X$ such that
$x\xrightarrow{\nu}z$ and $x^{\prime}\xrightarrow{\nu}z^{\prime}$ and $z\leq
z^{\prime}$ (resp. $z<z^{\prime}$). We prove this claim by induction on the
length of $\nu$.
For the base case, $|\nu|=0$; from the hypothesis, $x\leq x^{\prime}$ (resp.
$x<x^{\prime}$), hence the claim holds trivially.
Let us assume that the claim holds for $|\nu|=k$; we show that it holds for
$|\nu|=k+1$. Write $\nu=\nu^{\prime}\cdot a$ with $a\in\textsf{Act}$. From
the induction hypothesis, there exist $z_{1},z^{\prime}_{1}$ such that
$x\xrightarrow{\nu^{\prime}}z_{1}$,
$x^{\prime}\xrightarrow{\nu^{\prime}}z^{\prime}_{1}$ and $z_{1}\leq
z^{\prime}_{1}$ (resp. $z_{1}<z^{\prime}_{1}$). Since $\nu\preceq\sigma$ and
$x\xrightarrow{\sigma}x^{\prime}$, there exists $z_{2}\in X$ such that
$x\xrightarrow{\nu^{\prime}}z_{1}\xrightarrow{a}z_{2}$. We can now be in one
of the following cases:
1. Case i:
If $a$ is of the form noop and $Z=\emptyset$, then we can trivially execute
$a$ from $z^{\prime}_{1}$ and reach $z_{2}^{\prime}$ such that $z_{2}\leq
z_{2}^{\prime}$ (resp. $z_{2}<z_{2}^{\prime}$).
2. Case ii:
The action $a$ is of the form $\textsf{inc}(v)$ or $\textsf{dec}(v)$, and the
set of counters to be tested for zero $Z=\emptyset$, i.e.
$z_{1}\xrightarrow{a}z_{2}$ only increments/decrements one counter and leaves
the others unchanged (and no counters are tested to zero). Since $z_{1}\leq
z^{\prime}_{1}$ (resp. $z_{1}<z^{\prime}_{1}$), we know that
$z_{1},z^{\prime}_{1}$ have the same control-state. Hence, this action is
enabled in $z^{\prime}_{1}$. Moreover, because of the CMRZ property, we know
that $v$ is not tested to zero even once until the state $z^{\prime}_{1}$ is
reached in this run. Therefore, we can execute the increment/decrement
operation on $v$. Furthermore, since $z_{1}\leq z^{\prime}_{1}$ (resp.
$z_{1}<z^{\prime}_{1}$), the value of $v$ in $z^{\prime}_{1}$ is greater than
or equal to (resp. strictly greater than) the value of $v$ in $z_{1}$. Hence,
we can execute $a$ from $z^{\prime}_{1}$ and reach a state $z_{2}^{\prime}$
such that $z_{2}\leq z_{2}^{\prime}$ (resp. $z_{2}<z_{2}^{\prime}$).
3. Case iii:
$Z\neq\emptyset$ in the transition $z_{1}\xrightarrow{a}z_{2}$; hence, a set
of counters $Z$ is tested for zero. By the CMRZ property, the counters $v\in
Z$ are never incremented or decremented afterwards. Hence, during the
execution $z_{1}\xrightarrow{a}z_{2}\xrightarrow{w}z^{\prime}_{1}$, where
$w\in\textsf{Act}^{*}$, none of these counters is incremented or decremented,
so their values in $z^{\prime}_{1}$ are also equal to zero. Therefore, we can
execute $a$ from $z^{\prime}_{1}$ to reach $z_{2}^{\prime}$. Moreover, since
$z_{1}\leq z^{\prime}_{1}$ (resp. $z_{1}<z^{\prime}_{1}$) and none of these
counters changes value, we conclude that $z_{2}\leq z_{2}^{\prime}$ (resp.
$z_{2}<z_{2}^{\prime}$).
Taking $\nu=\sigma$ in the claim, we conclude that CMRZ are branch-monotone
(resp. strictly branch-monotone).
###### Proposition 15.
CMRZs are branch-effective branch-WSTS.
###### Proof 4.2.
Given an OLTS $\mathcal{S}=(X,\textsf{Act},\xrightarrow{},\leq,x_{0})$
associated to a CMRZ, for any two states $x,x^{\prime}\in X$, we can decide
whether $x\leq x^{\prime}$. Furthermore, $\xrightarrow{}$ is decidable from
the underlying finite automaton, and $\textsf{Post}_{\mathcal{S}}(x)$ is a
finite computable set for all $x\in X$. Hence, $\mathcal{S}$ is
branch-effective. Moreover, $(X,\leq)$ is a wqo, hence branch-wqo, and by
Proposition 14 the system is branch-monotone, so it is a branch-WSTS.
Hence, we deduce:
###### Theorem 16.
Non-termination and boundedness are decidable for counter machines with
restricted zero tests.
### 4.2. Input-bounded FIFO machines
The FIFO machine $\mathcal{M}_{1}$ from Figure 1 is an example of a system
that is branch-WSTS although its underlying set of states is not
well-quasi-ordered. We first identify a general class of systems that are
branch-wqo and that includes $\mathcal{M}_{1}$.
We consider a restriction that has been studied in [BFS20], and which we
prove below yields branch-wqo systems. These systems are known as
input-bounded FIFO machines, which we formally define below.
Consider a FIFO machine $\mathcal{M}=(Q,A,T,q_{0})$ over a set of channels Ch.
For $\textsf{c}\in\textsf{Ch}$, we let
${\mathit{proj}}_{\textsf{c}!}:\\{\textsf{Ch}\times\\{!,?\\}\times A\\}^{*}\to
A^{*}$ be the homomorphism defined by
$\mathit{proj}_{\textsf{c}!}(\textsf{c}!a)=a$ for all $a\in A$, and
$\mathit{proj}_{\textsf{c}!}(\alpha)=\varepsilon$ if $\alpha$ is not of the
form $\textsf{c}!a$ for some $a\in A$. Moreover,
$\mathit{proj}_{\textsf{c}!}(\tau_{1}\cdot\tau_{2})=\mathit{proj}_{\textsf{c}!}(\tau_{1})\cdot\mathit{proj}_{\textsf{c}!}(\tau_{2})$.
Similarly, for $\textsf{c}\in\textsf{Ch}$, we let
${\mathit{proj}}_{\textsf{c}?}:\\{\textsf{Ch}\times\\{!,?\\}\times A\\}^{*}\to
A^{*}$ be the homomorphism defined by
$\mathit{proj}_{\textsf{c}?}(\textsf{c}?a)=a$ for all $a\in A$, and
$\mathit{proj}_{\textsf{c}?}(\alpha)=\varepsilon$ if $\alpha$ is not of the
form $\textsf{c}?a$ for some $a\in A$. Moreover,
$\mathit{proj}_{\textsf{c}?}(\tau_{1}\cdot\tau_{2})=\mathit{proj}_{\textsf{c}?}(\tau_{1})\cdot\mathit{proj}_{\textsf{c}?}(\tau_{2})$.
We define the input-language of a FIFO channel c as the set of all words that
are sent into the channel, i.e.
$\mathit{proj}_{\textsf{c}!}(\textsf{Traces}(\mathcal{M}))$. We say that the
machine is _input-bounded_ if, for each $\textsf{c}\in\textsf{Ch}$, there is a
regular bounded language $\mathcal{L}_{\textsf{c}}$ (i.e. a language of the
form $w_{1}^{*}\ldots w_{n}^{*}$) such that
$\mathit{proj}_{\textsf{c}!}(\textsf{Traces}(\mathcal{M}))\subseteq\mathit{Pref}(\mathcal{L}_{\textsf{c}})$,
i.e. the send-projection of every run of the FIFO machine onto each channel is
a prefix of a word in the bounded language $\mathcal{L}_{\textsf{c}}$. We say
that $\mathcal{L}_{\textsf{c}}$ is distinct-letter if $|w_{1}\ldots
w_{n}|_{a}=1$ for all $a\in A$.
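For a distinct-letter bounded language, whether a given send-projection lies in $\mathit{Pref}(w_{1}^{*}\ldots w_{n}^{*})$ can be checked greedily, since each letter determines the block $w_{i}$ it belongs to. A minimal Python sketch (assuming nonempty words and the distinct-letter property; this helper is ours, not the paper's):

```python
def in_pref_bounded(word, ws):
    """Is `word` a prefix of some word of w1* ... wn* (ws = [w1, ..., wn])?

    Distinct-letter assumption: every letter occurs in exactly one wi, so
    the first letter of a block determines which wi is being started.
    """
    i, pos = 0, 0  # current block index and offset inside ws[i]
    for a in word:
        if pos > 0:
            # Mid-word: the started occurrence of ws[i] must be continued.
            if ws[i][pos] != a:
                return False
            pos = (pos + 1) % len(ws[i])
        else:
            # At a block boundary: `a` must start some w_j with j >= i.
            j = next((j for j in range(i, len(ws)) if ws[j][0] == a), None)
            if j is None:
                return False
            i, pos = j, 1 % len(ws[j])
    return True

print(in_pref_bounded("ababc", ["ab", "c"]))  # True: prefix of a word of (ab)*c*
print(in_pref_bounded("cab", ["ab", "c"]))    # False: 'a' cannot follow 'c'
```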
###### Proposition 17.
Input-bounded FIFO machines are branch-wqo for the prefix-ordering $\leq_{p}$.
###### Proof 4.3.
Let us consider the transition system
$\mathcal{S}_{\mathcal{M}}=(X_{\mathcal{M}},\Sigma_{\mathcal{M}},\xrightarrow{}_{\mathcal{M}},x_{0})$
associated with an input-bounded FIFO machine $\mathcal{M}$ with a single
channel c and bounded language
$\mathcal{L}_{\textsf{c}}=v_{1}^{*}\ldots v_{k}^{*}$, and an infinite run
$x_{0}\xrightarrow{\sigma_{1}}x_{1}\xrightarrow{\sigma_{2}}x_{2}\ldots x_{i-1}\xrightarrow{\sigma_{i}}x_{i}\xrightarrow{\sigma_{i+1}}\ldots$
with $x_{i}=(q_{i},w_{i})\in X_{\mathcal{M}}$. We denote
$\sigma[i]=\mathit{proj}_{\textsf{c}!}(\sigma_{1}\sigma_{2}\ldots\sigma_{i})$.
It can be observed that $\sigma[i]$ is a prefix of $\sigma[i+1]$ for all
$i\in\mathbb{N}$. Since $\sigma[i]$ is of the form
$v_{1}^{n_{1,i}}\ldots v_{m}^{n_{m,i}}u_{m}$ for some $u_{m}\preceq v_{m}$,
$n_{1,i},\ldots,n_{m,i}\geq 0$ and $1\leq m\leq k$, the infinite sequence
$(\sigma[i])_{i\in\mathbb{N}}$ satisfies one of the two following exclusive
cases:
1. Case i:
There exists an $i_{0}$ such that for all $i\geq i_{0}$,
$\mathit{proj}_{\textsf{c}!}(\sigma_{i})=\varepsilon$. After $i_{0}$, the
channel content can only shrink, so there exists $i_{1}\geq i_{0}$ such that
for all $i\geq i_{1}$, $w_{i}=w_{i+1}$. Hence, because there are finitely many
control-states, there exist $i_{2},i_{3}\geq i_{1}$ such that
$x_{i_{2}}\xrightarrow{+}x_{i_{3}}$ and $x_{i_{2}}=x_{i_{3}}$, hence in
particular $x_{i_{2}}\leq_{p}x_{i_{3}}$.
2. Case ii:
There are infinitely many indices $i$ such that
$\mathit{proj}_{\textsf{c}!}(\sigma_{i})\neq\varepsilon$, which means that the
infinite sequence $(\sigma[i])_{i\in\mathbb{N}}$ is not stationary. This
implies that the set $S_{\sigma}=\\{(n_{1,i},...,n_{k,i})\mid
i\in\mathbb{N}\\}$, associated with $\sigma$, is infinite. Hence, there exists
a least index $p$ such that the set $\\{n_{p,i}\\}_{i\in\mathbb{N}}$ is
infinite. Then the set $F=\\{(n_{1,i},...,n_{p-1,i})\mid i\in\mathbb{N}\\}$ is
finite.
We claim that for all indices $\ell\geq p+1$, $n_{\ell,i}=0$ for all $i$. Let
us assume to the contrary that there are some index $\ell\geq p+1$ and $i_{0}$
such that $n_{\ell,i_{0}}\neq 0$. This means that the word $v_{\ell}$ occurs
in $\sigma[i_{0}]$, i.e. $v_{\ell}$ was sent to the channel before (or at)
step $i_{0}$:
$\sigma[i_{0}]=v_{1}^{n_{1,{i_{0}}}}\ldots v_{p}^{n_{p,{i_{0}}}}\ldots v_{m}^{n_{m,{i_{0}}}}u_{m}$
for some $u_{m}\preceq v_{m}$, $n_{\ell,{i_{0}}}>0$ and $1\leq m\leq k$. Since
the sends must follow the bounded pattern, the word $v_{p}$ cannot be sent
after step $i_{0}$, hence $n_{p,i}=n_{p,{i_{0}}}$ for all $i>i_{0}$. Thus
$\\{n_{p,i}\\}_{i\in\mathbb{N}}$ is finite, which contradicts our assumption
that it is infinite.
This means that after some state $x_{t}$, we only write the word $v_{p}$ to
the channel. Since the set $F=\\{(n_{1,j},...,n_{p-1,j})\mid
j\in\mathbb{N}\\}$ is finite, we can extract an infinite subsequence of
states $(q,w_{i})_{i\in K\subseteq\mathbb{N}}$, all with the same
control-state $q$, where $w_{i}=u\cdot v_{p}^{n_{p,i}}$ for a fixed word $u$
(possible since $F$ is finite and there are finitely many control-states) and
$(n_{p,i})_{i\in K}$ is non-decreasing. Hence, there exist two indices
$a,b>0$ such that $w_{a}=u\cdot v_{p}^{n_{p,a}}$, $w_{a+b}=u\cdot
v_{p}^{n_{p,a+b}}$ and $n_{p,a}\leq n_{p,a+b}$, so $w_{a+b}=w_{a}\cdot
v_{p}^{n_{p,a+b}-n_{p,a}}$, hence $w_{a}\preceq w_{a+b}$. So we have found
two states $x_{a},x_{a+b}$ such that $x_{a}\leq_{p}x_{a+b}$. Hence, the
machine is branch-wqo for the prefix ordering.
Using the same argument for each channel, we can conclude that input-bounded
FIFO machines with multiple channels are branch-wqo for the extended prefix-
ordering $\leq_{p}$.
It is clear that $\mathcal{M}_{1}$ from Figure 1 belongs to this class of
FIFO machines. However, as we see below, input-bounded FIFO machines are not
branch-WSTS in general, because they need not be branch-monotone. Consider
the FIFO machine $\mathcal{M}_{3}$ with a single channel c in Figure 4, which
is (distinct-letter) input-bounded for
$\mathcal{L}_{\textsf{c}}=(ab)^{*}$. We have
$(q_{0},\varepsilon)\xrightarrow{\sigma}(q_{0},b)$, where $\sigma=!a!b?a$,
and $(q_{0},\varepsilon)\leq_{p}(q_{0},b)$. However, we cannot repeat
$\sigma$ from $(q_{0},b)$: it is not possible to execute
$(q_{2},bab)\xrightarrow{?a}$. Hence, the machine is not branch-monotone for
the prefix-ordering.
Figure 4. The FIFO machine $\mathcal{M}_{3}$.
From the above counter-example, we see that the class of distinct-letter
input-bounded FIFO machines is not branch-WSTS. Hence, we impose another
restriction on such systems.
Consider an input-bounded FIFO machine
$\hat{\mathcal{M}}=(\hat{Q},A,\hat{T},q_{0})$ (over a set of channels Ch) with
a distinct-letter bounded input-language
$\mathcal{L}=(\mathcal{L}_{\textsf{c}})_{\textsf{c}\in\textsf{Ch}}$. We first
consider a deterministic finite automaton
$\mathcal{A}=(Q_{\mathcal{A}},\Sigma_{\mathcal{M}},\xrightarrow{}_{\mathcal{A}},q^{0}_{\mathcal{A}},F_{\mathcal{A}})$,
with set of final states $F_{\mathcal{A}}\subseteq Q_{\mathcal{A}}$, whose
language is
$L(\mathcal{A})=\mathcal{L}_{!}\mathrel{\cap}\mathit{Pref}(\mathcal{L}_{?})$,
where
$\mathcal{L}_{!}=\\{\sigma\mid\mathit{proj}_{\textsf{c}!}(\sigma)\in\mathcal{L}_{\textsf{c}}\text{
for all }\textsf{c}\in\textsf{Ch}\\}$ and
$\mathcal{L}_{?}=\\{\sigma\mid\mathit{proj}_{\textsf{c}?}(\sigma)\in\mathcal{L}_{\textsf{c}}\text{
for all }\textsf{c}\in\textsf{Ch}\\}$. With this, we define
$\bar{\mathcal{M}}_{\mathcal{L}}=(Q,A,T,q_{0})$ as the product of the FIFO
machine $\hat{\mathcal{M}}$ and $\mathcal{A}$ in the expected manner. In
particular, the set of control-states of $\bar{\mathcal{M}}_{\mathcal{L}}$ is
$Q=\hat{Q}\times Q_{\mathcal{A}}$, and its initial state is the pair
$q_{0}=(\hat{q_{0}},q^{0}_{\mathcal{A}})$.
We assume we are given an input-bounded FIFO machine $\hat{\mathcal{M}}$ and
its input language $\mathcal{L}$. Let $\bar{\mathcal{M}}_{\mathcal{L}}$ be the
FIFO machine constructed as above. Then:
###### Proposition 18.
The machine $\bar{\mathcal{M}}_{\mathcal{L}}$ is branch-monotone.
###### Proof 4.4.
We give the proof for a FIFO machine with a single channel c; the same
argument extends to FIFO machines with multiple channels.
Let $\mathcal{L}=w_{1}^{*}\ldots w_{n}^{*}$. Let
$(q_{0},\varepsilon)\xrightarrow{\tau}(q,w)\xrightarrow{\sigma}(q,w^{\prime})$
such that $w\preceq w^{\prime}$. To prove branch-monotony, we need to show
that there exists $w^{\prime\prime}$ such that
$(q,w^{\prime})\xrightarrow{\sigma}(q,w^{\prime\prime})$ and
$w^{\prime}\preceq w^{\prime\prime}$.
Since $\mathcal{L}$ is a bounded language, we know that
$\mathit{proj}_{\textsf{c}!}(\tau)=w_{1}^{n_{1}}\ldots w_{i}^{n_{i}}.u_{i}$
where $u_{i}\preceq w_{i}$ and $1\leq i\leq n$ and $n_{p}\in\mathbb{N}$ for
all $1\leq p\leq i$. Moreover,
$\mathit{proj}_{\textsf{c}!}(\sigma)\in\mathit{Pref}(u^{\prime}_{i}\cdot
w_{i}^{n^{\prime}_{i}}\ldots w_{j}^{n_{j}})$ where
$u_{i}.u^{\prime}_{i}=w_{i}$ and $1\leq i\leq j\leq n$. Let us consider the
channel content $w$ now. From the characterization of $\tau$ above, we can
express $w=v_{\ell}\cdot w_{\ell}^{n_{\ell}}\ldots w_{i}^{n_{i}}.u_{i}$, where
$1\leq\ell\leq i$ and $v_{\ell}\in\mathit{Suf}(w_{\ell})$. Now, let us analyse
the cases based on the value of $\mathit{proj}_{\textsf{c}?}(\sigma)$:
1. Case i:
$\mathit{proj}_{\textsf{c}?}(\sigma)=\varepsilon$. In other words, this means
that there are only send actions in $\sigma$. Hence, it is possible to repeat
the same sequence of moves as we are in the same control-state $q$. Therefore,
$(q,w^{\prime})\xrightarrow{\sigma}(q,w^{\prime\prime})$ for some value of
$w^{\prime\prime}$. Furthermore, since $\sigma$ has only send actions,
$w^{\prime}=w.v$ for some $v\in A^{*}$. Therefore, after we repeat $\sigma$
once again from $(q,w^{\prime})$, we reach $(q,w^{\prime\prime})$ such that
$w^{\prime\prime}=w^{\prime}.v=w.v.v$. Therefore,
$(q,w^{\prime})\leq_{p}(q,w^{\prime\prime})$ and we are done with this case.
2. Case ii:
$\mathit{proj}_{\textsf{c}?}(\sigma)\neq\varepsilon$.
1. Case a:
Let us first consider that $w\neq\varepsilon$.
Let us first assume that there exist $p_{1},p_{2}$ with $1\leq
p_{1}<p_{2}\leq i$ such that $w=v_{p_{1}}.v.u_{p_{2}}$ where
$v_{p_{1}}\in\mathit{Suf}(w_{p_{1}})$,
$u_{p_{2}}\in\mathit{Pref}(w_{p_{2}})$,
$v_{p_{1}},u_{p_{2}}\neq\varepsilon$ and $v\in A^{*}$. In other words, the
channel content $w$ contains non-empty parts of at least two distinct words.
Since the FIFO machine is input-bounded, $\mathit{proj}_{\textsf{c}!}(\sigma)$
does not contain any occurrence of a letter from the word $w_{p_{1}}$.
Therefore, in order for the condition $w\preceq w^{\prime}$ to be satisfied,
it is necessary that $\mathit{proj}_{\textsf{c}?}(\sigma)=\varepsilon$, which
contradicts our assumption.
Therefore, if $w\neq\varepsilon$, the only possibility is that
$w=v_{i}.w_{i}^{n_{i}}.u_{i}$ with $v_{i}\in\mathit{Suf}(w_{i})$, so
$\mathit{proj}_{\textsf{c}?}(\sigma)$ only consists of letters from words
$w_{j}$ with $j\geq i$. However, since $w\preceq w^{\prime}$, we can be
certain that it only consists of letters from the word $w_{i}$: if the head
of the channel contained letters from a word $w_{j}$ with $j>i$, then there
could be no occurrence of the word $w_{i}$ in the channel. Therefore,
$\mathit{proj}_{\textsf{c}?}(\sigma)$ consists only of letters belonging to
$w_{i}$. Moreover, since $\mathit{proj}_{\textsf{c}?}(\sigma)$ is non-empty,
at least one letter is read from $w$. Therefore, the first letter sent in the
sequence $\sigma$ belongs to the word $w_{i}$ (to ensure $w\preceq
w^{\prime}$).
Let us consider this subsequence $\sigma^{\prime}$ from $(q,w)$ to the first
send action. Let us say we have
$(q,w)\xrightarrow{\sigma^{\prime}}(q^{\prime},v^{\prime})$. Now, since the
subsequence $\sigma^{\prime}$ only consists of receptions from $(q,w)$, along
with the first send action, this subsequence is also possible from $(q,w.v)$
for all $v\in\Sigma^{*}$. Therefore, we can execute the same sequence from
$(q,w^{\prime})$. Hence,
$\mathit{proj}_{\textsf{c}!}(\tau.\sigma.\sigma^{\prime})\in\mathcal{L}$.
Therefore, since
$\mathit{Alph}(\mathit{proj}_{\textsf{c}!}(\sigma^{\prime}))\subseteq\mathit{Alph}{(w_{i})}$,
we can be sure that
$\mathit{Alph}{(\mathit{proj}_{\textsf{c}!}(\sigma))}\subseteq\mathit{Alph}{(w_{i})}$.
Therefore, $\sigma$ only sends and receives letters from a single word
$w_{i}$.
Moreover, since the system is input-bounded, and the first send action in
$\sigma^{\prime}$ matches the first send action in $\sigma$, we see that
$w^{\prime}=v_{i}.w_{i}^{n_{i}}.u_{i}.(v^{\prime}_{i}.w_{i}^{n^{\prime}_{i}}.u_{i})$
${=w.(v^{\prime}_{i}.w_{i}^{n^{\prime}_{i}}.u_{i})}$ such that
$u_{i}.v^{\prime}_{i}=w_{i}$. Therefore, we can repeat this sequence from
$(q,w^{\prime})$ and reach a state $(q,w^{\prime\prime})$ such that
$w^{\prime}\preceq w^{\prime\prime}$, and hence, it is branch-monotone for
this case.
2. Case b:
The final case we need to consider is $w=\varepsilon$. In this case, it is
clear that $\sigma$ consists of at least one send action before the first
reception. Therefore, because of the input-bounded property and the fact that
this action can be executed at $(q,w^{\prime})$, we can once again see that
$\mathit{proj}_{\textsf{c}!}(\sigma)$ consists only of letters from
a single word. Moreover, since the same action can be executed, once again we
see that $\mathit{proj}_{\textsf{c}!}(\sigma)=v_{j}.w_{j}^{n_{j}}.u_{j}$ such
that $u_{j}.v_{j}=w_{j}$. Therefore,
$\mathit{proj}_{\textsf{c}?}(\sigma)\in\mathit{Pref}(v_{j}.w_{j}^{n_{j}}.u_{j})$.
Now let us consider the run $\tau.\sigma$ in the automaton $\mathcal{A}$ that
we constructed. Since $\tau.\sigma$ is a run in
$\bar{\mathcal{M}}_{\mathcal{L}}$, there is also a run in $\mathcal{A}$ such
that $q^{0}_{\mathcal{A}}\xrightarrow{\tau}q_{s}\xrightarrow{\sigma}q_{s}$.
Moreover, we can also repeat $\sigma$ to obtain
$q^{0}_{\mathcal{A}}\xrightarrow{\tau}q_{s}\xrightarrow{\sigma}q_{s}\xrightarrow{\sigma}q_{s}$.
Therefore, $\tau.\sigma.\sigma\in\mathit{Pref}(\mathcal{L}_{?})$. Moreover,
since $\mathit{proj}_{\textsf{c}?}(\sigma)\neq\varepsilon$,
$\mathit{proj}_{\textsf{c}?}(\sigma)=u^{\prime}_{j}.w_{j}^{n^{\prime}_{j}}.v^{\prime}_{j}$
such that $v^{\prime}_{j}.u^{\prime}_{j}=w_{j}$. Therefore, we can repeat
$\sigma$ from $(q,w^{\prime})$ in $\bar{\mathcal{M}}_{\mathcal{L}}$, and we
reach a state $(q,w^{\prime\prime})$ such that $w^{\prime}\preceq
w^{\prime\prime}$.
Hence, we see that for all cases, if
$(q_{0},\varepsilon)\xrightarrow{\tau}(q,w)\xrightarrow{\sigma}(q,w^{\prime})$
such that $w\preceq w^{\prime}$, then there exists $w^{\prime\prime}$ such
that $(q,w^{\prime})\xrightarrow{\sigma}(q,w^{\prime\prime})$ and
$w^{\prime}\preceq w^{\prime\prime}$.
If we have more than one channel, we can extend this argument to consider each
channel, and hence, $\bar{\mathcal{M}}_{\mathcal{L}}$ is branch-monotone.
Consider the FIFO machine $\mathcal{M}_{4}$ as in Figure 5 that is input-
bounded for $\mathcal{L}=(ab)^{*}$. As we saw in Example 4.2, it is not
branch-monotone. However, let us consider the product
$\bar{\mathcal{M}}_{4,\mathcal{L}}$ of $\mathcal{M}_{4}$ along with the finite
automaton $\mathcal{A}$ that recognizes
$\mathcal{L}_{!}\cap\mathit{Pref}(\mathcal{L}_{?})$. We depict only accessible
states of $\bar{\mathcal{M}}_{4,\mathcal{L}}$, from which we can still complete the
word read so far to a word belonging to
$\mathcal{L}_{!}\cap\mathit{Pref}(\mathcal{L}_{?})$. Here, we see that the run
from the earlier counter-example no longer violates branch-monotony. In
fact, the loop that could not be realized has now been unfolded, and we can
see that the FIFO machine $\mathcal{M}_{4}$ has only a finite run. Trivially,
due to the absence of loops, we see that it is now branch-monotone for the
prefix-ordering.
Figure 5. The FIFO machine $\mathcal{M}_{4}$ and the automata
$\mathcal{A}_{s}$ and $\mathcal{A}_{r}$, which recognize
$L(\mathcal{A}_{s})=\mathcal{L}_{!}$ and
$L(\mathcal{A}_{r})=\mathit{Pref}(\mathcal{L}_{?})$ respectively; their
product with $\mathcal{M}_{4}$ gives $\bar{\mathcal{M}}_{4,\mathcal{L}}$,
which is branch-monotone.
We remark here that in [BFS20], when the authors show that input-bounded
rational-reachability is decidable, they construct a “normalized” FIFO
machine, from the given FIFO machine and bounded language. Using the same
principles, we can modify every input-bounded FIFO machine into one that is
distinct-letter, and using the product construction from above, we have the
following proposition:
###### Proposition 19.
Normalized input-bounded FIFO machines are branch-effective branch-WSTS (for
the prefix ordering).
###### Proof 4.5.
Given an input-bounded FIFO machine $\mathcal{S}_{\mathcal{M}}$, for any two
states $x,x^{\prime}\in X$, we can decide if $x\leq_{p}x^{\prime}$.
Furthermore, $\xrightarrow{}$ is decidable, and
$\textsf{Post}_{\mathcal{S}}(x)$ is computable for all $x\in X$. Hence, it is
branch-effective.
Moreover, the extended prefix ordering is a partial ordering. Hence, we
deduce:
###### Theorem 20.
Non-termination and boundedness are decidable for input-bounded FIFO machines.
### 4.3. Verifying non-termination and boundedness for general counter
machines
We saw in Section 4.1 that non-termination and boundedness are decidable for
CMRZ. However, this class of counter machines is restrictive, as every run in
such a machine must satisfy the restriction. In this section, we show that we can
study these problems for a larger class of counter machines, where the branch-
well-structured behaviour is only maintained for a single run.
We begin by constructing the labelled-$\mathit{RRT}$ (denoted by
$\mathit{l}\text{-}\mathit{RRT}$), which is a refinement of the
$\mathit{RRT}$, where we add an additional label, which we will call
_iterable_. We mark a dead vertex $n^{\prime}$ (labelled by $x^{\prime}$) as
_iterable_ if
* •
it is subsumed by a node $n$ labelled by $x$, i.e. $x\leq x^{\prime}$ and
$\exists\sigma.x\xrightarrow{\sigma}x^{\prime}$, and
* •
there exists a state $x^{\prime\prime}\in X$, such that,
$x^{\prime}\xrightarrow{\sigma}x^{\prime\prime}$ and $x^{\prime}\leq
x^{\prime\prime}$.
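To make the definition concrete, the iterable check can be sketched for counter-machine states. The following is a minimal Python illustration; the state encoding and the `step`/`leq` helpers are our own assumptions, not the paper's formal construction:

```python
def step(state, sigma):
    """Replay a sequence of counter actions; return None if blocked.
    Actions: ('inc', v), ('dec', v), ('zero?', v) on a dict of counters."""
    q, counters = state
    counters = dict(counters)
    for op, v in sigma:
        if op == 'inc':
            counters[v] += 1
        elif op == 'dec':
            if counters[v] == 0:
                return None
            counters[v] -= 1
        elif op == 'zero?':
            if counters[v] != 0:
                return None
    return (q, counters)

def leq(x, y):
    """Natural order: same control-state, componentwise <= on counters."""
    return x[0] == y[0] and all(x[1][v] <= y[1][v] for v in x[1])

def is_iterable(x, x_prime, sigma):
    """x' (a dead vertex) is iterable if x <= x' (subsumption along the
    branch x --sigma--> x') and replaying sigma from x' yields x'' >= x'."""
    if not leq(x, x_prime):
        return False
    x2 = step(x_prime, sigma)
    return x2 is not None and leq(x_prime, x2)
```

In this toy setting, a pure increment loop yields an iterable node, while a loop guarded by a zero test does not, mirroring the role zero tests play in the proofs that follow.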
For counter machines, we show that the presence of an iterable node in the
$\mathit{l}\text{-}\mathit{RRT}$ is a sufficient condition for unboundedness
and non-termination.
###### Proposition 21.
A counter machine $\mathcal{S}=(X,\textsf{Act},\xrightarrow{},\leq,x_{0})$ is
non-terminating if $\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$ has at
least one iterable node.
###### Proof 4.6.
Let $\mathcal{S}=(X,\textsf{Act},\xrightarrow{},\leq,x_{0})$ be the OLTS
associated to a counter machine, and
$\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$ its labelled reduced
reachability tree. We show that if there exists an iterable node in
$\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$, then $\mathcal{S}$ is
non-terminating. Let us assume there is an iterable node labelled by $x_{1}$
in $\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$. By definition, this
means that there exists a run
$x_{0}\xrightarrow{*}x_{1}\xrightarrow{\sigma}x_{2}\xrightarrow{\sigma}x_{3}$.
Let $x_{i}=(q,{\ell}_{i})$ for $i\in\\{1,2,3\\}$, where
$\ell_{i}=(\ell_{i,v})_{v\in V}$ represents the counter values. Let
$\sigma=a_{1}a_{2}\ldots a_{n}$. Then we have,
$x_{1}\xrightarrow{a_{1}}u_{1}\xrightarrow{a_{2}}\ldots\xrightarrow{a_{n-1}}u_{n-1}\xrightarrow{a_{n}}x_{2}$
and
$x_{2}\xrightarrow{a_{1}}u_{1}^{\prime}\xrightarrow{a_{2}}\ldots\xrightarrow{a_{n-1}}u_{n-1}^{\prime}\xrightarrow{a_{n}}x_{3}$.
Let $u_{i}=(q_{i},{m}_{i})$, and $u_{i}^{\prime}=(q_{i},{m}^{\prime}_{i})$ for
$1\leq i<n$, where ${m}_{i}=({m}_{i,v})_{v\in V}$ and
${m}^{\prime}_{i}=({m}^{\prime}_{i,v})_{v\in V}$ represent the counter values.
Since $x_{1}\leq x_{2}$, for each counter $v\in V$, the following holds:
$\ell_{2,v}=\ell_{1,v}+k_{v}$, where $k_{v}\geq 0$. Moreover, because both runs
perform the same sequence of actions, we also have the following property: for
all $1\leq i\leq n$, ${m}^{\prime}_{i,v}={m}_{i,v}+k_{v}$.
Suppose there exists a transition
$u_{i-1}\xrightarrow{a_{i}}u_{i}$ such that $a_{i}=\textsf{op}(v_{i}),Z_{i}$,
for some $v_{i}\in V$ and with $Z_{i}\neq\emptyset$; then we also have
$u^{\prime}_{i-1}\xrightarrow{a_{i}}u^{\prime}_{i}$. Hence, the set of
counters $Z_{i}$ tested to zero in $u_{i-1}$ is the same as the set of
counters tested to zero in $u^{\prime}_{i-1}$, and the action $a_{i}$ is
feasible from both $u_{i-1}$ and $u_{i-1}^{\prime}$. Therefore, for each counter
$v\in Z_{i}$, we have ${m}_{i,v}={m}^{\prime}_{i,v}$, i.e. $k_{v}=0$. In other
words, for each counter $v\in V$ that is tested to zero along $\sigma$,
$k_{v}=0$, i.e. the values of those counters are identical along both runs.
Let us now only consider the set of counters tested for zero along $\sigma$,
and call this set of counters $Z_{\sigma}$. The following property holds:
$\ell_{1,v}=\ell_{2,v}$ for all $v\in Z_{\sigma}$. Now, from $(q,\ell_{2})$,
upon executing the sequence of transitions $\sigma$, for every counter in
$Z_{\sigma}$, once again the values do not change. Therefore, we have the
following property: $\ell_{2,v}=\ell_{3,v}$ for all $v\in Z_{\sigma}$. For
counters $v\in V\setminus Z_{\sigma}$, $\ell_{2,v}=\ell_{1,v}+k_{v}$. However,
since there are no zero tests for these counters along $\sigma$, we can repeat
the same sequence of actions and obtain: $\ell_{3,v}=\ell_{2,v}+k_{v}$.
Therefore, from $x_{3}=(q,{\ell}_{3})$, once again we can repeat $\sigma$ in
order to reach a state $x_{4}=(q,{\ell}_{4})$ such that for all $v\in
Z_{\sigma}$, $\ell_{3,v}=\ell_{4,v}$ and for all counters $v\in V\setminus
Z_{\sigma}$, $\ell_{4,v}=\ell_{3,v}+k_{v}$. Therefore, we can repeat $\sigma$
infinitely many times. Hence, $\mathcal{S}$ is non-terminating.
### 4.4. Verifying non-termination and boundedness for FIFO machines
Now, we extend the same idea as above to branch-wqo FIFO machines.
Let us assume that given a FIFO machine, we have:
$(q,u)\xrightarrow{\sigma}(q,u.v)$, and $\exists w\in\Sigma^{*}$ such that
$(q,u.v)\xrightarrow{\sigma}(q,u.v.w)$. Then, for all
$\textsf{c}\in\textsf{Ch}$:
$u_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}$ (4.1)
$u_{\textsf{c}}.v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}.w_{\textsf{c}}$ (4.2)
From the above two equations, we have
$u_{\textsf{c}}.v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=u_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma).w_{\textsf{c}}$ (4.3)
Hence, we have:
$v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=\mathit{proj}_{\textsf{c}!}(\sigma).w_{\textsf{c}}$ (4.4)
Moreover,
$|\mathit{proj}_{\textsf{c}!}(\sigma)|=|\mathit{proj}_{\textsf{c}?}(\sigma)|+|v_{\textsf{c}}|$ (4.5)
$|\mathit{proj}_{\textsf{c}!}(\sigma)|=|\mathit{proj}_{\textsf{c}?}(\sigma)|+|w_{\textsf{c}}|$ (4.6)
Hence, the length of $\mathit{proj}_{\textsf{c}!}(\sigma)$ is at least as much
as that of $v_{\textsf{c}}$ and of $w_{\textsf{c}}$. Moreover,
$|v_{\textsf{c}}|=|w_{\textsf{c}}|$ (4.7)
From Equation 4.1, we see that $v_{\textsf{c}}$ is a suffix of
$\mathit{proj}_{\textsf{c}!}(\sigma)$. Similarly, from Equation 4.2, we see
that $w_{\textsf{c}}$ is a suffix of $\mathit{proj}_{\textsf{c}!}(\sigma)$.
Since $|v_{\textsf{c}}|=|w_{\textsf{c}}|$, we have
$v_{\textsf{c}}=w_{\textsf{c}}$. We can now rewrite Equation 4.4 as:
$v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=\mathit{proj}_{\textsf{c}!}(\sigma).v_{\textsf{c}}$ (4.8)
A well-known consequence of Levi's Lemma is that the words $u,v\in\Sigma^{*}$
that are solutions of the equation $uv=vu$ are exactly those satisfying
$u,v\in z^{*}$, where $z\in\Sigma^{*}$ is a primitive word.
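This characterization is easy to check on concrete finite words. The following minimal Python sketch (the helper names are ours) tests commutation and computes primitive roots by brute force:

```python
def primitive_root(w):
    """Return the shortest z such that w is in z*; for a nonempty word
    this is its primitive root, found by trying each divisor of len(w)."""
    n = len(w)
    for d in range(1, n + 1):
        if n % d == 0 and w[:d] * (n // d) == w:
            return w[:d]
    return w  # only reached for the empty word

def commute(u, v):
    """u and v satisfy uv = vu iff they are powers of a common word."""
    return u + v == v + u
```

For instance, `"abab"` and `"ab"` commute and share the primitive root `"ab"`, while `"ab"` and `"ba"` do not commute.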
For FIFO machines also, we show that an iterable node in the
$\mathit{l}\text{-}\mathit{RRT}$ can indicate the presence of a non-
terminating run. Before we prove that, we first restate a result from [FP20]
that we will use.
###### Proposition 22 ([FP20]).
Given a FIFO machine, the loop labelled by $\sigma$ is infinitely iterable
from a state $(q,\mathbf{w})$ iff for every channel
$\textsf{c}\in\textsf{Ch}$, either
$\mathit{proj}_{\textsf{c}?}(\sigma)=\varepsilon$, or the following three
conditions are true: $\sigma$ is fireable at least once from $(q,\mathbf{w})$,
$|\mathit{proj}_{\textsf{c}?}(\sigma)|\leq|\mathit{proj}_{\textsf{c}!}(\sigma)|$
and
$\mathbf{w}_{\textsf{c}}\cdot(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=(\mathit{proj}_{\textsf{c}?}(\sigma))^{\omega}$.
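Since the channel contents and the loop projections are finite words, these conditions can be checked effectively: the two ultimately periodic $\omega$-words agree iff they agree on a prefix of length $|\mathbf{w}_{\textsf{c}}|+\mathrm{lcm}(|\mathit{proj}_{\textsf{c}!}(\sigma)|,|\mathit{proj}_{\textsf{c}?}(\sigma)|)$, since beyond the head both sides share that common period. A minimal Python sketch for a single channel (function and parameter names are ours):

```python
from math import lcm

def omega_prefix(head, period, n):
    """First n letters of the omega-word head . period^omega."""
    s = head
    while len(s) < n:
        s += period
    return s[:n]

def infinitely_iterable(w, send, recv, fireable_once=True):
    """Conditions of [FP20] for one channel: either no receptions, or
    sigma is fireable once, |recv| <= |send|, and
    w . send^omega == recv^omega (decided on a sufficient finite prefix)."""
    if recv == "":
        return True
    if not fireable_once or len(recv) > len(send):
        return False
    n = len(w) + lcm(len(send), len(recv))  # prefix length deciding equality
    return omega_prefix(w, send, n) == omega_prefix("", recv, n)
```

For example, with empty channel and loop effect `send = recv = "ab"`, the loop iterates forever, whereas channel content `"b"` breaks the alignment.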
Now, we will use the above result to provide a _sufficient_ condition for non-
termination in FIFO machines.
###### Proposition 23.
A FIFO machine $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq_{p},x_{0})$ is non-
terminating if $\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$ has at
least one iterable node.
###### Proof 4.7.
Let $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq_{p},x_{0})$ be a FIFO machine,
and $\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$ its labelled reduced
reachability tree. Let us assume that there exists an iterable node
$n^{\prime}$ labelled by $x^{\prime}$ in
$\mathit{l}\text{-}\mathit{RRT}(\mathcal{S},x_{0})$. Then, by definition, we
have a run
$x_{0}\xrightarrow{*}x\xrightarrow{\sigma}x^{\prime}\xrightarrow{\sigma}x^{\prime\prime}$
such that $x\leq_{p}x^{\prime}\leq_{p}x^{\prime\prime}$. Let
$x=(q,\mathbf{w})$, $x^{\prime}=(q,\mathbf{w}^{\prime})$ and
$x^{\prime\prime}=(q,\mathbf{w}^{\prime\prime})$. From Equations 4.1 through
4.7, we know that for all $\textsf{c}\in\textsf{Ch}$, if
$\mathbf{w}_{\textsf{c}}=u_{\textsf{c}}$, then
$\mathbf{w}^{\prime}_{\textsf{c}}=u_{\textsf{c}}\cdot v_{\textsf{c}}$, and
$\mathbf{w}^{\prime\prime}_{\textsf{c}}=u_{\textsf{c}}\cdot
v_{\textsf{c}}\cdot v_{\textsf{c}}$, for some words
$u_{\textsf{c}},v_{\textsf{c}}\in\Sigma^{*}$.
In order to show that $\mathcal{S}$ is non-terminating, we will verify the
criteria of Proposition 22. We will show that $\sigma$ is
infinitely iterable from $x=(q,\mathbf{w})$. For each
$\textsf{c}\in\textsf{Ch}$ such that
$\mathit{proj}_{\textsf{c}?}(\sigma)=\varepsilon$, the condition in the
proposition is true trivially. Hence, we only need to show that in cases when
$\mathit{proj}_{\textsf{c}?}(\sigma)\neq\varepsilon$, the three conditions
stated are met. We will consider a single channel, but the idea can be
extended to all channels.
Since we have a run $x_{0}\xrightarrow{*}x\xrightarrow{\sigma}x^{\prime}$, the
sequence of actions $\sigma$ is fireable at least once, so the first condition
holds. Moreover, since
$(q,\mathbf{w})\xrightarrow{\sigma}(q,\mathbf{w}^{\prime})$ and for every
$\textsf{c}\in\textsf{Ch}$,
$\mathbf{w}_{\textsf{c}}\preceq\mathbf{w}^{\prime}_{\textsf{c}}$, the net
effect of the loop is not decreasing in the length of the channel contents,
i.e.
$|\mathit{proj}_{\textsf{c}?}(\sigma)|\leq|\mathit{proj}_{\textsf{c}!}(\sigma)|$.
Hence, the second condition also holds true.
We only now need to show that for all $\textsf{c}\in\textsf{Ch}$, the
following holds:
$u_{\textsf{c}}\cdot(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=(\mathit{proj}_{\textsf{c}?}(\sigma))^{\omega}$.
From Equation 4.1, we have:
$u_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma)=\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}$
Concatenating both sides with the infinite word
$(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$, we have:
$u_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.9)
$=\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma).(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.10)
Moreover, from Equation 4.2, we now have:
$u_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=\mathit{proj}_{\textsf{c}?}(\sigma).\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}.w_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.11)
$=(\mathit{proj}_{\textsf{c}?}(\sigma))^{2}.u_{\textsf{c}}.v_{\textsf{c}}.v_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.12)
which we can rewrite using Equation 4.8 as:
$u_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=(\mathit{proj}_{\textsf{c}?}(\sigma))^{2}.u_{\textsf{c}}.v_{\textsf{c}}.\mathit{proj}_{\textsf{c}!}(\sigma).v_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.13)
Now, we can repeat the process and use Equation 4.2 once again to obtain:
$u_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=(\mathit{proj}_{\textsf{c}?}(\sigma))^{2}.\mathit{proj}_{\textsf{c}?}(\sigma).u_{\textsf{c}}.v_{\textsf{c}}.w_{\textsf{c}}.v_{\textsf{c}}.(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}$ (4.14)
Repeating this process, we have:
$u_{\textsf{c}}\cdot(\mathit{proj}_{\textsf{c}!}(\sigma))^{\omega}=(\mathit{proj}_{\textsf{c}?}(\sigma))^{\omega}$.
Hence, the presence of an iterable node in the
$\mathit{l}\text{-}\mathit{RRT}$ implies infinite iterability, or in other
words, non-termination.
As for the case of counter machines, the $\mathit{l}\text{-}\mathit{RRT}$ of
branch-wqo FIFO machines is finite. Hence, for branch-wqo FIFO machines, it is
decidable to verify if there is an iterable node in the
$\mathit{l}\text{-}\mathit{RRT}$.
## 5\. Decidability of Coverability
Coverability algorithms for branch-WSTS. We show that the two existing
coverability algorithms (the forward and backward algorithms) for WSTS (see
Section 2.2 to recall) do not allow one to decide coverability for branch-
WSTS. We remark that, contrary to WSTS,
${\textsf{Pre}^{*}(\mathord{\uparrow}x)}$ is not necessarily upward-closed. In
fact, even with a single zero-test, this property may not be satisfied.
Figure 6. System $\mathcal{M}_{6}$, with the single transition
$q_{0}\xrightarrow{c\mathord{=}0?}q_{1}$, is branch-WSTS.
In Figure 6, let us consider the counter machine $\mathcal{M}_{6}$ with a
single counter $c$. Let $x=(q_{1},0)$. We see that
${\textsf{Pre}^{*}(\mathord{\uparrow}x)}=\\{(q_{1},n)\mid n\geq
0\\}\cup\\{(q_{0},0)\\}$. However,
${\mathord{\uparrow}\textsf{Pre}^{*}(\mathord{\uparrow}x)}={\textsf{Pre}^{*}(\mathord{\uparrow}x)}\cup\\{(q_{0},n)\mid
n\geq 1\\}$. Hence, given a branch-effective branch-WSTS
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ and a state $x\in X$, the
set $\textsf{Pre}^{*}(\mathord{\uparrow}x)$ is not necessarily upward-closed.
Hence, we cannot use the backward algorithm to verify the coverability problem
for branch-WSTS correctly.
Let us consider using the forward algorithm instead, where the first procedure
(cf. Procedure 1) checks for coverability, and the second procedure (cf.
Procedure 2) looks for a witness of non-coverability.
The second procedure computes all sets $X$ which satisfy the property
$\mathord{\downarrow}\textsf{Post}^{*}(X)\subseteq X$. This is because, for
WSTS, the set $\mathord{\downarrow}\textsf{Post}^{*}(x)$ satisfies this
property. However, we now show a counter-example of a branch-WSTS which does
not satisfy this property.
Figure 7. System $\mathcal{M}_{7}$, with transitions
$q_{0}\xrightarrow{c{\mathord{++}}}q_{1}\xrightarrow{c\mathord{=}0?}q_{2}$, is
branch-WSTS.
Consider the counter machine $\mathcal{M}_{7}$ from Figure 7, with
$x_{0}=(q_{0},0)$. We compute
$Y=\mathord{\downarrow}\textsf{Post}^{*}(x_{0})$. We see that
$\textsf{Post}^{*}(x_{0})=\\{(q_{0},0),(q_{1},1)\\}$, hence,
$Y=\mathord{\downarrow}\textsf{Post}^{*}(x_{0})=\\{(q_{0},0),(q_{1},1),(q_{1},0)\\}$.
However, as
$\mathord{\downarrow}\textsf{Post}^{*}(Y)=\\{(q_{0},0),(q_{1},1),(q_{1},0),(q_{2},0)\\}$
is strictly larger than $Y$, we have
$\mathord{\downarrow}\textsf{Post}^{*}(Y)\not\subseteq Y$. Therefore, for
branch-effective, branch-WSTS
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ such that
$\mathord{\downarrow}\textsf{Post}(\mathord{\downarrow}x)$ is computable for
all $x\in X$, the set $Y=\mathord{\downarrow}\textsf{Post}^{*}(x_{0})$ does
not necessarily satisfy the property
$\mathord{\downarrow}\textsf{Post}^{*}(Y)\subseteq Y$. Hence, it is not
possible to guarantee the termination of the forward coverability algorithm.
We can deduce:
###### Proposition 24.
For branch-WSTS, neither the backward coverability algorithm nor the forward
coverability algorithm terminates, in general.
Not only do the two coverability algorithms fail to terminate, but we can also
prove that coverability is undecidable.
###### Theorem 25.
The coverability problem is undecidable for branch-effective branch-WSTS
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ (even if $\mathcal{S}$ is
strongly monotone and $\leq$ is wqo).
###### Proof 5.1.
We use the family of systems given in the proof of Theorem 4.3 of [FMP04]. Let
us denote by $\mathit{TM}_{j}$ the $j$-th Turing machine in some enumeration.
Consider the family of functions
$f_{j}:\mathbb{N}^{2}\xrightarrow{}\mathbb{N}^{2}$ defined by
$f_{j}(n,k)=(n,0)$ if $k=0$ and $\mathit{TM}_{j}$ runs for more than $n$
steps, and $f_{j}(n,k)=(n,n+k)$ otherwise.
Let $g:\mathbb{N}^{2}\xrightarrow{}\mathbb{N}^{2}$ be the function defined by
$g(n,k)=(n+1,k)$. The transition system $\mathcal{S}_{j}$ induced by the two
functions $f_{j}$ and $g$ is strongly monotone, hence also branch-monotone.
Moreover, the system $\mathcal{S}_{j}$ is branch-effective, and we observe
that Post is computable and $\leq$ is a wqo. Now, $(1,1)$ is coverable from
$(0,0)$ in $\mathcal{S}_{j}$ iff $\mathit{TM}_{j}$ halts. This proves that
coverability is undecidable.
Decidability of coverability. We show that coverability is decidable for a
class of systems with a wqo but with a restricted notion of monotony. As in
[BFM18], for example, we define the $Cover$ of a system $\mathcal{S}$ from a
state $x$ by:
$Cover_{\mathcal{S}}(x)=\mathord{\downarrow}\textsf{Post}_{\mathcal{S}}^{*}(x)$.
Let us consider the following monotony condition.
Let $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ be a system. We say
that $\mathcal{S}$ is _cover-monotone_ (resp. strongly cover-monotone) if, for
all $y_{1}\in Cover_{\mathcal{S}}(x_{0})$ and for all $x_{1},x_{2}\in X$ such
that $x_{1}\leq y_{1}$ and $x_{1}\xrightarrow{}x_{2}$, there exists a state
$y_{2}\in X$ such that $y_{1}\xrightarrow{*}y_{2}$ (resp.
$y_{1}\xrightarrow{}y_{2}$) and $x_{2}\leq y_{2}$.
Let us emphasize that cover-monotony of a system
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ is a property that depends
on the initial state $x_{0}$ while the usual monotony does not depend on any
initial state (see Figure 8).
Figure 8. Machine $\mathcal{M}_{8}$ is cover-monotone. However, if we modify
the system so that the initial state is $(q_{0},1)$, then it is not
cover-monotone.
###### Remark 26.
The strong cover-monotony property is not trivially decidable for general
models while (usual) strong-monotony is decidable for many powerful models
like FIFO machines and counter machines. However, this notion is still of
theoretical interest, as it shows that we can relax the general monotony
condition.
However, there is a link between general monotony and cover-monotony.
###### Proposition.
A system $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq)$ is monotone iff for all
$x_{0}\in X$, $(X,\Sigma,\xrightarrow{},\leq,x_{0})$ is cover-monotone.
###### Proof 5.2.
Every monotone system is trivially cover-monotone for all $x_{0}\in X$.
Conversely, consider a system $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq)$
such that $(X,\Sigma,\xrightarrow{},\leq,x_{0})$ is cover-monotone for all
$x_{0}\in X$. Let us consider $x_{1},y_{1},x_{2}\in X$ such that $x_{1}\leq
y_{1}$ and $x_{1}\xrightarrow{}x_{2}$. In order to show that $\mathcal{S}$ is
monotone, we need to prove that there exists $y_{2}$ such that
$y_{1}\xrightarrow{*}y_{2}$ and $x_{2}\leq y_{2}$.
Since $x_{1}\leq y_{1}$ (by hypothesis), $x_{1}\in
Cover_{\mathcal{S}}(y_{1})$. By the hypothesis,
$(X,\Sigma,\xrightarrow{},\leq,y_{1})$ is cover-monotone, hence
there exists $y_{2}$ such that $y_{1}\xrightarrow{*}y_{2}$ with $x_{2}\leq
y_{2}$. Hence, $\mathcal{S}$ is monotone.
We may now define cover-WSTS as follows.
A _cover-WSTS_ is a finitely branching cover-monotone system
$\mathcal{S}=(X,\Sigma,$ $\xrightarrow{},\leq,x_{0})$ such that $(X,\leq)$ is
wqo.
For cover-WSTS, the backward algorithm fails. This is once again because the
presence of a single zero test can make the set
${\textsf{Pre}^{*}(\mathord{\uparrow}x)}$ fail to be upward-closed. But we
will now show that the forward coverability approach is
possible.
###### Proposition.
Given a system $\mathcal{S}=(X,\Sigma,\xrightarrow{}{},\leq,x_{0})$ and a
downward-closed set $D\subseteq X$ such that
$\mathord{\downarrow}\textsf{Post}(D)\subseteq D$, we have the inclusion
$\mathord{\downarrow}\textsf{Post}^{*}(D)\subseteq D$.
###### Proof 5.3.
We prove the following claim by induction first.
Claim: Given a system $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, for
every downward-closed set $D\subseteq X$ such that
$\mathord{\downarrow}\textsf{Post}(D)\subseteq D$, the following inclusion
holds: $\mathord{\downarrow}\textsf{Post}^{k}(D)\subseteq D$, for all $k\geq
1$.
Base case: For $k=1$, this is the hypothesis.
Inductive step: The inductive hypothesis asserts that: Given a system
$\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, for every downward-closed
set $D\subseteq X$ such that $\mathord{\downarrow}\textsf{Post}(D)\subseteq
D$, the following inclusion holds:
$\mathord{\downarrow}\textsf{Post}^{k}(D)\subseteq D$.
We now prove that it is also true for $k+1$.
Let $Z=\textsf{Post}^{k}(D)$. Since
$Z\subseteq\mathord{\downarrow}\textsf{Post}^{k}(D)$, we know that $Z\subseteq
D$ (by hypothesis). Furthermore, for any subset $Y\subseteq D$,
$\mathord{\downarrow}\textsf{Post}(Y)$ is also a subset of $D$. Therefore,
$\mathord{\downarrow}\textsf{Post}(Z)\subseteq D$. Hence, we deduce
$\mathord{\downarrow}\textsf{Post}(\textsf{Post}^{k}(D))\subseteq D$, i.e.
$\mathord{\downarrow}\textsf{Post}^{k+1}(D)\subseteq D$. Hence, we have proved
the claim by induction.
From the above claim, we know that
$\mathord{\downarrow}\textsf{Post}^{k}(D)\subseteq D$ for all $k\geq 1$.
Note also that $\textsf{Post}^{*}(D)=D\cup\bigcup\limits_{k\geq
1}\textsf{Post}^{k}(D)$. Therefore, $\textsf{Post}^{*}(D)\subseteq D$, and
finally since $D$ is downward-closed,
$\mathord{\downarrow}\textsf{Post}^{*}(D)\subseteq D$.
Let us define a particular instance of the coverability problem in which we
verify if a state is coverable from the initial state.
Given a system $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, the
_$x_{0}$-coverability problem_ asks: given a state $y\in X$, do we have
$y\in\mathord{\downarrow}\textsf{Post}_{\mathcal{S}}^{*}(x_{0})$?
We show that $x_{0}$-coverability is decidable for cover-WSTS:
###### Theorem 27.
Let $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$ be an ideally effective
cover-WSTS such that Post is computable. Then, the $x_{0}$-coverability
problem is decidable.
###### Proof 5.4.
Consider a cover-WSTS $\mathcal{S}=(X,\Sigma,\xrightarrow{},\leq,x_{0})$, and
let us consider a state $y\in X$. To find a certificate of coverability (if
it exists), we cannot use Procedure 1, since general monotony is not
satisfied and then, in general,
$\mathord{\downarrow}\textsf{Post}^{*}(x_{0})\neq\mathord{\downarrow}\textsf{Post}^{*}(\mathord{\downarrow}x_{0})$.
However, we can use a variation of Procedure 1, in which we iteratively
compute $x_{0}$, $\textsf{Post}(x_{0})$, $\textsf{Post}(\textsf{Post}(x_{0}))$,
and so on, and at each step check whether $y\leq x$ for some $x$ in the
computed set. This can be done because $\mathcal{S}$ is finitely branching and
the sets $\textsf{Post}^{k}(x_{0})$ are computable for all $k\geq 0$. Hence,
if there exists a state reachable from $x_{0}$ that covers $y$, it will
eventually be found.
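The variation of Procedure 1 described above is a plain breadth-first exploration. The following Python sketch illustrates it; the names and the optional step bound are ours (the bound only keeps the illustration finite, since the search alone is a semi-decision procedure):

```python
def coverable_from_x0(x0, post, leq, target, max_steps=None):
    """Iteratively compute x0, Post(x0), Post(Post(x0)), ..., checking at
    each step whether some computed state covers the target. This
    semi-decides x0-coverability for finitely branching systems."""
    frontier, seen, k = {x0}, {x0}, 0
    while frontier and (max_steps is None or k <= max_steps):
        if any(leq(target, x) for x in frontier):
            return True
        frontier = {y for x in frontier for y in post(x)} - seen
        seen |= frontier
        k += 1
    return False  # no cover found (within the bound, if one was given)

# A toy one-counter system that only increments its counter.
def post(state):
    q, c = state
    return {(q, c + 1)}

leq = lambda a, b: a[0] == b[0] and a[1] <= b[1]
```

For instance, `('q0', 5)` is covered after five iterations from `('q0', 0)`, while a state with a different control-state is never covered.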
Now, let us prove that Procedure 2 terminates for input $y$ iff $y$ is not
coverable from $x_{0}$.
If Procedure 2 terminates, then at some point, the while condition is not
satisfied and there exists a set $D$ such that $y\notin D$ and $x_{0}\in D$
and $\mathord{\downarrow}\textsf{Post}(D)\subseteq D$. Moreover,
$\mathord{\downarrow}\textsf{Post}^{*}(I)\subseteq I$ for every inductive
invariant $I$ (see Proposition 5.2). Hence,
$Cover_{\mathcal{S}}(x_{0})\subseteq D$, therefore, since $y\notin D$, we
deduce that $y\not\in Cover_{\mathcal{S}}(x_{0})$ and then $y$ is not
coverable from $x_{0}$.
Note that every downward-closed subset of $X$ decomposes into finitely many
ideals since $(X,\leq)$ is wqo. Moreover, since $\mathcal{S}$ is ideally
effective, ideals of $X$ may be effectively enumerated. By [BFM18] and
[BFM17], for ideally effective systems, testing inclusion of downward-closed
sets and checking membership of a state in a downward-closed set are both
decidable.
To show the opposite direction, let us prove that if $y$ is not coverable from
$x_{0}$, the procedure terminates. It suffices to prove that
$Cover_{\mathcal{S}}(x_{0})$ is an inductive invariant. Indeed, this implies
that $Cover_{\mathcal{S}}(x_{0})$ is eventually computed by Procedure 2 when
$y$ is not coverable from $x_{0}$.
Let us show
$\mathord{\downarrow}\textsf{Post}(Cover_{\mathcal{S}}(x_{0}))\subseteq
Cover_{\mathcal{S}}(x_{0})$. Let
$b\in\mathord{\downarrow}\textsf{Post}(Cover_{\mathcal{S}}(x_{0}))$. Then,
there exists $a,a^{\prime},b^{\prime}$ such that
$x_{0}\xrightarrow{*}a^{\prime}$, $a^{\prime}\geq a$,
$a\xrightarrow{}b^{\prime}$ and $b^{\prime}\geq b$. Furthermore,
$a^{\prime},a\in Cover(x_{0})$. Hence, by cover-monotony, there exists
$b^{\prime\prime}\geq b^{\prime}$ such that
$a^{\prime}\xrightarrow{*}b^{\prime\prime}$. Therefore,
$x_{0}\xrightarrow{*}b^{\prime\prime}$ and $b^{\prime\prime}\geq
b^{\prime}\geq b$, hence, $b\in Cover_{\mathcal{S}}(x_{0})$. Hence, the
$x_{0}$-coverability problem is decidable.
On the other hand, we cannot decide the (general) coverability problem for
this class of systems:
###### Theorem 28.
The coverability problem is undecidable for cover-WSTS.
###### Proof 5.5.
Given any counter machine $\mathcal{C}=(Q,V,T,q_{0})$, let
$\mathcal{S}_{\mathcal{C}}=(X,A_{\mathcal{C}},\xrightarrow{},\leq,x_{0})$ be
its transition system equipped with the natural order on counters. We can
construct a system
$\mathcal{S}^{\prime}=(X^{\prime},A_{\mathcal{C}},\xrightarrow{}^{\prime},\leq,x^{\prime}_{0})$
such that $\mathcal{S}^{\prime}$ is cover-monotone, and any state $x\in X$ is
coverable iff it is also coverable in $X^{\prime}$. The construction is as
follows. We add a new control-state $q$, reachable from the initial
control-state $q_{0}$ of the counter machine via an empty transition;
therefore, $X^{\prime}=X\cup\\{(q,0)\\}$. This new control-state is a sink,
i.e., there are no transitions from $q$ to any other control-state (except
itself).
Moreover, we let $x^{\prime}_{0}=(q,0)$. Note that $\mathcal{S}^{\prime}$ is
cover-monotone, because there is no state reachable from $x^{\prime}_{0}$,
hence, the property is vacuously satisfied. However, for all other states, as
we leave the system unchanged, we see that a state $x$ is coverable in
$\mathcal{S}$ by a state $y$ iff it is coverable in $\mathcal{S}^{\prime}$.
Hence, coverability for counter machines reduces to the coverability problem
for cover-WSTS, and coverability is therefore undecidable for cover-WSTS.
## 6\. Conclusion
We have relaxed the notions of monotony and of the wellness of the
quasi-ordering that are traditionally used to define a WSTS, observing that
neither the wellness of the quasi-ordering nor monotony between all states is
required. By restricting the conditions to states reachable from one another,
thus defining what we call _branch-WSTS_, we are still able to decide
non-termination and boundedness. Moreover, some recently studied systems have
been shown to belong to this class, which adds interest to this relaxation.
Furthermore, we have required the ideas of branch-monotony to hold along a
single run, in order to provide an algorithm that decides non-termination and
boundedness for a larger class of counter machines and branch-wqo FIFO
machines.
However, as coverability is undecidable for branch-WSTS, the notion of
coverability seems to require a stricter condition than what we define for
branch-WSTS. This leads us to introduce a different class of systems,
incomparable to branch-WSTS, which we call _cover-WSTS_. These systems relax
the condition of monotony to only states within the coverability set, while
still retaining the decidability of a restricted form of coverability.
As future work, other systems that belong to these classes can be studied.
Moreover, it would be interesting to precisely characterise the systems for
which we can decide non-termination and boundedness along a single run. It
would also be interesting to see if the branch-WSTS relaxation translates to
better hope for usability of WSTS and relaxations as a verification technique.
#### Acknowledgements
We thank the reviewers for the detailed comments and suggestions. The work
reported was carried out in the framework of ReLaX, UMI2000 (ENS Paris-Saclay,
CNRS, Univ. Bordeaux, CMI, IMSc). It is partly supported by ANR FREDDA
(ANR-17-CE40-0013) and ANR BRAVAS (ANR-17-CE40-0028). It is also partly
supported by UKRI/EPSRC, references: EP/T006544/2, and EU HORIZON EUROPE
Research and Innovation Programme, grant agreement 101093006 (TaRDIS).
## References
* [ACJT00] Parosh Aziz Abdulla, Kārlis Cerāns, Bengt Jonsson, and Yih-Kuen Tsay. Algorithmic Analysis of Programs with Well Quasi-ordered Domains. Information and Computation, 160(1):109–127, July 2000.
* [BFM17] Michael Blondin, Alain Finkel, and Pierre McKenzie. Well Behaved Transition Systems. Log. Methods Comput. Sci., 13(3), 2017.
* [BFM18] Michael Blondin, Alain Finkel, and Pierre McKenzie. Handling infinitely branching well-structured transition systems. Information and Computation, 258:28–49, February 2018.
* [BFS20] Benedikt Bollig, Alain Finkel, and Amrita Suresh. Bounded Reachability Problems Are Decidable in FIFO Machines. In Igor Konnov and Laura Kovács, editors, 31st International Conference on Concurrency Theory, CONCUR 2020, September 1-4, 2020, Vienna, Austria (Virtual Conference), volume 171 of LIPIcs, pages 49:1–49:17. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
* [BS13] N. Bertrand and P. Schnoebelen. Computable fixpoints in well-structured symbolic model checking. Formal Methods in System Design, 43(2):233–267, October 2013.
* [BZ83] Daniel Brand and Pitro Zafiropulo. On communicating finite-state machines. Journal of the ACM (JACM), 30(2):323–342, 1983.
* [DFS98] C. Dufourd, A. Finkel, and Ph. Schnoebelen. Reset nets between decidability and undecidability. In Kim G. Larsen, Sven Skyum, and Glynn Winskel, editors, Automata, Languages and Programming, Lecture Notes in Computer Science, pages 103–115, Berlin, Heidelberg, 1998. Springer.
* [Dic13] Leonard Eugene Dickson. Finiteness of the Odd Perfect and Primitive Abundant Numbers with n Distinct Prime Factors. American Journal of Mathematics, 35(4):413–422, 1913.
* [DS20] Emanuele D’Osualdo and Felix Stutz. Decidable Inductive Invariants for Verification of Cryptographic Protocols with Unbounded Sessions. In Igor Konnov and Laura Kovács, editors, 31st International Conference on Concurrency Theory, CONCUR 2020, September 1-4, 2020, Vienna, Austria (Virtual Conference), volume 171 of LIPIcs, pages 31:1–31:23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020.
* [ER56] P. Erdös and R. Rado. A partition calculus in set theory. Bulletin of the American Mathematical Society, 62(5):427–489, 1956.
* [FG19] Alain Finkel and Ekanshdeep Gupta. The Well Structured Problem for Presburger Counter Machines. In Arkadev Chattopadhyay and Paul Gastin, editors, 39th IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2019), volume 150 of Leibniz International Proceedings in Informatics (LIPIcs), pages 41:1–41:15, Dagstuhl, Germany, 2019. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik. ISSN: 1868-8969.
* [Fin87] Alain Finkel. A Generalization of the Procedure of Karp and Miller to Well Structured Transition Systems. In Thomas Ottmann, editor, Automata, Languages and Programming, 14th International Colloquium, ICALP87, Karlsruhe, Germany, July 13-17, 1987, Proceedings, volume 267 of Lecture Notes in Computer Science, pages 499–508. Springer, 1987.
* [Fin90] Alain Finkel. Reduction and covering of infinite reachability trees. Inf. Comput., 89(2):144–179, 1990.
* [FMP04] Alain Finkel, Pierre McKenzie, and Claudine Picaronny. A well-structured framework for analysing petri net extensions. Information and Computation, 195(1):1–29, November 2004.
* [FP20] Alain Finkel and M. Praveen. Verification of Flat FIFO Systems. Logical Methods in Computer Science, 20(4), October 2020.
* [FS01] Alain Finkel and Philippe Schnoebelen. Well-structured transition systems everywhere! Theor. Comput. Sci., 256(1-2):63–92, 2001.
* [Hig52] Graham Higman. Ordering by Divisibility in Abstract Algebras. Proceedings of The London Mathematical Society, pages 326–336, 1952.
* [KSS04] E.V. Kouzmin, N.V. Shilov, and V.A. Sokolov. Model checking $\mu$-calculus in well-structured transition systems. In Proceedings of the 11th International Symposium on Temporal Representation and Reasoning (TIME 2004), pages 152–155, July 2004. ISSN: 1550-1311.
* [LS15] Ranko Lazić and Sylvain Schmitz. Nonelementary Complexities for Branching VASS, MELL, and Extensions. ACM Transactions on Computational Logic, 16(3):20:1–20:30, June 2015.
* [Sch21] Sylvain Schmitz. Branching in Well-Structured Transition Systems (Invited Talk). In Christel Baier and Jean Goubault-Larrecq, editors, 29th EACSL Annual Conference on Computer Science Logic (CSL 2021), volume 183 of Leibniz International Proceedings in Informatics (LIPIcs), pages 3:1–3:3, Dagstuhl, Germany, 2021. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. ISSN: 1868-8969.
* [SG07] N. V. Shilov and N. O. Garanina. Well-Structured Model Checking of Multiagent Systems. In Irina Virbitskaite and Andrei Voronkov, editors, Perspectives of Systems Informatics, Lecture Notes in Computer Science, pages 363–376, Berlin, Heidelberg, 2007. Springer.
* [SS12] Sylvain Schmitz and Philippe Schnoebelen. Algorithmic Aspects of WQO Theory. Lecture Notes, August 2012.
* [ZWH12] Damien Zufferey, Thomas Wies, and Thomas A. Henzinger. Ideal Abstractions for Well-Structured Transition Systems. In Viktor Kuncak and Andrey Rybalchenko, editors, Verification, Model Checking, and Abstract Interpretation, Lecture Notes in Computer Science, pages 445–460, Berlin, Heidelberg, 2012. Springer.
* [Öz22] Okan Özkan. Decidability of Resilience for Well-Structured Graph Transformation Systems. In Nicolas Behr and Daniel Strüber, editors, Graph Transformation, Lecture Notes in Computer Science, pages 38–57, Cham, 2022. Springer International Publishing.
# Prompted Opinion Summarization with GPT-3.5
Adithya Bhaskar1, IIT Bombay
Alexander R. Fabbri2, Salesforce AI
Greg Durrett3, UT Austin
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Large language models have shown impressive performance across a wide variety
of tasks, including text summarization. In this paper, we show that this
strong performance extends to opinion summarization. We explore several
pipeline methods for applying GPT-3.5 to summarize a large collection of user
reviews in a prompted fashion. To handle arbitrarily large numbers of user
reviews, we explore recursive summarization as well as methods for selecting
salient content to summarize through supervised clustering or extraction. On
two datasets, an aspect-oriented summarization dataset of hotel reviews
(SPACE) and a generic summarization dataset of Amazon and Yelp reviews
(FewSum), we show that GPT-3.5 models achieve very strong performance in human
evaluation. We argue that standard evaluation metrics do not reflect this, and
introduce three new metrics targeting faithfulness, factuality, and genericity
to contrast these different methods.
## 1 Introduction
Figure 1: Illustration of the TCG pipeline. Sentences are clustered based on
the aspects closest to their topic (T step); examples are shown for rooms,
food and service. The relevant cluster is then repeatedly chunked and
summarized until the combined length falls below 35 sentences (C step). A
final round of GPT-3.5 summarization follows (G step).
Recent years have seen several shifts in summarization research, from
primarily extractive models Erkan and Radev (2004); Gu et al. (2022); Kwon et
al. (2021); Jia et al. (2020); Zhong et al. (2020) to abstractive models with
copy mechanisms See et al. (2017); Song et al. (2018); Gehrmann et al. (2018)
to pre-trained models Devlin et al. (2019); Isonuma et al. (2021); Lewis et
al. (2020); Zhang et al. (2020a); He et al. (2020). GPT-3 Brown et al. (2020);
Wu et al. (2021); Saunders et al. (2022); Goyal et al. (2022) and GPT-4
represent another shift: they show excellent zero- and few-shot performance
across a variety of text generation tasks. However, their capabilities have
not been extensively benchmarked for opinion summarization. Unlike news, where
extractive lead baselines are often highly effective, opinion summarization
requires balancing contradictory opinions and a higher degree of abstraction
to convey all of the viewpoints faithfully.
In this paper, we apply GPT-3.5, specifically the text-davinci-002 model (the
most advanced model available at the time this work was being conducted), to
the task of opinion summarization, focusing on reviews of
products, hotels, and businesses. Applying GPT-3.5 in this setting is not
straightforward, as the combined length of the reviews or posts may exceed the
model’s maximum input length. Furthermore, we find that certain styles of
inputs can lead to GPT-3.5 simply echoing back an extract of the inputs. To
mitigate these issues, we explore a family of pipelined approaches,
specifically (1) filtering a subset of sentences with an extractive
summarization model, (2) chunking with repeated summarization, and (3) review-
score-based stratification. In the context of aspect-oriented summarization,
we also explore the inclusion of a sentence-wise topic prediction and
clustering step.
We show that our approaches yield high-quality summaries according to human
evaluation. The errors of the systems consist of subtle issues of balancing
contradictory viewpoints and erroneous generalization of specific claims,
which are not captured by metrics like ROUGE Lin (2004) or BERTScore Zhang et
al. (2020b). This result corroborates work calling for a re-examination of
current metrics Fabbri et al. (2021); Tang et al. (2023) and the need for
fine-grained evaluation Gehrmann et al. (2022). We therefore introduce a set
of metrics, using entailment as a proxy for support, to measure the
_factuality_ , _faithfulness_ , and _genericity_ of produced summaries. These
metrics measure the extent of over-generalization of claims and
misrepresentation of viewpoints while ensuring that summaries are not overly
generic.
Our results show that basic prompted GPT-3.5 produces reasonably faithful and
factual summaries when the input reviews are short (fewer than $1000$ words);
more sophisticated techniques do not show much improvement. However, as the
input size grows larger, repeated summarization leads GPT-3.5 to produce
generalized and unfaithful selections of viewpoints relative to the first
round. We demonstrate that using QFSumm Ahuja et al. (2022), an extractive
summarization model, to filter out sentences prior to GPT-3.5 (instead of
multi-level summarization) can slightly help with factuality and faithfulness.
The resulting summaries also present a more specific selection of viewpoints
but are generally shorter and use a higher proportion of common words. A
topicwise clustering and filtering step pre-pended to the pipeline alleviates
these issues while relinquishing a portion of the gains on factuality and
faithfulness.
Our main contributions are: (1) We introduce two approaches to long-form
opinion summarization with GPT-3.5, namely, hierarchical GPT-3.5 summarization
with chunking, and pre-extraction with an extractive summarization model. (2)
We establish the strength of these approaches with a human study and
demonstrate the need for objective and automatic means of evaluation. (3) We
develop three entailment-based metrics for factuality, faithfulness, and
genericity that are better suited to evaluate extremely fluent summaries as
compared to metrics based on $n$-gram matching. The relevant artifacts and
code for this work are publicly available and can be found at
https://github.com/testzer0/ZS-Summ-GPT3/.
## 2 Motivation and Problem Setting
Review summarization involves the summarization of the text of multiple
reviews of a given product or service into a coherent synopsis. More formally,
given a set of reviews $\mathcal{R}=\\{R_{i}\\}_{i=1}^{n}$ with the review
$R_{i}$ consisting of $l_{i}$ sentences $\\{r_{ij}\\}_{j=1}^{l_{i}}$, we
define a summarization system $\mathcal{S}$ to be a function that takes as
input the combined reviews $C$ and then produces $k$ output sentences
$S=\\{s_{i}\\}_{i=1}^{k}$, written as $S=\mathcal{S}(C)$, where
$C\equiv\texttt{combine}(\mathcal{R})$ is typically obtained by concatenating
the review sentences. We use the notation combine to refer to the combination
of both sentences and reviews.
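The formal setup above can be sketched concretely. The summarizer below is a crude keyword-filtering stand-in for an actual model (not the paper's GPT-3.5 component), and the aspect `'none'` case mirrors the aspect-agnostic setting:

```python
def combine(reviews):
    """C = combine(R): concatenate all review sentences into one input."""
    return " ".join(sent for review in reviews for sent in review)

def toy_summarizer(c, aspect="none"):
    """Stand-in for S(C | a): keep sentences mentioning the aspect,
    or the first two sentences when the aspect is 'none'."""
    sents = [s.strip() + "." for s in c.split(".") if s.strip()]
    if aspect == "none":
        return sents[:2]
    return [s for s in sents if aspect in s.lower()]

reviews = [["The room was clean.", "Staff were friendly."],
           ["The room smelled of smoke."]]
summary = toy_summarizer(combine(reviews), aspect="room")
```

Here `summary` keeps exactly the two sentences mentioning the room, while calling `toy_summarizer` with no aspect falls back to generic summarization.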
We can also instantiate this pipeline for _aspect-oriented review
summarization_ , which involves the summarization of multiple reviews
conditioned on an aspect $a$ (such as _‘cleanliness’_). In particular, the
summarization is written as $S=\mathcal{S}(C\mid a)$. We consider aspect-
agnostic review summarization as a special case of aspect-oriented review
summarization with the aspect _‘none’_ for notational simplicity.
### 2.1 Desiderata
Opinion summaries should demonstrate three key characteristics.
First, the summaries should be faithful, i.e., select the most
subjectively important viewpoints with the largest consensus. For instance, if
five reviews raised the issue of small rooms while eight complained about
dusty carpets, the choice (due to a limited output size) to discuss the latter
over the former would be considered faithful. Thus, faithfulness is about
careful management of the word budget given constrained output length.
The summaries should also be factual, i.e., report information grounded in
statements that actually do appear in the set of reviews, without containing
extrinsic hallucinations. For instance, if five reviews found hotel rooms to
be small, but three found them large, the statement _The rooms were large_ is
considered factual despite the viewpoint being in the minority. By contrast,
_A pipe burst and flooded my room_ is unfactual if this is never actually
reported in the reviews.
Finally, the summaries should be relevant: the points raised in them should
only discuss topics relevant to the specified aspect. For example, in a
summary about the cleanliness of a hotel room, bad food should be omitted even
if it was frequently brought up in the reviews.
### 2.2 Framework
Based on the desiderata, we need to ensure that the summaries represent all of
the reviews; however, the reviews are too many in number and too long in
combined length to process directly. We therefore define a summarization
pipeline to be a series of
summarization systems $\mathcal{S}_{1},\cdots,\mathcal{S}_{m}$ where each
system takes as input the condensed results of the previous system.
Specifically,
$S_{0}=\mathcal{R},\ \ C_{i}=\texttt{combine}(S_{i-1}),\ \ S_{i}=\mathcal{S}_{i}(C_{i})$
Table 1: The pipelines compared for SPACE and FewSum, and their constituents.
We showcase an example pipeline in Figure 1, with one stage extracting the
relevant sentences from the reviews and the next summarizing the extracted
sentences.
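The stage composition $S_{i}=\mathcal{S}_{i}(\texttt{combine}(S_{i-1}))$ amounts to folding stages over the reviews. A minimal sketch, where the extractor and summarizer stages are invented stand-ins rather than the paper's GPT-3.5/QFSumm components:

```python
def combine(sentences):
    return " ".join(sentences)

def split_sents(text):
    return [s.strip() + "." for s in text.split(".") if s.strip()]

def run_pipeline(reviews, stages):
    # S_0 = R (flattened to sentences); S_i = stage_i(combine(S_{i-1})).
    s = [sent for review in reviews for sent in review]
    for stage in stages:
        s = stage(combine(s))
    return s

# Invented stand-in stages: an "extractor" keeping sentences that mention
# rooms, then a "summarizer" keeping the first remaining sentence.
def extractor(c):
    return [s for s in split_sents(c) if "room" in s.lower()]

def summarizer(c):
    return split_sents(c)[:1]

reviews = [["The room was tiny.", "Food was great."],
           ["Loved the room view.", "Parking was hard."]]
out = run_pipeline(reviews, [extractor, summarizer])
```

Each stage receives only the condensed output of the previous one, exactly as in the pipeline definition above.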
## 3 GPT-3.5 Summarization Pipelines
The components of our summarization pipelines may be broadly categorized into
_extractors_ and _summarizers_ , which we describe next. More details can be
found in Appendix A. First, extractors select relevant parts of a set of
reviews, optionally conditioned on an aspect. Our extractors include:
#### GPT-3.5 Topic Clustering (T)
We prompt GPT-3.5 to produce a single word topic for each sentence, which we
map to the closest aspect with GloVe Pennington et al. (2014) similarity. This
defines a set of sentences to be used for aspect-based summarization. This
step is only used for pipelines on SPACE, as FewSum is aspect-agnostic.
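The topic-to-aspect mapping in the T step can be sketched as below. The 3-dimensional vectors are invented toy embeddings standing in for GloVe, and the one-word topics are assumed to have already been produced by the prompted model:

```python
import math

# Toy embeddings (invented, not real GloVe vectors).
EMB = {
    "rooms":   [0.9, 0.1, 0.0],
    "food":    [0.0, 0.9, 0.1],
    "service": [0.1, 0.0, 0.9],
    "bedroom": [0.8, 0.2, 0.1],  # topic words predicted per sentence
    "dinner":  [0.1, 0.8, 0.0],
}
ASPECTS = ["rooms", "food", "service"]

def cos(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def closest_aspect(topic):
    # Map a predicted topic word to the nearest aspect by embedding similarity.
    return max(ASPECTS, key=lambda a: cos(EMB[topic], EMB[a]))
```

With these toy vectors, `closest_aspect("bedroom")` maps to `rooms` and `closest_aspect("dinner")` to `food`, clustering sentences by aspect for the subsequent summarization.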
#### QFSumm-long (Q)
We use the aspect-specific extractive summarization model introduced in Ahuja
et al. (2022) to extract up to $35$ most relevant sentences from the input
text. QFSumm was designed to allow extremely long inputs, and thus no
truncation is required at this stage.
Figure 2: Example summaries from TCG, Q, and A, and a reference summary from
the SPACE dataset.
#### Review Stratification (R)
This involves clustering reviews by reviewer scores (given in the dataset) and
summarizing each cluster with GPT-3.5.
In addition to extractors, we also utilize GPT-3.5-chunking (C) in some of our
pipelines. We segment the sentences from the prior step into non-overlapping
chunks, then summarize each individually with GPT-3.5. The results are then
concatenated for the next step.
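A minimal sketch of the chunking step, with a stand-in summarizer in place of the GPT-3.5 call (the paper repeats this step until the combined length falls below 35 sentences; a single round is shown here):

```python
def chunk(sentences, size):
    # Segment sentences into non-overlapping chunks of at most `size`.
    return [sentences[i:i + size] for i in range(0, len(sentences), size)]

def chunked_summarize(sentences, size, summarize_chunk):
    # Summarize each chunk independently and concatenate the results.
    out = []
    for c in chunk(sentences, size):
        out.extend(summarize_chunk(c))
    return out

# Stand-in summarizer: keep the first sentence of each chunk.
def first_sent(c):
    return c[:1]

sents = [f"sentence {i}." for i in range(7)]
summary = chunked_summarize(sents, 3, first_sent)
```

Replacing `first_sent` with a real model call, and looping until the output is short enough, would yield the hierarchical variant.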
Our summarizers summarize the text one final time to produce the output
summary. All of our pipelines use GPT-3.5 as the summarizer. However, we also
compare to QFSumm Ahuja et al. (2022), AceSum Amplayo et al. (2021a) and the
model released with FewSum Bražinskas et al. (2020a), also referred to as
FewSum.
These building blocks are composed to build various summarization pipelines,
which we list in Table 1. An illustration of one pipeline (TCG) is shown in
Figure 1. Since topic-wise clustering is unnecessary for FewSum (due to lack
of aspects), we only compare G (vanilla GPT-3.5 used to summarize the set of
product reviews, truncated to fit if necessary), CG (Chunking + GPT-3.5), QG
(QFSumm-long + GPT-3.5), Q (QFSumm), and FS (FewSum) for this dataset. The
table also lists some approaches that are the first stages of pipelines that
begin with GPT-3.5-chunking, which we also compare against in Section 5.
Table 2: ROUGE-1, ROUGE-L, and BERTScore (F1) for the compared models.
## 4 Evaluation
| SPACE | FewSum
---|---|---
Average #reviews per entity | 100.00 | 22.41
Average #sentences per review | 9.16 | 3.37
Average #words per sentence | 17.56 | 12.12
Table 3: SPACE and FewSum dataset statistics.
### 4.1 Datasets
#### SPACE
Amplayo et al. (2021a) involves the summarization of reviews of hotels along
the aspects {_general, rooms, building, cleanliness, location, service, food_}
and provides three human-written summaries for each _(hotel, aspect)_ pair.
Table 3 shows that the reviews of SPACE are too long to summarize with a non-
pipelined system given text-davinci-002’s context window size. We exclude the
_general_ aspect from our experiments.
#### FewSum
Bražinskas et al. (2020a) contains product reviews from Amazon and Yelp. As
opposed to SPACE, FewSum is not aspect-oriented, and the reviews are typically
much shorter. For many of the products, the combined length of the reviews
falls below 900 words, enabling direct summarization with GPT-3.5. FewSum
provides three gold summaries for only a small portion of the products. Across
these two splits, it provides gold summaries for 32 and 70 products in
the Amazon and Yelp categories respectively.
We list SPACE and FewSum statistics in Table 3.
### 4.2 Automatic Eval: ROUGE and BERTScore
We compute ROUGE Lin (2004) and BERTScore Zhang et al. (2020b) and show
results in Table 2.
Figure 3: Example of errors made by GPT-3.5. The viewpoint of a single
reviewer is wrongly expressed as that of a “few reviewers” and generalized to
the hotel not being centrally located, contradicting other reviews (blue).
The BERTScores for AceSum, as well as all GPT-3-related models, are in the
range of $88-90$, and differences in performance are unclear. AceSum achieves
the highest ROUGE-1 as well as ROUGE-L scores by far, and is followed by TQG
and QG. QFSumm does particularly poorly on the ROUGE scores. On FewSum, the
scores are all in the same ballpark apart from FS, making it difficult to
draw conclusions; FS itself achieves the highest ROUGE-L as well as
BERTScore. The GPT-3.5 systems perform slightly better than QFSumm on the
Yelp split, which we attribute to the smaller combined review lengths of Yelp.
We argue that these scores are not informative and that they are at times
unreliable when comparing the quality of two summaries. ROUGE and BERTScore
have been critiqued in prior work as inaccurate indicators of summary quality
Fabbri et al. (2021); Liu and Liu (2008); Cohan and Goharian (2016),
particularly as the fluency and coherence of the outputs increase to near-
human levels Goyal et al. (2022). Figure 2 demonstrates this with an
example. $n$-gram methods penalize GPT-3.5 for generating summaries in a
slightly different style: “ _The reviewers found the rooms to be clean_ ”
instead of “ _The rooms were clean_.” Similarly, the extractive nature of
QFSumm drives it to produce sentences like “ _We were served warm cookies on
arrival_.” While its selections are factual, they are not completely
representative of the review opinions themselves. The actual mistakes in our
systems include over-generalization and misrepresentation of viewpoints of
popularities thereof, which are not well-represented by matching $n$-grams.
Figure 3 shows an example of such errors. We conclude that metrics
benchmarking the summaries on different dimensions are necessary.
Pipeline | Factuality | Representativeness | Faithfulness | Relevance
---|---|---|---|---
TCG | 2.85 | 2.99 | 4.86 | 4.60
TQG | 2.86 | 2.95 | 4.83 | 4.32
QG | 2.88 | 2.97 | 4.79 | 3.93
A | 3.00 | 2.96 | 4.91 | 3.62
Q | 3.00 | 3.00 | 4.88 | 2.30
Maximum | 3 | 3 | 5 | 5
Fleiss-Kappa | 0.64 | 0.49 | 0.49 | 0.64
Table 4: Results of Human Evaluation on the SPACE dataset. Colors indicate moderate (light green) and substantial (darker green) agreement, respectively.

Pipeline | Factuality | Representativeness | Faithfulness | Relevance
---|---|---|---|---
G | 2.63 | 2.89 | 4.68 | 4.98
CG | 2.72 | 2.95 | 4.73 | 4.98
QG | 2.68 | 2.90 | 4.63 | 4.98
Q | 2.96 | 2.98 | 4.52 | 4.92
FS | 2.74 | 2.32 | 4.30 | 4.90
Maximum | 3 | 3 | 5 | 5
Fleiss-Kappa | 0.26 | 0.53 | 0.19 | 0.15
Table 5: Results of Human Evaluation on the FewSum dataset. Colors indicate
moderate (light green), fair (yellow) and slight (red) agreement respectively.
### 4.3 Human Evaluation
For a more reliable view of performance, we manually evaluated the summaries
of the pipelines TCG, TQG, AceSum (A) and QFSumm (Q) for 50 randomly chosen
_(hotel, aspect)_ pairs from the SPACE dataset, and G, CG, QG, Q and FS for 50
randomly chosen products (25 each from the _Amazon_ and _Yelp_ splits) from
the FewSum dataset. The axes of evaluation were the attributes established in
Subsection 2.1, namely _Factuality_ , _Faithfulness_ and _Relevance_. In
addition, as we often observed our systems produce summaries of the form “
_While most reviewers thought …, some said …_ ” to highlight contrasting
opinions, we also evaluate on _Representativeness_. Representativeness is a
more restricted form of Faithfulness that measures if the more popular opinion
was exhibited between two opposing ones. For instance, if four people found
the rooms of a hotel clean but two did not, the summary is expected to convey
that the former was the more popular opinion.
The three authors of this paper independently rated the summaries along the
above axes on Likert scales of 1-3 for both variations of factuality, and 1-5
for faithfulness and relevance. The average scores, along with the
Krippendorff’s Alpha and Fleiss Kappa scores (measuring consensus among the
raters) are presented in Table 4. Among the compared pipelines, TCG improves
upon TQG and QG substantially in terms of relevance. All three have a very
high score under Factuality, showing that GPT-3.5 models seldom make blatantly
wrong statements. Viewpoints selected by QFSumm are generally faithful, and
factual due to their extractive nature, but may include irrelevant statements.
We list the corresponding metrics for FewSum in Table 5. CG tends to perform
well, but the consensus is low for Faithfulness and Relevance. FS performs
poorly across the board due to hallucinated statements harming its Factuality
and bad viewpoint selection resulting in low Faithfulness. The lack of aspects
may contribute to the low agreement on FewSum; dimensions such as Relevance
may be considered underconstrained, and thus more difficult to agree upon in
this setting Kryscinski et al. (2019).
We remark that all of our systems achieve close to the maximum scores; the
small differences should not obscure the fact that the pipelines all
demonstrate very strong performance across the board.
## 5 New Tools for Evaluation and Analysis
Enabling fast automatic evaluation of systems will be crucial for the
development of future opinion summarizers. Furthermore, when a large number of
reviews are presented to a system, it may be nearly impossible even for a
dedicated evaluator to sift through all of them to evaluate a summary. We
investigate the question of how we can automate this evaluation using existing
tools.
One of the areas where automatic evaluation may help is faithfulness. Since
faithfulness represents the degree to which a system is accurate in
representing general consensus, it requires measuring the proportion of
reviews supporting each claim of a summary. A viewpoint with larger support is
more popular and, consequently, more faithful. Our key idea is to use
entailment as a proxy for support. Past work Goyal and Durrett (2021); Laban
et al. (2022) has used Natural Language Inference (NLI) models to assess
summary factuality by computing entailment scores between pairs of sentences.
However, the summaries produced by GPT-3.5 and related pipelines often consist
of compound sentences that contrast two viewpoints. In addition, GPT-3.5
prefers to say “ _The reviewers said…_ ” instead of directly stating a
particular viewpoint. We found these artifacts to impact the entailment model.
We use a split-and-rephrase step to split these sentences into atomic value
judgments by prompting GPT-3.5 as shown in Figure 4. We then use the zero-shot
entailment model from SummaC Laban et al. (2022) to compute the entailment
scores for these atomic value judgments. Similar to the approach in the SummaC
paper, we observe that a summary statement is factual when strongly entailed
by at least one sentence and thus select the top entailment score of each
summary sentence as its factuality score, and aggregate this score to produce
per-system numbers. The choice of the model as well as that of using GPT-3.5
for the split-and-rephrase step are explained further in Appendix B, and the
relevant metric of abstractiveness is discussed in Appendix D.
Figure 4: Per-sentence entailment scores are calculated by taking the maximum
among the various candidates.
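The per-sentence maximum and per-summary average of Figure 4 can be sketched as follows. The entailment scorer here is a crude word-overlap stand-in for the SummaC-ZS model, used only to make the aggregation concrete:

```python
def toy_entail(premise, hypothesis):
    # Crude stand-in for an NLI entailment score in [0, 1]:
    # fraction of hypothesis words that appear in the premise.
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def top_score(review_sents, summary_sent, entail=toy_entail):
    # Max entailment of one (split) summary sentence over all review sentences.
    return max(entail(r, summary_sent) for r in review_sents)

def avg_top_score(review_sents, summary_sents, entail=toy_entail):
    # Per-summary average of the per-sentence top scores.
    return sum(top_score(review_sents, s, entail)
               for s in summary_sents) / len(summary_sents)
```

Swapping `toy_entail` for a real NLI scorer recovers the metric described above: a statement counts as factual when strongly entailed by at least one review sentence.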
A system could potentially game this metric by producing relatively “safe”
statements (like _most reviewers found the rooms clean_). We therefore also
want to evaluate genericity.
### 5.1 Terminology
The set of sentences in the summary of the reviews of a hotel
$h\in\mathcal{H}$ w.r.t aspect $a\in\mathcal{A}$ is called $S_{h,a}$. Passing
these to the split-and-rephrase step gives us a set of split sentences
$Z_{h,a}$. For any two sentences $s_{1},s_{2}$ we denote the entailment score
of $s_{2}$ with respect to $s_{1}$ according to the SummaC-ZS Laban et al.
(2022) model by $e(s_{1},s_{2})\in[-1.0,1.0]$. A score of $1.0$ indicates
perfect entailment while that of $-1.0$ denotes complete contradiction.
Finally, we denote by $N_{n}(s)$ the (multi-)set of $n$-grams (with
multiplicity) of the sentence $s$. In particular, $N_{1}(s)$ is the set of
words in the sentence $s$.
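The multiset notation $N_{n}(s)$ corresponds directly to a `Counter` over $n$-gram tuples, so multiplicities are preserved; a minimal sketch:

```python
from collections import Counter

def ngrams(sentence, n):
    """N_n(s): the multiset of n-grams of a sentence, with multiplicity."""
    words = sentence.split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
```

For instance, `ngrams("the room the room", 2)` records the bigram `("the", "room")` twice, and `ngrams(s, 1)` gives the word multiset $N_{1}(s)$.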
### 5.2 Evaluation of Entailment
We first evaluate whether entailment is effective at identifying the support
of the mentioned viewpoints by human evaluation. The three authors of this
paper marked 100 random pairs (50 each from SPACE and FewSum) of sentences and
assertions entailed with a score above $0.5$ on the scale of $0-2$. Here, $2$
indicates that the assertion is completely supported, and $1$ that the
assertion’s general hypothesis is supported, but some specifics are left out.
The average score of the selection across the raters was 1.88 with a Fleiss
Kappa consensus score of 0.56 (moderate agreement). Many of the lower-rated
entailed sentences also had lower entailment scores (closer to 0.5). The score
illustrates that the precision of the entailment approach is high.
Table 6: Percentages of split-and-rephrased sentences binned according to
support sizes, for all compared pipelines. The threshold used is $\tau=0.75$.
### 5.3 Faithfulness: Support Set Sizes
We propose an entailment metric for determining how the viewpoints in the
summary reflect the consensus of the input. We first compute per-sentence
entailment scores as shown in Figure 4. For each sentence of the split-and-
rephrased summary, we measure the number of review sentences that entail it
with a score greater than a threshold $\tau=0.75$ (the “support” of the
sentence). This threshold was determined based on manual inspection. We bin
these counts into $0,1,2-4$ and $5+$. The frequencies of the bins are
converted to percentages and listed in Table 6. FS performs poorly due to
presenting hallucinated viewpoints, and repeated summarization slightly hurts
CG on the Amazon split. G and CG outperform other methods on the Yelp split,
likely because it has fewer reviews per product than Amazon, making it much
likelier for the combined reviews of a product to fit in a manageable number
of words. The “pure” GPT-3.5 systems generally perform well on the short
review sets of FewSum. As we move to the long combined lengths of the reviews
on SPACE, however, the pure GPT-3.5 pipelines fall behind in terms of
faithfulness. Repeated summarization causes a major dip from First-TCG to TCG,
indicating that this is not effective for long-form inputs. QG outperforms
other GPT-3-related pipelines by a large margin.
**SPACE**

Pipeline | Average Top Score
---|---
Q | 91.59
A | 92.49
First-TCG | 84.96
TCG | 82.06
QG | 87.50
TQG | 84.68
First-RG | 81.54
RG | 79.85

**FewSum**

Pipeline | Average Top Score (Amazon) | Average Top Score (Yelp)
---|---|---
Q | 85.29 | 86.62
FS | 24.36 | 47.23
G | 65.81 | 68.59
QG | 67.63 | 65.04
First-CG | 68.34 | 69.86
CG | 66.43 | 68.58
Table 7: The average Top Score for each pipeline on the SPACE and FewSum
datasets.
As we saw in human evaluation, however, QG may include some irrelevant
viewpoints in this process. Abating this behavior by performing a topic-
clustering step first brings its numbers down to a level comparable with
First-TCG, which is still more faithful than the TCG pipeline. AceSum has the
largest number of statements with $5+$ supports on SPACE; however, as we
will see later, many of its summaries are very generic, and support for them
can be easily found among the large number of reviews. Q has the smallest
percentage of statements with no support because it is extractive.
### 5.4 Factuality: Top Score
As depicted in Figure 4, averaging the per-sentence top entailment scores
(first per-summary, then per-system) gives us the _Top Score_ metric. The average top
score is a proxy for factuality since true statements will typically be
strongly entailed by at least one sentence of the reviews. We list the
computed average top scores in Table 7. FS performs poorly on FewSum in terms
of Factuality. The numbers for other systems are similar, with QG and CG
performing best on the Amazon and Yelp splits. However, on the longer inputs
of SPACE, the differences in factuality become more apparent. In particular,
to reconcile similar but distinct viewpoints, repeated summarization leads to
a type of generalizing that hurts the factuality of TCG and RG. Among the
GPT-3.5 pipelines, QG performs the best, followed by TQG. TQG yet again
delivers performance comparable to First-TCG and therefore presents a
reasonable trade-off with some gains on factuality and increased relevance.
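The Top Score aggregation above admits a short sketch. As before, `entail` is a placeholder for the NLI scorer, and the input format (a list of summary/review sentence-list pairs) is an assumption made for illustration.

```python
def average_top_score(summaries, entail):
    """Top Score: for each summary sentence, take the maximum entailment
    score over all review sentences; average per summary, then across
    summaries (i.e., per system)."""
    per_summary = []
    for summary_sents, review_sents in summaries:
        tops = [max(entail(prem, hyp) for prem in review_sents)
                for hyp in summary_sents]
        per_summary.append(sum(tops) / len(tops))
    return sum(per_summary) / len(per_summary)

# Toy scorer: exact match counts as full entailment.
toy = lambda premise, hypothesis: 1.0 if premise == hypothesis else 0.0
pairs = [(["a", "b"], ["a", "c"]), (["c"], ["c"])]
print(average_top_score(pairs, toy))  # (0.5 + 1.0) / 2 = 0.75
```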
### 5.5 Genericity
Table 8: Semantic genericity based on entailment, along with the raw
percentage of scores above the threshold. The threshold used is $\tau=0.5$.
**SPACE**

Pipeline | Average IDF
---|---
Q | 12.00
A | 5.77
TCG | 8.40
QG | 6.93
TQG | 7.82
RG | 8.87

**FewSum**

Pipeline | Average IDF (Amazon) | Average IDF (Yelp)
---|---|---
Q | 4.38 | 4.33
FS | 3.16 | 3.26
G | 3.02 | 2.93
QG | 3.10 | 2.93
CG | 3.00 | 2.86
Table 9: Measurement of lexical genericity: average IDF (larger is better) for
the compared pipelines. The FewSum pipelines report lower average IDF ranges
due to the smaller total number of documents.
As mentioned before, we want to measure whether summaries contain largely
generic statements like _the service was helpful_ , which are likely to be
faithful and factual but not very useful to a user of a system.
We first focus on _semantic_ genericity, i.e. the use of statements generally
applicable to other products/services in the same class. On the other hand,
_lexical_ genericity involves the overuse of generic words and is tackled
next. Our approach to measuring semantic genericity employs the observation
that generic sentences from a summary are often widely applicable and thus
likely to be strongly entailed by statements from other summaries. We
calculate the similarity sim$(S,S^{\prime})$ of two sets of sentences using
the averaged top score, as Figure 4 shows. Similarly, we also measure the
fraction frac$(S,S^{\prime},\tau)$ of sentences whose top score exceeds a
threshold $\tau$. Equation 1 computes the average similarity score between
sentences that belong to two reviews by the same system but different (_hotel,
aspect_) pairs (normalizing by the number of pairs $N$). Equation 2 computes
the corresponding metric based on frac.
$$G=\frac{1}{N}\sum_{(h,a)\neq(h^{\prime},a^{\prime})}\texttt{sim}(Z_{h,a},Z_{h^{\prime},a^{\prime}})\qquad(1)$$

$$F_{\tau}=\frac{1}{N}\sum_{(h,a)\neq(h^{\prime},a^{\prime})}\texttt{frac}(Z_{h,a},Z_{h^{\prime},a^{\prime}},\tau)\qquad(2)$$
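Equations 1 and 2 can be sketched directly, with `sim` as the averaged top score and `frac` as the thresholded fraction. Iterating over ordered pairs of distinct keys is an assumption of this sketch, as is the `entail` placeholder for the NLI scorer.

```python
from itertools import permutations

def genericity_metrics(summaries, entail, tau=0.5):
    """Compute G (Eq. 1) and F_tau (Eq. 2) over all ordered pairs of
    summaries for distinct (hotel, aspect) keys. `summaries` maps
    (h, a) -> list of sentences; `entail(premise, hypothesis)` -> [0, 1]."""
    g_total, f_total, n = 0.0, 0.0, 0
    for key_a, key_b in permutations(summaries, 2):
        s, s_prime = summaries[key_a], summaries[key_b]
        # Top score of each sentence of S against the sentences of S'.
        tops = [max(entail(p, h) for p in s_prime) for h in s]
        g_total += sum(tops) / len(tops)                    # sim(S, S')
        f_total += sum(t > tau for t in tops) / len(tops)   # frac(S, S', tau)
        n += 1
    return g_total / n, f_total / n
```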
We report these two metrics in Table 8. On the short inputs of FewSum, all
GPT-3.5 pipelines give similar results, with FS being slightly less
generic. Moving to SPACE, however, the range of scores becomes much wider.
Forced to reconcile disparate opinions during repeated summarization, TCG and
RG produce generic summaries, although AceSum is the most generic. We note
that pre-extraction with QFSumm and Topic-wise clustering help QG and TQG
remain less generic.
To measure _lexical genericity_ , we use the sentences from all summaries on
the corresponding dataset as the set of documents to calculate an averaged
Inverse Document Frequency (IDF) of the summaries, with stopwords removed and
stemming applied. Since generic words are likely to occur more frequently and
therefore have a low IDF, a smaller score indicates higher genericity. The
scores calculated this way are listed in Table 9. As expected, QFSumm is
highly specific due to being extractive. We observe that AceSum generates
summaries that over-use generic words, in line with our prior observations. We
also note that pre-extraction with QFSumm helps with lexical genericity as it
did with semantic genericity. Finally, on FewSum, we observe that FS does
better than every other pipeline apart from Q. This bolsters our previous
claim that its low Factuality and Faithfulness scores were due to
hallucinated, but specific, viewpoints.
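The averaged-IDF computation can be sketched as below. This is a minimal version: stemming is omitted for brevity (the paper applies it alongside stopword removal), and the IDF variant used is one common choice, not necessarily the paper's exact formula.

```python
import math
import re

def average_idf(summary_sents, corpus_sents, stopwords=frozenset()):
    """Averaged IDF of a summary's content words, where each sentence
    from all summaries on the dataset is treated as one 'document'.
    Generic words occur in many documents, so they pull the average down."""
    def tokens(sent):
        return [w for w in re.findall(r"[a-z]+", sent.lower())
                if w not in stopwords]

    docs = [set(tokens(s)) for s in corpus_sents]
    n_docs = len(docs)

    def idf(word):
        df = sum(word in d for d in docs)  # document frequency
        return math.log(n_docs / df) if df else 0.0

    words = [w for s in summary_sents for w in tokens(s)]
    return sum(idf(w) for w in words) / len(words)
```

For example, scoring the summary "the cat" against the two-document corpus `["the cat", "the dog"]` gives "the" an IDF of 0 (it appears everywhere) and "cat" an IDF of log 2.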
### 5.6 Correlation with Human Judgments
Evaluation Axis | Entailment-Based Metric | ROUGE
---|---|---
Factuality | 0.36 | 0.05
Faithfulness | 0.29 | -0.03
Table 10: Spearman Correlation Coefficients of our metrics and ROUGE with
human judgments.
Our entailment-based approaches set out to measure Factuality and
Faithfulness; how well do these correlate with our human evaluation? We
compute Spearman’s rank correlation coefficient on the human-annotated SPACE
examples with the averaged annotator scores, as the consensus among rater
scores was high on that dataset. In particular, we use the average of the
Factuality scores among the raters as the net human score on Factuality on an
example and the mean score on Faithfulness as that for Faithfulness.
Correspondingly, we consider the Top Score metric as the automatic measurement
of Factuality and the percentage of statements with $3$ or more supports as
Faithfulness. We list the obtained Spearman correlation coefficients in Table
10. While there is room for stronger metrics, the fact that the introduced
metrics correlate with human judgments better than ROUGE provides an
encouraging signal that these target the factors of interest.
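For reference, Spearman's rank correlation is the Pearson correlation of the ranks; a standard library routine such as `scipy.stats.spearmanr` would normally be used, but a self-contained sketch (with average ranks for ties) looks like this:

```python
def spearman(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with tied values assigned their average rank."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        result = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            # Extend j to cover the whole group of tied values.
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
            for k in range(i, j + 1):
                result[order[k]] = avg_rank
            i = j + 1
        return result

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```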
## 6 Related work
#### Text Summarization
Historically, most work tackling text summarization has been _extractive_ in
nature Ku et al. (2006); Paul et al. (2010); Carenini et al. (2006); Angelidis
and Lapata (2018), with more recent work applying pre-trained extractive
systems to this task Zhong et al. (2020); Jia et al. (2020); Kwon et al.
(2021); Gu et al. (2022); Ahuja et al. (2022). Abstractive approaches Carenini
et al. (2006); Ganesan et al. (2010); Di Fabbrizio et al. (2014) to
summarizing reviews have become more successful in recent years Liu and Lapata
(2019a); Bražinskas et al. (2020b); Amplayo et al. (2021b); Isonuma et al.
(2021). We follow in this vein, capitalizing on the strength of GPT-3.5.
#### Multi-Stage Summarization
Most systems of both types are now end-to-end Liu and Lapata (2019b); Du et
al. (2022); Ahuja et al. (2022). However, multi-stage approaches Chen and
Bansal (2018); Li et al. (2021); Zhang et al. (2022) like ours have recently
shown great promise. For instance, Li et al. (2021) extracts relevant evidence
spans and then summarizes them to tackle long documents. Recursive
summarization has been explored in Wu et al. (2021) for book summarization,
but involved fine-tuning GPT-3.5 to the task. Other approaches such as the
mixture-of-experts re-ranking model Ravaut et al. (2022) can be considered as
a two-step approach where the combine function ranks and filters the outputs
of the first stage.
#### Evaluation Metrics
The domain of news summarization has recently seen interest in using
factuality/faithfulness for evaluation Scialom et al. (2021); Kryscinski et
al. (2020); Tang et al. (2023). In news, faithfulness and factuality are quite
similar, as news articles usually do not present incorrect information or
conflicting opinions. Opinion summarization is therefore quite distinct in
this regard, and a separate treatment of factuality and faithfulness is
sensible. For the same reason, although unified approaches to evaluating text
generation Deng et al. (2021); Zhong et al. (2022) are useful, more targeted
metrics are likely to be more informative for opinion summarization
specifically.
#### Aspect-Oriented Summarization
In addition to opinion summarization Amplayo et al. (2021a), aspect-oriented
summarization has also been explored in other domains of NLP Bahrainian et al.
(2022); Yang et al. (2022). However, as highlighted above, opinion
summarization differs from news summarization with respect to desired
characteristics, and this work focuses specifically on those issues.
## 7 Conclusion
In this work, we show that GPT-3.5-based opinion summarization produces highly
fluent and coherent reviews, but is not perfectly faithful to input reviews
and over-generalizes certain viewpoints. ROUGE is unable to capture these
factors accurately. We propose using entailment as a proxy for support and
develop metrics that measure the faithfulness, factuality, and genericity of
the produced summaries. Using these metrics, we explore the impact of two
approaches on controlling the size of the input via pre-summarization on two
opinion summarization datasets. With the reasonably sized inputs of FewSum,
GPT-3.5 and CG produce faithful and non-generic outputs. However, as we move
to long-form review summarization, the factuality and faithfulness of these
approaches drop. A pre-extraction step using QFSumm helps in this setting but
leads to generally shorter and more generic summaries; a topic clustering step
can then make summaries less generic and more relevant at a small cost to
faithfulness and factuality. We hope that our efforts inspire future
improvements to systems and metrics for opinion summary evaluation.
## Limitations
Our study here focused on the most capable GPT-3.5 model, text-davinci-002, at
the time the experiments were conducted. We believe that models like ChatGPT
and GPT-4, as well as those in the future, are likely to perform at least as
well as these, and if they improve further, the metrics we have developed here
will be useful in benchmarking that progress. However, significant further
paradigm shifts could change the distribution of errors in such a way that
certain of our factors (e.g., genericity) become less critical. In addition,
the latest iterations of GPT have a much greater input window size, which
helps them digest much larger swaths of text in one go and potentially makes
our pipelined approaches less necessary in certain settings.
Furthermore, the text-davinci-002 model is fine-tuned with data produced by
human demonstrations. The precise data used is not publicly available, so it
is difficult to use our results to make claims about what data or fine-tuning
regimen leads to what failure modes in these models.
Recent work has noted that language models may be susceptible to learning
biases from training data Sheng et al. (2019); Wallace et al. (2019); Shwartz
et al. (2020), and this phenomenon has also been observed for GPT-3.5 Lucy and
Bamman (2021). We did not stress test the models studied for biases and
furthermore only experimented on English-language data.
When properly used, the summarization models described in this paper can be
time-saving. However, as noted above, summary outputs may be factually
inconsistent with the input documents or not fully representative of the
input, and in such a case could contribute to misinformation. This issue is
present among all current abstractive models and is an area of active
research.
## Acknowledgments
This work was partially supported by NSF CAREER Award IIS-2145280, a grant
from Open Philanthropy, a gift from Salesforce, Inc., and a gift from Adobe.
Thanks as well to the anonymous reviewers for their helpful comments.
## References
* Ahuja et al. (2022) Ojas Ahuja, Jiacheng Xu, Akshay Gupta, Kevin Horecka, and Greg Durrett. 2022. ASPECTNEWS: Aspect-oriented summarization of news documents. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6494–6506, Dublin, Ireland. Association for Computational Linguistics.
* Amplayo et al. (2021a) Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021a. Aspect-controllable opinion summarization. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6578–6593, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Amplayo et al. (2021b) Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021b. Unsupervised opinion summarization with content planning. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 35(14):12489–12497.
* Angelidis and Lapata (2018) Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3675–3686, Brussels, Belgium. Association for Computational Linguistics.
* Bahrainian et al. (2022) Seyed Ali Bahrainian, Sheridan Feucht, and Carsten Eickhoff. 2022. NEWTS: A corpus for news topic-focused summarization. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pages 493–503, Dublin, Ireland. Association for Computational Linguistics.
* Bražinskas et al. (2020a) Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020a. Few-shot learning for opinion summarization. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4119–4135, Online. Association for Computational Linguistics.
* Bražinskas et al. (2020b) Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020b. Unsupervised opinion summarization as copycat-review generation. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5151–5169, Online. Association for Computational Linguistics.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, pages 1877–1901. Curran Associates, Inc.
* Carenini et al. (2006) Giuseppe Carenini, Raymond Ng, and Adam Pauls. 2006. Multi-document summarization of evaluative text. In _11th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 305–312, Trento, Italy. Association for Computational Linguistics.
* Chen and Bansal (2018) Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 675–686, Melbourne, Australia. Association for Computational Linguistics.
* Cohan and Goharian (2016) Arman Cohan and Nazli Goharian. 2016. Revisiting summarization evaluation for scientific articles. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , pages 806–813, Portorož, Slovenia. European Language Resources Association (ELRA).
* Deng et al. (2021) Mingkai Deng, Bowen Tan, Zhengzhong Liu, Eric Xing, and Zhiting Hu. 2021. Compression, transduction, and creation: A unified framework for evaluating natural language generation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7580–7605, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Di Fabbrizio et al. (2014) Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multi-document summarization of opinions in reviews. In _Proceedings of the 8th International Natural Language Generation Conference (INLG)_ , pages 54–63, Philadelphia, Pennsylvania, U.S.A. Association for Computational Linguistics.
* Du et al. (2022) Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM: General language model pretraining with autoregressive blank infilling. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 320–335, Dublin, Ireland. Association for Computational Linguistics.
* Erkan and Radev (2004) Günes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. _J. Artif. Int. Res._ , 22(1):457–479.
* Fabbri et al. (2021) Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. _Transactions of the Association for Computational Linguistics_ , 9:391–409.
* Ganesan et al. (2010) Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In _Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)_ , pages 340–348, Beijing, China. Coling 2010 Organizing Committee.
* Gao et al. (2021) Yanjun Gao, Ting-Hao Huang, and Rebecca J. Passonneau. 2021. ABCD: A graph framework to convert complex sentences to a covering set of simple sentences. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 3919–3931, Online. Association for Computational Linguistics.
* Gehrmann et al. (2022) Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2022. Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text. _arXiv preprint arXiv:2202.06935_.
* Gehrmann et al. (2018) Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics.
* Goyal and Durrett (2021) Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1449–1462, Online. Association for Computational Linguistics.
* Goyal et al. (2022) Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News Summarization and Evaluation in the Era of GPT-3. _arXiv_.
* Gu et al. (2022) Nianlong Gu, Elliott Ash, and Richard Hahnloser. 2022. MemSum: Extractive summarization of long documents using multi-step episodic Markov decision processes. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6507–6522, Dublin, Ireland. Association for Computational Linguistics.
* He et al. (2020) Junxian He, Wojciech Kryściński, Bryan McCann, Nazneen Rajani, and Caiming Xiong. 2020. CTRLsum: Towards Generic Controllable Text Summarization. _arXiv_.
* Isonuma et al. (2021) Masaru Isonuma, Junichiro Mori, Danushka Bollegala, and Ichiro Sakata. 2021. Unsupervised abstractive opinion summarization by generating sentences with tree-structured topic guidance. _Transactions of the Association for Computational Linguistics_ , 9:945–961.
* Jia et al. (2020) Ruipeng Jia, Yanan Cao, Hengzhu Tang, Fang Fang, Cong Cao, and Shi Wang. 2020. Neural extractive summarization with hierarchical attentive heterogeneous graph network. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 3622–3631, Online. Association for Computational Linguistics.
* Kim et al. (2021) Joongwon Kim, Mounica Maddela, Reno Kriz, Wei Xu, and Chris Callison-Burch. 2021\. BiSECT: Learning to split and rephrase sentences with bitexts. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6193–6209, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Kryscinski et al. (2019) Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 540–551, Hong Kong, China. Association for Computational Linguistics.
* Kryscinski et al. (2020) Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 9332–9346, Online. Association for Computational Linguistics.
* Ku et al. (2006) Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion extraction, summarization and tracking in news and blog corpora. In _AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs_.
* Kwon et al. (2021) Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, and Manabu Okumura. 2021. Considering nested tree structure in sentence extractive summarization with pre-trained transformer. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4039–4044, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Laban et al. (2022) Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. _Transactions of the Association for Computational Linguistics_ , 10:163–177.
* Ladhak et al. (2022) Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive? on mitigating the faithfulness-abstractiveness trade-off in abstractive summarization. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1410–1421, Dublin, Ireland. Association for Computational Linguistics.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7871–7880, Online. Association for Computational Linguistics.
* Li et al. (2021) Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape, Yashar Mehdad, Sonal Gupta, and Marjan Ghazvininejad. 2021. EASE: Extractive-abstractive summarization end-to-end using the information bottleneck principle. In _Proceedings of the Third Workshop on New Frontiers in Summarization_ , pages 85–95, Online and in Dominican Republic. Association for Computational Linguistics.
* Lin (2004) Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In _Text Summarization Branches Out_ , pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
* Liu and Liu (2008) Feifan Liu and Yang Liu. 2008. Correlation between ROUGE and human evaluation of extractive meeting summaries. In _Proceedings of ACL-08: HLT, Short Papers_ , pages 201–204, Columbus, Ohio. Association for Computational Linguistics.
* Liu and Lapata (2019a) Yang Liu and Mirella Lapata. 2019a. Text summarization with pretrained encoders. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
* Liu and Lapata (2019b) Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3730–3740, Hong Kong, China. Association for Computational Linguistics.
* Loper and Bird (2002) Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit.
* Lucy and Bamman (2021) Li Lucy and David Bamman. 2021. Gender and representation bias in GPT-3 generated stories. In _Proceedings of the Third Workshop on Narrative Understanding_ , pages 48–55, Virtual. Association for Computational Linguistics.
* Miller (1994) George A. Miller. 1994. WordNet: A lexical database for English. In _Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994_.
* Paul et al. (2010) Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In _Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing_ , pages 66–76, Cambridge, MA. Association for Computational Linguistics.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1532–1543, Doha, Qatar. Association for Computational Linguistics.
* Ravaut et al. (2022) Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022. SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 4504–4524, Dublin, Ireland. Association for Computational Linguistics.
* Saunders et al. (2022) William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan Leike. 2022. Self-critiquing models for assisting human evaluators. _arXiv_.
* Scialom et al. (2021) Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang, and Patrick Gallinari. 2021. QuestEval: Summarization asks for fact-based evaluation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6594–6604, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* See et al. (2017) Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1073–1083, Vancouver, Canada. Association for Computational Linguistics.
* Sheng et al. (2019) Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3407–3412, Hong Kong, China. Association for Computational Linguistics.
* Shwartz et al. (2020) Vered Shwartz, Rachel Rudinger, and Oyvind Tafjord. 2020. “you are grounded!”: Latent name artifacts in pre-trained language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6850–6861, Online. Association for Computational Linguistics.
* Song et al. (2018) Kaiqiang Song, Lin Zhao, and Fei Liu. 2018. Structure-infused copy mechanisms for abstractive summarization. In _Proceedings of the 27th International Conference on Computational Linguistics_ , pages 1717–1729, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
* Tang et al. (2023) Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng Xu, Semih Yahvuz, Wojciech Kryściński, Justin F. Rousseau, and Greg Durrett. 2023\. Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics_.
* Wallace et al. (2019) Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2153–2162, Hong Kong, China. Association for Computational Linguistics.
* Wu et al. (2021) Jeff Wu, Long Ouyang, Daniel M. Ziegler, Nisan Stiennon, Ryan Lowe, Jan Leike, and Paul Christiano. 2021. Recursively Summarizing Books with Human Feedback. _arXiv_.
* Yang et al. (2022) Xianjun Yang, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Xiaoman Pan, Linda Petzold, and Dong Yu. 2022. Oasum: Large-scale open domain aspect-based summarization.
* Zhang et al. (2020a) Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020a. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In _Proceedings of the 37th International Conference on Machine Learning_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 11328–11339. PMLR.
* Zhang et al. (2020b) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with BERT. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net.
* Zhang et al. (2022) Yusen Zhang, Ansong Ni, Ziming Mao, Chen Henry Wu, Chenguang Zhu, Budhaditya Deb, Ahmed Awadallah, Dragomir Radev, and Rui Zhang. 2022. Summn: A multi-stage summarization framework for long input dialogues and documents. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1592–1604, Dublin, Ireland. Association for Computational Linguistics.
* Zhong et al. (2020) Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 6197–6208, Online. Association for Computational Linguistics.
* Zhong et al. (2022) Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 2023–2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Figure 5: Aspects of summarization such as verbosity or the format of output are affected by the specific wording of the prompt. We use the leftmost prompt, “ _Summarize what the reviewers said._ ”

Figure 6: The top 5 supporting and weakening sentences from the reviews for the statement “ _The hotel is situated close to restaurants and shops_ ” as found by the Conv SummaC model. The corresponding entailment scores are included in parentheses. We see that the scores are very close to each other and that the “weakening” statements do not weaken the statement at all. These issues led us to use the zero-shot model instead.

Figure 7: The scores of three statements with respect to a set of sentences, highlighting the issues with directly using the model output to compute entailment scores. Scores rounded to three decimal places are included before the corresponding sentences, with important lines highlighted in color. We note that quoting a proposition as said by someone else or having multiple propositions in the same sentence serves to cloud entailment scores.
## Appendix A Pipeline Details
### A.1 Details of the Infrastructure, Models, and Datasets Used
#### Computational Resources
All experiments were run on a machine equipped with an Intel Xeon W-2123 CPU and a TITAN RTX GPU with 24 GB of memory. We estimate the total computational GPU budget to be roughly 100 GPU-hours.
#### Model Sizes
QFSumm Ahuja et al. (2022) is a fine-tuned version of BERT and therefore has
110M parameters. The FewSum model from Bražinskas et al. (2020a) has 25.1M
parameters including the plug-in network. AceSum Amplayo et al. (2021a) has a
combined total of 142M parameters between the Controller Induction Model and
Opinion Summarization Model. We use the VitC variant of the entailment model
SummaC-ZS Laban et al. (2022), which relies on the ALBERT-xlarge architecture
with 60M parameters. For all models, we used the default parameters as
reported in Ahuja et al. (2022), Bražinskas et al. (2020a), Amplayo et al.
(2021a), and Laban et al. (2022). Consequently, no hyperparameter search was
necessary. All models have been publicly released under the MIT License on
GitHub by the respective authors.
#### Datasets and Evaluation
Both the SPACE and FewSum datasets consist of reviews in English. The former
consists of reviews of hotels, and the latter product reviews from Amazon and
service reviews from Yelp. We are using pre-existing datasets that are
standard in opinion summarization. Through our human evaluation, we did not
see any personal identifying information or offensive content in the reviews
we assessed. All of our human evaluation experiments were performed once by the authors, and we report the Krippendorff’s Alpha and Fleiss’ Kappa scores as measurements of consensus. We used ROUGE with the default settings (the rouge.properties file at https://github.com/kavgan/ROUGE-2.0). Where lemmatization was needed, we used the WordNet Miller (1994) lemmatizer from NLTK Loper and Bird (2002). Sentence splitting was done using the sent_tokenize() function of NLTK.
### A.2 Details of the Configurations and Prompts
Here we provide more details of the configuration and/or prompts used for
various models. Below, GPT-3.5 refers to the text-davinci-002 model.
#### QFSumm and QFSumm-long (Q)
QFSumm allows one to specify the number $n$ of sentences to extract from the
reference text to shape into a summary. We use $n=3$ (the default setting) for
QFSumm (summarizer) and $n=35$ for QFSumm-long (extractor). On the SPACE
dataset, we use the aspect-specific keywords from Ahuja et al. (2022) to pass
to the model. On the FewSum dataset, however, the set of relevant keywords may
be drastically different across examples. Therefore, for each product, we pass
$5$ randomly chosen reviews to GPT-3.5 with the prompt consisting of the
reviews and the directive “ _Output up to eight comma-separated keywords that
capture these reviews most saliently:_ ”. The produced keywords are then used
with QFSumm to summarize the reviews.
#### GPT-3.5 Topic Clustering (T)
The prompt we use is “ _Describe the topic of each sentence in one word_ ”,
followed by three examples and then the sentence whose topic is to be
determined. We then map the produced words to their corresponding normalized
GloVe Pennington et al. (2014) vectors, which are then mapped to the closest
aspects in terms of L2 distance. This is functionally equivalent to using
cosine similarity as the vectors are normalized.
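As a small, hypothetical illustration of this mapping step (toy two-dimensional unit vectors stand in for the real normalized GloVe embeddings), minimizing L2 distance over unit vectors selects the same aspect as maximizing cosine similarity:

```python
import math

# Toy stand-ins for normalized GloVe vectors; the real pipeline uses
# higher-dimensional embeddings of the produced topic words and aspect names.
ASPECTS = {"rooms": (1.0, 0.0), "food": (0.0, 1.0)}

def nearest_aspect(word_vec, aspect_vecs=ASPECTS):
    """Map a normalized word vector to the closest aspect by L2 distance.

    For unit vectors, ||u - v||^2 = 2 - 2 (u . v), so minimizing the L2
    distance is equivalent to maximizing the cosine similarity.
    """
    def l2(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(aspect_vecs, key=lambda name: l2(word_vec, aspect_vecs[name]))
```

For instance, `nearest_aspect((0.8, 0.6))` returns `"rooms"`, the same aspect cosine similarity would pick.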
#### GPT-3.5 Chunking (C)
We strive for the length of the chunks (in sentences) to be both as close to
each other and to 30 as possible; thus, when there are $l$ sentences total to
be chunked, we take $c=\lceil\frac{l}{30}\rceil$ to be the number of chunks,
and allocate $\lfloor\frac{l}{c}\rfloor$ sentences to each chunk (except the
last one, which may have fewer).
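A minimal sketch of this allocation (a hypothetical helper, not the paper's actual code). We hand each chunk $\lceil l/c\rceil$ sentences so that only the last chunk can come up short, matching the stated behavior that the last chunk may have fewer:

```python
import math

def chunk_sentences(sentences, target=30):
    """Split `sentences` into c = ceil(l / target) near-equal chunks.

    Each chunk gets ceil(l / c) sentences, so only the last chunk may be
    shorter than the others.
    """
    l = len(sentences)
    if l == 0:
        return []
    c = math.ceil(l / target)      # number of chunks
    size = math.ceil(l / c)        # sentences per chunk
    return [sentences[i:i + size] for i in range(0, l, size)]
```

For 95 sentences this yields four chunks of sizes 24, 24, 24, and 23.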
#### Review Stratification (R)
If a cluster’s length exceeds GPT-3.5’s upper limit at this stage, it is
truncated to the maximum number of sentences that fit.
#### GPT-3.5 (G)
When used as a summarizer, we feed the penultimate set of sentences to GPT-3.5 with the prompt “ _Summarize what the X said of the Y:_ ”, where X is either “ _reviewers_ ” or “ _accounts_ ” based on whether GPT-3.5 chunking was used so far, and Y is the aspect being summarized (SPACE) or just “ _Product_ ” (FewSum).
The preamble is either “Here’s what some reviewers said about a hotel:” or
“Here are some accounts of what some reviewers said about the hotel” in the
case of SPACE. The word “ _hotel_ ” is replaced by “ _product_ ” for FewSum.
## Appendix B Entailment and Decomposition
In line with our motivation, we would like to be able to use an NLI (Natural
Language Inference) model to retrieve entailment scores of the produced
summaries with respect to the input reviews. We tested several approaches
including BERTScore, due to it being trained on entailment/contradiction
pairs, but finally settled on using the zero-shot model from SummaC Laban et
al. (2022) to produce the entailment scores. SummaC is already becoming a
standard evaluation tool for summarization factuality. We chose to forgo the trained “Conv” SummaC model as we found that it did not generalize well to the kind of data we were working with. Specifically, two common issues were that (1) the range of scores assigned to the sentences from the reviews was very small, and (2) sometimes (especially for the most weakening statements) the scores assigned to the sentences seemed arbitrary and did not make much sense. In comparison, the zero-shot model had neither of these issues. These issues are highlighted in Figure 6.
Further, a proposition X is typically not judged by models to entail
statements of the form “ _The reviewers said X_ ”, or “ _X and Y_ ”, where _Y_
is another proposition. Accordingly, the entailment scores are not very high
for these two cases. We highlight this in Figure 7. Thus, we decide to split
and rephrase all sentences of the produced summary to simple value
propositions for all entailment-related metrics. Note that here rephrasing
also includes removing any attribution such as “ _The guests said…_ ”. We
considered several models to this end, including BiSECT Kim et al. (2021) and
ABCD Gao et al. (2021), but found several common issues with all of them:
* •
The split sentences maintained the words from the original sentences, so a
sentence such as “ _The food was received well but it was served late_ ” would
have one output part as “ _It was served late_ ”, which requires a round of
entity disambiguation to follow the split-and-rephrase step.
* •
These models do not remove attribution of viewpoints as we would like.
* •
A statement such as “ _I liked the setting of the movie but not its cast_ ”
produces one of the outputs as “ _Not its cast_ ”, which does not make any
sense by itself.
Thus, we utilize GPT-3.5 to perform the split-and-rephrase task, with few shot
prompting used to illustrate the removal of attribution and other desired
characteristics. We also experimented with performing splitting and rephrasing as separate steps and found no significant difference in the outputs or their quality. We utilize the split-and-rephrased sentences for all of the automatic
metrics that involve entailment of any sort.
Table 11: Complexity as measured by the percentage of contrasting sentences.
Figure 8: Abstractiveness as measured by the percentage of novel $n$-grams
when compared with the source reviews
## Appendix C Measuring Complexity
One of the challenges of opinion summarization is that sentences may contrast
opinions: “ _Most reviewers liked the service, but there were a few complaints
about sluggish response times._ ” We quantify the percentage of simple and
contrasting statements in the model outputs since it is subtly related to the
extent of expression of opposing viewpoints. We use the original (non-split)
sentences for this purpose and classify a sentence as contrasting if it
contains one or more words from the set $\mathcal{K}=$ {’while’, ’but’,
’though’, ’although’, ’other’, ’others’, ’however’}, as Equation 3 depicts. We
present these percentages in Table 11.
$C=\frac{\sum\limits_{h\in\mathcal{H},a\in\mathcal{A}}\sum\limits_{s\in
S_{h,a}}\mathbbm{1}(N_{1}(s)\cap\mathcal{K}\neq\emptyset)}{\sum\limits_{h\in\mathcal{H},a\in\mathcal{A}}|S_{h,a}|}$
(3)
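A direct reading of Equation 3 can be sketched as follows (naive punctuation stripping and whitespace tokenization are assumptions standing in for whatever tokenizer was actually used):

```python
# The keyword set K from Equation 3.
K = {'while', 'but', 'though', 'although', 'other', 'others', 'however'}

def is_contrasting(sentence):
    """A sentence is contrasting if its unigram set N1(s) intersects K."""
    cleaned = sentence.lower().replace(',', ' ').replace('.', ' ')
    return bool(set(cleaned.split()) & K)

def complexity(summaries):
    """Fraction of contrasting sentences over all (non-split) summary sentences."""
    sentences = [s for summ in summaries for s in summ]
    return sum(is_contrasting(s) for s in sentences) / len(sentences)
```

For example, the two-sentence summary ["Most reviewers liked the service, but a few complained.", "The rooms were clean."] has complexity 0.5, since only the first sentence contains a keyword from $\mathcal{K}$.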
We note that AceSum produces the smallest percentage of contrasting
statements. We see that topic-wise clustering pushes up the number of
contrasting statements for QG. We hypothesize that this is because, when bringing together statements with the same topics in a cluster, two opposing statements are likelier to fall into the same chunk. In cases where two
opposing statements fall into different chunks, say X and Y, the chunks are
likely to each contain statements similar to others in the same chunk. Thus,
the summaries of those chunks are likely to be highly contrasting and thus
increase the above measure even more for the final stage, as is observed above
for TCG.
Figure 9: Average Top Score vs. Abstractiveness on the SPACE dataset.
## Appendix D Abstractiveness
We further investigate how the choice of the pipeline affects abstractiveness.
To measure this, we calculate the percentage of $n$-grams in the summaries
that do not appear in the input reviews for $n\in\\{3,4,5\\}$. For this, we
use the original (non-split) sentences from the output summaries. The results
are tabulated in Table 8.
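The novel $n$-gram percentage can be computed as sketched below (whitespace tokenization is an assumption; the exact tokenization behind the reported numbers may differ):

```python
def ngrams(tokens, n):
    """The set of n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_pct(summary, source, n=3):
    """Percentage of summary n-grams that never occur in the source reviews."""
    summ_ngrams = ngrams(summary.lower().split(), n)
    if not summ_ngrams:
        return 0.0
    src_ngrams = ngrams(source.lower().split(), n)
    return 100.0 * len(summ_ngrams - src_ngrams) / len(summ_ngrams)
```

For example, `novel_ngram_pct("the room was clean", "the room was very clean")` is 50.0: of the two summary trigrams, only "room was clean" is absent from the source.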
Since QFSumm is a purely extractive model, it is no surprise that Q has low abstractiveness. The numbers are non-zero due to quirks in QFSumm's sentence splitting, which leads to some partial sentences ending up next to each other. The next stand-out is that A has very low abstractiveness. This is in line with our observation that even though AceSum is abstractive, it tends to produce highly generic observations such as “ _The rooms were clean_ ”, which very likely appear almost verbatim in some user reviews. We also observe that
QG has a relatively low abstractiveness and that topic clustering drives up
abstractiveness. We suspect that the above is a result of GPT-3.5 simply
mashing together some sentences when presented with chunks containing highly
disparate sentences (since it is hard to find a common thread among them),
which promotes extraction over abstraction. Another observation is that multi-
GPT-3.5 pipelines (TCG and RG) are more abstractive than single-GPT-3.5 ones
since there are two rounds of abstraction as opposed to one. All the
GPT-3.5-derived pipelines are highly abstractive in the case of FewSum, and
slightly more so than FS. This is unsurprising since the combined length of
the reviews in the case of FewSum is much smaller when compared to SPACE, and
therefore there are relatively fewer propositions to compress into general
statements. Motivated by Ladhak et al. (2022), we display the line graph of
the average Top Score vs. $3$-gram Abstractiveness for the SPACE dataset in
Figure 9. The trio of QG, TQG, and TCG define the best frontier on the
Factuality-Abstractiveness tradeoff, followed by RG, then A and Q.
# Local-global principle and integral Tate conjecture for certain varieties
Zhiyu Tian Beijing International Center for Mathematical Research
Peking University
100871, Beijing, China<EMAIL_ADDRESS>
###### Abstract.
We give a geometric criterion to check the validity of integral Tate
conjecture for one cycles on separably rationally connected fibrations over a
curve, and to check that the Brauer-Manin obstruction is the only obstruction
to local-global principle for zero cycles on a separably rationally connected
variety defined over a global function field.
We prove that the Brauer-Manin obstruction is the only obstruction to local-
global principle for zero cycles on all geometrically rational surfaces
defined over a global function field, and to Hasse principle for rational
points on del Pezzo surfaces of degree four defined over a global function
field of odd characteristic.
Along the way, we also prove some results about the space of one cycles on a
separably rationally connected fibration over a curve, which leads to the
equality of the coniveau filtration and the strong coniveau filtration
(introduced by Benoist-Ottem and Voisin) on degree $3$ homology of such
varieties.
###### Contents
1. 1 Introduction
1. 1.1 Local-global principle for zero cycles
2. 1.2 Integral Tate conjecture
3. 1.3 Coniveau and strong coniveau
4. 1.4 Integral Tate conjecture for one cycles: arithmetic part
5. 1.5 Algebraic equivalence
6. 1.6 Structure of the paper
2. 2 Space of one cycles
3. 3 Lawson homology
4. 4 Chow sheaves
5. 5 Integral Tate conjecture and local-global principle for zero cycles
6. 6 Examples
## 1\. Introduction
### 1.1. Local-global principle for zero cycles
Given a smooth projective variety defined over a global field, a natural and
important problem is to find criteria for the existence of rational points and
a description of the set of all rational points. Hasse principle and weak
approximation problem, or local-global principle, gives a characterization of
this set in terms of the adelic points. There are various obstructions for
local-global principle to hold, notably the so called Brauer-Manin
obstruction. A conjecture due to Colliot-Thélène states that for rationally
connected varieties defined over a global field, this is the only obstruction.
The study of zero cycles, as natural generalizations of rational points, has
also drawn lots of attentions in recent years. Motivated by the case of
rational points, Colliot-Thélène has formulated the following conjectures on
the local-global principle for zero cycles.
###### Conjecture 1.1.
[CT99, Conjecture 2.2] Let $X$ be a smooth projective variety defined over the function field $\mathbb{F}_{q}(B)$ of a smooth curve $B$ defined over a finite field $\mathbb{F}_{q}$. For every place $\nu$ of $\mathbb{F}_{q}(B)$, let $z_{\nu}\in CH_{0}(X_{\nu})$. Suppose that for every element $A\in Br(X)\\{\ell\\}$, we have $\sum_{\nu}Inv(A(z_{\nu}))=0$. Then for all $n>0$, there is a cycle $z_{n}\in CH_{0}(X)$ such that for all $\nu$ we have
$cl(z_{n})=cl(z_{\nu})\in H^{2d}_{\text{\'{e}t}}(X_{\nu},\mu_{\ell^{n}}^{\otimes d}).$
Here $Inv(A(z_{\nu}))$ denotes the image of $(A,z_{\nu})$ under the pairing
$Br(X_{\nu})\\{\ell\\}\times CH_{0}(X_{\nu})\to\mathbb{Q}/\mathbb{Z}.$
A particular case of the above conjecture is the following.
###### Conjecture 1.2.
Let $X$ be a smooth projective variety defined over the function field
$\mathbb{F}_{q}(B)$ of a smooth curve $B$ defined over a finite field
$\mathbb{F}_{q}$. Suppose that for every place $\nu$ of $\mathbb{F}_{q}(B)$,
there is a zero cycle
$z_{\nu}\in CH_{0}(X_{\nu})$
of degree prime to $\ell$. Suppose that for every element $A\in Br(X)\\{\ell\\}$, we have
$\sum_{\nu}Inv(A(z_{\nu}))=0.$
Then there is a cycle $z\in CH_{0}(X)$ of degree prime to $\ell$.
In this paper, for an abelian group $A$, we use
$A\hat{\otimes}\mathbb{Z}_{\ell}$ to denote the inverse limit
$\lim\limits_{\longleftarrow}A/\ell^{n}A$. The following stronger form of the
above conjectures is also well-known.
###### Conjecture 1.3.
Let $X$ be a smooth projective variety defined over a global field $K$. Let
$\ell$ be a prime number invertible in $K$. There is an exact sequence:
$CH_{0}(X)\hat{\otimes}\mathbb{Z}_{\ell}\to\Pi_{\nu\in\Omega(K)}CH_{0}(X_{\nu})\hat{\otimes}\mathbb{Z}_{\ell}\to
Hom(Br(X)\\{\ell\\},\mathbb{Q}/\mathbb{Z}).$
Conjectures 1.1 and 1.2 are consequences of this via considering the commutative diagram of various cycle class maps. On the other hand, if the cycle class map $CH_{0}(X_{\nu})\hat{\otimes}\mathbb{Z}_{\ell}\to H^{2d}_{\text{\'{e}t}}(X_{\nu},\mathbb{Z}_{\ell}(d))$ is injective, Conjecture 1.1 and the stronger Conjecture 1.3 are equivalent. In general the injectivity fails, but we will see that in many (and conjecturally all) cases of interest to us, the injectivity holds.
One of the main theorems of this article is the following.
###### Theorem 1.4.
Let $X$ be a smooth projective geometrically rational surface defined over the
function field $\mathbb{F}_{q}(B)$ of a smooth projective curve $B$. Then
Conjecture 1.3, and hence Conjectures 1.1 and 1.2, hold for $X$.
By a happy coincidence, we deduce a corollary for rational points.
###### Theorem 1.5.
Let $X$ be a del Pezzo surface of degree $4$ defined over a global function
field of odd characteristic. Then the Brauer-Manin obstruction is the only obstruction to the Hasse principle for rational points on $X$.
###### Proof.
If $X$ has rational points everywhere locally and they satisfy the Brauer-Manin constraint, then there is a zero cycle of degree $1$ over the function field by Theorem 1.4. A del Pezzo surface of degree $4$ is a complete intersection of $2$ quadrics in $\mathbb{P}^{4}$. Such a complete intersection has a rational point if and only if it has a zero cycle of odd degree [Bru78]. Hence we have the result. ∎
###### Remark 1.6.
One is also interested to study weak approximation for rational points on a
del Pezzo surface of degree $4$. For a del Pezzo surface of degree $4$ over a
number field, assuming that there is a rational point, Salberger and
Skorobogatov[SS91] prove that the Brauer-Manin obstruction is the only
obstruction to weak approximation. As the author has been informed by Colliot-
Thélène, essentially the same argument also proves that over a global function
field of odd characteristic, the Brauer-Manin obstruction is the only obstruction
to weak approximation once there is a rational point. In characteristic $2$,
some partial results are contained in the joint work of the author with Letao
Zhang [TZ18].
We finish this section with some previously known results. There is a vast
literature on the local-global principles for zero-cycles/rational points on
geometrically rational surfaces. Let us only mention a few relevant results
and refer the readers to survey articles such as [Wit18] for a more comprehensive list.
Colliot-Thélène proved that Conjecture 1.3 holds for ruled surfaces defined over number fields [CT00]. The global function field version for ruled surfaces was proved by Parimala-Suresh [PS16], whose proof depends on the computation of degree $3$ unramified cohomology and also establishes the integral Tate conjecture for conic bundles over surfaces defined over finite fields. An
interesting example of cubic surfaces of the form
$(f+tg=0)\subset\mathbb{P}^{3}\times\mathbb{A}^{1}_{t}$ is studied by Colliot-
Thélène-Swinnerton-Dyer [CTSD12]. In addition to proving that the Hasse principle for zero cycles holds for cubic surfaces of this form, they also prove that
the existence of rational points is equivalent to the existence of a zero
cycle of degree $1$ for such surfaces.
The study of complete intersection of two quadrics has also drawn lots of
attention. It starts with the work of Colliot-Thélène, Sansuc, and Swinnerton-
Dyer [CTSSD87]. Heath-Brown proved that the Hasse principle for rational points holds for smooth complete intersections of two quadrics in $\mathbb{P}^{7}$ over number fields [HB18]; see also a different proof by Colliot-Thélène [CT22]. Assuming the finiteness of Tate-Shafarevich groups of elliptic curves and the validity of Schinzel’s hypothesis, Wittenberg proved that the Hasse principle holds for such complete intersections in $\mathbb{P}^{5}$ and in some cases in $\mathbb{P}^{4}$ over number fields [Wit07]. The author has shown in a previous paper [Tia17] that the Hasse principle for rational points holds for smooth complete intersections of two quadrics in $\mathbb{P}^{n}$, $n\geq 5$, defined over a global function field of odd characteristic.
### 1.2. Integral Tate conjecture
Our approach to Theorem 1.4 is based on the close relation between an integral
version of Tate conjecture and Colliot-Thélène’s conjectures, first studied by
Saito [Sai89] and Colliot-Thélène [CT99].
Let $X$ be a smooth projective geometrically irreducible variety of dimension
$d$ defined over a finite field $\mathbb{F}$. We have the cycle class maps:
(1) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1)),$
(2) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1))\to
H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1))^{G},$ (3)
$CH_{1}(\bar{X})\otimes\mathbb{Z}_{\ell}\to\cup_{K/\mathbb{F}}H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1))^{G_{K}}\subset
H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1)).$
We also have the corresponding cycle class maps after tensoring with
$\mathbb{Q}_{\ell}$. Tate conjecture predicts that the cycle class map on
codimension $r$ cycles
$CH^{r}(X)\otimes\mathbb{Q}_{\ell}\to
H^{2r}_{\text{\'{e}t}}(X,\mathbb{Q}_{\ell}(r))$
is surjective for any smooth projective variety defined over a finite field.
While the cycle class map is in general not surjective for $\mathbb{Z}_{\ell}$
coefficients, one is still interested in knowing in which cases surjectivity
still holds. This is usually called the integral Tate conjecture (even though
it is not true in general).
###### Definition 1.7.
Let $X$ be a smooth, proper variety over an algebraically closed field. Given any $f:\mathbb{P}^{1}\to X$, the pull-back $f^{*}T_{X}$ decomposes as a direct sum of line bundles $\oplus_{i=1}^{\dim X}\mathcal{O}(a_{i})$. We say that $X$ is _separably rationally connected_ , or _SRC_ , if there is a morphism $f$ for which $a_{i}>0$ for every $i$. We say that $X$ is _separably rationally connected in codimension $1$_ or _SRC in codimension $1$_ if there is a morphism $f$ for which $a_{i}\geq 0$ for every $i$, with strict inequality for all but one $a_{i}$.
###### Remark 1.8.
The term SRC is introduced by Kollár-Miyaoka-Mori [KMM92]. The term SRC in
codimension $1$ is introduced in [KT23]. Main examples include SRC varieties
and fibrations over a curve with smooth proper SRC general fibers. In
characteristic $0$, these are all the examples. In positive characteristic
there are more examples. In any case, one can take the quotient by free
rational curves on a variety that is SRC in codimension $1$. The quotient is
either a curve or a point. In particular, the Chow group of $0$-cycles on such
varieties is supported in a curve.
The results of this paper, together with some results proved by the author in
[Tia20], strongly suggest that the following is true.
###### Conjecture 1.9.
Let $X$ be a smooth projective variety defined over a finite field. Assume
that $X$ is separably rationally connected in codimension $1$. Then the cycle
class map
$CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(d-1))$
is surjective, where $d=\dim X$.
We refer the readers to Theorem 1.13 and Remark 1.15 for evidences of this
conjecture.
The connection between integral Tate conjecture and Conjecture 1.1, 1.2 is the
following.
###### Theorem 1.10 ([CT99] Proposition 3.2, [Sai89] Corollary (8-6)).
Let $\mathbb{F}$ be a finite field, $C$ a smooth projective geometrically
connected curve over $\mathbb{F}$, and $K$ the function field of $C$. Let
$\mathcal{X}$ be a smooth projective geometrically connected variety of
dimension $d+1$ defined over $\mathbb{F}$, equipped with a morphism
$p:\mathcal{X}\to C$, whose generic fiber is smooth and geometrically
irreducible. Let $\ell$ be a prime different from the characteristic.
1. (1)
If the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective, Conjectures 1.1 and 1.2 are true.
2. (2)
If the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\to
H^{2d}_{\text{{\'{e}}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))^{G}$
is surjective, or if
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective modulo torsion, Conjecture 1.2 is true.
###### Remark 1.11.
The cited references only contain a proof of the first statement, but the second statement follows from the same proof. The general result of Saito produces a cohomology class $\xi\in H^{2d}(\mathcal{X},\mathbb{Z}_{\ell}(d))$ whose restriction to each local place coincides with the class of $z_{\nu}$ ([CT99, Proposition 3.1]). The various types of integral Tate
conjecture are simply used to find a global cycle whose class agrees with
$\xi$ in various cohomology groups. See also page 19 of the slides of Colliot-Thélène’s lecture at Cambridge in 2008 (available at https://www.imo.universite-paris-saclay.fr/~jean-louis.colliot-thelene/expocambridge240809.pdf).
A result of Schoen [Sch98] says that if the Tate conjecture is true for
divisors on all smooth projective surfaces defined over finite fields, then
for any smooth projective variety $V$ defined over a finite field
$\mathbb{F}$, the cycle class map
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to\cup_{K/\mathbb{F}}H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))^{\text{Gal}(\bar{\mathbb{F}}/K)}\subset
H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))$
is surjective, where $\bar{V}$ is the base change of $V$ to an algebraic
closure of $\mathbb{F}$.
If furthermore $V$ is SRC in codimension $1$, since its Chow group of zero
cycles is supported in a curve, it is easy to see that every class in
$H^{2d-2}(\bar{V},\mathbb{Q}_{\ell}(d-1))$ is algebraic. Thus every class in
$H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))$ is fixed by some open subgroup of
the Galois group. So in this case, Schoen’s theorem implies that we always
have a surjection
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1)),$
provided that Tate conjecture holds for all surfaces.
The paper [CTS10] discussed the implication of Schoen’s result for varieties
defined over $\bar{\mathbb{F}}(C)$, the function field of a curve defined over
$\bar{\mathbb{F}}$. Colliot-Thélène and Kahn analyzed the surjectivity of the
$\mathbb{Z}_{\ell}$-coefficient cycle class map of codimension $2$ cycles and
its relation with degree $3$ unramified cohomology
$H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$ in [CTK13] (over
the complex numbers, such a relation is studied in [CTV12]). For the sake of
brevity, and since we will not need these notions for the other parts of this
paper, we will not define this invariant. Instead, we refer the interested
reader to these papers and the references therein for definitions and
properties of unramified cohomology. In particular, if the unramified
cohomology $H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$
vanishes, the cokernel of $CH^{2}(X)\otimes\mathbb{Z}_{\ell}\to H^{4}(X,\mathbb{Z}_{\ell}(2))$ is torsion free [CTK13, Théorème 2.2]. Thus if, in addition, we know that the cokernel is torsion (for instance, if the Chow group of zero cycles with rational coefficients is universally supported in a surface [CTK13, Proposition 3.2]), we know the cycle class map is surjective. One should also note that by the Tate conjecture, one expects the cokernel to
be torsion. In general, they deduced a short exact sequence relating various
Chow groups of codimension $2$ cycles and degree $3$ unramified cohomology.
Their short exact sequence for varieties over finite fields reads the
following ([CTK13, Théorème 6.8]):
$0\to\text{Ker}(CH^{2}(X)\to CH^{2}(\bar{X}))\to H^{1}(\mathbb{F},\oplus_{\ell}H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))_{\text{tors}})\to\text{Ker}(H^{3}_{\text{nr}}(X,\mathbb{Q}/\mathbb{Z}(2))\to H^{3}_{\text{nr}}(\bar{X},\mathbb{Q}/\mathbb{Z}(2)))\to\text{Coker}(CH^{2}(X)\to CH^{2}(\bar{X})^{G})\to 0$ (4)
Of course, one can deduce from this a similar exact sequence for the $\ell$-primary torsion.
In particular, we can apply their results to $3$-folds, which then relates the
integral Tate conjecture to the vanishing of degree $3$ unramified cohomology.
Note that by the Lefschetz hyperplane theorem, if we can prove integral Tate
conjecture for one cycles on all $3$-folds, we prove integral Tate conjecture
for one cycles on all smooth projective varieties.
Several groups of authors proved the vanishing of the degree $3$ unramified cohomology on certain threefolds and deduced the integral Tate conjecture for one cycles, thus proving Conjectures 1.1 and 1.2 for some surfaces defined over a global function field. See, for example, [PS16] for the case of conic bundles over a surface, and [CTS21] and [Sca22] for the case of a product of a curve with a $CH_{0}$-trivial surface.
We prove Theorem 1.4 as a consequence of the following case of the integral
Tate conjecture for one cycles.
###### Theorem 1.12.
Let $\pi:\mathcal{X}\to B$ be a projective flat family of surfaces over a
smooth projective curve $B$ defined over a finite field $\mathbb{F}_{q}$.
Assume that $\mathcal{X}$ is smooth and that the geometric generic fiber is a
smooth rational surface. Then the integral Tate conjecture holds for one cycles.
More concretely, the cycle class map
$CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{4}_{\text{\'{e}t}}(\mathcal{X},\mathbb{Z}_{\ell}(2))$
is surjective.
In general, one can deduce the following geometric criterion for the validity
of the integral Tate conjecture (Conjecture 1.9) and the local-global principles for
separably rationally connected varieties defined over global function fields.
Given a variety $V$ defined over a field $k$, we denote by $A_{1}(V)$ the
group of one cycles in $V$ modulo algebraic equivalence. We also use
$\overline{V}$ to denote the base change of $V$ to an algebraic closure of
$k$.
###### Theorem 1.13.
Let $\pi:\mathcal{X}\to B$ be a projective flat family of varieties over a
smooth projective curve $B$ defined over a finite field $\mathbb{F}_{q}$.
Assume that $\mathcal{X}$ is smooth and that the generic fiber is smooth,
separably rationally connected, and of dimension $d$. Consider the following
hypothesis:
* (A)
The cycle class map $A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$ is
surjective.
* (B)
The cycle class map $A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$ is
injective.
* (C)
The cycle class map from higher Chow groups
$\lim\limits_{\longleftarrow}CH_{1}(\overline{\mathcal{X}},1,\mathbb{Z}/\ell^{n}\mathbb{Z})\to
H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$
is surjective.
* (D)
The coniveau filtration
$N^{1}H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell})$ is
the whole cohomology group
$H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}).$
If $\overline{\mathcal{X}}$ satisfies hypotheses (A) and (B), then the cycle class map
$CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to H^{2d}_{\text{\'{e}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\to H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))^{Gal(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})}$
is surjective, and Conjecture 1.2 holds for the generic fiber $X$ over $\mathbb{F}_{q}(B)$.
If $\overline{\mathcal{X}}$ satisfies hypothesis (C) or (D), then the cycle class map
$CH_{1}(\mathcal{X})_{\text{alg}}\otimes\mathbb{Z}_{\ell}\to H^{1}(\mathbb{F}_{q},H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d)))$
is surjective.
Thus Conjecture 1.1 holds for the generic fiber $X$ over $\mathbb{F}_{q}(B)$
if either hypothesis (A), (B), (C) or (A), (B), (D) hold.
###### Remark 1.14.
The statements in Hypotheses (A), (B), (C), and (D) only depend on the stable
birational class of the generic fiber of $\overline{\mathcal{X}}\to\bar{B}$.
In particular, they only depend on the stable birational class of the generic
fiber $X$ over the field $\mathbb{F}_{q}(B)$ (assuming that there is a smooth
projective model for every stable birational class of $X$). Also note that
Conjectures 1.1, 1.2, 1.3, 1.9 only depend on the stable birational class of
the variety over $\mathbb{F}_{q}(B)$ (or $\mathbb{F}$ for Conjecture 1.9).
###### Remark 1.15.
We make a few simple remarks about the validity of the hypotheses above. First
of all, it is a simple exercise to prove that all these hypotheses hold if we
use $\mathbb{Q}_{\ell}$-coefficients, and that they hold for all but finitely
many $\ell$.
As discussed above, Hypothesis (A) follows from Tate’s conjecture on surfaces.
The author has made conjectures on the Kato homology of rationally connected
fibrations over an algebraically closed field of characteristic $0$ in
[Tia20]. A special case of the conjecture predicts that for a rationally
connected fibration over a curve defined over an algebraically closed field of
characteristic $0$, hypotheses (A), (B), (C), and (D) hold. It is quite
reasonable to believe that the same is true for separably rationally connected
fibrations in characteristic $p>0$. We discuss some examples in Section 6.
Now we explain a corollary of Theorem 1.12, which confirms a conjecture of
Colliot-Thélène and Kahn ([CTK13, Conjecture 5.8]) up to $p$-torsion.
###### Corollary 1.16.
Let $X$ be a smooth projective threefold defined over a finite field
$\mathbb{F}$. Assume that $X$ admits a fibration structure over a smooth
projective curve with smooth projective geometrically rational generic fiber.
Then we have
$H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))=0,$
for any $\ell$ invertible in $\mathbb{F}$, and a short exact sequence
$0\to H^{1}(\mathbb{F},H^{3}(\bar{X},\mathbb{Z}_{\ell}(2)))\\{\ell\\}\to
CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
CH_{1}(\bar{X})^{G}\otimes\mathbb{Z}_{\ell}\to 0.$
###### Proof.
The vanishing of $H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$
follows from Theorem 1.12, [CTK13, Theorem 2.2, Proposition 3.2], and the fact
that $CH_{0}(\bar{X})$ is supported on a curve.
It then follows from the exact sequence (1.2) that we have the above
description of the Chow groups of $X$ and $\bar{X}$. ∎
###### Remark 1.17.
Theorem 1.13 holds for smooth projective separably rationally connected
varieties. We have made the proof work in both cases. A cheaper way to get
this result is to note that the validity of the integral Tate conjecture is a
stable birational invariant and apply the above theorems to the product
$\mathbb{P}^{1}\times X$ ($X$ separably rationally connected) as a fibration
over $\mathbb{P}^{1}$. Unfortunately, for a separably rationally connected
threefold $V$ defined over ${\mathbb{F}}_{q}$, we do not know if the cycle
class map
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to
H^{4}(\bar{V},\mathbb{Z}_{\ell}(2))$
is surjective. Once we know this (e.g. if we are willing to assume the Tate
conjecture for surfaces), the same argument as above shows that
$CH_{1}({V})\otimes\mathbb{Z}_{\ell}\to H^{4}({V},\mathbb{Z}_{\ell}(2))$
is surjective. One can also deduce the vanishing of degree $3$ unramified
cohomology and the short exact sequence of Chow groups as above.
### 1.3. Coniveau and strong coniveau
This section is a digression into several a priori different notions of
coniveau filtrations on the cohomology of a variety introduced by Benoist-Ottem
and Voisin in [BO21, Voi22]. These notions will play an important role when we
return to discuss the integral Tate conjecture in the next section.
Let us first review the definitions.
###### Definition 1.18.
Let $X$ be a smooth projective variety of dimension $n$ defined over an
algebraically closed field. Given an abelian group $A$ that is one of
$\mathbb{Z}/m\mathbb{Z},\mathbb{Z}_{\ell},\mathbb{Z},\mathbb{Q}$, or
$\mathbb{Q}_{\ell}$, we simply write $H^{k}(X,A)$ for either the étale
cohomology with coefficients in $A$ or the singular cohomology with
coefficients in $A$ (if $X$ is a complex variety). We have the following
closely related filtrations on the cohomology $H^{k}(X,A)$.
1. (1)
The coniveau filtration:
$N^{c}H^{k}(X,A):=\sum_{f:Y\to X}f_{*}(H_{2n-k}(Y,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A),$
where the sum is taken over all morphisms from projective algebraic sets
$f:Y\to X,\dim Y\leq n-c$;
2. (2)
The strong coniveau filtration:
$\tilde{N}^{c}H^{k}(X,A):=\sum_{f:Y\to X}f_{*}(H_{2n-k}(Y,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A),$
where the sum is taken over all morphisms from smooth projective varieties
$f:Y\to X,\dim Y\leq n-c$.
3. (3)
The strong cylindrical filtration:
$\tilde{N}_{c,\text{cyl}}H^{k}(X,A):=\sum\Gamma_{*}(H_{2n-k-2c}(Z,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A)$
where the sum is taken over all _smooth_ projective varieties $Z$ and
correspondences $\Gamma\subset Z\times X$ of relative dimension $c$ over $Z$.
4. (4)
The strong equidimensional cylindrical filtration:
$\tilde{N}_{c,\text{cyl},\text{eq}}H^{k}(X,A):=\sum\Gamma_{*}(H_{2n-k-2c}(Z,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A)$
where the sum is taken over all smooth projective varieties $Z$ and
correspondences $\Gamma\subset Z\times X$ that are equidimensional of relative
dimension $c$ over $Z$.
5. (5)
The semi-stable filtration: $N_{c,\text{st},\text{cyl}}H^{k}(X,\mathbb{Z})$ as
the group generated by the cylinder homomorphisms
$f_{*}\circ p^{*}:H_{2n-k-2c}(Z,\mathbb{Z})\to H_{2n-k}(X,\mathbb{Z})\cong
H^{k}(X,\mathbb{Z}),$
for all morphisms $f:Y\to X$, and flat projective morphisms $p:Y\to Z$ of
relative dimension $c$ with simple normal crossing fibers, where $\dim Z\leq
2n-k-2c$.
6. (6)
We use the notations $N^{c}H_{k}$ etc. to denote the filtrations on Borel-
Moore or singular homology $H_{k}$. Since $X$ is smooth, this is the same as
the filtration $N^{c}H^{2n-k}$.
A general relation between these filtrations is the following.
###### Lemma 1.19.
[Voi22, Proposition 1.3] We have the following inclusions:
$\tilde{N}_{n-c,\text{cyl},\text{eq}}H^{2c-1}(X,A)\subset\tilde{N}_{n-c,\text{cyl}}H^{2c-1}(X,A)=\tilde{N}^{c}H^{2c-1}(X,A)\subset
N^{c}H^{2c-1}(X,A).$
The only non-obvious part, the equality in the middle, is proved by Voisin
[Voi22, Proposition 1.3].
A natural question is whether or not these filtrations agree with each other.
If we use $\mathbb{Q}$ or $\mathbb{Q}_{\ell}$ coefficients, the theory of
weights shows that the strong coniveau and coniveau filtrations are
equivalent. Since the difference between some of these filtrations also gives
stable birational invariants, one wonders if this could be used to prove non-
stable-rationality for some rationally connected varieties.
Examples with $\mathbb{Z}$-coefficients where the strong coniveau filtration
and coniveau filtration differ are constructed in [BO21]. More precisely, they
prove the following.
###### Theorem 1.20.
[BO21, Theorem 1.1] For all $c\geq 1$ and $k\geq 2c+1$, there is a smooth
projective complex variety $X$ such that the inclusion
$\tilde{N}^{c}H^{k}(X,\mathbb{Z})\subset N^{c}H^{k}(X,\mathbb{Z})$
is strict. One may choose $X$ to have torsion canonical bundle. If $c\geq 2$,
one may choose $X$ to be rational.
The examples above usually have large dimension, especially when $c$ is
large. For lower-dimensional examples, Benoist-Ottem proved the following.
###### Theorem 1.21.
[BO21, Theorem 1.2] For $k\in\\{3,4\\}$, there is a smooth projective complex
variety $X$ of dimension $k+1$ with torsion canonical bundle such that the
inclusion
$\tilde{N}^{1}H^{k}(X,\mathbb{Z})\subset N^{1}H^{k}(X,\mathbb{Z})$
is strict.
These examples leave the case of $c=1$ open for threefolds and for rationally
connected varieties. Voisin studied the strong coniveau filtrations on
$H^{2d-3}$ [Voi22].
###### Theorem 1.22.
[Voi22, Theorem 2.6, Corollary 2.7, Theorem 2.17] Let $X$ be a smooth
projective variety of dimension $d$ defined over $\mathbb{C}$.
1. (1)
Assume the Walker Abel-Jacobi map ([Wal07])
$\phi:CH_{1}(X)_{\text{alg}}\to J(N^{1}H^{2d-3}(X,\mathbb{Z}))$
is injective on torsion. Then we have
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}.$
2. (2)
If $\dim X$ is $3$, we have
$N_{1,\text{st},\text{cyl}}H^{3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{3}(X,\mathbb{Z})/\text{Tor}.$
3. (3)
If $X$ is rationally connected, we have
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}_{1}H^{2d-3}(X,\mathbb{Z}).$
As a consequence,
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=\tilde{N}^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}.$
We prove an improvement of Voisin’s results.
###### Theorem 1.23.
Let $X$ be a complex smooth projective variety of dimension $d$. Then the
following two filtrations agree with each other:
$N_{1,\text{st},\text{cyl}}H_{3}(X,\mathbb{Z})=N^{1}H^{2d-3}(X,\mathbb{Z}).$
Assume furthermore that $X$ is separably rationally connected (SRC) in codimension $1$. There is a smooth
projective curve $C$ with a family of $1$-dimensional cycles $\Gamma\subset
C\times X$ such that
$\Gamma_{*}:H_{1}(C,\mathbb{Z})\to H_{3}(X,\mathbb{Z})$
has the same image as the s-map $s:L_{1}H_{3}(X)\to H_{3}(X)$, which is the
same as $N^{1}H_{3}(X,\mathbb{Z})$. In particular, the following
filtrations on $H^{2d-3}(X,\mathbb{Z})$ introduced in Definition 1.18 are the
same:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}^{d-1}H^{2d-3}(X,\mathbb{Z})=N^{d-1}H^{2d-3}(X,\mathbb{Z}).$
For the definition of Lawson homology and the s-map, see Definitions 3.5 and
3.10 in Section 3.
An immediate corollary is the following.
###### Theorem 1.24.
Let $X$ be a complex smooth projective variety that is SRC in codimension
$1$. Then the following filtrations on $H^{3}(X,\mathbb{Z})$ introduced in
Definition 1.18 coincide with the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z})=\tilde{N}^{1}H^{3}(X,\mathbb{Z})=N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z}).$
###### Remark 1.25.
Using the decomposition of the diagonal argument, one can show that when $X$
is SRC in codimension $1$, for each $i$, there is a smooth projective variety
$Y_{i}$ and a family of cycles $\Gamma_{i}\subset Y_{i}\times X$ such that the
cokernel of $\Gamma_{*}:H_{i}(Y_{i})\to H_{i+2}(X)$ is $N$-torsion for a fixed
$N$. So we may consider the s-map from $\mathbb{Z}/N$ Lawson homology (defined
as the homotopy group of the topological group $Z_{r}(X)\otimes\mathbb{Z}/N$)
to $H_{3}(X,\mathbb{Z}/N)$. We have long exact sequences
$\begin{CD}L_{1}H_{i}(X,\mathbb{Z})@>{\cdot N}>{}>L_{1}H_{i}(X,\mathbb{Z})@>{}>{}>L_{1}H_{i}(X,\mathbb{Z}/N)@>{}>{}>L_{1}H_{i-1}(X,\mathbb{Z})\\
@V{}V{}V@V{}V{}V@V{}V{}V@V{}V{}V\\
H_{i}(X,\mathbb{Z})@>{\cdot N}>{}>H_{i}(X,\mathbb{Z})@>{}>{}>H_{i}(X,\mathbb{Z}/N)@>{}>{}>H_{i-1}(X,\mathbb{Z}).\end{CD}$
By results of Suslin-Voevodsky [SV96] and the Bloch-Kato conjecture proved by
Voevodsky, there is an isomorphism
$L_{1}H_{2+i}(X,\mathbb{Z}/N)\cong
CH_{1}(X,i,\mathbb{Z}/N)\cong\mathbb{H}^{i}(X,\tau^{\leq\dim
X-1}R\pi_{*}(\mathbb{Z}/N))$
between torsion Lawson homology, Bloch’s higher Chow group, and certain
Zariski cohomology group, where $\pi:X_{cl}\to X_{zar}$ is the continuous map
from $X(\mathbb{C})$ with the analytic topology to $X$ with the Zariski
topology.
We also have a long exact sequence:
$\ldots\to L_{1}H_{k}(X,\mathbb{Z}/N)\to H_{k}(X,\mathbb{Z}/N)\to
KH_{k}(X,\mathbb{Z}/N)\to L_{1}H_{k-1}(X,\mathbb{Z}/N)\to\ldots$
where $KH_{k}(X,\mathbb{Z}/N)=H^{\dim X-k}(X,R^{\dim X}\pi_{*}\mathbb{Z}/N)$
is the so-called Kato homology. The author has made a number of conjectures
about Kato homologies of a rationally connected fibration in [Tia20]. Special
cases of these conjectures imply that there is an isomorphism
$L_{1}H_{i}(X,\mathbb{Z}/N)\cong H_{i}(X,\mathbb{Z}/N)$ for all $i$ and all
rationally connected varieties and rationally connected fibrations over a
curve defined over the complex numbers. This in turn would imply the s-maps
$L_{1}H_{i}(X,\mathbb{Z})\to H_{i}(X,\mathbb{Z})$ are isomorphisms.
We have a similar result that applies to fields of positive characteristic.
###### Theorem 1.26 (=Theorem 4.14).
Let $X$ be a $d$-dimensional smooth projective variety defined over an
algebraically closed field, which is separably rationally connected in
codimension $1$. There is a smooth projective curve $C$ with a family of
$1$-dimensional cycles $\Gamma\subset C\times X$ such that
$\Gamma_{*}:H_{1}^{\text{BM}}(C,\mathbb{Z}_{\ell})\to
H_{3}^{\text{BM}}(X,\mathbb{Z}_{\ell})\cong
H^{2d-3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell})$
surjects onto $N^{1}H^{2d-3}(X,\mathbb{Z}_{\ell})$.
###### Theorem 1.27 (=Theorem 4.16).
Let $X$ be a smooth projective variety defined over an algebraically closed field,
which is separably rationally connected in codimension $1$. Assume $X$ is a
$3$-fold. Then the following filtrations on $H^{3}(X,\mathbb{Z}_{\ell})$
introduced in Definition 1.18 equal the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}^{1}H^{3}(X,\mathbb{Z}_{\ell})=N^{1}H^{3}(X,\mathbb{Z}_{\ell})=H^{3}(X,\mathbb{Z}_{\ell}).$
### 1.4. Integral Tate conjecture for one cycles: arithmetic part
We continue the discussion of the integral Tate conjecture in this section.
The Hochschild-Serre spectral sequence gives an exact sequence:
$0\to
H^{1}(\mathbb{F}_{q},H^{2d-2r-1}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r)))\to
H^{2d-2r}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(d-r))\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G}\to 0.$
Thus the integral Tate conjecture consists of a geometric part, i.e.
surjectivity of
$CH_{r}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G},$
and an arithmetic part, i.e. surjectivity of
$CH_{r}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F}_{q},H^{2d-2r-1}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))),$
where $CH_{r}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}$ is the “geometrically
homologically trivial” part, i.e. the kernel of
$CH_{r}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G}.$
In a recent preprint [SS22], Scavia and Suzuki systematically investigated the
question of surjectivity in the arithmetic part and related it to the
strong coniveau filtration. For codimension $2$ cycles, they obtain the
following results.
###### Theorem 1.28.
[SS22, Theorem 1.3] Let $\mathbb{F}$ be a finite field and $\ell$ be a prime
number invertible in $\mathbb{F}$, and suppose that $\mathbb{F}$ contains a
primitive $\ell^{2}$-th root of unity. There exists a smooth projective
geometrically connected $\mathbb{F}$-variety $X$ of dimension $2\ell+2$ such
that the map
$CH^{2}(X)\otimes\mathbb{Z}_{\ell}\to
H^{4}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2))^{G}$
is surjective whereas the map
$CH^{2}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F}_{q},H^{3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2)))$
is not.
###### Theorem 1.29.
[SS22, Theorem 1.4] Let $p$ be an odd prime. There exist a finite field
$\mathbb{F}$ of characteristic $p$ and a smooth projective geometrically
connected fourfold $X$ over $\mathbb{F}$ for which the image of the
composition
$H^{1}(\mathbb{F},H^{3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{2}(2))_{\text{tors}})\to
H^{1}(\mathbb{F},H^{3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{2}(2)))\to
H^{4}(X,\mathbb{Z}_{2}(2))$
contains a non-algebraic torsion class.
Shortly after Scavia-Suzuki’s preprint appeared, Benoist studied Steenrod
operations on Chow groups and cohomology [Ben22]. He obtained new examples
of non-algebraic cohomology classes over many fields
($\mathbb{C},\mathbb{R},\bar{\mathbb{F}}_{q},\mathbb{F}_{q}$) and for
cohomology classes on algebraizable smooth manifolds. In the case of finite
fields, his results removed the assumptions on $\ell^{2}$-th roots of unity in
Scavia-Suzuki’s results.
###### Theorem 1.30.
[Ben22, Theorem 4.12] Let $p\neq\ell$ be prime numbers, and let $\mathbb{F}$
be a finite subfield of $\bar{\mathbb{F}}_{p}$. There exist a
smooth projective geometrically connected variety $X$ of dimension $2\ell+3$
over $\mathbb{F}$ and a non-algebraic class
$x\in\text{Ker}(H^{4}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))\to
H^{4}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2)))$.
The failure of surjectivity is related to the discrepancy between the strong
coniveau and coniveau filtrations by the following.
###### Theorem 1.31.
[SS22, Theorem 1.5, Proposition 7.6] Let $X$ be a smooth projective
geometrically connected variety over a finite field $\mathbb{F}$ and $\ell$ be
a prime number invertible in $\mathbb{F}$. Suppose that the coniveau and
strong coniveau filtrations on $H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))$ coincide:
$N^{1}H^{3}(X,\mathbb{Z}_{\ell}(2))=\tilde{N}^{1}H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))$.
Then the $\ell$-adic algebraic Abel-Jacobi map is an isomorphism:
$CH^{2}(X)_{\text{alg}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F},N^{1}H^{3}(X,\mathbb{Z}_{\ell}(2))).$
In general, we have a surjection
$CH^{r}(X)_{\text{F-alg}}\otimes\mathbb{Z}_{\ell}\to\text{Image}(H^{1}(\mathbb{F},\tilde{N}^{r-1}H^{2r-1}(X,\mathbb{Z}_{\ell}(r)))\to
H^{1}(\mathbb{F},N^{r-1}H^{2r-1}(X,\mathbb{Z}_{\ell}(r)))).$
Scavia and Suzuki [SS22] use $CH^{r}(X)_{\text{F-alg}}$ to
denote cycles algebraically equivalent to zero over $\mathbb{F}$, and
$CH^{r}(X)_{\text{alg}}$ to denote cycles defined over $\mathbb{F}$ that are
algebraically equivalent to zero over $\bar{\mathbb{F}}$. However, for
codimension $2$ cycles on varieties defined over $\mathbb{F}$, there is no
known example where $CH^{2}(X)_{\text{F-alg}}$ and $CH^{2}(X)_{\text{alg}}$
differ (Question 8.2, [SS22]), and
$CH^{2}(X)_{\text{F-alg}}\otimes\mathbb{Z}_{\ell}$ and
$CH^{2}(X)_{\text{alg}}\otimes\mathbb{Z}_{\ell}$ are isomorphic if the strong
coniveau coincides with the coniveau filtration on $H^{3}$ ([SS22, Propositions
7.10, 7.11]). Moreover, for one cycles on a separably rationally connected
variety or a separably rationally connected fibration over a curve defined
over a finite field, we know $CH_{1}(X)_{\text{F-alg}}$ and
$CH_{1}(X)_{\text{alg}}$ are the same [KT23, Theorem 6].
While the surjectivity in the arithmetic part fails in general, we do
expect it to hold for one cycles on varieties that are separably
rationally connected in codimension $1$.
As a corollary of Theorem 4.7 and the work of Scavia-Suzuki, we get the
following results regarding the arithmetic part of the cycle class map.
###### Corollary 1.32 (=Corollary 4.17).
Let $X$ be a smooth projective variety of dimension $d$ defined over a finite
field $\mathbb{F}_{q}$, which is separably rationally connected in codimension
$1$. Then we have a surjection
$CH_{1}(X)_{\text{alg}}\to
H^{1}(\mathbb{F}_{q},N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))).$
Furthermore, assume one of the following:
1. (1)
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
2. (2)
The cycle class map
$cl:\varprojlim_{n}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective.
Then every class in
$H^{1}(\mathbb{F}_{q},H^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1)))$ is the class of
an algebraic cycle. In particular, this holds if $X$ has dimension $3$.
### 1.5. Algebraic equivalence
The key technical result in proving the main theorems is the joint work of the
author with János Kollár [KT23] studying algebraic equivalences of one cycles
on smooth projective varieties.
Algebraic equivalence between two cycles means that one has to add
complicated cycles to both of them and then obtain a family of cycles over a
curve. Given two stable maps whose images give algebraically equivalent
cycles, it is not clear that this algebraic equivalence of cycles can be
realized as a deformation of stable maps. In the joint work with Kollár, we
prove that this is always possible for curves on smooth projective varieties.
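For reference, the notion of algebraic equivalence used throughout can be spelled out explicitly; this restates the definition invoked in the proof of Proposition 2.4, with $(T,t_{1},t_{2})$ denoting a two-pointed smooth projective curve.

```latex
% Algebraic equivalence of two r-cycles Z_1, Z_2 on X, spelled out:
% there exist a cycle \Delta, a smooth projective curve T with two points
% t_1, t_2, and a family of cycles \Gamma_T \subset T \times X such that
Z_{1} \sim_{\mathrm{alg}} Z_{2}
\quad\Longleftrightarrow\quad
\exists\, \Delta,\ (T,t_{1},t_{2}),\ \Gamma_{T}\subset T\times X:\quad
\Gamma_{T}|_{t_{1}} = Z_{1}+\Delta,\qquad
\Gamma_{T}|_{t_{2}} = Z_{2}+\Delta.
```

Subtracting the constant family $T\times\Delta$ then yields a (possibly non-effective) family connecting $Z_{1}$ and $Z_{2}$ directly, which is exactly the reduction carried out in the proof of Proposition 2.4.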
###### Theorem 1.33.
[KT23, Theorem 1] Let $X$ be a smooth, projective variety over an
algebraically closed field $K$. Let $\pi_{i}:C_{i}\to X$ (for $i\in I$) be
finitely many morphisms of nodal curves to $X$ such that the
$(\pi_{i})_{*}[C_{i}]$ are algebraically equivalent to each other. Then there
is a morphism from a single nodal curve $\pi_{R}:R\to X$ and a family of
connected nodal curves over a connected curve $B$ with a morphism to $X$:
$B\xleftarrow{}S\xrightarrow{\pi}X$ such that for any $i\in I$, there is a
point $b_{i}\in B$ with fiber $S_{i}\cong C_{i}\cup R$ and
$(\pi|_{S_{i}}:C_{i}\to X)\cong(\pi_{i}:C_{i}\to X),(\pi|_{S_{i}}:R\to
X)\cong(\pi_{R}:R\to X)$.
That is, the deformation of cycles is visible as deformation of maps. This
result yields many interesting applications. For example, the study leads to
the following theorem.
###### Theorem 1.34.
[KT23, Theorem 6] Let $X_{k}$ be a smooth, projective variety over a perfect
field $k$, which is separably rationally connected in codimension $1$. Then
the kernel of the natural map
$A_{1}(X_{k})\to A_{1}({X}_{\bar{k}})$
is either trivial or $\mathbb{Z}/2\mathbb{Z}$. More precisely,
1. (1)
the kernel is trivial if $X_{k}$ contains an odd degree $0$-cycle, and
2. (2)
if $Z=\sum_{i}d_{i}C_{i}$ and $Z_{\bar{k}}$ is algebraically equivalent to $0$
over $\bar{k}$, then $Z$ is algebraically equivalent to $0$ over $k$ if and
only if the index of $X$ divides
$\chi(Z):=\sum_{i}d_{i}\chi(C_{i},\mathcal{O}_{C_{i}})$.
Recall that a _pseudo algebraically closed field_ (or a _PAC field_ for short)
is a field where every geometrically integral variety has a rational point.
###### Theorem 1.35.
[KT23, Theorem 7] Let $X_{k}$ be a smooth projective variety defined over a
perfect field $k$. Assume that every geometrically irreducible $k$-variety has
a $0$-cycle of degree $1$ (e.g. $k$ is a finite field or a PAC field). Assume
that $X$ is separably rationally connected in codimension $1$. We have an
isomorphism
$A_{1}(X_{k})\cong A_{1}(X_{\bar{k}})^{G},$
where $G$ is the absolute Galois group of $k$.
If $k$ is not perfect (and every geometrically irreducible $k$-variety has a
$0$-cycle of degree $1$), then we have an isomorphism after inverting the
characteristic $p$.
### 1.6. Structure of the paper
This paper consists of applications of the results in [KT23].
The first application is in Section 2, where we describe some structural
results of the space of one cycles. These structural results are then used in
Sections 3 and 4 to study the coniveau filtration on $H_{3}$ of a separably
rationally connected fibration over a curve. The case of complex varieties
uses Lawson homology and is topological. Moreover, it gives “integral”
results. We present this first to give the readers some flavor of the
argument. The general case has to use the Chow sheaves introduced by Suslin
and Voevodsky, and is hence more abstract. Unfortunately, in this case we only have
results for torsion coefficients and have to pass to the inverse limit from
time to time. The criterion for the surjection of the cycle class map onto the
arithmetic part is proved in Corollary 4.17.
Finally, the applications to local-global principles are discussed in Section
5. In Section 6, we give some examples where the criterion in Theorem 1.13 can
be effectively checked.
Acknowledgment: I would like to thank Jean-Louis Colliot-Thélène and Olivier
Wittenberg for many helpful and constructive comments. I am grateful to János
Kollár for generously sharing his ideas and for co-authoring the article
[KT23] which produces much stronger results than those proved in the first
version of this paper, and which provides the technical results needed for
this paper. This work is partially supported by NSFC grants No. 11890660 and
No.11890662.
## 2\. Space of one cycles
In this section, we fix an algebraically closed field $k$ of any
characteristic. We remind the readers that a variety is always assumed to be
irreducible throughout the paper, and thus connected. Sometimes we add the
word irreducible just to emphasize this.
###### Definition 2.1.
Let $X,Y$ be finite type reduced separated $k$-schemes. A family of relative
cycles of equi-dimension $r$ over $Y$ is a formal linear combination of
integral subschemes $\mathcal{Z}=\sum m_{i}Z_{i},Z_{i}\subset Y\times
X,m_{i}\in\mathbb{Z}$ such that
1. (1)
Each $Z_{i}$ dominates one irreducible component of $Y$;
2. (2)
Each fiber of $Z_{i}\to Y$ has dimension $r$.
3. (3)
A fat point condition is satisfied (see [Kol96, Chapter I, 3.10.4] or [SV00,
Definition 3.1.3]).
4. (4)
A field of definition condition is satisfied. Namely, for any point $y\in Y$,
there are integral subschemes $\gamma_{i}$ of $X$ defined over the residue
field $\kappa(y)$ such that the cycle theoretic fiber ([Kol96, Chapter I,
3.10.4]) of $\mathcal{Z}$ over $y$ is $\sum m_{i}\gamma_{i}$.
We write $\mathcal{Z}^{+}$ and $\mathcal{Z}^{-}$ as the positive and negative
parts, i.e. the sum of $Z_{i}$’s with positive and negative coefficients.
We say this family has proper support if $Z_{i}\to Y$ are proper for every
$i$.
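As a minimal sanity check of Definition 2.1 (our own illustration, not taken from [Kol96] or [SV00], and assuming $Y$ is irreducible), the constant family satisfies all four conditions:

```latex
% For an integral r-dimensional closed subscheme Z \subset X, the constant
% family over Y is
\mathcal{Z} \;=\; 1\cdot(Y\times Z)\ \subset\ Y\times X.
% (1) Y \times Z dominates Y;  (2) every fiber of \mathcal{Z} \to Y is a
% copy of Z, hence of dimension r;  (4) the cycle-theoretic fiber over any
% y \in Y is the cycle [Z_{\kappa(y)}], defined over the residue field
% \kappa(y).  Condition (3) also holds, since the family is pulled back
% from a family over a point.
```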
It is often convenient to allow some redundancy in the linear combination,
especially when we consider pull-back and restrictions. For example, we allow
an expression of the form $(Z_{1}+Z_{3})-(Z_{2}+Z_{3})$. When this happens, we
think of $(Z_{1}+Z_{3})$ (resp. $(Z_{2}+Z_{3})$) as the positive (resp.
negative) part.
We refer the interested readers to [Kol96] Section I.3, I.4 and [SV00] Section
3 for details about the last two conditions and the subtle points about these
definitions. We only remark that with this definition, one can pull back
families of cycles.
We adopt Kollár’s [Kol96] convention of only considering families of cycles
over a reduced base. Suslin and Voevodsky [SV00] consider a more general base.
For our purpose, it is perfectly fine to always work over a reduced base. We
call a separated, finite type, reduced $k$-scheme an _algebraic set_.
###### Remark 2.2.
We would also like to mention that for a normal variety, condition (3) is
automatic. Condition (4) is always satisfied in characteristic $0$, or if the
base is regular, or if all the $m_{i}$’s are invertible in the field $k$. It
is introduced to make sure that pull-back of families of cycles is always
defined and one has a presheaf of relative cycles.
###### Definition 2.3.
Given a family of projective curves $S\to B$ and a $B$-morphism $F:S\to
B\times X$, we can associate to it a family of cycles $\Gamma_{S}\subset B\times
X$. Given a family of cycles $\Gamma\subset B\times X$ over $B$, we say it
is _nodal_ if it is induced from a family of nodal curves in this way.
###### Proposition 2.4.
Let $S_{1},S_{2}$ be two connected algebraic sets and $\Gamma_{1}\subset
S_{1}\times X,\Gamma_{2}\subset S_{2}\times X$ be two equidimensional families
of $r$-cycles in $X$. Assume that the cycles in the two families are
algebraically equivalent. There is a connected algebraic set $S$ and an
equidimensional family of $r$-cycles $\Gamma\subset S\times X$, and morphisms
$f_{1}:S_{1}\to S,f_{2}:S_{2}\to S$ such that
$\Gamma_{1}=f_{1}^{*}\Gamma,\Gamma_{2}=f_{2}^{*}\Gamma$. Moreover, if both
$S_{1}$ and $S_{2}$ are normal/smooth/projective, we may choose $S$ to be
normal/smooth/projective.
###### Proof.
Take two points $s_{1}\in S_{1},s_{2}\in S_{2}$. Denote by
$\gamma_{1},\gamma_{2}$ the cycles over $s_{1},s_{2}$. By assumption,
$\gamma_{1},\gamma_{2}$ are algebraically equivalent. Thus there is a smooth
projective curve $S_{3}$, two points $a,b\in S_{3}$ and a family of $r$-cycles
$\Gamma_{3}\subset S_{3}\times X$ such that the cycles over $a,b$ are
$\gamma_{1},\gamma_{2}$. This follows from the definition of algebraic
equivalence. Indeed, by definition of algebraic equivalence, we may find a
cycle $\Delta$ and a family of cycles $\Gamma_{T}$ over a smooth projective
curve $T$ and two points $t_{1},t_{2}\in T$ such that the cycle over $t_{1}$
(resp. $t_{2}$) is $\gamma_{1}+\Delta$ (resp. $\gamma_{2}+\Delta$). Then we
take $S_{3}$ to be $T$ and the family of cycles $\Gamma_{3}$ to be
$\Gamma_{T}-T\times\Delta$.
Define $S=S_{1}\times S_{2}\times S_{3}$, with $p_{i}:S\to S_{i},i=1,2,3,$ the
projections. Define
$\Gamma=p_{1}^{*}\Gamma_{1}+p_{2}^{*}\Gamma_{2}-p_{3}^{*}\Gamma_{3}$. Finally,
define
$f_{1}:S_{1}\to S,x\mapsto(x,s_{2},b)$
and
$f_{2}:S_{2}\to S,y\mapsto(s_{1},y,a).$
One easily checks these satisfy the conditions.
Since $S_{3}$ is a smooth projective curve, if both $S_{1},S_{2}$ are normal,
or smooth, or projective, so is $S$. ∎
Now we can state the main technical result of this section.
###### Theorem 2.5.
Let $X$ be a smooth projective variety defined over an algebraically closed
field $k$. Let $(U,\Gamma_{U})$ be an equi-dimensional family of
one-dimensional cycles over an irreducible variety $U$ and let $u_{0},u_{1}\in U$
be two points in $U$ such that
$\gamma_{0}:=\Gamma_{U}|_{u_{0}}=\gamma_{1}:=\Gamma_{U}|_{u_{1}}=\gamma$
as cycles. Then there is a family of one-dimensional cycles $(V,\Gamma_{V})$
over a smooth quasi-projective variety $V$ with a morphism $f:V\to U$ such
that $f^{*}\Gamma_{U}=\Gamma_{V}$, and a lifting $v_{0},v_{1}$ of
$u_{0},u_{1}$, such that
1. (1)
The morphism $f:V\to U$ is projective and surjective.
2. (2)
We may take the family over $V$ to be nodal as in Definition 2.3. We still use
$\Gamma_{V}$ to denote the family of nodal curves in the following.
3. (3)
There is a nodal deformation equivalence $T\leftarrow\Gamma_{T}\to X$ over a
two pointed connected nodal curve $(T,t_{0},t_{1})$ between
$\Gamma_{V}|_{v_{0}}$ and $\Gamma_{V}|_{v_{1}}$.
4. (4)
For each $k$-point $t$ of $T$, the cycle over $t$ is $\gamma$.
5. (5)
For a general point $u$ in $U$, and for any pair of points $v,v^{\prime}$ in
the inverse image of $u$ in $V$, there is a nodal deformation equivalence
$(C_{v,v^{\prime}},c,c^{\prime})\leftarrow\Gamma_{C}\to
X,\Gamma_{C}|_{c}\cong\Gamma_{V}|_{v},\Gamma_{C}|_{c^{\prime}}\cong\Gamma_{V}|_{v^{\prime}}$
over a connected two pointed nodal curve $(C_{v,v^{\prime}},c,c^{\prime})$.
6. (6)
For any point $c\in C_{v,v^{\prime}}$, the cycle of $\Gamma_{C}$ at $c$ is the
same as that of $\Gamma_{U}$ at the point $u$.
###### Theorem 2.6.
Keep the same assumptions as in Theorem 2.5. Assume furthermore that $X$ is
separably rationally connected in codimension $1$. Then $V,T,C_{v,v^{\prime}}$
admit a morphism to $(W,\Gamma_{W})$, where $W$ is a normal projective variety
and $\Gamma_{W}$ is a family of cycles over $W$. In characteristic $0$, we may
choose $W$ to be irreducible, smooth and projective. In general, we may take
$W$ to be irreducible, normal, projective and smooth near the nodal points of
$T,C$, and $v_{0},v_{1}$.
###### Remark 2.7.
This theorem is special to one cycles on varieties that are SRC in codimension
$1$.
Indeed, if the statements were true for a variety $X$ and families of
$r$-dimensional cycles, then the same argument as in Sections 3 and 4 would
prove that the strong coniveau filtration $\tilde{N}^{r}H^{2r+1}(X)$ coincides
with the coniveau filtration $N^{r}H^{2r+1}(X)$. But the examples of Benoist-
Ottem [BO21] and Scavia-Suzuki [SS22] (cited in Sections 1.3 and 1.4 as Theorems
1.20, 1.21, 1.28, 1.29) show that this is not true in general.
###### Remark 2.8.
We remark that the positive and negative parts of the families of cycles may
vary along $T$. The theorem only states that the difference remains constant.
###### Remark 2.9.
Even if we start with a family of effective cycles, for the statements to be
true, we have to use non-effective cycles.
Moreover, the statement is not a simple corollary of the existence of the
universal family over the Chow variety (which is true only in characteristic
$0$). This is because we require that the family is parameterized by a normal
variety, while the Chow variety is only semi-normal in [Kol96] by definition
or satisfies no such normality condition at all in some other references such
as [Fri91].
It is possible that the morphism from the normalization of the Chow variety to the Chow variety maps two points to the same point. If this happens, take $U$ to be the normalization of the Chow variety and $u,u^{\prime}$ to be two such points mapping to the same point in the Chow variety; then the existence of $V,T,W$ in this case cannot be deduced from the existence of the universal family over the Chow variety.
###### Remark 2.10.
Finally we remark that $U$ being irreducible is not essential in the proof.
But it simplifies the argument. If $U$ is reducible and connected, one can use a similar argument as in [KT23, Section 8] to find a connected algebraic set $V$. But in this case, we cannot choose $V$ to admit a morphism to $U$. The best one can have is that for each irreducible component of $U$, there is an irreducible component of $V$ with a projective, surjective morphism to this component.
###### Proof of Theorem 2.5.
First, using Nagata’s compactification and Chow lemma, we can make a base
change that is a birational projective morphism and replace $U$ with a quasi-
projective variety. So in the following, we assume $U$ is quasi-projective.
Up to a purely inseparable base change, we may assume that the generic fiber of the family comes from nodal curves.
We decompose $\Gamma_{U}=\Gamma^{+}_{U}-\Gamma^{-}_{U}$ into its positive and
negative parts. We write
$\gamma_{0}^{+}=\Gamma^{+}_{U}|_{u_{0}},\gamma_{0}^{-}=\Gamma^{-}_{U}|_{u_{0}},\gamma_{1}^{+}=\Gamma^{+}_{U}|_{u_{1}},\gamma_{1}^{-}=\Gamma^{-}_{U}|_{u_{1}}.$
By assumption,
$\gamma_{0}^{+}-\gamma_{0}^{-}=\gamma_{1}^{+}-\gamma_{1}^{-}.$
We take a general complete intersection $V^{\prime}$ (of the same dimension as
$U$) containing $v_{0}=(u_{0},u_{1})$ and $v_{1}=(u_{1},u_{0})$ in the product
$U\times U$. There are two families of nodal curves
$\Gamma_{p}=\Gamma_{p}^{+}-\Gamma_{p}^{-},\Gamma_{q}=\Gamma_{q}^{+}-\Gamma_{q}^{-}$
over $V^{\prime}$ induced by the two projections $p,q:V^{\prime}\to U$. We
have the family of cycles over $V^{\prime}$
${\Gamma}_{V^{\prime}}=(\Gamma_{p}^{+}+\Gamma_{q}^{-})-(\Gamma_{p}^{-}+\Gamma_{q}^{-}).$
Then the positive part over $v_{0}$ is $\gamma_{0}^{+}+\gamma_{1}^{-}$, and
the positive part over $v_{1}$ is $\gamma_{1}^{+}+\gamma_{0}^{-}$. Thus they
are the same as cycles. Moreover, the restriction of the negative parts of
$\Gamma_{V^{\prime}}$ to $v_{0},v_{1}$ are the same.
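Explicitly, the restriction computation just described can be displayed as follows, using the assumed identity between the fibers over $u_{0},u_{1}$:

```latex
% Restrictions of the positive part of \Gamma_{V'}:
(\Gamma_{p}^{+}+\Gamma_{q}^{-})\big|_{v_{0}}=\gamma_{0}^{+}+\gamma_{1}^{-},
\qquad
(\Gamma_{p}^{+}+\Gamma_{q}^{-})\big|_{v_{1}}=\gamma_{1}^{+}+\gamma_{0}^{-};
% these agree since, by assumption,
\gamma_{0}^{+}-\gamma_{0}^{-}=\gamma_{1}^{+}-\gamma_{1}^{-}
\iff
\gamma_{0}^{+}+\gamma_{1}^{-}=\gamma_{1}^{+}+\gamma_{0}^{-}.
% The negative part \Gamma_{p}^{-}+\Gamma_{q}^{-} restricts to
% \gamma_{0}^{-}+\gamma_{1}^{-} over both v_{0} and v_{1}.
```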
So now we have two families of cycles, and the restriction of each family to
$v_{0},v_{1}$ gives the same cycle. We first prove the statement for each
family. Then we take base changes for both families such that they are over the same base and subtract them. Here we use the fact that the fibers of the family over the singular points of $T,C$ and over $v_{0},v_{1},v,v^{\prime}$ are all nodal curves, and thus the base change will change nothing in their neighborhood.
From now on we assume there is a nodal family of effective cycles.
The existence of a generically finite base change $V^{\prime\prime}\to U$, the
lifting $v_{0},v_{1}$ and the curve $T$ follows from [KT23, Theorem 58].
Unwrapping the content of this theorem, one gets the following:
1. (1)
There is a nodal curve $Z$ and a family of $r$-tuples of complete intersection curves $r|H^{\text{ci}}|\to V^{\prime\prime}$.
2. (2)
One can glue them together to get a family of nodal curves
$C_{V^{\prime\prime}}\to V^{\prime\prime}$.
3. (3)
There is a family of nodal curves over a two pointed curve $X\leftarrow C_{T}\to(T,t_{0},t_{1})$, such that the restriction of the family $C_{T}$ to $t_{0},t_{1}$ coincides with the restriction of $C_{V^{\prime\prime}}$ to $v_{0},v_{1}$.
4. (4)
For each $t\in T$, the cycle over $t$ is $Z+\sum_{i=1}^{r}L_{i}(t)$, where $\cup L_{i}\to T$ is a family of $r$-tuples of complete intersection curves.
5. (5)
The restriction of $\cup L_{i}$ to $t_{0},t_{1}$ coincides with the restriction of $r|H^{\text{ci}}|$ to $v_{0},v_{1}$.
6. (6)
The families of curves induce a morphism $V^{\prime\prime}\cup T\to\text{Hilb}_{1}(X\times\mathbb{P}^{3})$.
This is almost what we want, except that the cycles over $V^{\prime\prime}$ and $T$ are changed by a constant cycle $Z$ and a family of $r$-tuples of complete intersection curves in $r|H^{\text{ci}}|$. So we subtract the corresponding family.
To get a finite base change $V\to U$, one can take the graph closure of $V^{\prime\prime}$ with respect to the morphism to $\text{Hilb}_{1}(X\times\mathbb{P}^{3})$ and then apply semi-stable reduction.
We may assume that $V$ is smooth using de Jong’s alteration (or resolution of
singularities in characteristic zero).
We note that the base change $V\to U$ consists of two steps: first a purely inseparable base change $V^{1}\to U$ such that a general fiber becomes nodal, then a further base change $V\to V^{1}\to U$ such that all fibers become semi-stable. The second step does not change general fibers. Therefore, for a general point $u\in U$, denote by $v^{1}\in V^{1}$ the inverse image of $u$, and by $v,v^{\prime}\in V$ two of its inverse images. The fiber of the family of stable maps over
$v,v^{\prime}$ consists of complete intersection curves and the nodal curve
over $v^{1}$. They only differ by the complete intersection curves. So we can
construct a deformation over a curve $C$ from the fiber over $v$ to the fiber
over $v^{\prime}$ by deforming the complete intersection curves. ∎
###### Proof of Theorem 2.6.
This is essentially [KT23, Corollary 59]. As in the proof of Theorem 2.5, we reduce the theorem to the case of an effective family. The point is, we can
attach families of curves constructed in [KT23, Theorem 43] to the family
(after a base change), so that the fiber (of the new family of curves) over
the nodes of $T,C$ and $v_{0},v_{1}$ has unobstructed deformation. Thus the
nodes in $T,C$ and $v_{0},v_{1}$ map to smooth points in the Hilbert scheme of
$X\times\mathbb{P}^{3}$. We take $W$ to be the normalization of the unique
geometrically irreducible component containing the image of $V,T,C$. In
characteristic $0$, we may even take a resolution of singularities of $W$ that
is isomorphic over the smooth locus. ∎
## 3\. Lawson homology
Let $X$ be a complex projective variety and we fix a very ample line bundle
$\mathcal{O}(1)$.
All degrees are taken with respect to this line bundle. Let
$\text{Chow}_{r,d}(X)$ be the Chow variety parameterizing degree $d$,
$r$-dimensional cycles of $X$ and
$\text{Chow}_{r}(X)=\coprod_{d\geq 0}\text{Chow}_{r,d}(X),$
where $\text{Chow}_{r,0}(X)$ is defined to be a single point corresponding to
the zero cycle. We give the set $\text{Chow}_{r}(X)(\mathbb{C})$ the structure
of a topological monoid, where the topological structure comes from the
analytic topology on $\text{Chow}_{r,d}(X)(\mathbb{C})$ and the monoid
structure is the sum of cycles. Define $Z_{r}(X)$ to be the group completion
of $\text{Chow}_{r}(X)(\mathbb{C})$. It has a topological group structure. The
topology can be defined in several equivalent ways. These are studied by Lima-
Filho [LF94].
###### Definition 3.1.
We first define the category $I^{\text{eq}}$. The objects are pairs
$(S,\Gamma)$ consisting of a normal variety $S$ and a family of equidimensional $r$-dimensional cycles $\Gamma$, and whose morphisms between
$(S,\Gamma)$ and $(S^{\prime},\Gamma^{\prime})$ are all the morphisms $f:S\to
S^{\prime}$ such that $\Gamma=f^{*}\Gamma^{\prime}$.
Define the topological space $Z_{r}(X)^{\text{eq}}$ as the colimit of all the
topological spaces $S(\mathbb{C})$ over the category $I^{\text{eq}}$.
More precisely, each $(S,\Gamma)$ in $I^{\text{eq}}$ gives a map of sets
$\phi_{S}:S(\mathbb{C})\to Z_{r}(X)$. The topology of $Z_{r}(X)^{\text{eq}}$
is defined in such a way that a subset $T\subset Z_{r}(X)$ is closed if and
only if $\phi_{S}^{-1}(T)$ is closed for all $(S,\Gamma)$.
###### Lemma 3.2.
In the definition of $Z_{r}(X)^{\text{eq}}$, we may take a subset consisting of families of equidimensional cycles over normal projective varieties (or smooth projective varieties).
###### Proof.
Given any family of equidimensional cycles $\Gamma\to S$, we may find a normal
projective variety (resp. smooth projective variety) $T$, a family
$\Gamma_{T}\to T$, and an open subset $T^{0}$ of $T$ such that there is a
surjective proper map $p:T^{0}\to S$ and $\Gamma_{T}|_{T^{0}}$ is
$\Gamma\times_{S}T^{0}$.
Note that we have a factorization $T^{0}(\mathbb{C})\to S(\mathbb{C})\to Z_{r}(X)$. A set in $S(\mathbb{C})$ is closed if and only if its inverse image under $p$ in $T^{0}(\mathbb{C})$ is closed. That is, the topology of $S(\mathbb{C})$ is the quotient topology coming from $T^{0}(\mathbb{C})\to S(\mathbb{C})$.
Thus the topology on $Z_{r}(X)^{\text{eq}}$ is determined by families over
normal varieties (resp. smooth varieties) such that the family has an
extension over a normal (resp. smooth) projective compactification.
Therefore, when defining $Z_{r}(X)^{\text{eq}}$ as a colimit, we may take only
normal (resp. smooth) projective varieties. ∎
###### Definition 3.3.
Define the topological space $Z_{r}(X)^{\text{Chow}}$ as the quotient of
$\text{Chow}_{r}(X)(\mathbb{C})\times\text{Chow}_{r}(X)(\mathbb{C})$
by $\text{Chow}_{r}(X)(\mathbb{C})$, where the action is
$(a,b)\mapsto(a+c,b+c)$ for $c\in\text{Chow}_{r}(X)(\mathbb{C})$.
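Informally (restating the definition, not an additional claim): a pair $(a,b)$ of effective cycles stands for the difference cycle $a-b$, and the translation action identifies exactly those pairs representing the same difference:

```latex
(a,b)\;\longmapsto\; a-b\in Z_{r}(X),
\qquad
(a+c,\;b+c)\;\longmapsto\;(a+c)-(b+c)=a-b .
```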
###### Theorem 3.4 ([LF94], Theorem 3.1, Theorem 5.2, Corollary 5.4).
The identity map induces homeomorphisms
$Z_{r}(X)^{\text{eq}}\cong Z_{r}(X)^{\text{Chow}}.$
Here is the definition of Lawson homology, first studied in [Law89].
###### Definition 3.5.
Let $X$ be a complex projective variety. Define the Lawson homology
$L_{r}H_{n+2r}(X)$ as the homotopy group $\pi_{n}(Z_{r}(X))$.
###### Example 3.6 (Dold-Thom isomorphism).
Consider $Z_{0}(X)$, the group of zero cycles on $X$. The classical Dold-Thom
theorem implies that there is an isomorphism
$L_{0}H_{n}(X)\cong H_{n}(X,\mathbb{Z}).$
###### Example 3.7 (Hurewicz map).
The Hurewicz map is induced by the inclusion $X\to Z_{0}(X)$:
$\pi_{k}(X)\to\pi_{k}(Z_{0}(X))\cong H_{k}(X,\mathbb{Z}).$
###### Theorem 3.8.
Let $X$ be a complex smooth projective variety. Then for any loop $L$ in
$Z_{1}(X)$, there is a projective algebraic set $Y$ with a family of nodal
curves $\Gamma\to Y\times X$ over $Y$ such that the image of the map
$\Phi:L_{0}H_{1}(Y)=\pi_{1}(Z_{0}(Y))\to L_{1}H_{3}(X)=\pi_{1}(Z_{1}(X))$
induced by the family $\Gamma$ contains the class $[L]$ in $L_{1}H_{3}(X)$. If furthermore either $X$ is rationally connected or $X$ is a rationally connected fibration over a curve, we may take $Y$ to be smooth.
We first introduce some notation. Given a projective algebraic set $S$
parameterizing a family of one dimensional cycles of $X$, there is an induced
continuous map between topological groups:
$Z_{0}(S)\to Z_{1}(X).$
We denote by $I(S)$ the image of this map, i.e. the closed subgroup of $Z_{1}(X)$ generated by the cycles over $S$, and by $K(S)$ the kernel of this map.
The first observation in the proof of Theorem 3.8 is the following.
###### Lemma 3.9.
Let $X$ be a complex smooth projective variety. For any class $[L]$ in
$L_{1}H_{3}(X)=\pi_{1}(Z_{1}(X))$, there is a normal projective variety $U$
and a family of equidimensional one cycles $\gamma_{U}$ over $U$ such that
$[L]$ is represented by a continuous map
$I=[0,1]\to U\to Z_{1}(X).$
Note that $I\to U$ is not a loop in general.
###### Proof.
Denote by $Z_{1}(X)^{0}$ the neutral component of the topological group
$Z_{1}(X)$, i.e., the connected component containing the identity. We may
assume $L$ lies in $Z_{1}(X)^{0}$. Cycles in $Z_{1}(X)^{0}$ are precisely the
cycles algebraically equivalent to $0$. By Proposition 2.4, the topological group $Z_{1}(X)^{0}$ is a filtered colimit of the closed subgroups generated by one-dimensional cycles parameterized by normal projective varieties.
Homotopy groups commute with filtered colimits. Thus there is an irreducible normal projective variety $S$ with a family of one dimensional cycles over $S$ such that the image of the induced map
$\pi_{1}(I(S))\to\pi_{1}(Z_{1}(X))\cong L_{1}H_{3}(X)$
contains the class $[L]$ in $\pi_{1}(Z_{1}(X))$.
The fibration $K(S)\to Z_{0}(S)\to I(S)$ gives a long exact sequence of
homotopy groups:
$\ldots\to\pi_{1}(Z_{0}(S))\to\pi_{1}(I(S))\to\pi_{0}(K(S))\to\ldots.$
A loop in $I(S)$ lifts to a continuous map from the unit interval $I=[0,1]$ to
$Z_{0}(S)$, such that $0,1$ map to two points in $Z_{0}(S)$ that parameterize
the same cycle in $X$.
We may assume that the family over $I$ is the difference of two families of effective $0$-cycles in $S$, of degrees $d^{+}$ and $d^{-}$ respectively. That is, it corresponds to the difference of two continuous maps $f^{+}:I\to S^{(d^{+})},f^{-}:I\to S^{(d^{-})}$, which is the same as a continuous map $f:I\to S^{(d^{+})}\times S^{(d^{-})}$ with $0$ mapping to a point $x=(x^{+},x^{-})$ and $1$ mapping to a point $y=(y^{+},y^{-})$.
A family of one cycles over $S$ induces a family of cycles over $S^{(d^{+})}$ and over $S^{(d^{-})}$. Let us denote them by $\Gamma_{d^{+}},\Gamma_{d^{-}}$.
The loop is the composition $I\to S^{(d^{+})}\times S^{(d^{-})}\to Z_{0}(S)\to Z_{1}(X)$, where the middle map is taking the difference.
Let us use a different family of cycles $\pi_{+}^{*}\Gamma_{d^{+}}-\pi_{-}^{*}\Gamma_{d^{-}}$ on the product $S^{(d^{+})}\times S^{(d^{-})}$, where $\pi_{\pm}$ is the projection to $S^{(d^{\pm})}$. This family of cycles induces a continuous map $S^{(d^{+})}\times S^{(d^{-})}\to Z_{1}(X)$ such that the composition $I\to S^{(d^{+})}\times S^{(d^{-})}\to Z_{1}(X)$ is the loop $L$.
We take $U$ to be $S^{(d^{+})}\times S^{(d^{-})}$ and $\gamma_{U}$ to be $\pi_{+}^{*}\Gamma_{d^{+}}-\pi_{-}^{*}\Gamma_{d^{-}}$. ∎
###### Proof of Theorem 3.8.
By Lemma 3.9, there is a normal projective variety $U$ and a family of equidimensional one cycles $\gamma_{U}$ over $U$ such that $[L]$ is represented by a continuous map
$f:I=[0,1]\to U\to Z_{1}(X).$
Denote by $x,y\in U$ the images of $0,1$ under $f$. The cycles over $x,y$ are the same by assumption. Now we are in the set-up of Theorem 2.5. Thus there is a smooth projective variety $V$ with a generically finite surjective morphism $V\to U$, a lifting $x_{V},y_{V}$ of $x,y$ to $V$, and families of constant cycles parameterized by curves $T,C$ connecting $x_{V},y_{V}$ and, respectively, inverse images of a general point in $U$. We take $Y$ to be $V\cup T\cup C$ in this case.
The morphism $V\to U$ induces a continuous map between topological groups $Z_{0}(V)\to Z_{0}(U)$. Denote by $K$ its kernel, a topological group. As a group, $K$ is generated by cycles of the form $a-b$, where $a,b$ are points in a fiber of $V\to U$. Note that $I(V)=I(U)$. Thus we have a fibration sequence of topological groups:
$0\to K\to K(V)\to K(U)\to 0.$
We have a commutative diagram:
$\begin{CD}
\pi_{1}(Z_{0}(Y))@>{}>{}>\pi_{1}(I(Y))@>{}>{}>\pi_{0}(K(Y))\\
@A{}A{}A@A{}A{}A@A{}A{}A\\
\pi_{1}(Z_{0}(V))@>{}>{}>\pi_{1}(I(V))@>{}>{}>\pi_{0}(K(V))@<{}<{}<\pi_{0}(K)\\
@V{}V{}V@V{}V{=}V@V{}V{}V\\
\pi_{1}(Z_{0}(U))@>{}>{}>\pi_{1}(I(U))@>{}>{}>\pi_{0}(K(U))
\end{CD}$
The obstruction to lifting the class $[L]$ in $\pi_{1}(I(V))$ lies in $\pi_{0}(K(V))$ and maps to $[x-y]$ in $\pi_{0}(K(U))$. The class $[x_{V}-y_{V}]$ differs from the obstruction class by an element in $\pi_{0}(K)$.
We take the Stein factorization $V\to V^{\prime}\to U$, where $V\to V^{\prime}$ has connected fibers (hence is birational) and $V^{\prime}\to U$ is finite. Therefore $\pi_{0}(K)$ is finitely generated by classes of the form $[a-b]$, where $a,b$ are points in the fiber over a general point in $U$.
The class $[L]$ maps to $\pi_{1}(I(Y))$, with obstruction class the push-
forward of $[x_{V}-y_{V}]$ modulo classes in $\pi_{0}(K)$.
By the existence of the families of constant cycles over the curves $T,C$ in
Theorem 2.6, we have
1. (1)
The composition
$\pi_{0}(K)\to\pi_{0}(K(V))\to\pi_{0}(K(Y))$
is the zero map.
2. (2)
The push-forward of the class $[x_{V}-y_{V}]$ vanishes in $\pi_{0}(K(Y))$.
Thus the class of the loop $L$ in $\pi_{1}(I(Y))$ is contained in
$\pi_{1}(Z_{0}(Y))$.
Finally, if $X$ is rationally connected in codimension $1$, by Theorem 2.6,
all these families over $V,T,C$ come from pulling back from a family of cycles
over a smooth projective variety. In this case, we take $Y$ to be this smooth
projective variety. ∎
Now we introduce another ingredient.
###### Lemma 3.10.
[FM94, Page 709, 1.2.1] There is a continuous map, the _s-map_ :
$Z_{r}(X)\wedge\mathbb{P}^{1}\to Z_{r-1}(X)$ inducing the s-map on Lawson
homologies $s:L_{r}H_{k}(X)\to L_{r-1}H_{k}(X)$.
###### Remark 3.11.
The construction of the s-map depends on a deep result: Lawson’s algebraic
suspension theorem. A geometric way of describing the s-map is the following.
Given a cycle $\Gamma$, take a general pencil of divisors
$D_{t}(t\in\mathbb{P}^{1})$ that intersect $\Gamma$ properly, and the s-map
sends $([\Gamma],t)$ to the cycle $\Gamma\cdot D_{t}-\Gamma\cdot D_{0}$.
###### Definition 3.12.
Let $Y$ be a semi-normal variety. Let $Z\subset Y\times X$ be a family of $r$-dimensional cycles over $Y$ corresponding to a morphism $f:Y\to Z_{r}(X)$.
We define the _correspondence homomorphism_
$\Phi_{f}:H_{k}(Y,\mathbb{Z})\to H_{k+2r}(X,\mathbb{Z})$
as the composition
$H_{k}(Y,\mathbb{Z})\cong\pi_{k}(Z_{0}(Y))\to\pi_{k}(Z_{r}(X))\xrightarrow{s^{\circ r}}\pi_{k+2r}(Z_{0}(X))\cong H_{k+2r}(X,\mathbb{Z}),$
where the map $\pi_{k}(Z_{r}(X))\to\pi_{k+2r}(Z_{0}(X))$ is induced by the $r$-fold iteration of the $s$-map.
###### Theorem 3.13 ([FM94] Theorem 3.4).
Let $Y$ be a smooth projective variety and $\Gamma\subset Y\times X$ be a family of $r$-dimensional cycles over $Y$ corresponding to a morphism $f:Y\to Z_{r}(X)$. We have
$\Phi_{f}=\Gamma_{*}:H_{k}(Y,\mathbb{Z})\to H_{k+2r}(X,\mathbb{Z}),$
where $\Gamma_{*}$ is the map defined using $\Gamma$ as a correspondence.
With this result, we can prove the main results over the complex numbers.
###### Theorem 3.14.
Let $X$ be a smooth projective variety. There is a projective curve $C$ with a
nodal family of cycles $\Gamma\subset C\times X$ inducing a map $f:C\to
Z_{1}(X)$ such that
$\Phi_{f}:H_{1}(C,\mathbb{Z})\to H_{3}(X,\mathbb{Z})$
has the same image as the s-map $s:L_{1}H_{3}(X)\to H_{3}(X)$, which is the
same as the coniveau filtration $N^{1}H_{3}(X,\mathbb{Z})$.
Furthermore, if $X$ is rationally connected in codimension $1$, we may take
$C$ to be a smooth projective curve, and $\Phi_{f}=\Gamma_{*}$.
###### Proof.
Recall that there is an isomorphism $L_{0}H_{k}(S)\cong H_{k}(S)$ for any
projective algebraic set $S$ by the Dold-Thom theorem. We have a commutative
diagram:
$\begin{CD}
L_{0}H_{1}(Y)@>{f_{*}}>{}>L_{1}H_{3}(X)\\
@V{\cong}V{}V@V{s}V{}V\\
H_{1}(Y)@>{\Phi_{f}}>{}>H_{3}(X)
\end{CD}$
The image of the s-map is finitely generated. Thus we may find finitely many projective algebraic sets $Y_{i}$ and families of semistable curves $\Gamma_{i}\to Y_{i}\times X$ such that the generators of the image of the s-map are contained in the images of the correspondence homomorphisms $\Phi_{i*}:H_{1}(Y_{i})\to H_{3}(X)$. Then we take $Y$ to be the product $\Pi_{i}Y_{i}$ and $\Gamma=\sum_{i}\pi_{i}^{*}\Gamma_{i}$. Clearly the image of $\Phi_{f}$ contains the images of all the $\Phi_{i*}$.
By taking general hyperplane sections containing all the singularities and all the one dimensional irreducible components, we may find a projective curve $C\subset Y$ such that $\pi_{1}(C)\to\pi_{1}(Y)$ is surjective. Then we simply restrict the family of cycles to $C$.
If $X$ is rationally connected in codimension $1$, we may take all the
$Y_{i}$’s to be smooth by Theorem 3.8, and thus $C$ to be a general complete
intersection of very ample divisors, which is a smooth projective curve.
Finally, we note that the image of the s-map
$s:L_{1}H_{3}(X)\to H_{3}(X)$
is $N^{1}H_{3}(X,\mathbb{Z})$ by [Wal07, Proposition 2.8]. ∎
The immediate consequence is the following.
###### Theorem 3.15.
Let $X$ be a smooth projective rationally connected variety or a rationally
connected fibration over a curve. Assume $X$ is a $3$-fold. Then all the
filtrations on $H^{3}(X,\mathbb{Z})$ introduced in Definition 1.18 equal the
whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z})=\tilde{N}^{1}H^{3}(X,\mathbb{Z})=N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z}).$
###### Proof.
By the decomposition of the diagonal argument,
$L_{1}H_{k}(X)\otimes\mathbb{Q}\cong H_{k}(X,\mathbb{Q})\cong
H^{k}(X,\mathbb{Q}).$
Thus we know that
$s:L_{1}H_{3}(X)\to H_{3}(X,\mathbb{Z})\cong H^{3}(X,\mathbb{Z})$
is surjective by [Voi08, Corollary 3.1]. ∎
## 4\. Chow sheaves
In this section we discuss the general case over an algebraically closed field
$k$.
Sometimes we may “invert $p$” by taking the tensor product of a sheaf with $\mathbb{Z}[\frac{1}{p}]$. In this scenario, we understand that $p$ is $1$ if the base field has characteristic $0$ and equals the characteristic otherwise.
We use $\mathbb{Z}_{\ell}$ coefficient étale cohomology for $\ell$ a prime number, non-zero in the field $k$.
One can also define Lawson homology with $\mathbb{Z}_{\ell}$ coefficients in this context [Fri91]. But the construction of the analogue of $Z_{r}(X)$ is more complicated, and Lawson homology in this context is much less studied. Many of the known results for complex varieties have not been explicitly stated to hold, even though one can imagine that they are still true. For example, the author does not know a reference for the construction of the s-map, nor could the author find the analogue of Friedlander-Mazur’s result (Theorem 3.13) explicitly stated. If one had developed all the necessary results in this general context, presumably the argument in the last section would work the same way.
So we decided to use another approach for the general case.
###### Definition 4.1.
A finite correspondence from $Y$ to $X$ is a family of relative cycles of
dimension $0$ with proper support over $Y$.
###### Definition 4.2.
Let $\text{Sch}_{k}$ be the category of finite type separated $k$-schemes. Let $\text{Cor}_{k}$ be the category whose objects are separated finite type $k$-schemes and whose morphisms are finite correspondences. Let $\text{SmCor}_{k}$ be the full subcategory whose objects are smooth $k$-varieties. In this subcategory a finite correspondence from $X$ to $Y$ is a linear combination of closed subvarieties of $X\times Y$ that are finite and surjective onto one of the irreducible components of $X$.
Recall that the h-topology is generated by covers that are universal
topological epimorphisms. Since we will only deal with noetherian schemes,
this is the same as the topology generated by Zariski covers and covers that
are proper surjective morphisms.
The qfh-topology is generated by covers in the h-topology that are also quasi-
finite.
Later we will only use the fact that a surjective proper morphism is an
h-cover.
###### Definition 4.3.
We define the presheaf $Z_{\text{fl}}(X,r)$ on the category $\text{Sch}_{k}$ whose value on a scheme $S$ is the group of formal linear combinations of integral subschemes $Z\subset S\times X$ that are flat and of equidimension $r$ over $S$.
We also define $Z_{\text{eq}}(X,r)$ on the category $\text{Sch}_{k}$ whose
value on a scheme $S$ is the group of families of cycles in $X$ of
equidimension $r$ over $S$.
We also define $Z(X,r)$ on the category $\text{Sch}_{k}$ whose value on a
scheme $S$ is the group of families of cycles of dimension $r$ in $X$ over
$S$.
We define $Z_{\text{fl}}^{\text{eff}}(X,r)$ (resp. $Z_{\text{eq}}^{\text{eff}}(X,r)$, $Z^{\text{eff}}(X,r)$) on the category $\text{Sch}_{k}$ whose value on a scheme $S$ is the monoid of effective families in $Z_{\text{fl}}(X,r)(S)$ (resp. $Z_{\text{eq}}(X,r)(S)$, $Z(X,r)(S)$).
Similarly, we define
$C_{\text{fl}}(X,r),C_{\text{eq}}(X,r),C(X,r),C_{\text{fl}}^{\text{eff}}(X,r),C_{\text{eq}}^{\text{eff}}(X,r),C^{\text{eff}}(X,r)$
as the counterpart of the above presheaves for families of cycles with proper
support over $S$.
The sheaf $C_{\text{fl}}(X,r)$ is denoted by $\mathbb{Z}\text{PropHilb}$ in [SV00].
Since later we will consider cycles on proper schemes, with the purpose of keeping the names consistent with the previous section, we will use the notations $Z_{\text{fl}}(X,r)$, etc.
Note that we do not require the subschemes to be equidimensional over $S$ in
the definition of $Z(X,r)$ and $C(X,r)$. It is possible to have higher
dimensional fibers ([SV00, Example 3.1.9]). However, $Z^{\text{eff}}(X,r)$ is
the same as $Z^{\text{eff}}_{\text{eq}}(X,r)$. Similarly for the properly
supported version.
We have the following.
###### Proposition 4.4.
[SV00, Proposition 4.2.7, 4.2.6, Lemma 4.2.13] The presheaf
$Z_{\text{eq}}(X,r)\otimes\mathbb{Z}[\frac{1}{p}]$ is a qfh-sheaf and the
presheaf $Z(X,r)\otimes\mathbb{Z}[\frac{1}{p}]$ is an h-sheaf. Moreover, the
sheafification in the h topology of $Z_{\text{eq}}(X,r)$ is the same as that
of $Z(X,r)$.
In the following, we write $Z_{r}^{h}(X)$ for the h-sheaf associated to $Z_{\text{fl}}(X,r)$ (which is the same as that of $Z_{\text{eq}}(X,r)$ or $Z(X,r)$).
In the following, given a presheaf $\mathcal{F}$, we use $C^{*}(\mathcal{F})$
to denote the Suslin complex of presheaves (with non-positive degrees). That
is, $C^{-i}(\mathcal{F})(S)=\mathcal{F}(S\times\Delta^{i})$, where
$\Delta^{i}=\text{Spec }k[t_{0},\ldots,t_{i}]/\langle\sum_{j}t_{j}=1\rangle$
is the algebraic $i$-simplex.
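For orientation (a standard unwinding, not spelled out above): $\Delta^{1}\cong\mathbb{A}^{1}$, and the differential of the Suslin complex is the alternating sum of the pullbacks along the vertex inclusions; in the lowest degree it reads:

```latex
d\colon C^{-1}(\mathcal{F})(S)=\mathcal{F}(S\times\Delta^{1})
\;\longrightarrow\;
C^{0}(\mathcal{F})(S)=\mathcal{F}(S),
\qquad
d(F)=\partial_{0}^{*}F-\partial_{1}^{*}F,
```

where $\partial_{0},\partial_{1}\colon S\to S\times\Delta^{1}$ are the sections given by the two vertices of $\Delta^{1}$.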
If $\mathcal{F}$ is a torsion, homotopy invariant étale sheaf with transfers, or a qfh or h sheaf on the category of schemes over $X$, with torsion order
prime to $p$, the Suslin rigidity theorem [SV96, Theorem 4.5] implies that
$\mathcal{F}$ is a locally constant sheaf. Since we work over an algebraically
closed field, locally constant is the same as constant. Thus for any torsion
sheaf $\mathcal{G}$ with torsion order prime to $p$, its Suslin complex
$C^{*}(\mathcal{G})$ is isomorphic to the pull-back of a complex of locally
constant sheaves. Moreover if $\mathcal{F}$ is a constant étale sheaf, we have
isomorphisms ([SV96, Theorem 10.2, 10.7])
$H^{i}_{\text{\'{e}t}}(X,\mathcal{F})\cong
H^{i}_{\text{qfh}}(X,\mathcal{F^{\text{qfh}}})\cong
H^{i}_{\text{h}}(X,\mathcal{F^{\text{h}}}).$
Since we assume that $k$ is algebraically closed, $\text{Spec }k$ has no
higher cohomology for any sheaf in any of these three topologies. Therefore
for any complex of constant sheaves, we also have the isomorphism of cohomologies for $X=\text{Spec }k$. In particular, the above discussion applies to the complex $C^{*}(Z_{\text{eq}}(X,r))\otimes\mathbb{Z}/N\mathbb{Z}$. We may identify the cohomology of this complex.
###### Theorem 4.5.
Let $X$ be a quasi-projective variety defined over an algebraically closed
field $k$. Let $N$ be an integer, non-zero in the field. We have the following
isomorphisms.
$\displaystyle H^{i}_{\text{h}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}^{\text{h}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{qfh}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}^{\text{qfh}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))$
$\displaystyle\cong$ $\displaystyle H^{i}_{\text{\'{e}t}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))$ $\displaystyle\cong$ $\displaystyle
CH_{r}(X,-i,\mathbb{Z}/N\mathbb{Z}).$
###### Proof.
The first three cohomology groups are isomorphic as discussed above. They are all equal to the cohomology of the complex
$C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec }k)$,
since this complex computes the cohomology group in the qfh and étale topology.
Finally, under the hypothesis that resolution of singularities holds, Suslin
[Sus00, Theorem 3.2] proves that for any quasi-projective variety $X$, we have
an isomorphism
$H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))\cong CH_{r}(X,-i,\mathbb{Z}/N\mathbb{Z}).$
Using Gabber’s refinement of de Jong’s alteration theorem, Kelly [Kel17,
Theorem 5.6.4] removed the resolution of singularities hypothesis. ∎
###### Corollary 4.6.
Let $X$ be a quasi-projective variety defined over an algebraically closed
field $k$. Let $N$ be an integer, non-zero in the field. We have the following
isomorphisms.
$\displaystyle H^{i}_{\text{\'{e}t}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{qfh}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))$
$\displaystyle\cong$ $\displaystyle H^{i}_{\text{h}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))$ $\displaystyle\cong$ $\displaystyle
CH_{0}(X,-i,\mathbb{Z}/N\mathbb{Z})\cong
H^{2d-i}_{\text{\'{e}t}}(X,\mathbb{Z}/N\mathbb{Z}),$
where $d$ is the dimension of $X$. In particular, all the groups are finite.
###### Proof.
The last equality follows from [SV96, Corollary 7.8] and [Gei00, Theorem 3.5],
[Kel17, Theorem 5.6.1]. Clearly the étale cohomology group is finite. ∎
We use $A_{1}(X)$ to denote the group of one cycles in $X$ modulo algebraic
equivalence. For any abelian group $A$ and any integer $m$, we use $A[m]$ to denote the subgroup of $m$-torsion elements in $A$, and $A/m$ to denote the quotient $A/mA$.
For any integer $N$ invertible in the field $k$, we have a homomorphism
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{1}(X,0,\mathbb{Z})[N]$
that comes from the long exact sequence of higher Chow groups with $\mathbb{Z}$ and $\mathbb{Z}/N$ coefficients. Composing with the surjective map
map
$CH_{1}(X,0,\mathbb{Z})[N]\to A_{1}(X)[N],$
we have a homomorphism
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to A_{1}(X)[N].$
Now we can state the counterpart of Theorem 3.8.
###### Theorem 4.7.
Let $X$ be a smooth projective variety defined over an algebraically closed
field $k$ of characteristic $p$. Fix an integer $N$ that is invertible in $k$.
For any class $[L]$ in the kernel of the map
$H_{h}^{-1}(\text{Spec
}k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})\cong
CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to A_{1}(X)[N],$
there is a projective algebraic set $Z$ and a family of one-dimensional nodal
curves over $Z$ such that the class $[L]$ is in the image of
$H_{h}^{-1}(\text{Spec }k,C^{*}(Z_{0}^{h}(Z))\otimes\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{-1}(\text{Spec }k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})$
induced by this family of cycles. If furthermore $X$ is separably rationally connected in codimension $1$, we may take $Z$ to be normal.
###### Remark 4.8.
This result is a priori weaker than Theorem 3.8 over the complex numbers. We have a short exact sequence
$0\to L_{1}H_{3}(X)/N\to L_{1}H_{3}(X,\mathbb{Z}/N)\to L_{1}H_{2}(X)[N]\to 0.$
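This sequence is an instance of the usual mod-$N$ coefficient sequence for the homotopy groups of the topological group $Z_{1}(X)$; under the identification $L_{1}H_{n+2}(X)=\pi_{n}(Z_{1}(X))$ it reads:

```latex
0 \to \pi_{1}(Z_{1}(X))/N
  \to \pi_{1}\!\left(Z_{1}(X);\mathbb{Z}/N\right)
  \to \pi_{0}(Z_{1}(X))[N] \to 0.
```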
Since $L_{1}H_{3}(X,\mathbb{Z}/N)\cong CH_{1}(X,1,\mathbb{Z}/N)$ and $L_{1}H_{2}(X)[N]\cong A_{1}(X)[N]$, Theorem 4.7 only says that classes in $L_{1}H_{3}(X)/N$ come from a smooth projective variety.
But if we know that $L_{1}H_{3}(X)$ is finitely generated, then we can find the lift. Conjecturally, this group is isomorphic to $H_{3}(X,\mathbb{Z})$, and thus finitely generated.
The proof of Theorem 4.7 is analogous to that of Theorem 3.8. We first prove
the analogue of Lemma 3.9.
###### Lemma 4.9.
Let $X$ be a smooth projective variety defined over an algebraically closed field $k$ of characteristic $p$. Assume that $X$ is either a separably rationally connected variety or a separably rationally connected fibration over a curve. Fix an integer $N$ that is invertible in $k$. For any class $[L]$ in
$H_{h}^{-1}(\text{Spec
}k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})\cong
CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z}),$
there is a normal projective variety $U$, a family of equidimensional one
cycles $\gamma_{U}$ over $U$ and a morphism $f:\Delta^{1}\to U$ such that
$[L]$ is represented by $f^{*}\gamma_{U}$ over $\Delta^{1}$.
###### Proof.
One could translate the proof of Theorem 3.8 into the context of h-sheaves,
but here is an easier argument using Hilbert schemes.
The class $[L]$ is represented by a family of cycles
$\sum_{i}m_{i}\Gamma_{i},m_{i}\in\mathbb{Z}/N$ over $\Delta^{1}$, where
$\Gamma_{i}\subset\Delta^{1}\times X$ is an integral subvariety. Since
$\Delta^{1}$ is one dimensional, the projection $\Gamma_{i}\to\Delta^{1}$ is
flat. Thus we get a morphism $f_{i}$ from $\Delta^{1}$ to the Hilbert scheme.
The universal subscheme over the Hilbert scheme gives a family of cycles over
the Hilbert scheme. Therefore, we may take $U$ to be the normalization of
products of irreducible components of the Hilbert scheme and $\gamma_{U}$ to
be the family of cycles (with appropriate multiplicities) coming from the
universal subschemes. ∎
We will need the following observation later in the proof.
###### Lemma 4.10.
Let $T$ be a connected projective algebraic set over an algebraically closed
field $k$, and let $x,y$ be two points in $T$. Let $\mathcal{F}$ be a sheaf of
abelian groups in the qfh or h topology, or an étale sheaf with transfers. Fix
an integer $N$ invertible in $k$. Write $F_{x}$ (resp. $F_{y}$) for the
restriction of $F\in\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}(T)$ to $x$ (resp.
$y$). Then $F_{x}=F_{y}$ in $H^{0}(\text{Spec
}k,C^{*}(\mathcal{F})\otimes\mathbb{Z}/N\mathbb{Z})$, where the cohomology is
taken in the étale topology, qfh topology or h topology.
###### Proof.
Every element of $\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}(T)$ induces a unique
morphism
$Z_{0}(T)\otimes\mathbb{Z}/N\to\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}.$
If $\mathcal{F}$ is a sheaf with transfers, this is the Yoneda Lemma. If
$\mathcal{F}$ is a qfh sheaf or h sheaf, this follows from the fact that the
qfh sheafification of $Z_{0}(T)[\frac{1}{p}]$ is the free sheaf
$\mathbb{Z}[\frac{1}{p}][T]$ generated by the presheaf of sets
$\text{Hom}(\cdot,T)$ ([SV96, Theorem 6.7]). Thus the class
$[F_{x}]$ (resp. $[F_{y}]$) is the image of $[x]$ (resp.
$[y]$) under the map
$H^{0}(\text{Spec }k,C^{*}(Z_{0}(T))\otimes\mathbb{Z}/N\mathbb{Z})\to
H^{0}(\text{Spec }k,C^{*}(\mathcal{F})\otimes\mathbb{Z}/N\mathbb{Z}).$
So it suffices to show that $[x]=[y]$ in $H^{0}(\text{Spec
}k,Z_{0}(T)\otimes\mathbb{Z}/N\mathbb{Z})$. But the latter cohomology group is
$CH_{0}(T)\otimes\mathbb{Z}/N\mathbb{Z}\cong\mathbb{Z}/N\mathbb{Z}$ by Lemma
4.6 and the isomorphism is given by the degree map. Any two points $x,y$ give
the same class in $H^{0}(\text{Spec
}k,Z_{0}(T)\otimes\mathbb{Z}/N\mathbb{Z})$. ∎
Now we begin the proof of Theorem 4.7.
###### Proof of Theorem 4.7.
Given a normal projective variety $S$ parameterizing a family of one
dimensional cycles of $X$, there is an induced morphism of h-sheaves:
$Z_{0}^{h}(S)\to Z_{1}^{h}(X).$
We denote by $I(S)$ (resp. $K(S)$) the image h-sheaf (resp. kernel h-sheaf) of
this map.
By Lemma 4.9, there is a normal projective variety $U$, a family of
equidimensional one cycles $\gamma_{U}$ over $U$ and a morphism
$f:\Delta^{1}\to U$ such that $[L]$ is represented by $f^{*}\gamma_{U}$ over
$\Delta^{1}$.
Denote by $\gamma_{0},\gamma_{1}$ the restriction of the family of cycles
$\gamma_{U}$ over $U$ to $0,1\in\Delta^{1}$. Then
$\gamma_{0}-\gamma_{1}=N(\gamma_{0,1})$ for some cycle $\gamma_{0,1}$. The
image of $[L]$ in $CH_{1}(X,0,\mathbb{Z})[N]$ and $A_{1}(X)[N]$ is the class
of $\gamma_{0,1}$.
If $\gamma_{0,1}$ is zero in $A_{1}(X)[N]$, that is, if $\gamma_{0,1}$ is
algebraically equivalent to $0$, then by Proposition 2.4, there is a smooth
projective curve $D$ with a family of cycles $\gamma_{D}$ and two points
$d,d^{\prime}$ such that $(\gamma_{D})_{d}$ is $0$ and $(\gamma_{D})_{d^{\prime}}$ is
$\gamma_{0,1}$.
Consider the product $U\times D$. We have a family of cycles
$\gamma=\pi_{U}^{*}\gamma_{U}+N\pi_{D}^{*}\gamma_{D}$.
There are three points in $S=U\times D$,
$x=(f(0),d),y=(f(1),d),z=(f(1),d^{\prime})$
such that
1. (1)
$\gamma_{x}=\gamma_{z}=\gamma_{0}$.
2. (2)
There is a curve $C$ containing $y,z$ such that for every point $c\in C$, the
cycle $\gamma_{c}$ equals $\gamma_{1}$ in
$Z_{1}(X)\otimes\mathbb{Z}/N(\text{Spec }k)$.
As in the proof of Theorem 3.8, we apply the second part of Theorem 2.6 to
find a normal projective variety $V$, a projective algebraic set $Y$ with a
surjective projective morphism $V\to S$ and an embedding $V\to Y$, and
liftings $x_{V},y_{V},z_{V}$ of the points $x,y,z$ such that
1. (1)
The two points $x_{V}$ and $z_{V}$ are connected by a chain of curves in $Y$
parameterizing constant cycles.
2. (2)
The two points $y_{V}$ and $z_{V}$ are connected by a curve $D_{V}$ in $V$
parameterizing cycles divisible by $N$.
Denote by $K$ the kernel of the morphism of h-sheaves
$Z_{0}(V)\to Z_{0}(S).$
Here $V\to S$ is proper and surjective. So the above morphism of h sheaves is
surjective. Then we have an isomorphism of h sub-sheaves of $Z_{1}(X)$
$I(V)\cong I(S).$
It follows that we have a short exact sequence of h sheaves:
$0\to K\to K(V)\to K(S)\to 0.$
We have commutative diagrams:
$\begin{CD}H^{-1}_{h}(C^{*}(Z_{0}(Y))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(Y)/N))@>{}>{}>H^{0}_{h}(C^{*}(K(Y))/N)\\\
@A{}A{}A@A{}A{}A@A{}A{}A\\\
H^{-1}_{h}(C^{*}(Z_{0}(V))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(V))/N)@>{}>{}>H^{0}_{h}(C^{*}(K(V))/N)\\\
@V{}V{}V@V{}V{=}V@V{}V{}V\\\
H^{-1}_{h}(C^{*}(Z_{0}(S))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(S))/N)@>{}>{}>H^{0}_{h}(C^{*}(K(S))/N)\\\
\end{CD}$
The obstruction to lifting the class $[L]$ from $H^{-1}_{h}(\text{Spec
}k,C^{*}(I(V))\otimes\mathbb{Z}/N\mathbb{Z})$ to $H^{-1}_{h}(\text{Spec
}k,C^{*}(Z_{0}(V))\otimes\mathbb{Z}/N\mathbb{Z})$ lies in $H^{0}_{h}(\text{Spec
}k,C^{*}(K(V))\otimes\mathbb{Z}/N\mathbb{Z})$ and maps to $[x-y]$ in
$H^{0}_{h}(\text{Spec }k,C^{*}(K(S))\otimes\mathbb{Z}/N\mathbb{Z})$. The class
$[x_{V}-y_{V}]$ differs from the obstruction class by an element of
$H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z}).$
Given the morphism $V\to S$, we have a long exact sequence
$\displaystyle\ldots\to$ $\displaystyle H^{-1}_{h}(\text{Spec
}k,C^{*}(Z_{0}(V))\otimes\mathbb{Z}/N\mathbb{Z})\to H^{-1}_{h}(\text{Spec
}k,C^{*}(Z_{0}(S))\otimes\mathbb{Z}/N\mathbb{Z})$ $\displaystyle\to$
$\displaystyle H^{0}_{h}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})\to H^{0}_{h}(\text{Spec
}k,C^{*}(Z_{0}(V))\otimes\mathbb{Z}/N\mathbb{Z})\to\ldots$
Therefore $H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is
finitely generated by Corollary 4.6. By Lemma 4.12, any class in
$H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is equivalent
to a class of the form $[a-b]$, where $a,b$ are points in the fiber over a
general point in $S$.
The class $[L]$ maps to $H^{-1}_{h}(\text{Spec
}k,C^{*}(I(Y))\otimes\mathbb{Z}/N\mathbb{Z})$, with obstruction class the
push-forward of $[x_{V}-y_{V}]$ modulo classes in $H^{0}_{h}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$.
By the existence of the families of constant cycles in the second part of
Theorem 2.6, and by Lemma 4.10, we have
1. (1)
The composition
$H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)\to H^{0}_{h}(\text{Spec
}k,C^{*}(K(V))/N)\to H^{0}_{h}(\text{Spec }k,C^{*}(K(Y))/N)$
is the zero map.
2. (2)
The push-forward of the class $[x_{V}-y_{V}]$ vanishes in
$H^{0}_{h}(\text{Spec }k,C^{*}(K(Y))\otimes\mathbb{Z}/N\mathbb{Z})$ (by
applying Lemma 4.10 to $[x_{V}-y_{V}]$ and $[x_{V}-x_{V}]=0$).
Thus the class $[L]$ in $H^{-1}_{h}(\text{Spec }k,C^{*}(I(Y))/N)$ comes from
$H^{-1}_{h}(\text{Spec }k,C^{*}(Z_{0}(Y))/N)$.
Finally, if $X$ is separably rationally connected in codimension $1$, we may
take $Y$ to be normal by Theorem 2.6. We use Gabber’s refinement of de Jong’s
alteration to find a smooth projective variety $Z$ and a projective alteration
$Z\to Y$ whose degree is relatively prime to $N$. Then
$CH_{0}(Z,1,\mathbb{Z}/N\mathbb{Z})\to CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})$
is surjective by Lemma 4.13. Pulling back the families of cycles over $Y$
gives a family of cycles over $Z$. Then the theorem follows from the following
commutative diagram
$\begin{CD}CH_{0}(Z,1,\mathbb{Z}/N)@>{}>{}>CH_{0}(Y,1,\mathbb{Z}/N)@>{\Gamma_{*}}>{}>CH_{1}(X,1,\mathbb{Z}/N)\\\
@V{\cong}V{}V@V{\cong}V{}V@V{}V{}V\\\
H_{1}(Z,\mathbb{Z}/N)@>{}>{}>H_{1}(Y,\mathbb{Z}/N)@>{\Gamma_{*}}>{}>H_{3}(X,\mathbb{Z}/N).\end{CD}$
∎
The lemmas used in the proof are the following.
###### Lemma 4.11.
Let $X\to Y$ be a flat and finite morphism defined over an algebraically
closed field $k$, where $Y$ is a normal variety (but $X$ is not necessarily
normal). Let $N$ be an integer invertible in $k$. Denote by $K$ the kernel
h-sheaf of $Z_{0}(X)\to Z_{0}(Y)$. Then for any chosen general point in $Y$,
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated
by classes of the form $[t_{1}-t_{2}]$, for $t_{1},t_{2}$ in the fiber over
this chosen general point.
###### Proof.
Let $x_{1},x_{2}$ be two points in the fiber over $y\in Y$. Clearly
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated
by classes of the form $[x_{1}-x_{2}]$ for all pairs of points with the same
image in $Y$. We will show that for any chosen general point $t\in Y$, the
class $[x_{1}-x_{2}]$ is equivalent to a class $[t_{1}-t_{2}]$ for some points
$t_{1},t_{2}$ in the fiber over $t$. Consider the correspondence
$X\times_{Y}X\subset X\times X$. Since $X\to Y$ is assumed to be flat,
$X\times_{Y}X\to X$ is flat. We take an irreducible component $D$ containing
$(x_{1},x_{2})$, which dominates (and thus surjects onto) $X$. There are two
points $x_{D},t_{D}$ in $D$ such that the following conditions are satisfied.
1. (1)
There is a surjective morphism $f:D\to Y$ that maps $x_{D}$ (resp. $t_{D}$) to
$y$ (resp. $t$).
2. (2)
There are two morphisms $f_{1},f_{2}:D\to X$ such that
$f_{1}(x_{D})=x_{1},f_{2}(x_{D})=x_{2}$.
3. (3)
The composition of $f_{1},f_{2}$ with the morphism $q:X\to Y$ gives the
morphism $f:D\to Y$.
Then by Lemma 4.10, the class $[x_{1}-x_{2}]$ is the same as
$[f_{1}(t_{D})-f_{2}(t_{D})]$. ∎
###### Lemma 4.12.
Let $p:X\to Y$ be a generically finite surjective morphism between normal
projective varieties over an algebraically closed field $k$. Let $N$ be an
integer invertible in $k$. Denote by $K$ the kernel h-sheaf of $Z_{0}(X)\to
Z_{0}(Y)$. Then the cohomology group $H_{h}^{0}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated by classes of the form
$[t_{1}-t_{2}]$, for $t_{1},t_{2}$ in the fiber over any chosen general point
in $Y$.
###### Proof.
There is a projective birational morphism $Y^{\prime}\xrightarrow{q}Y$
such that the strict transform $X^{\prime}$ of $X$ is flat over $Y^{\prime}$.
That is, we have a commutative diagram:
$\begin{CD}X^{\prime}@>{q^{\prime}}>{}>X\\\ @V{p^{\prime}}V{}V@V{p}V{}V\\\
Y^{\prime}@>{q}>{}>Y\\\ \end{CD}$
We denote by $K(p)$ the kernel h-sheaf of $Z_{0}(X)\to Z_{0}(Y)$, and
similarly for the other morphisms. There is a commutative diagram of short
exact sequences of h-sheaves
$\setcounter{MaxMatrixCols}{11}\begin{CD}0@>{}>{}>K(q^{\prime})@>{}>{}>Z_{0}(X^{\prime})@>{q_{*}^{\prime}}>{}>Z_{0}(X)@>{}>{}>0\\\
@V{}V{}V@V{}V{}V@V{p_{*}^{\prime}}V{}V@V{p_{*}}V{}V@V{}V{}V\\\
0@>{}>{}>K(q)@>{}>{}>Z_{0}(Y^{\prime})@>{q_{*}}>{}>Z_{0}(Y)@>{}>{}>0,\end{CD}$
which also gives commutative diagrams after tensoring with
$\mathbb{Z}/N\mathbb{Z}$. Then we have long exact sequences:
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{1}(Y,1,\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{0}(\text{Spec }k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})\ldots,$
$CH_{1}(X^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{1}(Y^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to H_{h}^{0}(\text{Spec
}k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})\ldots.$
The cohomology group $H_{h}^{0}(\text{Spec
}k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})$ (resp. $H_{h}^{0}(\text{Spec
}k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})$ ) is generated by
cycles of the form $y_{1}-y_{2}$ for $y_{1},y_{2}$ in the same fiber. So it
suffices to show that such cycles are zero.
We first show that
$CH_{0}(Y^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})$
is surjective. This is because $Y^{\prime}\to Y$ has connected fibers. So for
any two points in the same fiber, by Lemma 4.10, the class of the difference
is zero. Since
$CH_{0}(Y^{\prime},0,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,0,\mathbb{Z}/N\mathbb{Z})$
is an isomorphism, we know that $H_{h}^{0}(\text{Spec
}k,C^{*}(K(q))\otimes\mathbb{Z}/N\mathbb{Z})$ vanishes.
By the same argument, $H_{h}^{0}(\text{Spec
}k,C^{*}(K(q^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})$ vanishes. Then a simple
diagram chase shows that
$H_{h}^{0}(\text{Spec }k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{0}(\text{Spec }k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})$
is surjective.
Thus the statement follows from Lemma 4.11. ∎
###### Lemma 4.13.
Let $p:X\to Y$ be a generically finite morphism between normal projective
varieties over an algebraically closed field $k$. Let $N$ be an integer
invertible over $k$. Assume that $\deg p$ is relatively prime to $N$. Then we
have a surjection
$CH_{0}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z}).$
###### Proof.
By Lemma 4.12, and the long exact sequence
$\displaystyle CH_{0}(X,1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})\to H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)$
$\displaystyle\to$ $\displaystyle
CH_{0}(X,0,\mathbb{Z}/N)\xrightarrow{\cong}CH_{0}(Y,0,\mathbb{Z}/N),$
it suffices to show that for a general point $y\in Y$ and any two points
$x_{1},x_{2}$ in the fiber of $y$, the class $[x_{1}-x_{2}]$ is zero in
$H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)$.
By the Bertini theorem for étale fundamental groups, there is a general
complete intersection curve $H$ such that the inverse image $H^{\prime}$ in
$X$ is irreducible. For $H$ general, the morphism $H^{\prime}\to H$
is flat and finite of degree prime to $N$. Thus we have an exact sequence
$\displaystyle CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})\to H_{h}^{0}(\text{Spec
}k,C^{*}(K_{H})\otimes\mathbb{Z}/N\mathbb{Z})$ $\displaystyle\to
CH_{0}(H^{\prime},\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,\mathbb{Z}/N\mathbb{Z}),$
where $K_{H}$ is the kernel h-sheaf of $Z_{0}(H^{\prime})\to Z_{0}(H)$. The
map
$CH_{0}(H^{\prime},\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,\mathbb{Z}/N\mathbb{Z})$
is an isomorphism. On the other hand, since $p:H^{\prime}\to H$ is flat and
finite, we have pull-back and push-forward on all the higher Chow groups. The
composition of pull-back and push-forward
$CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p^{*}}CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p_{*}}CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})$
is multiplication by $\deg p$. Since the degree of the map is relatively prime
to $N$,
$CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p_{*}}CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})$
is surjective. Thus for any two points $t_{1},t_{2}$ over a general point
$t\in Y$, the class $[t_{1}-t_{2}]$ vanishes in $H_{h}^{0}(\text{Spec
}k,C^{*}(K_{H})\otimes\mathbb{Z}/N\mathbb{Z})$. So does its push-forward in
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$. ∎
Fix a prime number $\ell$ different from the characteristic of $k$. In the
following theorem, we omit all Tate twists for simplicity of notation.
###### Theorem 4.14.
Let $X$ be a smooth projective variety defined over an algebraically closed
field, which is separably rationally connected in codimension $1$. There is a
smooth projective curve $C$ with a family of $1$-dimensional cycles
$\Gamma\subset C\times X$ such that
$\Gamma_{*}:H_{1}^{\text{BM}}(C,\mathbb{Z}_{\ell})\to
H_{3}^{\text{BM}}(X,\mathbb{Z}_{\ell})$
surjects onto $N^{1}H_{3}(X,\mathbb{Z}_{\ell})$.
###### Proof.
In the following, we use Borel-Moore homology and, for simplicity of notation,
write it simply as $H_{1},H_{3}$. Let $NH_{3}(X,\mathbb{Z}/\ell^{n})$ denote
the coniveau filtration $N^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$ on homology, and
denote by $\tilde{N}H_{3}(X,\mathbb{Z}/\ell^{n})$ the strong coniveau
filtration $\tilde{N}^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$.
For a smooth projective variety $Y$, we have
$H_{1}(Y,\mathbb{Z}_{\ell})/\ell^{n}\cong H_{1}(Y,\mathbb{Z}/\ell^{n})$, since
$H_{0}(Y,\mathbb{Z}_{\ell})\cong\mathbb{Z}_{\ell}$ is torsion free. Therefore,
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to\tilde{N}H_{3}(X,\mathbb{Z}/\ell^{n})$
is surjective.
We have a commutative diagram
$\begin{CD}\oplus_{(S,\Gamma_{S})}CH_{0}(S,1,\mathbb{Z}/\ell^{n})@>{\oplus\Gamma_{S*}}>{}>CH_{1}(X,1,\mathbb{Z}/\ell^{n})@>{}>{}>A_{1}(X)[\ell^{n}]\to 0\\\
@V{}V{}V@V{}V{}V@V{}V{}V\\\
0\to H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n})@>{}>{}>NH_{3}(X,\mathbb{Z}/\ell^{n})@>{}>{}>H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}],\end{CD}$
where the direct sum is taken over families of equidimensional one cycles over
smooth projective varieties.
By Theorem 4.7, the upper row is exact. The lower row is also exact, since it
comes from
$0\to H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to H_{3}(X,\mathbb{Z}/\ell^{n})\to
H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}]\to 0.$
The vertical maps are cycle class maps.
The middle vertical map
$CH_{1}(X,1,\mathbb{Z}/\ell^{n})\to NH_{3}(X,\mathbb{Z}/\ell^{n})$
is surjective, since for any surface $\Sigma$, not necessarily smooth, we have
a surjection
$CH_{1}(\Sigma,1,\mathbb{Z}/\ell^{n})\to H_{3}(\Sigma,\mathbb{Z}/\ell^{n}).$
When this surface is smooth, it is a consequence of the Bloch-Kato conjecture.
The general case can be proven using the localization sequence for higher Chow
groups and Borel-Moore homology.
The left vertical arrow is the direct sum of the compositions
$CH_{0}(S,1,\mathbb{Z}/\ell^{n})\to H_{1}(S,\mathbb{Z}/\ell^{n})\cong
H_{1}(S,\mathbb{Z}_{\ell})/\ell^{n}\xrightarrow{\Gamma_{S*}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}.$
Since the cycle class map induces an isomorphism
$CH_{0}(S,1,\mathbb{Z}/\ell^{n})\cong H_{1}(S,\mathbb{Z}/\ell^{n})\cong
H_{1}(S,\mathbb{Z}_{\ell})/\ell^{n}$, the left vertical arrow has the same
cokernel as
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n}).$
By the snake lemma, this cokernel is isomorphic to the cokernel $C_{n}$ of
$\text{Ker}(CH_{1}(X,1,\mathbb{Z}/\ell^{n})\to
H_{3}(X,\mathbb{Z}/\ell^{n}))\to\text{Ker}(A_{1}(X)[\ell^{n}]\to
H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}]).$
The transition maps $C_{n+m}\to C_{n}$ are multiplication by $\ell^{m}$.
Therefore the inverse limit $\lim\limits_{\xleftarrow[n]{}}C_{n}$ is torsion
free.
We have a factorization
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n})\subset
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}.$
Taking inverse limit, we get
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})\to{N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})\subset H_{3}(X,\mathbb{Z}_{\ell}).$
Therefore the map
${N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$
is injective. Since the cokernel of
${N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$ is torsion free, so is the cokernel of
$\tilde{N}H_{3}\to NH_{3}$. On the other hand, we know that this cokernel is
torsion, so it has to be zero. That is, the strong coniveau filtration
coincides with the coniveau filtration.
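Here we use the elementary fact that an abelian group $C$ which is both torsion and torsion free must vanish: for any $c\in C$ there is an integer $m\geq 1$ with
$mc=0,$
and torsion freeness then forces $c=0$, so $C=0$.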
Therefore there is a smooth projective variety $Y$ and a family of cycles
$\Gamma_{Y}$ such that the induced map
$\Gamma_{Y*}:H_{1}(Y,\mathbb{Z}_{\ell})\to{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})$
is surjective. By taking hyperplane sections in $Y$, we may find a smooth
projective curve $C$ with a family of cycles $\Gamma$ such that
$\Gamma_{*}:H_{1}(C,\mathbb{Z}_{\ell})\to{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})$
is surjective.
Finally, for later use, we note that the proof also shows that for $X$
separably rationally connected in codimension $1$, the maps
(5) $\tilde{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})\to
N^{1}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$
are isomorphisms. The first isomorphism is already shown above. We have
already shown that the second map is injective. Its cokernel is torsion since
the cokernel of $NH_{3}\to H_{3}$ is torsion. By the first isomorphism and the
fact that the composition has torsion free cokernel, the cokernel of the
second map is also torsion free, and thus zero. ∎
###### Remark 4.15.
When $X$ is only smooth projective, one can prove that the filtration
$N_{1,\text{cyl},\text{st}}H_{3}(X,\mathbb{Z}_{\ell})$ is the same as
$N^{1}H_{3}(X,\mathbb{Z}_{\ell})$ by the same argument.
###### Theorem 4.16.
Let $X$ be a smooth projective $3$-fold over an algebraically closed field.
Assume that $X$ is separably rationally connected in codimension $1$. Then all
the filtrations on $H^{3}(X,\mathbb{Z}_{\ell})$ introduced in Definition 1.18
equal the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}^{1}H^{3}(X,\mathbb{Z}_{\ell})=N^{1}H^{3}(X,\mathbb{Z}_{\ell})=H^{3}(X,\mathbb{Z}_{\ell}).$
###### Corollary 4.17.
Let $X$ be a smooth projective variety of dimension $d$ defined over a finite
field $\mathbb{F}_{q}$, that is separably rationally connected in codimension
$1$. Assume one of the following:
1. (1)
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
2. (2)
The cycle class map
$cl:\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective.
Then every class in
$H^{1}(\mathbb{F}_{q},H^{3}(\bar{X},\mathbb{Z}_{\ell}(d-1)))$ is the class of
an algebraic cycle defined over $\mathbb{F}_{q}$. In particular, this holds if
$X$ has dimension $3$.
###### Proof.
We first show that the surjectivity of the cycle class map
$cl:\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
implies that
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
In fact, we have
$\displaystyle\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))$
$\displaystyle\to$
$\displaystyle\lim\limits_{\xleftarrow[n]{}}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))=H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1)).$
Therefore
$\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))\to\lim\limits_{\xleftarrow[n]{}}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))=H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective. On the other hand, since $N^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$ is
a subgroup of $H_{3}(X,\mathbb{Z}/\ell^{n})$, the map of inverse limits is
injective, hence an isomorphism. We have an exact sequence
$\displaystyle
0\to\lim\limits_{\xleftarrow[n]{}}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))/\ell^{n}\cap
N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))$
$\displaystyle\xrightarrow{\phi}$
$\displaystyle\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))\to\lim\limits_{\xleftarrow[n]{}}H^{2d-2}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))[\ell^{n}],$
where the first inverse limit is
$N^{1}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$ by (5), and
the last inverse limit is torsion free. Since the quotient
$H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))/N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is torsion for separably rationally connected varieties and for separably
rationally connected fibrations over a curve, we know that $\phi$ is an
isomorphism, and thus $N^{1}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell})$ is an isomorphism.
Therefore, by Theorem 4.14, there is a smooth projective curve $C$ defined
over $\bar{\mathbb{F}}_{q}$ with a family of one-dimensional cycles
$\Gamma\subset C\times\bar{X}$ such that
$\Gamma_{*}:H^{1}_{\text{\'{e}t}}(C,\mathbb{Z}_{\ell}(1))\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective. Then this corollary follows from [SS22, Proposition 7.6], the
statement of which is recalled in Theorem 1.31. ∎
## 5\. Integral Tate conjecture and local-global principle for zero cycles
Let $X$ be a smooth projective geometrically irreducible variety of dimension
$d$ defined over a finite field $\mathbb{F}$. We have the cycle class map:
(6) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1)).$
Recall that the integral Tate conjecture asks the following question.
###### Question 5.1.
For which smooth projective varieties $X$ defined over $\mathbb{F}$ is the
cycle class map (6) surjective?
We mention another closely related question.
###### Question 5.2.
Let $X$ be a smooth projective variety defined over a henselian local field
with finite residue field. Is the cycle class map
$CH_{0}(X)\hat{\otimes}\mathbb{Z}_{\ell}\to H^{2d}(X,\mathbb{Z}_{\ell}(d))$
injective? Here $\ell$ is invertible in the residue field.
###### Remark 5.3.
Question 5.2 has a positive answer if $X$ is a geometrically rational surface,
and has a regular model with SNC central fiber ([EW16, Theorem 3.1] in general
and [Sai91, Theorem A] for the case of $p$-adic fields). In this case, the
proof in [EW16] also shows that the closed fiber satisfies a version of
the integral Tate conjecture. For $X$ defined over a Laurent series field
$\mathbb{F}_{q}((t))$, a regular model with SNC central fiber always exists,
since we have resolution of singularities for $3$-folds.
If Question 5.2 has a positive answer for the generic fiber $X$, then
Conjectures 1.1 and 1.3 are equivalent for $X$.
###### Remark 5.4.
We also note that the results in [Tia20] suggest that Question 5.2 should have
a positive answer for separably rationally connected varieties, provided that
the characteristic $p$ analogues of the conjecture $\textbf{R}(n,3)$ about
Kato homology in loc. cit. are true, and that the minimal model program is
established in positive and mixed characteristic.
As discussed in Theorem 1.10 and the remark that follows it in the
introduction, various types of integral Tate conjectures would imply various
versions of Colliot-Thélène's Conjectures 1.1 and 1.2.
We can deduce Theorem 1.13 from Theorem 1.35 and Corollary 4.17.
###### Proof of Theorem 1.13.
Recall that $G=\text{Gal}(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})$ is the
absolute Galois group. By Theorem 1.35, we have an isomorphism
$A_{1}(\mathcal{X})\cong A_{1}(\overline{\mathcal{X}})^{G}.$
Under the assumptions (A), (B) of Theorem 1.13, we know that there is an
isomorphism of $\text{Gal}(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})$-modules:
$A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\cong
H^{2d}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d)).$
Note that $G$ is topologically generated by the Frobenius $F$, and
$A_{1}(\overline{\mathcal{X}})^{G}$ is the kernel of $F^{*}-\text{id}$. Since
$\mathbb{Z}_{\ell}$ is a flat $\mathbb{Z}$-module,
$A_{1}(\overline{\mathcal{X}})^{G}\otimes\mathbb{Z}_{\ell}$ is the kernel of
$(F^{*}-\text{id})\otimes\text{id}_{\mathbb{Z}_{\ell}}:A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}.$
That is,
$A_{1}(\overline{\mathcal{X}})^{G}\otimes\mathbb{Z}_{\ell}\cong(A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell})^{G}\cong
H^{2d}(\overline{\mathcal{X}},\mathbb{Z}_{\ell})^{G},$
where $\mathbb{Z}_{\ell}$ is equipped with the trivial action of $G$.
By part (2) of Theorem 1.10, this proves the first part of the theorem. The
second part of the theorem is just Corollary 4.17. ∎
###### Proof of Theorem 1.12.
First note that the surjectivity of the cycle class map is a birational
invariant. So using resolution of singularities for $3$-folds [CP08, CP09,
Abh98], we may assume that the singular fibers are SNC divisors. The result of
Bloch-Srinivas [BS83] shows that the Griffiths group of $1$-cycles on
$\overline{\mathcal{X}}$ is $p$-torsion. Thus hypothesis (B) in Theorem
1.13 is satisfied. As for hypothesis (A), we have a commutative diagram of
localization exact sequences:
$\begin{CD}
\oplus CH_{1}(\overline{\mathcal{X}}_{i})\otimes\mathbb{Z}_{\ell}@>{}>{}>CH_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}@>{}>{}>CH_{1}(\overline{\mathcal{X}}^{0})\otimes\mathbb{Z}_{\ell}@>{}>{}>0\\
@V{}V{}V@V{}V{}V@V{}V{}V\\
\oplus H^{4}_{\overline{\mathcal{X}}_{i}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(2))@>{}>{}>H^{4}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(2))@>{}>{}>H^{4}(\overline{\mathcal{X}}^{0},\mathbb{Z}_{\ell}(2))
\end{CD}$
Here $\mathcal{X}_{i}$ is a singular fiber of the fibration $\mathcal{X}\to B$
and $\mathcal{X}^{0}$ is the complement of all the singular fibers
$\mathcal{X}_{i}$. By Section 4.3 in [EW16], the first vertical map is
surjective. We may assume that $\overline{\mathcal{X}}^{0}$ is over an affine
curve (i.e. the direct sum on the left is non-trivial). A simple calculation
then shows that $H^{4}(\overline{\mathcal{X}}^{0})$ is one dimensional and
spanned by the class of a section. Thus the third cycle class map is also
surjective by [dJS03]. So the middle one is surjective.
Hypotheses (C) and (D) are also satisfied in this case. For simplicity, we
only explain how to prove hypothesis (D). On the one hand, the cokernel is
torsion for separably rationally connected fibrations by a decomposition of
the diagonal argument. On the other hand, the Bloch-Kato conjecture proved by
Voevodsky (in dimension $3$, we can also use the Merkurjev-Suslin theorem)
implies that it is torsion-free for all smooth projective $3$-folds. Hence the
cokernel has to vanish.
Thus Theorem 1.13 implies that $CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{4}(\mathcal{X},\mathbb{Z}_{\ell}(2))$ is surjective. ∎
###### Proof of Theorem 1.4.
This follows from combining Theorems 1.10 and 1.12 with Remark 5.3. ∎
## 6\. Examples
We conclude this article with some examples where one can check the conditions
in Theorem 1.13.
###### Proposition 6.1.
Let $\mathcal{X}\subset\mathbb{P}_{B}(E)$ be a family of complete
intersections of degree $d_{1},\ldots,d_{c}$ in $\mathbb{P}^{n}$ over a smooth
projective curve $B$ over $\mathbb{F}_{q}$. Assume that the generic fiber $X$
is smooth separably rationally connected of dimension $d\geq 5$ and that
$\sum d_{i}^{2}\leq n$. Also assume that $\mathcal{X}$ is smooth. Then the
cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{ét}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective and Conjectures 1.1 and 1.2 hold for the generic fiber $X$ over
$\mathbb{F}_{q}(B)$.
###### Proof.
By the dimension assumption and the affine vanishing for étale cohomology,
there is an isomorphism
$H^{2d}_{\text{ét}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\cong
H^{2d-2}_{\text{ét}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d-1))^{G},$
which is spanned by the class of a section and a line in a fiber. So it
suffices to show that over the algebraic closure $\overline{\mathbb{F}}_{q}$,
every multisection is rationally equivalent to a multiple of any fixed section
modulo lines in fibers, and that every line in a fiber is algebraically
equivalent to any line in any smooth fiber.
Both statements follow from well-known arguments. More precisely, the space of
chains of two lines passing through two points in a complete intersection of
degree $d_{1},\ldots,d_{c}$ is defined by equations of degree
$1,1,2,2,\ldots,d_{1}-1,d_{1}-1,d_{1},1,1,\ldots,d_{2}-1,d_{2}-1,d_{2},\ldots,1,1,\ldots,d_{c}-1,d_{c}-1,d_{c}$
in $\mathbb{P}^{n}$ (see, for example, Lemma 3.4 in [Pan18]); note that these
degrees sum to $\sum_{i}\big(2(1+\cdots+(d_{i}-1))+d_{i}\big)=\sum_{i}d_{i}^{2}$,
which is at most $n$ by assumption. Thus by the classical Tsen-Lang theorem,
for any family of complete intersections of degree $(d_{1},\ldots,d_{c})$ over
a smooth curve $T/\bar{\mathbb{F}}_{q}$, and for any two sections of this
family, there is a family of chains of two lines in the complete intersections
over $T$ containing the two sections. Any two sections in a
$\mathbb{P}^{1}$-bundle over a curve are rationally equivalent modulo general
fibers, so any two sections are rationally equivalent up to lines in general
fibers. This in turn implies that any two multi-sections of the same degree
are rationally equivalent up to lines in general fibers. Since any curve in a
fiber is rationally equivalent to the difference of two multi-sections of the
same degree, it is also rationally equivalent to lines in general fibers.
Finally, since the Fano scheme of lines of a complete intersection is
connected as long as it has positive dimension [DM98, Théorème 2.1], all lines
in a fiber are algebraically equivalent. ∎
###### Remark 6.2.
The dimension assumption is not restrictive. The only low-dimensional examples
satisfying the numerical conditions are quadrics and linear spaces. One can
check by hand that the integral Tate conjecture holds for them.
###### Remark 6.3.
In general it is still an open question if a smooth Fano complete intersection
is separably rationally connected. However, one can show that if the
characteristic $p$ is larger than all the $d_{i}$, then every smooth Fano
complete intersection of degree $d_{1},\ldots,d_{c}$ is separably rationally
connected [STZ22].
###### Proposition 6.4.
Let $X$ be a smooth proper variety that is also a homogeneous variety under an
integral linear algebraic group $G$ over $\mathbb{F}_{q}(B)$. Assume that $X$
admits a regular projective model $\mathcal{X}\to B$. Then the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{ét}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective and Conjecture 1.2 holds for $X$.
###### Proof.
It is well-known that $\bar{G}$ over $\bar{\mathbb{F}}_{q}(B)$ is rational.
Thus $\overline{\mathcal{X}}\to\bar{B}$ is birational to
$\bar{B}\times_{\bar{\mathbb{F}}_{q}}\mathbb{P}^{n}\to\bar{B}$. So the
conditions in Theorem 1.13 are satisfied. ∎
###### Remark 6.5.
Liang [Lia13] proved that if the Brauer-Manin obstruction is the only
obstruction to weak approximation of rational points in a rationally connected
variety over a number field $K$ and all of its finite field extensions, the
number field analogue of Conjecture 1.3 is true. As a corollary, he proved
Conjecture 1.3 for all smooth proper varieties birational to a homogeneous
space under a linear algebraic group with connected stabilizer. Harpaz and
Wittenberg [HW20] proved Conjecture 1.3 for all smooth proper varieties
birational to a homogeneous space under a linear algebraic group. One could
expect that this also holds in the global function field case by essentially
the same proof (modulo some characteristic $p$ issues).
###### Theorem 6.6.
Let $C$ be a smooth projective geometrically integral curve defined over a
global function field $\mathbb{F}(B)$. Assume that $C$ has a zero cycle of
degree $1$ over $\bar{\mathbb{F}}(B)$. Let $X(r,L)$ be the moduli space of
stable vector bundles on $C$ of rank $r$ and fixed determinant
$L\in\text{Pic}^{d}(C)$, with $r$ and $d$ relatively prime. Assume
that $X(r,L)$ has a smooth projective model over $B$. Then the local-global
principle for zero cycles (i.e. Conjecture 1.1) holds for $X(r,L)$.
###### Proof.
It is well-known that $X(r,L)$ is geometrically rational [Hof07, New75, New80,
KS99] and has geometric Picard group isomorphic to $\mathbb{Z}$. In fact, as
long as the curve, defined over any field $k$, has a $k$-rational point,
$X(r,L)$ is rational over $k$ [Hof07]. Using a norm argument, one can prove
that the hypothesis (A)-(D) in Theorem 1.13 holds for
$X(r,L)\otimes\bar{\mathbb{F}}$ under our assumptions. Hence Theorem 1.13
implies the statements. ∎
## References
* [Abh98] S.S. Abhyankar. Resolution of singularities of embedded algebraic surfaces. 2nd, enl. ed. Berlin: Springer, 1998.
* [Ben22] Olivier Benoist. Steenrod operations and algebraic classes. arXiv preprint https://arxiv.org/abs/2209.03685, 2022.
* [BO21] Olivier Benoist and John Christian Ottem. Two coniveau filtrations. Duke Math. J., 170(12):2719–2753, 2021.
* [Bru78] Armand Brumer. Remarques sur les couples de formes quadratiques. C. R. Acad. Sci. Paris Sér. A-B, 286(16):A679–A681, 1978.
* [BS83] S. Bloch and V. Srinivas. Remarks on correspondences and algebraic cycles. Amer. J. Math., 105(5):1235–1253, 1983.
* [CP08] Vincent Cossart and Olivier Piltant. Resolution of singularities of threefolds in positive characteristic. I. Reduction to local uniformization on Artin-Schreier and purely inseparable coverings. J. Algebra, 320(3):1051–1082, 2008.
* [CP09] Vincent Cossart and Olivier Piltant. Resolution of singularities of threefolds in positive characteristic. II. J. Algebra, 321(7):1836–1976, 2009.
* [CT99] Jean-Louis Colliot-Thélène. Conjectures de type local-global sur l’image des groupes de Chow dans la cohomologie étale. In Algebraic $K$-theory (Seattle, WA, 1997), volume 67 of Proc. Sympos. Pure Math., pages 1–12. Amer. Math. Soc., Providence, RI, 1999.
* [CT00] Jean-Louis Colliot-Thélène. Principe local-global pour les zéro-cycles sur les surfaces réglées. J. Amer. Math. Soc., 13(1):101–127, 2000. With an appendix by E. Frossard and V. Suresh.
* [CT22] Jean-Louis Colliot-Thélène. Retour sur l’arithmétique des intersections de deux quadriques, avec un appendice par A. Kuznestov. preprint, https://arxiv.org/abs/2208.04121, 2022.
* [CTK13] Jean-Louis Colliot-Thélène and Bruno Kahn. Cycles de codimension 2 et $H^{3}$ non ramifié pour les variétés sur les corps finis. J. K-Theory, 11(1):1–53, 2013.
* [CTS10] Jean-Louis Colliot-Thélène and Tamás Szamuely. Autour de la conjecture de Tate à coefficients ${\bf Z}_{\ell}$ pour les variétés sur les corps finis. In The geometry of algebraic cycles, volume 9 of Clay Math. Proc., pages 83–98. Amer. Math. Soc., Providence, RI, 2010.
* [CTS21] Jean-Louis Colliot-Thélène and Federico Scavia. Sur la conjecture de tate entière pour le produit d’une courbe et d’une surface $ch_{0}$-triviale sur un corps fini. preprint, arXiv:2001.10515v4, 2021.
* [CTSD12] J.-L. Colliot-Thélène and Peter Swinnerton-Dyer. Zero-cycles and rational points on some surfaces over a global function field. Acta Arith., 155(1):63–70, 2012.
* [CTSSD87] Jean-Louis Colliot-Thélène, Jean-Jacques Sansuc, and Peter Swinnerton-Dyer. Intersections of two quadrics and Châtelet surfaces. I. J. Reine Angew. Math., 373:37–107, 1987.
* [CTV12] Jean-Louis Colliot-Thélène and Claire Voisin. Cohomologie non ramifiée et conjecture de Hodge entière. Duke Math. J., 161(5):735–801, 2012.
* [dJS03] A. J. de Jong and J. Starr. Every rationally connected variety over the function field of a curve has a rational point. Amer. J. Math., 125(3):567–580, 2003.
* [DM98] Olivier Debarre and Laurent Manivel. Sur la variété des espaces linéaires contenus dans une intersection complète. Math. Ann., 312(3):549–574, 1998.
* [EW16] Hélène Esnault and Olivier Wittenberg. On the cycle class map for zero-cycles over local fields. Ann. Sci. Éc. Norm. Supér. (4), 49(2):483–520, 2016. With an appendix by Spencer Bloch.
* [FM94] E. M. Friedlander and B. Mazur. Correspondence homomorphisms for singular varieties. Ann. Inst. Fourier (Grenoble), 44(3):703–727, 1994.
* [Fri91] Eric M. Friedlander. Algebraic cycles, Chow varieties, and Lawson homology. Compositio Math., 77(1):55–93, 1991.
* [Gei00] Thomas Geisser. Applications of de Jong’s theorem on alterations. In Resolution of singularities (Obergurgl, 1997), volume 181 of Progr. Math., pages 299–314. Birkhäuser, Basel, 2000.
* [HB18] D. R. Heath-Brown. Zeros of pairs of quadratic forms. J. Reine Angew. Math., 739:41–80, 2018.
* [Hof07] Norbert Hoffmann. Rationality and Poincaré families for vector bundles with extra structure on a curve. Int. Math. Res. Not. IMRN, (3):Art. ID rnm010, 30, 2007.
* [HW20] Yonatan Harpaz and Olivier Wittenberg. Zéro-cycles sur les espaces homogènes et problème de Galois inverse. J. Amer. Math. Soc., 33(3):775–805, 2020.
* [Kel17] Shane Kelly. Voevodsky motives and $l$dh-descent. Astérisque, (391):125, 2017.
* [KMM92] János Kollár, Yoichi Miyaoka, and Shigefumi Mori. Rationally connected varieties. J. Algebraic Geom., 1(3):429–448, 1992.
* [Kol96] János Kollár. Rational curves on algebraic varieties, volume 32 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer-Verlag, Berlin, 1996.
* [KS99] Alastair King and Aidan Schofield. Rationality of moduli of vector bundles on curves. Indag. Math. (N.S.), 10(4):519–535, 1999.
* [KT23] János Kollár and Zhiyu Tian. Stable maps of curves and algebraic equivalence of $1$-cycles. arXiv preprint, https://arxiv.org/abs/2302.07069, 2023.
* [Law89] H. Blaine Lawson, Jr. Algebraic cycles and homotopy theory. Ann. of Math. (2), 129(2):253–291, 1989.
* [LF94] Paulo Lima-Filho. The topological group structure of algebraic cycles. Duke Math. J., 75(2):467–491, 1994.
* [Lia13] Yongqi Liang. Arithmetic of 0-cycles on varieties defined over number fields. Ann. Sci. Éc. Norm. Supér. (4), 46(1):35–56, 2013.
* [New75] P. E. Newstead. Rationality of moduli spaces of stable bundles. Math. Ann., 215:251–268, 1975.
* [New80] P. E. Newstead. Correction to: “Rationality of moduli spaces of stable bundles” [Math. Ann. 215 (1975), 251–268; MR 54 #7468]. Math. Ann., 249(3):281–282, 1980.
* [Pan18] Xuanyu Pan. Spaces of conics on low degree complete intersections. Trans. Amer. Math. Soc., 370(8):5381–5400, 2018.
* [PS16] Raman Parimala and Venapally Suresh. Degree 3 cohomology of function fields of surfaces. Int. Math. Res. Not. IMRN, (14):4341–4374, 2016.
* [Sai89] S. Saito. Some observations on motivic cohomology of arithmetic schemes. Invent. Math., 98(2):371–404, 1989.
* [Sai91] Shuji Saito. On the cycle map for torsion algebraic cycles of codimension two. Invent. Math., 106(3):443–460, 1991.
* [Sca22] Federico Scavia. Autour de la conjecture de Tate entière pour certains produits de dimension 3 sur un corps fini. Épijournal Géom. Algébrique, 6:Art. 11, 28, 2022.
* [Sch98] Chad Schoen. An integral analog of the Tate conjecture for one-dimensional cycles on varieties over finite fields. Math. Ann., 311(3):493–500, 1998.
* [SS91] P. Salberger and A. N. Skorobogatov. Weak approximation for surfaces defined by two quadratic forms. Duke Math. J., 63(2):517–536, 1991.
* [SS22] Federico Scavia and Fumiaki Suzuki. Non-algebraic geometrically trivial cohomology classes over finite fields. 2022.
* [STZ22] Jason M. Starr, Zhiyu Tian, and Runhong Zong. Weak approximation for Fano complete intersections in positive characteristic. Ann. Inst. Fourier (Grenoble), 72(4):1503–1534, 2022.
* [Sus00] Andrei A. Suslin. Higher Chow groups and etale cohomology. In Cycles, transfers, and motivic homology theories, volume 143 of Ann. of Math. Stud., pages 239–254. Princeton Univ. Press, Princeton, NJ, 2000.
* [SV96] Andrei Suslin and Vladimir Voevodsky. Singular homology of abstract algebraic varieties. Invent. Math., 123(1):61–94, 1996.
* [SV00] Andrei Suslin and Vladimir Voevodsky. Relative cycles and Chow sheaves. In Cycles, transfers, and motivic homology theories, volume 143 of Ann. of Math. Stud., pages 10–86. Princeton Univ. Press, Princeton, NJ, 2000.
* [Tia17] Zhiyu Tian. Hasse principle for three classes of varieties over global function fields. Duke Math. Journal, 166(17):3349–3424, 2017.
* [Tia20] Zhiyu Tian. Zero cycles on rationally connected varieties over laurent fields. arXiv preprint, https://arxiv.org/abs/2010.04996, 2020.
* [TZ18] Zhiyu Tian and Letao Zhang. Weak approximation for cubic hypersurfaces and degree 4 del Pezzo surfaces. Int. Math. Res. Not. IMRN, (3):762–784, 2018.
* [Voi08] Mircea Voineagu. Semi-topological $K$-theory for certain projective varieties. J. Pure Appl. Algebra, 212(8):1960–1983, 2008.
* [Voi22] Claire Voisin. On the coniveau of rationally connected threefolds. Geom. Topol., 26(6):2731–2772, 2022.
* [Wal07] Mark E. Walker. The morphic Abel-Jacobi map. Compos. Math., 143(4):909–944, 2007.
* [Wit07] Olivier Wittenberg. Intersections de deux quadriques et pinceaux de courbes de genre 1/Intersections of two quadrics and pencils of curves of genus 1, volume 1901 of Lecture Notes in Mathematics. Springer, Berlin, 2007.
* [Wit18] Olivier Wittenberg. Rational points and zero-cycles on rationally connected varieties over number fields. In Algebraic geometry: Salt Lake City 2015, volume 97 of Proc. Sympos. Pure Math., pages 597–635. Amer. Math. Soc., Providence, RI, 2018.
# BotSIM: An End-to-End Bot Simulation Toolkit
for Commercial Task-Oriented Dialog Systems
Guangsen Wang Shafiq Joty Junnan Li Steven C.H. Hoi
Salesforce Research
{guangsen.wang, sjoty, junnan.li<EMAIL_ADDRESS>
###### Abstract
We introduce BotSIM, a modular, open-source Bot SIMulation environment with
dialog generation, user simulation and conversation analytics capabilities.
BotSIM aims to serve as a one-stop solution for large-scale data-efficient
end-to-end evaluation, diagnosis and remediation of commercial task-oriented
dialog (TOD) systems, significantly accelerating commercial bot development
and evaluation while reducing cost and time-to-market. BotSIM adopts a layered design
comprising the infrastructure layer, the adaptor layer and the application
layer. The infrastructure layer hosts key models and components to support
BotSIM’s major functionalities via a streamlined “generation-simulation-
remediation” pipeline. The adaptor layer is used to extend BotSIM to
accommodate new bot platforms. The application layer provides a suite of
command line tools and a Web App to significantly lower the entry barrier for
BotSIM users such as bot admins or practitioners. In this report, we focus on
the technical designs of various system components. A detailed case study
using Einstein BotBuilder is also presented to show how to apply BotSIM
pipeline for bot evaluation and remediation. The detailed system descriptions
can be found in our system demo paper (Wang et al., 2022). The toolkit is
available at: _https://github.com/salesforce/BotSIM_.
## 1 Introduction
Figure 1: BotSIM System Design.
The typical dialog system development cycle consists of dialog design, pre-
deployment testing, deployment, performance monitoring, model improvement and
iteration. As in any production software system, effective and comprehensive
testing at all stages is of paramount importance. Unfortunately, _evaluating
and troubleshooting_ production TOD systems is still a largely manual process
requiring large amount of real human conversations with the bots. This process
is time-consuming, expensive, and inevitably fails to capture the breadth of
language variation present in the real world (Tan et al., 2021). The time- and
labor-intensive nature of such an approach is further exacerbated when the
developer significantly changes the dialog flows, since new sets of test
dialogs will need to be collected (Benvie et al., 2020). Performing
comprehensive end-to-end evaluation of both NLU and dialog-level performance
(_e.g._ , task success rate) is highly challenging due to the need for further
annotated efforts. Finally, there is a lack of analytical tools for
interpreting test results and troubleshooting underlying bot issues. To
address these limitations, we present _BotSIM_ , an open source Bot SIMulation
environment with dialog generation, user simulation (Schatzmann et al., 2007)
and conversation analytics capabilities. BotSIM aims to serve as a one-stop
solution for bot practitioners to perform large-scale data-efficient end-to-
end evaluation, diagnosis and remediation of commercial TOD systems. The
modular design also allows bot developers to extend the framework to more bot
platforms.
The toolkit design is depicted in Figure 1. It consists of three layers: the
infrastructure layer, the adaptor layer and the application layer. The
infrastructure layer supports the upper layers through the key modules (NLU
and NLG) and components (the generator, simulator and remediator). They are
the “engines” to power BotSIM’s “generation-simulation-remediation” pipeline
for end-to-end diagnosis, evaluation and remediation of commercial TOD bots.
More details regarding the functionalities of these modules and components are
given in (Wang et al., 2022). BotSIM currently supports two bot platforms,
including Salesforce Einstein BotBuilder and Google DialogFlow CX. The adaptor
layer is designed for bot developers to extend BotSIM to new bot platforms. In
addition to defining a set of unified parser and client interfaces, the
adaptor layer also provides some reference implementations for developers to
start their own implementations of the platform-dependent parsers and chat API
clients. The application layer offers a suite of command line tools and an
easy-to-use Web App. The Web app is designed for bot admins to directly apply
BotSIM to their bots for evaluation and remediation without diving into the
technical details of various system components. On the other hand, the command
line tools are provided to bot practitioners for a better understanding of the
toolkit to customize their own evaluation and remediation pipeline. They can
learn more about how different BotSIM components work in terms of the required
inputs, expected outputs and their functionalities. We have provided detailed
tutorials on how to use these tools in our code documentation.
## 2 Related Work
| Platform | Regression | End-to-end | Pre-deployment | Monitoring | Test case curation | User simulation | NLU | Task completion |
|---|---|---|---|---|---|---|---|---|
| CX | ✓ | | | ✓ | | | | |
| Watson | ✓ | | | ✓ | | | ✓ | ✓ |
| Botium | ✓ | | | | | | ✓ | |
| BotSIM | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of bot evaluation capabilities of the reviewed commercial
bot platforms. “Regression” and “End-to-end” are testing methods;
“Pre-deployment” and “Monitoring” are stages; “Test case curation” and “User
simulation” indicate automation support; “NLU” and “Task completion” are
reported metrics.
We present the related work of BotSIM from both toolkit design and application
perspectives.
### 2.1 Toolkit Design
To the best of our knowledge, there are no open source toolkits designed for
simulation, evaluation and remediation of commercial chat bots. Therefore, we
review some of the research toolkits/libraries that are designed for TOD
systems and equipped with user simulators. They include ConvLab-2 (Zhu et al.,
2020), TC-Bot (Li et al., 2016, 2017) and user-simulator (Shi et al., 2019).
* •
ConvLab-2 111https://github.com/thu-coai/ConvLab-2 is a research toolkit for
building TOD systems with state-of-the-art dialog models. It was used in DSTC9
Track 2 222https://convlab.github.io/index.html for the multi-domain TOD
challenge. In addition, it also can perform end-to-end evaluation, and
diagnose the weakness of the research systems. The essential differences to
BotSIM include 1) the toolkit is limited to research TOD systems to support
research models and datasets; 2) the evaluation is based on the annotated
dialogs.
* •
TC-Bot 333https://github.com/MiuLab/TC-Bot is another research-oriented
framework for building end-to-end neural-based TOD systems.TC-Bot utilises a
dialog-act level agenda-based user simulator to interact with the neural-based
end-to-end dialog agent for policy training. Some of our simulator rules and
designs are inspired and adapted from a simpler version of the TC-Bot called
GO-Bot-DRL 444https://github.com/maxbren/GO-Bot-DRL.
* •
The “user-simulator” 555https://github.com/wyshi/user-simulator provides
implementations and comparisons of different user simulators, such as the
agenda-based user simulator and neural-based user simulator based on Sequicity
(Lei et al., 2018). The work aims to provide an evaluation framework for user
simulator study in terms of their impacts on the trained dialog policies.
The focus of these research toolkits is to build an end-to-end (neural-based)
dialog agent prototype constrained to certain domains (_e.g._ , movie booking,
restaurant domain of the MultiWOZ (Budzianowski et al., 2018) dataset). On the
contrary, BotSIM is designed as a holistic bot simulation environment for
task-agnostic end-to-end evaluation, diagnosis and remediation of commercial
bots. The agenda-based user simulator is only a small part of the toolkit as
shown in Figure 1. It is designed for bot practitioners to significantly
accelerate commercial bot development and evaluation, reducing human effort,
cost and time-to-market.
### 2.2 Commercial Bot Evaluation
The overall comparison of different platforms in terms of testing capabilities
is given in Table 1. The detailed system reviews can be found in the system
demo paper (Wang et al., 2022). Most current commercial platforms only focus
on the regression testing rather than NLU or task-completion metrics. While
regression testing is important to ensure correct and consistent system
behaviours, it is also vital to perform pre-deployment evaluation to avoid
poor user adoption and retention rate. Although some of the platforms offer
turn-level NLU performance metrics, they require human efforts in curating or
annotating a large number of test cases. In addition, the NLU metrics do not
directly translate to goal completion performance. On the other hand, BotSIM
can help circumvent these limitations via large scale automatic dialog
generation and simulation.
## 3 BotSIM Design
In this section, we present more details of the BotSIM design in Figure 1. The
key design principles of BotSIM include modularity, extensibility and
usability. These principles allow BotSIM to be adopted both by developers as a
framework and bot end-users as an easy-to-use application. To achieve these,
BotSIM adopts a layered design comprising the infrastructure layer, the
adaptor layer and the application layer.
### 3.1 Infrastructure Layer
As the name suggests, the infrastructure layer is designed to offer
fundamental model support for the framework. BotSIM’s “generation-simulation-
remediation” pipeline is powered by the models and components that reside in
this layer. The models include the natural language understanding (NLU),
natural language generation (NLG) models and the key components include the
generator, the simulator and the remediator.
The botsim.models package hosts BotSIM’s NLU and NLG models. From a dialog
system perspective, BotSIM can be viewed as a counterpart to a chatbot, as
shown in Figure 2: it needs to “understand” chatbot messages (NLU) and
“respond” in natural language (NLG) to carry on the conversation. Currently,
fuzzy-matching-based NLU and template-based NLG models are provided for
efficiency reasons. More advanced NLU and NLG models can also be incorporated
by developers by following the recipes given in the code documentation.
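For intuition, a fuzzy-matching NLU of this kind can be sketched with the standard library. This is an illustrative approximation, not BotSIM's actual implementation; the dialog-act-map shape (act name mapped to a list of candidate bot messages) is assumed here:

```python
# Illustrative sketch only: stdlib difflib stands in for BotSIM's actual
# fuzzy-matching NLU, and the dialog-act-map schema is assumed.
import difflib

def fuzzy_nlu(bot_message, dialog_act_map, cutoff=0.6):
    """Return the dialog act whose candidate message best matches bot_message,
    or None if no candidate clears the similarity cutoff."""
    best_act, best_score = None, cutoff
    for act, candidates in dialog_act_map.items():
        match = difflib.get_close_matches(bot_message, candidates, n=1, cutoff=cutoff)
        if match:
            score = difflib.SequenceMatcher(None, bot_message, match[0]).ratio()
            if score >= best_score:
                best_act, best_score = act, score
    return best_act

acts = {
    "request_Email": ["May I get your email?"],
    "request_Name": ["What is your name?"],
}
fuzzy_nlu("Could I get your email address?", acts)
```

Matching on surface similarity keeps the NLU fast and training-free, at the cost of requiring reasonably distinctive bot messages per dialog act.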
The botsim.modules package consists of three key components that power BotSIM’s
“generation-simulation-remediation” pipeline.
* •
botsim.modules.generator provides two major functionalities: 1) the parser
parses the input bot designs in the form of either metadata (Einstein
BotBuilder) or API (DialogFlow CX) to infer the dialog-act maps (BotSIM’s
NLU); 2) the large pre-trained language model based paraphraser generates
paraphrases from the input intent utterances. These paraphrases are used as
intent queries in the simulation goals to probe bots’ intent models, which
allows BotSIM to perform large-scale data-efficient bot evaluation even before
bots are deployed.
* •
botsim.modules.simulator implements the dialog-act level agenda-based user
simulator in the abus module. It also defines a simulation API client
interface, simulation_client_base.
* •
botsim.modules.remediator analyzes the simulated dialogs and produces the
performance metrics and conversational analytics to support the dashboard
visualisation. These metrics include both the end-to-end performance such as
the task completion rates and the NLU performance. It also offers a suite of
analytical tools to provide actionable insights to troubleshoot and improve
the current systems. Such tools include confusion matrix analysis and
visualisation (analytics, visualization_cm) and dialog design graph exploration
(dialog_graphs).
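As a simplified illustration of how such a confusion matrix might be tallied from simulation logs (the field names "goal_intent" and "predicted_intent" are hypothetical, not BotSIM's actual schema):

```python
# Hypothetical simulation-log schema, used only to illustrate tallying
# per-intent confusions from simulated episodes.
from collections import Counter

def confusion_counts(simulated_dialogs):
    """Count (intended intent, predicted intent) pairs across episodes."""
    return Counter(
        (d["goal_intent"], d["predicted_intent"]) for d in simulated_dialogs
    )

logs = [
    {"goal_intent": "check_order", "predicted_intent": "check_order"},
    {"goal_intent": "check_order", "predicted_intent": "report_issue"},
    {"goal_intent": "report_issue", "predicted_intent": "report_issue"},
]
cm = confusion_counts(logs)
# Off-diagonal entries such as ("check_order", "report_issue") flag intent
# pairs worth remediating, e.g. by augmenting the intent training utterances.
```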
### 3.2 Adaptor Layer
The adaptor layer is designed for bot developers to extend BotSIM to new bot
platforms. To accommodate a new bot platform, create a new package under
botsim.platforms and implement the two platform-specific classes described
below; more details are provided in the code documentation.
parser acts as an “adaptor” to unify bot definitions (_e.g._ , conversation
flows, intents/tasks) from different platforms into a common representation of
dialog act maps. The dialog act maps are used as BotSIM’s NLU to map bot
messages to dialog acts. The rationale is that most commercial TOD bots follow
a “rule-action-message” design scheme, so there exist clear mappings from
system messages to rules/actions. For example, according to the bot
definitions, “May I get your email?” (bot message) is used to “Collect”
(action) the “Email” (slot) with entity type “Email” from the user. One of the
most important parser functions is to parse the input bot definitions (_e.g._ ,
MetaData, API) to capture such mappings, infer the “request_Email” dialog act
and add the bot message to the mapping candidates of that dialog act. An
example of a dialog act map is given in Figure 3. The major parser functions
are listed in the code snippet below:
```python
class Parser:
    """Parser interface."""

    def __init__(self):
        # The aggregated dialog act maps to be used as BotSIM NLU
        self.dialog_act_maps = {}
        # Mapping from dialogs to their entities
        self.dialog_ontology = {}
        # Conversation graph data to support the path explorer
        self.conv_graph_visualisation_data = {}

    def extract_local_dialog_act_map(self):
        raise NotImplementedError

    def conversation_graph_modelling(self, local_dialog_act_maps):
        """Model local dialog act maps and their transitions as a graph.

        :param local_dialog_act_maps: local dialog act maps (one per dialog)
            obtained from the parser (extract_local_dialog_act_map)
        :return:
            dialog_act_maps: aggregated dialog act maps obtained via graph
                traversal. These maps need to be reviewed/revised by users
                before being used as BotSIM NLU
            conv_graph_visualisation_data: conversation graph data to support
                dialog path visualisation
        """
        raise NotImplementedError

    def extract_ontology(self):
        raise NotImplementedError
```
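To make the dialog act map concrete, one plausible shape for the email example above is shown below. The exact schema is defined by the platform parsers; this minimal dict is purely illustrative:

```python
# Hypothetical dialog act map entries inferred from the "rule-action-message"
# scheme: each inferred dialog act maps to the bot messages that realize it.
dialog_act_map = {
    "request_Email": ["May I get your email?"],
    "request_Name": ["May I get your name?"],
}
```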
Note the implementations of the parsers are highly platform-dependent. They
require the developers to have access to bot platform, design and API
documents to understand how bots are designed and how user inputs are elicited
by the bots. There is also a need to constantly revisit the implementations to
incorporate new features that might be missed in the current implementations.
We have provided our current implementations of BotBuilder and DialogFlow CX
parsers for references under botsim.platforms.botbuilder and
botsim.platforms.dialogflow_cx.
simulation_client is the other platform-dependent component for BotSIM to
exchange conversations with bots via API calls.
```python
from botsim.modules.simulator.user_simulator import UserSimulator

class UserSimulatorClientInterface:
    def __init__(self, config):
        self.config = config
        self._prepare_simulation()

    def _prepare_simulation(self):
        """Prepare simulation according to the simulation configuration."""
        raise NotImplementedError

    def perform_batch_simulation(self, simulation_goals, dialog_name,
                                 start_episode, config, batch_size=25):
        """Perform a batch of dialog user simulations using simulation_goals.

        :param simulation_goals: list of all simulation goals
        :param dialog_name: name of the dialog under simulation
        :param start_episode: starting goal index of simulation_goals
        :param config: simulation settings including information such as API tokens
        :param batch_size: number of simulation sessions per batch
        """
        user_simulator = UserSimulator(simulation_goals, config)
        ...
```
Figure 2 depicts how a dialogue turn between the bot and BotSIM is conducted
via bot APIs. Using dialog act maps as NLU, a rule-based dialog state manager
(policy implemented in botsim.modules.simulator) takes in the bot dialog acts
and produces the corresponding user dialog acts. The user dialog acts are
converted to natural language responses by the template-based NLG. The natural
language responses are sent back via API. The conversation ends when the task
has been successfully finished or an error has been captured. Similar to the
parser, the implementation of the client is also highly platform-dependent.
Developers can refer to our implementations for BotBuilder and DialogFlow CX
when extending BotSIM to new bot platforms.
Figure 2: An exchange of dialog turns between bot and BotSIM during dialog
simulation
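The turn exchange can be sketched in miniature as follows. This is a simplified, self-contained illustration with dict-based stand-ins, not BotSIM's actual classes; the NLU step uses substring containment in place of fuzzy matching, and the helper names (`nlu`, `policy`, `nlg`) are hypothetical.

```python
# Simplified sketch of one dialog turn between bot and BotSIM.
# All structures here are illustrative stand-ins, not BotSIM internals.

def nlu(bot_message, dialog_act_map):
    """Map a bot message to a dialog act (substring match stands in for fuzzy matching)."""
    for act, phrases in dialog_act_map.items():
        if any(p in bot_message.lower() for p in phrases):
            return act
    return "unknown"

def policy(bot_act, goal):
    """Rule-based state manager: answer a request with the slot value from the goal."""
    if bot_act.startswith("request_"):
        slot = bot_act[len("request_"):]
        return ("inform_" + slot, goal["inform_slots"].get(slot, ""))
    return ("noop", "")

def nlg(user_act, templates):
    """Template-based NLG: fill a response template for the user dialog act."""
    act, value = user_act
    return templates.get(act, "{}").format(value)

dialog_act_map = {"request_Case_Number": ["case number"]}
templates = {"inform_Case_Number": "My case number is {}."}
goal = {"inform_slots": {"Case_Number": "P3V4S6"}}

bot_message = "Could you give me your case number?"
user_act = policy(nlu(bot_message, dialog_act_map), goal)
response = nlg(user_act, templates)
# response == "My case number is P3V4S6."
```

In BotSIM itself, the policy is the rule-based state manager in botsim.modules.simulator, and the generated response is sent back to the bot over the platform API rather than returned locally.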
### 3.3 Application Layer
The application layer is designed to significantly flatten BotSIM's learning
curve for both bot developers/practitioners and end users.
Command Line Tools botsim.cli contains a set of command line tools for
practitioners to learn more about the major BotSIM components. The
“generation-simulation-evaluation” pipeline has been split into multiple
stages to expose the required inputs and expected outputs of each stage. These
stages serve as basic building blocks for bot practitioners to build their
customized pipelines or apply only certain tasks rather than the whole BotSIM
pipeline.
```shell
python botsim/cli/run_generator_parser.py \
    --platform $platform \
    --test_name $name
python botsim/cli/run_generator_paraphraser.py \
    --platform $platform \
    --test_name $name
python botsim/cli/run_generator_goal_generation.py \
    --platform $platform \
    --test_name $name
python botsim/cli/run_simulator.py \
    --platform $platform \
    --test_name $name
python botsim/cli/run_remediator.py \
    --platform $platform \
    --test_name $name
```
Streamlit Web App botsim.streamlit_app is a multi-page, easy-to-use Web app
for end users such as bot admins, allowing them to apply BotSIM without diving
into technical details. The app can be built as a docker image and deployed to
various cloud platforms (_e.g._ GCP) for access to more computation resources.
We use Streamlit (https://streamlit.io/) to build the front-end pages. Flask
is used to implement the backend APIs that Streamlit invokes to access BotSIM
functionalities. The app is also equipped with a SQL database to keep track of
simulation stages and simulation performance. BotSIM supports two types of SQL
databases: SQLite3 and Postgres.
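The stage-tracking database can be sketched with Python's stdlib sqlite3; the table and column names below are illustrative guesses, not BotSIM's actual schema.

```python
import sqlite3

# Illustrative schema for tracking simulation stages per test session;
# BotSIM's actual tables and columns may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test_session (
        test_name TEXT,
        platform  TEXT,
        stage     TEXT,   -- e.g. 'generation', 'simulation', 'remediation'
        status    TEXT    -- e.g. 'running', 'finished'
    )""")
conn.execute(
    "INSERT INTO test_session VALUES (?, ?, ?, ?)",
    ("template_bot", "botbuilder", "simulation", "finished"),
)
row = conn.execute(
    "SELECT stage, status FROM test_session WHERE test_name = ?",
    ("template_bot",),
).fetchone()
# row == ("simulation", "finished")
```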
## 4 Einstein BotBuilder Case Study
In this section, we show how to perform end-to-end evaluation and remediation
of the pre-built “Template Bot” from the Salesforce Einstein BotBuilder
platform. We follow BotSIM’s “generation-simulation-remediation” pipeline as
detailed in (Wang et al., 2022). The “Template Bot” has six dialog intents.
Each intent has a set of hand-crafted training utterances. For controllable
experiments, we sample 150 utterances per dialog as the training set (train-
original) and use the remaining for evaluation (eval-original). The six
intents are: “Transfer to agent (TA)”, “End chat (EC)”, “Connect with sales
(CS)”, “Check issue status (CI)”, “Check order status (CO)” and “Report an
issue (RI)”. The dataset information is given in Table 2.
Dataset | Intent enquiries | TA | EC | CS | CI | CO | RI
---|---|---|---|---|---|---|---
Train | train-original | 150 | 150 | 150 | 150 | 150 | 150
 | train-augmented | 255 | 184 | 212 | 268 | 215 | 294
Dev | train-paraphrases | 1465 | 1467 | 1754 | 1989 | 1895 | 1786
Eval | eval-original | 182 | 145 | 183 | 222 | 205 | 178
 | eval-paraphrases | 1190 | 933 | 1648 | 2172 | 1936 | 1795
Table 2: Dataset information for the Einstein Template Bot case study.
### 4.1 Parse bot metadata
The required inputs for BotSIM include: 1) bot design metadata containing the
bot designs (_e.g._ , intents/dialogs, entities); 2) intent utterance
metadata; and 3) LiveAgent API information. They can be retrieved from users’
Salesforce org. BotSIM starts by parsing the input metadata and generates the
NLU (Dialog Act Maps) and NLG (Response) templates needed for dialog
simulation.
### 4.2 Revise dialog act maps and ontology
Figure 3: Revised dialog act map for dialog intent “Check the status of an
existing issue” of Einstein BotBuilder
The generated NLU dialog act maps convert bot messages to dialog acts via
fuzzy matching during dialog simulation. An example dialog act map is shown in
Figure 3. The aggregated dialog act map is inferred automatically by the
parser by modelling the bot design as a graph: 1) individual dialogs are first
parsed to obtain their “local” dialog act maps; 2) individual dialogs
(including sub-dialogs such as “Case Lookup”) are modelled as vertices
associated with their “local” dialog act maps; 3) dialog transitions of the
entire bot are modelled as graph edges. The final aggregated dialog act map of
a dialog is created by collecting all the “local” dialog act maps along the
paths starting from the dialog node to the successful dialogs (_e.g.,_
“End_Chat”). Meanwhile, the parser also extracts the bot entity ontology,
which lists all entities and their randomly initialized values for each dialog.
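The path-based aggregation step can be sketched as a graph traversal that merges each vertex's “local” map along every path from the dialog node to a successful dialog. This is an illustrative simplification; the function, graph, and map contents are hypothetical, not the parser's actual code.

```python
# Simplified sketch of aggregating "local" dialog act maps along all
# paths from a start dialog to the success dialog (illustrative only).
def aggregate_dialog_act_map(graph, local_maps, start, success):
    aggregated, stack = {}, [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == success:
            for n in path:                      # merge local maps on this path
                aggregated.update(local_maps.get(n, {}))
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:                 # avoid revisiting nodes (cycles)
                stack.append((nxt, path + [nxt]))
    return aggregated

graph = {"Check_Issue": ["Case_Lookup"], "Case_Lookup": ["End_Chat"]}
local_maps = {
    "Check_Issue": {"intent_success_message": ["I can help with that issue."]},
    "Case_Lookup": {"request_Case_Number": ["case number"]},
    "End_Chat":    {"dialog_success_message": ["Thanks for chatting!"]},
}
acts = aggregate_dialog_act_map(graph, local_maps, "Check_Issue", "End_Chat")
# acts contains all three dialog acts collected along the single path
```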
As the only “human-in-the-loop” step in the BotSIM pipeline, review or
revision of the automatically inferred dialog act maps and ontology is
required of BotSIM users. For dialog act maps, the two special dialog acts,
“dialog_success_message” and “intent_success_message”, are the golden labels
indicating a successful task completion and a correct intent classification,
respectively. They are inferred heuristically by regarding the first dialog
message as the “intent_success_message” and the last message as the
“dialog_success_message”. Users are required to verify these two dialog acts
for each evaluation dialog to ensure their correctness. Note that the review
of dialog act maps is a one-time effort unless significant changes are made
to the bot design. Since the entity values of the ontology are mostly related
to users’ products and services, the randomly initialised values in the
ontology file may be replaced with real ones in order to evaluate the named
entity recognition (NER) model. The revised dialog act maps and the ontology
are subsequently used to create the simulation goals, as shown in Section 4.3,
for dialog simulation.
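The fuzzy matching that turns bot messages into dialog acts can be sketched with stdlib difflib; the similarity measure and threshold below are illustrative assumptions, not BotSIM's actual matching logic.

```python
from difflib import SequenceMatcher

# Illustrative fuzzy matcher from a bot message to a dialog act; BotSIM's
# actual matching implementation and threshold may differ.
def message_to_dialog_act(message, dialog_act_map, threshold=0.6):
    best_act, best_score = "unknown", 0.0
    for act, message_templates in dialog_act_map.items():
        for tmpl in message_templates:
            score = SequenceMatcher(None, message.lower(), tmpl.lower()).ratio()
            if score > best_score:
                best_act, best_score = act, score
    return best_act if best_score >= threshold else "unknown"

dialog_act_map = {
    "request_Case_Number": ["Please provide your case number."],
    "dialog_success_message": ["Your issue status is open."],
}
act = message_to_dialog_act("Please provide your case number!", dialog_act_map)
# act == "request_Case_Number"
```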
### 4.3 Paraphrase and goal generation
Agenda-based user simulation requires pre-defined goals to ensure that the
simulated user behaves in a consistent, goal-directed manner (Schatzmann et
al., 2007). The goals consist of a set of constraints (dialog acts and entity
slot-value pairs) needed to complete the task. An exemplar simulation goal for
“check the status of an existing issue” is given below:
```json
{
  "Goal": {
    "Check_the_status_of_an_existing_issue_0": {
      "inform_slots": {
        "Has_case_number@_Boolean": "no",
        "Case_Number@Case_Number_Entity": "P3V4S6",
        "Email_for_Look_Up@Email_Address_Entity": "<EMAIL_ADDRESS>",
        "intent": "I would like to know about my issue.",
        "Needs_transfer_to_agent@_Boolean": "no",
        "Needs_something_else@_Boolean": "no",
        "Goodbye@Goodbye_Entity": "goodbye",
        "Needs_to_add_case_comment@_Boolean": "no",
        "Case_Comments@_Text": "P94RMU"
      },
      "request_slots": {
        "Check_the_status_of_an_existing_issue": "UNK"
      },
      "name": "Check_the_status_of_an_existing_issue"
    }
  }
}
```
Note how the entity slots are taken from the dialog act maps in Figure 3. In
particular, the “intent” slot contains the intent query to test the intent
models. To enable pre-deployment testing without any human-written test cases,
we apply the paraphrasing models to the “train-original” utterances to get the
“train-paraphrases” dataset. Compared to the number of original utterances,
the “train-paraphrases” dataset is roughly ten times larger as shown in Table
2. Simulation goals are created by taking the “train-paraphrases” as the
intent queries to simulate the variations in real user intent queries. The
“train-paraphrases” goals are subsequently used as the development set to
perform end-to-end evaluation of the dialog system via dialog simulation.
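Goal creation from paraphrases can be sketched as cloning a base goal and swapping each paraphrase into the “intent” slot. This is an illustrative simplification; `make_goals` is a hypothetical helper, not a BotSIM API.

```python
import copy

# Illustrative goal generation: each paraphrase of an intent utterance
# becomes the "intent" query slot of a cloned simulation goal.
def make_goals(base_goal, paraphrases):
    goals = []
    for p in paraphrases:
        goal = copy.deepcopy(base_goal)   # deep copy so goals stay independent
        goal["inform_slots"]["intent"] = p
        goals.append(goal)
    return goals

base_goal = {
    "name": "Check_the_status_of_an_existing_issue",
    "inform_slots": {"intent": "", "Case_Number@Case_Number_Entity": "P3V4S6"},
    "request_slots": {"Check_the_status_of_an_existing_issue": "UNK"},
}
paraphrases = ["What's going on with my issue?", "Any update on my case?"]
goals = make_goals(base_goal, paraphrases)
# len(goals) == 2; each goal carries a distinct intent query
```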
### 4.4 Dialog simulation results
In this study, we focus on the intent model for two reasons. Firstly, the
intent model can be retrained. Secondly, since we do not have any customer
data, the entity values in the goals are randomly generated and may not
reflect real-world values. After dialog simulation with the
“train-paraphrases” goals, the original intent training sets are augmented
with the wrongly classified intent queries (“train-augmented”) to retrain the
intent model according to the remediation suggestions. We then compare the
performance before and after retraining, in terms of intent F1 scores, on the
goals created from the human-written “eval-original” utterances in Table 3.
Model | Eval. | TA | EC | CS | CI | CO | RI
---|---|---|---|---|---|---|---
Baseline | original | 0.92$\pm$0.03 | 0.95$\pm$0.02 | 0.89$\pm$0.03 | 0.93$\pm$0.03 | 0.94$\pm$0.02 | 0.82$\pm$0.04
 | paraphr. | 0.88$\pm$0.01 | 0.93$\pm$0.01 | 0.85$\pm$0.01 | 0.91$\pm$0.01 | 0.93$\pm$0.01 | 0.77$\pm$0.02
Retrained | original | 0.92$\pm$0.03 | 0.97$\pm$0.02 | 0.93$\pm$0.03 | 0.95$\pm$0.02 | 0.96$\pm$0.02 | 0.87$\pm$0.04
 | paraphr. | 0.89$\pm$0.01 | 0.94$\pm$0.01 | 0.90$\pm$0.01 | 0.94$\pm$0.01 | 0.94$\pm$0.01 | 0.80$\pm$0.02
Table 3: Results for the Einstein Bots case study, before and after retraining
the intent model with the augmented training set (F1 with 95% confidence
interval computed with 10K bootstrapped samples).
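The bootstrapped confidence intervals in Table 3 can be reproduced in spirit as follows. The paper does not specify the resampling procedure beyond 10K resamples, so this is a sketch under standard percentile-bootstrap assumptions.

```python
import random

def f1_for_intent(y_true, y_pred, intent):
    """One-vs-rest F1 for a single intent label."""
    tp = sum(t == intent and p == intent for t, p in zip(y_true, y_pred))
    fp = sum(t != intent and p == intent for t, p in zip(y_true, y_pred))
    fn = sum(t == intent and p != intent for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def bootstrap_f1_ci(y_true, y_pred, intent, n_boot=10000, seed=0):
    """Percentile-bootstrap 95% CI: resample (truth, prediction) pairs with replacement."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(f1_for_intent([y_true[i] for i in idx],
                                    [y_pred[i] for i in idx], intent))
    scores.sort()
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]

lo, hi = bootstrap_f1_ci(["TA", "EC"] * 50, ["TA", "EC"] * 50, "TA", n_boot=1000)
# with perfect predictions the interval collapses near 1.0
```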
We observe consistent improvements for all intents after model retraining. The
more challenging intents (those with lower F1 scores), _e.g.,_ “Report an
issue” and “Connect with sales”, show larger performance gains than easier
intents such as “End Chat” (higher F1 scores). This demonstrates the efficacy
of BotSIM in intent model improvement. We conjecture that the improved
performance on the more challenging intents is due to more of their
paraphrases being selected for retraining, which better covers the language
variation.
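The augmentation used in this case study (adding wrongly classified intent queries from simulation to the training set of their true intent) can be sketched as follows. This is illustrative only; BotSIM surfaces this through the remediator's suggestions rather than a single helper function.

```python
# Illustrative training-set augmentation with wrongly classified intent queries.
def augment_training_set(train_sets, simulation_results):
    """train_sets: intent -> list of utterances.
    simulation_results: list of (true_intent, predicted_intent, query)."""
    augmented = {intent: list(utts) for intent, utts in train_sets.items()}
    for true_intent, predicted_intent, query in simulation_results:
        if predicted_intent != true_intent:      # intent error found in simulation
            augmented[true_intent].append(query)
    return augmented

train_sets = {"RI": ["I want to report an issue."], "EC": ["Bye now."]}
results = [("RI", "CS", "Something is broken on my end."),
           ("EC", "EC", "Goodbye!")]
augmented = augment_training_set(train_sets, results)
# augmented["RI"] gains the misclassified query; augmented["EC"] is unchanged
```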
### 4.5 Diagnosis and Remediation Dashboard
The remediator generates health diagnosis reports, performs analyses, and
provides actionable recommendations to troubleshoot and improve dialog
systems. The major dashboard components are presented in Figures 4, 5, 6 and 7.
#### Bot health reports.
The bot “health” dashboard consists of a set of multi-scale performance
reports. At the highest level, users get a historical view of the most recent
simulation/test sessions (_e.g.,_ after each major bot update). The historical
performance comparison can help users evaluate the impact of bot changes
quantitatively, from which they can make decisions such as whether or not to
keep certain changes. In the test session performance summary report, users
can see the details of a selected test session, including the data
distribution and the overall dialog performance metrics across all dialog
intents of the test session. The dialog-specific performance report provides
detailed intent and NER performance for the selected dialog intent. Through
the dialog-specific performance report, one can quickly identify the most
confusing intents and entities. This saves significant effort and helps
allocate resources for troubleshooting and bot improvement more effectively.
#### Remediation recommendations.
In addition to the diagnosis reports, the remediator also provides actionable
insights/suggestions for users to remedy some of the identified issues. The
root causes of the failed conversations are identified via backtracking of the
simulation agenda. The recommendation dashboards (Figure 5 and Figure 6) allow
detailed investigation of all intent or NER errors along with their
corresponding simulated chat logs. For intent models, the paraphrase intent
queries that lead to intent errors are grouped by the original intent
utterances. These original intent utterances are sorted by the number of
intent errors of their paraphrases in descending order (drop-down list of the
Remediation suggestions in Figure 5). Depending on the wrongly classified
intents, the remediator suggests follow-up actions, for example: 1) augmenting
the intent training set with the queries deemed out-of-domain by the current
intent model; 2) moving an intent utterance to another intent if most of its
paraphrases are classified as that other intent. Note that the suggestions are
meant to be used as guidelines rather than followed strictly. More
importantly, users can always extend them with domain expertise when
troubleshooting bots related to their products/services.
#### Conversation analytics.
Another useful component of the Remediator is the suite of conversation
analytical tools as shown in Figure 7. They further help bot practitioners
gain more insights for troubleshooting and improving their dialog systems. The
confusion matrix analysis breaks down the intent model performance into
(sortable) recall, precision, and F1 scores to help identify the
worst-performing intents. It also detects potential intent overlaps, based on
the performance metrics, using the clustering algorithms in (Thoma, 2017).
Another useful analytical tool is the tSNE (van der Maaten and Hinton, 2008)
clustering of the intent utterances using sentence transformer (Reimers and
Gurevych, 2019) embeddings. The tSNE visualisation enables users to gauge
training data quality. It is also an effective tool for identifying
overlapping intents and can potentially benefit new intent discovery as well.
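The per-intent breakdown behind the confusion matrix analysis can be sketched in pure Python; this is an illustrative computation, not the dashboard's actual code.

```python
from collections import Counter

# Illustrative per-intent precision/recall/F1 from (truth, prediction) pairs,
# mirroring the sortable breakdown in the analytics dashboard.
def per_intent_metrics(pairs):
    """pairs: list of (true_intent, predicted_intent)."""
    confusion = Counter(pairs)
    intents = {t for t, _ in pairs} | {p for _, p in pairs}
    metrics = {}
    for intent in intents:
        tp = confusion[(intent, intent)]
        fp = sum(v for (t, p), v in confusion.items() if p == intent and t != intent)
        fn = sum(v for (t, p), v in confusion.items() if t == intent and p != intent)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics[intent] = {"precision": prec, "recall": rec, "f1": f1}
    return metrics

pairs = [("TA", "TA"), ("TA", "EC"), ("EC", "EC"), ("EC", "EC")]
m = per_intent_metrics(pairs)
# TA: precision 1.0, recall 0.5; EC: precision 2/3, recall 1.0
```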
#### Dialog path explorer
Lastly, powered by the parser's conversation graph modelling capability, the
dialog path explorer can be used to visualise different dialog paths of the
current bot design. For example, users can select the “source” and “target”
dialogs and explore the generated dialog paths. Not only is the tool valuable
for comprehensive test coverage of conversation paths, it also offers a
controllable approach to troubleshooting dialog-design-related errors and even
improving the current bot design.
## 5 Conclusion
We presented the design of BotSIM, an open-source, modular, end-to-end bot
simulation toolkit for evaluation, diagnosis and remediation of commercial TOD
systems. Through the streamlined “generation-simulation-remediation” pipeline,
BotSIM can be adopted to accelerate commercial bot development and evaluation
and to reduce cost and time-to-market. Thanks to the layered design, not only
can it be used as a framework for bot developers to extend BotSIM to new bot
platforms, it also offers a suite of easy-to-use command line tools and a Web
App for bot practitioners to apply BotSIM directly to their bots. We have
open-sourced the toolkit at _https://github.com/salesforce/BotSIM_ , including
the Streamlit Web App. We also provide detailed documentation to accompany the
code. We welcome contributions from the community to help improve and extend
BotSIM further.
## 6 Limitations
For efficiency reasons, BotSIM adopts a template-based NLG model for
converting user dialog acts to natural language. Although template-based NLG
is more controllable and flexible than model-based NLG, its outputs may lack
naturalness. One possible future improvement is a combination of
template-based and model-based NLG. For example, we can train a model-based
NLG to generate templates (Wiseman et al., 2018) for BotSIM's response
templates. In this way, both efficiency and naturalness can be achieved.
Currently, BotSIM is trained and evaluated on English text only. We leave
multi-lingual bot simulation capability as future work.
## 7 Broader Impact
The pretrained language-model-based paraphrasers (T5-base and Pegasus) used in
this study are pretrained and finetuned on large-scale text corpora scraped
from the web, which may contain biases. These biases may be propagated to the
generated paraphrases, causing harm to the subjects of these stereotypes.
Although the paraphrasing models are only applied to generate the testing
intent queries, BotSIM users are advised to take these ethical issues into
consideration and may wish to manually inspect or otherwise filter the
generated paraphrases. It is also noteworthy that, to prevent any data privacy
leakage, the information produced in the simulation (the entity values in the
BotSIM ontology) is randomly generated, and therefore fake. This includes
email addresses and names.
Figure 4: Remediator dashboard: bot health reports. Figure 5: Remediator
dashboard: intent model remediation suggestions. Figure 6: Remediator
dashboard: NER remediation suggestions. Figure 7: Remediator dashboard:
Conversational analytical tools.
## References
* Benvie et al. (2020) Adam Benvie, Eric Wayne, and Matthew Arnold. 2020. Watson assistant continuous improvement best practices.
* Budzianowski et al. (2018) Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. Multiwoz – a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling.
* Lei et al. (2018) Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1437–1447, Melbourne, Australia. Association for Computational Linguistics.
* Li et al. (2016) Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016. A user simulator for task-completion dialogues. _CoRR_ , abs/1612.05688.
* Li et al. (2017) Xuijun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end task-completion neural dialogue systems. In _Proceedings of The 8th International Joint Conference on Natural Language Processing_.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks.
* Schatzmann et al. (2007) Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In _Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers_ , pages 149–152, Rochester, New York. Association for Computational Linguistics.
* Shi et al. (2019) Weiyan Shi, Kun Qian, Xuewei Wang, and Zhou Yu. 2019. How to build user simulators to train rl-based dialog systems. _arXiv preprint arXiv:1909.01388_.
* Tan et al. (2021) Samson Tan, Shafiq Joty, Kathy Baxter, Araz Taeihagh, Gregory A. Bennett, and Min-Yen Kan. 2021. Reliability testing for natural language processing systems. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4153–4169, Online. Association for Computational Linguistics.
* Thoma (2017) Martin Thoma. 2017. Analysis and optimization of convolutional neural network architectures.
* van der Maaten and Hinton (2008) Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. _Journal of Machine Learning Research_ , 9(86):2579–2605.
* Wang et al. (2022) Guangsen Wang, Samson Tan, Shafiq Joty, Guang Wu, Jimmy Au, and Steven Hoi. 2022. BotSIM: An end-to-end bot simulation framework for commercial task-oriented dialog systems.
* Wiseman et al. (2018) Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. _CoRR_ , abs/1808.10122.
* Zhu et al. (2020) Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, and Minlie Huang. 2020. Convlab-2: An open-source toolkit for building, evaluating, and diagnosing dialogue systems. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_.
# The Sheaf Representation of Residuated Lattices
Huarong Zhang
College of Sciences, China Jiliang University, Hangzhou, China
Dongsheng Zhao
Mathematics and Mathematics Education, National Institute of Education,
Nanyang Technological University, Singapore 637616
###### Abstract
Residuated lattices form one of the most important classes of algebras of
fuzzy logics and have been studied extensively from various points of view.
Sheaf representations provide a topological approach to many algebraic
structures. In this paper, we study the topological properties of the prime
spectrum of residuated lattices, and then construct a sheaf space to obtain a
sheaf representation for each residuated lattice.
###### keywords:
Residuated lattice, sheaf representation, prime spectrum.
Journal: Electronic Notes in Theoretical Informatics and Computer Science,
volume 2.
Acknowledgements: I would like to express my sincere thanks to Professor
Dongsheng Zhao, who hosted my visit to the National Institute of Education,
Nanyang Technological University. The authors would like to thank the
anonymous reviewers for their professional comments that have improved this
paper substantially. This work is supported by the National Natural Science
Foundation of China (No. 11701540, 12171445).
Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
## 1 Introduction
Residuated lattices, introduced by Ward and Dilworth [20], constitute the
semantics of Höhle’s Monoidal Logic [14]. Such algebras provide the
fundamental framework for algebras of logics. Many familiar algebras, such as
Boolean algebras, $\operatorname{MV}$-algebras, $\operatorname{BL}$-algebras,
$\operatorname{MTL}$-algebras, $\operatorname{NM}$-algebras ($R_{0}$-algebras)
and Heyting algebras, are special types of residuated lattices.
In dealing with certain types of problems, sheaf representations of algebras
often provide powerful tools, as they convert the study of an algebra into the
study of its stalks over a topological space. Thus, in the past decades, sheaf
spaces [2, 17, 18] have been constructed for various types of algebras to
obtain their corresponding sheaf representations.
In the case of algebras for fuzzy logics, Ghilardi and Zawadowski constructed
the Grothendieck-type duality and got the sheaf representation for Heyting
algebras [10]. Many scholars investigated the sheaf representations of
$\operatorname{MV}$-algebras [3, 5, 6, 7, 8, 9]. Here we give an outline of
their differences. In [5], Dubuc and Poveda used $\operatorname{Spec}(L)$
(endowed with the co-Zariski topology) as the base space and
$\operatorname{MV}$-chains as the stalks. In [7], Filipoiu and Georgescu used
$\operatorname{Max}(L)$ (endowed with the Zariski topology) as the base space
and local $\operatorname{MV}$-algebras as the stalks. Di Nola, Esposito and
Gerla [3] improved the methods of [7] by choosing the stalks from given
classes of local $\operatorname{MV}$-algebras. Ferraioli and Lettieri [6],
combining the techniques in [5] and [7], obtained two types of sheaf
representations of $\operatorname{MV}$-algebras. In [8] and [9], Gehrke, van
Gool and Marra provided a general framework for previously known results on
sheaf representations of $\operatorname{MV}$-algebras such as [5] and [7],
through the lens of Stone-Priestley duality, using canonical extensions as an
essential tool. As for sheaf representations of $\operatorname{BL}$-algebras,
Di Nola and Leuştean adopted $\operatorname{Spec}(L)$ (respectively,
$\operatorname{Max}(L)$) as the base space and $\operatorname{BL}$-algebras
(respectively, local $\operatorname{BL}$-algebras) as the stalks, obtaining
the sheaf representation and the compact representation of
$\operatorname{BL}$-algebras [3, 16].
In [8], Gehrke and van Gool dealt with sheaf representations of a
$\mathcal{V}$-algebra. In Definition 3.1 of [8], they required that the
$\mathcal{V}$-algebra $A$ be isomorphic to $FY$, where $F$ is a sheaf and $FY$
is the algebra of global sections of $F$. In this paper, we loosen the
isomorphism condition of [8] and the requirement on the stalks of [6], and
further extend the above results to more general structures; namely, we define
the sheaf spaces of residuated lattices and obtain a sheaf representation of
residuated lattices.
## 2 Preliminaries
In this section, we recall some basic notions and results to be used in the
sequel.
###### Definition 2.1.
([20]) A residuated lattice is an algebra
$(L,\wedge,\vee,\otimes,\rightarrow,0,1)$ satisfying the following conditions:
1. (1)
$(L,\wedge,\vee,0,1)$ is a bounded lattice;
2. (2)
$(L,\otimes,1)$ is a commutative monoid with identity 1;
3. (3)
for any $x,y,z\in L$, $x\otimes y\leq z$ iff $x\leq y\rightarrow z$.
In the following, we shall use the shorthand $L$ to denote
$(L,\wedge,\vee,\otimes,\rightarrow,0,1)$.
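As a concrete illustration (a standard example, not taken from this paper), the real unit interval with the product t-norm forms a residuated lattice:

```latex
% Standard example (product/Goguen algebra), included for illustration only.
L=\bigl([0,1],\ \min,\ \max,\ \otimes,\ \rightarrow,\ 0,\ 1\bigr),\qquad
x\otimes y = x\cdot y,\qquad
x\rightarrow y=
\begin{cases}
1, & \text{if } x\le y,\\
y/x, & \text{if } x>y.
\end{cases}
```

Residuation (condition (3)) holds since for $y>0$, $x\cdot y\le z$ iff $x\le z/y$, i.e. iff $x\le y\rightarrow z$; for $y=0$ both sides of condition (3) hold trivially.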
###### Definition 2.2.
([13, 19]) A nonempty subset $F$ of a residuated lattice $L$ is called a
filter if
1. (1)
$x\in F$ and $x\leq y$ imply $y\in F$;
2. (2)
$x\in F$ and $y\in F$ imply $x\otimes y\in F$.
###### Theorem 2.3.
([13, 19]) A nonempty subset $F$ of a residuated lattice $L$ is a filter if and only if
1. (1)
$1\in F$;
2. (2)
$x\in F$ and $x\rightarrow y\in F$ imply $y\in F$.
###### Remark 2.4.
A filter $F$ is _proper_ if $F\neq L$. We will use $\mathcal{F}(L)$ to denote the
set of all filters of a residuated lattice $L$. Note that $\\{1\\}$ and $L$
are filters. For any $X\subseteq L$, the filter of $L$ generated by $X$ (the
smallest filter containing $X$) will be denoted by <$X$>. In particular, the
filter generated by $\\{a\\}$ will be denoted by <$a$>. To each filter $F$, we
can associate a congruence relation $\equiv_{F}$ on $L$ given by
$x\equiv_{F}y$ iff $((x\rightarrow y)\otimes(y\rightarrow x))\in F.$
Let $L/F$ denote the set of the congruence classes of $\equiv_{F}$, i.e.,
$L/F=\\{x/F|x\in L\\}$, where
$x/F:=\\{y\in L|y\equiv_{F}x\\}.$ Define operations on $L/F$ as follows:
$x/F\sqcap y/F=(x\wedge y)/F$, $x/F\sqcup y/F=(x\vee y)/F$,
$x/F\odot y/F=(x\otimes y)/F$, $x/F\rightharpoonup y/F=(x\rightarrow y)/F$.
###### Remark 2.5.
([11, 12]) It is easy to show that if $\\{F_{i}:i\in
I\\}\subseteq\mathcal{F}(L)$ is a directed family ($\forall i_{1},i_{2}\in
I,\exists i_{s}\in I$ such that $F_{i_{1}}\subseteq F_{i_{s}}$ and
$F_{i_{2}}\subseteq F_{i_{s}}$), then $\bigcup_{i\in
I}F_{i}\in\mathcal{F}(L)$. Thus, for any $x\in L,$ <$x$>$\ll$<$x$> holds in
the complete lattice $(\mathcal{F}(L),\subseteq)$ (see [11, 12] for the
definition of the way below relation $\ll$). It’s not clear whether
$F\in\mathcal{F}(L)$ and $F\ll F$ implies $F=$ <$x$> for some $x\in L$.
###### Lemma 2.6.
([1]) Let $F$ be a filter of a residuated lattice $L$. Then
($L/F,\sqcap,\sqcup,\odot,\rightharpoonup,0/F,1/F)$ is a residuated lattice.
###### Remark 2.7.
([1]) The following properties hold on any residuated lattice:
1. (1)
$x\vee(y\otimes z)\geq(x\vee y)\otimes(x\vee z)$;
2. (2)
<$x$>$\cap$<$y$>$=$ <$x\vee y$>.
###### Definition 2.8.
([1]) A proper filter $P$ of a residuated lattice $L$ is called a prime filter
if for any $x,y\in L$, $x\vee y\in P$ implies $x\in P$ or $y\in P$.
The set of all prime filters of a residuated lattice $L$ is called the _prime
spectrum_ of $L$ and is denoted by $\operatorname{Spec}(L)$.
###### Lemma 2.9.
Let $P$ be a proper filter of a residuated lattice $L$. Then $P$ is a prime
filter iff $F_{1}\cap F_{2}\subseteq P$ implies $F_{1}\subseteq P$ or
$F_{2}\subseteq P$ for any $F_{1},F_{2}\in\mathcal{F}(L)$.
###### Proof 2.10.
Assume that $F_{1}\cap F_{2}\subseteq P$ with $F_{1}\nsubseteq P$ and
$F_{2}\nsubseteq P$. Then there exist $x\in F_{1}$ and $y\in F_{2}$ such that
$x\notin P$ and $y\notin P$. Thus $x\vee y\in F_{1}\cap F_{2}$, and $x\vee
y\notin P$ since $P$ is a prime filter. This shows that $F_{1}\cap
F_{2}\nsubseteq P$, a contradiction. Conversely, assume that $x\vee y\in P$
with $x\notin P$ and $y\notin P$. Then <$x$>$\nsubseteq P$ and
<$y$>$\nsubseteq P$, and therefore <$x$>$\cap$<$y$>$\nsubseteq P$. By Remark
2.7 (2), <$x\vee y$>$=$<$x$>$\cap$<$y$>$\nsubseteq P$, that is, $x\vee
y\notin P$, again a contradiction.
Next, for any $X\subseteq L$, we will write $D(X)=\\{P\in$
Spec$(L)|X\nsubseteq P\\}$. For any $a\in L$, $D(\\{a\\})$ shall be denoted
simply by $D(a)$.
###### Lemma 2.11.
Let $L$ be a residuated lattice. Then
1. (1)
$X\subseteq Y\subseteq L$ implies $D(X)\subseteq D(Y)$;
2. (2)
$D(X)=D$(<$X$>).
###### Proof 2.12.
(1) is trivial.
(2) Since $X\subseteq$ <$X$>, from (1), we have that $D(X)\subseteq D$(<$X$>).
Conversely, suppose $P\in D$(<$X$>), then <$X$>$\nsubseteq P$. It follows, by
the definition of <$X$>, that $X\nsubseteq P$. That is, $P\in D(X)$. Thus, we
have $D(X)=$ D(<$X$>).
We now recall some basic notions about topology to be used later. For more
about these, we refer to [15].
A topological space is a pair $(X,\tau)$, where $X$ is a nonempty set and
$\tau$ is a family of subsets of $X$, called the topology, such that (i)
$\emptyset,X\in\tau$, (ii) a finite intersection of members of $\tau$ is in
$\tau$ and (iii) an arbitrary union of members of $\tau$ is in $\tau$.
The members of $\tau$ are called _open sets_ of $X$ and the elements of $X$
are called _points_. A _neighbourhood of a point $x$_ in a topological space
$X$ is a subset $W\subseteq X$ such that there exists an open set $U$ of $X$
satisfying $x\in U\subseteq W$. A set $U$ is open iff $U$ is the neighbourhood
of every $x\in U$. A _base $\mathcal{B}$_ for a topology $\tau$ is a
collection of open sets in $\tau$ such that every open set in $\tau$ is a
union of some members of $\mathcal{B}$.
###### Lemma 2.13.
A collection $\mathcal{B}$ of subsets of set $X$ is the base for some topology
iff $X=\bigcup\\{V:V\in\mathcal{B}\\}$ and if $V_{1},V_{2}\in\mathcal{B},x\in
V_{1}\cap V_{2}$, then there exists $V\in\mathcal{B}$ such that $x\in
V\subseteq V_{1}\cap V_{2}$.
A function $f:X\longrightarrow Y$ from a topological space $(X,\tau)$ to a
topological space $(Y,\sigma)$ is _continuous at a point $x\in X$_ if for any
neighbourhood $V$ of $f(x)$, there is a neighbourhood $U$ of $x$ such that
$f(U)\subseteq V$. The function is called _continuous_ if it is continuous
everywhere. For any function $f:X\longrightarrow Y$ between two topological
spaces, $f$ is continuous iff for any open set $W$ of $Y$, $f^{-1}(W)$ is open
in $X$ iff for any open set $V$ in a base $\mathcal{B}$ of $Y$, $f^{-1}(V)$ is
open in $X$. A function $f:X\longrightarrow Y$ between two topological spaces
$X$ and $Y$ is an _open function_ if for any open set $U$ of $X$, $f(U)$ is an
open set of $Y$. A function $f:X\longrightarrow Y$ between two topological
spaces $X$ and $Y$ is an open function iff for any open set $W$ in a base of
$X$, $f(W)$ is open in $Y$. A bijective function $f:X\longrightarrow Y$
between two topological spaces is a _homeomorphism_ if both $f$ and $f^{-1}$
are continuous. A bijective function $f:X\longrightarrow Y$ between two
topological spaces is a homeomorphism iff $f$ is continuous and open.
###### Theorem 2.14.
For any residuated lattice $L$, the family $\\{D(X)|X\subseteq L\\}$ is a
topology on Spec($L$), which we call the _Stone topology_ on $L$.
###### Proof 2.15.
We complete the proof by verifying each of the following.
(1) $D(L)=\operatorname{Spec}(L)$ and $D(1)=\emptyset$.
(2) For any $X\subseteq L$ and $Y\subseteq L$, $D(X)\bigcap
D(Y)=D$(<X>$\cap$<Y>).
(3) For any family $\\{X_{i}|i\in I\\}$ of subsets of $L$, $D(\bigcup_{i\in
I}X_{i})=\bigcup_{i\in I}D(X_{i})$.
For any $P\in\operatorname{Spec}(L),L\nsubseteq P$. Thus
$D(L)=\operatorname{Spec}(L)$. For any
$P\in\operatorname{Spec}(L),\\{1\\}\subseteq P$. Hence $P\notin D(1)$.
Therefore $D(1)=\emptyset$. Thus (1) holds.
Since <X>$\cap$<Y>$\subseteq$<X>,<Y>, by Lemma 2.11, we have $D$(<X>$\cap$<Y>)
$\subseteq D$(<$X$>) $\bigcap D$(<$Y$>) $=D(X)\bigcap D(Y)$. Conversely,
suppose that $P\in D(X)\bigcap D(Y)$, then $X\nsubseteq P$ and $Y\nsubseteq
P$. Hence <$X$>$\nsubseteq P$ and <$Y$>$\nsubseteq P$. By Lemma 2.9, we have
<$X$>$\cap$<$Y$>$\nsubseteq P$. This shows that $P\in D($<X>$\cap$<Y>).
Therefore $D(X)\bigcap D(Y)=D$(<$X$>) $\bigcap D$(<$Y$>)$\subseteq
D$(<$X$>$\cap$<$Y$>). Hence (2) holds.
Lastly, we verify (3). Suppose that $P\in D(\bigcup_{i\in I}X_{i})$, then
there exists $i\in I$ such that $X_{i}\nsubseteq P$. Thus we have $P\in
D(X_{i})\subseteq\bigcup_{i\in I}D(X_{i})$. Hence $D(\bigcup_{i\in
I}X_{i})\subseteq\bigcup_{i\in I}D(X_{i})$. The reverse inclusion holds by
Lemma 2.10 (1).
###### Remark 2.16.
By Lemma 2.10 (2) and Theorem 2.12, we know that the open sets in the spectrum
$\operatorname{Spec}(L)$ are exactly the subsets in
$\\{D(F):F\in\mathcal{F}(L)\\}$.
###### Theorem 2.17.
For any residuated lattice $L$, the family $\\{D(a)\\}_{a\in L}$ is a base for
the _Stone topology_ on $\operatorname{Spec}(L)$.
###### Proof 2.18.
Suppose that $X\subseteq L$ and $D(X)$ is an arbitrary open set of
$\operatorname{Spec}(L)$, then $D(X)=D(\bigcup_{a\in X}\\{a\\})=\bigcup_{a\in
X}D(a)$. Hence every open set of $\operatorname{Spec}(L)$ is a union of
members of $\\{D(a)\\}_{a\in L}$.
###### Proposition 2.19.
For any $P\in\operatorname{Spec}(L)$, $O(P)$ is a proper filter of a
residuated lattice $L$ satisfying $O(P)\subseteq P$, where $O(P)=\\{x\in
L|a\vee x=1$ for some $a\in L-P\\}$.
###### Proof 2.20.
Since $1\notin L-P$, it follows immediately that $0\notin O(P)$. If $x\in
O(P)$ and $x\leq y$, then there exists $a\in L-P$ such that $a\vee x=1$. Hence
$1=x\vee a\leq y\vee a$. Therefore $y\vee a=1$, showing that $y\in O(P)$.
Next, if $x,y\in O(P)$, then there exist $a,b\in L-P$ such that $a\vee x=1$
and $b\vee y=1$. So $a\vee b\in L-P$, because $P$ is a prime filter of $L$.
Thus $(a\vee b)\vee(x\otimes y)\geq(a\vee b\vee x)\otimes(a\vee b\vee
y)=1\otimes 1=1$. Therefore $(a\vee b)\vee(x\otimes y)=1$. This shows that
$x\otimes y\in O(P)$. For any $x\in O(P)$, there exists $a\in L-P$ such that
$a\vee x=1\in P$. Thus $x\in P$.
###### Example 2.21.
Let $L=\\{0,a,b,c,1\\}$ with $0<a,b<c<1$ and $a,b$ incomparable. The
operations $\otimes$ and $\rightarrow$ are defined as follows:
$\otimes$ | 0 | $a$ | $b$ | $c$ | 1
---|---|---|---|---|---
0 | 0 | 0 | 0 | 0 | 0
$a$ | 0 | $a$ | 0 | $a$ | $a$
$b$ | 0 | 0 | $b$ | $b$ | $b$
$c$ | 0 | $a$ | $b$ | $c$ | $c$
1 | 0 | $a$ | $b$ | $c$ | 1
$\rightarrow$ | 0 | $a$ | $b$ | $c$ | 1
---|---|---|---|---|---
0 | 1 | 1 | 1 | 1 | 1
$a$ | $b$ | 1 | $b$ | 1 | 1
$b$ | $a$ | $a$ | 1 | 1 | 1
$c$ | 0 | $a$ | $b$ | 1 | 1
1 | 0 | $a$ | $b$ | $c$ | 1
Then $L$ becomes a residuated lattice (see [1]). The filters of $L$ are
$\\{1\\},\\{c,1\\}$, $\\{a,c,1\\},\\{b,c,1\\}$ and $L$. It is easy to check
that the prime filters of $L$ are $\\{a,c,1\\},\\{b,c,1\\}$, and
$O(\\{a,c,1\\})=\\{1\\}$, $O(\\{b,c,1\\})=\\{1\\}$.
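The claims in Example 2.21 can be checked mechanically. Below is a small Python sketch (an illustration, not part of the paper) that enumerates the filters of this five-element lattice, tests primeness via the characterization "$x\rightarrow y\in P$ or $y\rightarrow x\in P$ for all $x,y$" (a standard equivalent for prime filters of residuated lattices; an assumption here, since the paper's definition appears earlier in the text), and computes $O(P)$.

```python
from itertools import combinations

L = ["0", "a", "b", "c", "1"]

# Order of Example 2.21: 0 < a, b < c < 1, with a and b incomparable.
leq = {(x, x) for x in L}
leq |= {("0", y) for y in L} | {(x, "1") for x in L}
leq |= {("a", "c"), ("b", "c"), ("c", "1")}

def join(x, y):
    ubs = [z for z in L if (x, z) in leq and (y, z) in leq]
    return next(z for z in ubs if all((z, w) in leq for w in ubs))

# Operation tables transcribed from the example (rows indexed by L).
ot  = {"0": "00000", "a": "0a0aa", "b": "00bbb", "c": "0abcc", "1": "0abc1"}
im  = {"0": "11111", "a": "b1b11", "b": "aa111", "c": "0ab11", "1": "0abc1"}
otimes = {(x, y): ot[x][j] for x in L for j, y in enumerate(L)}
arrow  = {(x, y): im[x][j] for x in L for j, y in enumerate(L)}

def is_filter(F):
    return ("1" in F
            and all(y in F for x in F for y in L if (x, y) in leq)  # upward closed
            and all(otimes[(x, y)] in F for x in F for y in F))     # closed under otimes

filters = [frozenset(S) for r in range(1, len(L) + 1)
           for S in combinations(L, r) if is_filter(frozenset(S))]

def is_prime(P):  # proper filter with x->y in P or y->x in P for all x, y
    return set(P) != set(L) and all(
        arrow[(x, y)] in P or arrow[(y, x)] in P for x in L for y in L)

primes = [P for P in filters if is_prime(P)]

def O(P):
    return frozenset(x for x in L
                     if any(join(a, x) == "1" for a in L if a not in P))

print(sorted(sorted(F) for F in filters))
print({tuple(sorted(P)): sorted(O(P)) for P in primes})
```

Running this reproduces the example: the filters are $\\{1\\},\\{c,1\\},\\{a,c,1\\},\\{b,c,1\\}$ and $L$, the prime filters are $\\{a,c,1\\}$ and $\\{b,c,1\\}$, and $O(P)=\\{1\\}$ for both.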
###### Example 2.22.
Let $L=\\{0,a,b,1\\}$ with $0<a,b<1$ and $a,b$ incomparable. The operations
$\otimes$ and $\rightarrow$ are defined as follows:
$\otimes$ | 0 | $a$ | $b$ | 1
---|---|---|---|---
0 | 0 | 0 | 0 | 0
$a$ | 0 | $a$ | 0 | $a$
$b$ | 0 | 0 | $b$ | $b$
1 | 0 | $a$ | $b$ | 1
$\rightarrow$ | 0 | $a$ | $b$ | 1
---|---|---|---|---
0 | 1 | 1 | 1 | 1
$a$ | $b$ | 1 | $b$ | 1
$b$ | $a$ | $a$ | 1 | 1
1 | 0 | $a$ | $b$ | 1
It is routine to verify that with the above operations, $L$ is a residuated
lattice and the filters of $L$ are $\\{1\\},\\{a,1\\},\\{b,1\\}$ and $L$. In
addition, the prime filters of $L$ are $\\{a,1\\},\\{b,1\\}$ and
$O(\\{a,1\\})=\\{a,1\\}$, $O(\\{b,1\\})=\\{b,1\\}$.
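To make Theorems 2.14 and 2.17 concrete, one can list the basic open sets $D(a)$ for the four-element lattice of Example 2.22, whose prime filters are $\\{a,1\\}$ and $\\{b,1\\}$ as computed above. A small Python sketch (illustrative only, hard-coding the two prime filters from the example):

```python
# Spectrum of the lattice of Example 2.22: its two prime filters.
L = ["0", "a", "b", "1"]
P1, P2 = frozenset({"a", "1"}), frozenset({"b", "1"})
spec = [P1, P2]

def D(x):
    """Basic open set D(x) = {P in Spec(L) : x not in P} (Theorem 2.17)."""
    return frozenset(P for P in spec if x not in P)

base = {x: D(x) for x in L}
# D(1) is empty, D(0) = Spec(L), and D(a), D(b) are the two singletons,
# so the Stone topology on this two-point spectrum is discrete.
print({x: sorted(map(sorted, base[x])) for x in L})
```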
## 3 The sheaf representations of residuated lattices
In this section, we introduce the notion of sheaf space of residuated lattices
and construct the sheaf representations of residuated lattices.
###### Definition 3.1.
A sheaf space of residuated lattices is a triple $(E,p,X)$ satisfying the
following conditions:
1. (1)
Both $E$ and $X$ are topological spaces.
2. (2)
$p:E\longrightarrow X$ is a local homeomorphism from $E$ onto $X$, i.e. for
any $e\in E$, there are open neighbourhoods $U$ and $U^{\prime}$ of $e$ and
$p(e)$ such that $p$ maps $U$ homeomorphically onto $U^{\prime}$.
3. (3)
For any $x\in X,p^{-1}(\\{x\\})=E_{x}$ is a residuated lattice.
4. (4)
The functions defined by $(a,b)\longmapsto a\wedge_{x}b,(a,b)\longmapsto
a\vee_{x}b,(a,b)\longmapsto a\otimes_{x}b,(a,b)\longmapsto a\rightarrow_{x}b$
from the set $\\{(a,b)\in E\times E|p(a)=p(b)\\}$ into $E$ are continuous,
where $x=p(a)=p(b)$.
5. (5)
The functions $\underline{0},\underline{1}:X\longrightarrow E$ assigning to
every $x$ in $X$ the $0_{x}$ and $1_{x}$ of $E_{x}$ respectively, are
continuous.
###### Remark 3.2.
In Definition 3.1, $E$ is usually called the total space, $X$ the base
space, and $E_{x}$ the stalk of $E$ at $x\in X$.
###### Definition 3.3.
Let $(E,p,X)$ be a sheaf space of residuated lattices. For any $Y\subseteq X$,
a function $\sigma:Y\longrightarrow E$ is called a section over $Y$ if it is
continuous such that for any $y\in Y,p(\sigma(y))=y$.
###### Remark 3.4.
If we define the operations pointwise on the set of all sections over $Y$,
this set becomes a residuated lattice, which we denote by $\Gamma(Y,E)$. The
elements of $\Gamma(X,E)$ are called _global sections_.
###### Definition 3.5.
([19]) Suppose that $L$ and $L^{\prime}$ are residuated lattices. A residuated
lattice morphism is a function $h:L\longrightarrow L^{\prime}$ such that
$h(a\wedge_{L}b)=h(a)\wedge_{L^{\prime}}h(b),h(a\vee_{L}b)=h(a)\vee_{L^{\prime}}h(b),h(a\otimes_{L}b)=h(a)\otimes_{L^{\prime}}h(b),h(a\rightarrow_{L}b)=h(a)\rightarrow_{L^{\prime}}h(b)$
and $h(0)=0^{\prime},h(1)=1^{\prime}$.
###### Definition 3.6.
A sheaf representation of a residuated lattice $L$ will mean an injective
residuated lattice morphism $\phi:L\longrightarrow\Gamma(X,E)$ from $L$ to the
residuated lattice $\Gamma(X,E)$ of global sections of a sheaf space of
residuated lattices $(E,p,X)$.
###### Lemma 3.7.
Let $(E,p,X)$ be a sheaf space of residuated lattices. If we define
$p_{x}:\Gamma(X,E)\longrightarrow E_{x}$ by $p_{x}(\sigma)=\sigma(x)$, then
for any $x\in X,$ $p_{x}$ is a residuated lattice morphism.
###### Proof 3.8.
Here we only prove that $p_{x}$ preserves $\otimes$; the proofs for the other
operations are similar. For any $\sigma,\mu\in\Gamma(X,E)$,
$p_{x}(\mu\otimes\sigma)=(\mu\otimes\sigma)(x)=\mu(x)\otimes_{x}\sigma(x)=p_{x}(\mu)\otimes_{x}p_{x}(\sigma)$,
and
$p_{x}(\underline{0})=\underline{0}(x)=0_{x},p_{x}(\underline{1})=\underline{1}(x)=1_{x}.$
###### Lemma 3.9.
If $L$ is a residuated lattice, then for any $a\in
L,V(a)=\\{P\in\operatorname{Spec}(L)|a\in O(P)\\}$ is open in
$\operatorname{Spec}(L)$.
###### Proof 3.10.
Assume that $P\in V(a)$, then $a\in O(P)$. Thus there exists $b\in L-P$ such
that $a\vee b=1$. If $Q\in D(b)$, then $b\notin Q$. Hence $a\in O(Q)$, i.e.
$Q\in V(a)$. Therefore $P\in D(b)\subseteq V(a)$. This shows that $V(a)$ is
open.
In the sequel, we will construct a sheaf space for each residuated lattice
using the residuated lattice $L$ and the topological space
$\operatorname{Spec}(L)$. Let $E_{L}$ be the disjoint union of the set
$\\{L/O(P)\\}_{P\in\operatorname{Spec}(L)}$ and
$\pi:E_{L}\longrightarrow\operatorname{Spec}(L)$ the canonical projection.
###### Theorem 3.11.
Let $L$ be a residuated lattice. Then the family
$\mathcal{B}=\\{D(F,a):F\in\mathcal{F}(L)$ and $a\in L\\}$ is a base for a
topology on $E_{L}$, where $D(F,a)=\\{a_{P}:P\in D(F)\\}$ and $a_{P}=a/O(P)$.
###### Proof 3.12.
We complete the proof in two steps.
(i) For every $D_{1},D_{2}\in\mathcal{B}$ and $x\in D_{1}\cap D_{2}$, there
exists a $D\in\mathcal{B}$ such that $x\in D\subseteq D_{1}\cap D_{2}$.
Take $D_{1}=D(F_{1},a),D_{2}=D(F_{2},b)$ with $F_{1},F_{2}\in\mathcal{F}(L)$
and $a,b\in L$. Suppose that $x\in D(F_{1},a)$ and $x\in D(F_{2},b)$; then
there exist $P\in D(F_{1})$ and $Q\in D(F_{2})$ such that $x=a_{P}=b_{Q}$. Thus
$P=Q$ and $(a\rightarrow b)\otimes(b\rightarrow a)\in O(P)$. Hence $P\in
D(F_{1})\cap D(F_{2})\cap V((a\rightarrow b)\otimes(b\rightarrow a)):=W$. By
Remark 2.16 and Lemma 3.9, we have that $W$ is open in $\operatorname{Spec}(L)$. Hence there
exists a filter $F$ such that $P\in D(F)\subseteq W\subseteq D(F_{1})\cap
D(F_{2}).$ Therefore $D(F,a)=\\{a_{P}:P\in D(F)\\}\subseteq D(F_{1},a)$ and
$D(F,a)=\\{a_{P}:P\in D(F)\\}$ $=\\{b_{P}:P\in D(F)\\}\subseteq\\{b_{P}:P\in
D(F_{2})\\}=D(F_{2},b)$, because $(a\rightarrow b)\otimes(b\rightarrow a)\in
O(P)$. Therefore $x\in D(F,a)\subseteq D(F_{1},a)\cap D(F_{2},b)$.
(ii) For every $x\in E_{L}$, there exists a $D\in\mathcal{B}$ with $x\in D$.
Suppose that $x\in E_{L}$; then there exist $a\in L$ and
$P\in\operatorname{Spec}(L)$ such that $x=a_{P}$. Moreover, there exists
$G\in\mathcal{F}(L)$ such that $P\in D(G)$. This shows that $x\in D(G,a)$.
In the sequel, we will use $\mathcal{T}(\mathcal{B})$ to denote the topology
on $E_{L}$ generated by the above $\mathcal{B}$.
###### Theorem 3.13.
The assignment $\pi:E_{L}\longrightarrow\operatorname{Spec}(L)$ defined by
$a_{P}\longmapsto P$ is a local homeomorphism of
$(E_{L},\mathcal{T}(\mathcal{B}))$ onto $\operatorname{Spec}(L)$.
###### Proof 3.14.
The mapping $\pi$ is well defined and it is clear that $\pi$ is surjective.
Suppose that $a_{P}\in E_{L}$ and $U=D(F,a)$ is an open neighbourhood of
$a_{P}$ from $\mathcal{B}$. Obviously, $\pi(D(F,a))=D(F)$. The restriction
$\pi_{U}$ of $\pi$ to $U$ is injective from $U$ into $D(F)$.
(i) $\pi_{U}$ is continuous: In fact, suppose that $D(G)$ is an open set of
$\operatorname{Spec}(L)$, then $D(F)\cap D(G)=D(F\cap G)$ is a base open set
in $D(F)$. Also $\pi^{-1}_{U}(D(F\cap G))=\\{a_{P}:P\in D(F\cap G)\\}=D(F\cap
G,a)$ and it is an open subset of $D(F,a)$.
(ii) $\pi_{U}$ is open: To see this, assume that $D(H,b)$ is a base open set
of $E_{L}$. Then $D(H,b)\cap U$ is a base open subset of $U$. Also
$\pi_{U}(U\cap D(H,b))=D(F)\cap D(H)$, which is open in $D(F)$.
###### Proposition 3.15.
For any $a\in L$, the function $\hat{a}:\operatorname{Spec}(L)\longrightarrow
E_{L}$ defined by $\hat{a}(P)=a_{P}$ is a global section of
$(E_{L},\pi,\operatorname{Spec}(L))$.
###### Proof 3.16.
First, $\pi(\hat{a}(P))=\pi(a_{P})=P$. Next we prove that $\hat{a}$ is
continuous. Actually, for any $D(F,a)\in\mathcal{B},$
${\hat{a}}^{-1}(D(F,a))=D(F)$, which is open in $\operatorname{Spec}(L)$. And
for any $b\in L$, $b\neq a,D(F,b)\in\mathcal{B},$
${\hat{a}}^{-1}(D(F,b))=D(F)\bigcap\\{P|a_{P}=b_{P}\\}=D(F)\bigcap\\{P\in\operatorname{Spec}(L)|(a\rightarrow b)\otimes(b\rightarrow a)\in O(P)\\}=D(F)\bigcap\\{P\in\operatorname{Spec}(L)|a\rightarrow b\in O(P)\\}\bigcap\\{P\in\operatorname{Spec}(L)|b\rightarrow a\in O(P)\\}=D(F)\bigcap V(a\rightarrow b)\bigcap V(b\rightarrow a).$
By Remark 2.16 and Lemma 3.9, we know that ${\hat{a}}^{-1}(D(F,b))$ is open in
$\operatorname{Spec}(L)$.
###### Corollary 3.17.
The functions $\hat{0}:\operatorname{Spec}(L)\longrightarrow E_{L}$ and
$\hat{1}:\operatorname{Spec}(L)\longrightarrow E_{L}$ are global sections of
$(E_{L},\pi,\operatorname{Spec}(L))$.
Let $E_{L}\vartriangle E_{L}=\bigcup\\{E_{P}\times E_{P}:P\in$ Spec$(L)\\}$
and equip $E_{L}\vartriangle E_{L}$ with the subspace topology of the product
space $E_{L}\times E_{L}$. It is well known that a base for the topology on
$E_{L}\times E_{L}$ is $\mathcal{B}^{\prime}=\\{D(F,a)\times
D(G,b):F,G\in\mathcal{F}(L)$ and $a,b\in L\\}$. Thus a base for the induced
topology on $E_{L}\vartriangle E_{L}$ is given by
$\mathcal{B}^{\prime\prime}=\\{(B(a,b),F):F\in\mathcal{F}(L)$ and $a,b\in
L\\}$, where $(B(a,b),F)$ is the set $\\{(a_{P},b_{P}):P\in D(F)\\}$.
###### Proposition 3.18.
For any $P\in\operatorname{Spec}(L)$, the functions $(a_{P},b_{P})\longmapsto
a_{P}\wedge_{P}b_{P},(a_{P},b_{P})\longmapsto a_{P}\vee_{P}b_{P}$,
$(a_{P},b_{P})\longmapsto a_{P}\otimes_{P}b_{P},(a_{P},b_{P})\longmapsto
a_{P}\rightarrow_{P}b_{P}$ from the set $\\{(a_{P},b_{P})\in E_{L}\times
E_{L}|\pi(a)=\pi(b)\\}$ into $E_{L}$ are continuous, where $P=\pi(a)=\pi(b)$.
###### Proof 3.19.
We only prove the continuity of the operation $\otimes_{P}$. The proofs for
the rest of the operations are similar. Let $(a_{P},b_{P})\in
E_{L}\vartriangle E_{L}$ and $D(F,a\otimes b)$ a neighbourhood of
$(a\otimes_{P}b)_{P}$. Then $(B(a,b),F)$ is a neighbourhood of
$(a_{P},b_{P})$, whose image by $\otimes_{P}$ is contained in $D(F,a\otimes
b)$.
###### Theorem 3.20.
For any residuated lattice $L$, $(E_{L},\pi,\operatorname{Spec}(L))$ is a
sheaf space of $L$.
###### Proof 3.21.
For any $P\in\operatorname{Spec}(L)$, $\pi^{-1}(\\{P\\})=L/O(P)$. And for any
$P\in\operatorname{Spec}(L)$, $O(P)$ is a proper filter of $L$, thus $L/O(P)$
is a residuated lattice. By Theorem 3.13, Proposition 3.15, Corollary 3.17 and
Proposition 3.18, we deduce that $(E_{L},\pi,\operatorname{Spec}(L))$ is a
sheaf space of $L$.
###### Lemma 3.22.
([1]) If $F$ is a filter of a residuated lattice $L$ and $a\in L-F$, then
there exists a prime filter $P$ of $L$ such that $F\subseteq P$ and $a\notin
P$.
###### Proposition 3.23.
$\bigcap\\{P|P\in\operatorname{Spec}(L)\\}=\\{1\\}$.
###### Proof 3.24.
Clearly $\\{1\\}\subseteq\bigcap\\{P|P\in\operatorname{Spec}(L)\\}$.
Conversely assume that $a\neq 1$, then by Lemma 3.22, there is a
$P\in\operatorname{Spec}(L)$ such that $a\notin P$. Thus
$a\notin\bigcap\\{P|P\in\operatorname{Spec}(L)\\}$. Therefore
$\bigcap\\{P|P\in\operatorname{Spec}(L)\\}\subseteq\\{1\\}$.
For any $P\in\operatorname{Spec}(L)$, $O(P)$ is a subset of $P$ and $1\in
O(P)$, thus the result below follows immediately.
###### Corollary 3.25.
$\bigcap\\{O(P)|P\in\operatorname{Spec}(L)\\}=\\{1\\}$.
###### Theorem 3.26.
If $L$ is a residuated lattice, then the family
$\\{O(P)\\}_{P\in\operatorname{Spec}(L)}$ canonically determines a sheaf
representation of $L$.
###### Proof 3.27.
Define $\varphi:L\longrightarrow\Gamma(\operatorname{Spec}(L),E_{L})$ by
$\varphi(a)=\hat{a}$. We only prove that for any $a,b\in L,\varphi(a\otimes b)=\varphi(a)\otimes\varphi(b)$; the proofs for the rest of the operations are
similar. For any $P\in\operatorname{Spec}(L)$, $\varphi(a\otimes
b)(P)=(\widehat{a\otimes b})(P)=(a\otimes
b)/O(P)=a/O(P)\otimes_{P}b/O(P)=\hat{a}(P)\otimes_{P}\hat{b}(P)=\varphi(a)(P)\otimes_{P}\varphi(b)(P)=(\varphi(a)\otimes\varphi(b))(P)$.
Thus $\varphi(a\otimes b)=\varphi(a)\otimes\varphi(b)$. Next, we prove
that the mapping $\varphi$ is injective. Assume that $\varphi(a)=\varphi(b)$.
Then for any $P\in\operatorname{Spec}(L)$, $a_{P}=b_{P}$. Thus $(a\rightarrow
b)\otimes(b\rightarrow
a)\in\bigcap\\{O(P)|P\in\operatorname{Spec}(L)\\}=\\{1\\}$, i.e. $a=b$.
###### Problem 3.28.
For which residuated lattices $L$ is the mapping $\varphi$ surjective?
## 4 Conclusions and future work
In this paper, we investigate the properties of the family of all the prime
filters of residuated lattices. Based on this, we construct the sheaf space of
residuated lattices and obtain a sheaf representation of residuated lattices.
In [6], Ferraioli and Lettieri took the primary ideals as the counterparts of
the prime ideals and proved that every $\operatorname{MV}$-algebra is
isomorphic to the $\operatorname{MV}$-algebra of all global sections of its
sheaf space. In [8, 9], the authors proved that every
$\operatorname{MV}$-algebra $A$ is isomorphic to the
$\operatorname{MV}$-algebra of global sections of a sheaf $F$ of
$\operatorname{MV}$-algebras with linearly ordered stalks. In future work,
we will investigate when these results hold for residuated lattices;
specifically, for which residuated lattices $L$ the mapping
$\varphi:L\longrightarrow\Gamma(\operatorname{Spec}(L),E_{L})$ is surjective.
For example, is $\varphi$ surjective for every Heyting algebra $L$?
## References
* [1] Cretan, R., and A. Jeflea, _On the lattice of congruence filters of a residuated lattice_ , Annals of the University of Craiova-Mathematics and Computer Science Series 33 (2006):174 - 188. ISSN 1223-6934
* [2] Davey, B.A., _Sheaf spaces and sheaves of universal algebras_ , Mathematische Zeitschrift 134(4) (1973): 275-290.
https://doi.org/10.1007/BF01214692
* [3] Di Nola, A., I. Esposito and B. Gerla, _Local algebras in the representation of $\operatorname{MV}$-algebras_, Algebra Universalis 56 (2007):133 - 164.
https://doi.org/10.1007/s00012-007-1984-6
* [4] Di Nola, A., and L. Leuştean, _Compact representations of $\operatorname{BL}$-algebras_, Archive for Mathematical Logic 42(08) (2003):737 - 761.
https://doi.org/10.1007/s00153-003-0178-y
* [5] Dubuc, E.J., and Y.A. Poveda, _Representation theory of $\operatorname{MV}$-algebras_, Annals of Pure & Applied Logic 161(08) (2008):1024 - 1046.
https://doi.org/10.1016/j.apal.2009.12.006
* [6] Ferraioli, A.R., and A. Lettieri, _Representations of $\operatorname{MV}$-algebras by sheaves_, Mathematical Logic Quarterly 57(01) (2011):27 - 43.
https://doi.org/10.1002/malq.200910116
* [7] Filipoiu, A., and G. Georgescu, _Compact and Pierce representations of $\operatorname{MV}$-algebras_, Revue Roumaine des Mathematiques Pures et Appliquees 40(07) (1995):599 - 618. Available online at:
https://www.researchgate.net/publication/265548706.
* [8] Gehrke, M., and S.J.v. Gool, _Sheaves and duality_ , Journal of Pure and Applied Algebra 222(08) (2018):2164 - 2180.
https://doi.org/10.1016/j.jpaa.2017.09.004
* [9] Gehrke, M., S.J.v. Gool and V. Marra, _Sheaf representations of $\operatorname{MV}$-algebras and lattice-ordered abelian groups via duality_, Journal of Algebra 417 (2014):290 - 332.
https://doi.org/10.1016/j.jalgebra.2014.06.031
* [10] Ghilardi, S., and M. Zawadowski, _A sheaf representation and duality for finitely presented Heyting algebras_ , The Journal of Symbolic Logic 60(03) (1995):911 - 939.
https://doi.org/10.2307/2275765.
* [11] Gierz G., K.H. Hofmann, K. Keimel, J.D. Lawson, M. Mislove, and D.S. Scott,“Continuous Lattices and Domains, Vol.93 of Encyclopedia of Math. Appl.,” Cambridge University Press, Cambridge, U.K., 2003. ISBN: 0-521-80338-1.
https://doi.org/10.1017/CBO9780511542725
* [12] Goubault-Larrecq J., “Non-Hausdorff topology and domain theory, Vol. 22 of New Mathematical Monographs,” Cambridge University Press, N.Y, 2013. ISBN: 9781107034136.
https://doi.org/10.1017/CBO9781139524438
* [13] Hájek, P., “Metamathematics of Fuzzy Logic,” Kluwer Academic Publishers, Dordrecht, 1998. ISBN: 978-1-4020-0370-7.
https://doi.org/10.1007/978-94-011-5300-3
* [14] Höhle, U., and P. Klement, “Non-Classical Logics and their Applications to Fuzzy Subsets,” Kluwer Academic Publishers, Dordrecht, 1995. ISBN:978-94-010-4096-9.
https://doi.org/10.1007/978-94-011-0215-5
* [15] Kelley, J.L., “General Topology,” Courier Dover Publications, Mineola, N.Y., 2017. ISBN:9780486815442
* [16] Leuştean, L., _Sheaf representations of $\operatorname{BL}$-algebras_, Soft Computing 9 (2005):897 - 909.
https://doi.org/10.1007/s00500-004-0449-5
* [17] Mac Lane, S., and I. Moerdijk, “Sheaves in Geometry and Logic,” Springer-Verlag, N.Y., 1992. ISBN:978-0-387-97710-2.
https://doi.org/10.1007/978-1-4612-0927-0
* [18] Tennison, B.R., “Sheaf Theory,” Cambridge University Press, Cambridge, U.K., 1975. ISBN: 978-0-521-20784-3.
https://doi.org/10.1017/CBO9780511661761
* [19] Turunen, E., “Mathematics Behind Fuzzy Logic,” Physica-Verlag, Heidelberg, 1999. ISBN:3-7908-1221-8
* [20] Ward, M., and R.P. Dilworth, _Residuated lattices_ , Transactions of the American Mathematical Society 45(03) (1939):335 - 354.
https://doi.org/10.2307/1990008
# Weakly Supervised Learning Significantly Reduces the
Number of Labels Required for Intracranial
Hemorrhage Detection on Head CT
Jacopo Teneggi1,2, Paul H. Yi3, Jeremias Sulam4,2
1Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218
2Mathematical Institute for Data Science (MINDS), Johns Hopkins University, Baltimore, MD 21218
3University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201
4Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218
###### Abstract
Modern machine learning pipelines, in particular those based on deep learning
(DL) models, require large amounts of labeled data. For classification
problems, the most common learning paradigm consists of presenting labeled
examples during training, thus providing _strong supervision_ on what
constitutes positive and negative samples. As a result, the adequate training
of these models demands the curation of large datasets with high-quality
labels. This constitutes a major obstacle for the development of DL models in
radiology—in particular for cross-sectional imaging (e.g., computed tomography
[CT] scans)—where labels must come from manual annotations by expert
radiologists at the image or slice-level. These differ from examination-level
annotations, which are coarser but cheaper, and could be extracted from
radiology reports using natural language processing techniques. This work
studies the question of _what kind of labels_ should be collected for the
problem of intracranial hemorrhage detection in brain CT. We investigate
whether _image_ -level annotations should be preferred to _examination_ -level
ones. By framing this task as a multiple instance learning (MIL) problem, and
employing modern attention-based DL architectures, we analyze the degree to
which different levels of supervision improve detection performance. We find
that strong supervision (i.e., learning with local image-level annotations)
and weak supervision (i.e., learning with only global examination-level
labels) achieve comparable performance in examination-level hemorrhage
detection (the task of selecting the images in an examination that show signs
of hemorrhage) as well as in image-level hemorrhage detection (highlighting
those signs within the selected images). Furthermore, we study this behavior
as a function of the number of labels available during training. Our results
suggest that local labels may not be necessary at all for these tasks,
drastically reducing the time and cost involved in collecting and curating
datasets.
## 1 Introduction
Modern Deep Learning (DL) models continue to drive exciting advances across
several medical imaging tasks, from image reconstruction and enhancement [2,
17, 3, 87, 47], to automatic lesion detection and segmentation [16, 35, 52].
DL models for classification and detection are especially desirable for
Computer-Aided Diagnosis (CAD) systems in radiology, potentially supporting
clinicians in their decision-making by providing a second opinion on subtle
cases, or prioritizing the most severe ones [61, 73, 85]. Indeed, recent
results indicate that the performance of these machine learning models can be
comparable to that of expert physicians in many scenarios [64, 69], and they
hold significant promise for the automation of diagnosis, especially in
underserved areas where access to radiology expertise might be limited [48,
21, 51, 62, 90, 7].
In this work, we center our attention on the development of DL models for
Intracranial Hemorrhage (ICH) detection in head Computed Tomography (CT). In
this context, given a new CT scan, the task is to detect the presence of any
type of brain hemorrhage. ICH is a potentially life-threatening condition
consisting of bleeding inside of the brain which can have several different
causes, from trauma to drug abuse [33]. ICH accounts for approximately 10% to
20% of all strokes [5], and expert radiologists can diagnose ICH from
unenhanced head CT scans by analyzing the location, shape, and size of the
lesions [33]. The large number of head CT scans produced daily, and the
importance of a quick diagnosis for an effective treatment of severe cases,
make ICH detection one of the most popular applications of deep learning in
radiology thus far [12]. Many recent works have explored deep learning
solutions to different challenges in developing machine learning pipelines for
ICH detection, such as the volumetric nature of CT data, the windowing range,
and the lack of confidence in _black-box_ predictors [92, 46, 54, 53].
At the same time, the development of these high-performing models can be
notoriously time-consuming and expensive, largely due to the significant
amount of required training data. The most common approach to training DL
models for medical imaging classification and detection is _supervised
learning_ , wherein a collection of images with ground-truth labels is
presented to the model. These examples serve the purpose of describing what
constitutes a sample from a given class, or what a specific finding looks like
in a given image. Naturally, this requires having access to large amounts of
labeled data that must be collected by radiologists who manually annotate
hundreds or thousands of images—a laborsome and time-consuming process that
often results in very high costs [33].
Some recent research efforts have explored ways of alleviating these
limitations. _Semi-supervised_ learning approaches, for example, extract low-
quality labels automatically from clinical notes stored in the Electronic
Health Record (EHR) system of a medical institution. The authors in [32] and
[84] show how weak labels extracted automatically from clinical reports enable
whole-body abnormality detection in PET/CT and body CT, respectively. Although
semi-supervised learning alleviates the need for large amounts of data with
ground-truth annotations, collecting _some_ amount of annotated data remains
central to training and, importantly, testing these models, and the central
aforementioned limitations persist.
In detection problems in particular—where the label of a sample is determined
by the presence of a specific finding—it remains unclear _what kind_ of labels
should be sought after. In the hemorrhage detection problem described above,
should ground-truth binary labels be collected for every image in an
examination? This can be implemented by labeling an image as ‘1’ if it
contains signs of hemorrhage, or ‘0’ otherwise. Or would coarse, examination-
level annotations that only indicate the presence of hemorrhage somewhere in
the scan (but not in which image) suffice? On the one hand, it is clear that
the amount of information in each label decreases as we provide coarser
annotation. That is, there might be other findings in a scan (e.g., midline
shift effects, external hematomas, signs of prior surgery, asymmetries) that
may be highly correlated with intracranial hemorrhage in the training data. A
coarse examination-level binary label may not provide enough information to
disambiguate them. At the same time, coarser annotations can drastically
reduce annotation and data-curation time, since radiologists need only
provide a binary response for each examination.
In this work, we address these fundamental questions using a weakly supervised
approach different from semi-supervised learning: _Multiple Instance Learning_
(MIL) [29, 59, 89]. In MIL problems, one regards every input as a _bag of
instances_ , and the label of the bag is determined by the labels of its
instances. This framework naturally fits the problem of hemorrhage detection
in head CT, since an examination is considered positive (i.e., its coarse,
global label is positive) as soon as it contains at least one image with
evidence of hemorrhage (i.e., it contains an image with a positive local
label). MIL is a particular case of weakly supervised learning, wherein labels
are only available for bags (i.e., examinations) instead of instances (i.e.,
images). By employing a state-of-the-art MIL model [43] that can be trained
with either global or local labels, we study whether strong supervision with
expensive local labels leads to significantly higher performance in hemorrhage
detection in head CT, or whether weak supervision—which is cheaper to
obtain—can provide comparable models.
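As a rough illustration of this MIL setup (a hedged numpy sketch with toy dimensions, not the paper's implementation), the attention-based pooling of [43] aggregates per-image embeddings of a bag into a single examination-level representation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling (tanh variant of Ilse et al. [43]).

    H: (r, m) embeddings of the r images in one examination (the bag).
    V: (m, k) and w: (k,) are learnable attention parameters.
    Returns the bag embedding (m,) and per-image attention weights (r,).
    """
    a = softmax(np.tanh(H @ V) @ w)  # one weight per image, summing to 1
    return a @ H, a

# Toy bag: r = 5 images with m = 8 embedding dimensions and k = 4.
H = rng.normal(size=(5, 8))
z, a = attention_mil_pool(H, rng.normal(size=(8, 4)), rng.normal(size=4))
print(z.shape, a.shape)
```

A classifier on the pooled embedding then predicts the bag label, so only an examination-level label is needed for training; the attention weights additionally indicate which images drove the prediction.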
### Summary of contributions
We show that weakly supervised learning can produce DL models for ICH
detection with performance matching that of DL models trained using strong
supervision—all while using $\approx 35$-times fewer labels. Furthermore,
these weakly supervised models had better generalization on at least one
external dataset. Finally, we show that weakly supervised DL models have
comparable localization ability of ICH on both the image- and pixel-levels,
which is a key feature towards explainability and building trust with
clinician end-users. These results inform how data should be collected for
this and other similar tasks in radiology, providing a solution to the primary
bottleneck in development of high-performing DL models in medical imaging.
## 2 Results
For a positive head CT scan, we will refer to _examination-level_ hemorrhage
detection as the task of retrieving the images that contain signs of ICH; and
_image-level_ hemorrhage detection as the task of highlighting these findings
within the retrieved images. We rephrase both examination- and image-level
hemorrhage detection as MIL binary classification problems [29, 59, 89] (see
Section 4.1 for details on supervised learning and MIL), and evaluate the
performance of models trained with local (image-level) annotations and global
(examination-level) labels. We refer to the former as a strong learner
(${\mathcal{SL}}$), as it is trained via strong supervision, and to the latter
as a weak learner (${\mathcal{WL}}$), since it only uses weak supervision.
### 2.1 Datasets
We train a strong and a weak learner on the RSNA 2019 Brain CT Hemorrhage
Challenge dataset [33], which comprises 21,784 examinations (with a positive
rate of 41%) for a total of 752,803 images (with a positive rate of
14%).555For the sake of simplicity, we will refer to the RSNA 2019 Brain CT
Hemorrhage Challenge dataset as “RSNA dataset”, which is available at:
https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection. Every image
in the RSNA dataset was labeled by expert neuroradiologists with the type(s)
of hemorrhage present (i.e., epidural, intraparenchymal, intraventricular,
subarachnoid, or subdural). We use 80% of the data for training and 20% for
validation. Splits were created by random sampling of examinations, rather
than images, and the same splits were used for both models in order to
guarantee a fair comparison between them. Table 1 shows the distribution of
positive and negative labels—note that while the total _number of images_ is
the same for each model, the weak learner has access to $\approx 35$-times
fewer _total labels_; this factor is the average number of images per scan
across the dataset.
Table 1: Number of positive and negative labels in the RSNA dataset for strong and weak learners.
Learner | Training positive | Training negative | Validation positive | Validation negative
---|---|---|---|---
full supervision | 86,295 ($\approx$ 14%) | 515,635 ($\approx$ 86%) | 21,489 ($\approx$ 14%) | 129,003 ($\approx$ 86%)
weak supervision | 7,100 ($\approx$ 40%) | 10,288 ($\approx$ 60%) | 1,776 ($\approx$ 40%) | 2,572 ($\approx$ 60%)
Table 2: Number of positive and negative examinations in the CQ500 and CT-ICH datasets, alongside the total number of images contained in the two datasets.
Dataset | Positive examinations | Negative examinations | Total images
---|---|---|---
CQ500 [20] | 212 ($\approx$ 49%) | 224 ($\approx$ 51%) | 15,156
CT-ICH [41] | 36 ($\approx$ 48%) | 39 ($\approx$ 52%) | 2,539
In addition to the validation split of the RSNA dataset, we evaluate our
resulting models on two external test sets—the CQ500 dataset (436 examinations
with a positive rate of 49%) [20] and the CT-ICH dataset (75 examinations with
a positive rate of 48%) [41, 42, 36]. (The CQ500 dataset is available at:
http://headctstudy.qure.ai/dataset; the CT-ICH dataset is available at:
https://physionet.org/content/ct-ich/1.3.1/.) Table 2 shows the distribution of
positive and negative examinations in the two external test sets and their
total number of images. We note that the CQ500 dataset only provides
examination-level labels, while the CT-ICH dataset provides both image-level
labels and manual pixel-level segmentations of the bleeds performed by two
expert radiologists. Hence, we extend the CQ500 dataset with the ICH bounding
box annotations provided for this dataset by three radiologists with varying
degree of experience, available in the BHX dataset [70, 36].777The BHX dataset
is available at: https://physionet.org/content/bhx-brain-bounding-box/1.1/. We
include details on the preprocessing of the images for all three datasets in
Section 4.3.
### 2.2 Attention-based MIL enables training with local or global labels
We frame the ICH detection task as an MIL binary classification problem. We
include a detailed description of the model architectures and their training
procedures in Sections 4.2 and 4.4, respectively (code to reproduce the
experiments in this paper is available at
https://github.com/Sulam-Group/MIL_ICH). Here, we briefly describe how
state-of-the-art attention-based MIL models [43] enable us to precisely
investigate whether classical strong supervision with expensive local labels
provides an advantage over weak supervision with cheap global labels.
(a) Pictorial representation of a strong learner $h$ (i.e., ${\mathcal{SL}}$).
(b) Pictorial representation of a weak learner $H$ (i.e., ${\mathcal{WL}}$).
Figure 1: Model architectures. Fig. 1(a): The strongly supervised model makes
a prediction on every input image, and it requires their labels for training.
Fig. 1(b): The weakly supervised model makes a prediction for the entire
examination, and thus only requires their labels for training. Note that both
learners use the same encoder (i.e., a ResNet18 [40]) and the same fully-
connected layer as the final classifier.
We regard an individual image in a CT scan as a vector $x\in{\mathbb{R}}^{d}$.
Each image is associated with a binary label $y$ that indicates the presence
$(y=1)$ or absence $(y=0)$ of signs of hemorrhage in the image. A single CT
scan of a patient can be naturally seen as the stacking of $r$ images along
the scanner’s axis, i.e.
$X=[x^{(1)},x^{(2)},\dots,x^{(r)}]\in{\mathbb{R}}^{dr}$. Analogously to
images, examinations are also associated with a binary label $Y$ indicating
whether any image in the examination presents signs of hemorrhage $(Y=1)$, or
every image in the examination is healthy $(Y=0)$. Note that the label of an
examination can be determined from the labels of the images in the examination
(if they are available), since the presence of hemorrhage in any image implies
the presence of hemorrhage in the examination. This observation can be
formalized by stating that the examination’s label is the logical OR function
of the labels of the images in the examination, i.e.
$Y={\texttt{OR}}(y^{(1)},y^{(2)},\dots,y^{(r)})$.
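The OR relationship between local and global labels can be sketched directly; a minimal Python illustration (the function name is ours):

```python
def examination_label(image_labels):
    """Global examination label as the logical OR of binary image labels.

    An examination is positive (Y = 1) as soon as any of its r images
    shows signs of hemorrhage (y_i = 1), and negative (Y = 0) otherwise.
    """
    return int(any(image_labels))

examination_label([0, 0, 1, 0])  # -> 1 (one positive image suffices)
examination_label([0, 0, 0])     # -> 0 (all images are healthy)
```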
In the traditional strongly supervised setting, a predictor $h$ is trained on
a collection of $n_{s}$ training samples $\\{(x_{i},y_{i})\\}_{i=1}^{n_{s}}$,
with the goal of obtaining $h(x)$ as an accurate predictor of the label of a
new sample $x$, i.e. as an approximation of the conditional expectation of $y$
given $x$. In this work, $h$ is given by the composition of a feature
extractor $f$, implemented by a Convolutional Neural Network (CNN) that
encodes a $d$-dimensional input (here $d=512\times 512$ pixels) into a feature
vector of size 256, with a binary classifier $g$ that receives the feature
vector and returns a value in the unit interval $[0,1]$. To summarize,
$h:\leavevmode\nobreak\ {\mathbb{R}}^{d}\to[0,1]$ such that $h(x)=g(f(x))$. We
remark that training $h$ requires the collection, annotation, and curation of
$n_{s}$ pairs of input images with their respective labels, which is time-
consuming and costly.
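The composition $h(x)=g(f(x))$ can be written as a shape-level PyTorch sketch. The toy convolutional encoder below stands in for the actual ResNet18, and all sizes besides the 256-dimensional feature vector are illustrative:

```python
import torch
import torch.nn as nn

class StrongLearner(nn.Module):
    """Image-wise predictor h(x) = g(f(x)).

    `f` is a stand-in encoder mapping an image to a 256-dim feature
    vector (the paper uses a ResNet18 on 512x512 inputs); `g` is a
    linear classifier with a sigmoid, so h(x) lies in [0, 1].
    """
    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.f = nn.Sequential(                      # toy encoder for illustration
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, feature_dim),
        )
        self.g = nn.Sequential(nn.Linear(feature_dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.g(self.f(x)).squeeze(-1)

h = StrongLearner()
x = torch.randn(4, 1, 64, 64)   # a batch of 4 single-channel images
p = h(x)                        # per-image hemorrhage probabilities in [0, 1]
```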
To grant these functions the ability to learn with global labels only, we
propose an attention-based MIL DL architecture [43] that can predict the
presence of hemorrhage on entire examinations of arbitrary length $r$
directly, which we denote $H:\leavevmode\nobreak\ {\mathbb{R}}^{dr}\to[0,1]$.
This predictor accepts an entire stack of images as input and predicts the
presence or absence of ICH in it. Unlike the previous case, training such a
predictor $H$ only requires collecting $n_{w}$ training samples of pairs
$\\{(X_{i},Y_{i})\\}_{i=1}^{n_{w}}$, where $Y_{i}$ are the global labels of
the examinations—and thus, the local labels of each image are not needed.
Since there are a large number of images per examination ($r$ is about 30 for
a typical scan), the number of examination labels is much lower than that of
image labels, i.e. $n_{w}\ll n_{s}$.
The predictor $H$ is a multiple instance learning (MIL) model, as it receives
as input a collection (i.e., a bag) of $r$ images (i.e., instances). MIL has a
long tradition in machine learning [29, 74, 76, 75] and in biomedical imaging
in particular [4, 68, 18, 88]. However, its applications to the task of ICH
detection remain limited [72, 91, 57]. Similarly to previous works [72, 91,
57] we use the attention-based MIL framework recently developed in [43], which
parametrizes such an MIL predictor $H$ by composing $r$ instance-wise encoders
with an attention mechanism [9, 86], $a$, and a final classifier $g$.
Succinctly, we can write
$H(X)=g(a([f(x^{(1)}),f(x^{(2)}),\dots,f(x^{(r)})]))$. Both this MIL
predictor and the strongly supervised one are depicted in Fig. 1.
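A minimal sketch of such an attention-based MIL predictor follows. For simplicity, this illustration pools with softmax attention, whereas the trained model uses sparsemax (see Section 4.2.2), and the toy linear encoder stands in for the shared ResNet18:

```python
import torch
import torch.nn as nn

class WeakLearner(nn.Module):
    """Examination-wise MIL predictor H(X) = g(a([f(x1), ..., f(xr)])).

    `f` encodes each image, `a` pools the r feature vectors into one
    examination-level feature vector (a convex combination, since the
    attention weights sum to 1), and `g` is the final binary classifier.
    All module sizes here are illustrative.
    """
    def __init__(self, in_features: int = 64 * 64, feature_dim: int = 256):
        super().__init__()
        self.f = nn.Sequential(nn.Flatten(), nn.Linear(in_features, feature_dim))
        self.attn = nn.Linear(feature_dim, 1)   # one attention score per image
        self.g = nn.Sequential(nn.Linear(feature_dim, 1), nn.Sigmoid())

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        feats = self.f(X)                            # (r, feature_dim)
        w = torch.softmax(self.attn(feats), dim=0)   # (r, 1), weights sum to 1
        pooled = (w * feats).sum(dim=0)              # convex combination of features
        return self.g(pooled).squeeze(-1)            # single score in [0, 1]

H = WeakLearner()
X = torch.randn(6, 1, 64, 64)   # one examination of r = 6 images
y = H(X)                        # examination-level hemorrhage probability
```

Note that, as discussed next, with $r=1$ the pooling step is a no-op and the architecture collapses to the image-wise predictor.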
Importantly, to make comparisons between the local (strong) and global (weak)
predictors, the feature extractor $f$ that encodes each image in an
examination, as well as the binary classifier $g$, in $H$ are the same as the
ones described in the context of traditional supervised learning and used in
the image-level predictor $h$. Moreover, if an examination $X$ contains a
single image (i.e., $r=1$) the attention mechanism $a$ reduces to the identity
map. It follows that for these cases $H(X)=g(f(X))=h(x)$, and thus $H$ is
equivalent to the fully supervised predictor $h$. For this reason, the MIL
model $H$ generalizes the image-wise predictor $h$ while maintaining the core
feature extractor and classifier ($f$ and $g$, respectively). In this work, we
compare the resulting image-wise classifier $h$, trained using the local
annotations from every image, and examination-wise classifier $H$, trained
using only global labels for every examination.
### 2.3 MIL provides comparable performance on examination-level binary
classification
We compare the strong and weak learners on the examination-level binary
classification problem, i.e. the task of predicting whether a new examination
$X$ (with $r$ images) contains any signs of hemorrhage. For the MIL learner
$H$, the examination-level prediction is simply ${\widehat{Y}}_{w}=H(X)$. On
the other hand, the strongly supervised predictor $h$ can predict on single
images only. Since the ground-truth examination-level label $Y$ can be
expressed as the logical OR of the labels of the images in the examination, it
is natural to define the examination-level prediction of $h$ as
${\widehat{Y}}_{s}=\max(h(x^{(1)}),h(x^{(2)}),\dots,h(x^{(r)}))$, which
extends the logical OR to real-valued functions on the unit interval $[0,1]$.
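This max-aggregation rule is a one-liner; the sketch below also adds a binary decision at an illustrative operating threshold (the threshold value is our assumption, not from the source):

```python
import numpy as np

def examination_prediction(image_probs, threshold=0.5):
    """Examination-level score of the strong learner h.

    Real-valued extension of the logical OR: the examination is as
    suspicious as its most suspicious image, Y_hat = max_i h(x_i).
    The binary decision at `threshold` is for illustration only.
    """
    score = float(np.max(image_probs))
    return score, int(score >= threshold)

examination_prediction([0.1, 0.9, 0.2])  # -> (0.9, 1)
```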
Fig. 2 shows the ROC curves with their AUCs for the strong learner $h$ (i.e.,
${\mathcal{SL}}$: _strong learner_) and the MIL learner $H$ (i.e.,
${\mathcal{WL}}$: _weak learner_) on the validation split of the RSNA dataset
as well as the CQ500 and CT-ICH datasets. AUCs are compared via a one-sided
DeLong’s test [27]. Figs. 2(a) and 2(b) demonstrate that there is virtually no
difference in performance between the strong and the weak learner on the
validation split of the RSNA dataset and the CQ500 dataset, respectively. The
strong learner achieves an AUC of $0.961$, whereas the weak learner obtains an
AUC of $0.960$ ($p=0.636$) on the validation split of the RSNA dataset; on the
CQ500 dataset, they achieve AUCs of $0.901$ and $0.921$ ($p=0.147$),
respectively. In fact, Fig. 2(c) suggests that the weak learner has
significantly better generalization power on the CT-ICH dataset (AUCs of
$0.924$ and $0.954$, respectively, $p=0.032$).
(a) RSNA dataset
$(p=0.636)$. (b) CQ500 dataset
$(p=0.147)$. (c) CT-ICH dataset
$(p=0.032)$.
Figure 2: Comparison of a strong learner (${\mathcal{SL}}$) and an MIL learner
(${\mathcal{WL}}$) on the examination-level binary classification problem.
AUCs are compared via a one-sided DeLong’s test.
### 2.4 MIL provides comparable performance on examination-level hemorrhage
detection
Recall that we refer to _examination-level_ hemorrhage detection as the task
of retrieving the positive images within a positive examination, that is,
identifying those images in a scan that show signs of hemorrhage (if any are
present). For the strong learner $h$, this is no different than predicting the
presence of hemorrhage individually on each of the images in the examination,
and selecting the predicted positive images. For the MIL learner $H$, on the
other hand, there is no unique way to perform this image-wise selection. A
very popular approach relies on employing the attention mechanism $a$ as an
instance selector, since this function explicitly assigns weights (between 0
and 1) to each instance in the bag, thus reflecting some notion of importance
towards the overall label of the bag [72, 53, 77, 71]. Alternatively, one can
resort to other notions of importance, such as those based on game-theoretic
principles. Shapley coefficients provide a natural way to do this by assigning
scores to each image that reflect their contributions towards the overall
examination prediction [79, 58, 82]. We explore both of these approaches here.
Intuitively, we expect an accurate MIL predictor $H$ to assign large attention
weights to the positive images within a positive examination. Then, we select
those images whose attention weights are no smaller than a certain threshold,
$t$. In this work we use $t=1/r$, which corresponds to uniform attention
across all $r$ images in an examination, but this choice for $t$ is not
crucial and other options exist (indeed, since we employ sparsemax [60, 67,
23] for the attention mechanism, the selection is relatively insensitive to
the chosen threshold). Although attention weights are extensively used in
recent literature [72, 53, 77, 71] to select important instances, their
theoretical underpinnings remain scarce [45, 31]. On the contrary, the Shapley
value [79] has gained substantial popularity in the machine learning
literature [24] because of its precise theoretical properties. Here, we
introduce the first Shapley-based explanation method specifically designed for
deep set predictors [94] (such as the MIL predictor $H$) by extending _h-Shap_
[82], a hierarchical extension of the Shapley value (see Section 4.5.1 for
details; h-Shap is available at https://github.com/Sulam-Group/h-shap).
Similarly to the attention-based selection method, we select the images which
have a Shapley value no smaller than $t=1/r$.
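The selection rule is the same for either importance measure; a small sketch (the function name is ours):

```python
import numpy as np

def select_positive_images(importances):
    """Select the images of an examination flagged as positive.

    `importances` holds one score per image -- attention weights or
    Shapley values. An image is selected when its score is at least
    t = 1/r, the value a uniform distribution would assign to each of
    the r images. Returns the indices of the selected images.
    """
    importances = np.asarray(importances, dtype=float)
    t = 1.0 / len(importances)
    return np.flatnonzero(importances >= t)

select_positive_images([0.7, 0.3, 0.0, 0.0])  # t = 0.25 -> indices [0, 1]
```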
(a) RSNA dataset (b) CT-ICH dataset
Figure 3: Comparison of a strong learner (${\mathcal{SL}}$) and a weak learner
(${\mathcal{WL}}$) on examination-level hemorrhage detection. 3(a) Average
recall (TPR) as a function of hemorrhage sequence length on the validation
split of the RSNA dataset. 3(b) Average TPR on the CT-ICH dataset.
We compare the strong and the weak learners on examination-level hemorrhage
detection by means of their examination-level $f_{1}$ score over the true
hemorrhage sequences (i.e., series of consecutive positive images) in true
positive examinations (see Section 4.6 for details). Fig. 3(a) shows the
average recall on the RSNA validation split as a function of the number of
consecutive positive images in a hemorrhage sequence. We can appreciate how
there is no significant gap between the performance of the strong learner
${\mathcal{SL}}$ compared to the weak learner ${\mathcal{WL}}$, with either
detection strategy. Fig. 3(a) shows that, as expected, it is in general harder
for all learners to detect short hemorrhage sequences that may comprise only a
few consecutive positive images. Fig. 3(b) shows the average recall on the
CT-ICH dataset. In this case, we do not stratify the results as a function of
hemorrhage sequence length given the relatively small number of examinations
in the dataset. Here as well, we see no significant difference in
generalization power across learners or detection strategies, with a slight
advantage for the weak learner with the Shapley-based selection method.
(a) CQ500 dataset.
(b) CT-ICH dataset.
Figure 4: Example saliency maps on some predicted positive images that contain
hemorrhage. Saliency maps were obtained by applying Grad-CAM to the last
convolutional layer of both the strong learner (${\mathcal{SL}}$) and the weak
learner (${\mathcal{WL}}$), and by means of h-Shap. All saliency maps are
thresholded using Otsu’s method to reduce noise.
### 2.5 MIL provides comparable performance on image-level hemorrhage
detection
_Image-level_ hemorrhage detection refers to the localization of the
hemorrhage, or its signs, within the selected images (e.g., hyperdense
regions, spots, asymmetries). Recall that in this work, we train learners on
binary classification tasks either locally (on images for the strong learner
$h$), or globally (on examinations for the MIL learner $H$) rather than on a
segmentation task. In particular, the RSNA dataset does not provide ground-
truth segmentations of the bleeds. In this context, one can attempt to
localize the signs of hemorrhage by explaining the models’ predictions and
find their most important features (i.e., pixels) in order to bridge
classification with detection. To this end, we use and compare two machine
learning explainability methods [14, 56]: $(i)$ Grad-CAM [78] (available at
https://github.com/jacobgil/pytorch-grad-cam), which is a saliency method
based on sensitivity analysis and very popular in radiology [63, 26, 19], and
$(ii)$ h-Shap [82], an efficient hierarchical extension to
Shapley-based image explanations that provably retains several of the
theoretical benefits of game theoretic explanations (see Section 4.5.2 for
details). To our knowledge, this is the first time explanation methods have
been used to explain the predictions of a bag-level [4] MIL classifier at the
pixel-level [18]. Even though the saliency maps produced by explanation
methods provide a weaker sense of localization, they allow users to interpret
a model’s prediction and investigate their complex mechanisms.
(a) CQ500 dataset. (b) CT-ICH dataset.
Figure 5: Image-level hemorrhage detection performance for a strong learner
(${\mathcal{SL}}$) and weak learner (${\mathcal{WL}}$) on the CQ500 and CT-ICH
datasets with saliency maps obtained with Grad-CAM and h-Shap. The $f_{1}$
scores are computed between the binarized saliency maps and the ground-truth
bounding box annotations. For a fair comparison, we show the $f_{1}$ score
distributions of true positive images explained by both models (i.e., 1,162
images for the CQ500 dataset and 130 images for the CT-ICH dataset).
Fig. 4 presents an example of saliency maps for both strong and weak learners,
using both Grad-CAM and h-Shap, for every type of hemorrhage in the CQ500 and
CT-ICH datasets: epidural (EDH), intraparenchymal (IPH), intraventricular
(IVH), subarachnoid (SAH), and subdural (SDH). We remark that the CT-ICH
dataset provides manual segmentations of the ground-truth lesions, while we
use the BHX extension of the CQ500 dataset to obtain ground-truth bounding-
boxes for this latter case. Fig. 4 demonstrates that saliency maps produced by
either predictor align well with the ground-truth annotations, with no clear
advantage of the strongly supervised model. Interestingly, we can also
appreciate how the saliency maps concentrate around the target lesions rather
than other findings that may correlate well with the presence of ICH in the
training set (such as external hematomas due to injury, midline shift effects,
or compression of the ventricles).
We further quantitatively evaluate the alignment of the binarized saliency
maps with the ground-truth annotations via pixel-level $f_{1}$ scores. Fig. 5
depicts the distribution of these $f_{1}$ scores stratified by hemorrhage type
for the CQ500 and CT-ICH datasets. For a fair comparison between the strong
learner (${\mathcal{SL}}$) and the MIL weak learner (${\mathcal{WL}}$) we show
the distribution of the $f_{1}$ scores for true positive images that were both
predicted by the strong learner to contain signs of hemorrhage and selected by
the weak learner via Shapley coefficients thresholding. In particular, we
compare 1,162 images from the CQ500 dataset and 130 images from the CT-ICH
dataset. Fig. 5 confirms that there is no clear advantage of strong
supervision for image-level hemorrhage detection. Specifically, there is no
combination of learner and explanation method that consistently outperforms
all others across all types of hemorrhage and datasets. Thus, these results
suggest that image-level hemorrhage detection can be performed with comparable
performance completely without image-level labels.
### 2.6 Attention-based MIL significantly reduces the number of labels
required
So far, we investigated the performance of a strong and a weak learner trained
on the entire training split of the RSNA dataset—as typically done in these
applications [80, 13, 38]. We now study this behavior as a function of the
number of labels available to each model during training, $m$. For the strong
learner, $m$ thus refers to the number of labeled images, whereas for the weak
learner, which only has access to examination-level information, $m$ refers to
the number of labeled examinations. Note that this quantification is useful
because, if the cost of obtaining a label is comparable in both image- and
examination-wise cases, this number $m$ reflects an overall cost associated
with the annotation of a dataset. Note that in practice, it is a much easier
and faster task for an expert radiologist to quickly scroll through the images
in an examination and determine whether the whole scan has signs of
hemorrhage, rather than having to label all the images in the scan
individually. In order to account for the increased variance of the training
process with smaller numbers of samples, we repeat the training process an
increasing number of times on random subsets of images or examinations as $m$
decreases (see Section 4.7 for details). The obtained models are evaluated on
a fixed subset of 1,000 examinations from the validation split of the RSNA
dataset.
Fig. 6 shows the mean AUCs of strong and weak learners, with their 95%
confidence intervals, on the examination-level binary classification problem
as a function of the number of labels available to each model during training,
$m$. MIL learners show a slight advantage over strong learners on the CQ500
and CT-ICH datasets, while they overlap for the most part with strong learners
on the validation split of the RSNA dataset. Furthermore, the performance of
MIL learners shows a larger variance compared to strong learners. These results
suggest that although MIL learners can provide comparable or better
performance than strong learners on the examination-level binary
classification problem, they might be harder to train. This agrees with the
intuition that the MIL framework provides a weaker sense of supervision,
and the learners might need to disambiguate the true concept (i.e., ICH) from
others that might correlate well with examination-level labels.
(a) RSNA dataset (b) CQ500 dataset (c) CT-ICH dataset
Figure 6: Mean performance and 95% confidence interval of strong
(${\mathcal{SL}}$) and weak (${\mathcal{WL}}$) learners on the examination-
level binary classification problem as a function of number of labels $m$. For
the RSNA dataset, we validate models on a fixed subset of 1,000 examinations.
Note that the points after $m=1\times 10^{4}$ have zero variance because we
repeat the training process only once.
Finally, Fig. 7 depicts the mean hemorrhage-level detection $f_{1}$ scores and
their 95% confidence intervals over the validation split of the RSNA dataset
as a function of the number of labels available. Confidence intervals are
computed across repetitions of the training process with the same number of
labels, thus capturing the variance of the training process. Since we train
only one model for $m>10^{4}$ labels, the variance vanishes. We see that for
$m\leq 10^{4}$, strong supervision does in fact provide a significant
advantage over weak supervision. However, MIL learners quickly outperform
strongly supervised ones for $m\gtrsim 10^{4}$. Importantly, these results
confirm that attention-based models trained on examination-level binary labels
can provide comparable performance to traditional classifiers trained on
image-level labels while requiring $\approx 35$-times fewer labels. Note that
the curves for the weak learners end after $10^{4}$ labels because they
reach the limit of the training data size: the training split of the RSNA
dataset contains $\approx 1.7\times 10^{4}$ labeled examinations, whereas
there are about $35$ times more image labels.
Figure 7: Mean examination-level hemorrhage detection $f_{1}$ score as a
function of number of labels $m$ on a fixed subset of 1,000 examinations from
the validation split of the RSNA dataset.
## 3 Discussion
In this study, we compared the performance of predictive models for ICH
detection in Head CT scans trained with strong supervision (i.e., having one
label per image within an examination) or weak supervision (i.e., using a
single label for each entire examination). The methodology is based on recent
Multiple Instance Learning (MIL) approaches via attention mechanisms [86, 43],
which strictly generalizes predictors that are trained with strong
supervision. This framework enabled the use of models that have the same
architecture and main components in either setting, making these comparisons
precise and fair. We found that weakly supervised models had comparable
performance to strongly supervised models, despite using approximately 35
times fewer labels. On one external dataset, the weakly supervised models
actually had significantly higher performance, suggesting better
generalizability. Importantly, weakly supervised models also had comparable
ability to localize ICH on image- and pixel-levels. Altogether, these findings
indicate that image-level annotations are not necessary to train high-
performing and explainable DL models for diagnosis of ICH on head CT.
Our first result demonstrated that strong supervision is not at all necessary
for weak, or global, prediction tasks, as long as sufficient data is
available. More precisely, Fig. 2 demonstrates that whether a predictor is
trained on image-wise or examination-wise labels, they obtain virtually the
same AUC in the task of predicting the presence or absence of ICH. In one of
the studied datasets (CT-ICH [41]), there is in fact a slight advantage to the
latter, suggesting better generalizability to different clinical populations.
This is not surprising, as the weak learner is precisely trained on the task
of detecting hemorrhage at the examination level. Nevertheless, this
generalizability advantage is important, given the well-documented drops in
performance of DL models for medical imaging diagnosis on external test sets
[95], which threaten the safe deployment of DL models in real-world clinical
practice.
Interestingly, our results further demonstrate that these observations hold
even for the case of examination-level hemorrhage detection, i.e. the task of
finding the images within each examination where signs of hemorrhage are
present. More precisely, even though the MIL model was only trained on global
examination-wise labels, one can make predictions about each of the
constituent images by either studying their attention weights, or by employing
game-theoretic tools like the Shapley value [79, 58]. In either case, the
ability to detect the positive images within positive examinations is
comparable to the performance of a model trained with strong supervision, i.e.
with a label per image (Fig. 3(b)). We emphasize that the ability to identify
examination-level hemorrhage is of critical practical importance for
radiologists’ workflows. In current triage use cases of DL models in
radiology, cases with potentially actionable findings, such as ICH, are
flagged by DL models for radiologist review, after which a radiologist must
confirm whether they agree or disagree with the diagnosis. Having image
“flags” beyond the mere presence or absence of hemorrhage that show which
specific images have a hemorrhage prediction are critical for: $(i)$ allowing
a radiologist to expeditiously confirm presence or absence of hemorrhage, and
$(ii)$ building trust with radiologists and other physician end-users, who
have been shown to be less trusting of diagnostic results generated by
automated systems in medical imaging compared to those provided by human
experts [34].
When further analyzing the resulting models with both a popular saliency
method (Grad-CAM [78]) as well as newer approaches to interpretability with
theoretical guarantees (h-Shap [82]), both models indicate having captured the
same semantic concepts that constitute ICH in head CT (Fig. 4). Indeed, our
qualitative and quantitative comparisons of these saliency maps indicate that
the ability to find the corresponding hemorrhages within each of the images is
virtually the same, and only mild differences exist once stratified per ICH
type (Fig. 5). To this point, we remark that to verify whether a model did
learn the desired concept (i.e., ICH) instead of other spurious correlations
in the training data is especially important in medical imaging. As modern
machine learning models continue to become increasingly complex, gaining
insight into their decision-making process is fundamental for responsible use
in real-world scenarios. As discussed above, building trust with physician
end-users is paramount, and providing pixel-level explanations for specific
disease identification will be important towards this goal. Furthermore,
medical institutions may be required by certain laws to provide explanations
of what led an automated system to recommend a certain treatment or to
provide a specific diagnosis [37].
Our work also has limitations. First, we evaluated only a single diagnostic
use case of ICH detection on CT scans of the head, albeit with multiple
datasets from different clinical populations. However, our approach is
applicable to any other medical imaging use case that utilizes cross-sectional
imaging, including diagnosis of disease on CT of other body parts, as well as
on other imaging modalities, such as MRI. Future studies will apply our
approach to other use cases to validate its generalizability in other
diagnostic scenarios and imaging modalities. Second, while this study
demonstrated that indeed the examination-level annotations suffice for ICH
detection in CT once enough training data is available, some image-level
annotations were needed to validate our methodology. In future extensions to
other diagnostic tasks or imaging modalities, this minimal amount of locally
annotated data will also be necessary for validation purposes. This number of
local annotations is very small, however: in this work we employed 1,000
examinations of the validation split of the RSNA dataset to this end,
requiring about $35\times 10^{3}$ image-level labels. This represents less
than 6% of the number of image-level labels needed to train an alternative
strongly supervised model. Third, the weakly supervised method currently only
evaluates medical imaging data; given the potential improvements in imaging
diagnoses using multimodal AI models [1] incorporating multiple types of
medical data (e.g., imaging, clinical symptoms, laboratory values), developing
weakly supervised DL models that can incorporate multiple data types is an
important topic for future study. Finally, although Convolutional Neural
Networks (CNNs)—such as the models used in this work—remain the most popular
deep learning architecture in medical imaging, it remains important to
investigate whether these results extend to other parametrization of the
predictors and architecture choices, for example to Vision Transformers (ViTs)
[30, 49]—which are rapidly gaining popularity in the field.
In summary, our results indicate that training DL models with weak or strong
supervision provides comparable performance for the tasks of ICH detection in
head CT across three different levels of granularity: $(i)$ global binary
prediction, $(ii)$ examination-level detection, and $(iii)$ image-level
detection. Our last results explore these points further by studying the
performance of strong and weak learners on the global binary classification
problem, as well as on examination-level hemorrhage detection as a function of
the number of labels available during training. These results indicate that,
indeed, weakly supervised learning enables a significant reduction in the need
for annotations: once the number of labels provided is large enough ($m\gtrsim
5\times 10^{3}$) weakly supervised models achieve comparable performance to
strongly supervised models at a fraction of the provided labels. However, for
the strongly supervised predictor, these labels represent the number of
labeled images, whereas for the MIL predictors, $m$ represents only global
information about the entire examination, which can be easily collected, e.g. from
clinical reports. This approach could apply to other 3D cross-sectional
imaging tasks, such as MRI diagnosis, potentially saving thousands of hours of
annotation labor by radiologists [33], thereby alleviating the biggest
bottleneck in developing high-performing DL models for medical imaging
diagnosis.
## 4 Methods
### 4.1 Learning paradigms
#### 4.1.1 Supervised learning
In supervised learning settings, given input and output domains
${\mathcal{X}}$ and ${\mathcal{Y}}$, one is interested in predicting a
response $y\in{\mathcal{Y}}$ on an input $x\in{\mathcal{X}}$ with a predictor
$h:\leavevmode\nobreak\ {\mathcal{X}}\to{\mathcal{Y}}$. Given a loss function
$\ell(y,h(x))$ that penalizes the dissimilarity between the true label $y$ and
the predicted label $h(x)$, we search for a predictor $h$ with low risk over a
suitable family of predictors (e.g., Convolutional Neural Networks). This
search is usually carried out by minimizing the empirical loss over a training
set $\\{(x_{i},y_{i})\\}_{i=1}^{n_{s}}$ such that
$h=\operatorname*{arg\,min}_{h^{\prime}}\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\ell(y_{i},h^{\prime}(x_{i})).$ (1)
#### 4.1.2 Multiple-Instance Learning (MIL)
Multiple Instance Learning (MIL) [29, 59, 89] generalizes the supervised
learning framework to _bags_ of inputs. Formally, recall that ${\mathcal{X}}$
and ${\mathcal{Y}}$ are input and output domains, and let
$X=(x^{(1)},x^{(2)},\dots,x^{(r)})\in{\mathcal{X}}^{r}$, $r\in{\mathbb{N}}$
indicate a bag with $r$ _instances_. Furthermore, the MIL paradigm assumes
that the bag-level response $Y\in{\mathcal{Y}}$ is a known function of the
instance-level responses $y^{(1)},y^{(2)},\dots,y^{(r)}$, which can encompass
a wide variety of choices [11, 8, 6, 74, 75, 88].
In this work, we focus on MIL binary classification, such that
$Y=\texttt{OR}(y^{(1)},y^{(2)},\dots,y^{(r)}),$ (2)
and we search for a _bag-level_ classifier $H:\leavevmode\nobreak\
{\mathcal{X}}^{r}\to{\mathcal{Y}}$ with low risk over a suitable class of
predictors. Similarly to the supervised learning paradigm, given a loss
function $\ell(Y,H(X))$ that penalizes wrong predictions, $H$ is found by
optimization of the empirical loss over a training set
$\\{(X_{i},Y_{i})\\}_{i=1}^{n_{w}}$ of labeled bags, such that
$H=\operatorname*{arg\,min}_{H^{\prime}}\frac{1}{n_{w}}\sum_{i=1}^{n_{w}}\ell(Y_{i},H^{\prime}(X_{i})).$ (3)
Importantly, we remark that an MIL learner does not have access to the
underlying instance-level labels. Finally, note that:
* •
The examination-level binary classification problem satisfies the MIL
assumption in Eq. 2, as the global label of an examination is positive as soon
as it contains at least one image with signs of hemorrhage (i.e., a positive
image). Equivalently,
* The local image-level labels can also be phrased as an instance of Eq. 2. In
particular, an image should be labeled positively as soon as it contains signs
of hemorrhage.
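The OR aggregation in Eq. 2 can be sketched as follows (the function name is ours, not from the paper):

```python
# Sketch of the binary MIL assumption in Eq. 2: the bag-level label is the
# OR of the (hidden) instance-level labels, i.e. an examination is positive
# iff it contains at least one positive image.
def bag_label(instance_labels):
    return int(any(instance_labels))
```

The same rule applies at both granularities described above: an examination is positive as soon as any of its images is, and an image as soon as any of its regions shows hemorrhage.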
### 4.2 Model architecture details
#### 4.2.1 Strong learner
The strong learner $h$ is the composition of a feature extractor $f$ with a
binary classifier $g$ implemented by a fully connected layer with sigmoid
activation. In this work, $f$ is a ResNet18 [40] pretrained on ImageNet [28]
that encodes an input image of size $512\times 512$ pixels into a vector of
size 256, as illustrated in Fig. 1(a).
#### 4.2.2 Weak learner
In addition to the same feature extractor $f$ and final classifier $g$
employed for the strong learner, the weak learner $H$ comprises a two-layer
attention mechanism $a$ as proposed in [43] (see Fig. 1(b)). For an input
examination with $r$ images, the attention mechanism combines the $r$ image-
level feature vectors into a single examination-level feature vector, which can
be expressed as a convex combination of the image-level feature vectors. In
this work—differently from the original work in [43]—we use the _sparsemax_
activation function [60, 67, 23] rather than the softmax function to favor
sparse attention weights. (The entmax package is available at:
https://github.com/deep-spin/entmax.)
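For illustration, a standalone pure-Python sketch of sparsemax [60], which projects the score vector onto the probability simplex (the paper uses the entmax package; this version is only meant to show why the resulting attention weights can be exactly zero):

```python
def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016): Euclidean projection of the
    score vector z onto the probability simplex. Unlike softmax, it can
    assign exactly zero weight to some instances, favoring sparse attention."""
    zs = sorted(z, reverse=True)
    cumsum, tau = 0.0, 0.0
    for j, zj in enumerate(zs, start=1):
        cumsum += zj
        if 1 + j * zj > cumsum:       # zj belongs to the support
            tau = (cumsum - 1) / j    # threshold tau(z)
        else:
            break                     # all remaining scores are excluded
    return [max(zi - tau, 0.0) for zi in z]

weights = sparsemax([2.0, 0.5])  # -> [1.0, 0.0]: all mass on one instance
```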
### 4.3 Data preprocessing
The images in the three datasets used in this work were annotated by expert
neuroradiologists of varying degrees of expertise with the type(s) of
hemorrhage present in the image. We group the original classes into ‘normal’
(i.e., label 0, no type of hemorrhage) and ‘with hemorrhage’ (i.e., label 1,
any type of hemorrhage). Images are provided in DICOM and NIfTI format, so we:
1. Convert them to Hounsfield Units (HUs) [15], then
2. Window them using the standard brain window setting, i.e. WL = 40 and WW = 80 [83], and finally
3. Normalize them with min-max normalization.
This way, pixel intensities represent the same HU value (and hence, tissue)
across all datasets.
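The three steps above can be sketched as follows. The DICOM rescale slope and intercept are read from file metadata in practice; the values 1 and −1024 below are illustrative defaults, and we assume min-max normalization uses the fixed window bounds (which is what makes intensities comparable across datasets):

```python
def preprocess(raw_pixels, slope=1.0, intercept=-1024.0, wl=40.0, ww=80.0):
    # 1. Convert stored pixel values to Hounsfield Units.
    hu = [slope * p + intercept for p in raw_pixels]
    # 2. Window with the standard brain window: clip to
    # [WL - WW/2, WL + WW/2] = [0, 80].
    lo, hi = wl - ww / 2, wl + ww / 2
    windowed = [min(max(v, lo), hi) for v in hu]
    # 3. Min-max normalize to [0, 1] using the fixed window bounds, so a
    # given intensity maps to the same HU value (hence tissue) everywhere.
    return [(v - lo) / (hi - lo) for v in windowed]
```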
### 4.4 Training procedures
Experiments were performed on Nvidia Quadro RTX 5000 GPUs on a private
cluster and on the Azure Machine Learning (ML) platform [10] via Microsoft
Research’s Project InnerEye open-source software tools
(https://aka.ms/InnerEyeOSS). (Project InnerEye is available at:
https://github.com/microsoft/InnerEye-DeepLearning.)
To account for the high label imbalance in the training split of the RSNA
dataset, and for the gap in difficulty between predicting the presence of
hemorrhage and predicting its absence, models were trained using
_focal loss_ [55]—a variation of binary cross-entropy loss. All models were
trained for 15 epochs with a learning rate decay of 0.3 every 3 epochs. We
chose the best performing model across epochs according to validation
accuracy.
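For reference, focal loss [55] down-weights well-classified examples relative to binary cross-entropy via the modulating factor $(1-p_t)^{\gamma}$. A minimal sketch of its unweighted form (the class-balancing $\alpha$ factor of [55] is omitted, and $\gamma=2$ below is the default from [55], not a value stated here):

```python
from math import log

def focal_loss(p, y, gamma=2.0):
    """Unweighted focal loss for a single example: p is the predicted
    probability of the positive class, y the binary label; p_t is assumed
    to lie in (0, 1]. With gamma = 0 this reduces to cross-entropy."""
    pt = p if y == 1 else 1 - p
    return -((1 - pt) ** gamma) * log(pt)
```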
#### 4.4.1 Image-level augmentation
We use TorchIO’s [65] library of spatial and intensity
transformations. (TorchIO is available at
https://github.com/fepegar/torchio.) Specifically, every image is augmented
independently via random flips, affine transformations, deformations and
rotations, and one out of addition of random noise, random bias field, random
anisotropy, random gamma transformation, or random ghosting artifacts.
#### 4.4.2 Examination-level augmentation
We randomly sample without replacement between 10 and $r$ images within the
same examination. This sub-sampling augmentation strategy does not rely on
image-level labels and can be used in weakly supervised scenarios where
only examination-level labels are available. Intuitively, sampling at least 10
images controls the probability of flipping a positive examination into a
negative one: sampling only negative images from a positive examination
would yield a mislabeled subset, which would inherit the positive
examination label even though it contains no positive images. Although
we cannot completely rule out this event without knowing local labels, we can
reduce its probability to a tolerable level for the weak learner. Formally,
given a positive examination $X$ of length $r$ with $K$ positive images,
sample a random subset $S$ of images without replacement. Denote by $Y_{S}$
the true global label of the subset, and note that
$p_{\text{flip}}={\mathbb{P}}[Y_{S}=0\mid Y=1]$ is a decreasing function of
the size of the subset $S$, governed by the hypergeometric distribution of the
number of positive images in $S$. In
this work, we estimate $p_{\text{flip}}$ over the training split of the RSNA
dataset, and obtain $p_{\text{flip}}\leq 4\times 10^{-3}$. We remark that to
estimate $p_{\text{flip}}$ we used the image-level labels provided in the
training split of the RSNA dataset. In practical scenarios this information
can easily be replaced by prior knowledge of expert radiologists about the
problem at hand.
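Concretely, $p_{\text{flip}}$ is the probability that the without-replacement subsample contains none of the $K$ positive images, which the hypergeometric distribution gives in closed form; a sketch (function and argument names are ours):

```python
from math import comb

def p_flip(r, K, s):
    """Probability that a without-replacement subsample of size s, drawn from
    an examination with r images of which K are positive, contains no
    positive image (hypergeometric probability of zero successes)."""
    if s > r - K:
        return 0.0  # the sample must include at least one positive image
    return comb(r - K, s) / comb(r, s)
```

For instance, with the minimum subsample size of 10, any positive examination with fewer than 11 negative images can never be flipped.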
#### 4.4.3 Training strong learners
Strong learners were trained using the Adam optimizer [50] with a learning rate
of $1\times 10^{-5}$, a weight decay of $1\times 10^{-7}$, and a batch size of 64.
During training, we add a dropout layer with $p=0.50$ between the encoder $f$
and the binary classifier $g$.
#### 4.4.4 Training weak learners
Weak learners were trained using Stochastic Gradient Descent (SGD) with
momentum equal to $0.9$ [81], a learning rate of $1\times 10^{-3}$, a weight
decay of $1\times 10^{-4}$, and a batch size of 1. We remark that the choice of
batch size equal to 1 stems both from memory limitations and from gradient-
propagation imbalances through the attention mechanism for volumes with
different numbers of images. During training, we add both a dropout layer with $p=0.50$ between
the encoder $f$ and the binary classifier $g$ and a dropout layer with
$p=0.25$ after the first layer of the attention mechanism.
### 4.5 Explaining model predictions with h-Shap
We use h-Shap [82], a Shapley-based explanation method with provable runtime
and accuracy for problems that satisfy the binary MIL assumption in Eq. 2, to
select the positive images in an examination and to highlight signs of
hemorrhage within the selected images.
#### 4.5.1 Examination-level hemorrhage detection
We extend the original implementation of h-Shap to explain the examination-
level prediction of a weak learner. Since the global binary label satisfies
Eq. 2, one can explore a binary tree of the input examination and
hierarchically compute the exact Shapley coefficient of every image in the
examination [see 82, Theorem 3.4]. The symmetry axiom of the Shapley value
[79] implies that the positive images in an examination should receive the
same coefficient. Thus, one can use an importance threshold $t=1/r$ and select
those images whose Shapley values are $\geq t$. We remark that—as recently
noted by others [25, 44]—explaining predictions on sets with the Shapley value
is particularly attractive because it does not require sampling an
uninformative baseline to mask features [58]. In fact, the weak learner can
predict on sequences of arbitrary length and is permutation invariant [94],
so one can simply remove images from an examination without having to
replace them.
#### 4.5.2 Image-level hemorrhage detection
Bleeds can present complex and irregular shapes. However, h-Shap explores
fixed quad-trees of the input image. Thus, we extend the original
implementation with standard ideas of cycle spinning [22]. Denote $s$ the
minimal feature size in h-Shap (i.e., the size of the smallest leaf explored
by the algorithm), let $\bm{\rho}=\{\rho_{i}\}_{i=1}^{n_{\rho}}$ be
$n_{\rho}$ equally spaced radii between 0 and $s$, and let
$\bm{\alpha}=\{\alpha_{i}\}_{i=1}^{n_{\alpha}}$ be $n_{\alpha}$ equally
spaced angles between 0 and $2\pi$. Then, we average the saliency maps
obtained by cycle spinning the original partition by the vectors
$(\rho\cos(\alpha),\rho\sin(\alpha))$, $\rho\in\bm{\rho}$, $\alpha\in\bm{\alpha}$.
Finally, we note that we use the unconditional
expectation over the training split of the RSNA dataset to mask features,
which is a valid choice in MIL binary classification problems [82]. In this
work, we use h-Shap with an absolute importance tolerance $\tau$ equal to 0
(i.e. h-Shap explores all partitions with a positive Shapley coefficient),
minimal feature size $s=64$, number of radii $n_{\rho}=3$, and number of
angles $n_{\alpha}=12$.
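The shift vectors used for cycle spinning can be generated as below; the exact placement of the grid endpoints (e.g., whether the radii include 0 or $s$) is our assumption:

```python
from math import cos, sin, pi

def spin_offsets(s=64, n_rho=3, n_alpha=12):
    """Shift vectors (rho*cos(alpha), rho*sin(alpha)) for cycle spinning the
    h-Shap quad-tree partition: n_rho radii spaced in (0, s) and n_alpha
    angles spaced in [0, 2*pi)."""
    rhos = [s * (i + 1) / (n_rho + 1) for i in range(n_rho)]
    alphas = [2 * pi * i / n_alpha for i in range(n_alpha)]
    return [(r * cos(a), r * sin(a)) for r in rhos for a in alphas]
```

The saliency maps computed on each shifted partition are then averaged, which smooths out the fixed quad-tree boundaries against irregular bleed shapes.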
### 4.6 Comparing strong and weak learners on examination-level hemorrhage
detection
In this section we expand on the methodology and choice of parameters for
comparing strong and weak learners on examination-level hemorrhage detection.
All choices were made to provide a fair comparison between strongly supervised
and weakly supervised models.
#### 4.6.1 Choosing the classification threshold
Examination-level hemorrhage detection is performed only for predicted
positive examinations. Recall that both strong and weak learners output real
values in the unit interval $[0,1]$. Thus, a threshold $t\in[0,1]$
(e.g., $0.5$) is required to binarize their predictions. The choice of $t$
induces a False Positive Rate (FPR) and a True Positive Rate (TPR) on images
(for a strong learner) or on examinations (for a weak learner). In this work,
we use Youden’s $J$ statistic [93, 66, 39] to find the threshold $t^{*}$ that
maximizes the difference of TPR and FPR, i.e. $J=\text{TPR}-\text{FPR}$. Then:
* For a strong learner $h$, we choose the threshold $t^{*}_{s}$ that maximizes
$J$ on the image-level labels, and,
* For a weak learner $H$, we choose the threshold $t^{*}_{w}$ that maximizes $J$
on the examination-level labels.
We remark that there exist other methods to choose the threshold $t^{*}$, and
the main results discussed in this work do not strongly depend on this choice.
For completeness, Figs. A.2 and A.3 show the equivalent of Figs. 3 and 7 where
instead of maximizing Youden’s $J$, $t^{*}$ is chosen to minimize the distance
to the $(0,1)$ point (perfect classification), which can be written as
$d=\sqrt{\text{FPR}^{2}+\left(1-\text{TPR}\right)^{2}}$ and is also common in
the literature [66, 39].
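Either criterion reduces to a one-dimensional search over candidate thresholds; a sketch for Youden's $J$ (the names are ours, and we assume both classes are present in the labels):

```python
def best_threshold(scores, labels, candidates):
    """Pick the threshold maximizing Youden's J = TPR - FPR."""
    pos = sum(labels)
    neg = len(labels) - pos  # assumes both classes occur in `labels`
    best_t, best_j = None, float("-inf")
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

The distance-to-$(0,1)$ criterion differs only in the objective being minimized instead of maximized.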
#### 4.6.2 Choosing the best minimal sequence length
To reduce the false positive rate in the predicted hemorrhage sequences, we
fine-tune, for each method, the minimal number of consecutive selected images
required for a run of selections to be considered a candidate hemorrhage
sequence. We set this length to 4 for strong learners, to 2 for weak learners
when using attention weights to select images, and to 3 for weak learners when
using Shapley values, guaranteeing the best performance for each method. Fig.
A.1 depicts the examination-level
$f_{1}$ score as a function of this minimal sequence length on the validation
split of the RSNA dataset with both Youden’s $J$ and distance to the $(0,1)$
point, which motivate these choices.
#### 4.6.3 Computing the examination-level $f_{1}$ score
Denote by $T=\{T_{1},T_{2},\dots,T_{n}\}$ the set of true hemorrhage sequences
(i.e., non-overlapping series of consecutive positive images) in a positive
examination. That is, $T_{i}$ contains the indices of the images in the
$i^{\text{th}}$ hemorrhage sequence. Let $S=\{s_{1},s_{2},\dots,s_{r}\}$ be
the local estimator used to select positive images depending on the type of
learner: single-image logits for a strong learner, and either attention
weights or Shapley values for a weak learner. Denote by
$P=\{P_{1},P_{2},\dots,P_{m}\}$ the hemorrhage sequences predicted by the
learner. We define the True Positive (TP), False Positive (FP), and Predicted
Positive (PP) sequences as
$\displaystyle{\text{TP}}\coloneqq\#\{i\in[n]:\exists\,P_{j}\in P:(\operatorname*{arg\,max}_{k\in P_{j}} s_{k})\in T_{i}\}$ (4)
$\displaystyle{\text{FP}}\coloneqq\#\{j\in[m]:\nexists\,T_{i}\in T:(\operatorname*{arg\,max}_{k\in P_{j}} s_{k})\in T_{i}\}$ (5)
$\displaystyle{\text{PP}}={\text{TP}}+{\text{FP}}.$ (6)
Put into words, for every true hemorrhage sequence $T_{i}\in T$, we count one
true positive prediction if there exists a predicted hemorrhage sequence
$P_{j}\in P$ such that the image with the largest estimator value within
$P_{j}$ is contained in $T_{i}$. Note that this definition of TP does not
double count predicted sequences that may correspond to the same true one, and
using the $\operatorname*{arg\,max}$ avoids the trivial case where a model may
select all images, or a few very long sequences that could include multiple
true ones. Similarly, we count one false positive prediction for every
predicted sequence $P_{j}$ for which there does not exist a corresponding
true one. The $f_{1}$ score is then defined as the harmonic mean of precision
and recall, i.e.
$\displaystyle\text{precision}=\frac{{\text{TP}}}{{\text{PP}}},\quad\text{recall}=\frac{{\text{TP}}}{\lvert T\rvert}$ (7)
$\displaystyle f_{1}=2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}.$ (8)
We note that this procedure reflects how a machine learning model could be
deployed in a clinical setting to detect hemorrhage sequences.
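The counting in Eqs. 4–6 and the score in Eqs. 7–8 can be sketched as follows (the guards for empty sets are our addition):

```python
def sequence_f1(true_seqs, pred_seqs, estimator):
    """Examination-level f1: match each predicted sequence to a true one via
    the index of its highest-scoring image (the arg max in Eqs. 4-5)."""
    peaks = [max(P, key=lambda k: estimator[k]) for P in pred_seqs]
    # Eq. 4: one TP per true sequence containing at least one peak.
    tp = sum(1 for T in true_seqs if any(p in T for p in peaks))
    # Eq. 5: one FP per predicted sequence whose peak hits no true sequence.
    fp = sum(1 for p in peaks if not any(p in T for T in true_seqs))
    pp = tp + fp  # Eq. 6: predicted positives
    if pp == 0 or not true_seqs:
        return 0.0  # guard: no predictions or no true sequences
    precision, recall = tp / pp, tp / len(true_seqs)  # Eq. 7
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # Eq. 8
```

Here `estimator` maps image indices to the local score (logits, attention weights, or Shapley values, depending on the learner).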
### 4.7 Training multiple times on the same number of labels
When training models on a fixed number of labels $m$, we randomly sample
without replacement a subset of the original training split of the RSNA
dataset (of images for the strong learners, and of examinations for the weak
learners) that maintains the label proportions of the original dataset. In
particular, we use 15 distinct values of $m$: $\bm{m}=[12,\ 24,\ 40,\ 52,\ 64,\ 80,\ 100,\ 152,\ 200,\ 252,\ 520,\ 796,\ 10^{3},\ 10\times 10^{3},\ 17\times 10^{3}]$. For each choice of $m$, we train a decreasing
number of models to account for the variance in the training process. In
particular, we repeat the training process 20 times when $m\leq 252$; 15 times
for $m=520$; 10 times for $m=796,\ 10^{3}$; 6 times for $m=10\times 10^{3}$;
and 1 time for $m\geq 17\times 10^{3}$. Finally, instead of training models
for a fixed number of epochs, we set a _patience_ parameter such that training
is terminated if the validation accuracy of a model does not increase for more
than 3 consecutive epochs.
## Appendix A Figures
(a) Youden’s $J$ (b) Distance to $(0,1)$
Figure A.1: Examination-level $f_{1}$ score as a function of minimal sequence
length for a strong learner (${\mathcal{SL}}$) and a weak learner
(${\mathcal{WL}}$) on the validation split of the RSNA dataset. (a) Results
with Youden’s $J$ statistic. (b) Results with distance to the $(0,1)$ point.
Note that the best minimal sequence length does not depend on the threshold
$t^{*}$.
(a) RSNA dataset (b) CT-ICH dataset
Figure A.2: Comparison of a strong learner (${\mathcal{SL}}$) and a weak
learner (${\mathcal{WL}}$) on examination-level hemorrhage detection. (a)
Average recall (TPR) as a function of hemorrhage sequence length on the
validation split of the RSNA dataset. (b) Average TPR on the CT-ICH dataset.
These results are computed by choosing the threshold $t^{*}$ that minimizes
the distance to the $(0,1)$ point.
Figure A.3: Mean examination-level hemorrhage detection $f_{1}$ score as a
function of number of labels $m$ on a fixed subset of 1,000 examinations from
the validation split of the RSNA dataset. These results are computed by
choosing the threshold $t^{*}$ that minimizes the distance to the $(0,1)$
point.
## References
* Acosta et al. [2022] Julián N Acosta, Guido J Falcone, Pranav Rajpurkar, and Eric J Topol. Multimodal biomedical ai. _Nature Medicine_ , 28(9):1773–1784, September 2022.
* Ahishakiye et al. [2021] Emmanuel Ahishakiye, Martin Bastiaan Van Gijzen, Julius Tumwiine, Ruth Wario, and Johnes Obungoloch. A survey on deep learning in medical image reconstruction. _Intelligent Medicine_ , 1(03):118–127, 2021.
* Alenezi and Santosh [2021] Fayadh Alenezi and KC Santosh. Geometric regularized hopfield neural network for medical image enhancement. _International Journal of Biomedical Imaging_ , 2021, 2021.
* Amores [2013] Jaume Amores. Multiple instance classification: Review, taxonomy and comparative study. _Artificial intelligence_ , 201:81–105, 2013.
* An et al. [2017] Sang Joon An, Tae Jung Kim, and Byung-Woo Yoon. Epidemiology, risk factors, and clinical features of intracerebral hemorrhage: an update. _Journal of stroke_ , 19(1):3, 2017.
* Andrews et al. [2002] Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. Support vector machines for multiple-instance learning. _Advances in neural information processing systems_ , 15, 2002.
* Attia et al. [2021] Zachi I Attia, David M Harmon, Elijah R Behr, and Paul A Friedman. Application of artificial intelligence to the electrocardiogram. _European heart journal_ , 42(46):4717–4730, 2021\.
* Auer et al. [1998] Peter Auer, Philip M Long, and Aravind Srinivasan. Approximating hyper-rectangles: learning and pseudorandom sets. _Journal of Computer and System Sciences_ , 57(3):376–388, 1998.
* Bahdanau et al. [2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ , 2014.
* Barnes [2015] Jeff Barnes. Azure machine learning. In _Microsoft Azure Essentials_. Microsoft, 2015.
* Blum and Kalai [1998] Avrim Blum and Adam Kalai. A note on learning from multiple-instance examples. _Machine learning_ , 30(1):23–29, 1998.
* Buchlak et al. [2022] Quinlan D Buchlak, Michael R Milne, Jarrel Seah, Andrew Johnson, Gihan Samarasinghe, Ben Hachey, Nazanin Esmaili, Aengus Tran, Jean-Christophe Leveque, Farrokh Farrokhi, et al. Charting the potential of brain computed tomography deep learning systems. _Journal of Clinical Neuroscience_ , 99:217–223, 2022.
* Burduja et al. [2020] Mihail Burduja, Radu Tudor Ionescu, and Nicolae Verga. Accurate and efficient intracranial hemorrhage detection and subtype classification in 3d ct scans with convolutional and long short-term memory neural networks. _Sensors_ , 20(19):5611, 2020.
* Burkart and Huber [2021] Nadia Burkart and Marco F Huber. A survey on the explainability of supervised machine learning. _Journal of Artificial Intelligence Research_ , 70:245–317, 2021.
* Buzug [2011] Thorsten M Buzug. Computed tomography. In _Springer handbook of medical technology_ , pages 311–342. Springer, 2011.
* Cai et al. [2020] Lei Cai, Jingyang Gao, and Di Zhao. A review of the application of deep learning in medical image classification and segmentation. _Annals of translational medicine_ , 8(11), 2020.
* Chen et al. [2022] Yutong Chen, Carola-Bibiane Schönlieb, Pietro Liò, Tim Leiner, Pier Luigi Dragotti, Ge Wang, Daniel Rueckert, David Firmin, and Guang Yang. Ai-based reconstruction for fast mri—a systematic review and meta-analysis. _Proceedings of the IEEE_ , 110(2):224–245, 2022.
* Cheplygina et al. [2019] Veronika Cheplygina, Marleen de Bruijne, and Josien PW Pluim. Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. _Medical image analysis_ , 54:280–296, 2019.
* Chien et al. [2022] Jong-Chih Chien, Jiann-Der Lee, Ching-Shu Hu, and Chieh-Tsai Wu. The usefulness of gradient-weighted cam in assisting medical diagnoses. _Applied Sciences_ , 12(15):7748, 2022.
* Chilamkurthy et al. [2018] Sasank Chilamkurthy, Rohit Ghosh, Swetha Tanamala, Mustafa Biviji, Norbert G Campeau, Vasantha Kumar Venugopal, Vidur Mahajan, Pooja Rao, and Prashant Warier. Deep learning algorithms for detection of critical findings in head ct scans: a retrospective study. _The Lancet_ , 392(10162):2388–2396, 2018.
* Choy et al. [2018] Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E Samir, Oleg S Pianykh, J Raymond Geis, Pari V Pandharipande, James A Brink, and Keith J Dreyer. Current applications and future impact of machine learning in radiology. _Radiology_ , 288(2):318, 2018.
* Coifman and Donoho [1995] Ronald R Coifman and David L Donoho. Translation-invariant de-noising. In _Wavelets and statistics_ , pages 125–150. Springer, 1995.
* Correia et al. [2019] Gonçalo M Correia, Vlad Niculae, and André FT Martins. Adaptively sparse transformers. In _Proc. EMNLP-IJCNLP (to appear)_ , 2019.
* Covert et al. [2021] Ian Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for model explanation. _Journal of Machine Learning Research_ , 22(209):1–90, 2021.
* Covert et al. [2022] Ian Covert, Chanwoo Kim, and Su-In Lee. Learning to estimate shapley values with vision transformers. _arXiv preprint arXiv:2206.05282_ , 2022.
* Deepika et al. [2022] Pon Deepika, Prasad Sistla, Ganesh Subramaniam, and Madhav Rao. Deep learning based automated screening for intracranial hemorrhages and grad-cam visualizations on non-contrast head computed tomography volumes. In _2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)_ , pages 01–05. IEEE, 2022.
* DeLong et al. [1988] Elizabeth R DeLong, David M DeLong, and Daniel L Clarke-Pearson. Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach. _Biometrics_ , pages 837–845, 1988.
* Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pages 248–255. Ieee, 2009.
* Dietterich et al. [1997] Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. _Artificial intelligence_ , 89(1-2):31–71, 1997.
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ , 2020.
* Ethayarajh and Jurafsky [2021] Kawin Ethayarajh and Dan Jurafsky. Attention flows are shapley value explanations. _arXiv preprint arXiv:2105.14652_ , 2021.
* Eyuboglu et al. [2021] Sabri Eyuboglu, Geoffrey Angus, Bhavik N Patel, Anuj Pareek, Guido Davidzon, Jin Long, Jared Dunnmon, and Matthew P Lungren. Multi-task weak supervision enables anatomically-resolved abnormality detection in whole-body fdg-pet/ct. _Nature communications_ , 12(1):1–15, 2021.
* Flanders et al. [2020] Adam E Flanders, Luciano M Prevedello, George Shih, Safwan S Halabi, Jayashree Kalpathy-Cramer, Robyn Ball, John T Mongan, Anouk Stein, Felipe C Kitamura, Matthew P Lungren, et al. Construction of a machine learning dataset through collaboration: the rsna 2019 brain ct hemorrhage challenge. _Radiology: Artificial Intelligence_ , 2(3):e190211, 2020.
* Gaube et al. [2021] Susanne Gaube, Harini Suresh, Martina Raue, Alexander Merritt, Seth J Berkowitz, Eva Lermer, Joseph F Coughlin, John V Guttag, Errol Colak, and Marzyeh Ghassemi. Do as AI say: susceptibility in deployment of clinical decision-aids. _npj Digital Medicine_ , 4(1):31, 2021. ISSN 2398-6352. doi: 10.1038/s41746-021-00385-9. URL https://doi.org/10.1038/s41746-021-00385-9.
* Giger [2018] Maryellen L Giger. Machine learning in medical imaging. _Journal of the American College of Radiology_ , 15(3):512–520, 2018.
* Goldberger et al. [2000] Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley. Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. _circulation_ , 101(23):e215–e220, 2000.
* Goodman and Flaxman [2017] Bryce Goodman and Seth Flaxman. European union regulations on algorithmic decision-making and a “right to explanation”. _AI magazine_ , 38(3):50–57, 2017.
* Gudigar et al. [2021] Anjan Gudigar, U Raghavendra, Ajay Hegde, Girish R Menon, Filippo Molinari, Edward J Ciaccio, and U Rajendra Acharya. Automated detection and screening of traumatic brain injury (tbi) using computed tomography images: a comprehensive review and future perspectives. _International journal of environmental research and public health_ , 18(12):6499, 2021.
* Habibzadeh et al. [2016] Farrokh Habibzadeh, Parham Habibzadeh, and Mahboobeh Yadollahie. On determining the most appropriate test cut-off value: the case of tests with continuous results. _Biochemia medica_ , 26(3):297–307, 2016.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 770–778, 2016.
* Hssayeni [2020] Murtadha Hssayeni. Computed tomography images for intracranial hemorrhage detection and segmentation. _Intracranial Hemorrhage Segmentation Using A Deep Convolutional Model. Data_ , 5(1), 2020.
* Hssayeni et al. [2020] Murtadha D Hssayeni, Muayad S Croock, Aymen D Salman, Hassan Falah Al-khafaji, Zakaria A Yahya, and Behnaz Ghoraani. Intracranial hemorrhage segmentation using a deep convolutional model. _Data_ , 5(1):14, 2020.
* Ilse et al. [2018] Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In _International conference on machine learning_ , pages 2127–2136. PMLR, 2018.
* Jain et al. [2022] Saachi Jain, Hadi Salman, Eric Wong, Pengchuan Zhang, Vibhav Vineet, Sai Vemprala, and Aleksander Madry. Missingness bias in model debugging. _arXiv preprint arXiv:2204.08945_ , 2022.
* Jain and Wallace [2019] Sarthak Jain and Byron C Wallace. Attention is not explanation. _arXiv preprint arXiv:1902.10186_ , 2019.
* Kaka et al. [2021] Hussam Kaka, Euan Zhang, and Nazir Khan. Artificial intelligence and deep learning in neuroradiology: exploring the new frontier. _Canadian Association of Radiologists Journal_ , 72(1):35–44, 2021.
* Kang et al. [2017] Eunhee Kang, Junhong Min, and Jong Chul Ye. A deep convolutional neural network using directional wavelets for low-dose x-ray ct reconstruction. _Medical physics_ , 44(10):e360–e375, 2017.
* Kawooya [2012] Michael G Kawooya. Training for rural radiology and imaging in Sub-Saharan africa: Addressing the mismatch between services and population. _J. Clin. Imaging Sci._ , 2, 2012.
* Khan et al. [2022] Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, and Mubarak Shah. Transformers in vision: A survey. _ACM computing surveys (CSUR)_ , 54(10s):1–41, 2022.
* Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014.
* Langlotz [2019] Curtis P Langlotz. Will artificial intelligence replace radiologists? _Radiology. Artificial intelligence_ , 1(3), 2019.
* Latif et al. [2019] Jahanzaib Latif, Chuangbai Xiao, Azhar Imran, and Shanshan Tu. Medical imaging using machine learning and deep learning algorithms: a review. In _2019 2nd International conference on computing, mathematics and engineering technologies (iCoMET)_ , pages 1–5. IEEE, 2019.
* Lee et al. [2019] Hyunkwang Lee, Sehyo Yune, Mohammad Mansouri, Myeongchan Kim, Shahein H Tajmir, Claude E Guerrier, Sarah A Ebert, Stuart R Pomerantz, Javier M Romero, Shahmir Kamalian, et al. An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets. _Nature biomedical engineering_ , 3(3):173–182, 2019.
* Lee et al. [2020] Ji Young Lee, Jong Soo Kim, Tae Yoon Kim, and Young Soo Kim. Detection and classification of intracranial haemorrhage on ct images using a novel deep-learning algorithm. _Scientific Reports_ , 10(1):1–7, 2020.
* Lin et al. [2017] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_ , pages 2980–2988, 2017.
* Linardatos et al. [2020] Pantelis Linardatos, Vasilis Papastefanopoulos, and Sotiris Kotsiantis. Explainable ai: A review of machine learning interpretability methods. _Entropy_ , 23(1):18, 2020.
* López-Pérez et al. [2022] Miguel López-Pérez, Arne Schmidt, Yunan Wu, Rafael Molina, and Aggelos K Katsaggelos. Deep gaussian processes for multiple instance learning: Application to ct intracranial hemorrhage detection. _Computer Methods and Programs in Biomedicine_ , 219:106783, 2022.
* Lundberg and Lee [2017] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. _Advances in neural information processing systems_ , 30, 2017.
* Maron and Lozano-Pérez [1997] Oded Maron and Tomás Lozano-Pérez. A framework for multiple-instance learning. _Advances in neural information processing systems_ , 10, 1997.
* Martins and Astudillo [2016] Andre Martins and Ramon Astudillo. From softmax to sparsemax: A sparse model of attention and multi-label classification. In _International conference on machine learning_ , pages 1614–1623. PMLR, 2016.
* Montagnon et al. [2020] Emmanuel Montagnon, Milena Cerny, Alexandre Cadrin-Chênevert, Vincent Hamilton, Thomas Derennes, André Ilinca, Franck Vandenbroucke-Menu, Simon Turcotte, Samuel Kadoury, and An Tang. Deep learning workflow in radiology: a primer. _Insights into imaging_ , 11(1):1–15, 2020.
* Panesar and Panesar [2020] Arjun Panesar and Harkrishan Panesar. Artificial intelligence and machine learning in global healthcare. _Handbook of Global Health_ , pages 1–39, 2020.
* Panwar et al. [2020] Harsh Panwar, PK Gupta, Mohammad Khubeb Siddiqui, Ruben Morales-Menendez, Prakhar Bhardwaj, and Vaishnavi Singh. A deep learning and grad-cam based color visualization approach for fast detection of covid-19 cases using chest x-ray and ct-scan images. _Chaos, Solitons & Fractals_, 140:110190, 2020.
* Patel et al. [2019] Bhavik N Patel, Louis Rosenberg, Gregg Willcox, David Baltaxe, Mimi Lyons, Jeremy Irvin, Pranav Rajpurkar, Timothy Amrhein, Rajan Gupta, Safwan Halabi, Curtis Langlotz, Edward Lo, Joseph Mammarappallil, A J Mariano, Geoffrey Riley, Jayne Seekins, Luyao Shen, Evan Zucker, and Matthew Lungren. Human-machine partnership with artificial intelligence for chest radiograph diagnosis. _NPJ Digit Med_ , 2:111, November 2019.
* Pérez-García et al. [2021] Fernando Pérez-García, Rachel Sparks, and Sebastien Ourselin. Torchio: a python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. _Computer Methods and Programs in Biomedicine_ , 208:106236, 2021.
* Perkins and Schisterman [2006] Neil J Perkins and Enrique F Schisterman. The inconsistency of “optimal” cutpoints obtained using two criteria based on the receiver operating characteristic curve. _American journal of epidemiology_ , 163(7):670–675, 2006.
* Peters et al. [2019] Ben Peters, Vlad Niculae, and André F. T. Martins. Sparse sequence-to-sequence models. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1504–1519, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1146. URL https://aclanthology.org/P19-1146.
* Quellec et al. [2017] Gwenolé Quellec, Guy Cazuguel, Béatrice Cochener, and Mathieu Lamard. Multiple-instance learning for medical image and video analysis. _IEEE reviews in biomedical engineering_ , 10:213–234, 2017.
* Rajpurkar et al. [2018] Pranav Rajpurkar, Jeremy Irvin, Robyn L Ball, Kaylie Zhu, Brandon Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis P Langlotz, Bhavik N Patel, Kristen W Yeom, Katie Shpanskaya, Francis G Blankenberg, Jayne Seekins, Timothy J Amrhein, David A Mong, Safwan S Halabi, Evan J Zucker, Andrew Y Ng, and Matthew P Lungren. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. _PLoS Med._ , 15(11):e1002686, November 2018.
* Reis et al. [2020] Eduardo Pontes Reis, Felipe Nascimento, Mateus Aranha, F Mainetti Secol, Birajara Machado, Marcelo Felix, Anouk Stein, and Edson Amaro. Brain hemorrhage extended (bhx): Bounding box extrapolation from thick to thin slice ct images, 2020.
* Roscher et al. [2020] Ribana Roscher, Bastian Bohn, Marco F Duarte, and Jochen Garcke. Explainable machine learning for scientific insights and discoveries. _Ieee Access_ , 8:42200–42216, 2020.
* Saab et al. [2019] Khaled Saab, Jared Dunnmon, Roger Goldman, Alex Ratner, Hersh Sagreiya, Christopher Ré, and Daniel Rubin. Doubly weak supervision of deep learning models for head ct. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 811–819. Springer, 2019.
* Saba et al. [2019] Luca Saba, Mainak Biswas, Venkatanareshbabu Kuppili, Elisa Cuadrado Godia, Harman S Suri, Damodar Reddy Edla, Tomaž Omerzu, John R Laird, Narendra N Khanna, Sophie Mavrogeni, et al. The present and future of deep learning in radiology. _European journal of radiology_ , 114:14–24, 2019.
* Sabato and Tishby [2009] Sivan Sabato and Naftali Tishby. Homogeneous multi-instance learning with arbitrary dependence. In _COLT_. Citeseer, 2009.
* Sabato and Tishby [2012] Sivan Sabato and Naftali Tishby. Multi-instance learning with any hypothesis class. _The Journal of Machine Learning Research_ , 13(1):2999–3039, 2012.
* Sabato et al. [2010] Sivan Sabato, Nathan Srebro, and Naftali Tishby. Reducing label complexity by learning from bags. In _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , pages 685–692. JMLR Workshop and Conference Proceedings, 2010.
* Schlemper et al. [2019] Jo Schlemper, Ozan Oktay, Michiel Schaap, Mattias Heinrich, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Attention gated networks: Learning to leverage salient regions in medical images. _Medical image analysis_ , 53:197–207, 2019.
* Selvaraju et al. [2017] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In _Proceedings of the IEEE international conference on computer vision_ , pages 618–626, 2017.
* Shapley [1953] Lloyd S Shapley. A value for n-person games. _Contributions to the Theory of Games_ , 2(28):307–317, 1953.
* Shehab et al. [2022] Mohammad Shehab, Laith Abualigah, Qusai Shambour, Muhannad A Abu-Hashem, Mohd Khaled Yousef Shambour, Ahmed Izzat Alsalibi, and Amir H Gandomi. Machine learning in medical applications: A review of state-of-the-art methods. _Computers in Biology and Medicine_ , 145:105458, 2022.
* Sutskever et al. [2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In _International conference on machine learning_ , pages 1139–1147. PMLR, 2013.
* Teneggi et al. [2022] Jacopo Teneggi, Alexandre Luster, and Jeremias Sulam. Fast hierarchical games for image explanations. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2022.
* Turner and Holdsworth [2011] PJ Turner and G Holdsworth. Ct stroke window settings: an unfortunate misleading misnomer? _The British journal of radiology_ , 84(1008):1061–1066, 2011.
* Tushar et al. [2021] Fakrul Islam Tushar, Vincent M D’Anniballe, Rui Hou, Maciej A Mazurowski, Wanyi Fu, Ehsan Samei, Geoffrey D Rubin, and Joseph Y Lo. Classification of multiple diseases on body ct scans using weakly supervised deep learning. _Radiology: Artificial Intelligence_ , 4(1):e210026, 2021.
* Ueda et al. [2019] Daiju Ueda, Akitoshi Shimazaki, and Yukio Miki. Technical and clinical overview of deep learning in radiology. _Japanese journal of radiology_ , 37(1):15–33, 2019.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_ , 30, 2017.
* Wang et al. [2021] Tonghe Wang, Yang Lei, Yabo Fu, Jacob F Wynne, Walter J Curran, Tian Liu, and Xiaofeng Yang. A review on medical imaging synthesis using deep learning and its clinical applications. _Journal of applied clinical medical physics_ , 22(1):11–36, 2021.
* Wang et al. [2022] Zhenzhen Wang, Carla Saoud, Sintawat Wangsiricharoen, Aaron W. James, Aleksander S. Popel, and Jeremias Sulam. Label cleaning multiple instance learning: Refining coarse annotations on single whole-slide images. _IEEE Transactions on Medical Imaging_ , pages 1–1, 2022. doi: 10.1109/TMI.2022.3202759.
* Weidmann et al. [2003] Nils Weidmann, Eibe Frank, and Bernhard Pfahringer. A two-level learning method for generalized multi-instance problems. In _European Conference on Machine Learning_ , pages 468–479. Springer, 2003.
* Weissglass [2021] Daniel E Weissglass. Contextual bias, the democratization of healthcare, and medical artificial intelligence in low-and middle-income countries. _Bioethics_ , 2021.
* Wu et al. [2021] Yunan Wu, Arne Schmidt, Enrique Hernández-Sánchez, Rafael Molina, and Aggelos K Katsaggelos. Combining attention-based multiple instance learning and gaussian processes for ct hemorrhage detection. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 582–591. Springer, 2021.
* Yeo et al. [2021] Melissa Yeo, Bahman Tahayori, Hong Kuan Kok, Julian Maingard, Numan Kutaiba, Jeremy Russell, Vincent Thijs, Ashu Jhamb, Ronil V Chandra, Mark Brooks, et al. Review of deep learning algorithms for the automatic detection of intracranial hemorrhages on computed tomography head imaging. _Journal of neurointerventional surgery_ , 13(4):369–378, 2021.
* Youden [1950] William J Youden. Index for rating diagnostic tests. _Cancer_ , 3(1):32–35, 1950.
* Zaheer et al. [2017] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. _Advances in neural information processing systems_ , 30, 2017.
* Zech et al. [2018] John R. Zech, Marcus A. Badgeley, Manway Liu, Anthony B. Costa, Joseph J. Titano, and Eric Karl Oermann. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. _PLOS Medicine_ , 15(11):1–17, 11 2018. doi: 10.1371/journal.pmed.1002683. URL https://doi.org/10.1371/journal.pmed.1002683.
|
# Compressing Cross-Lingual Multi-Task Models at Qualtrics
Daniel Campos (University of Illinois Urbana-Champaign; this work was done while the first author was an intern at Qualtrics)
Daniel Perry, Samir Joshi, Yashmeet Gambhir, Wei Du, Zhengzheng Xing, Aaron Colak (Qualtrics, Seattle, WA)
###### Abstract
Experience management is an emerging business area where organizations focus
on understanding the feedback of customers and employees in order to improve
their end-to-end experiences. This results in a unique set of machine learning
problems to help understand how people feel, discover issues they care about,
and find which actions need to be taken on data that are different in content
and distribution from traditional NLP domains. In this paper, we present a
case study of building text analysis applications that perform multiple
classification tasks efficiently in 12 languages in the nascent business area
of experience management. In order to scale up modern ML methods on experience
data, we leverage cross-lingual and multi-task modeling techniques to
consolidate our models into a single deployment to avoid overhead. We also
make use of model compression and model distillation to reduce overall
inference latency and hardware cost to the level acceptable for business needs
while maintaining model prediction quality. Our findings show that multi-task
modeling improves task performance for a subset of experience management tasks
in both XLM-R and mBert architectures. Among the compressed architectures we
explored, we found that MiniLM achieved the best compression/performance
tradeoff. Our case study demonstrates a speedup of up to 15.61x with 2.60%
average task degradation (or 3.29x speedup with 1.71% degradation) and
estimated savings of 44% over using the original full-size model. These
results demonstrate a successful scaling up of text classification for the
challenging new area of ML for experience management.
## 1 Introduction
Experience management enables businesses and organizations to effectively
adapt to actionable feedback from their customers and employees. Understanding
and managing a customer or employee experience requires analyzing a
combination of survey questions, social media, and other sources of experience
data together in order to derive insight. Deriving insight requires
effectively predicting the feelings, emotions, as well as the requested
actions from feedback in a scalable way. To accurately predict these insights
we leverage state-of-the-art pretrained language models.
However, many of the best pretrained models require significant resources to
deploy at scale. For example, the best performing models for tasks such as
semantic similarity of sentences Wang et al. (2018) can have hundreds of
billions of parameters Smith et al. (2022). Even more pedestrian (in terms of
size) models such as BERT-base Devlin et al. (2019) can still be relatively
expensive and latency-prone for a typical business use case, especially
without specialized hardware or accelerators. One way to both achieve high
prediction accuracy and scale up is to leverage model compression.
While there is substantial literature on model compression, it can be
difficult to sort through all the methods and evaluate them on a case-by-case
basis. Our contribution is a specific case study evaluation of model
compression methods for Qualtrics models on experience management data and its
unique challenges. In this work, we are particularly interested in building
efficient text classifiers, an important problem in the experience management
domain. Indeed, unstructured data constitutes more than 80% of the experience
management data. As such, analyzing text data across dimensions such as
sentiment, emotion, actionability, effort, intent, topic, urgency, and
toxicity is one of the foundational challenges in this emerging
space. We share details about what worked and did not, which can benefit the
industry at large as others begin to adopt model compression in their
organizations for their use cases. This is particularly timely in our current
market as many companies in emerging business areas are looking to reduce
costs and model compression is an effective way to reduce ML inference costs,
both financially and environmentally.
### 1.1 Motivating Constraints: Engineering Overhead, Cost, Latency
Our goal in pursuing this compression and consolidation work was to reduce
overall model hosting costs while preserving model quality. Two areas we are
focused on are reducing burden for engineering support of hosting our new
models in production, as well as the direct cost and latency of the models
themselves.
Since we deploy and support our models in a microservices framework, each
model typically lives behind a specific model endpoint or service, so each
model has a static cost for the base capacity and a variable cost for the
elastic capacity. If we use single-task monolingual models, this results in
needing to support a separate production service per task-language pair.
Similarly, for single-task NLP models, the encoder, which can account for 90+%
of the computation for classification models, must run for each task,
regardless of how similar it is between tasks.
In contrast, a multi-task cross-lingual model consolidates this repetitive
computation and removes the instance hosting overhead for additional
languages. For this reason, we focused on the ability to support multiple
tasks per model, as well as a cross-lingual model.
In addition, by developing smaller models, we hope to achieve reduced latency
for runtime needs while also reducing costs by providing flexibility to deploy
on less costly hardware.
### 1.2 The Tension Between Model Consolidation and Compression
There is an interesting tension that arises as we both combine multiple models
into a single multi-task cross-lingual model and also reduce the size and
capacity of that model. While prior work has also looked at these different
facets of model consolidation and compression in isolation Wang et al. (2020a,
b); Mukherjee et al. (2021); Jiao et al. (2021); Sanh et al. (2019); Jiao et
al. (2020); de Wynter and Perry (2020); Yang et al. (2019), in this work we
investigate how these approaches work together to consolidate and compress a
model, and how that impacts model performance on the target tasks.
We are unable to analyze this tension for all NLP tasks in general, but here
we present evidence towards understanding the tradeoffs for specific cases,
relevant to work at our company. These results can inform future theoretical
work as well as more practical application at other organizations.
## 2 Cross-Lingual Multi-Task (XLMT) Model Compression Methods
As described above, we are motivated to both consolidate task and language
support into a single cross-lingual multi-task (XLMT) model and at the same
time pursue a compressed version of that model to reduce capacity and make the
model faster and less expensive to run.
### 2.1 Cross-lingual Modeling
There has been a strong movement towards multi-lingual and cross-lingual
models. One of the first multi-lingual BERT models was “multi-lingual BERT”
(mBert), from Devlin et al. (2019), which extended “monolingual BERT” by
training across a dataset with multiple languages represented. Cross-lingual
modeling (XLM), presented in Conneau and Lample (2019), further improved over
multi-lingual modeling by introducing additional cross-lingual pretraining
tasks, and XLM-Roberta (XLM-R) Conneau et al. (2019) developed a strong cross-
lingual model using techniques from Roberta Liu et al. (2019) and showed
better performance beyond previous multi-lingual and cross-lingual models.
In this work we show results using both the mBert and XLM-R pretrained models
on which we build our task-specific classifiers. In the original paper Conneau
et al. (2019) the authors showed a decrease in model performance as more and
more languages were introduced. We explore the effect of training on
monolingual vs cross-lingual settings, and how it impacts our combined model
performance.
### 2.2 Multi-task Learning for NLP
Multi-task learning (MTL) can not only merge tasks into a single model but
also improve task performance by sharing common layers. For instance, Lin et
al. (2018) proposed an architecture that shares the same character embedding
layer showing effective results for low-resource settings. Other types of MTL
include hierarchical architectures, such as Vijayaraghavan et al. (2017) where
separate tasks are learned and then combined using a final attenuation layer
and He et al. (2019) where the first task output feeds into a second task in
sequence.
In this work we explore how combining multiple tasks into a single cross-
lingual model impacts performance on each of those tasks individually. Our
approach leverages a common base model with multiple task heads. The multi-
task multiclass classification loss function we use consists of a simple sum
of cross-entropy losses,
$L_{\mbox{MT}}=\frac{1}{N}\sum_{t=1}^{T}\sum_{i=1}^{N^{t}}\left[-\sum_{c\in
C_{t}}\left(\ell_{i,c}^{t}\log p_{i,c}^{t}\right)\right],$ (1)
where $N=\sum_{t=1}^{T}{N^{t}}$ is the total number of data points from $T$
tasks and $N^{t}$ is the number of data points for the $t$-th task. $C_{t}$ is
the set of classes for task $t$. $\ell_{i,c}^{t}$ is either $0$ or $1$,
indicating whether class label $c$ is the correct classification of the $i$-th
data point from the $t$-th task, and $p_{i,c}^{t}$ are the corresponding
predicted probabilities.
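As a concrete sketch, the loss in Eq. (1) can be computed as follows (a minimal NumPy illustration with toy logits; in our system this is implemented as a PyTorch loss over task heads sharing one encoder):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_task_ce(logits_per_task, labels_per_task):
    """Eq. (1): sum of per-example cross-entropy losses over T tasks,
    normalized by the total number of data points N = sum_t N^t."""
    total_n = sum(len(y) for y in labels_per_task)
    total = 0.0
    for logits, y in zip(logits_per_task, labels_per_task):
        p = softmax(logits)  # (N^t, C_t) predicted probabilities
        total += -np.log(p[np.arange(len(y)), y]).sum()
    return total / total_n

# Toy example: two tasks with 2 and 3 classes; zero logits give uniform
# predictions, so the per-example losses are ln 2 and ln 3 respectively.
logits_t1 = np.zeros((2, 2))
logits_t2 = np.zeros((1, 3))
loss = multi_task_ce([logits_t1, logits_t2],
                     [np.array([0, 1]), np.array([2])])
```

Note that normalizing by the total $N$ (rather than per task) implicitly weights tasks by their data volume.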
### 2.3 Model Compression
#### 2.3.1 Knowledge Distillation
Knowledge distillation (KD), popularized by Hinton et al. (2015), aims to
create smaller models that approximate the performance of larger models by
teaching the smaller model (the student) to emulate the larger model (the
teacher).
The original approach used final-layer, logit-based knowledge distillation,
where the concept is to minimize the distance (e.g., a KL divergence loss)
between the final-layer outputs of the teacher and student models.
Later work, including many applications in NLP, introduced variations on this
idea, including Sanh et al. (2019) which applied a combined loss, including
masked language modeling loss, cosine distance loss, and KL divergence loss to
reduce BERT model size. More generally, we can also align the intermediate
features between the teacher and the student models rather than just the final
layer, such as Jiao et al. (2020) which used many intermediate layers for
distillation. MiniLM was introduced in Wang et al. (2020a) using self-
attention distribution transfer and self-attention value-relation transfer to
achieve competitive performance in both monolingual and multilingual models.
In this work, we have primarily investigated distilling using the task
specific logits produced by the final layer. Exploring additional intermediate
representation distillation is left to future work to potentially improve
performance in the smallest models we tested. Focusing on the last layer
results in the following modified loss:
$L_{\mbox{MT-KD}}=\frac{1}{N}\sum_{t=1}^{T}\sum_{i=1}^{N^{t}}\left[-\left(\sum_{c\in C_{t}}\ell_{i,c}^{t}\log p_{i,c}^{t}\right)+\alpha F^{2}\left(\sum_{c\in C_{t}}\hat{q}_{i,c}^{t}\log\frac{\hat{q}_{i,c}^{t}}{\hat{p}_{i,c}^{t}}\right)\right],$ (2)
where $q_{i,c}^{t}$ is the teacher model prediction for class $c$ of the
$i$-th data point from the $t$-th task,
$\hat{q}_{i,c}^{t}=\frac{\exp(q_{i,c}^{t}/F)}{\sum_{j\in C_{t}}\exp(q_{i,j}^{t}/F)}$
is the temperature-modified teacher prediction,
$\hat{p}_{i,c}^{t}=\frac{\exp(p_{i,c}^{t}/F)}{\sum_{j\in C_{t}}\exp(p_{i,j}^{t}/F)}$
is the temperature-modified student prediction, $F$ is the temperature
parameter Hinton et al. (2015), and $\alpha$ is the teacher coefficient
controlling the relative impact of distillation against the label loss.
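A minimal sketch of one task's contribution to the distilled loss, using NumPy and toy logits (the temperature `F` and coefficient `alpha` values below are illustrative, not the ones used in our experiments):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, F=2.0, alpha=0.5):
    """One task's contribution to Eq. (2): label cross-entropy plus
    alpha * F^2 * KL(teacher_soft || student_soft), averaged over examples."""
    n = len(labels)
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(n), labels]).sum()
    q_hat = softmax(teacher_logits / F)   # temperature-softened teacher
    p_hat = softmax(student_logits / F)   # temperature-softened student
    kl = (q_hat * np.log(q_hat / p_hat)).sum()
    return (ce + alpha * F**2 * kl) / n

# When student and teacher logits agree exactly, the KL term vanishes
# and the loss reduces to the plain label cross-entropy.
s = np.array([[2.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
loss_same = kd_loss(s, s, labels)
```

The $F^{2}$ factor compensates for the $1/F^{2}$ shrinkage of soft-target gradients, keeping the two terms on a comparable scale as $F$ varies.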
#### 2.3.2 Structural Pruning
In LeCun et al. (1989) the authors introduced the notion that neural networks
can be compressed by removing entire sections without major impact on
accuracy. Structural pruning compresses networks by removing entire structural
components such as attention heads, neurons, and even transformer layers, and
leverages KD to limit model degradation. While most previous work in compression has focused
on monolingual models, there is also a growing body of work around
multilingual and cross-lingual model compression Jiao et al. (2021); Mukherjee
and Awadallah (2020); Mukherjee et al. (2021); Wang et al. (2020a, b). We
focus on two specific compressed architectures, MiniLM Wang et al. (2020b) and
XtremeDistil Mukherjee et al. (2021) and compare them in our use case.
Ultimately we found MiniLM to be the most effective at learning our specific
set of tasks.
#### 2.3.3 Quantization
Quantization enables reduction in model size and memory footprint while also
potentially increasing inference speed. Here we consider integer quantization,
in which the precision is reduced from 32-bit floating point to 8-bit integer.
Quantization can be done during training, known as quantization aware training
(QAT), to minimize degradation, or after training, known as post training
quantization (PTQ), to compress an already trained model. Zafrir et al. (2019)
shows that by leveraging QAT, their "Q8Bert" quantized model was able to match
the performance of the base BERT model on various NLP tasks.
In this work we explore combining quantization via QAT with structural pruning
to further reduce the model size while maintaining good model performance.
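The arithmetic behind 8-bit integer quantization can be sketched as a symmetric scale/round/dequantize round trip (a simplified per-tensor scheme for intuition; production PTQ/QAT tooling additionally calibrates activations and typically uses per-channel scales):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a float32 tensor:
    map [-max|w|, +max|w|] onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([-1.0, -0.25, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # rounding error is at most scale / 2 per entry
```

QAT simulates this round trip during training so the network learns weights that survive the rounding, which is why it tends to degrade less than post-hoc PTQ.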
## 3 Experimental Results
Our core set of results are developed around a multi-task cross-lingual model
developed internally at Qualtrics to help develop understanding around
customer feedback. The model handles three separate but related multiclass
classification tasks on text input, we refer to these tasks throughout this
paper as Task-1, Task-2, and Task-3. They refer to three text classification
tasks our group actively uses and develops, with similarities to models such
as sentiment or toxicity prediction He et al. (2019). Each of these tasks is
implemented as a sequence classification task where the input is direct
customer feedback. Task-1 is a multi-class classification with 6 labels,
Task-2 is a multi-class classification with 4 labels, and Task-3 is a multi-
class, multi-label sequence classification with 9 classes, where each class
has an independent binary label.
In our experiments our focus is on exploring the relationship between
knowledge distillation, multi-task modeling, quantization and multilingualism.
We do not seek a complete understanding of how each axis impacts the outcomes;
instead, we seek the best way to preserve the performance of a pruned and
quantized model by exploring the impact of multi-lingual fine-tuning,
variations in knowledge distillation, and task-specific teachers.
### 3.1 Dataset and Compressed Architecture Selection
Our dataset consists of a collection of internal customer experience data
across multiple industries. The data has been fully anonymized and aggregated,
and is used with permission. This process protects customer data privacy and
ensures data from any specific industry or company is not over-represented or
identifiable. The resulting text dataset consists of 257k text documents
across 16 languages labeled for Task-1, and 127k text documents across 12
languages labeled for Task-2 and Task-3. A description of the task types,
number of labels, and label distributions can be seen in Table 1. This experimental data is
similar in nature to the production datasets used in our production modeling
system.
Task-1 | Task-2 | Task-3
---|---|---
(Multi-class) | (Multi-class) | (Multi-label)
# Samples | # Samples | # Samples
T1-L1 | 93738 | T2-L1 | 83545 | T3-L1 | 33463
T1-L2 | 70218 | T2-L2 | 36018 | T3-L2 | 21562
T1-L3 | 38786 | T2-L3 | 4317 | T3-L3 | 22556
T1-L4 | 26359 | T2-L4 | 3198 | T3-L4 | 1090
T1-L5 | 18792 | | | T3-L5 | 7525
T1-L6 | 9837 | | | T3-L6 | 44485
| | | | T3-L7 | 11341
| | | | T3-L8 | 2518
| | | | T3-L9 | 1951
Table 1: Breakdown of task label distribution. Task labels are listed as
T#-L#, where T1-L1 represents Label 1 for Task-1.
For modeling we primarily use PyTorch Paszke et al. (2019) and the
Transformers library Wolf et al. (2019). For model quantization, we made use
of the SparseML library Kurtz et al. (2020); Singh and Alistarh (2020).
Instead of developing our own target architecture, we leverage existing cross-
lingual models from the literature as a first approach to model compression.
After a review of the literature, we settled on experimentation around two
cross-lingual models, XtremeDistil Mukherjee and Awadallah (2020), Mukherjee
et al. (2021) and MiniLM Wang et al. (2020a), Wang et al. (2020b). We
summarize the characteristics of the architectures evaluated in Table 2, where
the smallest model considered was 6 layers and 22M parameters.
Name | #Layer | #Param | Size
---|---|---|---
XLM-R(XLM-R Base) | 12 | 85M | 1.12GB
XtremeDistil | 6 | 22M | 91MB
MiniLM-L12(mMiniLMv2) | 12 | 41M | 236MB
MiniLM-L6(mMiniLMv2) | 6 | 22M | 215MB
Table 2: Description of model architectures evaluated. #Params refers to the
number of transformer parameters.
To further narrow to a single cross-lingual model, we performed an experiment
using a subset of our datasets that covered 11 languages and evaluated how
well the models perform in two settings: with a distillation teacher and
without a teacher. The subset contained 57k responses labeled for Task-1 and
20k labeled for Task-2 and Task-3.
Method | Task-1 | Task-2 | Task-3
---|---|---|---
XLM-R | 83.32 | 80.81 | 39.41
Xtremedistil (no teacher) | 67.69 | 69.24 | 31.23
Xtremedistil (with teacher) | 67.82 | 70.99 | 28.93
MiniLM-L12 (no teacher) | 80.79 | 77.55 | 35.99
MiniLM-L12 (with teacher) | 81.43 | 78.44 | 36.54
Table 3: Results on each task for each model architecture, reported in
Macro-F1. All models were trained for 2 epochs and reported results are the
per-task macro F1 scores.
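Throughout, tasks are scored with per-task macro-F1, i.e. the unweighted mean of per-class F1 scores; for concreteness, a minimal reference implementation for the multi-class case:

```python
def macro_f1(y_true, y_pred, n_classes):
    """Unweighted mean of per-class F1 scores. Classes with no true or
    predicted examples contribute an F1 of 0 here (one common convention)."""
    f1s = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / n_classes

# Class 0: precision 1, recall 0.5 -> F1 = 2/3; class 1: precision 2/3,
# recall 1 -> F1 = 4/5; macro-F1 = (2/3 + 4/5) / 2 = 11/15.
score = macro_f1([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2)
```

Macro averaging weights all classes equally, which matters here given the heavily imbalanced label counts in Table 1.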
This experiment, as shown in Table 3, indicated that MiniLM (and its variants)
would be easier to train and distill in our setting. Due to the above results,
for compressed models we targeted the MiniLM-L12 architecture. Our definition
of performing better, worse, or the same was based on the 95% confidence
interval over a random sample of 5 models trained from different random seeds.
If we observe differences greater than these intervals we consider them
significant; otherwise we consider the result to be the same.
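This seed-based significance check can be sketched as follows (assuming 5 seeds and a t-based 95% interval; the critical value 2.776 is for 4 degrees of freedom, and the overlap rule below is a deliberately conservative variant):

```python
import statistics

T_CRIT_DF4 = 2.776  # two-sided 95% t critical value, df = 5 - 1

def ci95_halfwidth(scores):
    """Half-width of a 95% confidence interval for the mean seed score."""
    se = statistics.stdev(scores) / len(scores) ** 0.5
    return T_CRIT_DF4 * se

def significantly_different(scores_a, scores_b):
    """Treat two models as different only if the gap between their means
    exceeds the combined interval half-widths (non-overlapping intervals)."""
    gap = abs(statistics.mean(scores_a) - statistics.mean(scores_b))
    return gap > ci95_halfwidth(scores_a) + ci95_halfwidth(scores_b)
```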
### 3.2 Cross-Lingual Model Results
Our goal in developing a cross-lingual model is to reduce the overhead of
hosting multiple monolingual models. However, the single cross-lingual model
should perform at least on par with the monolingual model. To test this
assumption, we trained a single cross-lingual model and tested it across all
languages. We then trained 12 separate monolingual models, starting from the
publicly available XLMR-base pretrained weights (to avoid confounding factors
from alternative monolingual base models). We then evaluated these monolingual
models against the same cross-lingual evaluation dataset as a benchmark. A
summary of results is shown in Table 4, where we report results for _fr_ ,
_en_ , _de_ , and _ja_ languages. We also evaluated 8 other languages and
observed the same overall relative results. The best monolingual Task-1 result
was $73.39$ (_en_) and the worst was $14.52$ (_pl_), with the corresponding
cross-lingual result reaching $79.12$ (_pl_). The best cross-lingual result
was $91.65$ (_en_) and the worst was $71.05$ (_ko_), with the corresponding
monolingual result dipping to $48.84$ (_ko_). In every language we examined,
the cross-lingual model outperforms the monolingual model, strongly supporting
a move to cross-lingual modeling for our tasks.
Train Lang | Eval Lang | Task-1 | Task-2 | Task-3
---|---|---|---|---
all | fr | 87.69 | 82.43 | 36.16
fr | 68.91 | 74.04 | 26.24
all | en | 91.65 | 79.67 | 40.47
en | 73.39 | 77.2 | 33.69
all | de | 86.07 | 77.74 | 34.71
de | 68.71 | 70.65 | 23.52
all | ja | 80.3 | 70.71 | 32.2
ja | 56.22 | 64.21 | 15.9
Table 4: Cross-lingual model comparison with monolingual models, evaluated on
the same target language. Across all languages and tasks we evaluated, we
observed the cross-lingual models to outperform monolingual models.
### 3.3 Cross-Lingual Multi-Task (XLMT) Model Results
We are also interested in combining multiple tasks into a single model to
reduce the engineering overhead of hosting our model. To evaluate whether our
model maintained similar performance to single-task models, we evaluated the
combined XLMT model in comparison to the single-task models for both XLM-R and
mBert pretrained models. The experimental results in Table 5 show that the
XLMT model performed similarly to, if not better than, the single-task model
on Task-1 and Task-2 prediction. For Task-3 we observed some significant
degradation in the task performance.
To further confirm these results, we performed a similar analysis using
another multilingual model, mBert. Using mBert, we again observed some modest
gains for the first two tasks and then significant degradation for the third
task.
These results indicate our current multi-task architecture does benefit two of
the three tasks. However, for final deployment it will be important to
consider moving our third task into a separate model or develop alternative
multi-task architectures to reduce the performance gap.
Model | Train Method | Eval Task | F1
---|---|---|---
XLM-R | multi-task | Task-1 | 82.23
Task-2 | 76.03
Task-3 | 38.32
single-task | Task-1 | 81.12
Task-2 | 74.67
Task-3 | 51.27
mBert | multi-task | Task-1 | 78.88
Task-2 | 75.27
Task-3 | 35.88
single-task | Task-1 | 78.63
Task-2 | 74.31
Task-3 | 51.12
Table 5: Task-specific results for cross-lingual single-task models and multi-
task models. Macro-F1 results are reported on the full evaluation set,
consisting of all languages (16 for Task-1, 12 for Task-2/3).
### 3.4 Compressed XLMT Model Results
In developing the XLMT model, engineering overhead was reduced from $16\times
1+12\times 2=40$ individual models to a single cross-lingual multi-task model
(or two, given the Task-3 outcomes above). However, given the size
of the XLM-Roberta model, the hosting costs associated with serving inference,
specifically given the need for GPU instances to generate predictions at low
latency, remained high. To reduce this base cost and reduce the latency of
this model, we focused on compressing the model itself. As mentioned earlier,
compressing the model reduces its overall capacity, which is in tension with
the goal of maintaining the performance of the combined XLMT model.
Our results in Table 6 show that simply performing structured layer pruning on
the model resulted in some degradation of task performance. For Task-1 with
the MiniLM-L12 architecture, the larger of the small architectures considered,
we see about $1.6$% relative degradation. MiniLM-L6 shows $3.9$% degradation,
while XtremeDistil shows over $20$% degradation. The same pattern holds for
Task-2, and for Task-3 we see even less degradation for MiniLM-L12.
These results strongly favor MiniLM-12 and MiniLM-6 for compressing our
specific use case.
Model | Task-1 | Task-2 | Task-3
---|---|---|---
XLM-R | 82.23 | 76.80 | 35.90
MiniLM-L12 | 80.85 | 75.86 | 35.09
MiniLM-L6 | 78.97 | 72.42 | 35.34
XtremeDistil | 61.83 | 61.59 | 24.00
Table 6: Results comparing the original MiniLM and XtremeDistil models with
the full-size XLM-R model across Task-1, Task-2, and Task-3 macro-F1 scores.
### 3.5 Distilled XLMT Model Results
To address the degradation resulting from structured layer pruning we
incorporated some model distillation using the final layers of the full-size
and compressed models. We explored using a single multi-task teacher and task-
specific teachers, as well as using a single cross-lingual teacher and
language-specific teachers. However we ultimately use cross-lingual task-
specific teachers because the performance of Task-3 as a single task model
outperformed the multi-task model, as shown in Table 5 and cross-lingual
models consistently out-performed language-specific models as shown in Table
4. To provide additional model compression after enabling distillation, we
trained the model with QAT in order to further reduce model complexity. To
evaluate model speedup, each model was run for sequences of length 128 with
batch size 32 on 1 Nvidia T4 GPU leveraging TensorRT
(https://developer.nvidia.com/tensorrt). Speedup was measured relative to the
baseline model, XLM-R (fp32).
Model | Speedup | Task-1 | Task-2 | Task-3
---|---|---|---|---
XLM-R (fp32) | x1 | 82.23 | 76.80 | 35.90
XLM-R (int8 quantized) | x3.64 | 81.09 | 73.60 | 35.80
MiniLM-L12 (fp32) | x3.29 | 79.42 | 75.1 | 35.36
MiniLM-L12 (int8 quantized) | x8.11 | 79.29 | 73.69 | 35.71
MiniLM-L6 (int8 quantized) | x15.61 | 79.05 | 73.90 | 35.84
MiniLM-L12-mBert (int8 quantized) | x8.11 | 79.05 | 73.48 | 35.57
Table 7: Results on model distillation and quantization aware training.
Task-1, Task-2, and Task-3 results are reported in macro-F1 scores. XLM-R
models were used as the teacher in all results, except for MiniLM-L12-mBert
which used mBert teachers.
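The speedup figures above come from timed inference runs; a generic benchmarking sketch follows (the `run_inference` callables are hypothetical stand-ins for the deployed engines, and real GPU measurements must additionally synchronize the device before reading the clock):

```python
import time

def mean_latency(run_inference, iters=100, warmup=10):
    """Average wall-clock latency of a callable over `iters` timed runs,
    after `warmup` untimed runs to warm caches / JIT / GPU kernels."""
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(iters):
        run_inference()
    return (time.perf_counter() - start) / iters

def speedup(baseline_latency, model_latency):
    """Relative speedup versus the baseline model."""
    return baseline_latency / model_latency

# Dummy workloads standing in for full-size vs. compressed models.
slow = lambda: sum(i * i for i in range(20_000))
fast = lambda: sum(i * i for i in range(2_000))
rel = speedup(mean_latency(slow, iters=20, warmup=2),
              mean_latency(fast, iters=20, warmup=2))
```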
The two best models after distillation were the quantized MiniLM-L6, with
$2.60$% average relative task degradation, and the non-quantized MiniLM-L12,
with $2.37$% average relative task degradation. We found that quantized
MiniLM-L6 improved more from distillation than MiniLM-L12. While we are still
investigating the full cause, our current hypothesis is that the smaller model
provides some regularization against overfitting relative to the larger model.
The quantized MiniLM-L6 model also provided the greatest speedup, at 15.61x
over the baseline. In our final assessment, task-specific distillation of the
MiniLM-L6 model with quantization delivered a strong reduction in model size
while maintaining model performance, as shown in Table 7. However, the best
model overall was the MiniLM-L12 in Table 6, which showed the least overall
degradation (1.71%) with a modest 3.29x speedup.
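The average relative task degradation figures can be recomputed from the macro-F1 scores in Table 7. The sketch below assumes the metric is the unweighted mean of per-task relative macro-F1 drops versus the XLM-R (fp32) baseline, which reproduces the quoted 2.60% for quantized MiniLM-L6:

```python
def avg_relative_degradation(baseline_scores, model_scores):
    """Unweighted mean of per-task relative macro-F1 drops, in percent."""
    drops = [(b - m) / b * 100.0 for b, m in zip(baseline_scores, model_scores)]
    return sum(drops) / len(drops)

# Macro-F1 scores on Task-1, Task-2, Task-3 from Table 7
xlmr_fp32 = [82.23, 76.80, 35.90]
minilm_l6_int8 = [79.05, 73.90, 35.84]
minilm_l12_fp32 = [79.42, 75.10, 35.36]

deg_l6 = avg_relative_degradation(xlmr_fp32, minilm_l6_int8)    # ≈ 2.60
deg_l12 = avg_relative_degradation(xlmr_fp32, minilm_l12_fp32)  # ≈ 2.38 (quoted as 2.37; rounding differs slightly)
```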
## 4 Business Impact
An implementation of these compression techniques and choices was developed in
our production training system. At Qualtrics, this system has generated
significant business impact across several dimensions: customer-facing
features; financial, environmental, and operational cost; and system
flexibility and robustness.
### 4.1 Feature impact
Given the speedup the compressed and multi-task models provide, we observe
significant increases in throughput across use cases, allowing us to serve
more customers and to offer features that were previously too compute-
intensive. As an example, this speedup enables ML pipelines that run 3-4
models serially in the same time window as a single model. Additionally, the
flexibility of cross-lingual models lets us serve more customers in more
languages, without large training sets or comprehensive language
specialization.
### 4.2 Financial impact
Conservatively, we estimate approximately 44% savings in terms of hardware
cost from the developments in compression of the multi-task cross-lingual
model, in comparison to an uncompressed system at similar latency and
throughput. In addition, by combining multiple tasks that are invoked under
similar load into a single model, we incur only a fraction of the total
inference cost that separate per-task models would require.
This savings is driven by several factors: reducing base instance capacity;
reducing the amount of dynamic scaling; and allowing for deployment of lower
cost hardware. We note that these savings are limited by the persistent cost
of the base capacity, even when reduced, which creates a floor for cost
savings even with models that are multiple times faster.
Currently, we deploy our models on a public cloud with a cost of approximately
$80K/month/model. The compression technique used in this paper reduces cost by
44%, resulting in $35K savings/month for a single model compared to the
current model in production for the same tasks. As we apply our compression
framework to other NLP systems developed by this group, we project potential
yearly cost savings in the single-digit millions of dollars. Given current
macroeconomic conditions and an industry-wide need for cost reduction, these
savings can strengthen the fundamentals of a typical SaaS company like
Qualtrics.
### 4.3 Ethical impact
We are pleased that our efforts to compress models will support environmental
sustainability. By reducing the power and resources needed to run the same
inference, we anticipate a meaningful reduction in our environmental
footprint, though we are unable to quantify it concretely at this time.
### 4.4 Operational impact & robustness
While MiniLM-L6 provided better speedup, business needs required the lower
degradation provided by MiniLM-L12 for the first set of models. By leveraging
the compressed XLMT model, we enable additional flexibility in production
deployment scenarios, including different instance types (CPU, GPU, custom
accelerators) that enable us to serve high throughput inference while
balancing cost. This was not previously viable with larger models, which
required GPUs in order to serve high throughput inference. By enabling this
flexibility, we also improve system robustness, as the models are robust to
instance unavailability or instance downtime for any type of instance.
Additionally, moving to a single multi-task model reduces the overall workload
for our engineering teams, removing per-task deployments and deployment
pipelines and lowering the barriers to new tasks and new language support.
Specifically, multi-task and cross-lingual modeling
reduces the number of models for these tasks from 36 potential models (12
languages, 3 tasks) to a single model, reducing operational cost from 6-7 on-
call/operations engineers to 1. Compression additionally reduces this cost by
lowering latency and increasing throughput, reducing the operational cost of
mitigating latency spikes and scaling issues.
## 5 Conclusion
We have presented a case study into the Qualtrics approach for leveraging
cross-lingual and multi-task modeling techniques in combination with model
distillation and quantization techniques, in order to develop models that can
handle traffic volumes at scale. The results show that for our multi-class
classification tasks, these methods can be combined effectively to reduce
deployment overhead and maintenance and achieve up to 15.61x speedup with
2.60% average degradation. Our results explore boundary cases where
compression works well, and where it can degrade past business requirements;
where combining up to 12 languages works well; and where combining tasks works
well, and where it does not. These approaches have been necessary for us to
scale up our unique text classification problems in the growing field of
experience management. We anticipate these results will help to guide other
groups hoping to reduce model inference costs, as well as contribute to future
theoretical work around the tradeoff between model compression and model
consolidation. Looking forward, we plan to apply these methods to more complex
sequence labeling tasks and to explore additional methods such as model
sparsity and neural architecture search to see if even faster models can be
developed with acceptable levels of model performance.
# Evidence for a Massive Andromeda Galaxy Using Satellite Galaxy Proper
Motions
Ekta Patel Department of Astronomy, University of California, Berkeley, 501
Campbell Hall, Berkeley, CA, 94720, USA Kaisey S. Mandel Institute of
Astronomy and Kavli Institute for Cosmology, Madingley Road, Cambridge CB3
0HA, UK Statistical Laboratory, DPMMS, University of Cambridge, Wilberforce
Road, Cambridge CB3 0WB, UK The Alan Turing Institute, Euston Road, London
NW1 2DB, UK
###### Abstract
We present new mass estimates for Andromeda (M31) using the orbital angular
momenta of four satellite galaxies (M33, NGC 185, NGC 147, IC 10) derived from
existing proper motions, distances, and line-of-sight velocities. We infer two
masses for M31: $M_{\rm vir}=2.85^{+1.47}_{-0.77}\times 10^{12}\,M_{\odot}$
using satellite galaxy phase space information derived with HST-based M31
proper motions and $M_{\rm vir}=3.02^{+1.30}_{-0.69}\times 10^{12}\,M_{\odot}$
using phase space information derived with the weighted average of HST+Gaia-
based M31 proper motions. The precision of our new M31 mass estimates (23-50%)
improves by a factor of two compared to previous mass estimates using a
similar methodology with just one satellite galaxy and places our results
amongst the highest precision M31 estimates in recent literature. Furthermore,
our results are consistent with recently revised estimates for the total mass
of the Local Group (LG), with the stellar mass–halo mass relation, and with
observed kinematic data for both M31 and its entire population of satellites.
An M31 mass $>2.5\times 10^{12}\,M_{\odot}$ could have major implications for
our understanding of LG dynamics, M31’s merger and accretion history, and our
understanding of LG galaxies in a cosmological context.
galaxies: evolution, galaxies: fundamental parameters, galaxies: kinematics
and dynamics – Local Group, methods: statistical
Journal: ApJ. Software: This work has been possible thanks to astropy (The
Astropy Collaboration et al., 2018), numpy (van der Walt et al., 2011), scipy
(Jones et al., 2001–), and matplotlib (Hunter, 2007). The IllustrisTNG data
are publicly available at https://www.tng-project.org/data/.
## 1 Introduction
Pinning down the total masses of the Milky Way (MW) and Andromeda (M31) is
vital to almost all aspects of understanding the formation and evolution of
the Local Group (LG). Nearly all galaxy parameters are directly correlated to
the total mass of a galaxy, a majority of which resides in the dark matter
halo. Therefore, constraining halo mass is also key to revealing clues about
the nature of dark matter itself.
While recent works have made significant progress towards constraining the
total mass of the MW using methods that rely on measured 6D phase space
information (3D position + 3D velocity) for various stellar substructures,
including satellite galaxies, globular clusters, stellar streams, and halo
stars (e.g., Busha et al., 2011; Boylan-Kolchin et al., 2013; González et al.,
2013; Peñarrubia et al., 2014; Cautun et al., 2014; Gibbons et al., 2014;
Peñarrubia et al., 2016; Eadie et al., 2017; Deason et al., 2021), the lack of
equivalent information for substructures around M31 has posed a number of
challenges in advancing our understanding of M31. One key piece of missing
information has been a precise mass estimate that reconciles the latest
picture of M31’s merger history, accretion history, and observed properties.
Nevertheless, a variety of methods have been previously utilized to estimate
the mass of M31 in the absence of 6D phase space information. However,
systematic differences in the assumptions, methods, and data have still led to
a large scatter ranging from $0.5-3.5\times 10^{12}\,M_{\odot}$. Examples of
such assumptions include the use of a steady-state halo and spherical halo
geometry, which may no longer be accurate in light of recent studies that have
shown that M31 has likely had an eventful recent past (e.g., Lewis et al.,
2023; Mackey et al., 2019; D’Souza & Bell, 2018; Hammer et al., 2018;
McConnachie et al., 2018, 2009; Fardal et al., 2006; Dey et al., 2022). Other
assumptions, such as those requiring constant velocity anisotropy, can either
over- or underestimate the uncertainty in the mass of M31, demonstrating the
need for less biased mass estimation techniques.
Another commonly used technique that yields high uncertainty in virial mass
(or equivalent total halo mass definitions, such as $M_{200}$) is the
extrapolation from enclosed masses to virial masses. (“Virial” quantities
refer to quantities calculated following the definitions provided in Bryan &
Norman 1998.) The former have less associated uncertainty; however, virial
masses are often more useful for placing a galaxy in a cosmological context.
Enclosed masses have been reported for methods including
those that use galaxy rotation curves (e.g., Chemin et al., 2009; Sofue,
2015), distribution functions (e.g., Evans et al., 2000; Evans & Wilkinson,
2000), the Jeans equation (e.g., Watkins et al., 2010), stellar streams (Ibata
et al., 2004; Fardal et al., 2006, 2013; Dey et al., 2022), globular clusters
(e.g, Perrett et al., 2002; Lee et al., 2008; Galleti et al., 2006; Veljanoski
et al., 2013), and sometimes satellite galaxies (e.g., Hayashi & Chiba,
2014a). Strong assumptions combined with extrapolation techniques can then
further reduce both the accuracy and precision of mass estimation techniques.
Therefore, in this work, we turn to high-precision astrometric data and
cosmological simulations to devise a technique that statistically constrains
host galaxy virial mass, eliminating these extra sources of uncertainty.
While 6D phase space information is available for almost all of the MW’s
satellite galaxies (Gaia Collaboration et al., 2018; Simon, 2018; Fritz et
al., 2018; McConnachie & Venn, 2020a, b; Li et al., 2021; Battaglia et al.,
2022; Pace et al., 2022), the same information is only available for four of
M31’s satellite galaxies (M33, NGC 185, NGC 147, IC 10) owing to their
distance (Brunthaler et al., 2005, 2007; Sohn et al., 2020). The proper motion
(PM) of M31 itself was also only measured a decade ago for the first time
(Sohn et al., 2012; van der Marel et al., 2012a).
In previous work, we demonstrated that combining cosmological simulations with
high-precision data for satellite galaxies is a powerful technique to
constrain host galaxy virial mass, building on the statistical methods of
Busha et al. (2011) and González et al. (2013). In Patel et al. (2017b,
hereafter P17) we used a Bayesian framework to estimate the mass of M31 and
the MW using measurements of the following observed properties of the Large
Magellanic Cloud (LMC) and M33: distance relative to the MW or M31 ($r^{\rm
obs}$), velocity relative to the MW or M31 ($v^{\rm obs}_{\rm tot}$), maximum
circular velocity ($v^{\rm obs}_{\rm max}$), and orbital angular momentum
($j^{\rm obs}$). This approach included two sub-methods, the first using the
combination of instantaneous properties, namely position and velocity, and the
second focusing on dynamical properties, particularly orbital angular
momentum. We found the best estimate of M31’s mass from each sub-method to be
$M_{\rm vir}=1.44^{+1.26}_{-0.69}\times 10^{12}\,M_{\odot}$ (instantaneous method)
and $M_{\rm vir}=1.37^{+1.39}_{-0.75}\times 10^{12}\,M_{\odot}$ (momentum method).
Furthermore, we concluded that angular momentum is a much more reliable tracer
of host galaxy halo mass, as it is robust against bias introduced by different
host-satellite orbital configurations and whether a satellite is bound to its
host or not.
Extending the analysis of P17, in Patel et al. (2018, hereafter P18) eight
satellite galaxies were used to estimate the mass of the MW to a precision of
$\sim$30%, significantly improving on the precision achievable with only one
satellite. However, given the low number of halos and subhalos in simulations
that broadly represent the properties of MW satellites, a post-processing step
was introduced to statistically combine the posterior distributions of MW
virial mass inferred with each satellite. Thus, in practice, our “ensemble”
results, which aim to leverage the 6D phase space of eight MW satellite
galaxies simultaneously, are still an approximation of the MW’s mass,
especially since the mass resulting from analyzing individual satellites
independently exhibits a scatter of a factor of three.
Here, we improve on the work of P17 by including the phase space information
for three additional M31 satellites. Given the smaller satellite sample size
compared to P18, we are also able to improve our statistical methodology. In
particular, we can relax the statistical approximation previously needed to
combine posterior distributions corresponding to individual satellites and
instead compute joint likelihoods with four satellites simultaneously. This is
expected to yield the most precise M31 mass to date, allowing all properties
of M31’s halo, including its shape, to act as free parameters and requiring no
assumptions about whether satellites are bound to M31.
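The joint-likelihood idea can be sketched as an importance-sampling scheme over simulated hosts. The following is a minimal illustration only: the Gaussian likelihood on $j$, the toy host list, and the two-satellite setup are simplifying assumptions, not the exact machinery described later.

```python
import math

def gaussian_pdf(x, mean, sigma):
    """Normal density used as a per-satellite likelihood on j."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def mvir_posterior(hosts, observations):
    """Weight each simulated host by the joint likelihood of all observed
    satellite angular momenta, then normalize the weights.

    hosts:        list of (M_vir, [j_sim for each matched subhalo])
    observations: list of (j_obs, j_err), one entry per satellite
    """
    weights = []
    for m_vir, j_sims in hosts:
        w = 1.0
        for j_sim, (j_obs, j_err) in zip(j_sims, observations):
            w *= gaussian_pdf(j_obs, j_sim, j_err)
        weights.append(w)
    total = sum(weights)
    return [(m_vir, w / total) for (m_vir, _), w in zip(hosts, weights)]

# Toy example: two hosts and two observed satellites. The (j_obs, j_err)
# pairs are loosely based on Table 2; the simulated j values are invented.
obs = [(38253.0, 8010.0), (18113.0, 7366.0)]
hosts = [(1.5e12, [38000.0, 18000.0]),   # closely matches the observations
         (3.0e12, [60000.0, 40000.0])]   # poor match
posterior = mvir_posterior(hosts, obs)
```

A host whose subhalos reproduce all four observed angular momenta simultaneously receives nearly all of the posterior weight, which is why the joint treatment is more constraining than combining single-satellite posteriors after the fact.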
This paper is organized as follows. In Section 2 we describe both the
observational data sets and the simulations used in this analysis. Section 3
briefly outlines the statistical methods from previous work and the
modifications required for the M31 system. In Section 4 we provide results for
the estimated mass of M31, and Section 5 places these results in the context
of recent literature and cosmological predictions. Finally, we conclude and
summarize in Section 6.
## 2 Simulated and Observed Galaxy Properties
In this section, we discuss the observed data for M31 and the four M31
satellite galaxies of interest (NGC 147, NGC 185, M33, IC 10). We also discuss
the IllustrisTNG-Dark simulations, the suite of dark matter only simulations
used in combination with the observed data to statistically estimate the mass
of M31.
### 2.1 Observed Satellite Properties
#### 2.1.1 Distances and Radial Velocities
We use distances from a novel homogeneous compilation (Savino et al., 2022;
Nagarajan et al., 2022) that have been derived as part of the Cycle 27 HST
Treasury survey of the M31 system (GO-15902, PI: D. Weisz) for all galaxies
except IC 10. The distances to M31 and to the satellite galaxies have been
measured from HST time-series of RR Lyrae variable stars, using a reddening-
independent calibration from the models of Marconi et al. (2015), which have
been empirically re-calibrated to ensure consistency with the Gaia eDR3
astrometric reference frame. Measuring a new distance to IC 10 was not
possible due to high extinction, so we adopt the McQuinn et al. (2017)
distance based on the tip of the red giant branch. All distance moduli and
heliocentric line-of-sight (LOS) radial velocities are listed in Table 1.
Galaxy | $(m-M)_{0}$ | $v_{\rm LOS}$ | $\mu_{\alpha*},\,\mu_{\delta}$ | Refs.
---|---|---|---|---
| [mag] | [km s-1] | [$\mu$as yr-1] |
M31 (HST+sats) | 24.45$\pm$0.06 | -301.0 | 45$\pm$13, -32$\pm$12 | 1,3,4
M31 (HST+Gaia) | 24.45$\pm$0.06 | -301.0 | 49$\pm$11, -38$\pm$11 | 1,4,8
M33 | 24.67 $\pm$0.06 | -179.2 | 23$\pm$7, 8$\pm$9 | 1,5,9
NGC 185 | 24.06$\pm$0.06 | -203.8 | 24$\pm$14, 6$\pm$15 | 1,6,11
NGC 147 | 24.33$\pm$0.06 | -193.1 | 23$\pm$14, 38$\pm$15 | 1,6,11
IC 10 | 24.43$\pm$0.03 | -348.0 | 39$\pm$9, 31$\pm$8 | 2,7,10
Table 1: Distance moduli, LOS radial velocities, and proper motions for M31
and the four satellite galaxies used in this work. References are labeled as
follows: (1) Savino et al. (2022); (2) McQuinn et al. (2017); (3) van der
Marel et al. (2012a); (4) Slipher (1913); (5) Corbelli & Schneider (1997); (6)
Geha et al. (2010); (7) Huchra et al. (1999); (8) van der Marel et al. (2019);
(9) Brunthaler et al. (2005); (10) Brunthaler et al. (2007); (11) Sohn et al.
(2020).
#### 2.1.2 M31
This work relies on the properties of satellite galaxies with respect to M31,
thus we first need to establish the M31 properties that will be used to
subsequently derive observed satellite galaxy properties. We use the M31
distance derived by Savino et al. (2022) which gives
$D_{M31}=776.2^{+22}_{-21}$ kpc. The LOS velocity, $v_{\rm LOS}=-301$ km s-1,
comes from Slipher (1913). The last necessary component to convert the
properties of satellite galaxies to an M31-centric reference frame is the
Galactocentric motion of M31, which relies on a measured PM. This velocity
acts as a zero-point of the satellites’ motion.
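The quoted M31 distance follows directly from the distance modulus in Table 1 via the standard relation $d\,[{\rm pc}]=10^{(m-M)_{0}/5+1}$; a quick check:

```python
def modulus_to_kpc(mu):
    """Convert a distance modulus (m - M)_0 [mag] into a distance in kpc."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0

d_m31 = modulus_to_kpc(24.45)  # ≈ 776.2 kpc, matching Savino et al. (2022)
```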
The first direct PM measurement for M31 was taken with the Hubble Space
Telescope (HST) in 2012 (Sohn et al., 2012; van der Marel et al., 2012a).
Multiple additional estimates (both direct and indirect) for M31’s PM have
also been reported using satellite galaxies and stellar population data
measured with both HST and Gaia (e.g., van der Marel & Guhathakurta, 2008;
Salomon et al., 2016; van der Marel et al., 2019; Salomon et al., 2021). As in
Sohn et al. (2020), we adopt two M31 PM measurements, those reported in van
der Marel et al. (2012a, referred to as HST+sats) and van der Marel et al.
(2019, referred to as HST+Gaia), which give tangential velocity zero-points of
$V_{\rm tan,HST+sats}=17\rm\,km\,s^{-1}$ (with a $1\sigma$ confidence region
of $V_{\rm tan,HST+sats}\leq 34.3$ km s-1) and $V_{\rm
tan,HST+GaiaDR2}=57^{+35}_{-31}\rm\,km\,s^{-1}$, respectively (see Table 2).
We do not consider the measurements reported in Salomon et al. (2021) using
data from Gaia eDR3 as their PMs measured with blue young main sequence stars
are consistent with and as accurate as the weighted average between HST
measurements and indirect estimates from the LOS velocities of satellite
galaxies (our HST+sats data; Sohn et al., 2012; van der Marel et al., 2012a,
see Fig. 6 in Salomon et al. (2021)). Throughout this analysis, we will
present results using observational satellite properties derived from both
sets of M31 tangential velocity zero points. Table 2 lists the relevant
observational properties for the four satellite galaxies used in this study.
In the top half, the positions, velocities, and angular momenta are derived
using the M31 HST+sats tangential velocity zero-point, and in the bottom half,
the HST+Gaia tangential velocity zero-point is used.
#### 2.1.3 M33
Position, velocity, and orbital angular momentum for M33 using the M31
HST+sats zero-point are adopted from P17 (see Table 1 and references therein)
and are listed in the first row of Table 2. In short, M33’s PM was first
measured using the Very Long Baseline Array (VLBA) by Brunthaler et al.
(2005). M33’s PM was also independently measured again in van der Marel et al.
(2019) using data from Gaia DR2. In this analysis, we adopt the weighted
average of the VLBA+Gaia DR2 PMs from van der Marel et al. (2019) whenever the
M31 HST+Gaia DR2 tangential velocity zero-point is used (bottom half of Table
2). The most substantial changes between the HST+sats and HST+Gaia data sets
are that M33’s 3D velocity vector relative to M31 increased by 55 km s-1 and
the 3D position vector relative to M31 increased by $\sim$ 20 kpc.
Our framework relies on the maximum circular velocity of only the dark matter
halo of a satellite’s rotation curve, $v_{\rm max}^{\rm obs}$. M33’s total
rotation curve was measured out to $\sim$15 kpc by Corbelli & Salucci (2000)
reaching a maximum velocity of $\approx 130$ km s-1. For $v_{\rm max}^{\rm obs}$, we instead use
the peak halo velocity from van der Marel et al. (2012b, 90 km s-1) for M33,
which is determined by reconstructing a model rotation curve to match the
observed data. The M33 VLBA PMs are used to determine the values listed in the
top half of Table 2.
We calculate the uncertainties on position, velocity, and orbital angular
momentum using Monte Carlo draws from the $4\sigma$ error space on the
measured LOS velocity, distance modulus, and PM measurements (see Patel et
al., 2018; van der Marel et al., 2002). The values and associated
uncertainties listed in Table 2 represent the mean and standard deviation for
each quantity using 10,000 position and velocity vectors resulting from the
Monte Carlo sampling. We assign an uncertainty of 10 km s-1 to $v_{\rm
max}^{\rm obs}$ for all satellites since this is equivalent to reported
uncertainties in rotation curves for galaxies at these distances.
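The Monte Carlo error propagation can be sketched for a single satellite as follows. This simplified version draws the distance modulus and PM components from their quoted Gaussian errors and propagates them into a heliocentric tangential velocity; the full analysis instead works with 3D M31-centric position and velocity vectors, which we do not reproduce here.

```python
import math
import random

def mc_tangential_velocity(mu_dm, sig_dm, pm, sig_pm, n=10000, seed=42):
    """Monte Carlo propagation of distance-modulus and proper-motion errors
    into a heliocentric tangential velocity [km/s].

    mu_dm, sig_dm : distance modulus and its 1-sigma error [mag]
    pm, sig_pm    : (mu_alpha*, mu_delta) and their errors [microarcsec/yr]
    Uses v_tan = 4.74e-3 * mu[uas/yr] * d[kpc].
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        d_kpc = 10.0 ** (rng.gauss(mu_dm, sig_dm) / 5.0 + 1.0) / 1000.0
        pm_tot = math.hypot(rng.gauss(pm[0], sig_pm[0]),
                            rng.gauss(pm[1], sig_pm[1]))
        samples.append(4.74e-3 * pm_tot * d_kpc)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return mean, std

# M33 with its VLBA PM from Table 1 (illustrative; heliocentric, not M31-centric)
v_tan, v_tan_err = mc_tangential_velocity(24.67, 0.06, (23.0, 8.0), (7.0, 9.0))
```

Reporting the mean and standard deviation of such samples for each derived quantity is exactly the summary given in Table 2.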
Note that the new distance for M33 (226 kpc in this work vs. 203 kpc in P17;
Savino et al., 2022) increases the angular momentum from 27656$\pm$8219 kpc km
s-1 (P17) to 37158$\pm$8011 kpc km s-1 (this work), an approximate increase of
30%. We have also improved our methodology for drawing Monte Carlo samples
since P17, however, these methodological changes only affects the tails of
M33’s $j$ distribution. Despite changes in adopted $j$, M31’s mass resulting
from the properties of M33 are still consistent within the uncertainties of
the P17 value, and conclusions from Patel et al. (2017a) that M33 is likely to
be on a first infall orbit still hold. See also Appendix A.
#### 2.1.4 NGC 147 & NGC 185
The first PMs of NGC 147 and NGC 185 were recently measured using HST (Sohn et
al., 2020). To determine the appropriate maximum circular velocity values of
these two dwarf elliptical galaxies, we first use the abundance matching
relation from Moster et al. (2013) to find the infall halo masses for NGC 147
and NGC 185 using the stellar masses reported in McConnachie et al. (2018).
(These abundance-matched masses are consistent with those from Garrison-Kimmel
et al. (2017), despite the well-known discrepancies between different SMHM
relations in the regime of low-mass galaxies.) From this relation, we
determine the following infall halo masses: $5\times 10^{10}\,M_{\odot}$ (NGC
147) and $4.5\times 10^{10}\,M_{\odot}$ (NGC 185). Using these halo masses, we
construct individual NFW halo profiles to best fit the dynamical masses
reported in McConnachie (2012). Dynamical mass relies on the half-light radius
and the Wolf estimator (Wolf et al., 2010) to constrain the mass within the
half-light radius. We varied the concentration of each NFW profile until the
enclosed mass at the half-light radius matched the dynamical mass. The best-
fit NFW halo profiles result in maximum circular velocities of 61 km s-1 (NGC
147) and 73 km s-1 (NGC 185).
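The concentration-fitting step described above can be sketched as a one-dimensional root find on the NFW enclosed-mass profile. The numbers in the example are illustrative placeholders (the actual half-light radii and dynamical masses come from McConnachie 2012, and the virial radius is an assumed value for this halo mass), so it will not exactly reproduce the quoted 61 and 73 km s-1 values.

```python
import math

G = 4.30091e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def _mu(y):
    """NFW mass-profile shape function: ln(1+y) - y/(1+y)."""
    return math.log(1.0 + y) - y / (1.0 + y)

def m_enclosed(r, m_vir, r_vir, c):
    """NFW mass [Msun] enclosed within r [kpc] for a halo of virial mass
    m_vir [Msun], virial radius r_vir [kpc], and concentration c."""
    return m_vir * _mu(c * r / r_vir) / _mu(c)

def fit_concentration(m_vir, r_vir, r_half, m_dyn, lo=1.0, hi=100.0):
    """Bisect on c until the enclosed mass at the half-light radius matches
    the dynamical mass (the matching condition described in the text)."""
    f = lambda c: m_enclosed(r_half, m_vir, r_vir, c) - m_dyn
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def v_max(m_vir, r_vir, c):
    """Peak circular velocity [km/s] of an NFW halo (peak at ~2.163 r_s)."""
    r_peak = 2.163 * r_vir / c
    return math.sqrt(G * m_enclosed(r_peak, m_vir, r_vir, c) / r_peak)

# Illustrative placeholders: the NGC 185 infall mass from the text, with an
# assumed virial radius, half-light radius, and dynamical mass.
c_fit = fit_concentration(m_vir=4.5e10, r_vir=90.0, r_half=0.46, m_dyn=1.0e8)
```

With the fitted concentration in hand, `v_max` gives the maximum circular velocity of the best-fit halo, which is the quantity adopted as $v_{\rm max}^{\rm obs}$.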
#### 2.1.5 IC 10
We use the PM of IC 10, the furthest satellite galaxy considered in this work,
as measured with the VLBA (Brunthaler et al., 2007). To determine the maximum
circular velocity of IC 10’s dark matter halo, we adopt the following
properties from Table 2 of Oh et al. (2015): $R_{max}$, $M_{dyn}(R_{max})$,
and $M_{200}$. We first convert $M_{200}$ to virial units, which yields a
virial mass of $M_{\rm vir}=1.9\times 10^{10}\,M_{\odot}$. Then, we subtract
the stellar and gaseous masses from $M_{\rm vir}$ and use this mass to
construct an NFW profile. We vary the concentration of the NFW profile until
the NFW profile’s
enclosed mass at $R_{max}$ is equivalent to $M_{dyn}(R_{max})$. The best-
fitting NFW profile results in a maximum circular velocity of 48 km s-1 for IC
10’s halo.
Galaxy | $r^{\rm obs}$ | $v^{\rm obs}_{\rm max}$ | $v^{\rm obs}_{\rm tot}$ | $j^{\rm obs}$
---|---|---|---|---
| [kpc] | [km s-1] | [km s-1] | [kpc km s-1]
M31 HST+sats $v_{\rm tan}$ zero-point
M33a | 226$\pm$13 | 90$\pm$10 | 202$\pm$40 | 38253$\pm$8010c
NGC 185 | 155$\pm$25 | 73$\pm$10 | 127$\pm$31 | 18113$\pm$7366
NGC 147 | 107$\pm$12 | 61$\pm$10 | 205$\pm$47 | 18082$\pm$6091
IC 10 | 247$\pm$24 | 48$\pm$10 | 264$\pm$44 | 56687$\pm$11063
M31 HST+Gaia DR2 $v_{\rm tan}$ zero-point
M33b | 226$\pm$13 | 90$\pm$10 | 257$\pm$50 | 41502$\pm 9886$
NGC 185 | 155$\pm$25 | 73$\pm$10 | 194$\pm$48 | 29832$\pm$9740
NGC 147 | 107$\pm$12 | 61$\pm$10 | 277$\pm$58 | 22219$\pm$8561
IC 10 | 247$\pm$24 | 48$\pm$10 | 339$\pm$51 | 66715$\pm$13045
Table 2: The adopted observed data for all four M31 satellite galaxies with
measured PMs and radial velocities. Properties include distance relative to M31 ($r^{\rm obs}$), maximum circular velocity ($v^{\rm obs}_{\rm max}$), velocity relative to M31 ($v^{\rm obs}_{\rm tot}$), and orbital angular momentum ($j^{\rm obs}$). a: M33’s adopted PM is from the VLBA (Brunthaler et
al., 2005). b: M33’s PM is the weighted average between VLBA and Gaia DR2 (van
der Marel et al., 2019). c: In P17, we adopted $r^{\rm obs}$ = 203 kpc and
thus $j=27656\pm 8219\rm\,kpc\ km\,s^{-1}$ for M33 but since we have updated
M33’s distance to $r^{\rm obs}$ = 226 kpc in this work, $j$ for M33 has also increased.
### 2.2 IllustrisTNG-Dark
We use halo catalogs from the IllustrisTNG project (Nelson et al., 2018;
Pillepich et al., 2018; Naiman et al., 2018; Springel et al., 2018; Marinacci
et al., 2018; Nelson et al., 2019) to choose a broad range of host halos and
their corresponding satellites as our prior sample. IllustrisTNG is a suite of
hydrodynamical+N-body cosmological simulations. For this work, we specifically
focus on IllustrisTNG100-1-Dark (hereafter IllustrisTNG-Dark), which follows
the evolution of $1820^{3}$ dark matter particles from $z\approx 127$ to $z=0$.
Each dark matter particle has a mass of $m_{DM}=6\times 10^{6}\,M_{\odot}/h$.
The following cosmological parameters are adopted for consistency with the
results from Planck Collaboration et al. (2016): $\Omega_{\Lambda,0}=0.6911$,
$\Omega_{m,0}=0.3089$, $\Omega_{b,0}=0.0486$, $\sigma_{8}=0.8159$,
$n_{s}=0.9667$, and $h=0.6774$.
As discussed in P18, we focus on a dark matter-only simulation since it yields
the largest prior sample (i.e., fewer satellites are disrupted or inhibited
from forming due to baryonic effects). However, full hydrodynamics still
yields a consistent answer for MW halo masses as compared to the dark matter-
only simulations (see P18, Fig. 6).
## 3 Statistical Methods
In P18, we used eight classical MW satellites to estimate the mass of the MW
but we were limited by the number of representative subhalos and corresponding
host halos in Illustris-1-Dark (hereafter Illustris-Dark). We found that halos
in Illustris-Dark at low redshift typically host between two and five subhalos
with properties broadly representative of the MW satellites. Therefore,
likelihood functions were evaluated per individual satellite using the same prior sample, and the results were combined using a statistical approximation that corrected for reusing the same prior for each satellite.
Since 6D phase space information is only available for four M31 satellites, it
is possible to take a more rigorous approach to estimate the mass of M31. In
short, there are enough halos hosting four subhalos representative of these
M31 satellites, and thus we do not need the additional approximation necessary
to combine the results for individual satellites as was needed in P18. It
does, however, require building a more strategic prior sample and a
modification of the likelihood functions previously implemented in P18.
### 3.1 Prior Sample Selection
For host halos and subhalos in the prior sample, the physical properties of
interest are $\bm{\Theta}=[\bm{X},m]$ where $m\equiv\log_{10}M_{\rm vir}$ and
Mvir is the virial mass of the host halo of any given subhalo. (We refer to virial mass and virial radius throughout this work. In IllustrisTNG-Dark, these represent the virial mass and radius of FoF groups as identified by SUBFIND; the SUBFIND definitions follow those of Bryan & Norman (1998), and adopting the IllustrisTNG-Dark cosmology, this corresponds to $\Delta_{\rm vir}=330$.) We define
$\bm{X}=\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm sat}}\\}$ (1)
as the collection of $N_{\rm sat}$ subhalo properties and $\bm{x}_{s}=[v_{{\rm
max},s},r_{s},v_{{\rm tot},s},j_{s}]$ are the latent, observable parameters of
subhalo $s$. The observational data in Table 2, denoted as
$\bm{d}_{s}=[v_{{\rm max},s}^{\rm obs},r_{s}^{\rm obs},v_{{\rm tot},s}^{\rm
obs},j_{s}^{\rm obs}]$, are measurements of the parameters $\bm{x}_{s}$ of the
M31 satellites such that if the errors on the measurements of the parameters
$\bm{x}_{s}$ were zero, then $\bm{d}_{s}=\bm{x}_{s}$. We define
$\bm{D}=\\{\bm{d}_{1},\ldots,\bm{d}_{N_{\rm sat}}\\}$ as the collection of
measurements of the $N_{\rm sat}$ subhalo properties.
To build a new prior sample, representing draws from $P(\bm{\Theta})$, that
will simultaneously constrain the mass of M31 using the four satellites of
interest, we first select all simulated halos where the host halo has a
minimal virial mass $\geq 10^{10}\,M_{\odot}$ at $z\approx 0$ and $v_{\rm
max}<250$ km s-1 following the observed rotation curve (Corbelli et al.,
2010). Recall that we select halos from snapshots 80-99 ($z=0-0.26$) to
increase the total number of samples in the prior. While there may be repeated
halo systems from snapshot to snapshot that satisfy the selection criteria, we treat these halos as independent draws. Of these systems, we consider those with four or more subhalos satisfying the following criteria $(\bm{C})$:
* $C_{1}$: the subhalo $v_{\rm max}=35-100\text{ km s}^{-1}$ at $z\approx 0$
* $C_{2}$: the subhalo resides within 0.3$R_{\rm vir}$–$R_{\rm vir}$ at $z\approx 0$
* $C_{3}$: the minimal subhalo mass is $\geq 5\times 10^{9}\,M_{\odot}$ at $z\approx 0$
* $C_{4}$: the subhalo is among the 10 most massive subhalos in the host halo group (excluding the primary)
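Criteria $C_{1}$–$C_{4}$ above amount to a simple filter over a subhalo catalog. The sketch below applies them to a toy catalog; the tuple layout and all numbers are hypothetical, not the IllustrisTNG data format.

```python
R_VIR = 300.0  # host halo virial radius [kpc]; toy value

# Toy subhalo catalog: (v_max [km/s], distance from host [kpc], mass [Msun]),
# already sorted by mass so that index 0 is the primary.
subhalos = [
    (210.0, 0.0, 2.0e12),   # primary (excluded by C4)
    (90.0, 220.0, 4.0e10),
    (70.0, 150.0, 2.0e10),
    (60.0, 110.0, 1.5e10),
    (48.0, 250.0, 9.0e9),
    (30.0, 200.0, 3.0e9),   # fails C1 and C3
    (55.0, 50.0, 8.0e9),    # fails C2 (inside 0.3 R_vir)
]

def satisfies_criteria(rank, v_max, r, mass, r_vir=R_VIR):
    return (35.0 <= v_max <= 100.0          # C1: v_max range
            and 0.3 * r_vir <= r <= r_vir   # C2: between 0.3 R_vir and R_vir
            and mass >= 5e9                 # C3: minimum subhalo mass
            and 1 <= rank <= 10)            # C4: top 10 by mass, excluding primary

selected = [s for rank, s in enumerate(subhalos) if satisfies_criteria(rank, *s)]
# A host enters the prior only if >= 4 subhalos pass; keep the 4 highest-v_max.
host_qualifies = len(selected) >= 4
analogs = sorted(selected, key=lambda s: s[0], reverse=True)[:4]
```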
Previously, we required all subhalos to reside within the virial radius of their host halo; however, all four satellites are in the outer halo of M31 (two are at $>100$ kpc and two are at $>200$ kpc). This particular subset of M31 satellites is not representative of the radial distribution of all M31 satellites, where 17/35 or nearly 50% (including NGC 147 and NGC 185) are located at 100-200 kpc, so we modified this criterion to include only subhalos located outside of 0.3 Rvir. Outer satellites are known to be the best tracers of the underlying host potential, so this modification is advantageous for constraining the virial mass of M31.
The prior sample purposefully contains only subhalos with $v_{\rm max}=35-100\,\text{km s}^{-1}$: the lower bound of 35 km s-1 ensures that analogs of IC 10 are included while the typical $v_{\rm max}$ values where the Too Big To Fail (TBTF) challenge is most prominent around the MW and M31 ($\lesssim 40\,\rm km\,s^{-1}$ in dark matter-only simulations) are largely excluded (Boylan-Kolchin et al., 2011; Tollerud et al., 2014). Avoiding the TBTF challenge also supports our choice of the dark matter-only IllustrisTNG-Dark simulation.
These criteria yield 71,371 subhalos across 14,075 halo systems. Figure 1
shows the distribution of latent subhalo properties, $\bm{x}$, for the
IllustrisTNG-Dark subhalos in the prior sample compared to the observed
properties of the four M31 satellites, $\bm{d}$. Each panel indicates a pair
of two parameters in $\bm{x}$. The observed properties are adequately
encompassed by the properties of the prior sample, indicating that this is an
appropriate selection of subhalos. Qualitatively, Figure 1 indicates that the
observed properties are most similar to those systems with Mvir
$>10^{12}\,M_{\odot}$.
Note that while some systems have more than four subhalos satisfying these
criteria, we only use the four subhalos with the highest $v_{\rm max}$ values
in the following analysis since phase space information is only available for
four M31 satellites.
Since we consider four satellites with different observed properties,
$\bm{d}$, we rank order the subhalos encompassed in the prior to ensure that
the observed satellites are matched to the appropriate simulated counterpart.
For each host halo in the prior, subhalos are ranked from highest to lowest
$v_{\rm max}$, where $s=1$ represents the subhalo with the highest $v_{\rm
max}$. The properties of the first subhalo in each system are statistically
compared to the properties of M33 and so on, such that $s=1,2,...,N_{\rm sat}$
maps to M33, NGC 185, NGC 147, IC 10 and $N_{\rm sat}=4$, the total number of
satellites.
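The rank-order matching above can be written as a one-line mapping. The $v_{\rm max}$ values below are hypothetical; only the ordering of the satellite names follows the text.

```python
# The four highest-v_max subhalos of a qualifying host are matched, in
# descending v_max order, to the observed satellites (s = 1, 2, 3, 4).
satellite_order = ["M33", "NGC 185", "NGC 147", "IC 10"]

subhalo_vmax = [52.0, 88.0, 61.0, 47.0, 40.0]  # toy host with 5 qualifying subhalos
ranked = sorted(subhalo_vmax, reverse=True)[:4]  # keep only the top four
matching = dict(zip(satellite_order, ranked))
# matching["M33"] holds the highest-v_max subhalo of this host, and so on.
```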
Therefore, though the draws from the prior defining our sample correspond to
71,371 subhalos, only 56,300 draws are used since just the first four subhalos
in each halo system are considered. These subhalos/halos will be treated as
draws from the underlying prior distribution.
Figure 1: The distribution of subhalo properties for all host-satellite
systems in the prior distribution, $P(m,\bm{x})$. Each panel shows a pair of
properties in $\bm{x}$ and points are colored by the host halo virial mass.
Observed satellite properties and their uncertainties, $\bm{d}$, with respect
to the HST+sats M31 $v_{\rm tan}$ zero-point are over-plotted in black markers
to illustrate that the observed satellite properties and their measurement
uncertainties are encompassed by the draws from the prior (see Section 3.1).
### 3.2 Likelihood Function
To estimate the mass of M31, we use a subset of the parameters $\bm{x}$ in the
following likelihood function, which is modified from those used in P18 to
include any number of satellites, $N_{\rm sat}$. All observed satellite
properties are assumed to have Gaussian errors.
In P17 and P18, we showed the advantages of the momentum method over the
instantaneous method, which uses instantaneous properties like position and
velocity, and therefore we only include results from the momentum method
throughout the rest of this work.
For the angular momentum method, $\bm{x}=(v_{\rm max},j)$, and the total
likelihood is the product of likelihoods computed over all satellites as
follows:
$\displaystyle P(\bm{D}|\,\bm{\Theta})=P(\bm{D}|\,\bm{X})=\prod_{s=1}^{N_{\rm sat}}P(\bm{d}_{s}|\,\bm{x}_{s})=\prod_{s=1}^{N_{\rm sat}}N(j_{s}^{\rm obs}|\,j_{s},\sigma_{j,s}^{2})\times N(v_{{\rm max},s}^{\rm obs}|\,v_{{\rm max},s},\sigma_{v_{{\rm max},s}}^{2}),$ (2)
where $N_{\rm sat}$ is the total number of satellites, and for each host halo in the sample, the subhalo with the highest value of $v_{\rm max}$ is used in the $s=1$ term (i.e., the M33 analog) and so on, according to the rank-order methodology discussed in Section 3.1. We invoke the assumption that the measurements $\bm{d}$ of an individual satellite, conditional on their latent values $\bm{x}$, have no additional dependence on the halo mass $m$, such that $P(\bm{d}|\,\bm{x},m)=P(\bm{d}|\,\bm{x})$; that is, the measurement errors in $\bm{d}$ are independent of $m$. Following Bayes’ theorem,
$\displaystyle P(\bm{\Theta}|\,\bm{D})\propto
P(\bm{D}|\,\bm{\Theta})P(\bm{\Theta}),$ (3)
the posterior distribution for the mass of M31 is then computed using the
likelihood, $P(\bm{D}|\,\bm{\Theta})$, and the prior, $P(\bm{\Theta})$, via
importance sampling as in P17, P18, and as described below.
Each observed satellite property is treated independently, as in previous work
(see also Busha et al., 2011). We note that this is not an ideal assumption as
all satellite properties are derived as relative quantities with respect to
M31. Furthermore, the nature of NGC 147 and NGC 185’s relation is still
uncertain. While our recent work (Sohn et al., 2020) shows that these galaxies
are not a binary pair orbiting M31 together, their PMs and LOS velocities are
still very similar, implying some correlation between these puzzling dwarf
ellipticals. A more detailed analysis including any correlations between
satellite properties is beyond the scope of this and previous work.
### 3.3 Importance Sampling
Generalizing from Section 3.2.1 of P18 to multiple satellites, Bayes’ theorem is
$P(\bm{X},m|\,\bm{D})\propto P(\bm{D}|\,\bm{X})\times P(\bm{X},m|\,\bm{C}),$
(4)
where $\bm{C}$ denotes the dependence of the prior on the selection criteria of Section 3.1, the left-hand side is the posterior distribution, $P(\bm{X},m|\,\bm{C})$ is the prior probability distribution, and $P(\bm{D}|\,\bm{X})$ is the likelihood (see Eq. 2).
The posterior probability density function (PDF) is computed by drawing a set
of samples of size $n$, from an importance sampling function. The importance
sampling function is chosen to be the prior PDF, so importance weights are
proportional to the likelihood. With these weights, integrals summarizing the
target parameter $m$ are calculated as follows. The posterior expectation of a
function $f(\bm{\Theta})$ of the latent properties is:
$\mathbb{E}[f(\bm{\Theta})|\,\bm{D}]=\int
f(\bm{\Theta})\,P(\bm{X},m|\,\bm{D};\bm{C})\,d\bm{\Theta}.$ (5)
If $f(\bm{\Theta})$ only depends on $m$, then the posterior expectation for
$m$ is written
$\mathbb{E}[f(m)|\,\bm{D}]=\int f(m)\,P(m|\,\bm{D};\bm{C})\,dm.$ (6)
Using $n$ samples from the prior $P(m,\bm{X}|\,\bm{C})$, indexed as
$j=1,\ldots,n$, Equation 6 can be approximated as a Monte Carlo sum:
$\mathbb{E}[f(m)|\bm{D}]\approx\sum_{j=1}^{n}f(m^{j})\,w_{j},$ (7)
where $w_{j}$ are importance weights. When using only one satellite galaxy
($N_{\rm sat}=1$), weights were calculated by
$\displaystyle w_{j}=\frac{P(\bm{d}|\,\bm{x}_{j})}{\sum^{n}_{i=1}P(\bm{d}|\,\bm{x}_{i})},$ (8)
where $n$ is the total number of samples from the prior. To generalize this to
multiple satellites ($N_{\rm sat}>1$), the importance weights are now
calculated as
$\displaystyle\begin{aligned} w_{j}=\frac{\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}_{j,s})}{\sum^{n}_{i=1}\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}_{i,s})}.\end{aligned}$ (9)
The derivation of these weights is given in Appendix B.
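A minimal sketch of Eqs. 2, 7, and 9, using hypothetical two-satellite, two-draw numbers rather than the actual catalogs:

```python
import math

def gaussian(x, mu, sigma):
    """Normal density N(x | mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def importance_weights(data, prior_draws):
    """Eq. 9: normalized weights from the product of per-satellite likelihoods.
    data: one (j_obs, sigma_j, vmax_obs, sigma_vmax) tuple per satellite.
    prior_draws: per draw, one (j, v_max) tuple per rank-matched subhalo."""
    raw = []
    for draw in prior_draws:
        like = 1.0
        for (j_obs, s_j, v_obs, s_v), (j, v) in zip(data, draw):
            like *= gaussian(j_obs, j, s_j) * gaussian(v_obs, v, s_v)
        raw.append(like)
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical example (not the paper's catalogs):
data = [(38000.0, 8000.0, 90.0, 10.0), (18000.0, 7000.0, 73.0, 10.0)]
prior_draws = [
    [(37000.0, 92.0), (19000.0, 70.0)],    # close to the data -> high weight
    [(90000.0, 150.0), (60000.0, 140.0)],  # far from the data  -> low weight
]
host_logm = [12.4, 13.1]  # log10 Mvir of each draw's host halo
w = importance_weights(data, prior_draws)
posterior_mean_logm = sum(m * wi for m, wi in zip(host_logm, w))  # Eq. 7, f(m) = m
```

Draws whose rank-matched subhalo properties sit close to the observed data dominate the weights, which is exactly why a low effective sample size can arise when few such draws exist.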
Setting $f(\bm{\Theta})=m$ as in Eq. 7 gives the posterior mean value of M31’s
virial halo mass. The weights in Eq. 9 are then used in a weighted kernel
density estimate to compute posterior probability densities over $m$. See P17
and P18 for additional details on importance sampling and kernel density
estimation.
For convenience, results are reported on a physical scale throughout as Mvir
$=X^{+U}_{-L}$ M⊙, where log10X is the posterior mean of log10Mvir and
[log10(X – L), log10(X + U)] is the 68% credible interval in log10Mvir.
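The reporting convention above can be computed from weighted draws as follows; the samples, weights, and the crude cumulative-sum quantile are illustrative stand-ins for the weighted kernel density estimate used in the text.

```python
def weighted_quantile(values, weights, q):
    """Crude weighted quantile via a cumulative sum over sorted samples."""
    cum = 0.0
    for v, w in sorted(zip(values, weights)):
        cum += w
        if cum >= q:
            return v
    return max(values)

logm = [12.2, 12.4, 12.5, 12.6, 12.9]  # toy log10 Mvir draws
wts = [0.1, 0.3, 0.3, 0.2, 0.1]        # toy importance weights (sum to 1)

mean_logm = sum(m * w for m, w in zip(logm, wts))
lo = weighted_quantile(logm, wts, 0.16)   # 68% interval in log10 Mvir
hi = weighted_quantile(logm, wts, 0.84)
X = 10.0 ** mean_logm                     # report Mvir = X^{+U}_{-L} Msun
L, U = X - 10.0 ** lo, 10.0 ** hi - X
```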
Figure 2: Posterior PDFs for the virial mass of M31 inferred using the
properties of four M31 satellite galaxies. Observed satellite properties are
relative to the M31 HST+sats $v_{\rm tan}$ zero-point. Posterior PDFs are
shown for the maximum circular velocity of the satellite’s dark matter halo
($v_{\rm max}$; red) and the total orbital angular momentum ($j$; purple). The
dashed gray curve represents the underlying prior probability distribution
given equal likelihood weights. The black curve corresponds to the total
posterior PDF using $v_{\rm max}+j$. The posterior mean M31 virial mass is
$2.85^{+1.47}_{-0.77}\times 10^{12}\,M_{\odot}$ (mom.). Uncertainties
represent the 68% credible intervals. The posterior mean M31 virial mass without the dependence on $v_{\rm max}$ is $2.47^{+1.21}_{-0.61}\times
10^{12}\,M_{\odot}$ (mom.). Excluding $v_{\rm max}$ consistently yields a
lower M31 Mvir.
## 4 Results: M31 Virial Mass Estimates
Using the observed data from Table 2 and the statistical methods described
above, we compute posterior PDFs for M31’s virial mass. M31’s mass is computed
using the properties of four satellite galaxies simultaneously and is
calculated with two different M31 tangential velocity zero-points.
### 4.1 M31 HST+sats Tangential Velocity zero-point Results
Figure 2 shows posterior PDFs using the observed properties of M31 satellites
for the HST+sats $v_{\rm tan}$ zero-point as inputs to the likelihood
functions (Eq. 2). The dashed gray line represents the underlying prior
distribution assuming equal weights. The prior encompasses four orders of
magnitude in virial mass from $10^{10}-10^{14}\,M_{\odot}$, with the highest
probability regions spanning $2\times 10^{11}-1\times 10^{13}\,M_{\odot}$ (see
Figure 1).
Individual posterior PDFs are shown for specific satellite properties,
including satellite halo maximum circular velocity ($v_{\rm max}$; red) and
satellite orbital angular momentum ($j$; purple). The posterior curves for the
momentum method (black curve) indicate results using both satellite
properties. The posterior mean M31 mass is Mvir $=2.85^{+1.47}_{-0.77}\times
10^{12}\,M_{\odot}$.
### 4.2 M31 HST+Gaia DR2 Tangential Velocity zero-point Results
Using satellite properties derived from the HST+Gaia M31 $v_{\rm tan}$ zero-
point, we follow the same methodology to compute posterior PDFs for M31’s
virial mass. We find a posterior mean M31 Mvir value of
$3.02^{+1.30}_{-0.69}\times 10^{12}\,M_{\odot}$, as illustrated in Figure 3.
This $v_{\rm tan}$ zero-point results in significantly higher M31 Mvir
estimates as compared to the HST+sats zero-point because M31’s $v_{\rm tan}$
is $\approx 40$ km s-1 higher when the HST+Gaia zero-point is adopted. The
increase in M31’s $v_{\rm tan}$ also propagates into the satellite’s derived
observed properties such that the total relative velocities and orbital
angular momenta are also larger for all four satellites relative to the
HST+sats M31 tangential velocity zero-point (see Table 2 and Appendix A).
Comparing results in Figures 2 and 3 (see also Table 3), it is clear that a
more precise measurement of M31 and M33’s PMs is crucial to further reduce the
uncertainty in the mass of M31 with this statistical framework. Results from
HST GO-15658 (M31)/GO-16274 (M33; P.I. S.T. Sohn) will be key to such
improvements as these data are expected to reach a precision three times
smaller than what is currently possible with HST or Gaia.
Satellites | Momentum | Momentum
---|---|---
| | (no vmax)
| Mvir [10${}^{12}\,M_{\odot}$] | Mvir [10${}^{12}\,M_{\odot}$]
M31 HST+sats zero-point
4 sats | $\bm{2.85^{+1.47}_{-0.77}}$ | $2.47^{+1.21}_{-0.61}$
M31 HST+Gaia DR2 zero-point
4 sats | $\bm{3.02^{+1.30}_{-0.69}}$ | $2.77^{+1.25}_{-0.64}$
Table 3: Estimated M31 virial masses using all four satellites. Posterior
means are reported both with and without including $v_{\rm max}$ in the
likelihood analysis. Including $v_{\rm max}$ generally results in masses that
are 10-15% higher than without $v_{\rm max}$. Our preferred masses are given
in bold text. Uncertainties on the total mass of M31 across methods using all
four satellites vary from 23-50%, a significant improvement on the scatter in
recent M31 literature masses, which ranges from approximately $0.7-3\times
10^{12}\,M_{\odot}$.
Figure 3: Same as Figure 2 using the satellite properties
derived with the HST+Gaia weighted average M31 $v_{\rm tan}$ zero-point. The
posterior mean mass for M31 is Mvir $=3.02^{+1.30}_{-0.69}\times
10^{12}\,M_{\odot}$. When the likelihood dependence on $v_{\rm max}$ is removed,
the posterior mean M31 mass is Mvir $=2.77^{+1.25}_{-0.64}\times
10^{12}\,M_{\odot}$.
### 4.3 Dwarf Ellipticals in a Cosmological Context
The Too Big To Fail (TBTF) challenge describes the discrepancy between the
circular velocity profiles of the most massive subhalos in dark matter-only
simulations of LG-like environments (i.e., those subhalos expected to host
luminous satellites) and the observed properties of LG dwarf satellites
(Boylan-Kolchin et al., 2011). Though this was first observed for only MW
satellites, other studies have shown that this phenomenon is also seen around
M31 and even for dwarfs in the field (Tollerud et al., 2014; Garrison-Kimmel
et al., 2014a).
Garrison-Kimmel et al. (2019) is one of several studies (e.g., Fattahi et al.,
2016; Brooks & Zolotov, 2014; Buck et al., 2019) that have shown how including
the evolution of both dark matter and baryons in a high-resolution simulation
of LG-like environments can nearly eliminate the TBTF and missing satellites
challenges. In addition to accounting for more realistic feedback processes,
the two primary factors that relieve these tensions are enhanced tidal
disruption and baryonic mass loss, neither of which can be accurately modeled
if baryons are not included.
There are still a few key outliers for which the circular velocity profiles of
subhalos from simulations and observed dwarfs are not in agreement. However,
for these outliers, the observed circular velocities are higher than their
simulated counterparts; in other words, the opposite of the TBTF challenge is
present. These outliers include the dwarf ellipticals NGC 147 and NGC 185, as
well as the starburst galaxy IC 10 (Garrison-Kimmel et al., 2017, 2019). These
high-density galaxies lack simulated counterparts across different simulation suites,
including FIRE, NIHAO, and APOSTLE. Here, we consider the consequences of
using $v^{\rm obs}_{\rm max}$ in the estimation of M31’s mass in light of this
tension.
In this simple test, we calculate the mass of M31 using just the angular
momentum of all four satellites. The purple curves in Figures 2 and 3 show the
resulting M31 mass estimates when the term describing the normal distribution
centered on $v^{\rm obs}_{\rm max}$ is eliminated from the joint likelihood
function in Eq. 2. In both cases, the posterior mean M31 masses excluding the
dependence on $v^{\rm obs}_{\rm max}$ are lower (see Table 3). The credible
intervals similarly decrease but still overlap with the results quoted in
Sections 4.1 and 4.2 (see Table 3). This suggests that the most likely mass of
M31 given the presence of NGC 147, NGC 185, and IC 10 is $\sim 3\times
10^{12}\,M_{\odot}$, or in other words, it is difficult to reconcile the
presence of galaxies with properties similar to NGC 147, NGC 185, and IC 10
around host halos $\lesssim 3\times 10^{12}\,M_{\odot}$.
These results are supported by the fact that the FIRE simulations examined in
Garrison-Kimmel et al. (2019) only span the range of $0.8-1.54\times
10^{12}\,M_{\odot}$ in virial mass. Based on our results, we would not expect
high-density dwarfs to exist around host galaxies with such small halo masses
– i.e., host galaxies with halo masses below $\sim 3\times 10^{12}\,M_{\odot}$
may not provide the appropriate conditions under which these galaxies form or
evolve into the dwarfs we see today. Additionally, there may also be other
factors at play that lead to the circular velocity discrepancy between
observations and simulations for these dwarf outliers. First, the inferred
maximum circular velocities of these galaxies may be overestimated. The
rotation curves of NGC 147 and NGC 185 in particular have only been measured out to
2-3 kpc (Geha et al., 2010), thus an approximation such as that described in
Section 2.1.4 is necessary to construct circular velocity profiles out to
larger radii. It could also imply that there is a missing piece in our
understanding of galaxy formation at these dwarf mass scales. We encourage
readers to consult Garrison-Kimmel et al. (2019) for additional details.
### 4.4 Limitations of the Method
One limitation of the methodology discussed in P17 and P18 is a low effective
sample size (ESS). Low ESS values occur when there are few samples in the
prior with high importance sampling weights and thus few halo systems that
dominate the posterior PDF. This could add additional sampling noise to the
results discussed in Section 4. Furthermore, this is one reason why we do not
include results from the instantaneous method in this work, as we did in P17
and P18. The instantaneous method often produces ESS values below 10, and thus the resulting credible interval is dominated by the dispersion across host halo properties for those 10 systems, since random samples drawn from high-dimensional spaces (i.e., the instantaneous method must match three parameters for each of four satellites in a 12-dimensional space) tend to lie far apart from one another.
To determine the magnitude of sampling noise using multiple satellites, we
perform a bootstrap analysis by drawing, with replacement, 25 mock catalogs of
the same size from the prior sample described in Section 3.1. These mock
catalogs are then used to estimate the mass of M31 following the methods in Section 3. (Here we use only the HST+sats M31 $v_{\rm tan}$ zero-point, but similar results are expected for any $v_{\rm tan}$ zero-point. We also use the total likelihood function including $v_{\rm max}^{\rm obs}$ as given by Eq. 2.)
The standard deviation of the posterior mean masses for 25 bootstrapped
catalogs is $0.08\times 10^{12}\,M_{\odot}$, confirming that the momentum
method is robust against low ESS and effectively captures the true dispersion
in posterior mean mass.
As a statistical approximation was implemented to combine individual satellite
galaxy posteriors to estimate the mass of the MW in P18, we tested the
validity of the approximation by using the Bayesian framework to estimate the
mass of 100 random halo systems using eight subhalos from the prior sample.
Previously, we found that for 90% of the randomly selected systems, the true
host halo virial mass was recovered within two posterior standard deviations
of the posterior mean (in dex), implying that the statistical approximation
was under-performing.
Since we remove the additional statistical approximation implemented in P18 to
simultaneously constrain the host halo mass with four satellites, we expect to
recover the true virial host halo mass for 95% of randomly selected systems in
the prior using this method if it is robust. For these mock tests, we randomly
selected 100 halo systems and assign measurement errors on subhalo properties
that are equivalent to the median of the observed data listed in Table 2
(i.e., 10 km/s on $v_{\rm max}^{\rm obs}$, 20% uncertainties on $r^{\rm obs}$
and $v_{\rm tot}^{\rm obs}$, 30% uncertainty on $j^{\rm obs}$). Applying our
methods, we find that the host virial mass is recovered within two posterior
standard deviations of the mean for 96/100 randomly selected systems, implying
that our M31 virial mass estimates in Section 4 are robust (i.e., our credible
regions appropriately capture the true mass). We note that in this process,
we have excluded any likelihood weights corresponding to the random halo and
any of its progenitors to ensure that the most highly weighted halos are not
the tested halo itself (or the tested halo in previous snapshots of
IllustrisTNG-Dark, since repeated systems are allowed within our prior).
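The recovery test above reduces to a coverage fraction; a minimal sketch with hypothetical mock numbers:

```python
def coverage_fraction(true_logm, post_means, post_sds, k=2.0):
    """Fraction of mock systems whose true log10 mass falls within k posterior
    standard deviations (in dex) of the posterior mean."""
    hits = sum(abs(t - m) <= k * s
               for t, m, s in zip(true_logm, post_means, post_sds))
    return hits / len(true_logm)

# Toy numbers: 4 of these 5 mock systems are recovered within 2 sigma.
frac = coverage_fraction(
    true_logm=[12.0, 12.3, 12.5, 11.8, 12.9],
    post_means=[12.1, 12.2, 12.4, 12.0, 12.2],
    post_sds=[0.10, 0.15, 0.12, 0.12, 0.20],
)
```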
Finally, one must consider the irreducible uncertainty associated with cosmic
variance, or the imperfect correlation between subhalo properties and host
halo mass. In P17 and P18 we quantified how well the uncertainty due to cosmic
variance is captured by the credible intervals on the posterior mean M31
masses, finding that our methods accurately encompassed and even overestimated
the uncertainty due to cosmic variance. Here, we repeat a similar exercise
where a set of 25 halo systems are randomly selected from the prior and
assigned measurement errors equivalent to those reported for the observed data
corresponding to the satellites in our sample (see prior paragraph). For this
exercise, we specifically choose systems where all four subhalos are located
within 300 kpc of their host halo so the assigned measurement uncertainties
accurately reflect the observed data.
We then calculate the host halo mass for all 25 systems and the root mean
square (rms) error of the posterior log halo mass compared to the true log
halo mass. This is compared to the average of posterior standard deviations
(avg. $\sigma_{\rm post}$). The ratio of the rms error to the avg.
$\sigma_{\rm post}$ deviations is $\rm\frac{rms}{avg.\,\sigma_{post}}$=0.62,
confirming that cosmic variance is indeed accurately captured within our
analysis framework. However, this value is smaller than that reported in P17
where we found $\rm\frac{rms}{avg.\,\sigma_{post}}$=0.87. The decrease from
0.87 to 0.62 is likely due to a smaller ESS when matching the properties of
four satellites as opposed to just one satellite as in P17. The value
$\rm\frac{rms}{avg.\,\sigma_{post}}$=0.62 can be interpreted to mean that our quoted uncertainties are conservative, over-accounting for cosmic variance by $\sim$38% relative to the actual rms errors.
## 5 Discussion
### 5.1 Comparison to Literature and Previous Work
Figure 4: A selection of literature M31 mass estimates from the last two
decades. The method by which masses were determined is marked on the left and
points are grouped by symbols with the same color. Literature masses have been
converted to $M_{200}$ assuming NFW profiles and errors typically correspond
to 68% confidence intervals. If only statistical errors are reported, labels
are marked with an asterisk. Results from this work are given by black symbols
outlined in salmon. The two mass estimates using all four satellites including
$v_{\rm max}$ are shaded in grey. The extents of the gray regions span
$M_{200}=[1.60,3.81]\times 10^{12}\,M_{\odot}$ or Mvir $=[2.08,4.32]\times
10^{12}\,M_{\odot}$.
Nearly a dozen different techniques have been used to estimate the total mass
of M31, the MW, and the combined mass of the LG over the last two decades. In
Figure 4, we illustrate a selection of literature masses (Evans et al., 2000;
Evans & Wilkinson, 2000; Kafle et al., 2018; Courteau & van den Bergh, 1999;
Tamm et al., 2012; Phelps et al., 2013; Chemin et al., 2009; Sofue, 2015;
Fardal et al., 2013; Dey et al., 2022; Fardal et al., 2006; Ibata et al.,
2004; Watkins et al., 2010; Galleti et al., 2006; Lee et al., 2008; Perrett et
al., 2002; Veljanoski et al., 2013; Diaz et al., 2014; Peñarrubia et al.,
2014; Zhai et al., 2020; Peñarrubia et al., 2016; Villanueva-Domingo et al.,
2021; Carlesi et al., 2022; Hayashi & Chiba, 2014b; Patel et al., 2017b;
Tollerud et al., 2012; Côté et al., 2000) to provide context for the results
of this work.
For Figure 4, literature masses have been converted to $M_{200}$ assuming NFW
profiles and the mass-concentration relation for field halos from Klypin et
al. (2011), similar to Figure 1 of Wang et al. (2020). Here, $M_{200}$ refers
to the mass enclosed within $R_{200}$, the radius inside which the density is
200 times the critical density of the Universe. Uncertainties correspond to
the 68% errors, which are taken directly from the literature for a majority of
cases. In some situations (e.g., Carlesi et al., 2022), original errors, such
as the first and third quartiles, are adopted. For Phelps et al. (2013), 95%
errors are converted to 68% errors assuming the errors are Gaussian in linear
space. We note with an asterisk the literature masses where only statistical
errors are reported. We caution that the total error is underestimated (i.e.,
observational and systematic uncertainties are not included) for these
literature masses and therefore should not be directly compared to the
precision of other mass estimates, such as those from this work.
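An $M_{\rm vir}\rightarrow M_{200}$ conversion of this kind can be sketched for a single NFW halo. This is illustrative only: we assume $\Delta_{\rm vir}=330$ with respect to the mean matter density and fix the concentration by hand instead of using the Klypin et al. (2011) mass-concentration relation.

```python
import math

def g(x):
    """NFW mass profile shape: g(x) = ln(1+x) - x/(1+x)."""
    return math.log(1.0 + x) - x / (1.0 + x)

def mvir_to_m200(m_vir, c_vir, delta_vir=330.0, h=0.6774, omega_m=0.3089):
    """Convert Mvir (overdensity delta_vir w.r.t. the mean matter density,
    an assumption) to M200 (200x critical) for an NFW halo of concentration
    c_vir, by bisecting for the radius where the mean enclosed density
    equals 200 rho_crit."""
    rho_crit = 277.5 * h**2                # [Msun / kpc^3]
    rho_m = omega_m * rho_crit
    r_vir = (3.0 * m_vir / (4.0 * math.pi * delta_vir * rho_m)) ** (1.0 / 3.0)
    r_s = r_vir / c_vir

    def mean_density(r):
        m_enc = m_vir * g(r / r_s) / g(c_vir)
        return 3.0 * m_enc / (4.0 * math.pi * r**3)

    lo, hi = 0.05 * r_vir, 10.0 * r_vir    # mean density falls with radius
    for _ in range(200):
        r = 0.5 * (lo + hi)
        if mean_density(r) > 200.0 * rho_crit:
            lo = r
        else:
            hi = r
    return m_vir * g(r / r_s) / g(c_vir)

# Example: our HST+sats posterior mean mass with an assumed concentration of 10.
m200 = mvir_to_m200(2.85e12, c_vir=10.0)
```

Because this virial definition corresponds to a lower effective overdensity than 200 times critical, $R_{200}<R_{\rm vir}$ and $M_{200}<M_{\rm vir}$, consistent with the gray-region bounds quoted above.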
In Figure 4, our results fall into the “satellite phenomenology” category and
are marked as black symbols outlined in salmon (see Table 3). The gray shaded
regions encompass the two results we find using all four satellites and each
of the M31 $v_{\rm tan}$ zero points. Our masses encompass the region spanning
$M_{200}=[1.60,3.81]\times 10^{12}\,M_{\odot}$ or Mvir $=[2.08,4.32]\times
10^{12}\,M_{\odot}$.
Our posterior mean masses are most consistent with the results of Villanueva-
Domingo et al. (2021); Zhai et al. (2020); Hayashi & Chiba (2014a); Fardal et
al. (2013); and Phelps et al. (2013), who have used machine learning (blue

symbols), LG dynamics (cyan symbols), satellite phenomenology (salmon
symbols), stellar streams (pink symbols), and the numerical action method
(purple symbol). Of these, the results from Villanueva-Domingo et al. (2021)
and Phelps et al. (2013) also reach similar precision to our results.
In particular, it is especially interesting to compare the precision and
values of our new results with those from P17. Previously by considering only
M33, we found Mvir $=1.37^{+1.39}_{-0.75}\times 10^{12}\,M_{\odot}$ using the
HST+sats M31 $v_{\rm tan}$ zero-point and the Illustris-Dark simulation,
whereas upon expanding the sample size to include three additional satellites
and upgrading to IllustrisTNG-Dark, we find Mvir $=2.85^{+1.47}_{-0.77}\times
10^{12}\,M_{\odot}$ (HST+sats). Though the posterior mean masses differ as a
function of satellite sample size, they are consistent with one another within
the credible intervals. Comparing the precision of these values, using only
M33 yields an uncertainty of $\sim$50-120%, while four satellites yield only
$\sim$23-50% (regardless of which M31 $v_{\rm tan}$ zero-point is adopted). We
conclude that quadrupling the satellite sample yields a reduction of more than
$\sim$50% in the uncertainty on the mass of M31. (The systematic offset
between results from Illustris-Dark and IllustrisTNG-Dark is only $\sim$5%;
see Appendix A for details.)
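The quoted percent uncertainties follow directly from the asymmetric credible intervals; a quick arithmetic check, with masses in units of $10^{12}\,M_{\odot}$ taken from the text (HST+sats zero-point in both cases):

```python
def frac_err(m, err_up, err_dn):
    """Lower and upper percent uncertainties from asymmetric errors."""
    return 100.0 * err_dn / m, 100.0 * err_up / m

# P17: M33 only -> roughly 55-101% (the full ~50-120% range spans both zero-points)
lo1, up1 = frac_err(1.37, 1.39, 0.75)
# This work: four satellites -> roughly 27-52%
lo4, up4 = frac_err(2.85, 1.47, 0.77)
```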
### 5.2 Local Group Mass
The mass of M31 is intimately tied to the dynamical history of the LG and our
understanding of the LG in a cosmological context. Often, the mass of the MW,
M31, or both are used to identify analogs in cosmological simulations with
which the assembly of the LG is studied. Orbital modeling has also been used
to understand the accretion history and trajectory of substructure in the LG,
however, the assumed masses of the MW and M31 are one of the primary causes
for large uncertainties (e.g., Patel et al., 2017a; Li et al., 2021; Battaglia
et al., 2022). Understanding what fraction of the LG’s mass is attributed to
M31 is crucial to such studies and therefore, we briefly discuss how the
results presented in this analysis compare with recent LG mass estimates.
One of the most notable methods used to estimate the combined mass of the LG
is the Timing Argument (TA) based on the fact that the MW and M31 are
currently approaching one another, having overcome the effects of cosmic
expansion during the early Universe due to their strong gravitational
attraction. While the TA has been revised over time as more precise data have
become available for LG galaxies, literature masses for the LG have varied
between $2.5-5\times 10^{12}\,M_{\odot}$ through the 2010s.
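The classical TA described above reduces to the parametric Kepler solution for a radial orbit. The sketch below solves it for illustrative round numbers (MW-M31 separation $\sim$770 kpc, radial velocity $\sim -110$ km s$^{-1}$, age 13.8 Gyr); it ignores the LMC, the cosmological constant, and cosmic bias discussed next, and the input values are approximations rather than the data used by the cited studies.

```python
import numpy as np
from scipy.optimize import brentq

G = 4.30091e-6           # gravitational constant in kpc (km/s)^2 / Msun
KPC_KMS_TO_GYR = 0.9778  # 1 kpc/(km/s) expressed in Gyr

def timing_argument_mass(r_kpc, v_rad_kms, t_gyr):
    """Total MW+M31 mass from the classical Timing Argument (radial orbit).

    Parametric solution: r = a(1 - cos eta), t = sqrt(a^3/GM)(eta - sin eta),
    v = sqrt(GM/a) sin(eta)/(1 - cos eta). Solve v*t/r for the eccentric
    anomaly eta on (pi, 2*pi) (approaching branch), then recover a and M.
    """
    t = t_gyr / KPC_KMS_TO_GYR  # time in kpc/(km/s)
    target = v_rad_kms * t / r_kpc
    f = lambda eta: (np.sin(eta) * (eta - np.sin(eta))
                     / (1.0 - np.cos(eta)) ** 2 - target)
    eta = brentq(f, np.pi, 2.0 * np.pi - 1e-4)
    a = r_kpc / (1.0 - np.cos(eta))          # semi-major axis, kpc
    return a**3 * (eta - np.sin(eta)) ** 2 / (G * t**2)

m_lg = timing_argument_mass(770.0, -110.0, 13.8)  # ~ a few x 1e12 Msun
```

With these inputs the pure TA lands near $4\times 10^{12}\,M_{\odot}$, inside the $2.5$-$5\times 10^{12}\,M_{\odot}$ range of literature values quoted above.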
Since then several studies have taken the TA one step further by including the
influence of the LMC given its substantial mass (at least $\sim$ 10% the mass
of the MW; Peñarrubia et al., 2016; Benisty et al., 2022; Chamberlain et al.,
2022). The LMC is expected to displace the MW’s disk and inner halo over time,
and impart a reflex motion on the MW (e.g. Gómez et al., 2015; Garavito-
Camargo et al., 2019; Petersen & Peñarrubia, 2020; Cunningham et al., 2020;
Garavito-Camargo et al., 2021). Furthermore, Petersen & Peñarrubia (2021)
recently measured the motion of the inner MW halo relative to the outer halo,
or “travel velocity”, confirming that the gravitational influence of the LMC
is critical to the dynamics of the LG. As such, these effects are important to
consider in the context of the TA, which has traditionally only included the
MW and M31.
Peñarrubia et al. (2016) included the effect of the LMC by assuming that the
MW and LMC could be treated as a two-point system and that M31 is in orbit
around the MW-LMC barycenter. By modeling the three galaxies and other Local
Volume galaxies as a dynamic system, they simultaneously fit for distances and
velocities of all galaxies while leaving the LMC’s mass as a free parameter in
a Bayesian fashion. In doing so, individual masses are derived for each of the
MW, M31 (see Fig. 4), the LMC, and the total LG mass for which they find
$2.64^{+0.42}_{-0.38}\times 10^{12}\,M_{\odot}$.
Benisty et al. (2022) revisited the TA allowing for a cosmological constant
and including the influence of the LMC. All galaxies are considered as
extended mass distributions rather than as point masses as in Peñarrubia et
al. (2016). The TA including the LMC yields a mass of $5.6^{+1.6}_{-1.2}\times
10^{12}\,M_{\odot}$, approximately 10% lower than the resulting LG mass when
only the pure TA (i.e., no LMC) is computed. Accounting for cosmic bias and
scatter in addition to the influence of the LMC reduces the LG mass to
$3.4^{+1.4}_{-1.1}\times 10^{12}\,M_{\odot}$.
Chamberlain et al. (2022) reevaluate the TA including the measured travel
velocity in the equations of motion describing the two-body MW-M31 system.
They use three different sets of PM and distance data for M31, including data
that overlaps with the observed data used in this work. They find LG masses of
$3.98^{+0.63}_{-0.47}\times 10^{12}\,M_{\odot}$ using an M31 distance from van
der Marel & Guhathakurta (2008) and the HST+sats M31 PM;
$4.05^{+0.51}_{-0.34}\times 10^{12}\,M_{\odot}$ using a distance from Li et
al. (2021) and the HST+sats M31 PM; and finally, an LG mass of
$4.54^{+0.77}_{-0.56}\times 10^{12}\,M_{\odot}$ using the distance from Li et
al. (2021) and Gaia eDR3 PM from Salomon et al. (2021). Chamberlain et al.
(2022) do not account for a cosmological constant and cosmic bias like Benisty
et al. (2022), yet both results consistently find that including the influence
of the LMC yields an LG mass that is approximately 10-20% lower than the pure
TA with only the MW and M31 with respect to each of their methods.
All three TA studies including the influence of the LMC neglect the influence
of M33, the third most massive galaxy in the LG, and the most massive
satellite of M31. This is because the M31 reflex motion owing to the passage
of M33 is unknown and, consequently, a travel velocity has yet to be measured.
If M33 is truly on first infall as predicted by Patel et al. (2017a), the M31
reflex motion is expected to be small relative to the MW reflex motion,
however, these predictions are based on M31 masses smaller ($1.5-2\times
10^{12}\,M_{\odot}$) than those reported in this work. Therefore, it will be
necessary to revisit the orbital history of M33 with a massive M31 to predict
an accurate M31 reflex motion (Patel et al., in prep.). Preliminary results
following the methodology of Patel et al. (2017a) indicate that for a $3\times
10^{12}\,M_{\odot}$ M31, M33 passes through pericenter $\sim$4 Gyr ago at a
distance of $\sim$100-200 kpc. Once a prediction for the M31 reflex motion
exists, this value or a measurement of the M31 disk travel velocity can be
incorporated into the TA as in Chamberlain et al. (2022).
Our M31 masses are in agreement with the LG masses found in Chamberlain et al.
(2022) if we assume a MW mass of $0.96\times 10^{12}\,M_{\odot}$ (P18). Adding
our M31 masses to previous MW mass results, we find a total LG mass of $\sim
4\times 10^{12}\,M_{\odot}$. Outside of the TA method, other studies have used
cosmological simulations, machine learning, and LG dynamics to constrain the
total mass of the LG. A selection of these masses has been compiled in Table 4
of Chamberlain et al. (2022) and Fig. 4 of Benisty et al. (2022). Assuming our
adopted P18 mass for the MW, the results from this work are most consistent
with McLeod et al. (2017), Zhai et al. (2020), and Lemos et al. (2021), who
find an LG mass of $\gtrsim 4\times 10^{12}\,M_{\odot}$.
### 5.3 M31 Stellar Mass Fraction
Figure 5: The median stellar mass–halo mass relation (SMHM) from the Universe
Machine (Behroozi et al., 2019) is illustrated in black. The gray shaded
region encompasses the 1$\sigma$ variance around the median SMHM relation.
Square data points with error bars reflect the observed positions of M31
adopting $M_{*}=10^{11}\,M_{\odot}$ and the results listed in Table 3. The
stellar mass fraction corresponding to the results presented here is
consistent with the upper limits of the SMHM relation at fixed halo mass.
The stellar mass–halo mass (SMHM) relation, which describes the
correlation between galaxy stellar mass and dark matter halo mass, is often
used to place galaxies in a cosmological context (e.g., Moster et al., 2013;
Behroozi et al., 2013; Brook et al., 2014; Garrison-Kimmel et al., 2014b,
2017; Behroozi et al., 2019). Previously, M31 has been known to lie far
outside the scatter in the SMHM relation (e.g., McGaugh & van Dokkum, 2021),
so here, we use the halo masses reported in this work to place M31 on the SMHM
relation.
Figure 5 shows the SMHM relation from the Universe Machine (Behroozi et al.,
2019) for a range of halo masses corresponding to the approximate extents of
our prior sample. The gray shaded region encompasses the scatter around the
median SMHM as well as the associated statistical uncertainties. The observed
stellar mass–halo mass ratio is derived from our results assuming
$M_{*}=10^{11}\,M_{\odot}$ (Tamm et al., 2012; Sick et al., 2015).
Uncertainties on $M_{*}/M_{h}$ include an additional 0.1 dex uncertainty on
$M_{*}$ to account for the systematic error in converting luminosity to
stellar mass and to most accurately reflect the uncertainties for different
methods of modeling M31’s stellar mass. Our results are consistent with the
upper end of the SMHM relation, suggesting that a high mass ($\geq 3\times
10^{12}\,M_{\odot}$) is cosmologically most favorable (i.e., our HST+Gaia
result; red square).
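The stellar mass fraction and its error bar can be sketched by combining the 0.1 dex stellar-mass uncertainty with the halo-mass credible interval in log space. The quadrature combination below is an illustrative assumption about how these errors were merged, not the exact procedure behind Figure 5.

```python
import numpy as np

def log_mass_fraction(mstar, mhalo, mh_err_up, mh_err_dn, sig_logmstar=0.1):
    """log10(M*/Mh) with asymmetric 1-sigma errors.

    The *upper* error on the ratio comes from the *lower* halo-mass error
    (a smaller halo raises M*/Mh), and vice versa; the 0.1 dex stellar-mass
    uncertainty is added in quadrature.
    """
    logr = np.log10(mstar / mhalo)
    err_up = np.hypot(sig_logmstar, np.log10(mhalo / (mhalo - mh_err_dn)))
    err_dn = np.hypot(sig_logmstar, np.log10((mhalo + mh_err_up) / mhalo))
    return logr, err_up, err_dn

# HST+Gaia result: Mvir = 3.02 (+1.30/-0.69) x 1e12 Msun, with M* = 1e11 Msun
logr, up, dn = log_mass_fraction(1.0e11, 3.02e12, 1.30e12, 0.69e12)
```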
### 5.4 Reconciling a High M31 Mass and Observed M31 Properties
Our statistical framework is intentionally designed to minimize the criteria
that determine which host halos are used as draws from the prior sample. As
discussed in Section 3.1, draws from the prior sample are only restricted by
the minimum virial mass of host halos at $z\approx 0$, selected as all halos
with a minimum mass of $10^{10}\,M_{\odot}$. The only other criterion applied
to host halos themselves is that their peak circular velocity, or $v_{\rm
max}$, is less than 250 km s$^{-1}$.
We set the upper $v_{\rm max}$ limit to 250 km s$^{-1}$ because this is the
approximate peak of M31’s observed rotation curve (Corbelli et al., 2010). In
practice, this is a generous upper limit since IllustrisTNG-Dark only follows
the evolution of dark matter, and therefore just the dark matter halo’s
contribution to the observed rotation curve is relevant. Based on model
decompositions of M31’s rotation curve (Patel et al., 2017a), we approximate
that the peak velocity of M31’s dark matter halo is $200\pm 20$ km s$^{-1}$, where
that the peak velocity of M31’s dark matter halo is $200\pm 20$ km s-1, where
the uncertainty in part depends on whether the halo has been adiabatically
contracted or not. Given this $v_{\rm max}$, we select only those host halos
in the prior sample with $v_{\rm max}=180$-$220$ km s$^{-1}$ and find Mvir $=2.52\pm
0.70\times 10^{12}\,M_{\odot}$, illustrating a fairly tight correlation
between $v_{\rm max}$ and Mvir. This is similar to the results reported in
Section 4 confirming that even a high mass M31, as reported in this work, can
be reconciled with observations of the M31 rotation curve.
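The $v_{\rm max}$-selection check works as sketched below. The mock catalog here (a rough $M_{\rm vir}\propto v_{\rm max}^{3}$ scaling with 0.15 dex log-normal scatter) is a hypothetical stand-in for the IllustrisTNG-Dark host sample, so the normalization of the recovered mass is illustrative; only the selection logic mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mock stand-in for the IllustrisTNG-Dark host catalog (illustrative only):
# a rough Mvir ~ vmax^3 scaling with 0.15 dex log-normal scatter.
vmax = rng.uniform(100.0, 250.0, 50_000)             # km/s
log_mvir = (12.3 + 3.0 * np.log10(vmax / 200.0)
            + rng.normal(0.0, 0.15, vmax.size))
mvir = 10.0 ** log_mvir                              # Msun

# Keep hosts whose vmax matches M31's inferred dark-matter peak velocity,
# 200 +/- 20 km/s, and summarize their virial masses.
sel = (vmax >= 180.0) & (vmax <= 220.0)
mean, std = mvir[sel].mean(), mvir[sel].std()
```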
Another approach to reconciling our M31 mass results with the observed
properties of the M31 system is to compare the radial velocity dispersion of
all known M31 satellites to the radial velocity dispersion calculated using
the first 30 subhalos in each host halo representing a draw from the prior
distribution.
Radial velocities for individual M31 satellites are taken primarily from
Collins et al. (2013) and our Table 2. These values are corrected to the
M31-centric frame by subtracting the M31 radial velocity. Taking the standard
deviation of these M31-centric radial velocities gives $\sigma_{rad}\approx
122\pm 3$ km s$^{-1}$, which implies Mvir $=2.88\pm 1.24\times
10^{12}\,M_{\odot}$. The uncertainty on Mvir is substantial because the
distribution of Mvir at a given $\sigma_{rad}$ is broad.
The virial mass implied by the radial velocity dispersion of
$\sim$30 M31 satellites is therefore consistent with, but much broader than
the results reported in Section 4. We note that mass estimation techniques
relying on Jeans modeling (e.g., Watkins et al., 2010) often give much smaller
mass uncertainties, which implies that non-equilibrium host halos and the
anisotropy of satellite systems play a key role in the IllustrisTNG-Dark
simulations.
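The dispersion-based comparison amounts to a frame shift and a standard deviation, as sketched below. The satellite velocities listed are hypothetical placeholders (the real values come from Collins et al. 2013 and Table 2), and the M31 systemic velocity of about $-300$ km s$^{-1}$ is an approximate, assumed value.

```python
import numpy as np

# Hypothetical heliocentric line-of-sight velocities (km/s) for a handful of
# M31 satellites; actual values come from Collins et al. (2013) and Table 2.
v_helio = np.array([-179.0, -299.6, -193.9, -348.0, -480.0, -397.3, -188.5])
v_m31_sys = -300.0  # approximate M31 heliocentric systemic velocity, km/s

# Shift to the M31-centric frame, then take the sample standard deviation.
v_m31c = v_helio - v_m31_sys
sigma_rad = v_m31c.std(ddof=1)  # km/s; the text finds ~122 +/- 3 for ~30 satellites
```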
### 5.5 Implications for the M31 System
Large uncertainties in the masses of the MW and M31 are one of the main causes
of significant uncertainties in modeling the backward orbital trajectories of
halo substructures (see D’Souza & Bell, 2022, for example). We have previously
demonstrated how the orbits of the LMC and M33, the most massive satellites
with respect to the MW and M31, change when the masses of their host galaxies
are varied in Patel et al. (2017a). In particular, we presented a modified
orbital history for M33 where it is statistically expected to be on first
infall. These conclusions are based on an M31 mass of $[1.5,2]\times
10^{12}\,M_{\odot}$ and distances, LOS velocities, and PMs similar to those
derived from the M31 HST+sats zero-point used in this work.
Other studies have adopted positions and velocities derived from different
sets of PMs and/or assumed various M31 (and M33) masses and modeling
techniques. For example, Watkins et al. (2013) have tabulated a census of
orbital properties for all known M31 satellites assuming M31 $M_{\rm
vir}=1.1-1.9\times 10^{12}\,M_{\odot}$ even in the absence of full 6D phase
space data for satellites. This provides a uniform reference point for future
orbit comparisons using a higher range of M31 masses.
Additionally, McConnachie et al. (2009) find an M33 orbital history where M33
has a close passage around M31 at 2-3 Gyr ago, however, their adopted M31 mass
is $2.5\times 10^{12}\,M_{\odot}$ and their M33 mass is only 30% of the Patel
et al. (2017a) adopted value. Similarly, Putman et al. (2009) find an orbit in
agreement with that of McConnachie et al. (2009) modeling M33 as a point mass
and using a total mass of $2\times 10^{12}\,M_{\odot}$ for M31. Most recently,
Tepper-García et al. (2020) adopted gravitational potentials and masses nearly
identical to those used in Patel et al. (2017a) and found an orbital history
consistent with the Patel et al. (2017a) high mass M31 ($2\times
10^{12}\,M_{\odot}$) results where M33 makes a 50-100 kpc pericentric passage
around M31 at $\sim$ 6-6.5 Gyr ago.
Sohn et al. (2020) presented the first orbital solutions using HST PMs for NGC 147
and NGC 185, however, they assumed low M31 masses relative to those presented
in this work (they adopted the same values as in Patel et al., 2017a). These
results disproved the suggestion that these two galaxies are a binary system
and found evidence for the formation of NGC 147’s tidal tails from a close
encounter with M31. A higher M31 mass would make it even less likely that NGC
147 and NGC 185 are a binary pair since the tidal field owing to M31 would be
even stronger. These orbits will be revisited in future work.
It is clearly evident that the mass of M31 (and M33) is key to accurate
interpretations of the accretion history of the entire M31 system, potentially
even into the M33 satellites regime as has been shown for LMC satellites
(e.g., Patel et al., 2020; Garavito-Camargo et al., 2019). In future work, we
will quantify the consequences of a high M31 mass on M31’s satellite
population. It will be especially interesting to see whether evidence of group
infall increases or decreases assuming a higher M31 mass with respect to the
results of Watkins et al. (2013).
It is worth noting that a higher M31 mass would also affect interpretations of
its merger history, the formation of prominent stellar structures, and the
evolution of globular clusters (see McConnachie et al., 2018, for a census of
M31 substructure). Of recent interest in particular is whether M31 recently
underwent a major (e.g., D’Souza & Bell, 2018; Hammer et al., 2018) or a minor
(e.g., Ibata et al., 2004; Font et al., 2006; Fardal et al., 2006) merger.
While addressing these topics is beyond the scope of this work, it will be
crucial to understand the impact of a high mass on the entire M31 system as
more data becomes available from HST, Gaia, DESI, JWST, and more.
## 6 Summary and Conclusions
Building on the Bayesian framework previously used to estimate the mass of the
MW with multiple satellite galaxies (P18), here we have used the
IllustrisTNG-100-Dark simulation in combination with observed properties of
four M31 satellite galaxies (M33, NGC 185, NGC 147, and IC 10) to constrain
the total mass of M31. These four M31 satellites are the only M31 satellites
with available PMs from HST and/or Gaia. Throughout this work, we present two
sets of results for observed satellite properties derived from HST-based M31
PMs (denoted as HST+sats; van der Marel et al., 2012a) and HST+Gaia-based M31
PMs (denoted as HST+Gaia; van der Marel et al., 2019).
We emphasize the use of dynamical satellite properties, such as orbital
angular momentum, to constrain the mass of host galaxies, as we have shown
such techniques to be the most robust against varying orbital configurations
(see P17). The main conclusions of this work are summarized below.
1.
Using the modified Bayesian framework outlined in §3 and the orbital angular
momentum for four satellite galaxies, we find two preferred estimates for M31:
Mvir $=2.85^{+1.47}_{-0.77}\times 10^{12}\,M_{\odot}$ (using the M31 HST+sats
$v_{\rm tan}$ zero-point) and Mvir $=3.02^{+1.30}_{-0.69}\times
10^{12}\,M_{\odot}$ (HST+Gaia $v_{\rm tan}$ zero-point; see §4).
2.
Including $v_{\rm max}$ in the likelihood function results in masses that are
10-15% higher than without $v_{\rm max}$. Our results are robust against the
Too Big To Fail challenge; however, this suggests that M31 must be at least as
massive as $\sim 3\times 10^{12}\,M_{\odot}$ to host satellites with
properties similar to NGC 147, NGC 185, and IC 10, which are typically
outliers in simulations following the evolution of both dark matter and
baryons (see §4.3).
3.
For both M31 $v_{\rm tan}$ zero-points, uncertainties range from 23-50%
compared to 50-120% when only one satellite (M33) is used (P17). By using a
sample with four times more satellites (see §5.1), the uncertainties on M31’s
mass are more than halved. When 6D phase space information is available for
all 35 M31 satellites through HST GO-15902 (PI: D. Weisz), HST GO-16273 (PI:
S.T. Sohn), and JWST GTO 1305 (PI: R. van der Marel), we expect to reach
uncertainties below 20% (see also Li et al., 2017).
4.
Comparing to literature M31 masses, the precision and numerical values we find
are closest to studies using LG dynamics, the numerical action method, and
machine learning. The advantage of our method is that it does not require
strong assumptions about the properties of M31’s halo or about the orbital
configuration of satellites. Of the M31 masses compiled from analyses that do
account for both observed measurement errors and systematic uncertainties, our
results are amongst those with the highest precision (see §5.1).
5.
Our M31 mass results are consistent with recently revised estimates for the
total mass of the LG ($4-4.5\times 10^{12}\,M_{\odot}$), assuming the mass of
the MW is $\approx 10^{12}\,M_{\odot}$ (P18; see §5.2).
6.
Our observed M31 stellar mass–halo mass fractions are consistent with the
upper limits of the median SMHM relation at fixed halo mass, indicating that a
high halo mass ($\geq 4\times 10^{12}\,M_{\odot}$) is cosmologically most
favorable (see §5.3).
7.
Our M31 masses are consistent with the observed properties of M31, including
the observed rotation curve and the radial velocity dispersion of nearly all
M31 satellites, implying that a high mass M31 ($\sim 3\times
10^{12}\,M_{\odot}$) is plausible given velocity measurements for stars in the
M31 disk and in tracer populations (see §5.4).
8.
The implications of an M31 mass $>2.5\times 10^{12}\,M_{\odot}$ are expected
to be substantial, particularly in the context of orbital modeling for
substructures throughout M31’s halo. This will be the subject of future work
(see §5.5).
To utilize the abundance of satellite phase space information that has been
published for MW satellites since P18 and to prepare for more M31 satellite
phase space information, it will be necessary to move beyond the combination
of large-volume cosmological simulations and Bayesian statistics to
further constrain the precise masses of the MW and M31.
We have discussed how low satellite statistics in N-body simulations result in
higher relative uncertainties in the inferred host halo masses. Neural
networks overcome the problem of low statistics by training on satellite
properties for a wide range of halos instead of only those most similar to the
MW or M31, yielding greater constraining power. These modern methods also
accommodate correlated satellite properties (e.g., infalling subhalos will
have satellites of their own) without biasing the inferred host halo masses.
Neural networks also have the advantage of self-consistently including
arbitrary additional information (e.g., larger-scale environment or gas
rotation velocities) to improve halo mass recovery. This will be the topic of
upcoming work (Hayati et al., in prep.) and the results are expected to
simultaneously improve our understanding of the MW and M31’s mass in addition
to constraining other galaxy and halo properties.
## Acknowledgements
E.P. acknowledges support from HST GO-15902 and HST AR-16628. Support for
GO-15902 and AR-16628 was provided by NASA through a grant from the Space
Telescope Science Institute, which is operated by the Association of
Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
K.S.M. acknowledges funding from the European Research Council under the
European Union’s Horizon 2020 research and innovation program (ERC Grant
Agreement No. 101002652). E.P. thanks Tony Sohn for producing the proper
motion Monte Carlo files for this analysis and Alessandro Savino for
graciously sharing distance moduli in advance of publication. E.P. would also like to
thank Mike Boylan-Kolchin and Peter Behroozi for informative discussions which
have helped improve the quality and context of this work. Additionally,
Gurtina Besla, Roeland van der Marel, Dan Weisz, Nico Garavito-Camargo, Laura
Watkins, and Katie Chamberlain have provided generous feedback on this
manuscript.
## References
* Battaglia et al. (2022) Battaglia, G., Taibi, S., Thomas, G. F., & Fritz, T. K. 2022, A&A, 657, A54, doi: 10.1051/0004-6361/202141528
* Behroozi et al. (2019) Behroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143, doi: 10.1093/mnras/stz1182
* Behroozi et al. (2013) Behroozi, P. S., Wechsler, R. H., & Conroy, C. 2013, ApJ, 770, 57, doi: 10.1088/0004-637X/770/1/57
* Benisty et al. (2022) Benisty, D., Vasiliev, E., Evans, N. W., et al. 2022, ApJ, 928, L5, doi: 10.3847/2041-8213/ac5c42
* Boylan-Kolchin et al. (2011) Boylan-Kolchin, M., Bullock, J. S., & Kaplinghat, M. 2011, MNRAS, 415, L40, doi: 10.1111/j.1745-3933.2011.01074.x
* Boylan-Kolchin et al. (2013) Boylan-Kolchin, M., Bullock, J. S., Sohn, S. T., Besla, G., & van der Marel, R. P. 2013, ApJ, 768, 140, doi: 10.1088/0004-637X/768/2/140
* Brook et al. (2014) Brook, C. B., Di Cintio, A., Knebe, A., et al. 2014, ApJ, 784, L14, doi: 10.1088/2041-8205/784/1/L14
* Brooks & Zolotov (2014) Brooks, A. M., & Zolotov, A. 2014, ApJ, 786, 87, doi: 10.1088/0004-637X/786/2/87
* Brunthaler et al. (2005) Brunthaler, A., Reid, M. J., Falcke, H., Greenhill, L. J., & Henkel, C. 2005, Science, 307, 1440, doi: 10.1126/science.1108342
* Brunthaler et al. (2007) Brunthaler, A., Reid, M. J., Falcke, H., Henkel, C., & Menten, K. M. 2007, A&A, 462, 101, doi: 10.1051/0004-6361:20066430
* Bryan & Norman (1998) Bryan, G. L., & Norman, M. L. 1998, ApJ, 495, 80, doi: 10.1086/305262
* Buck et al. (2019) Buck, T., Macciò, A. V., Dutton, A. A., Obreja, A., & Frings, J. 2019, MNRAS, 483, 1314, doi: 10.1093/mnras/sty2913
* Busha et al. (2011) Busha, M. T., Marshall, P. J., Wechsler, R. H., Klypin, A., & Primack, J. 2011, ApJ, 743, 40, doi: 10.1088/0004-637X/743/1/40
* Carlesi et al. (2022) Carlesi, E., Hoffman, Y., & Libeskind, N. I. 2022, MNRAS, 513, 2385, doi: 10.1093/mnras/stac897
* Cautun et al. (2014) Cautun, M., Frenk, C. S., van de Weygaert, R., Hellwing, W. A., & Jones, B. J. T. 2014, MNRAS, 445, 2049, doi: 10.1093/mnras/stu1849
* Chamberlain et al. (2022) Chamberlain, K., Price-Whelan, A. M., Besla, G., et al. 2022, arXiv e-prints, arXiv:2204.07173. https://arxiv.org/abs/2204.07173
* Chemin et al. (2009) Chemin, L., Carignan, C., & Foster, T. 2009, ApJ, 705, 1395, doi: 10.1088/0004-637X/705/2/1395
* Collins et al. (2013) Collins, M. L. M., Chapman, S. C., Rich, R. M., et al. 2013, ApJ, 768, 172, doi: 10.1088/0004-637X/768/2/172
* Corbelli et al. (2010) Corbelli, E., Lorenzoni, S., Walterbos, R., Braun, R., & Thilker, D. 2010, A&A, 511, A89, doi: 10.1051/0004-6361/200913297
* Corbelli & Salucci (2000) Corbelli, E., & Salucci, P. 2000, MNRAS, 311, 441, doi: 10.1046/j.1365-8711.2000.03075.x
* Corbelli & Schneider (1997) Corbelli, E., & Schneider, S. E. 1997, ApJ, 479, 244, doi: 10.1086/303849
* Côté et al. (2000) Côté, P., Mateo, M., Sargent, W. L. W., & Olszewski, E. W. 2000, ApJ, 537, L91, doi: 10.1086/312766
* Courteau & van den Bergh (1999) Courteau, S., & van den Bergh, S. 1999, AJ, 118, 337, doi: 10.1086/300942
* Cunningham et al. (2020) Cunningham, E. C., Garavito-Camargo, N., Deason, A. J., et al. 2020, ApJ, 898, 4, doi: 10.3847/1538-4357/ab9b88
* Deason et al. (2021) Deason, A. J., Erkal, D., Belokurov, V., et al. 2021, MNRAS, 501, 5964, doi: 10.1093/mnras/staa3984
* Dey et al. (2022) Dey, A., Najita, J. R., Koposov, S. E., et al. 2022, arXiv e-prints, arXiv:2208.11683. https://arxiv.org/abs/2208.11683
* Diaz et al. (2014) Diaz, J. D., Koposov, S. E., Irwin, M., Belokurov, V., & Evans, N. W. 2014, MNRAS, 443, 1688, doi: 10.1093/mnras/stu1210
* D’Souza & Bell (2018) D’Souza, R., & Bell, E. F. 2018, Nature Astronomy, 2, 737, doi: 10.1038/s41550-018-0533-x
* D’Souza & Bell (2022) —. 2022, MNRAS, 512, 739, doi: 10.1093/mnras/stac404
* Eadie et al. (2017) Eadie, G. M., Springford, A., & Harris, W. E. 2017, ApJ, 835, 167, doi: 10.3847/1538-4357/835/2/167
* Evans & Wilkinson (2000) Evans, N. W., & Wilkinson, M. I. 2000, MNRAS, 316, 929, doi: 10.1046/j.1365-8711.2000.03645.x
* Evans et al. (2000) Evans, N. W., Wilkinson, M. I., Guhathakurta, P., Grebel, E. K., & Vogt, S. S. 2000, ApJ, 540, L9, doi: 10.1086/312861
* Fardal et al. (2006) Fardal, M. A., Babul, A., Geehan, J. J., & Guhathakurta, P. 2006, MNRAS, 366, 1012, doi: 10.1111/j.1365-2966.2005.09864.x
* Fardal et al. (2013) Fardal, M. A., Weinberg, M. D., Babul, A., et al. 2013, MNRAS, 434, 2779, doi: 10.1093/mnras/stt1121
* Fattahi et al. (2016) Fattahi, A., Navarro, J. F., Sawala, T., et al. 2016, arXiv e-prints, arXiv:1607.06479. https://arxiv.org/abs/1607.06479
* Font et al. (2006) Font, A. S., Johnston, K. V., Guhathakurta, P., Majewski, S. R., & Rich, R. M. 2006, AJ, 131, 1436, doi: 10.1086/499564
* Fritz et al. (2018) Fritz, T. K., Battaglia, G., Pawlowski, M. S., et al. 2018, A&A, 619, A103, doi: 10.1051/0004-6361/201833343
* Gaia Collaboration et al. (2018) Gaia Collaboration, Helmi, A., van Leeuwen, F., et al. 2018, A&A, 616, A12, doi: 10.1051/0004-6361/201832698
* Galleti et al. (2006) Galleti, S., Federici, L., Bellazzini, M., Buzzoni, A., & Fusi Pecci, F. 2006, A&A, 456, 985, doi: 10.1051/0004-6361:20065309
* Garavito-Camargo et al. (2019) Garavito-Camargo, N., Besla, G., Laporte, C. F. P., et al. 2019, ApJ, 884, 51, doi: 10.3847/1538-4357/ab32eb
* Garavito-Camargo et al. (2021) —. 2021, ApJ, 919, 109, doi: 10.3847/1538-4357/ac0b44
* Garrison-Kimmel et al. (2014a) Garrison-Kimmel, S., Boylan-Kolchin, M., Bullock, J. S., & Kirby, E. N. 2014a, MNRAS, 444, 222, doi: 10.1093/mnras/stu1477
* Garrison-Kimmel et al. (2014b) Garrison-Kimmel, S., Boylan-Kolchin, M., Bullock, J. S., & Lee, K. 2014b, MNRAS, 438, 2578, doi: 10.1093/mnras/stt2377
* Garrison-Kimmel et al. (2017) Garrison-Kimmel, S., Bullock, J. S., Boylan-Kolchin, M., & Bardwell, E. 2017, MNRAS, 464, 3108, doi: 10.1093/mnras/stw2564
* Garrison-Kimmel et al. (2019) Garrison-Kimmel, S., Hopkins, P. F., Wetzel, A., et al. 2019, MNRAS, 487, 1380, doi: 10.1093/mnras/stz1317
* Geha et al. (2010) Geha, M., van der Marel, R. P., Guhathakurta, P., et al. 2010, ApJ, 711, 361, doi: 10.1088/0004-637X/711/1/361
* Gibbons et al. (2014) Gibbons, S. L. J., Belokurov, V., & Evans, N. W. 2014, MNRAS, 445, 3788, doi: 10.1093/mnras/stu1986
* Gómez et al. (2015) Gómez, F. A., Besla, G., Carpintero, D. D., et al. 2015, ApJ, 802, 128, doi: 10.1088/0004-637X/802/2/128
* González et al. (2013) González, R. E., Kravtsov, A. V., & Gnedin, N. Y. 2013, ApJ, 770, 96, doi: 10.1088/0004-637X/770/2/96
* Hammer et al. (2018) Hammer, F., Yang, Y. B., Wang, J. L., et al. 2018, MNRAS, 475, 2754, doi: 10.1093/mnras/stx3343
* Hayashi & Chiba (2014a) Hayashi, K., & Chiba, M. 2014a, ApJ, 789, 62, doi: 10.1088/0004-637X/789/1/62
* Hayashi & Chiba (2014b) —. 2014b, ApJ, 789, 62, doi: 10.1088/0004-637X/789/1/62
* Huchra et al. (1999) Huchra, J. P., Vogeley, M. S., & Geller, M. J. 1999, ApJS, 121, 287, doi: 10.1086/313194
* Hunter (2007) Hunter, J. D. 2007, Computing in Science Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Ibata et al. (2004) Ibata, R., Chapman, S., Ferguson, A. M. N., et al. 2004, MNRAS, 351, 117, doi: 10.1111/j.1365-2966.2004.07759.x
* Jones et al. (2001–) Jones, E., Oliphant, T., Peterson, P., et al. 2001–, SciPy: Open source scientific tools for Python. http://www.scipy.org/
* Kafle et al. (2018) Kafle, P. R., Sharma, S., Lewis, G. F., Robotham, A. S. G., & Driver, S. P. 2018, MNRAS, 475, 4043, doi: 10.1093/mnras/sty082
* Klypin et al. (2011) Klypin, A. A., Trujillo-Gomez, S., & Primack, J. 2011, ApJ, 740, 102, doi: 10.1088/0004-637X/740/2/102
* Lee et al. (2008) Lee, M. G., Hwang, H. S., Kim, S. C., et al. 2008, ApJ, 674, 886, doi: 10.1086/526396
* Lemos et al. (2021) Lemos, P., Jeffrey, N., Whiteway, L., et al. 2021, Phys. Rev. D, 103, 023009, doi: 10.1103/PhysRevD.103.023009
* Lewis et al. (2023) Lewis, G. F., Brewer, B. J., Mackey, D., et al. 2023, MNRAS, 518, 5778, doi: 10.1093/mnras/stac3325
* Li et al. (2021) Li, S., Riess, A. G., Busch, M. P., et al. 2021, ApJ, 920, 84, doi: 10.3847/1538-4357/ac1597
* Li et al. (2017) Li, Z.-Z., Jing, Y. P., Qian, Y.-Z., Yuan, Z., & Zhao, D.-H. 2017, ApJ, 850, 116, doi: 10.3847/1538-4357/aa94c0
* Mackey et al. (2019) Mackey, D., Lewis, G. F., Brewer, B. J., et al. 2019, Nature, 574, 69, doi: 10.1038/s41586-019-1597-1
* Marconi et al. (2015) Marconi, M., Coppola, G., Bono, G., et al. 2015, ApJ, 808, 50, doi: 10.1088/0004-637X/808/1/50
* Marinacci et al. (2018) Marinacci, F., Vogelsberger, M., Pakmor, R., et al. 2018, MNRAS, 480, 5113, doi: 10.1093/mnras/sty2206
* McConnachie (2012) McConnachie, A. W. 2012, AJ, 144, 4, doi: 10.1088/0004-6256/144/1/4
* McConnachie & Venn (2020a) McConnachie, A. W., & Venn, K. A. 2020a, Research Notes of the American Astronomical Society, 4, 229, doi: 10.3847/2515-5172/abd18b
* McConnachie & Venn (2020b) —. 2020b, AJ, 160, 124, doi: 10.3847/1538-3881/aba4ab
* McConnachie et al. (2009) McConnachie, A. W., Irwin, M. J., Ibata, R. A., et al. 2009, Nature, 461, 66, doi: 10.1038/nature08327
* McConnachie et al. (2018) McConnachie, A. W., Ibata, R., Martin, N., et al. 2018, ApJ, 868, 55, doi: 10.3847/1538-4357/aae8e7
* McGaugh & van Dokkum (2021) McGaugh, S. S., & van Dokkum, P. 2021, Research Notes of the American Astronomical Society, 5, 23, doi: 10.3847/2515-5172/abe1ba
* McLeod et al. (2017) McLeod, M., Libeskind, N., Lahav, O., & Hoffman, Y. 2017, J. Cosmology Astropart. Phys, 2017, 034, doi: 10.1088/1475-7516/2017/12/034
* McQuinn et al. (2017) McQuinn, K. B. W., Boyer, M. L., Mitchell, M. B., et al. 2017, ApJ, 834, 78, doi: 10.3847/1538-4357/834/1/78
* Moster et al. (2013) Moster, B. P., Naab, T., & White, S. D. M. 2013, MNRAS, 428, 3121, doi: 10.1093/mnras/sts261
* Nagarajan et al. (2022) Nagarajan, P., Weisz, D. R., & El-Badry, K. 2022, ApJ, 932, 19, doi: 10.3847/1538-4357/ac69e6
* Naiman et al. (2018) Naiman, J. P., Pillepich, A., Springel, V., et al. 2018, MNRAS, 477, 1206, doi: 10.1093/mnras/sty618
* Nelson et al. (2018) Nelson, D., Pillepich, A., Springel, V., et al. 2018, MNRAS, 475, 624, doi: 10.1093/mnras/stx3040
* Nelson et al. (2019) Nelson, D., Springel, V., Pillepich, A., et al. 2019, Computational Astrophysics and Cosmology, 6, 2, doi: 10.1186/s40668-019-0028-x
* Oh et al. (2015) Oh, S.-H., Hunter, D. A., Brinks, E., et al. 2015, AJ, 149, 180, doi: 10.1088/0004-6256/149/6/180
* Pace et al. (2022) Pace, A. B., Erkal, D., & Li, T. S. 2022, arXiv e-prints, arXiv:2205.05699. https://arxiv.org/abs/2205.05699
* Patel et al. (2017b) Patel, E., Besla, G., & Mandel, K. 2017b, MNRAS, 468, 3428, doi: 10.1093/mnras/stx698
* Patel et al. (2018) Patel, E., Besla, G., Mandel, K., & Sohn, S. T. 2018, ApJ, 857, 78, doi: 10.3847/1538-4357/aab78f
* Patel et al. (2017a) Patel, E., Besla, G., & Sohn, S. T. 2017a, MNRAS, 464, 3825, doi: 10.1093/mnras/stw2616
* Patel et al. (2020) Patel, E., Kallivayalil, N., Garavito-Camargo, N., et al. 2020, ApJ, 893, 121, doi: 10.3847/1538-4357/ab7b75
* Peñarrubia et al. (2016) Peñarrubia, J., Gómez, F. A., Besla, G., Erkal, D., & Ma, Y.-Z. 2016, MNRAS, 456, L54, doi: 10.1093/mnrasl/slv160
* Peñarrubia et al. (2014) Peñarrubia, J., Ma, Y.-Z., Walker, M. G., & McConnachie, A. 2014, MNRAS, 443, 2204, doi: 10.1093/mnras/stu879
* Perrett et al. (2002) Perrett, K. M., Bridges, T. J., Hanes, D. A., et al. 2002, AJ, 123, 2490, doi: 10.1086/340186
* Petersen & Peñarrubia (2020) Petersen, M. S., & Peñarrubia, J. 2020, MNRAS, 494, L11, doi: 10.1093/mnrasl/slaa029
* Petersen & Peñarrubia (2021) —. 2021, Nature Astronomy, 5, 251, doi: 10.1038/s41550-020-01254-3
* Phelps et al. (2013) Phelps, S., Nusser, A., & Desjacques, V. 2013, ApJ, 775, 102, doi: 10.1088/0004-637X/775/2/102
* Pillepich et al. (2018) Pillepich, A., Nelson, D., Hernquist, L., et al. 2018, MNRAS, 475, 648, doi: 10.1093/mnras/stx3112
* Planck Collaboration et al. (2016) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13, doi: 10.1051/0004-6361/201525830
* Putman et al. (2009) Putman, M. E., Peek, J. E. G., Muratov, A., et al. 2009, ApJ, 703, 1486, doi: 10.1088/0004-637X/703/2/1486
* Salomon et al. (2021) Salomon, J. B., Ibata, R., Reylé, C., et al. 2021, MNRAS, 507, 2592, doi: 10.1093/mnras/stab2253
* Salomon et al. (2016) Salomon, J.-B., Ibata, R. A., Famaey, B., Martin, N. F., & Lewis, G. F. 2016, MNRAS, 456, 4432, doi: 10.1093/mnras/stv2865
* Savino et al. (2022) Savino, A., Weisz, D. R., Skillman, E. D., et al. 2022, arXiv e-prints, arXiv:2206.02801. https://arxiv.org/abs/2206.02801
* Sick et al. (2015) Sick, J., Courteau, S., Cuillandre, J.-C., et al. 2015, in Galaxy Masses as Constraints of Formation Models, ed. M. Cappellari & S. Courteau, Vol. 311, 82–85, doi: 10.1017/S1743921315003440
* Simon (2018) Simon, J. D. 2018, ApJ, 863, 89, doi: 10.3847/1538-4357/aacdfb
* Slipher (1913) Slipher, V. M. 1913, Lowell Observatory Bulletin, 2, 56
* Sofue (2015) Sofue, Y. 2015, PASJ, 67, 75, doi: 10.1093/pasj/psv042
* Sohn et al. (2012) Sohn, S. T., Anderson, J., & van der Marel, R. P. 2012, ApJ, 753, 7, doi: 10.1088/0004-637X/753/1/7
* Sohn et al. (2020) Sohn, S. T., Patel, E., Fardal, M. A., et al. 2020, ApJ, 901, 43, doi: 10.3847/1538-4357/abaf49
* Springel et al. (2018) Springel, V., Pakmor, R., Pillepich, A., et al. 2018, MNRAS, 475, 676, doi: 10.1093/mnras/stx3304
* Tamm et al. (2012) Tamm, A., Tempel, E., Tenjes, P., Tihhonova, O., & Tuvikene, T. 2012, A&A, 546, A4, doi: 10.1051/0004-6361/201220065
* Tepper-García et al. (2020) Tepper-García, T., Bland-Hawthorn, J., & Li, D. 2020, MNRAS, 493, 5636, doi: 10.1093/mnras/staa317
* The Astropy Collaboration et al. (2018) The Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, ArXiv e-prints. https://arxiv.org/abs/1801.02634
* Tollerud et al. (2014) Tollerud, E. J., Boylan-Kolchin, M., & Bullock, J. S. 2014, MNRAS, 440, 3511, doi: 10.1093/mnras/stu474
* Tollerud et al. (2012) Tollerud, E. J., Beaton, R. L., Geha, M. C., et al. 2012, ApJ, 752, 45, doi: 10.1088/0004-637X/752/1/45
* van der Marel et al. (2002) van der Marel, R. P., Alves, D. R., Hardy, E., & Suntzeff, N. B. 2002, AJ, 124, 2639, doi: 10.1086/343775
* van der Marel et al. (2012b) van der Marel, R. P., Besla, G., Cox, T. J., Sohn, S. T., & Anderson, J. 2012b, ApJ, 753, 9, doi: 10.1088/0004-637X/753/1/9
* van der Marel et al. (2012a) van der Marel, R. P., Fardal, M., Besla, G., et al. 2012a, ApJ, 753, 8, doi: 10.1088/0004-637X/753/1/8
* van der Marel et al. (2019) van der Marel, R. P., Fardal, M. A., Sohn, S. T., et al. 2019, ApJ, 872, 24, doi: 10.3847/1538-4357/ab001b
* van der Marel & Guhathakurta (2008) van der Marel, R. P., & Guhathakurta, P. 2008, ApJ, 678, 187, doi: 10.1086/533430
* van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science Engineering, 13, 22, doi: 10.1109/MCSE.2011.37
* Veljanoski et al. (2013) Veljanoski, J., Ferguson, A. M. N., Mackey, A. D., et al. 2013, ApJ, 768, L33, doi: 10.1088/2041-8205/768/2/L33
* Villanueva-Domingo et al. (2021) Villanueva-Domingo, P., Villaescusa-Navarro, F., Genel, S., et al. 2021, arXiv e-prints, arXiv:2111.14874. https://arxiv.org/abs/2111.14874
* Wang et al. (2020) Wang, W., Han, J., Cautun, M., Li, Z., & Ishigaki, M. N. 2020, Science China Physics, Mechanics, and Astronomy, 63, 109801, doi: 10.1007/s11433-019-1541-6
* Watkins et al. (2010) Watkins, L. L., Evans, N. W., & An, J. H. 2010, MNRAS, 406, 264, doi: 10.1111/j.1365-2966.2010.16708.x
* Watkins et al. (2013) Watkins, L. L., Evans, N. W., & van de Ven, G. 2013, MNRAS, 430, 971, doi: 10.1093/mnras/sts634
* Wolf et al. (2010) Wolf, J., Martinez, G. D., Bullock, J. S., et al. 2010, Monthly Notices of the Royal Astronomical Society, 406, 1220, doi: 10.1111/j.1365-2966.2010.16753.x
* Zhai et al. (2020) Zhai, M., Guo, Q., Zhao, G., Gu, Q., & Liu, A. 2020, ApJ, 890, 27, doi: 10.3847/1538-4357/ab6986
## Appendix A Estimating M31’s Mass with M33 Using IllustrisTNG-Dark vs. Illustris-Dark
Figure 6: Posterior distributions for the virial mass of M31 using the
properties of M33 (see Table 2) and the IllustrisTNG-Dark simulation. Left:
The mass resulting from the momentum (black) method using the HST+sats M31
$v_{\rm tan}$ zero-point. Posterior PDFs are also shown for the maximum
circular velocity of the satellite’s dark matter halo ($v_{\rm max}$; red) and
the total orbital angular momentum ($j$; purple). Right: The estimated mass of
M31 using the HST+Gaia M31 $v_{\rm tan}$ zero-point. The latter results are
significantly higher since the 3D velocity of M33 relative to M31 increased
from the old (HST+sats) to the new (HST+Gaia) M31 PM measurements (see Table
2). Overall, IllustrisTNG-Dark results are systematically lower by 5% as
compared to the same results using the Illustris-Dark simulation (P17).
As this work employs IllustrisTNG, we estimate the mass of M31 using only the
properties of M33 and the IllustrisTNG-Dark simulation to see how the results
compare to those using Illustris-1-Dark. For consistency, we adopt the same
values for the observed M33 data as in P17, rather than the revised values in
Table 2.
We choose a prior sample from IllustrisTNG-Dark using the same criteria as in
P17. This includes choosing only those host halos that contain a massive
satellite analog with $v_{\rm max}>70\rm\,km\,s^{-1}$, that resides within the
virial radius of its host galaxy at $z\approx 0$, and that has a minimal
subhalo mass of $10^{10}\,M_{\odot}$ at $z\approx 0$. We build the prior
sample statistics by choosing all halo systems with massive satellite analogs
satisfying these properties from snapshots corresponding to $z=0-0.26$ (or
snapshots 80-99 in IllustrisTNG-Dark). This yields 24,964 halo systems that
constitute the prior sample. The original prior sample in P17 contained 19,653
halos.
Using this prior sample, we calculate likelihood functions as in Eqs. 7-10 of P17 and marginalize over $M_{\rm vir}$ using importance sampling. For the exact M33 properties adopted in P17 (including the original angular momentum value), we find an M31 mass of $M_{\rm vir}=1.30^{+1.40}_{-0.66}\times 10^{12}\,M_{\odot}$. These results are illustrated in Figure 6. In P17, we reported an M31 mass of $M_{\rm vir}=1.37^{+1.39}_{-0.75}\times 10^{12}\,M_{\odot}$
for the momentum method. The IllustrisTNG-Dark results are systematically lower by $\sim$5% than the Illustris-Dark results (P17); however, they are consistent within the corresponding credible intervals. A preliminary analysis of M31 stellar mass analogs, chosen as primary halos whose stellar masses via the Moster et al. (2013) abundance matching relationship lie in the range $5-10\times 10^{10}\,M_{\odot}$, in both Illustris-Dark and IllustrisTNG-Dark suggests that this could be an artifact of a systematic position offset between these M31 analogs and their satellites located at $<400$ kpc with 1:10-1:100 mass ratios. In particular, the median position offset between satellites and their hosts in IllustrisTNG-Dark is also $\sim$5% lower than for their counterparts in Illustris-Dark. These offsets, their origins, and their implications will be discussed in detail in Chamberlain et al., in preparation.
Since both M31 and M33 have a new set of independent PM measurements based on
Gaia DR2 data, and updated distances, we also compute the estimated mass of
M31 using the combination of HST+Gaia M31 phase space information and the
VLBA+Gaia M33 phase space information. The corresponding observed properties
are listed in Table 2, and the resulting M31 mass is $M_{\rm vir}=1.87^{+2.20}_{-0.79}\times 10^{12}\,M_{\odot}$. These results are illustrated in the right panel of Figure 6. Given the increase in M33’s relative position and velocity with the HST+Gaia M31 $v_{\rm tan}$ zero-point, it is unsurprising that these results also increase, as illustrated by comparing the purple curves in the left and right panels of Figure 6.
## Appendix B Derivation of Importance Weights for Multiple Satellites
The joint posterior distribution of a halo’s log mass $m\equiv\log_{10}M_{\rm
vir}$ and the latent properties $\bm{X}=\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm
sat}}\\}$ of its $N_{\rm sat}$ subhalos, conditional on the measurements
$\bm{D}=\\{\bm{d}_{1},\ldots,\bm{d}_{N_{\rm sat}}\\}$, is given by Bayes’
Theorem (Eq. 3):
$P(m,\bm{X}|\,\bm{D})\propto P(\bm{D}|\,m,\bm{X})\times P(m,\bm{X}|\,\bm{C})$ (B1)
We invoke the assumption that, conditional on the true latent properties, the probability distribution of the measurements has no additional dependence on $m$; this implies that the measurement errors are independent of $m$. Furthermore, we assume that the measurements of each satellite’s properties, conditional on the true latent values of those properties, are mutually independent of the other satellites’ properties and their measurements. These assumptions allow us to write the likelihood term as:
$P(\\{\bm{d}_{1},\ldots,\bm{d}_{N_{\rm
sat}}\\}|\,m,\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm sat}}\\})=\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}_{s}).$ (B2)
Expectations of functions of the log mass, $f(m)$, with respect to the
posterior distribution (Eq. 6) can be written as:
$\mathbb{E}[f(m)|\,\bm{D}]=\frac{\int f(m)\left[\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}_{s})\right]\times
P(m,\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm sat}}\\}|\,\bm{C})\,{\rm d}m\,{\rm
d}x_{1},\ldots,{\rm d}x_{N_{\rm sat}}}{\int\left[\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}_{s})\right]\times
P(m,\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm sat}}\\}|\,\bm{C})\,{\rm d}m\,{\rm
d}x_{1},\ldots,{\rm d}x_{N_{\rm sat}}},$ (B3)
where the denominator is the normalization term. To derive the self-normalized
importance weights, we approximate both integrals as Monte Carlo sums over $n$
independent draws from the prior, i.e. the halo systems from the simulation,
indexed as $j=1,\ldots,n$,
$m^{j},\\{\bm{x}_{1}^{j},\ldots,\bm{x}^{j}_{N_{\rm sat}}\\}\sim
P(m,\\{\bm{x}_{1},\ldots,\bm{x}_{N_{\rm sat}}\\}|\,\bm{C}).$ (B4)
The posterior expectation is estimated from these samples as:
$\mathbb{E}[f(m)|\,\bm{D}]\approx\frac{\sum_{j=1}^{n}f(m^{j})\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}^{j}_{s})}{\sum_{j=1}^{n}\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}^{j}_{s})}=\sum_{j=1}^{n}f(m^{j})\,w_{j}$ (B5)
where
$w_{j}\equiv\frac{\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}^{j}_{s})}{\sum_{i=1}^{n}\prod_{s=1}^{N_{\rm
sat}}P(\bm{d}_{s}|\,\bm{x}^{i}_{s})}$ (B6)
are the self-normalized importance weights, as in Eq. 9. Note that
$\sum_{j=1}^{n}w_{j}=1$.
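As a concrete sketch of Eqs. B5-B6, the self-normalized importance weights reduce to a few lines of numpy. The prior draws, measurements, and Gaussian error model below are toy stand-ins for the simulation-based quantities used in the text, not the actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy prior sample: n halos, each with log-mass m and one latent property
# per satellite (here N_sat = 2), standing in for the simulation prior.
n, n_sat = 10_000, 2
m = rng.normal(12.0, 0.4, size=n)                       # log10 M_vir
x = m[:, None] + rng.normal(0.0, 0.3, size=(n, n_sat))  # latent satellite props

# Hypothetical measurements d_s with Gaussian errors sigma_s.
d = np.array([12.1, 11.9])
sigma = np.array([0.2, 0.2])

# Per-halo log-likelihood: product over satellites of P(d_s | x_s) (Eq. B2).
log_like = -0.5 * np.sum(((d - x) / sigma) ** 2, axis=1)

# Self-normalized importance weights (Eq. B6); subtract the max for stability.
w = np.exp(log_like - log_like.max())
w /= w.sum()
assert np.isclose(w.sum(), 1.0)

# Posterior expectation of f(m) = m (Eq. B5).
posterior_mean_m = np.sum(m * w)
```

In practice one would monitor the effective sample size $1/\sum_j w_j^2$ to check that a few halos do not dominate the weights.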
# Posterior Sampling for Continuing Environments
Wanqiao Xu (Stanford University, Department of Management Science and Engineering; Email: <EMAIL_ADDRESS>), Shi Dong (Stanford University, Department of Electrical Engineering; Email: <EMAIL_ADDRESS>), Benjamin Van Roy (Stanford University)
###### Abstract
We develop an extension of posterior sampling for reinforcement learning
(PSRL) that is suited for a continuing agent-environment interface and
integrates naturally into agent designs that scale to complex environments.
The approach, continuing PSRL, maintains a statistically plausible model of
the environment and follows a policy that maximizes expected
$\gamma$-discounted return in that model. At each time, with probability
$1-\gamma$, the model is replaced by a sample from the posterior distribution
over environments. For a choice of discount factor that suitably depends on
the horizon $T$, we establish an $\tilde{O}(\tau S\sqrt{AT})$ bound on the
Bayesian regret, where $S$ is the number of environment states, $A$ is the
number of actions, and $\tau$ denotes the reward averaging time, which is a
bound on the duration required to accurately estimate the average reward of
any policy. Our work is the first to formalize and rigorously analyze the
resampling approach with randomized exploration.
## 1 Introduction
A reinforcement learning (RL) agent is faced with the task of interacting with
an unknown environment while trying to maximize the total reward accrued over
time. The environment is commonly modeled as a Markov Decision Process (MDP),
with the transition dynamics hidden from the agent. A core challenge in RL is
how to balance the fundamental exploration-exploitation tradeoff: when taking exploratory actions, the
agent learns better estimates of the underlying dynamics, but exploiting the
knowledge obtained so far may result in higher immediate return.
A growing literature builds on Thompson sampling [1, 2] to develop randomized
approaches to exploration [3, 4, 5]. While these approaches have proved to be
effective, they have largely been limited to episodic environments. In
particular, the modus operandi involves randomly sampling a new policy, which
aims to maximize expected return in a statistically plausible model of the
environment, immediately before the start of each episode, and following that
policy throughout the episode. For example, bootstrapped DQN [5] maintains an
ensemble that approximates the posterior distribution of the optimal action
value function $Q_{*}$ and, before each $\ell$th episode, samples a random
element $\hat{Q}_{\ell}$. Then, a greedy policy with respect to
$\hat{Q}_{\ell}$ is executed.
While work on randomized approaches to exploration has focused on episodic
environments, [6, 7] represent two exceptions that treat continuing
environments. The algorithm proposed in [6] resamples an environment from the
environment posterior each time either of the two following criteria holds: 1)
the time elapsed since the last resampling exceeds the interval between the
two most recent resamplings, and 2) the number of visits to any state-action
pair has doubled since the previous resampling. The latter criterion plays an
essential role but is not viable when operating in a complex environment, for
example, addressing an intractably large state space and approximating a
distribution of action value functions using a neural network [5, 8]. In
particular, it is not clear how to efficiently track visitation counts, and
even if that were possible, the counts could be irrelevant since it may even
be rare to visit any individual state more than once. In order to address
large state spaces, [7] considers simply doubling the duration between each
successive pair of resampling times. Although the resulting algorithm
circumvents maintaining visitation counts, their analysis relies heavily on
technical assumptions, without which the regret bound grows linearly with
time. It remains unclear whether a strong performance guarantee is possible
for an approach that simply doubles the duration between successive samples.
We consider an alternative strategy that resamples at each time with a
probability $p$. Here, $p$ is specified as a part of the agent design and
represents how the agent chooses to partition its experience into intervals
that it interprets as trials. With this resampling rule, it is natural to
execute a plan that aims to maximize discounted return with a discount factor
$\gamma=1-p$. Indeed, a simple lemma shows that, with this choice of $\gamma$,
the undiscounted return attained between consecutive resampling times
constitutes an unbiased estimate of the $\gamma$-discounted return of the
policy used. This simple resampling scheme easily integrates into scalable
randomized exploration algorithms. For example, bootstrapped DQN [5] can be
modified to address continuing environments by resampling the action value
function from the prevailing approximate posterior distribution at each time
with probability $p$. As with the original version of bootstrapped DQN, each
executed action is greedy with respect to the most recently sampled action
value function.
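The unbiasedness claim can be checked numerically: truncating a reward stream at an independent geometric time (continue with probability $\gamma$ at each step) yields, on average, exactly the $\gamma$-discounted return. A sketch with an arbitrary illustrative reward sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

# A fixed per-step reward sequence (purely illustrative).
r = np.array([1.0, 0.5, 0.25, 0.8, 0.3] * 20)  # 100 steps

# gamma-discounted return of the sequence.
discounted = np.sum(gamma ** np.arange(len(r)) * r)

# Pseudo-episode length: continue w.p. gamma each step, so H is geometric
# with success probability 1 - gamma and P(H > h) = gamma^h.
n = 200_000
H = rng.geometric(1.0 - gamma, size=n)

# Undiscounted return accumulated over the first H steps of the sequence.
prefix = np.concatenate(([0.0], np.cumsum(r)))
returns = prefix[np.minimum(H, len(r))]

# E[sum_{h<H} r_h] = sum_h P(H > h) r_h = sum_h gamma^h r_h = discounted,
# so returns.mean() should be close to `discounted`.
```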
The idea of using a discount factor in planning is not new. Many theoretical
works consider $\gamma$ as part of the environment, e.g. they directly analyze
$\gamma$-discounted regret [9, 10]. In contrast, we assess agent performance
in terms of undiscounted regret. Thus, while the discount factor $\gamma$
plays a role in agent design, it does not reflect what we view as the
designer’s objective. This viewpoint aligns with empirical work that regards
the discount factor as a tunable hyperparameter [11, 12]. Our analysis shows
that, while resampling with a probability $p=1-\gamma$ and planning with a
$\gamma$-discounted objective does not lead to vanishing per-timestep regret,
this can be accomplished by increasing $\gamma$ over time.
In this paper, we formalize the aforementioned approach to randomized
exploration – both with fixed and decreasing reset probabilities – and provide
a first rigorous analysis, which establishes regret bounds similar to [6] but
with a resampling criterion much simpler and more scalable than what is
proposed in that paper. Interestingly, our analysis is also simpler than that
of [6] because our resampling criterion is policy-independent. Specifically,
we show that for a choice of discount factor that suitably depends on the
horizon $T$, our algorithm, continuing posterior sampling, satisfies an
$\tilde{O}(\tau S\sqrt{AT})$ regret bound, where $S$ is the number of
environment states, $A$ is the number of actions, and $\tau$ denotes the
reward averaging time [13], which is a bound on the duration required to
accurately estimate the average reward of any policy. This regret bound
matches that established by [6], though with their use of “span” replaced by the
reward averaging time $\tau$.
## 2 Problem Formulation
In this section, we begin with an overview of the agent-environment interface
and introduce important notation regarding the environment and agent design.
We define the undiscounted objectives of interest and intermediate planning
objectives at the end of this section.
### 2.1 Environment
We consider the problem of learning to optimize performance through a single
stream of interactions with an unknown environment
$\mathcal{E}=(\mathcal{A},\mathcal{S},\rho)$. Here $\mathcal{A}$ is a finite
action space with cardinality $A$, $\mathcal{S}$ is a finite state space with
cardinality $S$, and $\rho$ is a function that specifies a state transition
probability $\rho(s^{\prime}\mid s,a)$ given a current state $s\in\mathcal{S}$
and action $a\in\mathcal{A}$. Interactions up to time $t$ make up a history
$H_{t}=(S_{0},A_{0},S_{1},A_{1},S_{2},\dots,A_{t-1},S_{t})$, and the agent
selects action $A_{t}$ after observing $S_{t}$. We define the environment and
all random quantities we will consider with respect to a common probability
space $(\Omega,\mathcal{F},\mathbb{P})$. In particular, the environment
$\mathcal{E}$ itself is random, and we use a distribution
$\mathbb{P}(\mathcal{E}\in\cdot)$ to capture the agent designer’s prior belief
over all possible environments. As the history evolves, what can be learned is
represented by the posterior distribution
$\mathbb{P}(\mathcal{E}\in\cdot|H_{t})$. We additionally assume that
$\mathcal{A}$ and $\mathcal{S}$ are deterministic and known, but the
transition probability function $\rho$ is a random variable that the agent
needs to learn. For simplicity, we assume $S_{0}$ is deterministic, but the
same analysis can be easily extended to consider a distribution over initial
states.
### 2.2 Objectives
Reward. We consider the design of an agent to produce desirable outcomes. The
agent’s preferences can be represented by a reward function
$r:\mathcal{S}\times\mathcal{A}\mapsto[0,1]$. After selecting action $A_{t}$
in state $S_{t}$, the agent observes $S_{t+1}$ and receives a deterministic
reward $R_{t+1}=r(S_{t},A_{t})$ bounded in $[0,1]$. We take $r$ to be
deterministic and known for simplicity, but our result easily generalizes to
randomized reward functions.
Policy. The agent specifies its actions via policies. A stochastic policy
$\pi$ can be represented by a probability mass function $\pi(\cdot\mid S_{t})$
that an agent assigns to actions in $\mathcal{A}$ given the current state $S_{t}$.
Regret. Before we formally define the learning objectives of the agent, we
extend the agent state to account for randomized policies. We consider the
notion of an algorithmic state $Z_{t}$ introduced in [14], which captures the
algorithmic randomness at time $t$. An algorithm is a deterministic sequence
$\\{\mu_{t}\mid t=1,2,\dots\\}$ of functions, each mapping the pair
$(H_{t},Z_{t})$ to a policy. At each time step $t$, the algorithm samples a
random algorithmic state $Z_{t}$ and computes a policy
$\pi_{t}=\mu_{t}(H_{t},Z_{t})$. The algorithm then samples actions
$A_{t}\sim\pi_{t}(\cdot\mid S_{t})$ at times $t$. For a policy $\pi$, we
denote the average reward starting at state $s$ as
$\lambda_{\pi,\mathcal{E}}(s)=\liminf_{T\to\infty}\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}\Bigg{|}\mathcal{E},S_{0}=s\right],$
(1)
where the subscript of the expectation indicates that the reward sequence is
realized by following policy $\pi$, and the subscript $\mathcal{E}$ emphasizes
the dependence of the average reward on the environment $\mathcal{E}$. We also
denote the optimal average reward as
$\lambda_{*,\mathcal{E}}(s)=\sup_{\pi}\lambda_{\pi,\mathcal{E}}(s)\quad\forall
s\in\mathcal{S}.$
We consider weakly-communicating Markovian environments, the most general
subclass of MDPs for which finite time regret bounds are plausible. This
assumption also appears in [6, 15]. We give a formal definition below.
###### Definition 2.1.
(weakly-communicating MDP) An MDP is weakly communicating if there exists a
closed set of states, with each state in that set accessible from every other
state in that set under some deterministic stationary policy, plus a possibly
empty set of states which is transient under every policy.
We remark that the optimal average reward $\lambda_{*,\mathcal{E}}(\cdot)$ is
state-independent under weakly-communicating MDPs. Thus, we overload the
notation $\lambda_{*,\mathcal{E}}$ to denote the optimal average reward
$\lambda_{*,\mathcal{E}}(s)$ for all states $s\in\mathcal{S}$. For a policy
$\pi$, we define its regret up to time $T$ to be
$\text{Regret}(T,\pi)\coloneqq\sum_{t=0}^{T-1}\left(\lambda_{*,\mathcal{E}}-R_{t+1}\right).$
(2)
The regret itself is a random variable depending on the random environment
$\mathcal{E}$, the algorithm’s internal random sampling, and random
transitions. We will measure agent performance in terms of regret and its
expected value.
## 3 CPSRL Algorithm
For episodic MDPs, the planning horizon is fixed ahead and known to the agent.
The planning objective is often naturally the cumulative reward over the
finite number of timesteps until the end of each episode. When the environment
is modeled by an MDP with an infinite horizon, planning ahead becomes a
challenge for the agent. One way to address this challenge is to set an
effective finite planning horizon for the agent by maintaining a discount
factor $\gamma\in[0,1)$. Essentially, $\gamma$ dictates the frequency with
which the algorithm resamples an independent environment model used for
planning. Given this discount factor, we divide the original infinite-horizon
learning problem into _pseudo-episodes_ with random lengths, where each
pseudo-episode terminates when the algorithm resamples and computes a new
policy. Concretely, at the beginning of timestep $t=0,1,\dots$, the agent
samples a binary indicator $X_{t}$. If $X_{t}=0$, the agent samples a new
environment $\mathcal{E}$ based on the history $H_{t}$ available at that time,
and marks $t$ as the start of a new pseudo-episode. It then computes a new
policy $\pi$ to follow in this pseudo-episode, and acts according to $\pi$. If
$X_{t}=1$, it continues the current pseudo-episode and follows the most
recently computed policy $\pi$. When $X_{t}\sim\text{Bernoulli}(\gamma)$, one
may interpret $\gamma$ as the _survival probability_ of a pseudo-episode at
timestep $t$. We let $E_{k}$ denote the set of timesteps in the $k$-th pseudo-
episode and $t_{k}$ denote the starting timestep of the $k$-th pseudo-episode.
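A minimal sketch of this pseudo-episode loop follows. The toy dynamics, the posterior stand-in, and the planner below are placeholders of our own, not the paper's algorithm; a real agent would draw from the actual posterior and solve the $\gamma$-discounted planning problem:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 3, 2
gamma, T = 0.95, 1000

# Hypothetical true dynamics, unknown to the agent.
P_true = rng.dirichlet(np.ones(S), size=(A, S))  # P_true[a, s] is a dist over s'

def sample_model_from_posterior(history):
    """Hypothetical stand-in for a draw from P(E in . | H_t); here it just
    returns a random transition model of the same shape."""
    return rng.dirichlet(np.ones(S), size=(A, S))

def plan_policy(model):
    """Hypothetical planner; a real agent would maximize gamma-discounted
    return in `model`. This placeholder returns a random deterministic
    policy (one action per state)."""
    return rng.integers(A, size=S)

s, history, policy = 0, [], None
episode_starts = []
for t in range(T):
    # Resample and replan with probability 1 - gamma (always at t = 0),
    # marking the start of a new pseudo-episode.
    if policy is None or rng.random() >= gamma:
        model = sample_model_from_posterior(history)
        policy = plan_policy(model)
        episode_starts.append(t)
    a = policy[s]
    s_next = rng.choice(S, p=P_true[a, s])
    history.append((s, a, s_next))
    s = s_next
```

The number of pseudo-episodes started, `len(episode_starts)`, concentrates around $(1-\gamma)T+1$, matching the bound on $\mathbb{E}[K]$ used later in the analysis.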
Discounted value and discounted regret. At each timestep, the agent optimizes
a discounted objective with the aforementioned discount factor $\gamma$. To
formally define this objective, we first introduce matrix notations to avoid
conditioning on zero-probability events. For all $s,s^{\prime}\in\mathcal{S}$
and $a\in\mathcal{A}$, let
$P_{ass^{\prime}}=\rho(s^{\prime}\mid s,a).$
For each policy $\pi$, we also define transition probabilities
$P_{\pi ss^{\prime}}=\sum_{a\in\mathcal{A}}\pi(a\mid s)P_{ass^{\prime}}.$
With this notation, we regard $P_{a}$ and $P_{\pi}$ as $S\times S$ stochastic
matrices, and $P_{as}$ as $S$-dimensional vectors. Furthermore, for each
$s\in\mathcal{S}$ and $a\in\mathcal{A}$, we define
$r_{as}=r(s,a),$
and for each policy $\pi$,
$r_{\pi s}=\sum_{a\in\mathcal{A}}\pi(a\mid s)r_{as}.$
Both $r_{a}$ and $r_{\pi}$ can also be viewed as $S$-dimensional vectors.
While rewards represent agent preferences over actions in a single step, the
agent may be interested in optimizing for returns accumulated over multiple
steps. We define the value of a policy $\pi$ in the context of pseudo-episodes
introduced above. For each environment $\mathcal{E}$ and policy $\pi$, we
define the $\gamma$-discounted value function
$V^{\gamma}_{\pi,\mathcal{E}}\in\mathbb{R}^{S}$ of $\pi$ in $\mathcal{E}$ as
$V_{\pi,\mathcal{E}}^{\gamma}\coloneqq\mathbb{E}\left[\sum_{h=0}^{H-1}P_{\pi}^{h}r_{\pi}\mid\mathcal{E}\right]=\mathbb{E}\left[\sum_{h=0}^{\infty}\gamma^{h}P_{\pi}^{h}r_{\pi}\mid\mathcal{E}\right],$
(3)
where the expectation is over the random episode length $H$. Since a pseudo-episode terminates at the first time at which the independently sampled $X_{t}\sim\text{Bernoulli}(\gamma)$ takes value 0, its length $H$ follows a geometric distribution with success probability $1-\gamma$, so that $\mathbb{P}(H>h)=\gamma^{h}$. The second equality above is a direct consequence of this observation. A policy $\pi$ is said to be optimal
for the environment $\mathcal{E}$ if
$V_{\pi,\mathcal{E}}^{\gamma}=\sup_{\pi^{\prime}}V_{\pi^{\prime},\mathcal{E}}^{\gamma}$.
For an optimal policy $\pi$, we also write its value
$V_{*,\mathcal{E}}^{\gamma}(s)\equiv V_{\pi,\mathcal{E}}^{\gamma}(s)$ as the
optimal value. Furthermore, we denote an optimal policy with respect to
$V_{*,\mathcal{E}}^{\gamma}$ for each $\mathcal{E}$ as $\pi^{\mathcal{E}}$,
which will be useful in the analysis. Note that $V_{\pi,\mathcal{E}}^{\gamma}$
satisfies the Bellman equation
$\displaystyle V_{\pi,\mathcal{E}}^{\gamma}=r_{\pi}+\gamma
P_{\pi}V_{\pi,\mathcal{E}}^{\gamma}.$ (4)
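Because Eq. (4) is linear in $V_{\pi,\mathcal{E}}^{\gamma}$, the value of a fixed policy can be computed exactly by solving a linear system, $(I-\gamma P_{\pi})V=r_{\pi}$. A minimal numpy sketch with arbitrary toy values for $P_{\pi}$ and $r_{\pi}$:

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma = 4, 0.9

# Toy policy-induced transition matrix P_pi (rows sum to 1) and reward r_pi.
P_pi = rng.dirichlet(np.ones(S), size=S)
r_pi = rng.uniform(0.0, 1.0, size=S)

# Eq. (4) rearranges to (I - gamma * P_pi) V = r_pi, a linear system.
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Sanity check: the Bellman equation holds.
assert np.allclose(V, r_pi + gamma * P_pi @ V)
```

With rewards in $[0,1]$, every entry of $V$ lies in $[0,1/(1-\gamma)]$, consistent with the discounted objective.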
Reward averaging time. We consider the notion of reward averaging times
$\tau_{\pi,\mathcal{E}}$ of policies introduced in [13] and derive regret
bounds that depend on $\tau_{\pi,\mathcal{E}}$.
###### Definition 3.1.
(reward averaging time) The reward averaging time $\tau_{\pi,\mathcal{E}}$ of
a policy $\pi$ is the smallest value $\tau\in[0,\infty)$ such that
$\left|\mathbb{E}_{\pi}\left[\sum_{t=0}^{T-1}R_{t+1}\mid\mathcal{E},S_{0}=s\right]-T\cdot\lambda_{\pi,\mathcal{E}}(s)\right|\leq\tau,$
for all $T\geq 0$ and $s\in\mathcal{S}$.
When $\pi^{*}$ is an optimal policy for $\mathcal{E}$,
$\tau_{*,\mathcal{E}}\coloneqq\tau_{\pi^{*},\mathcal{E}}$ is equivalent to the
notion of span in [16]. We define $\Omega_{*}$ to be the set of all weakly
communicating MDPs $\mathcal{E}$ and further make the following assumption on
the prior distribution $\mathbb{P}(\mathcal{E}\in\cdot)$. This assumption says
that we focus on weakly communicating MDPs with bounded reward averaging time.
###### Assumption 3.2.
There exists $\tau<\infty$ such that the prior distribution over possible
environments $\mathbb{P}(\mathcal{E}\in\cdot)$ satisfies
$\mathbb{P}\Big{(}\mathcal{E}\in\Omega_{*},\
\tau_{*,\mathcal{E}}\leq\tau\Big{)}=1.$
Below, we restate an important lemma, Lemma 2 in [13], that relates
$\lambda_{\pi,\mathcal{E}}$, $\tau_{\pi,\mathcal{E}}$, and
$V_{\pi,\mathcal{E}}^{\gamma}$.
###### Lemma 3.3.
For all $\pi$, $s\in\mathcal{S}$ and $\gamma\in[0,1)$,
$\left|V_{\pi,\mathcal{E}}^{\gamma}(s)-\frac{\lambda_{\pi,\mathcal{E}}(s)}{1-\gamma}\right|\leq\tau_{\pi,\mathcal{E}}.$
We note again that for weakly communicating $\mathcal{E}\in\Omega_{*}$, the
optimal average reward is state independent, i.e.,
$\lambda_{*,\mathcal{E}}=\lambda_{\pi_{*},\mathcal{E}}(s)$ for all
$s\in\mathcal{S}$. Thus, under Assumption 3.2, we have
$\left|V^{\gamma}_{*,\mathcal{E}}(s)-V^{\gamma}_{*,\mathcal{E}}(s^{\prime})\right|\leq
2\tau_{*,\mathcal{E}}\leq 2\tau$ (5)
almost surely for all $s,s^{\prime}\in\mathcal{S}$.
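For intuition, Definition 3.1 and Lemma 3.3 can both be checked numerically on a toy policy-induced chain. All quantities below are illustrative; the finite horizon scan only lower-bounds $\tau_{\pi,\mathcal{E}}$, which suffices here because the dense random chain mixes quickly:

```python
import numpy as np

rng = np.random.default_rng(0)
S = 3

# Toy policy-induced chain P_pi and reward vector r_pi (illustrative only).
P_pi = rng.dirichlet(np.ones(S), size=S)
r_pi = rng.uniform(size=S)

# Stationary distribution gives the average reward lambda, which is
# state-independent here because the chain is irreducible.
evals, evecs = np.linalg.eig(P_pi.T)
stat = np.real(evecs[:, np.argmax(np.real(evals))])
stat /= stat.sum()
lam = stat @ r_pi

# Definition 3.1: tau bounds |E[sum_{t<T} R_{t+1}] - T * lambda| over T and s.
tau, cum, Pt = 0.0, np.zeros(S), np.eye(S)
for T in range(1, 2000):
    cum = cum + Pt @ r_pi          # cum[s] = E[sum_{t<T} r(S_t) | S_0 = s]
    Pt = Pt @ P_pi
    tau = max(tau, float(np.max(np.abs(cum - T * lam))))

# Lemma 3.3: |V_pi^gamma(s) - lambda / (1 - gamma)| <= tau for any gamma.
gamma = 0.99
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
assert np.all(np.abs(V - lam / (1.0 - gamma)) <= tau + 1e-6)
```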
Discounted regret. Although the regret we eventually hope to analyze is
defined by equation 2, we also consider a discounted version of the regret to
aid our analysis. To analyze the performance of our algorithm over $T$
timesteps, we consider $K=\max\\{k:t_{k}\leq T\\}$ as the number of pseudo-episodes started by time $T$. In our subsequent analysis, we adopt the
convention that $t_{K+1}=T+1$. To get a bound for general $T$, we can always
fill in the rest of the timesteps to make a full pseudo-episode, but the
asymptotic rate stays the same. Moreover, it is easy to see that for all
$\gamma\in[0,1)$, $\mathbb{E}[K]\leq(1-\gamma)T+1$. Given a discount factor
$\gamma\in[0,1)$, we define the $\gamma$-discounted regret up to time $T$ to
be
$\text{Regret}_{\gamma}(T,\pi)\coloneqq\sum_{k=1}^{K}\Delta_{k},$ (6)
where $K=\max\\{k:t_{k}\leq T\\}$, and $\Delta_{k}$ is the regret over
pseudo-episode $k$, defined as
$\Delta_{k}=V_{*,\mathcal{E}}^{\gamma}(s_{k,1})-V_{\pi_{k},\mathcal{E}}^{\gamma}(s_{k,1}),$
with
$V_{*,\mathcal{E}}^{\gamma}=V_{\pi^{*},\mathcal{E}}^{\gamma}=V_{\pi^{\mathcal{E}},\mathcal{E}}^{\gamma}$,
$\pi_{k}\sim\mu_{k}(H_{t_{k}})$, $A_{t}\sim\pi_{k}(\cdot\mid S_{t})$,
$S_{t+1}\sim\rho(\cdot|S_{t},A_{t})$, and $R_{t}=r(S_{t},A_{t},S_{t+1})$ for
$t\in E_{k}$. Note that similar to the regret, the discounted regret is also a
random variable depending on the random environment $\mathcal{E}$, the
algorithm’s internal random sampling, random transitions, and additionally the
random lengths of the pseudo-episodes.
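The bound $\mathbb{E}[K]\leq(1-\gamma)T+1$ is easy to verify by simulating the Bernoulli resampling scheme directly. The following sketch is ours, for illustration only:

```python
import random

def num_pseudo_episodes(T, gamma, rng):
    """Count pseudo-episodes over T steps: a new one starts at t = 1 and
    whenever the indicator X_t = 0, with X_{t+1} ~ Bernoulli(gamma)."""
    k, x = 0, 0
    for _ in range(T):
        if x == 0:                      # resample -> new pseudo-episode
            k += 1
        x = 1 if rng.random() < gamma else 0
    return k

rng = random.Random(0)
T, gamma = 1000, 0.9
trials = 5000
mean_K = sum(num_pseudo_episodes(T, gamma, rng) for _ in range(trials)) / trials

# Exact expectation: the first step always starts an episode, and each of the
# remaining T - 1 steps starts one with probability 1 - gamma.
exact = 1 + (T - 1) * (1 - gamma)       # about 100.9 here
assert exact <= (1 - gamma) * T + 1     # the bound from the text
assert abs(mean_K - exact) < 1.5        # Monte Carlo agreement
```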
Empirical estimates. Finally, we define the empirical transition probabilities
used by the algorithm. Let
$N_{t}(s,a)=\sum_{\tau=1}^{t}\mathbbm{1}\\{(S_{\tau},A_{\tau})=(s,a)\\}$ be
the number of times action $a$ has been sampled in state $s$ up to timestep
$t$. For every pair $(s,a)$ with $N_{t_{k}}(s,a)>0$, the empirical transition
probabilities up to pseudo-episode $k$ are
$\displaystyle\hat{\rho}_{k}(s^{\prime}\mid s,a)$
$\displaystyle=\sum_{\ell=1}^{k-1}\sum_{t\in
E_{\ell}}\frac{\mathbbm{1}\\{(S_{t},A_{t},S_{t+1})=(s,a,s^{\prime})\\}}{N_{t_{k}}(s,a)}$
(7)
for all $s^{\prime}\in\mathcal{S}$. If the pair $(s,a)$ has never been sampled
before pseudo-episode $k$, we define $\hat{\rho}_{k}(s^{\prime}\mid s,a)=1$
for an arbitrarily chosen $s^{\prime}\in\mathcal{S}$, and
$\hat{\rho}_{k}(s^{\prime\prime}\mid s,a)=0$ for
$s^{\prime\prime}\in\mathcal{S}\setminus\\{s^{\prime}\\}$. The corresponding
matrix notation $\hat{P}^{k}$ is defined analogously.
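In code, the counts $N_{t}(s,a)$ and the empirical estimates $\hat{\rho}_{k}$ can be maintained incrementally. The sketch below is ours; all names are illustrative, not from the paper:

```python
import numpy as np

S, A = 4, 2
counts = np.zeros((S, A, S))           # counts[s, a, s'] = visits of (s, a, s')

def record(s, a, s_next):
    counts[s, a, s_next] += 1

def empirical_rho(s, a, rng):
    """Empirical next-state distribution for the pair (s, a)."""
    n_sa = counts[s, a].sum()          # N_t(s, a)
    if n_sa == 0:
        # Never sampled: put all mass on an arbitrary next state,
        # mirroring the convention in the text.
        p = np.zeros(S)
        p[rng.integers(S)] = 1.0
        return p
    return counts[s, a] / n_sa

rng = np.random.default_rng(0)
record(0, 1, 2); record(0, 1, 2); record(0, 1, 3)
p = empirical_rho(0, 1, rng)
assert np.allclose(p, [0.0, 0.0, 2 / 3, 1 / 3])
assert np.isclose(empirical_rho(3, 0, rng).sum(), 1.0)   # unvisited pair
```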
### 3.1 The Algorithm
We present the Continuing Posterior Sampling for Reinforcement Learning
(CPSRL) algorithm in Algorithm 1, which extends PSRL [3] to the infinite-horizon
setting with $\gamma$-discounted planning. The algorithm begins with a
prior distribution over environments with actions $\mathcal{A}$ and state
space $\mathcal{S}$. In addition, the algorithm takes a discount factor
$\gamma$ as input and initializes an indicator $X_{1}=0$. At the start of each
timestep $t$, if the indicator $X_{t}=0$, CPSRL samples an environment
$\mathcal{E}_{t}=(\mathcal{A},\mathcal{S},\rho_{t})$ from the posterior
distribution conditioned on the history $H_{t}$ available at that time, and
marks $t$ as the start of a new pseudo-episode. It then computes and follows
the policy $\pi_{t}=\pi^{\mathcal{E}_{t}}$ at time $t$. Otherwise, if
$X_{t}=1$, it sticks to the policy $\pi_{t}=\pi_{t-1}$ at time $t$ and adds
step $t$ to the current pseudo-episode. Then $X_{t+1}$ is drawn from a
Bernoulli distribution with parameter $\gamma$ to be used in the next
timestep.
Compared with vanilla PSRL, CPSRL simply adds an independent Bernoulli
random number generator to determine when to resample. Although CPSRL is not
designed to be implemented in practice per se, we note that such a resampling
scheme offers both scalability and generality. For example, when the
environment has an extremely large state or action space, e.g. Atari games
[11], prior resampling methods relying on state-action visitation statistics
[6] require a huge look-up table, while the resampling method in CPSRL can
still be easily applied, with little computational overhead.
Data: Prior distribution $f$, discount factor $\gamma$, total learning time
$T$
initialize $t=1$, $k=1$, $X_{1}=0$
for _$t\leq T$_ do
if _$X_{t}=0$_ then
$t_{k}\leftarrow t$
sample $\mathcal{E}^{k}\sim f(\cdot\mid H_{t_{k}})$
compute $\pi_{k}=\pi^{\mathcal{E}^{k}}$
$k\leftarrow k+1$
end if
sample and apply $A_{t}\sim\pi_{k}(\cdot\mid S_{t})$
observe $R_{t+1}$ and $S_{t+1}$
$t\leftarrow t+1$
sample $X_{t+1}\sim{\rm Bernoulli}\left(\gamma\right)$
end for
Algorithm 1 CPSRL (Continuing Posterior Sampling for Reinforcement Learning)
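In code, Algorithm 1 is an ordinary interaction loop plus one Bernoulli draw per step. The sketch below is our own illustration: the Dirichlet posterior over tabular transitions, the value-iteration planner, the toy environment, and all names are assumptions for the sake of a runnable example, not part of the paper.

```python
import numpy as np

def plan(rho, R, gamma, iters=500):
    """Greedy policy for gamma-discounted planning via value iteration.
    rho: (S, A, S) transition tensor, R: (S, A) reward table."""
    V = np.zeros(rho.shape[0])
    for _ in range(iters):
        Q = R + gamma * rho @ V
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def cpsrl(true_rho, R, gamma, T, seed=0):
    rng = np.random.default_rng(seed)
    S, A = R.shape
    alpha = np.ones((S, A, S))            # Dirichlet prior on each rho(.|s, a)
    s, x, policy, total = 0, 0, None, 0.0
    for _ in range(T):
        if x == 0:                        # X_t = 0: resample and re-plan
            sampled = np.array([[rng.dirichlet(alpha[i, a])
                                 for a in range(A)] for i in range(S)])
            policy = plan(sampled, R, gamma)
        a = policy[s]
        s_next = rng.choice(S, p=true_rho[s, a])
        total += R[s, a]
        alpha[s, a, s_next] += 1          # conjugate posterior update
        s = s_next
        x = 1 if rng.random() < gamma else 0   # X_{t+1} ~ Bernoulli(gamma)
    return total / T

# Toy 2-state chain: action 1 reaches the rewarding state 1 more reliably.
true_rho = np.array([[[0.9, 0.1], [0.1, 0.9]],
                     [[0.9, 0.1], [0.1, 0.9]]])
R = np.array([[0.0, 0.0], [1.0, 1.0]])
avg = cpsrl(true_rho, R, gamma=0.95, T=3000)
assert 0.3 < avg <= 1.0                   # learns to spend most time in state 1
```

Note that the only episode logic is the single Bernoulli draw at the bottom of the loop; swapping it for visitation-based stopping criteria would recover a TSDE-style scheme.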
## 4 Main Results
We present our main results in this section. Theorem 4.1 establishes that PSRL
with discounted planning satisfies a polynomial Bayesian regret bound for
infinite-horizon tabular MDP environments. The bound on the expected regret
is of order $\tilde{O}(\tau S\sqrt{AT})$, matching the regret bound for TSDE
in [6], but achieved by a simple and elegant algorithm without additional
episode-termination criteria. We remark briefly that the discount factor
in Algorithm 1 is not constrained to be constant: the bound in Theorem 4.1
is obtained by letting $\gamma$ depend on the problem parameters $T$, $S$, and
$A$ at a suitable rate, which we discuss in more detail in Section 5.1.
###### Theorem 4.1.
The expected regret for Algorithm 1 with $\gamma$ satisfying
$\frac{1}{1-\gamma}=\sqrt{\frac{T}{SA}}$ is bounded as
$\mathbb{E}\left[\text{\emph{Regret}}(T,\pi)\right]=\tilde{O}\left(\tau
S\sqrt{AT}\right).$
Note that our main theorem bounds the regret with respect to the optimal
average reward, and thus has no dependence on the discount factor $\gamma$,
which, as we emphasized, is only a design factor within the algorithm. For the
purpose of the analysis, we utilize the $\gamma$-discounted regret defined in
equation 6 and prove the following intermediate bound for the discounted
regret.
###### Theorem 4.2.
$\displaystyle\mathbb{E}\left[\text{\emph{Regret}}_{\gamma}(T,\pi)\right]\leq(4\tau+2)S\sqrt{28AT\log(2SAT)}+\frac{SA}{1-\gamma}\log\left(\frac{1}{1-\gamma}\log\left(\frac{2T}{1-\gamma}\right)\right)+\frac{1}{2}+\frac{2}{1-\gamma}.$
(8)
## 5 Analysis
As discussed in [3], a key property of posterior sampling algorithms is that
for any function $g$ measurable with respect to the sigma algebra generated by
the history,
$\mathbb{E}[g(\mathcal{E})|H_{t}]=\mathbb{E}[g(\mathcal{E}^{t})|H_{t}]$ if
$\mathcal{E}^{t}$ is sampled from the posterior distribution at time $t$.
Under our pseudo-episode construction, we state a stopping-time version of
this property similar to the one in [6].
###### Lemma 5.1.
If $f$ is the distribution of $\mathcal{E}$, then for any
$\sigma(H_{t_{k}})$-measurable function $g$, we have
$\mathbb{E}[g(\mathcal{E})|H_{t_{k}}]=\mathbb{E}[g(\mathcal{E}^{k})|H_{t_{k}}],$
(9)
where the expectation is taken over $f$.
We include a proof in Appendix B.1 for completeness. We let
$V^{\gamma}_{\pi_{k},k}=V^{\gamma}_{\pi_{k},\mathcal{E}^{k}}$ denote the value
function of $\pi_{k}$, the policy employed by CPSRL, under the sampled
environment $\mathcal{E}^{k}$, and define
$\tilde{\Delta}_{k}=V^{\gamma}_{\pi_{k},k}(s_{k,1})-V^{\gamma}_{\pi_{k},\mathcal{E}}(s_{k,1})$
as the difference in performance of $\pi_{k}$ under $\mathcal{E}^{k}$ and the
true environment $\mathcal{E}$. The next lemma allows us to evaluate regret in
terms of $\tilde{\Delta}_{k}$, which we can analyze using the known sampled
environment and our observations from the true environment; directly analyzing
$\Delta_{k}$ is typically hard, since the optimal policy
$\pi^{*}=\pi^{\mathcal{E}}$ is unknown.
###### Lemma 5.2.
$\mathbb{E}\left[\sum_{k=1}^{K}\Delta_{k}\right]=\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\right].$
(10)
###### Proof.
The claim follows by applying Lemma 5.1 and a similar argument as that in
Theorem 2 in [3]. ∎
We prove a value decomposition lemma that allows us to express the difference
in values of $\pi_{k}$ in the true environment $\mathcal{E}$ and sampled
environment $\mathcal{E}^{k}$ in terms of a sum of Bellman errors over one
stochastic pseudo-episode.
###### Lemma 5.3 (Value decomposition).
For any environment $\hat{\mathcal{E}}$ and any policy $\pi$,
$\displaystyle\mathbb{E}\left[V_{\pi,\mathcal{E}}^{\gamma}(s_{0})-V_{\pi,\hat{\mathcal{E}}}^{\gamma}(s_{0})\mid\mathcal{E},\hat{\mathcal{E}}\right]=\mathbb{E}\left[\sum_{t=0}^{\eta-1}\gamma\left\langle
P_{\pi(s_{t})s_{t}}-\hat{P}_{\pi(s_{t})s_{t}},V_{\pi,\hat{\mathcal{E}}}^{\gamma}\right\rangle\mid\mathcal{E},\hat{\mathcal{E}},\pi\right],$
where $\eta$ is the random length of a pseudo-episode, and the expectation is
over the distribution of $\eta$, conditioned on the sampled state trajectory
$s_{0},s_{1},\dots$ drawn from following $\pi$ in the environment
$\mathcal{E}$.
###### Proof.
Expanding the right hand side by the tower property, we get
$\displaystyle\quad\mathbb{E}\left[\sum_{t=0}^{\eta-1}\gamma\left\langle
P_{\pi(s_{t})s_{t}}-\hat{P}_{\pi(s_{t})s_{t}},V_{\pi,\hat{\mathcal{E}}}^{\gamma}\right\rangle\mid\mathcal{E},\hat{\mathcal{E}},\pi\right]$
$\displaystyle=\sum_{H=1}^{\infty}\mathbb{E}\left[\sum_{t=0}^{H-1}\gamma\left\langle
P_{\pi(s_{t})s_{t}}-\hat{P}_{\pi(s_{t})s_{t}},V_{\pi,\hat{\mathcal{E}}}^{\gamma}\right\rangle\mid\mathcal{E},\hat{\mathcal{E}},\pi\right]\cdot\gamma^{H-1}(1-\gamma)$
$\displaystyle=\mathbb{E}\left[\sum_{t=0}^{\infty}\gamma^{t+1}\left\langle
P_{\pi(s_{t})s_{t}}-\hat{P}_{\pi(s_{t})s_{t}},V_{\pi,\hat{\mathcal{E}}}^{\gamma}\right\rangle\mid\mathcal{E},\hat{\mathcal{E}},\pi\right].$
By applying Bellman equations recursively, the value function difference in
the left hand side can be expressed as
$\displaystyle
V_{\pi,\mathcal{E}}^{\gamma}(s_{0})-V_{\pi,\hat{\mathcal{E}}}^{\gamma}(s_{0})$
$\displaystyle=\sum_{t=0}^{\infty}\gamma^{t+1}\langle
P_{\pi(s_{t})s_{t}}-\hat{P}_{\pi(s_{t})s_{t}},V_{\pi,\hat{\mathcal{E}}}^{\gamma}\rangle+\sum_{t=0}^{\infty}\gamma^{t}d_{t},$
where
$\displaystyle d_{i}\coloneqq\gamma\langle
P_{\pi(s_{i})s_{i}},V_{\pi,\mathcal{E}}^{\gamma}-V_{\pi,\hat{\mathcal{E}}}^{\gamma}\rangle-\gamma\left(V_{\pi,\mathcal{E}}^{\gamma}(s_{i+1})-V_{\pi,\hat{\mathcal{E}}}^{\gamma}(s_{i+1})\right).$
In state $s_{i}$ under policy $\pi$, the conditional expectation of
$\gamma\left(V_{\pi,\mathcal{E}}^{\gamma}(s_{i+1})-V_{\pi,\hat{\mathcal{E}}}^{\gamma}(s_{i+1})\right)$
over $s_{i+1}$ is exactly $\gamma\langle
P_{\pi(s_{i})s_{i}},V_{\pi,\mathcal{E}}^{\gamma}-V_{\pi,\hat{\mathcal{E}}}^{\gamma}\rangle$,
so each $d_{i}$ has zero mean conditioned on the true environment
$\mathcal{E}$ and the sampled environment $\hat{\mathcal{E}}$, and hence the
expectation of $\sum_{t=0}^{\infty}\gamma^{t}d_{t}$ is zero. Taking the
expectation conditioned on $\mathcal{E},\hat{\mathcal{E}}$
and $\pi$, our claim is proved. ∎
At the start of each pseudo-episode $k$, we consider a confidence set
$\mathcal{M}_{k}=\left\\{(\mathcal{A},\mathcal{S},\rho):\left\|P_{as}-\hat{P}_{as}\right\|_{1}\leq\beta_{k}(s,a)~{}~{}\forall(s,a)\right\\}$
where
$\displaystyle\beta_{k}(s,a)$
$\displaystyle=\sqrt{\frac{14S\log(2SAKt_{k})}{\max\\{N_{t_{k}}(s,a),1\\}}}.$
The following lemma bounds the probability that the true environment falls
outside of the confidence set $\mathcal{M}_{k}$.
###### Lemma 5.4.
$\mathbb{P}\left(\mathcal{E}\notin\mathcal{M}_{k}\right)\leq\frac{1}{K}.$
The proof follows standard concentration arguments, which we defer to Appendix
B.2. We let
$m=\frac{1}{1-\gamma}\log\left(\frac{2K}{1-\gamma}\right)$
be a high-probability upper bound for the length of each pseudo-episode
$E_{k}$, $k=1,\dots,K$. Since $\tilde{\Delta}_{k}\leq\frac{1}{1-\gamma}$, we can
decompose the regret as
$\displaystyle\sum_{k=1}^{K}$
$\displaystyle\tilde{\Delta}_{k}\leq\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}+\frac{1}{1-\gamma}\sum_{k=1}^{K}\left[\mathbbm{1}_{\\{|E_{k}|>m\\}}+\mathbbm{1}_{\\{\mathcal{E}\notin\mathcal{M}_{k}\\}}+\mathbbm{1}_{\\{\mathcal{E}^{k}\notin\mathcal{M}_{k}\\}}\right].$
Note that $\mathcal{M}_{k}$ is $\sigma(H_{t_{k}})$-measurable, so by Lemma
5.1,
$\mathbb{E}[\mathbbm{1}_{\\{\mathcal{E}\notin\mathcal{M}_{k}\\}}\mid
H_{t_{k}}]=\mathbb{E}[\mathbbm{1}_{\\{\mathcal{E}^{k}\notin\mathcal{M}_{k}\\}}\mid
H_{t_{k}}].$
Therefore,
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\right]$
$\displaystyle\leq\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\right]+\frac{1}{1-\gamma}\mathbb{E}\left[\sum_{k=1}^{K}\mathbbm{1}_{\\{|E_{k}|>m\\}}\right]$
$\displaystyle\quad+\frac{2}{1-\gamma}\cdot\mathbb{E}\left[\sum_{k=1}^{K}\mathbbm{1}_{\\{\mathcal{E}\notin\mathcal{M}_{k}\\}}\right].$
(11)
The third term is bounded by $\frac{2}{1-\gamma}$ via Lemma 5.4. In what
follows, we show how to bound the second term and then the first.
Second term. For the second term, since $|E_{k}|$ follows a geometric
distribution with parameter $1-\gamma$, applying the inequality
$(1+\frac{x}{n})^{n}\leq e^{x}$ by taking $n=\frac{1}{1-\gamma}\geq 1,x=-1$,
we have
$\displaystyle\frac{1}{1-\gamma}\mathbb{E}\left[\sum_{k=1}^{K}\mathbbm{1}_{\\{|E_{k}|>m\\}}\right]$
$\displaystyle\leq\frac{1}{1-\gamma}\sum_{k=1}^{K}\gamma^{m}=\frac{1}{1-\gamma}\sum_{k=1}^{K}\gamma^{\frac{1}{1-\gamma}\log(2K/(1-\gamma))}\leq\frac{1}{1-\gamma}\sum_{k=1}^{K}e^{-\log\left(\frac{2K}{1-\gamma}\right)}=\frac{1}{2}.$
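The inequality used here amounts to $\gamma^{1/(1-\gamma)}\leq e^{-1}$, so that $\gamma^{m}\leq e^{-\log(2K/(1-\gamma))}=(1-\gamma)/(2K)$. A quick numeric check:

```python
import math

K = 1000
for gamma in [0.5, 0.9, 0.99, 0.999]:
    m = (1 / (1 - gamma)) * math.log(2 * K / (1 - gamma))
    # (1 + x/n)^n <= e^x with n = 1/(1-gamma), x = -1 gives
    # gamma^{1/(1-gamma)} <= e^{-1}.
    assert gamma ** (1 / (1 - gamma)) <= math.exp(-1) + 1e-12
    # Hence P(|E_k| > m) <= gamma^m <= (1 - gamma) / (2K) ...
    assert gamma ** m <= (1 - gamma) / (2 * K) + 1e-12
    # ... and the second term of the decomposition is at most 1/2.
    second_term = (1 / (1 - gamma)) * K * gamma ** m
    assert second_term <= 0.5 + 1e-9
```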
First term. It remains to bound the first term in equation 11. In Appendix A,
we provide a detailed proof of the bound
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\right]\leq 4\tau\cdot
S\sqrt{28AT\log(2SAT)}+\frac{SA}{1-\gamma}\log\left(\frac{1}{1-\gamma}\log\left(\frac{2T}{1-\gamma}\right)\right).$
We now conclude the proof of Theorem 4.2 below.
###### Proof of Theorem 4.2..
Combining the bound for each term in equation 11, we have
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\right]\leq
4\tau
S\sqrt{28AT\log(2SAT)}+\frac{SA}{1-\gamma}\log\left(\frac{1}{1-\gamma}\log\left(\frac{2T}{1-\gamma}\right)\right)+\frac{1}{2}+\frac{2}{1-\gamma}.$
The claim follows from Lemma 5.2. ∎
### 5.1 Average Reward Regret
We have treated $\gamma$ as a constant discount factor so far, resulting in a
bound on the discounted regret (equation 8) that scales linearly with
$\frac{1}{1-\gamma}$, ignoring logarithmic factors. However, the CPSRL
algorithm does not constrain
$\gamma$ to be a constant during the learning process, i.e. we are able to
allow the discount factor to increase over time, accounting for the growing
horizon. Specifically, at each time step $t$, we can consider a discount
factor $\gamma_{t}$, and CPSRL resamples a new environment with probability
$1-\gamma_{t}$. If resampling happens, the agent switches to the optimal
policy in the resampled environment maximizing the $\gamma_{t}$-discounted
cumulative reward. Otherwise, the agent keeps following the previous policy.
If $\gamma_{t}$ increases with $t$, the agent is effectively planning over a
longer horizon as interactions continue, and as Theorem 4.2 justifies, the
agent’s performance should always keep up with that of the optimal policy in
the environment, regardless of the planning horizon.
Note that, although Theorem 4.2 provides a sublinear upper bound for the
discounted regret with a fixed discount factor $\gamma$, we are ultimately
interested in the performance shortfall with respect to the optimal average-
reward policy. To obtain a sublinear upper bound for the latter, we employ
the time-dependent discount factors discussed above, which allows us to show
Theorem 4.1, a Bayesian regret bound that does not depend on a discount
factor.
###### Proof of Theorem 4.1.
To better represent the dependence on $K$, set
$\mathcal{R}(K,\pi)\coloneqq\text{Regret}(T,\pi),$
where $T$ is at the end of the $K$-th pseudo-episode. The expected regret can
be written as
$\displaystyle\mathbb{E}\left[\mathcal{R}(K,\pi)\right]=\mathbb{E}_{\pi}\left[\sum_{k=1}^{K}\sum_{t\in
E_{k}}(\lambda_{*}-R_{t})\right]=\mathbb{E}_{\pi}\left[\sum_{k=1}^{K}|E_{k}|\lambda_{*}-\sum_{k=1}^{K}\sum_{t\in
E_{k}}R_{t}\right].$
Adding and subtracting the optimal discounted value,
$\displaystyle\mathbb{E}\left[\mathcal{R}(K,\pi)\right]=\underbrace{\mathbb{E}\left[\sum_{k=1}^{K}(|E_{k}|\lambda_{*}-V_{*,\mathcal{E}}^{\gamma}(s_{k,1}))\right]}_{(a)}+\underbrace{\mathbb{E}\left[\sum_{k=1}^{K}\left[V_{*,\mathcal{E}}^{\gamma}(s_{k,1})-V_{\pi_{k},\mathcal{E}}^{\gamma}(s_{k,1})\right]\right]}_{(b)}.$
Since the length of each pseudo-episode is independent of the policy and the
environment itself, with mean $\frac{1}{1-\gamma}$, term (a) equals the
difference between the optimal average reward, scaled by the effective
horizon, and the optimal discounted value:
$\mathbb{E}\left[\sum_{k=1}^{K}\left(\frac{1}{1-\gamma}\lambda_{*}-V^{\gamma}_{*,\mathcal{E}}(s_{k,1})\right)\right].$
By Lemma 3.3,
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{K}\left|\frac{1}{1-\gamma}\lambda_{*}-V_{*,\mathcal{E}}^{\gamma}(s_{k,1})\right|\right]$
$\displaystyle\leq\tau\mathbb{E}[K].$
The expectation (b) is the sum of differences between the optimal discounted
value and the discounted value of the deployed policy over $K$ pseudo-
episodes. From equation 6 we can see that (b) is exactly
$\mathbb{E}\left[\text{Regret}_{\gamma}(T,\pi)\right]$. By Theorem 4.2, we
have that the regret can be bounded in terms of $\gamma$ as
$\displaystyle\mathbb{E}\left[\mathcal{R}(K,\pi)\right]$
$\displaystyle\leq\tau\mathbb{E}[K]+\tilde{O}\left(4\tau
S\sqrt{AT}+\frac{SA}{1-\gamma}\right)\leq(1-\gamma)T\tau+\tau+\tilde{O}\left(4\tau
S\sqrt{AT}+\frac{SA}{1-\gamma}\right).$
If we assume knowledge of $T$, then optimizing over $\gamma$ shows that the
best rate in terms of $T$ is achieved by setting
$\frac{1}{1-\gamma}=\sqrt{\frac{T}{SA}},$
and the final bound becomes
$\displaystyle\mathbb{E}[\text{Regret}(T,\pi)]=\mathbb{E}\left[\mathcal{R}(K,\pi)\right]\leq\tau\sqrt{SAT}+\tilde{O}\left(4\tau
S\sqrt{AT}\right)=\tilde{O}\left(\tau S\sqrt{AT}\right).$
If the total learning horizon $T$ is unknown, we can utilize a classical
doubling trick argument that is common in the design of online learning
algorithms [17]. The idea is to divide the learning horizon into time
intervals of the form $[2^{k},2^{k+1})$ and set
$\frac{1}{1-\gamma_{t}}=\sqrt{\frac{2^{k+1}}{SA}}$
for $t\in[2^{k},2^{k+1})$, $k\in\mathbb{N}$.
∎
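Under the doubling trick, the discount schedule can be written as a simple function of $t$. The sketch below is ours; the clamp keeping $\gamma_{t}\in[0,1)$ is an implementation detail we add for the early intervals where $2^{k+1}<SA$:

```python
import math

def gamma_t(t, S, A):
    """Doubling-trick discount factor:
    1/(1 - gamma_t) = sqrt(2^{k+1} / (S * A)) for t in [2^k, 2^{k+1})."""
    k = int(math.floor(math.log2(t)))        # requires t >= 1
    horizon = math.sqrt(2 ** (k + 1) / (S * A))
    return 1.0 - 1.0 / max(horizon, 1.0)     # clamp keeps gamma_t in [0, 1)

# gamma_t is piecewise constant, nondecreasing, and approaches 1 as t grows,
# so the agent effectively plans over an ever longer horizon.
S, A = 10, 4
gs = [gamma_t(t, S, A) for t in range(1, 10000)]
assert all(g1 <= g2 for g1, g2 in zip(gs, gs[1:]))
assert 0.0 <= gs[0] < 1.0 and gs[-1] > 0.9
```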
## 6 Conclusion
We proposed a novel algorithm design extending PSRL to the setting where the
environment does not have a reset schedule, and the agent has to plan over a
possibly infinite horizon. We establish theoretically that our algorithm,
CPSRL, enjoys a regret upper bound that is close to theoretically optimal.
Notably, CPSRL relies only on a single Bernoulli random number
generator to resample the environment, as opposed to the complex episode-
stopping schemes in prior works. Such a design principle can be readily
applied to general environments with large state spaces. Moreover, CPSRL also
highlights the role of the discount factor in agent design: the discount
factor is no longer considered part of the learning target, but mainly acts
as a tool for the agent to dynamically adjust its planning horizon. As such,
this work may provide an important step toward understanding discount
factors, which enjoy wide popularity in practical RL applications.
## References
* [1] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
* [2] Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, and Zheng Wen. A tutorial on Thompson sampling. Foundations and Trends® in Machine Learning, 11(1):1–96, 2018\.
* [3] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, 2013.
* [4] Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. In International Conference on International Conference on Machine Learning, 2016.
* [5] Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, 2016.
* [6] Yi Ouyang, Mukul Gagrani, Ashutosh Nayyar, and Rahul Jain. Learning unknown Markov decision processes: A Thompson sampling approach. In Advances in Neural Information Processing Systems, 2017.
* [7] Georgios Theocharous, Zheng Wen, Yasin Abbasi Yadkori, and Nikos Vlassis. Scalar posterior sampling with applications. In Advances in Neural Information Processing Systems, 2018.
* [8] Vikranth Dwaracherla, Xiuyuan Lu, Morteza Ibrahimi, Ian Osband, Zheng Wen, and Benjamin Van Roy. Hypermodels for exploration. In International Conference on Learning Representations, 2020.
* [9] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory, 2012\.
* [10] Yuanhao Wang, Kefan Dong, Xiaoyu Chen, and Liwei Wang. Q-learning with UCB exploration is sample efficient for infinite-horizon MDP. In International Conference on Learning Representations, 2020.
* [11] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
* [12] Vincent François-Lavet, Raphaël Fonteneau, and Damien Ernst. How to discount deep reinforcement learning: Towards new dynamic strategies. In Advances in Neural Information Processing Systems, 2015.
* [13] Shi Dong, Benjamin Van Roy, and Zhengyuan Zhou. Simple agent, complex environment: Efficient reinforcement learning with agent states. Journal of Machine Learning Research, 23:1–54, 2022.
* [14] Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, and Zheng Wen. Reinforcement learning, bit by bit. arXiv preprint arXiv:2103.04047, 2021.
* [15] Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In International Conference on Artificial Intelligence and Statistics, 2013.
* [16] Peter L. Bartlett and Ambuj Tewari. REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs. In Conference on Uncertainty in Artificial Intelligence, 2009.
* [17] Junzi Zhang, Jongho Kim, Brendan O’Donoghue, and Stephen Boyd. Sample efficient reinforcement learning with REINFORCE. In AAAI Conference on Artificial Intelligence, 2020.
* [18] Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdú, and Marcelo J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. In Hewlett-Packard Labs Technical Reports, 2003.
## Appendix A Bounding the sum of confidence set widths
We are interested in bounding
$\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\right].$
###### Proof.
For each $k$, we define the event
$\mathcal{B}_{k}\coloneqq\left\\{N_{t_{k+1}-1}(s,a)+1\leq
2N_{t_{k}}(s,a)~{}\text{for
all}~{}(s,a)\in\mathcal{S}\times\mathcal{A}\right\\}.$
Then $\mathcal{B}_{k}^{c}=\\{N_{t_{k+1}-1}(s,a)\geq
2N_{t_{k}}(s,a)~{}\text{for some}~{}(s,a)\in\mathcal{S}\times\mathcal{A}\\}$.
Following a strategy similar to that in [3], we can write
$\displaystyle\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\right]$
$\displaystyle\leq\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}\right]+\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}^{c}}\right]$
$\displaystyle\leq\mathbb{E}\left[\sum_{k=1}^{K}\mathbb{E}\left[\tilde{\Delta}_{k}\mid\mathcal{E},\mathcal{E}^{k}\right]\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}\right]+\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}^{c}}\right].$
The event $N_{t_{k+1}-1}(s,a)\geq 2N_{t_{k}}(s,a)$ can happen in at most $\log
m$ pseudo-episodes per state-action pair under the event $\\{|E_{k}|\leq m\\}$. Thus,
the second term can be bounded by
$\mathbb{E}\left[\sum_{k=1}^{K}\tilde{\Delta}_{k}\mathbbm{1}_{\\{|E_{k}|\leq
m\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}^{c}}\right]\leq\frac{SA}{1-\gamma}\log
m.$
We define $\\{(s_{k,i},a_{k,i})\\}_{i=1}^{|E_{k}|}$ to be the trajectory
followed by $\pi_{k}$ in pseudo-episode $k$ starting from state
$s_{k,1}=s_{t_{k}}$, the state at the beginning of pseudo-episode $k$. By
equation 5,
$\max_{s\in\mathcal{S}}V^{\gamma}_{\pi_{k},\mathcal{E}^{k}}(s)-\min_{s\in\mathcal{S}}V^{\gamma}_{\pi_{k},\mathcal{E}^{k}}(s)\leq
2\tau$. Thus, we can bound the first term
$\displaystyle\quad\mathbb{E}\left[\sum_{k=1}^{K}\mathbb{E}\left[\tilde{\Delta}_{k}\mid\mathcal{E},\mathcal{E}^{k}\right]\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}\right]$
$\displaystyle\leq\mathbb{E}\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\mathbb{E}\left[\gamma\left|\langle
P_{\pi_{k}(s_{k,i})s_{k,i}}-P^{k}_{\pi_{k}(s_{k,i})s_{k,i}},V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}\rangle\right|\mid\mathcal{E},\mathcal{E}^{k}\right]\cdot\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}$
$\displaystyle=\mathbb{E}\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\mathbb{E}\Big{[}\gamma\left|\langle
P_{\pi_{k}(s_{k,i})s_{k,i}}-P^{k}_{\pi_{k}(s_{k,i})s_{k,i}},V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}-\min_{s\in\mathcal{S}}V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}(s)\cdot\mathbf{1}\rangle\right|$
$\displaystyle\quad+\gamma\left|\langle
P_{\pi_{k}(s_{k,i})s_{k,i}}-P^{k}_{\pi_{k}(s_{k,i})s_{k,i}},\min_{s\in\mathcal{S}}V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}(s)\cdot\mathbf{1}\rangle\right|\mid\mathcal{E},\mathcal{E}^{k}\Big{]}\cdot\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}$
$\displaystyle=\mathbb{E}\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\mathbb{E}\Big{[}\gamma\left|\langle
P_{\pi_{k}(s_{k,i})s_{k,i}}-P^{k}_{\pi_{k}(s_{k,i})s_{k,i}},V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}-\min_{s\in\mathcal{S}}V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}(s)\cdot\mathbf{1}\rangle\right|\mid\mathcal{E},\mathcal{E}^{k}\Big{]}\cdot\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}$
$\displaystyle\leq\mathbb{E}\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\gamma\left\|P_{\pi_{k}(s_{k,i})s_{k,i}}-P^{k}_{\pi_{k}(s_{k,i})s_{k,i}}\right\|_{1}\left\|V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}-\min_{s\in\mathcal{S}}V_{\pi_{k},\mathcal{E}^{k}}^{\gamma}(s)\cdot\mathbf{1}\right\|_{\infty}\cdot\mathbbm{1}_{\\{\mathcal{E},\mathcal{E}^{k}\in\mathcal{M}_{k}\\}}\cdot\mathbbm{1}_{\mathcal{B}_{k}}$
$\displaystyle\leq\mathbb{E}\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\min\\{4\tau\gamma\beta_{k}(s_{k,i},a_{k,i}),1\\}\cdot\mathbbm{1}_{\mathcal{B}_{k}}.$
where the first inequality follows from Lemma 5.3, the second equality holds
since $P_{\pi_{k}(s_{k,i})s_{k,i}}$ and $P^{k}_{\pi_{k}(s_{k,i})s_{k,i}}$ are
probability distributions (so the inner product of their difference with a
constant vector vanishes), the second-to-last inequality is Hölder's
inequality, and the last inequality uses the definition of $\mathcal{M}_{k}$
together with the span bound in equation 5. We proceed to bound this sum
further. Recall
that
$\beta_{k}(s,a)=\sqrt{\frac{14S\log(2SAKt_{k})}{\max\\{N_{t_{k}}(s,a),1\\}}}$,
then
$\displaystyle\quad\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\min\\{4\tau\gamma\beta_{k}(s_{k,i},a_{k,i}),1\\}\cdot\mathbbm{1}_{\mathcal{B}_{k}}\leq
4\tau\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\mathbbm{1}_{\mathcal{B}_{k}}\sqrt{\frac{14S\log(2SAKt_{k})}{\max\\{1,N_{t_{k}}(s_{k,i},a_{k,i})\\}}}.$
Under the event $\mathcal{B}_{k}=\\{N_{t_{k+1}-1}(s,a)+1\leq
2N_{t_{k}}(s,a)~{}\forall(s,a)\in\mathcal{S}\times\mathcal{A}\\}$, for any
$t\in E_{k}$, $N_{t}(s,a)+1\leq N_{t_{k+1}-1}(s,a)+1\leq 2N_{t_{k}}(s,a)$.
Therefore,
$\displaystyle\sum_{k=1}^{K}\sum_{t\in
E_{k}}\sqrt{\frac{\mathbbm{1}_{\mathcal{B}_{k}}}{N_{t_{k}}(s_{t},a_{t})}}$
$\displaystyle\leq\sum_{k=1}^{K}\sum_{t\in
E_{k}}\sqrt{\frac{2}{N_{t}(s_{t},a_{t})+1}}$
$\displaystyle=\sqrt{2}\sum_{t=1}^{T}(N_{t}(s_{t},a_{t})+1)^{-1/2}$
$\displaystyle\leq\sqrt{2}\sum_{s,a}\sum_{j=1}^{N_{T+1}(s,a)}j^{-1/2}$
$\displaystyle\leq\sqrt{2}\sum_{s,a}\int_{x=0}^{N_{T+1}(s,a)}x^{-1/2}dx$
$\displaystyle\leq\sqrt{2SA\sum_{s,a}N_{T+1}(s,a)}$
$\displaystyle=\sqrt{2SAT}.$
Since all rewards and transition probabilities are bounded in $[0,1]$, our
term of interest satisfies
$\displaystyle\quad\sum_{k=1}^{K}\sum_{i=1}^{|E_{k}|}\min\\{4\tau\gamma\beta_{k}(s_{k,i},a_{k,i}),1\\}\cdot\mathbbm{1}_{\mathcal{B}_{k}}\leq
4\tau\cdot S\sqrt{28AT\log(2SAT)}.$
∎
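The pigeonhole step in the proof above, $\sum_{t}(N_{t}(s_{t},a_{t})+1)^{-1/2}=\sum_{s,a}\sum_{j=1}^{N_{T+1}(s,a)}j^{-1/2}=O(\sqrt{SAT})$, holds for any trajectory. A numeric sanity check of ours, using the safe constant 2 from the integral and Cauchy-Schwarz steps:

```python
import math
import random

rng = random.Random(0)
S, A, T = 5, 3, 2000

# A random trajectory of state-action pairs; the bound holds for any trajectory.
N = {}
total = 0.0
for _ in range(T):
    sa = (rng.randrange(S), rng.randrange(A))
    n = N.get(sa, 0)
    total += 1.0 / math.sqrt(n + 1)      # (N_t(s_t, a_t) + 1)^{-1/2}
    N[sa] = n + 1

# Pigeonhole + Cauchy-Schwarz:
#   sum_t (N_t + 1)^{-1/2} = sum_{s,a} sum_{j=1}^{N_{T+1}(s,a)} j^{-1/2}
#                          <= sum_{s,a} 2 sqrt(N_{T+1}(s,a))
#                          <= 2 sqrt(S * A * T).
assert total <= 2.0 * math.sqrt(S * A * T)
```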
## Appendix B Supporting Lemmas
### B.1 Proof of Lemma 5.1
###### Proof.
By definition, $t_{k}$ is a stopping time, so it is
$\sigma(H_{t_{k}})$-measurable, and the posterior distribution
$\mathbb{P}(\mathcal{E}\in\cdot|H_{t_{k}})$ is likewise measurable with
respect to $\sigma(H_{t_{k}})$. Conditioned on $H_{t_{k}}$, the only
randomness in $g(\mathcal{E}^{k})$ is the algorithm's internal sampling, and
$\mathcal{E}^{k}$ is drawn exactly from
$\mathbb{P}(\mathcal{E}\in\cdot|H_{t_{k}})$. The claim follows by integrating
both sides over $\mathbb{P}(\mathcal{E}\in\cdot|H_{t_{k}})$. ∎
### B.2 Proof of Lemma 5.4
###### Proof.
By the $L_{1}$-deviation bound for empirical distributions in [18], when
$N_{t_{k}}(s,a)>0$,
$\displaystyle\mathbb{P}\left(\left\|\hat{P}_{as}-P_{as}\right\|_{1}\geq\beta_{k}(s,a)\right)$
$\displaystyle\leq(2^{S}-2)\exp\left(-\frac{N_{t_{k}}(s,a)\beta_{k}^{2}(s,a)}{2}\right)$
$\displaystyle\leq 2^{S}\exp\left(-7S\log(2SAKt_{k})\right)\leq
2^{S}\exp(-\log(4\cdot 2^{S}SAKt_{k}^{2}))=\frac{1}{4SAKt_{k}^{2}},$
where the first inequality is the $L_{1}$-deviation bound from [18], which
applies since $\hat{P}_{as}$ and $P_{as}$ are probability distributions over
$\mathcal{S}$, the second inequality substitutes the definition of
$\beta_{k}(s,a)$, and the third follows since
$7S\log(2SAKt_{k})\geq\log(4\cdot 2^{S}SAKt_{k}^{2})$. When $N_{t_{k}}(s,a)=0$,
the bounds trivially hold. Applying a union bound over all possible values of
$t_{k}=1,\dots,\infty$, we get
$\displaystyle\mathbb{P}\left(\left\|\hat{P}_{as}-P_{as}\right\|_{1}\geq\beta_{k}(s,a)\right)$
$\displaystyle\leq\sum_{t=1}^{\infty}\frac{1}{4SAKt^{2}}=\frac{\pi^{2}}{24SAK}\leq\frac{1}{2SAK}.$
We may conclude the proof with a union bound over all $(s,a)$. ∎
# Finding mixed-strategy equilibria of continuous-action games without
gradients using randomized policy networks
Carlos Martin Computer Science Department, Carnegie Mellon University Tuomas
Sandholm Computer Science Department, Carnegie Mellon University Strategy
Robot, Inc. Optimized Markets, Inc. Strategic Machine, Inc.
###### Abstract
We study the problem of computing an approximate Nash equilibrium of
a continuous-action game without access to gradients. Such game access is common
in reinforcement learning settings, where the environment is typically treated
as a black box. To tackle this problem, we apply zeroth-order optimization
techniques that combine smoothed gradient estimators with equilibrium-finding
dynamics. We model players’ strategies using artificial neural networks. In
particular, we use randomized policy networks to model mixed strategies. These
take noise in addition to an observation as input and can flexibly represent
arbitrary observation-dependent, continuous-action distributions. Being able
to model such mixed strategies is crucial for tackling continuous-action games
that lack pure-strategy equilibria. We evaluate the performance of our method
using an approximation of the Nash convergence metric from game theory, which
measures how much players can benefit from unilaterally changing their
strategy. We apply our method to continuous Colonel Blotto games, single-item
and multi-item auctions, and a visibility game. The experiments show that our
method can quickly find high-quality approximate equilibria. Furthermore, they
show that the dimensionality of the input noise is crucial for performance. To
our knowledge, this paper is the first to solve general continuous-action
games with unrestricted mixed strategies and without any gradient information.
## 1 Introduction
Most work on computing equilibria of games has focused on settings with
finite, discrete action spaces. Yet many games involving space, time, money,
_etc._ actually have continuous action spaces. Examples include continuous
resource allocation games [Ganzfried, 2021], security games in continuous
spaces [Kamra et al., 2017, 2018, 2019], network games [Ghosh and Kundu,
2019], simulations of military scenarios and wargaming [Marchesi et al.,
2020], and video games [Berner et al., 2019, Vinyals et al., 2019].
Furthermore, even if the action space is discrete, it may be fine-grained
enough to treat as continuous for computational efficiency purposes [Borel,
1938, Chen and Ankenman, 2006, Ganzfried and Sandholm, 2010b].
The typical approach to computing an equilibrium of a game with continuous
action spaces involves discretizing the action space. That entails loss in
solution quality [Kroer and Sandholm, 2015]. Also, it does not scale well; for
one, in multidimensional action spaces it entails a combinatorial explosion of
discretized points (exponential in the number of dimensions). Therefore, other
approaches are called for. Furthermore, in many applications, explicit
gradient information about the game is not available.
This paper is, to our knowledge, the first to solve general continuous-action
games with unrestricted mixed strategies and without any gradient information.
We start by introducing some background needed to formulate the problem. We
then describe related research that tackles the problem of computing
approximate equilibria of such games. Next, we describe our method and its
components, including smoothed gradient estimators, equilibrium-finding
dynamics, and representation of mixed strategies111We use the terms policy and
strategy interchangeably. The former is common in reinforcement learning, the
latter in game theory. using randomized policy networks. We then describe the
various games that we use in the experiments. After that, we present our
experimental results and discussion. Finally, we present our conclusion and
suggest directions for future research.
## 2 Problem description
First, we introduce some notation: $\triangle X$ is the set of probability
measures on $X$, $\mathcal{U}(X)$ is the uniform probability measure on $X$,
and $[\cdot]$ is an Iverson bracket, which is 1 if its argument is true and 0
otherwise.
A _strategic-form game_ is a tuple $(I,S,u)$ where $I$ is a set of players,
$S_{i}$ a set of strategies for player $i$, and
$u_{i}:\prod_{j:I}S_{j}\to\mathbb{R}$ a utility function for player $i$. A
strategy profile $s:\prod_{i:I}S_{i}$ maps players to strategies and $s_{-i}$
denotes $s$ excluding $i$’s strategy. Player $i$’s best response utility
$b_{i}(s_{-i})=\sup_{s_{i}:S_{i}}u_{i}(s)$ is the highest utility they can
attain given the other players’ strategies. Their utility gap
$g_{i}(s)=b_{i}(s_{-i})-u_{i}(s)$ is the highest utility they can gain from
unilaterally changing their strategy, and $s$ is an $\varepsilon$-equilibrium
iff $\sup_{i:I}g_{i}(s)\leq\varepsilon$. A 0-equilibrium is called a Nash
equilibrium. A common measure of closeness to Nash equilibrium is _NashConv_ ,
defined as $\bar{g}=\int_{i\sim\mu}g_{i}$, where $\mu$ is some measure on $I$.
Typically, $I$ is finite and $\mu$ is the counting measure, making $\bar{g}$ a
finite sum of utility gaps. However, some games may have infinitely many
players, such as mean field games. A game is zero-sum iff
$\int_{i\sim\mu}u_{i}=0$, which makes $\bar{g}=\int_{i\sim\mu}b_{i}$. In a
two-player zero-sum game, $\bar{g}$ reduces to the so-called “duality gap”
[Grnarova et al., 2019]:
$\bar{g}(s)=\sup_{s_{1}^{\prime}}u(s_{1}^{\prime},s_{2})-\inf_{s_{2}^{\prime}}u(s_{1},s_{2}^{\prime})$.
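As a concrete illustration of these definitions, the utility gaps and NashConv can be computed directly for a small finite game. The following sketch (matching pennies; the function name is ours) evaluates $g_{i}$ for each player and sums them:

```python
import numpy as np

# Matching pennies: payoff matrix for the row player; the column
# player's payoffs are the negation (two-player zero-sum).
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def nashconv(p, q):
    """Sum of utility gaps g_i(s) = b_i(s_-i) - u_i(s) over both players."""
    u_row = p @ A @ q              # row player's expected utility
    b_row = np.max(A @ q)          # best-response utility against q
    b_col = np.max(-(p @ A))       # column player's best response against p
    return (b_row - u_row) + (b_col + u_row)   # column utility is -u_row

uniform = np.array([0.5, 0.5])
pure = np.array([1.0, 0.0])
print(nashconv(uniform, uniform))  # 0.0: uniform play is the Nash equilibrium
print(nashconv(pure, uniform))     # 1.0: the column player can exploit the deviation
```

At the mixed equilibrium the gap is zero, while a unilateral pure deviation by the row player leaves a positive gap for the opponent.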
In many games, the $S_{i}$ are infinite. The following theorems apply to such
games. If for all $i$, $S_{i}$ is nonempty and compact, and $u_{i}$ is
continuous in $s$, a mixed strategy Nash equilibrium exists [Glicksberg,
1952]. If for all $i$, $S_{i}$ is nonempty, compact, and convex, and $u_{i}$
is continuous in $s$ and quasi-concave in $s_{i}$, a pure strategy Nash
equilibrium exists [Fudenberg and Tirole, 1991, p. 34]. Other results include
the existence of a mixed strategy Nash equilibrium for games with
discontinuous utilities under some mild semicontinuity conditions on the
utility functions [Dasgupta and Maskin, 1986], and the uniqueness of a pure
Nash equilibrium for continuous games under diagonal strict concavity
assumptions [Rosen, 1965].
A _Bayesian game_ is a game of incomplete information, that is, a game in
which players have only partial information about the game and other players.
Formally, it is a tuple $(I,\Omega,\mu,O,\tau,A,r)$ where $I$ is a set of
players, $\Omega$ a set of states, $\mu:\triangle\Omega$ a distribution over
states, $O_{i}$ a set of observations for $i$, $\tau_{i}:\Omega\to O_{i}$ an
observation function for $i$, $A_{i}$ a set of actions for $i$, and
$r_{i}:\Omega\times\prod_{j:I}A_{j}\to\mathbb{R}$ a payoff function for $i$. A
strategy for player $i$ is a function $s_{i}:O_{i}\to\triangle A_{i}$. Given
strategy profile $s$, player $i$’s expected payoff is
$u_{i}(s)=\operatorname{E}_{\omega\sim\mu}\operatorname{E}_{a_{j}\sim
s_{j}(\tau_{j}(\omega)),\forall j:I}r_{i}(\omega,a)$ and their best response
payoff $b_{i}(s_{-i})=\sup_{s_{i}}u_{i}(s)$ is
$\displaystyle\sup_{s_{i}}\operatorname{E}_{\omega}\operatorname{E}_{a}r_{i}(\omega,a)$
$\displaystyle=\operatorname{E}_{o_{i}}\sup_{s_{i}}\operatorname{E}_{\omega|o_{i}}\operatorname{E}_{a}r_{i}(\omega,a)$
(1)
$\displaystyle=\operatorname{E}_{o_{i}}\sup_{a_{i}}\operatorname{E}_{\omega|o_{i}}\operatorname{E}_{a_{-i}}r_{i}(\omega,a)$
(2)
where $\omega|o_{i}$ conditions $\omega$ on player $i$’s observation being
$o_{i}$.
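The form of Equation (2) suggests a simple Monte Carlo estimator for the best-response payoff: sample observations, sample states given each observation, and take the sup over a discretized action grid. A sketch on an assumed toy payoff (the player tries to match a hidden Gaussian state; the game is our illustrative construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def payoff(omega, a):
    # Toy payoff: the player wants its action to match the hidden state.
    return -(omega - a) ** 2

# Estimate b_i = E_o sup_a E_{omega|o} r(omega, a) (Equation 2) with
# observation samples, state samples per observation, and a sup taken
# over a discretized action grid.
n_obs, n_states = 100, 300
actions = np.linspace(-4.0, 4.0, 201)

obs = rng.normal(size=n_obs)                     # o ~ N(0, 1)
best = 0.0
for o in obs:
    omegas = o + rng.normal(size=n_states)       # omega | o ~ N(o, 1)
    inner = payoff(omegas[:, None], actions[None, :]).mean(axis=0)
    best += inner.max()                          # sup over the action grid
b_hat = best / n_obs
print(b_hat)   # close to -1.0, the analytic value -Var(omega | o)
```

This is the same estimation pattern used later in the paper to estimate NashConv: observation samples, state samples per observation, and a discretized action space for the sup.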
## 3 Related research
McMahan et al. [2003] introduce the double oracle algorithm for normal-form
games and prove convergence. Adam et al. [2021] extend it to two-player
zero-sum continuous games. Kroupa and Votroubek [2021] extend it to $n$-player
continuous games. Their algorithm maintains finite strategy sets for each
player and iteratively extends them with best responses to an equilibrium of
the induced finite sub-game. It “converges fast when the dimension of strategy
spaces is small, and the generated subgames are not large.” For example, in
the two-player zero-sum case: “The best responses were computed by selecting
the best point of a uniform discretization for the one-dimensional problems
and by using a mixed-integer linear programming reformulation for the Colonel
Blotto games.” This approach does not scale to high-dimensional games with
general payoffs where best-response computation is difficult. Moreover, if the
game is stochastic, estimating the finite subgame can be difficult and require
many samples. Furthermore, this approach does not learn observation-dependent
strategies that generalize across observations.
Ganzfried [2021] introduces an algorithm for approximating equilibria in
continuous games called “redundant fictitious play” and apply it to a
continuous Colonel Blotto game. Kamra et al. [2019] present DeepFP, an
approximate extension of fictitious play [Brown, 1951, Berger, 2007] to
continuous action spaces. They demonstrate stable convergence to equilibrium
on several classic games and a large forest security domain. DeepFP represents
players’ approximate best responses via generative neural networks. The
authors state that such models cannot be trained directly in the absence of
gradients, and thus employ a game-model network that is a differentiable
approximation of the game’s payoff function, training these networks end-to-
end in a model-based learning regime. Our approach shows, however, that these
generative models _can_ be trained directly.
Li and Wellman [2021] extend the double oracle approach to $n$-player general-
sum continuous Bayesian games. They represent agents as neural networks and
optimize them using _natural evolution strategies (NES)_ [Wierstra et al.,
2008, 2014]. To approximate a pure-strategy equilibrium, they formulate the
problem as a bi-level optimization and employ NES to implement both inner-loop
best response optimization and outer-loop regret minimization. Bichler et al.
[2021] represent strategies as neural networks and apply simultaneous
gradients to provably learn local equilibria. They focus on symmetric auction
models, assuming symmetric prior distributions and symmetric equilibrium
bidding strategies. Bichler and Kohring [2022] extend that to asymmetric
auctions, where one needs to train multiple neural networks. Both papers
restrict their attention to pure strategies. Fichtl et al. [2022] compute
distributional strategies [Milgrom and Weber, 1985] (a form of mixed
strategies for Bayesian games) on a discretized version of the game via online
convex optimization, specifically _simultaneous online dual averaging_ , and
show that the equilibrium of the discretized game approximates an equilibrium
in the continuous game. That is, they discretize the type and action spaces
and implement gradient dynamics in the discretized version of the game without
using neural networks. In contrast, our approach does not use discretization,
which can work well for small games but does not scale to high-dimensional
observation and action spaces.
A _generative adversarial network (GAN)_ [Goodfellow et al., 2014] consists of
a generator that learns to generate fake data and a discriminator that learns
to distinguish it from real data. Grnarova et al. [2019] propose using an
approximation of the game-theoretic _duality gap_ as a performance measure for
GANs. Grnarova et al. [2021] propose using this measure as the training
objective itself, proving some convergence guarantees. Lockhart et al. [2019]
present exploitability descent, which computes approximate equilibria in two-
player zero-sum extensive-form games by direct strategy optimization against
worst-case opponents. They prove that the exploitability of a player’s
strategy converges asymptotically to zero. Hence, if both players employ this
optimization, the strategy profile converges to an equilibrium. Unlike
extensive-form fictitious play [Heinrich et al., 2015] and counterfactual
regret minimization [Zinkevich et al., 2007], their result pertains to last-
iterate rather than average-iterate convergence. Timbers et al. [2022]
introduce approximate exploitability, which uses an approximate best response
computed through search and reinforcement learning. This is a generalization
of local best response, a domain-specific evaluation metric used in poker
[Lisý and Bowling, 2017].
Gemp et al. [2022] propose an approach called _average deviation incentive
descent with adaptive sampling_ that iteratively improves an approximation to
a Nash equilibrium through joint play by tracing a homotopy that defines a
continuum of equilibria for the game regularized with decaying levels of
entropy. To encourage iterates to remain near this path, they minimize average
deviation incentive via stochastic gradient descent.
Ganzfried and Sandholm [2010a, b] present a procedure for solving large
imperfect-information games by solving an infinite approximation of the
original game and mapping the equilibrium to a strategy profile in the
original game. Perhaps counterintuitively, the infinite approximation can
often be solved much more easily than the finite game. The algorithm exploits
some qualitative model of equilibrium structure as an additional input to find
an equilibrium in continuous games.
## 4 Game-solving technique
We now describe the components of our game-solving technique.
### 4.1 Gradient estimation
Consider the problem of maximizing $f:\mathbb{R}^{d}\to\mathbb{R}$ with access
to its values but not derivatives. This setting is called _zeroth-order
optimization_. One approach to this problem is to compute estimates of the
gradient $g(x)\approx\nabla f(x)$ and apply gradient-based optimization. The
gradient could be estimated via finite differences as
$g(x)_{i}=\tfrac{1}{\sigma}(f(x+\sigma e_{i})-f(x))$ for all $i\in[d]$, where
$e_{i}$ is the $i$th standard basis vector and $\sigma$ is a small number.
However, the number of queries needed scales linearly with the number of
dimensions $d$. Another approach is to evaluate the function at _randomly-
sampled_ points and estimate the gradient as a sum of estimates of directional
derivatives along random directions [Duchi et al., 2015, Nesterov and
Spokoiny, 2017, Shamir, 2017, Berahas et al., 2022]. These methods compute an
unbiased estimator of the gradient of a _smoothed_ version of $f$ induced by
stochastically perturbing the input under some distribution $\mu_{1}$ and
taking the expectation [Duchi et al., 2012]. Specifically, for distributions
$\mu_{1}$ and $\mu_{2}$, $\nabla_{x}\operatorname{E}_{u\sim\mu_{1}}f(x+\sigma
u)=\tfrac{1}{\sigma}\operatorname{E}_{u\sim\mu_{2}}f(x+\sigma u)u$. Gaussian
smoothing uses $\mu_{1}=\mu_{2}=\mathcal{N}(0,I_{d})$. Ball smoothing uses
$\mu_{1}=\mathcal{U}(\sqrt{d}\mathbb{B}_{d}),\mu_{2}=\mathcal{U}(\sqrt{d}\mathbb{S}_{d})$,
where $\mathbb{B}_{d}$ and $\mathbb{S}_{d}$ are the $d$-dimensional unit ball
and sphere. These yield instances of a class of black box optimization
algorithms called _evolution strategies_ [Rechenberg and Eigen, 1973,
Schwefel, 1977, Rechenberg, 1978], which maintain and evolve a population of
parameter vectors. Specifically, they yield instances of _natural evolution
strategies_ [Wierstra et al., 2008, 2014, Yi et al., 2009], which represent
the population as a distribution over parameters and maximize its average
objective value using the score function estimator. For example, Gaussian
smoothing has been applied to single-agent reinforcement learning and obtains
competitive results on standard benchmarks [Salimans et al., 2017].
To estimate the smoothed gradient, various _stencils_ can be used. These have
the form $\tfrac{1}{\sigma N}\sum_{i=1}^{N}a_{i}u_{i}$ where
$u_{i}\sim\mu_{2}$ independently and $a_{i}$ is $f(x+\sigma u_{i})$,
$f(x+\sigma u_{i})-f(x)$, and $\tfrac{1}{2}(f(x+\sigma u_{i})-f(x-\sigma
u_{i}))$ for the single-point, forward-difference, and central-difference
stencils, respectively. The single-point stencil has a large variance that
diverges to infinity as $\sigma$ approaches 0, so the latter two are typically
used in practice [Berahas et al., 2022]. A related method is _simultaneous-
perturbation stochastic approximation (SPSA)_ [Spall, 1992], which perturbs
each coordinate with Rademacher variates $u\sim\mathcal{U}(\{-1,1\}^{d})$
and uses the central-difference stencil. A Taylor expansion of $f$ shows that
this is a good estimate of the true gradient when $\sigma$ is small. Spall
[1997] introduced a one-measurement variant of SPSA that uses the single-point
stencil. In our experiments, we use Gaussian smoothing with the central-
difference stencil.
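A minimal sketch of this estimator (Gaussian smoothing with the central-difference stencil), checked against a quadratic whose true gradient is known; the function and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_grad(f, x, sigma=1e-2, n=1):
    """Gaussian-smoothing gradient estimate with the central-difference
    stencil: (1/(sigma*n)) * sum_i (1/2)(f(x+sigma*u_i) - f(x-sigma*u_i)) u_i,
    where u_i ~ N(0, I_d)."""
    g = np.zeros_like(x)
    for _ in range(n):
        u = rng.normal(size=x.shape)
        g += 0.5 * (f(x + sigma * u) - f(x - sigma * u)) * u
    return g / (sigma * n)

f = lambda x: -np.sum(x ** 2)          # true gradient: -2x
x = np.array([1.0, -2.0, 3.0])
g_hat = smoothed_grad(f, x, n=5000)
print(g_hat)   # approaches [-2, 4, -6] as n grows
```

Each sample costs two function evaluations and yields an unbiased estimate of the smoothed gradient; averaging over $n$ samples reduces the variance.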
### 4.2 Equilibrium-finding dynamics
Several gradient-based algorithms exist for finding equilibria in continuous
games, as described in the appendix. Their convergence is analyzed in various
works, including Balduzzi et al. [2018], Letcher et al. [2019], Mertikopoulos
and Zhou [2019], Grnarova et al. [2019], Mazumdar et al. [2019], Hsieh et al.
[2021]. In the games we tested, simultaneous gradient ascent was sufficient to
obtain convergence to equilibrium and the other dynamics did not yield further
improvements. Mertikopoulos and Zhou [2019] analyze the conditions under which
simultaneous gradient ascent converges to Nash equilibria. They prove that, if
the game admits a pseudoconcave potential or if it is monotone, the players’
actions converge to Nash equilibrium, no matter the level of uncertainty
affecting the players’ feedback. Bichler et al. [2021] write that most
auctions in the literature assume symmetric bidders and symmetric equilibrium
bid functions [Krishna, 2002]. This symmetry creates a potential game, and
simultaneous gradient dynamics provably converge to pure local Nash
equilibria in finite-dimensional continuous potential games [Mazumdar et al.,
2020]. Thus in any symmetric and smooth auction game, symmetric gradient
ascent with appropriate (square-summable but not summable) step sizes almost
surely converges to a local ex-ante approximate Bayes-Nash equilibrium
[Bichler et al., 2021, Proposition 1]. These results apply to most of our
experiments, except for the asymmetric-information auction.
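To illustrate simultaneous gradient ascent combined with a smoothed gradient estimator, the following sketch runs both on an assumed toy two-player smooth game whose unique Nash equilibrium is at the origin (the game is our illustrative construction, not one from the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy smooth game with unique Nash equilibrium at (0, 0):
#   u1(x, y) = -x^2 + x*y,   u2(x, y) = -y^2 + x*y.
u1 = lambda x, y: -x**2 + x*y
u2 = lambda x, y: -y**2 + x*y

def pseudograd(f, s, sigma=1e-2):
    # One-sample central-difference Gaussian-smoothing estimate of f'(s).
    z = rng.normal()
    return (f(s + sigma*z) - f(s - sigma*z)) / (2*sigma) * z

x, y, lr = 2.0, -3.0, 0.05
for _ in range(2000):
    gx = pseudograd(lambda s: u1(s, y), x)   # each player perturbs only
    gy = pseudograd(lambda s: u2(x, s), y)   # its own strategy
    x, y = x + lr*gx, y + lr*gy              # simultaneous updates
print(x, y)   # both approach 0, the unique Nash equilibrium
```

Both gradients are computed against the current profile and applied at the same time (simultaneous rather than alternating updates), which is the dynamic analyzed in the convergence results cited above.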
### 4.3 Distributed training algorithm
Our algorithm for training strategy profiles can also be efficiently
distributed, as we now describe. The pseudocode is given as Algorithm 1. On
any iteration, there is a set of available workers $\mathcal{J}$. Each worker
is assigned the task of computing a pseudogradient for a particular player.
The vector $\{a_{j}\}_{j\in\mathcal{J}}$ contains the assignment of a player
for each worker. Each worker’s pseudorandom number generator (PRNG) is
initialized with the same fixed seed. On any iteration, one of the workers is
the _coordinator_. Initially, or when the current coordinator goes offline,
the workers choose a coordinator by running a leader election algorithm.
On each iteration, each worker evaluates the utility function (generally the
most expensive operation and bottleneck for training) twice to compute the
finite difference required for the pseudogradient. It then sends this computed
finite difference (a single scalar) to the coordinator. The coordinator then
sends the vector of these scalars to every worker, ensuring that all workers
see each other’s scalars. Thus the information that needs to be passed between
workers is minimal. This greatly reduces the cross-worker bandwidth required
by the algorithm compared to schemes that pass parameters or gradients between
workers, which can be very expensive for large models.
Algorithm 1 Distributed multiagent pseudogradient ascent
The following algorithm runs on each worker:

    $\mathcal{I}$ is the set of players; $u$ is the utility function
    initialize PRNG state with fixed seed
    $\mathbf{x}\leftarrow$ initial strategy profile
    for $i\in\mathcal{I}$ do
        $S_{i}\leftarrow\text{init}(\mathbf{x}_{i})$ $\triangleright$ initial state of optimizer $i$
    loop
        $\mathcal{J}\leftarrow$ set of available workers
        for $j\in\mathcal{J}$ do
            $a_{j}\leftarrow$ player $\in\mathcal{I}$ $\triangleright$ can be set dynamically
            $\varepsilon_{j}\leftarrow$ scale $\in\mathbb{R}_{>0}$ $\triangleright$ can be set dynamically
            $\mathbf{z}_{j}\sim N(\mathbf{0},\mathbf{I}_{\dim\mathbf{x}_{i}})$ where $i=a_{j}$
        $j\leftarrow$ own worker ID
        $\delta_{j}\leftarrow\frac{u(\mathbf{x}_{i}+\varepsilon_{j}\mathbf{z}_{j},\mathbf{x}_{-i})_{i}-u(\mathbf{x}_{i}-\varepsilon_{j}\mathbf{z}_{j},\mathbf{x}_{-i})_{i}}{2\varepsilon_{j}}$ where $i=a_{j}$
        send $\delta_{j}$ to coordinator
        receive $\delta$ from coordinator
        for $i\in\mathcal{I}$ do
            $\mathcal{K}\leftarrow\{j\in\mathcal{J}\mid a_{j}=i\}$ $\triangleright$ workers assigned $i$
            $\mathbf{v}_{i}\leftarrow\frac{1}{|\mathcal{K}|}\sum_{j\in\mathcal{K}}\delta_{j}\mathbf{z}_{j}$ $\triangleright$ $i$’s pseudogradient
            $S_{i},\mathbf{x}_{i}\leftarrow\text{step}(S_{i},\mathbf{v}_{i})$ $\triangleright$ step optimizer $i$
This massively parallelizes [Bichler et al., 2021, Algorithm 1] (“NPGA using
ES gradients”). Simultaneously, it generalizes [Salimans et al., 2017,
Algorithm 2] (“Parallelized Evolution Strategies”), which also uses shared
seeds, to the multiplayer setting, with separate gradient evaluations and
optimizers for each player. Furthermore, it allows for the possibility of
setting the worker-player assignments $a_{j}$ and perturbation noise scales
$\varepsilon_{j}$ dynamically over time, provided that this is done
consistently across workers (e.g., based on their common state variables).
Vanilla gradient descent, momentum gradient descent, optimistic gradient
descent, or other optimization algorithms can be used. More information about
the different optimization algorithms is in the appendix in the section on
alternative equilibrium-finding dynamics.
The set of available workers can also change dynamically over time. If a
worker leaves or joins the pool, the coordinator notifies all workers of its
ID so they can remove it from, or add it to, their $\mathcal{J}$ sets,
respectively. The new worker is brought up to speed by passing it the current
PRNG state, strategy profile parameters, and optimizer states (what state
information is needed depends on the algorithm used, for example, whether
momentum is used).
### 4.4 Policy representation
Another key design choice is how the players’ strategies are modeled. Bichler
et al. [2021] model strategies using neural networks [McCulloch and Pitts,
1943, Rosenblatt, 1958]. Each player’s policy network takes as input a
player’s observation and outputs an action. These policy networks were then
trained using _neural pseudogradient ascent_ , which uses Gaussian smoothing
and applies simultaneous gradient ascent. As the authors note, their policy
networks can only model pure strategies, since the output action is
deterministic with respect to the input observation.
We also model strategies using neural networks, with one crucial difference:
our policy network $f_{\theta}$ takes as input the player’s observation $o$
_together with noise_ $z$ from some fixed latent distribution, such as the
standard multivariate Gaussian distribution. Thus the output
$a=f_{\theta}(o,z)$ of the network is _random_ with respect to $o$. The
network can then learn to transform this randomness into a desired action
distribution (see Figure 1 in the appendix). This lets us model mixed
strategies, which is especially desirable in games that lack pure-strategy
equilibria. Some approaches in the literature use the output of a policy
network to parameterize some parametric distribution on the action space, such
as a Gaussian mixture model. However, taking the randomness _as input_ and
letting the neural network reshape it as desired allows us to model arbitrary
distributions more flexibly.
Figure 1 illustrates the high-level structure of a randomized policy network.
It takes as input an observation and random noise, concatenates them, passes
the result through a feedforward neural network, and outputs an action.
Figure 1: Structure of a randomized policy network.
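A minimal numpy sketch of such a randomized policy network (the class name is ours; it uses the He initialization and ELU activations described in Section 6):

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomizedPolicyNet:
    """Feedforward net mapping (observation, noise) -> action, with
    2 hidden ELU layers of 10 units and He-initialized weights."""
    def __init__(self, obs_dim, noise_dim, act_dim, hidden=10):
        dims = [obs_dim + noise_dim, hidden, hidden, act_dim]
        # He initialization: zero biases, N(0, 2/n_in) weights.
        self.W = [rng.normal(0.0, np.sqrt(2.0/m), size=(m, n))
                  for m, n in zip(dims[:-1], dims[1:])]
        self.b = [np.zeros(n) for n in dims[1:]]

    def __call__(self, obs, noise):
        h = np.concatenate([obs, noise])
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = h @ W + b
            h = np.where(h > 0, h, np.expm1(h))   # ELU activation
        return h @ self.W[-1] + self.b[-1]

net = RandomizedPolicyNet(obs_dim=2, noise_dim=4, act_dim=3)
obs = np.array([0.3, -0.7])
a1 = net(obs, rng.normal(size=4))
a2 = net(obs, rng.normal(size=4))
print(a1, a2)   # different actions for the same observation
```

Feeding different noise draws with the same observation yields different actions, which is precisely what lets the network represent a mixed strategy.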
Since the dimensionality of noise fed into a randomized policy network is an
important hyperparameter, we now review the literature that studies the
relation between input noise dimension and representational power in neural
network-based generative models.
The universal approximation theorem [Cybenko, 1989, Hornik, 1991, Leshno et
al., 1993, Pinkus, 1999] states that a single (sufficiently wide) hidden layer
suffices to approximate arbitrary continuous functions on a compact domain.
Furthermore, if $\mathcal{X},\mathcal{Y}\subseteq\mathbb{R}^{m}$, $X$ is an
$\mathcal{X}$-valued random variable, $f:\mathcal{X}\to\mathcal{Y}$, and
$f_{n}$ is a sequence of functions that converges pointwise to $f$, the
sequence of random variables $Y_{n}=f_{n}(X)$ converges in distribution to
$Y=f(X)$ [Huang et al., 2018, Lemma 4]. Gaussian input noise of _lower_
dimension than the output space does not suffice to approximate arbitrary
distributions on the output space. In particular, Sard’s theorem [Sard, 1942]
says the following: Let $f:\mathbb{R}^{n}\to\mathbb{R}^{m}$ be $k$ times
continuously differentiable, where $k\geq\max\\{n-m+1,1\\}$. Let
$X\subseteq\mathbb{R}^{n}$ be the set of critical points of $f$ (that is,
points where the Jacobian matrix of $f$ has rank less than $m$). Then the
image $f[X]$ has Lebesgue measure zero in $\mathbb{R}^{m}$. As a corollary, if
$n<m$, then all points in $\mathbb{R}^{n}$ are critical. Thus
$\operatorname{im}f$ has Lebesgue measure zero in $\mathbb{R}^{m}$. (From a
standard uniform random variable, one can extract two independent variables by
de-interleaving its binary expansion, but this operation is highly
discontinuous and pathological.)
Padala et al. [2021] study the effect of input noise dimension in GANs. They
show that the right dimension of input noise for optimal results depends on
the dataset and architecture used. Feng et al. [2021] study how noise
injection in GANs helps them overcome the “adversarial dimension trap”, which
arises when the generated manifold has an intrinsic dimension lower than that
of the data manifold: that is, when the latent space is low-dimensional
compared to the high-dimensional space of real image details. Citing Sard’s
theorem [Petersen, 2006], they advise against mapping low-dimensional feature
spaces into feature manifolds with higher intrinsic dimensions. Bailey and
Telgarsky [2018] investigate the ability of generative networks to convert
input noise distributions into other distributions. One question they study is
how easy it is to create a network that outputs _more_ dimensions of noise
than it receives. They derive bounds showing that an increase in dimension
requires a large, complicated network. For example, an approximation of the
uniform distribution on the unit square using the uniform distribution on the
unit interval could use an (almost) space-filling curve such as the iterated
tent map. (A space-filling curve is a continuous surjection from the unit
interval to the unit square or volume of higher dimension.) This function is
highly nonlinear and it can be shown that neural networks must be large to
approximate it well. Thus the dimensionality of input noise is essential in
practice. As we discuss later in this paper, our experiments support this
conclusion for the game context. Bailey and Telgarsky [2018] also show that,
even when the input dimension is greater than the output dimension, increased
input dimension can still sometimes improve accuracy. For example, to
approximate a univariate Gaussian distribution with a high dimensional uniform
distribution, one can sum the inputs. By the Berry-Esseen theorem [Berry,
1941], a refinement of the central limit theorem, the output is close to a
Gaussian distribution. This uses no nonlinearity, but simply takes advantage
of the fact that projecting a hypercube onto a line results in an
approximately Gaussian distribution. In our experiments, we show an example of
excess dimensionality improving performance.
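The sum-of-uniforms construction can be checked numerically. The following sketch standardizes the projection of a $d$-dimensional uniform hypercube onto a line and compares its moments to those of a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Project a d-dimensional uniform hypercube onto a line by summing the
# coordinates, then standardize (the sum has mean d/2 and variance d/12).
d, n = 16, 100_000
u = rng.uniform(size=(n, d))
y = (u.sum(axis=1) - d/2) / np.sqrt(d/12)

print(y.mean(), y.var())    # near 0 and 1
# A Gaussian has excess kurtosis 0; the standardized sum has -1.2/d.
print(np.mean(y**4) - 3.0)
```

The excess kurtosis shrinks as $1/d$, so the projection approaches a Gaussian as the input dimension grows, with no nonlinearity required.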
## 5 Games used in experiments
We now describe the games used as benchmarks.
### 5.1 Colonel Blotto games
The Colonel Blotto game is a two-player zero-sum game in which two players
distribute resources over several battlefields. A battlefield is won by
whoever devotes the most resources to it. A player’s payoff is the number of
battlefields they win. This game was introduced by Borel [1953]. It
illustrates fundamental strategic considerations that arise in conflicts or
competition involving resource allocation, such as political campaigns,
research and development competition (where innovation may involve obtaining a
collection of interrelated patents), national security and military and
systems defense [Kovenock and Roberson, 2021].
Gross and Wagner [1950] analyzed a continuous variant in which both players
have continuous, possibly unequal budgets. They obtained exact solutions for
various special cases, including all 2-battlefield cases and all 3-battlefield
cases with equal budgets. Washburn [2013] generalized to the case where
battlefield values are unequal across battlefields. Kovenock and Roberson
[2021] generalized to the case where battlefield values are also unequal
across players. Adamo and Matros [2009] studied a variant in which players
have incomplete information about the other player’s resource budgets.
Kovenock and Roberson [2011] studied a model where the players are subject to
incomplete information about the battlefield valuations. Boix-Adserà et al.
[2021] analyzed the natural multiplayer generalization of the continuous
Colonel Blotto game.
We describe the case with heterogeneous budgets, battlefield values that
differ across both players and battlefields, and several players. Formally,
suppose there are $J$ battlefields. Let $b_{i}$ be the budget of player $i$.
Let $v_{ij}$ be the value to player $i$ of battlefield $j$. Player $i$’s
action space is the standard $J$-simplex dilated by their budget:
$A_{i}=\{a_{ij}:\mathbb{R}\mid a_{ij}\geq
0,\textstyle\sum_{j}a_{ij}=b_{i}\}$. Player $i$’s reward function is
$r_{i}(a)=\textstyle\sum_{j}v_{ij}w_{ij}(a)$ where $w_{ij}$ is the probability
that $i$ wins $j$. Ties are broken uniformly at random.
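A sketch of this reward function, with ties handled in expectation rather than by explicit randomization (the function name is ours):

```python
import numpy as np

def blotto_rewards(a, v):
    """Expected rewards for a Colonel Blotto action profile.
    a: (players, battlefields) allocations; v: (players, battlefields) values.
    Each battlefield goes to the highest allocation; ties are broken
    uniformly at random, handled here in expectation."""
    a, v = np.asarray(a, float), np.asarray(v, float)
    w = (a == a.max(axis=0)).astype(float)   # winners per battlefield
    w /= w.sum(axis=0)                       # split ties uniformly
    return (v * w).sum(axis=1)

a = [[3.0, 1.0, 1.0],     # player 0's allocation of a budget of 5
     [1.0, 1.0, 3.0]]     # player 1's allocation
v = [[1.0, 1.0, 1.0],     # all battlefields worth 1 to both players
     [1.0, 1.0, 1.0]]
print(blotto_rewards(a, v))   # [1.5, 1.5]: one win each, tie split on field 1
```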
### 5.2 Single-item auctions
An auction is a mechanism by which _items_ are sold to _bidders_. Auctions
play a central role in the study of markets and are used in a wide range of
contexts. In a single-item sealed bid auction, bidders simultaneously submit
bids and the highest bidder wins the item. Let $w_{i}(a)$ be the probability
$i$ wins given action profile $a$, where ties are broken uniformly at random.
Let $v_{i}(\omega)$ be the item’s value for the $i$th player given state
$\omega$. In a $k$th-price _winner-pay_ auction, the winner pays the $k$th
highest bid: $r_{i}(\omega,a)=w_{i}(a)(v_{i}(\omega)-a_{(k)})$, where
$a_{(k)}$ is the $k$th highest bid. In an _all-pay_ auction, each player
always pays their bid: $r_{i}(\omega,a)=w_{i}(a)v_{i}(\omega)-a_{i}$. This
auction is widely used to model lobbying for rents in regulated and trade
protected industries, technological competition and R&D races, political
campaigns, job promotions, and other contests [Baye et al., 1996]. The all-pay
complete-information auction lacks pure-strategy equilibria [Baye et al.,
1996]. The 2-player 1st-price winner-pay asymmetric-information auction also
lacks pure-strategy equilibria [Krishna, 2002, section 8.3]. In particular,
the second player must randomize. More details about each type of auction can
be found in the appendix.
### 5.3 Multi-item auctions
Multi-item auctions are of great importance in practice, for example in
strategic sourcing [Sandholm, 2013] and radio spectrum allocation [Milgrom and
Segal, 2014, 2020]. However, deriving equilibrium bidding strategies for
multi-item auctions is notoriously difficult. A rare notable instance where it
has been derived is the _chopstick auction_ , which was introduced and
analyzed by Szentes and Rosenthal [2003b, a]. In this auction, 3 chopsticks
are sold simultaneously in separate first-price sealed-bid auctions. There are
2 bidders, and it is common knowledge that a pair of chopsticks is worth $1, a
single chopstick is worth nothing by itself, and 3 chopsticks are worth the
same as 2. In short, one needs two chopsticks to eat. Pure strategies are
triples of non-negative real numbers (bids). This game has an interesting
equilibrium: let the tetrahedron $T$ be defined as the convex hull of the four
points $(\tfrac{1}{2},\tfrac{1}{2},0)$, $(\tfrac{1}{2},0,\tfrac{1}{2})$,
$(0,\tfrac{1}{2},\tfrac{1}{2})$, and $(0,0,0)$. Then the uniform probability
measure on the 2-dimensional surface of $T$ generates a symmetric equilibrium.
(Furthermore, all points inside the tetrahedron are pure best responses to
this equilibrium mixture.) We benchmark on the chopsticks auction since it is
a rare case of a multi-item auction where the solution can be checked because
the equilibrium is known. It is also a canonical case of simultaneous separate
auctions under combinatorial preferences.
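Since $T$ is in fact a regular tetrahedron (every edge has length $\sqrt{1/2}$), its four faces have equal area, so sampling from the equilibrium reduces to picking a face uniformly and then sampling a uniform point in that triangle. A sketch (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Vertices of the tetrahedron T supporting the symmetric equilibrium.
V = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0]])
FACES = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

def sample_equilibrium_bid():
    # T is regular, so all four faces have equal area: pick one uniformly.
    a, b, c = V[list(FACES[rng.integers(4)])]
    # Uniform point in a triangle via the square-root barycentric trick.
    r1, r2 = rng.uniform(), rng.uniform()
    s = np.sqrt(r1)
    return (1 - s) * a + s * (1 - r2) * b + s * r2 * c

bids = np.array([sample_equilibrium_bid() for _ in range(10_000)])
print(bids.min(), bids.max())     # every per-chopstick bid lies in [0, 1/2]
print(bids.sum(axis=1).max())     # total spend never exceeds 1
```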
### 5.4 Visibility game
Lotker et al. [2008] introduced the _visibility game_ , a noncooperative,
complete-information strategic game. In this game, each player $i$ chooses a
point $x_{i}\in[0,1]$. Their payoff is the distance to the next higher point,
or to $1$ if $x_{i}$ is the highest. This game models a situation where
players seek to maximize their _visibility time_ , and is a variant of the
family of “timing games” [Fudenberg and Tirole, 1991]. It resembles the “war
of attrition” game formalized by Smith [1974]. In this game, both players are
engaged in a costly competition and they need to choose a time to concede.
More formally, the first player to concede (called “leader”) gets a smaller
payoff than the other player (called “follower”). Furthermore, the payoff to
the leader strictly decreases as time progresses. That is, conceding early is
better than conceding late.
Lotker et al. [2008] prove that the $n$-player visibility game has no pure equilibrium, but
has a unique mixed equilibrium, which is symmetric. In the 2-player case, up
to a set of measure zero, there is a unique equilibrium whose strategies have
probability densities $p(x)=1/(1-x)$ when $0\leq x\leq 1-1/\mathrm{e}$ and 0
otherwise. Each player’s expected payoff is $1/\mathrm{e}$.
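This closed-form equilibrium makes a handy numerical sanity check. The density $p(x)=1/(1-x)$ has CDF $F(x)=-\ln(1-x)$, which equals 1 at $x=1-1/\mathrm{e}$, so inverse-CDF sampling gives $x=1-e^{-u}$ for $u\sim\mathcal{U}[0,1]$. A small simulation (illustrative code, not from the paper) confirms the expected payoff of $1/\mathrm{e}$:

```python
import numpy as np

def sample_equilibrium(n, rng):
    """Inverse-CDF sampling of p(x) = 1/(1-x) on [0, 1 - 1/e]."""
    return 1.0 - np.exp(-rng.random(n))

def payoffs(x1, x2):
    """2-player visibility payoffs: distance to the next higher point,
    or distance to 1 for the highest point."""
    p1 = np.where(x1 >= x2, 1.0 - x1, x2 - x1)
    p2 = np.where(x2 > x1, 1.0 - x2, x1 - x2)
    return p1, p2

rng = np.random.default_rng(0)
x1, x2 = sample_equilibrium(200_000, rng), sample_equilibrium(200_000, rng)
p1, p2 = payoffs(x1, x2)
# Both p1.mean() and p2.mean() should be close to 1/e = 0.3679...
```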
## 6 Hyperparameters used in the experiments
To initialize our networks, we use He initialization [He et al., 2015], which
is widely used for feedforward networks with ReLU-like activation functions.
It initializes bias vectors to zero and weight matrices with normally-
distributed entries scaled by $\sqrt{2/n}$, where $n$ is the layer’s input
dimension. We use the ELU activation function [Clevert et al., 2016] for
hidden layers. Like Bichler et al. [2021], we use 2 hidden layers with 10
neurons each.
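As a concrete illustration of this setup, here is a minimal numpy sketch of a randomized policy network with He-initialized weights and ELU hidden layers. The class name and interface are hypothetical; the actual implementation would be a trainable network in a deep-learning framework:

```python
import numpy as np

def he_init(n_in, n_out, rng):
    """He initialization: zero biases, weights ~ N(0, 2 / n_in)."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out)), np.zeros(n_out)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

class RandomizedPolicyNet:
    """Sketch of a randomized policy network mapping (observation, noise)
    to an action, with 2 hidden layers of 10 ELU units as in the text."""
    def __init__(self, obs_dim, noise_dim, action_dim, rng, hidden=10):
        dims = [obs_dim + noise_dim, hidden, hidden, action_dim]
        self.layers = [he_init(a, b, rng) for a, b in zip(dims, dims[1:])]

    def act(self, obs, noise):
        h = np.concatenate([np.atleast_1d(obs), np.atleast_1d(noise)])
        for i, (W, b) in enumerate(self.layers):
            h = h @ W + b
            if i < len(self.layers) - 1:  # ELU on hidden layers only
                h = elu(h)
        return h

rng = np.random.default_rng(0)
net = RandomizedPolicyNet(obs_dim=1, noise_dim=2, action_dim=1, rng=rng)
a = net.act(obs=0.7, noise=rng.standard_normal(2))
```

Because the noise input is resampled on every call, repeated calls to `act` with the same observation trace out the player's mixed-strategy action distribution.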
We illustrate the performance of our method by plotting NashConv against the
number of epochs, where each epoch consists of $10^{6}$ optimization steps.
Each hyperparameter setting is labeled in the legend and shown in a different
color. Each individual setting is run 20 times with different random seeds.
Solid lines indicate means across trials. Bands indicate a confidence interval
for this mean with a confidence level of 0.95. These confidence intervals are
computed using bootstrapping [Efron, 1979], specifically the bias-corrected
and accelerated (BCa) method [Efron, 1987].
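For reference, the BCa interval is built from bootstrap replicates, a bias-correction term $z_0$, and a jackknife-based acceleration term $a$. The following is a minimal self-contained sketch (not the code used for the figures; in practice a library routine such as `scipy.stats.bootstrap` with `method='BCa'` would be used):

```python
import numpy as np
from statistics import NormalDist

def bca_ci(data, stat=np.mean, n_boot=2000, level=0.95, seed=0):
    """Bias-corrected and accelerated (BCa) bootstrap CI [Efron, 1987]."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    theta = stat(data)
    boots = np.array([stat(rng.choice(data, n)) for _ in range(n_boot)])
    nd = NormalDist()
    # Bias correction: fraction of replicates below the point estimate.
    prop = np.clip(np.mean(boots < theta), 1 / n_boot, 1 - 1 / n_boot)
    z0 = nd.inv_cdf(prop)
    # Acceleration from jackknife leave-one-out estimates.
    jack = np.array([stat(np.delete(data, i)) for i in range(n)])
    d = jack.mean() - jack
    a = (d ** 3).sum() / (6.0 * (d ** 2).sum() ** 1.5)
    alpha = (1 - level) / 2
    ends = []
    for za in (nd.inv_cdf(alpha), nd.inv_cdf(1 - alpha)):
        adj = nd.cdf(z0 + (z0 + za) / (1 - a * (z0 + za)))
        ends.append(np.quantile(boots, adj))
    return tuple(ends)

rng = np.random.default_rng(1)
sample = rng.normal(0.5, 1.0, size=100)
lo, hi = bca_ci(sample)
```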
For the gradient estimator, we use the Gaussian distribution with scale
$\sigma=10^{-2}$, $N=1$ samples, and the central-difference stencil (so 2
evaluations per step). For the optimizer, we use standard gradient descent
with a learning rate of $10^{-6}$. To estimate NashConv (Equation 2), we use
100 observation samples and 300 state samples (given each observation). We use
a 100-point discretization of the action space for the auctions and visibility
game. For Colonel Blotto games, we use a 231-point discretization of the
action space. It is obtained by enumerating all partitions of the integer 20
into 3 parts and renormalizing them to sum to 1.
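Putting these estimator settings together, a minimal sketch of the central-difference Gaussian-smoothing gradient estimator looks as follows (illustrative; `f` stands in for a player's expected payoff as a function of their network parameters):

```python
import numpy as np

def estimate_gradient(f, theta, sigma=1e-2, n_samples=1, rng=None):
    """Zeroth-order gradient estimate of f at theta via Gaussian smoothing
    with a central-difference stencil (two f-evaluations per sample)."""
    rng = rng if rng is not None else np.random.default_rng()
    g = np.zeros_like(theta, dtype=float)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        g += (f(theta + sigma * u) - f(theta - sigma * u)) / (2.0 * sigma) * u
    return g / n_samples

# Sanity check on a smooth function whose gradient is 2 * theta.
f = lambda x: float(x @ x)
theta = np.array([1.0, -2.0, 0.5])
g = estimate_gradient(f, theta, n_samples=20000, rng=np.random.default_rng(0))
```

With many samples the estimate concentrates around the true gradient; the paper's setting of $N=1$ trades accuracy per step for cheap, frequent updates.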
## 7 Experimental results
We now describe our experimental results for each game. Figures illustrating
analytically-derived equilibria in cases where they are known can be found in
the appendix.
### 7.1 Colonel Blotto games
Actions in the continuous Colonel Blotto game are points on the standard
simplex. Thus we use a softmax activation function for the output layer of the
randomized policy network. Figure 2 illustrates the performance of our method
on the continuous Colonel Blotto game with 2 players and 3 battlefields. Since
the game has no pure-strategy Nash equilibrium, deterministic strategies
perform badly, as expected. 1-dimensional noise results in slightly better
performance, but does not let players randomize well enough to approximate the
equilibrium. On the other hand, noise of dimension 2 and higher is sufficient
for good performance. The very slight increase in exploitability after $10^{8}$
steps is most likely due to fluctuations introduced by the many sources of
stochasticity in the training process, including the game and gradient
estimates, as well as the fact that we are training a multi-layer neural
network. Even in the supervised learning setting, loss does not always
decrease monotonically.
Figure 2: Continuous Colonel Blotto game.
Figure 3 illustrates the strategies at different stages of training for one
trial that uses 2-dimensional noise. Each scatter plot is made by sampling
$10^{5}$ actions from each player’s strategy. The strategies converge to the
analytical solution derived by Gross and Wagner [1950]. More details about
this solution can be found in the appendix.
Figure 3: Strategies at epochs 0, 60, 120, and 180 (left to right). Each
histogram uses $10^{4}$ action samples.
Figure 4 also illustrates performance on the continuous Colonel Blotto game
with 2 players and 3 battlefields. This time, however, the budgets for each
player are sampled from the standard uniform distribution and revealed to both
players. Thus each player must adjust their action distribution accordingly.
To our knowledge, prior approaches [Adam et al., 2021, Kroupa and Votroubek,
2021, Ganzfried, 2021] do not learn strategies that can generalize across
different parameters (like budgets and valuations), which requires the use of
function approximators such as neural networks.
Figure 4: Continuous Colonel Blotto game with random budgets.
### 7.2 Single-item and multi-item auctions
We now turn to the auction setting. Unlike Bichler et al. [2021], we use an
absolute value function in the output layer rather than a ReLU function. The
reason is that, as we found in our experiments, ReLU can easily cause
degenerate initializations: if the randomly-initialized neural network happens
to map all of the unit interval (the observation space) to negative bids, no
gradient signal can be received and the network is stuck. By default, auctions
are 2-player, 1st-price, and winner-pay unless otherwise noted.
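The degenerate-initialization problem can be seen in a toy example (illustrative, not the authors' code): if a ReLU output maps every pre-activation bid to a negative value, finite-difference perturbations produce no change in the output, whereas an absolute-value output still passes a signal:

```python
import numpy as np

def grad_through(act, pre_activation, sigma=1e-2, n=1000, seed=0):
    """Mean central-difference sensitivity of the bid to its pre-activation."""
    u = np.random.default_rng(seed).standard_normal(n)
    diffs = act(pre_activation + sigma * u) - act(pre_activation - sigma * u)
    return np.mean(np.abs(diffs / (2.0 * sigma)))

relu = lambda x: np.maximum(x, 0.0)
pre = -0.5  # an unlucky initialization: every pre-activation bid is negative

relu_signal = grad_through(relu, pre)   # 0.0: all bids clamp to 0, network is stuck
abs_signal = grad_through(np.abs, pre)  # approx. 0.8: perturbations still move the bid
```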
Figures 5 and 6 illustrate performance and strategies for the asymmetric-information
auction. Figures 7 and 8 illustrate performance and strategies
for the complete-information all-pay auction. Recall that these auctions have
no pure-strategy equilibria. Thus, as expected, deterministic strategies
perform poorly. As with Colonel Blotto games, our experiments in these auction
settings show that the ability to flexibly model mixed strategies is crucial
for computing approximate Nash equilibria in certain auction settings.
Figure 5: The asymmetric-information auction.
Figure 6: Strategies at epochs 0, 30, 60, and 90 (left to right). X and Y axes
denote observation and bid, respectively. Each histogram uses $10^{4}$ action
samples per observation. Figure 7: The all-pay complete-information auction.
Figure 8: Strategies at epochs 0, 30, 60, and 90 (left to right). X and Y axes
denote observation and bid, respectively. Each histogram uses $10^{4}$ action
samples per observation.
Figures 9 and 10 illustrate performance and strategies for the chopstick
auction. Here we encounter an interesting phenomenon: recall that this game
has a symmetric equilibrium generated by the uniform measure on the surface of
a tetrahedron. Although the tetrahedron itself is 3-dimensional, its surface
is only 2-dimensional. Thus one may wonder whether 2-dimensional noise is
sufficient: that is, whether the network can learn to project this lower-
dimensional manifold out into the third dimension while “folding” it in the
way required to obtain the surface of the tetrahedron. Through our
experiments, we observe that 2-dimensional noise indeed suffices to
(approximately) match the performance of higher-dimensional noise. Thus the
_intrinsic dimension_ of the equilibrium action distribution (as opposed to
the extrinsic dimension of the ambient space it is embedded in) seems to be
the decisive factor.
Figure 9: The chopstick auction.
Figure 10: Illustration of player strategies, based on $10^{5}$ action
samples. Left to right: Players 1 and 2. Top to bottom: Epochs 30, 60, and 90.
### 7.3 Visibility game
Figure 11 illustrates performance on the 2-player visibility game. Figure 12
illustrates strategies during training for a trial with 1-dimensional noise.
The players’ distributions converge to the expected distribution (there is a
distinctive cutoff at $1-1/\mathrm{e}\approx 0.632$). As expected,
0-dimensional noise, which yields deterministic strategies, performs very
poorly. More interestingly, there is a noticeable gap in performance between
1-dimensional noise, which matches the dimensionality of the action space, and
higher-dimensional noise. That is, using noise of higher dimension than the
action space accelerates convergence in this game.
Figure 11: Performance on the 2-player visibility game.
Figure 12: Strategies at epochs 0, 100, 200, and 300 (left to right). X and Y
axes denote observation and bid, respectively. Each histogram uses $10^{5}$
action samples.
## 8 Conclusions and future research
We presented what is, to our knowledge, the first method that solves general
continuous-action games with unrestricted mixed strategies and without any
gradient information. We accomplished this using zeroth-order optimization techniques
that combine smoothed gradient estimators with equilibrium-finding gradient
dynamics. We modeled players’ strategies using _randomized policy networks_
that take noise as input and can flexibly represent arbitrary observation-
dependent, continuous-action distributions. Being able to model such mixed
strategies is crucial for tackling continuous-action games that can lack pure-
strategy equilibria.
We evaluated our method on various games, including continuous Colonel Blotto
games, single-item and multi-item auctions, and a visibility game. The
experiments showed that our method can quickly compute high-quality
approximate equilibria for these games. Furthermore, they show that the
dimensionality of the input noise is crucial for representing and converging
to equilibria. In particular, noise of too low dimension (or no noise, which
yields a deterministic policy) results in failure to converge. Randomized
policy networks flexibly model observation-dependent action distributions.
Thus, in contrast to prior work, we can flexibly model mixed strategies and
directly optimize them in a “black-box” game with access only to payoffs.
This work opens many directions for tackling even more complex multiagent
environments. In multi-step environments, the current observation may not
contain all information about past observations and actions that is relevant
to choosing an action. To give agents memory, one can use recurrent networks
[Rumelhart et al., 1986, Werbos, 1988] such as LSTMs [Hochreiter and
Schmidhuber, 1997] or GRUs [Cho et al., 2014]. In that case, the policy
network would receive as input an observation, source of randomness, and
memory state and output an action and new memory state. One can also consider
games with more complex observation and action spaces, including high-
dimensional arrays like images. Convolutional networks [LeCun et al., 1998,
1989] can be used to process such inputs. Very complex environments, including
real-time strategy games like StarCraft II, may require more sophisticated
neural architectures [Vinyals et al., 2019] such as pointer networks [Vinyals
et al., 2015], transformers [Vaswani et al., 2017], and scatter connections
that integrate spatial and non-spatial information.
## 9 Acknowledgements
This material is based on work supported by the National Science Foundation
under grants IIS-1901403 and CCF-1733556 and by the ARO under award
W911NF2210266.
## References
* Adam et al. [2021] Lukáš Adam, Rostislav Horčík, Tomáš Kasl, and Tomáš Kroupa. Double oracle algorithm for computing equilibria in continuous games. _AAAI Conference on Artificial Intelligence (AAAI)_ , 35:5070–5077, 2021.
* Adamo and Matros [2009] Tim Adamo and Alexander Matros. A Blotto game with incomplete information. _Economics Letters_ , 105:100–102, 2009.
* Bailey and Telgarsky [2018] Bolton Bailey and Matus Telgarsky. Size-noise tradeoffs in generative networks. In _Conference on Neural Information Processing Systems (NeurIPS)_ , pages 6490–6500, 2018.
* Balduzzi et al. [2018] David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In _International Conference on Machine Learning (ICML)_ , volume 80, pages 354–363, 2018.
* Baye et al. [1996] Michael R. Baye, Dan Kovenock, and Casper G. de Vries. The all-pay auction with complete information. _Economic Theory_ , 8:291–305, 1996.
* Berahas et al. [2022] Albert S. Berahas, Liyuan Cao, Krzysztof Choromanski, and Katya Scheinberg. A theoretical and empirical comparison of gradient approximations in derivative-free optimization. _Foundations of Computational Mathematics_ , 22:507–560, 2022.
* Berger [2007] Ulrich Berger. Brown’s original fictitious play. _Journal of Economic Theory_ , 135:572–578, 2007.
* Berner et al. [2019] Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. _arXiv preprint arXiv:1912.06680_ , 2019.
* Berry [1941] Andrew C. Berry. The accuracy of the Gaussian approximation to the sum of independent variates. _Transactions of the American Mathematical Society_ , 49:122–136, 1941.
* Bichler and Kohring [2022] Martin Bichler, Nils Kohring, and Stefan Heidekrüger. Learning equilibria in asymmetric auction games. To appear in _INFORMS Journal on Computing_, 2022.
* Bichler et al. [2021] Martin Bichler, Maximilian Fichtl, Stefan Heidekrüger, Nils Kohring, and Paul Sutterer. Learning equilibria in symmetric auction games using artificial neural networks. _Nature Machine Intelligence_ , 3:687–695, 2021.
* Boix-Adserà et al. [2021] Enric Boix-Adserà, Benjamin L. Edelman, and Siddhartha Jayanti. The multiplayer Colonel Blotto game. _Games and Economic Behavior_ , 129:15–31, 2021.
* Borel [1938] Émile Borel. _Traité du calcul des probabilités et ses applications_ , volume IV of _Applications aux jeux des hazard_. Gauthier-Villars, Paris, 1938.
* Borel [1953] Émile Borel. The theory of play and integral equations with skew symmetric kernels. _Econometrica_ , 21:97–100, 1953.
* Brown [1951] George W. Brown. Iterative solutions of games by fictitious play. In Tjalling C. Koopmans, editor, _Activity Analysis of Production and Allocation_ , pages 374–376. John Wiley & Sons, 1951.
* Chen and Ankenman [2006] Bill Chen and Jerrod Ankenman. _The Mathematics of Poker_. ConJelCo, 2006.
* Cho et al. [2014] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: encoder-decoder approaches, 2014.
* Clevert et al. [2016] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In _International Conference on Learning Representations (ICLR)_ , 2016.
* Cybenko [1989] G. Cybenko. Approximation by superpositions of a sigmoidal function. _Mathematics of Control, Signals and Systems_ , 2:303–314, 1989.
* Dasgupta and Maskin [1986] P. Dasgupta and Eric Maskin. The existence of equilibrium in discontinuous economic games. _Review of Economic Studies_ , 53:1–26, 1986.
* Daskalakis et al. [2018] Constantinos Daskalakis, Andrew Ilyas, Vasilis Syrgkanis, and Haoyang Zeng. Training GANs with optimism. In _International Conference on Learning Representations (ICLR)_ , 2018.
* Duchi et al. [2012] John C Duchi, Peter L Bartlett, and Martin J Wainwright. Randomized smoothing for stochastic optimization. _SIAM Journal on Optimization_ , 22(2):674–701, 2012.
* Duchi et al. [2015] John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: the power of two function evaluations. _IEEE Transactions on Information Theory_ , 61:2788–2806, 2015.
* Efron [1979] Bradley Efron. Bootstrap methods: another look at the jackknife. _The Annals of Statistics_ , 7:1–26, 1979.
* Efron [1987] Bradley Efron. Better bootstrap confidence intervals. _Journal of the American Statistical Association_ , 82:171–185, 1987.
* Feng et al. [2021] Ruili Feng, Deli Zhao, and Zheng-Jun Zha. Understanding noise injection in GANs. In _International Conference on Machine Learning (ICML)_ , volume 139, pages 3284–3293, 2021.
* Fichtl et al. [2022] Maximilian Fichtl, Matthias Oberlechner, and Martin Bichler. Computing Bayes Nash equilibrium strategies in auction games via simultaneous online dual averaging, 2022.
* Fudenberg and Tirole [1991] Drew Fudenberg and Jean Tirole. _Game theory_. MIT Press, Cambridge, MA, 1991. ISBN 0-262-06141-4.
* Ganzfried [2021] Sam Ganzfried. Algorithm for computing approximate Nash equilibrium in continuous games with application to continuous Blotto. _Games_ , 12:47, 2021.
* Ganzfried and Sandholm [2010a] Sam Ganzfried and Tuomas Sandholm. Computing equilibria by incorporating qualitative models. In _International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)_ , 2010a.
* Ganzfried and Sandholm [2010b] Sam Ganzfried and Tuomas Sandholm. Computing equilibria by incorporating qualitative models. Technical report, Carnegie Mellon University, 2010b.
* Gemp and Mahadevan [2018] Ian Gemp and Sridhar Mahadevan. Global convergence to the equilibrium of GANs using variational inequalities, 2018.
* Gemp et al. [2022] Ian Gemp, Rahul Savani, Marc Lanctot, Yoram Bachrach, Thomas Anthony, Richard Everett, Andrea Tacchetti, Tom Eccles, and János Kramár. Sample-based approximation of Nash in large many-player games via gradient descent. In _Autonomous Agents and Multi-Agent Systems_ , pages 507–515, 2022.
* Ghosh and Kundu [2019] Papiya Ghosh and Rajendra P. Kundu. Best-shot network games with continuous action space. _Research in Economics_ , 73:225–234, 2019.
* Glicksberg [1952] I. L. Glicksberg. A further generalization of the Kakutani fixed point theorem, with application to Nash equilibrium points. _Proceedings of the American Mathematical Society_ , 3(1):170–174, 1952.
* Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In _Conference on Neural Information Processing Systems (NeurIPS)_ , volume 27, 2014.
* Grnarova et al. [2019] Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanael Perraudin, Ian Goodfellow, Thomas Hofmann, and Andreas Krause. A domain agnostic measure for monitoring and evaluating GANs. In _Conference on Neural Information Processing Systems (NeurIPS)_ , volume 32, 2019.
* Grnarova et al. [2021] Paulina Grnarova, Yannic Kilcher, Kfir Y. Levy, Aurelien Lucchi, and Thomas Hofmann. Generative minimization networks: training GANs without competition, 2021.
* Gross and Wagner [1950] Oliver Alfred Gross and R. A. Wagner. _A continuous Colonel Blotto game_. RAND Corporation, 1950.
* He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2015.
* Heinrich et al. [2015] Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In _International Conference on Machine Learning (ICML)_ , volume 37, pages 805–813, 2015.
* Hochreiter and Schmidhuber [1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. _Neural Computation_ , 9:1735–1780, 1997.
* Hornik [1991] Kurt Hornik. Approximation capabilities of multilayer feedforward networks. _Neural Networks_ , 4:251–257, 1991.
* Hsieh et al. [2021] Ya-Ping Hsieh, Panayotis Mertikopoulos, and Volkan Cevher. The limits of min-max optimization algorithms: convergence to spurious non-critical sets. In _International Conference on Machine Learning (ICML)_ , volume 139, pages 4337–4348, 2021.
* Huang et al. [2018] Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron Courville. Neural autoregressive flows. In _International Conference on Machine Learning (ICML)_ , volume 80, pages 2078–2087, 2018.
* Kagel and Levin [1993] John H. Kagel and Dan Levin. Independent private value auctions: bidder behaviour in first-, second- and third-price auctions with varying numbers of bidders. _The Economic Journal_ , 103:868–879, 1993.
* Kamra et al. [2017] Nitin Kamra, Fei Fang, Debarun Kar, Yan Liu, and Milind Tambe. Handling continuous space security games with neural networks. In _Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)_ , 2017.
* Kamra et al. [2018] Nitin Kamra, Umang Gupta, Fei Fang, Yan Liu, and Milind Tambe. Policy learning for continuous space security games using neural networks. _AAAI Conference on Artificial Intelligence (AAAI)_ , 32, 2018.
* Kamra et al. [2019] Nitin Kamra, Umang Gupta, Kai Wang, Fei Fang, Yan Liu, and Milind Tambe. Deepfp for finding Nash equilibrium in continuous action spaces. In _Decision and Game Theory for Security_ , pages 238–258, 2019.
* Korpelevich [1976] G. M. Korpelevich. The extragradient method for finding saddle points and other problems. _Ekonomika i Matematicheskie Metody_ , 12:747–756, 1976.
* Kovenock and Roberson [2011] Dan Kovenock and Brian Roberson. A Blotto game with multi-dimensional incomplete information. _Economics Letters_ , 113:273–275, 2011.
* Kovenock and Roberson [2021] Dan Kovenock and Brian Roberson. Generalizations of the General Lotto and Colonel Blotto games. _Economic Theory_ , 71:997–1032, 2021.
* Krishna [2002] Vijay Krishna. _Auction theory_. Academic Press, 2002.
* Kroer and Sandholm [2015] Christian Kroer and Tuomas Sandholm. Discretization of continuous action spaces in extensive-form games. In _International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)_ , 2015.
* Kroupa and Votroubek [2021] T. Kroupa and T. Votroubek. Multiple oracle algorithm for general-sum continuous games, 2021.
* LeCun et al. [1989] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. _Neural Computation_ , 1:541–551, 1989.
* LeCun et al. [1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. _Proceedings of the IEEE_ , 86:2278–2324, 1998.
* Leshno et al. [1993] Moshe Leshno, Vladimir Ya. Lin, Allan Pinkus, and Shimon Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. _Neural Networks_ , 6:861–867, 1993.
* Letcher et al. [2019] Alistair Letcher, David Balduzzi, Sébastien Racanière, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. Differentiable game mechanics. _Journal of Machine Learning Research_ , 20:3032–3071, 2019.
* Li and Wellman [2021] Zun Li and Michael P. Wellman. Evolution strategies for approximate solution of Bayesian games. _AAAI Conference on Artificial Intelligence (AAAI)_ , 35:5531–5540, 2021.
* Lisý and Bowling [2017] Viliam Lisý and Michael Bowling. Eqilibrium approximation quality of current no-limit poker bots. In _AAAI Computer Poker Workshop_ , 2017.
* Lockhart et al. [2019] Edward Lockhart, Marc Lanctot, Julien Pérolat, Jean-Baptiste Lespiau, Dustin Morrill, Finbarr Timbers, and Karl Tuyls. Computing approximate equilibria in sequential adversarial games by exploitability descent. In _Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)_ , pages 464–470, 2019.
* Lotker et al. [2008] Zvi Lotker, Boaz Patt-Shamir, and Mark R. Tuttle. A game of timing and visibility. _Games and Economic Behavior_ , 62:643–660, 2008.
* Marchesi et al. [2020] Alberto Marchesi, Francesco Trovò, and Nicola Gatti. Learning probably approximately correct maximin strategies in simulation-based games with infinite strategy spaces. In _Autonomous Agents and Multi-Agent Systems_ , pages 834–842, 2020\.
* Mazumdar et al. [2020] Eric Mazumdar, Lillian J. Ratliff, and S. Shankar Sastry. On gradient-based learning in continuous games. _SIAM Journal on Mathematics of Data Science_ , 2:103–131, 2020.
* Mazumdar et al. [2019] Eric V. Mazumdar, Michael I. Jordan, and S. Shankar Sastry. On finding local Nash equilibria (and only local Nash equilibria) in zero-sum games, 2019.
* McCulloch and Pitts [1943] Warren S. McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. _The bulletin of mathematical biophysics_ , 5:115–133, 1943.
* McMahan et al. [2003] H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions controlled by an adversary. In _International Conference on Machine Learning (ICML)_ , pages 536–543, 2003.
* Mertikopoulos and Zhou [2019] Panayotis Mertikopoulos and Zhengyuan Zhou. Learning in games with continuous action sets and unknown payoff functions. _Mathematical Programming_ , 173:465–507, 2019.
* Mescheder et al. [2017] Lars Mescheder, Sebastian Nowozin, and Andreas Geiger. The numerics of GANs. In _Conference on Neural Information Processing Systems (NeurIPS)_ , pages 1823–1833, 2017.
* Milgrom and Segal [2014] Paul Milgrom and Ilya Segal. Deferred-acceptance auctions and radio spectrum reallocation. In _Proceedings of the ACM Conference on Economics and Computation (EC)_ , 2014.
* Milgrom and Segal [2020] Paul Milgrom and Ilya Segal. Clock auctions and radio spectrum reallocation. _Journal of Political Economy_ , 128:1–31, 2020.
* Milgrom and Weber [1985] Paul Milgrom and Robert Weber. Distributional strategies for games with incomplete information. _Mathematics of Operations Research_ , 10:619–632, 1985.
* Mokhtari et al. [2020] Aryan Mokhtari, Asuman Ozdaglar, and Sarath Pattathil. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: proximal point approach. In _International Conference on Artificial Intelligence and Statistics (AISTATS)_ , volume 108, pages 1497–1507, 2020.
* Nesterov and Spokoiny [2017] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. _Foundations of Computational Mathematics_ , 17:527–566, 2017.
* Padala et al. [2021] Manisha Padala, Debojit Das, and Sujit Gujar. Effect of input noise dimension in GANs. In _Neural Information Processing_ , pages 558–569, 2021.
* Petersen [2006] Peter Petersen. _Riemannian geometry_. Springer, 2 edition, 2006.
* Pinkus [1999] Allan Pinkus. Approximation theory of the MLP model in neural networks. _Acta Numerica_ , 8:143–195, 1999.
* Rechenberg [1978] Ingo Rechenberg. Evolutionsstrategien. In _Simulationsmethoden in der medizin und biologie_ , pages 83–114. Springer, 1978.
* Rechenberg and Eigen [1973] Ingo Rechenberg and M Eigen. _Evolutionsstrategie: optimierung technischer systeme nach prinzipien der biologischen evolution_. Frommann-Holzboog Stuttgart, 1973.
* Rosen [1965] J. B. Rosen. Existence and uniqueness of equilibrium points for concave n-person games. _Econometrica_ , 33:520–534, 1965.
* Rosenblatt [1958] F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. _Psychological Review_ , 65:386–408, 1958.
* Rumelhart et al. [1986] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. _Nature_ , 323:533–536, 1986.
* Salimans et al. [2017] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning, 2017.
* Sandholm [2013] Tuomas Sandholm. Very-large-scale generalized combinatorial multi-attribute auctions: Lessons from conducting $60 billion of sourcing. In Zvika Neeman, Alvin Roth, and Nir Vulkan, editors, _Handbook of Market Design_. Oxford University Press, 2013.
* Sard [1942] Arthur Sard. The measure of the critical values of differentiable maps. _Bulletin of the American Mathematical Society_ , 48:883–890, 1942.
* Schwefel [1977] Hans-Paul Schwefel. _Numerische optimierung von computer-modellen mittels der evolutionsstrategie_. Birkhäuser Basel, 1977.
* Shamir [2017] Ohad Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. _Journal of Machine Learning Research_ , 18:1–11, 2017.
* Smith [1974] John Maynard Smith. The theory of games and evolution in animal conflict. _Journal of Theoretical Biology_ , 47:209–221, 1974.
* Spall [1992] James C. Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. _IEEE Transactions on Automatic Control_ , 37:332–341, 1992\.
* Spall [1997] James C. Spall. A one-measurement form of simultaneous perturbation stochastic approximation. _Automatica_ , 33:109–112, 1997.
* Szentes and Rosenthal [2003a] Balazs Szentes and Robert W. Rosenthal. Beyond chopsticks: symmetric equilibria in majority auction games. _Games and Economic Behavior_ , 45(2):278–295, 2003a.
* Szentes and Rosenthal [2003b] Balázs Szentes and Robert W. Rosenthal. Three-object two-bidder simultaneous auctions: chopsticks and tetrahedra. _Games and Economic Behavior_ , 44:114–133, 2003b.
* Timbers et al. [2022] Finbarr Timbers, Nolan Bard, Edward Lockhart, Marc Lanctot, Martin Schmid, Neil Burch, Julian Schrittwieser, Thomas Hubert, and Michael Bowling. Approximate exploitability: learning a best response. In _Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)_ , pages 3487–3493, 2022.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Conference on Neural Information Processing Systems (NeurIPS)_ , volume 30, 2017.
* Vinyals et al. [2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In _Conference on Neural Information Processing Systems (NeurIPS)_ , volume 28, 2015.
* Vinyals et al. [2019] Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. _Nature_ , 575(7782):350–354, 2019.
* Washburn [2013] Alan Washburn. OR forum - Blotto politics. _Operations Research_ , 61:532–543, 2013.
* Werbos [1988] Paul J. Werbos. Generalization of backpropagation with application to a recurrent gas market model. _Neural networks_ , 1:339–356, 1988.
* Wierstra et al. [2008] Daan Wierstra, Tom Schaul, Jan Peters, and Juergen Schmidhuber. Natural evolution strategies. In _2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)_ , pages 3381–3387, 2008.
* Wierstra et al. [2014] Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, and Jürgen Schmidhuber. Natural evolution strategies. _Journal of Machine Learning Research_ , 15:949–980, 2014.
* Yi et al. [2009] Sun Yi, Daan Wierstra, Tom Schaul, and Jürgen Schmidhuber. Stochastic search using the natural gradient. In _International Conference on Machine Learning (ICML)_ , pages 1161–1168, 2009.
* Zinkevich et al. [2007] Martin Zinkevich, Michael Bowling, Michael Johanson, and Carmelo Piccione. Regret minimization in games with incomplete information. In _Conference on Neural Information Processing Systems (NeurIPS)_ , 2007.
## Appendix A Best response computation for continuous Colonel Blotto game
Approximate best responses can be computed for the continuous Colonel Blotto
game without discretizing the action space [Ganzfried, 2021]. By sampling $K$
batches of actions from other players’ strategies, we can obtain an
approximate best response $a_{i}$ for player $i$ using a mixed-integer linear
program (MILP). More precisely, let $h_{jk}$ be the highest bid for $j$ from
other players in batch $k$. Solve the following MILP:
$$\begin{aligned}
\text{maximize}\quad &\textstyle\sum_{j}v_{ij}\tfrac{1}{K}\sum_{k}z_{jk} &&(3)\\
\text{over}\quad &a_{i}\in\mathbb{R}^{J} &&(4)\\
&z\in\{0,1\}^{J\times K} &&(5)\\
\text{subject to}\quad &a_{ij}\geq 0\quad\forall j &&(6)\\
&\textstyle\sum_{j}a_{ij}=b_{i} &&(7)\\
&z_{jk}=[a_{ij}\geq h_{jk}]\quad\forall j,k &&(8)
\end{aligned}$$
We can use the Big M method to represent the last constraint:
$\displaystyle a_{ij}-h_{jk}+M(1-z_{jk})\geq 0$ (9)
where $M\gg 0$, which forces $z_{jk}$ to be 0 when $a_{ij}-h_{jk}$ is
negative.
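As a lighter-weight alternative to the MILP, one can brute-force an approximate best response over a discretized simplex, as in the Section 6 discretization. The sketch below is illustrative, with a hypothetical uniform (Dirichlet) opponent standing in for a learned strategy:

```python
import itertools
import numpy as np

def discretized_simplex(total=20, parts=3):
    """All compositions of `total` into `parts` non-negative integers,
    normalized to sum to 1 (231 points for total=20, parts=3)."""
    grid = [np.array(c) / total
            for c in itertools.product(range(total + 1), repeat=parts)
            if sum(c) == total]
    return np.array(grid)

def best_response_value(h, v, actions):
    """Expected utility of the best discretized response against sampled
    highest opposing bids h (shape K x J), with battlefield values v."""
    # wins[a, k, j]: does candidate action a win battlefield j in batch k?
    # Ties go to player i, matching the indicator [a_ij >= h_jk] above.
    wins = actions[:, None, :] >= h[None, :, :]
    utilities = (wins * v).sum(axis=2).mean(axis=1)
    return utilities.max()

rng = np.random.default_rng(0)
K, J = 500, 3
v = np.ones(J) / J                       # hypothetical equal battlefield values
opp = rng.dirichlet(np.ones(J), size=K)  # hypothetical opponent bid samples
actions = discretized_simplex()
val = best_response_value(opp, v, actions)
```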
## Appendix B Additional information about auctions
Table 1 describes the independent private values, common values, affiliated
values, complete information, and asymmetric information auctions, in that
order.
| Auction | $\Omega$ | $\tau_{i}(\omega)$ | $v_{i}(\omega)$ |
|---|---|---|---|
| Independent private values | $[0,1]^{n}$ | $\omega_{i}$ | $\omega_{i}$ |
| Common values | $[0,1]^{n+1}$ | $\omega_{i}\omega_{n+1}$ | $\omega_{n+1}$ |
| Affiliated values | $[0,1]^{n+1}$ | $\omega_{i}+\omega_{n+1}$ | $\omega_{n+1}+\tfrac{1}{n}\sum_{i=1}^{n}\omega_{i}$ |
| Complete information | $[0,1]$ | $\omega$ | $\omega$ |
| Asymmetric information | $[0,1]$ | $\omega[i=1]$ | $\omega$ |
Table 1: Auction descriptions. All use $\mu=\mathcal{U}(\Omega)$.
For the common values auction, also known as the “mineral rights” model
[Krishna, 2002, example 6.1], the following procedure can be used to sample
$\omega\mid o_{i}$:
$\displaystyle z\sim\mathcal{U}([0,1])$ (10)
$\displaystyle\omega_{n+1}=o_{i}^{z}$ (11)
$\displaystyle\omega_{i}=o_{i}\mathbin{/}\omega_{n+1}$ (12)
$\displaystyle\omega_{j}\sim\mathcal{U}([0,1])\qquad j\neq i$ (13)
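The procedure in (10)–(12) can be written directly; by construction the sampled state satisfies $\omega_{i}\omega_{n+1}=o_{i}$. A minimal sketch (function name is ours; it assumes $0<o_{i}\leq 1$):

```python
import random

def sample_common_values(o_i, n, i, rng=random):
    """Sample omega | o_i for the 'mineral rights' model, where the
    signal is tau_i(omega) = omega_i * omega_{n+1}. Assumes 0 < o_i <= 1."""
    z = rng.uniform(0.0, 1.0)
    omega = [rng.uniform(0.0, 1.0) for _ in range(n + 1)]  # omega_j ~ U([0,1]) for j != i
    omega[n] = o_i ** z          # omega_{n+1} = o_i^z (index n holds omega_{n+1})
    omega[i] = o_i / omega[n]    # omega_i = o_i / omega_{n+1}
    return omega
```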
For the affiliated values auction [Krishna, 2002, example 6.2], the following
procedure can be used to sample $\omega\mid o_{i}$:
$\displaystyle\omega_{n+1}\sim\mathcal{U}(\max\\{0,o_{i}-1\\},\min\\{1,o_{i}\\})$ (14)
$\displaystyle\omega_{i}=o_{i}-\omega_{n+1}$ (15)
$\displaystyle\omega_{j}\sim\mathcal{U}([0,1])\qquad j\neq i$ (16)
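Likewise, the affiliated-values procedure in (14)–(16) is a one-liner per component; the construction guarantees $\omega_{i}+\omega_{n+1}=o_{i}$ with both components in $[0,1]$. A minimal sketch (function name is ours):

```python
import random

def sample_affiliated_values(o_i, n, i, rng=random):
    """Sample omega | o_i for the affiliated values model, where the
    signal is tau_i(omega) = omega_i + omega_{n+1}, so 0 <= o_i <= 2."""
    omega = [rng.uniform(0.0, 1.0) for _ in range(n + 1)]  # omega_j ~ U([0,1]) for j != i
    omega[n] = rng.uniform(max(0.0, o_i - 1.0), min(1.0, o_i))  # omega_{n+1}
    omega[i] = o_i - omega[n]
    return omega
```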
The all-pay auction with independent private values has a pure symmetric
equilibrium generated by:
$\displaystyle a_{i}$ $\displaystyle=\tfrac{n-1}{n}(o_{i})^{n}$ (17)
The $k$th-price winner-pay auction with independent private values has a pure
symmetric equilibrium generated by [Kagel and Levin, 1993, p. 878]:
$\displaystyle a_{i}$ $\displaystyle=\tfrac{n-1}{n+1-k}o_{i}$ (18)
The 3-player 2nd-price winner-pay auction with common values has a pure
symmetric equilibrium generated by [Bichler et al., 2021]:
$\displaystyle a_{i}$ $\displaystyle=\tfrac{o_{i}}{2+\tfrac{1}{2}o_{i}}$ (19)
The 2-player 1st- and 2nd-price winner-pay auction with affiliated values have
pure symmetric equilibria generated by, respectively [Bichler et al., 2021]:
$\displaystyle a_{i}=\tfrac{2}{3}o_{i}$ (20)
$\displaystyle a_{i}=o_{i}$ (21)
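Equations (17)–(21) translate into one-line bid functions (a sketch; the function names are ours). Note that for $k=2$, eq. (18) recovers truthful bidding in the second-price auction:

```python
def allpay_ipv_bid(o, n):
    """All-pay auction, independent private values, eq. (17)."""
    return (n - 1) / n * o ** n

def kth_price_ipv_bid(o, n, k):
    """k-th price winner-pay auction, independent private values, eq. (18)."""
    return (n - 1) / (n + 1 - k) * o

def second_price_cv_bid(o):
    """3-player 2nd-price winner-pay auction, common values, eq. (19)."""
    return o / (2 + o / 2)

def first_price_av_bid(o):
    """2-player 1st-price winner-pay auction, affiliated values, eq. (20)."""
    return 2 / 3 * o

def second_price_av_bid(o):
    """2-player 2nd-price winner-pay auction, affiliated values, eq. (21)."""
    return o
```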
## Appendix C Other equilibrium-finding dynamics
We tested a broad set of equilibrium-finding dynamics. This section describes
those dynamics. Let $\xi$ denote the _simultaneous gradient_ of the utilities
with respect to the parameters of the respective players only:
$\xi_{i}=\nabla_{i}u_{i}$. This is a vector field on the strategy profile
space. In general, it is not conservative (the gradient of some potential).
This is the primary source of difficulties in applying standard gradient-based
optimization methods, since trajectories can cycle around fixed points rather
than converge to them. Various algorithms have been proposed in the literature
to address this. These include the following:
1. 1.
Simultaneous gradient:
$\displaystyle x^{t+1}=x^{t}+\alpha\xi(x^{t})$ (22)
2. 2.
Extragradient [Korpelevich, 1976]:
$\displaystyle x^{t+1}=x^{t}+\alpha\xi(x^{t}+\beta\xi(x^{t}))$ (23)
3. 3.
Optimistic gradient [Daskalakis et al., 2018, Mokhtari et al., 2020]:
$\displaystyle x^{t+1}=x^{t}+\alpha\xi(x^{t})+\beta(\xi(x^{t})-\xi(x^{t-1}))$
(24)
4. 4.
_Consensus optimization (CO)_ [Mescheder et al., 2017]:
$\displaystyle x^{t+1}=x^{t}+\alpha(\xi-\beta\nabla\tfrac{1}{2}\|\xi\|^{2})(x^{t})$ (25)
$\displaystyle\phantom{x^{t+1}}=x^{t}+\alpha(\xi-\beta\xi\cdot\nabla\xi)(x^{t})$ (26)
5. 5.
_Symplectic gradient adjustment (SGA)_ [Balduzzi et al., 2018, Gemp and
Mahadevan, 2018, Letcher et al., 2019]:
$\displaystyle x^{t+1}$
$\displaystyle=x^{t}+\alpha(\xi-\beta\xi\cdot(\nabla\xi)_{\mathrm{A}})(x^{t})$
(27)
where $M_{\mathrm{A}}=\tfrac{1}{2}(M-M^{\top})$ is the antisymmetric part of
$M$. CO and SGA are second-order methods, meaning they require access to the
second derivatives of the utility functions. CO can converge to bad critical
points even in simple cases where the ‘game’ is to minimize a single function.
Thus it cannot be considered a candidate algorithm for finding stable fixed
points in general games, since it fails in the basic case of potential games
[Letcher et al., 2019, p. 13]. SGA was created to address some of the
shortcomings of CO.
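The cycling behavior of the simultaneous gradient, and how a modified dynamic fixes it, can be seen on the bilinear zero-sum game $u_{1}=xy$, $u_{2}=-xy$, where $\xi=(y,-x)$: the simultaneous gradient (22) spirals away from the fixed point at the origin, while extragradient (23) contracts toward it. A minimal sketch (step sizes and function names are ours):

```python
def xi(x, y):
    # Simultaneous gradient of (u1, u2) = (x*y, -x*y): (du1/dx, du2/dy).
    return y, -x

def sim_grad_step(x, y, alpha=0.1):
    gx, gy = xi(x, y)
    return x + alpha * gx, y + alpha * gy

def extragrad_step(x, y, alpha=0.1, beta=0.1):
    gx, gy = xi(x, y)
    lx, ly = x + beta * gx, y + beta * gy   # lookahead point
    gx, gy = xi(lx, ly)
    return x + alpha * gx, y + alpha * gy

def run(step, x=1.0, y=1.0, iters=300):
    for _ in range(iters):
        x, y = step(x, y)
    return (x * x + y * y) ** 0.5           # distance from the equilibrium (0, 0)
```

Per step, the simultaneous gradient scales the distance to the origin by $\sqrt{1+\alpha^{2}}>1$, while extragradient scales it by $\sqrt{(1-\alpha\beta)^{2}+\alpha^{2}}<1$ for small $\alpha=\beta$.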
## Appendix D Analytically-derived equilibria of the benchmark games
We carefully selected the benchmark games to be ones for which an
analytically-derived equilibrium exists, so that we can compare the
computationally-derived equilibria to the analytical one. In this section, we
illustrate the analytical equilibria to support that comparison.
Figure 13 illustrates the analytical solution for the continuous Colonel
Blotto game with fixed homogeneous budgets and valuations. This game was
analyzed by Gross and Wagner [1950], who also analyzed the game for various
special cases of heterogeneous budgets and valuations. They give the following
geometric description of the equilibrium strategy: “[The player] inscribes a
circle within [the triangle] and erects a hemisphere upon this circle. He next
chooses a point from a density uniformly distributed over the surface of the
hemisphere and projects this point straight down into the plane of the
triangle… He then divides his forces in respective proportion to the
triangular areas subtended by [this point] and the sides.”
Figure 13: Continuous Colonel Blotto game solution.
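Gross and Wagner's geometric construction can be turned into a sampler. The sketch below assumes a unit equilateral triangle (so the incircle is centered at the centroid); by Archimedes' hat-box theorem a uniform point on the hemisphere surface has uniform height, and by Viviani's theorem the three subtended triangular areas are proportional to the distances from the projected point to the sides (function name is ours):

```python
import math
import random

def blotto_equilibrium_sample(budget=1.0, rng=random):
    """Sample one allocation from the Gross-Wagner equilibrium of the
    symmetric continuous Colonel Blotto game with three battlefields.

    Unit equilateral triangle with vertices (0,0), (1,0), (1/2, sqrt(3)/2);
    incircle centered at (1/2, sqrt(3)/6) with radius sqrt(3)/6."""
    cx, cy, r = 0.5, math.sqrt(3) / 6, math.sqrt(3) / 6
    # Uniform point on the hemisphere surface: height is uniform
    # (hat-box theorem); project straight down into the incircle disk.
    z = rng.uniform(0.0, r)
    phi = rng.uniform(0.0, 2 * math.pi)
    rho = math.sqrt(r * r - z * z)
    px, py = cx + rho * math.cos(phi), cy + rho * math.sin(phi)
    # Distances from (px, py) to the three sides; for an interior point
    # they sum to the height sqrt(3)/2 (Viviani's theorem).
    d = [py,
         (math.sqrt(3) - math.sqrt(3) * px - py) / 2,
         (math.sqrt(3) * px - py) / 2]
    total = sum(d)
    # Divide forces in proportion to the subtended triangular areas.
    return [budget * di / total for di in d]
```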
Figure 14 illustrates the analytical solution for the all-pay complete
information auction. This game was analyzed by Baye et al. [1996]. They show
that with homogeneous valuations ($v_{1}=v_{2}=\ldots=v_{n}$) there exists a
unique symmetric equilibrium and a continuum of asymmetric equilibria. All of
these equilibria are payoff equivalent, as is the expected sum of the bids
(revenue to the auctioneer). In the symmetric equilibrium, each player
randomizes uniformly on $[0,v_{1}]$.
Figure 14: Complete-information auction solution.
Figure 15 illustrates the analytical solution for the 2-player 1st-price
winner-pay asymmetric-information auction. This game was analyzed by Krishna
[2002, section 8.3]. Bidder 1 bids according to the strategy
$\beta(v)=\operatorname{E}[V\mid V\leq v]$. In our case,
$V\sim\mathcal{U}([0,1])$, so $\beta(v)=\tfrac{v}{2}$. Bidder 2 chooses a bid
at random from the interval $[0,\operatorname{E}[V]]$ according to the
distribution defined by $H(b)=\operatorname{P}[\beta(V)\leq b]$. In our case,
this reduces to the distribution $\mathcal{U}([0,\tfrac{1}{2}])$.
Figure 15: Asymmetric-information auction solution.
Figure 16 illustrates the analytical solution for the chopstick auction. This
game was analyzed by Szentes and Rosenthal [2003b, a], who state the
following: “The supports of the mixtures that generate the symmetric
equilibria in both the first- and second-price cases, turn out to be the
surfaces of regular tetrahedra, and the distributions themselves turn out to
be uniform on these surfaces. In addition, in each case all the points inside
the tetrahedron are pure best responses to the equilibrium mixture.”
Figure 16: Chopstick auction solution.
Figure 17 illustrates the analytical solution for the 2-player visibility
game. This game was analyzed by Lotker et al. [2008]. Up to a set of measure
zero, it has a unique equilibrium whose strategies have probability densities
$p(x)=1/(1-x)$ when $0\leq x\leq 1-1/\mathrm{e}$ and 0 otherwise. Each
player’s expected payoff is $1/\mathrm{e}$.
Figure 17: Visibility game solution.
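The equilibrium density $p(x)=1/(1-x)$ on $[0,1-1/\mathrm{e}]$ has CDF $F(x)=-\ln(1-x)$, so inverse-transform sampling is immediate: $x=1-\mathrm{e}^{-u}$ for $u\sim\mathcal{U}(0,1)$. A minimal sketch (function name is ours):

```python
import math
import random

def sample_visibility_strategy(rng=random):
    """Draw from the equilibrium density p(x) = 1/(1-x) on [0, 1 - 1/e].

    CDF: F(x) = -ln(1 - x), hence F^{-1}(u) = 1 - exp(-u) for u ~ U(0,1)."""
    u = rng.uniform(0.0, 1.0)
    return 1.0 - math.exp(-u)
```

Under this density the mean action works out to $1/\mathrm{e}$, matching each player's expected payoff stated above.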
|
# When Quantum Information Technologies Meet Blockchain in Web 3.0
Minrui Xu, Xiaoxu Ren, Dusit Niyato, _Fellow, IEEE_ , Jiawen Kang, _Member,
IEEE_ , Chao Qiu _Member, IEEE_ ,
Zehui Xiong, _Member, IEEE_ , Xiaofei Wang, _Senior Member, IEEE_ , and
Victor C. M. Leung, _Life Fellow, IEEE_ M. Xu and D. Niyato are with the
School of Computer Science and Engineering, Nanyang Technological University,
Singapore (e-mail<EMAIL_ADDRESS>dniyato@ntu.edu.sg). X. Wang, X.
Ren, C. Qiu are with the College of Intelligence and Computing, Tianjin
University, Tianjin 300072, China. X. Wang and C. Qiu are also with the
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ),
Shenzhen 518000, China (e-mails<EMAIL_ADDRESS><EMAIL_ADDRESS>chao.qiu@tju.edu.cn). J. Kang is with the School of Automation, Guangdong
University of Technology, China (e-mail: kavinkang@gdut.edu.cn). Z. Xiong is
with the Pillar of Information Systems Technology and Design, Singapore
University of Technology and Design, Singapore 487372, Singapore (e-mail:
zehui_xiong@sutd.edu.sg). Victor C.M. Leung is with the College of Computer
Science and Software Engineering, Shenzhen University, Shenzhen 518061, China,
and also with the Department of Electrical and Computer Engineering, The
University of British Columbia, Vancouver BC V6T 1Z4, Canada (E-mail:
vleung@ieee.org).
###### Abstract
With the drive to create a decentralized digital economy, Web 3.0 has become a
cornerstone of digital transformation, built on computing-force networking,
distributed data storage, and blockchain. With the rapid realization of
quantum devices, Web 3.0 is being developed in parallel with the deployment of
quantum cloud computing and the quantum Internet. In this regard, quantum
computing first disrupts the original cryptographic systems that protect data
security, while also reshaping modern cryptography with the advantages of
quantum computing and communication. Therefore, in this paper, we introduce a
quantum blockchain-driven Web 3.0 framework that provides information-theoretic
security for decentralized data transfer and payment transactions. First, we
present the framework of quantum blockchain-driven Web 3.0 with future-proof
security for the transmission of data and transaction information. Next, we
discuss the potential applications and challenges of implementing quantum
blockchain in Web 3.0. Finally, we describe a use case for quantum
non-fungible tokens (NFTs) and propose a quantum deep learning-based optimal
auction for NFT trading that maximizes the achievable revenue so as to ensure
sufficient liquidity in Web 3.0. In this way, the proposed framework can
achieve provable security and sustainability for the next-generation
decentralized digital society.
###### Index Terms:
Blockchain, Quantum computing, Quantum communication, Web 3.0, Auction theory,
Machine Learning.
Figure 1: An illustration of quantum blockchain-related research activities.
## I Introduction
With its distinctive features of the creator economy and decentralized
autonomous organizations, Web 3.0 can take user experience, privacy, and
security to a new level [1]. The decentralization of Web 3.0 is based on
blockchain, together with computing-force networking (CPN) [2] and distributed
data storage as key technologies, which promise to provide data security,
transparency, and accountability for digital transformation activities [3].
These advantages of blockchain rest on one-way mathematical functions in
classical computing, e.g., hash functions and public-key encryption.
Unfortunately, blockchain is particularly at risk because these computational
assumptions about one-way functions are broken by quantum computing. With the
superposition and entanglement of quantum bits (qubits), quantum computing can
provide sufficient computing power to break existing digital signatures and
hash functions within required secrecy periods; e.g., a quantum computer with
$2\times 10^{7}$ qubits could crack RSA-2048 within eight hours.
While Web 3.0 enables users to read, write, and own their user-generated
content (UGC) [3], classical blockchains will cease to be secure once quantum
computers become powerful enough to run Shor's algorithm and Grover's
algorithm [4]. At the same time, the development of quantum information
technology has opened another door for cryptography. For example, quantum key
distribution (QKD) protocols encode the key used for symmetric encryption into
a quantum state for transmission over a quantum channel. Together with
one-time-pad technology, QKD-secured communication can achieve
information-theoretically secure data transmission. Moreover, quantum
blockchains can use quantum hash functions and one-way quantum computing
functions to develop quantum voting [5] and quantum signature [6] algorithms.
In this way, classical blockchains can be upgraded to quantum blockchains,
which can keep data securely encrypted and usable for more than twenty years.
Based on quantum cryptography techniques [5], e.g., quantum hash functions and
quantum signature algorithms, quantum blockchain provides several advanced
encryption services to quantum-driven Web 3.0. For example, quantum
decentralized digital identity (DDID) is fundamental to quantum-driven Web 3.0,
allowing users to access decentralized finance (DeFi) and decentralized
applications (dApps) with the corresponding quantum digital signatures. In
addition, quantum non-fungible tokens (NFTs) [7], represented by quantum
hypergraph states, are designed to provide a more reliable and cheaper digital
asset identification mechanism than the classical one. Finally, quantum
consensus algorithms, such as quantum delegated proof of stake constructed
from quantum voting algorithms, can help to build trust among multiple quantum
participants without an authenticated third party [8].
In addition to the provable security offered by quantum cryptography,
incentive mechanisms in blockchain [9] should be well designed to ensure the
active participation of miners and NFT buyers and thus the sustainability of
Web 3.0. In particular, sufficient liquidity relies on the achievable revenue
of the NFT trading market in Web 3.0. However, it is challenging to maximize
revenue in the NFT trading market while ensuring NFT buyers' individual
rationality (IR) and dominant-strategy incentive compatibility (DSIC). IR
means that every buyer receives non-negative utility, while DSIC means that no
buyer can obtain higher utility by submitting an untruthful bid. In the
literature, several deep learning (DL)-based optimal auctions have validated
the feasibility of incentivizing miners in blockchain [9]. Therefore, in this
paper, we propose a quantum deep learning (QDL)-based optimal auction built on
quantum neural networks for NFT trading. The experimental results demonstrate
the effectiveness of the proposed QDL-based optimal auction.
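As a classical point of reference (not the QDL mechanism proposed here), the revenue-optimal single-item auction for i.i.d. $\mathcal{U}(0,1)$ valuations is, by Myerson's result, a second-price auction with reserve price 0.5; it satisfies both IR (losers pay nothing) and DSIC (truthful bidding is dominant). A minimal sketch (function name is ours):

```python
import random

def myerson_auction_uniform(bids, reserve=0.5):
    """Second-price auction with reserve price.

    For i.i.d. U(0,1) valuations this is the revenue-optimal (Myerson)
    auction; truthful bidding is a dominant strategy (DSIC) and losers
    pay nothing (IR). Returns (winner_index or None, payment)."""
    eligible = [(b, i) for i, b in enumerate(bids) if b >= reserve]
    if not eligible:
        return None, 0.0
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    # Winner pays the larger of the reserve and the second-highest eligible bid.
    payment = max(reserve, eligible[1][0]) if len(eligible) > 1 else reserve
    return winner, payment

# Monte Carlo revenue estimate for two U(0,1) bidders (expected value 5/12).
random.seed(0)
revenue = sum(myerson_auction_uniform([random.random(), random.random()])[1]
              for _ in range(20000)) / 20000
```

A learned (DL or QDL) auction is trained to match or exceed this revenue benchmark while preserving the same IR and DSIC constraints.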
The main contributions of this article are summarized as follows:
1. 1.
We propose a novel quantum blockchain framework to drive Web 3.0 with
information-theoretic security. In the framework, quantum cryptography is
leveraged to encrypt and protect users' decentralized data beyond the required
secrecy periods.
2. 2.
Based on quantum blockchain-based services in the framework, we discuss the
potential applications of digital transformation and the corresponding
challenges. Furthermore, we identify the benefits of the framework to
facilitate the creation, management, and usage of these applications.
3. 3.
To maintain sufficient liquidity in the framework, we propose a quantum
machine learning-based optimal incentive mechanism that maximizes the
achievable revenue in the NFT trading market while guaranteeing buyers' IR and
DSIC. In this way, the incentive mechanism can provide long-term
sustainability with provable security for the digital society.
In the literature, several quantum-safe blockchain solutions have been
proposed; the major ones are listed in Fig. 2. There are three main types of
existing quantum blockchains. The first type uses post-quantum encryption
algorithms to protect against attacks from quantum computers. Second, quantum
networks have unconditional security properties that can be used to guarantee
the provable security of a quantum blockchain running on classical computers
during data transmission. Finally, a quantum blockchain running on quantum
computers ensures secure operation in the quantum computing era by using
advanced quantum computing techniques to implement quantum signatures and
quantum hash functions.
Figure 2: The framework of quantum blockchain-driven Web 3.0
## II The Framework of Quantum Blockchain-driven Web 3.0
To support the decentralized digital society in the quantum era, the framework
of quantum blockchain-driven Web 3.0 consists of enabling infrastructure,
quantum cryptography protocols, and quantum blockchain-based services.
### II-A Enabling Infrastructure of Web 3.0
In Web 3.0, users can fully control their own digital identities and digital
assets based on the critical infrastructure of Web 3.0, including CPN,
distributed data storage, and blockchain. Ubiquitous network connectivity
enables computing and storage resources in different geographical locations to
be merged into a common CPN to support Web 3.0 applications that are connected
to the blockchain for computational security. By merging computing resources
from end devices, edge servers, and the cloud (including classical and quantum
cloud computing) in CPN, Web 3.0 applications can be used for record-keeping,
proof of ownership, scheduling, and trading of decentralized digital finance
and marketplaces. Distributed storage enables decentralized applications to
store large amounts of data and digital assets in distributed storage nodes
while recording the data retrieval links used for data identification.
Finally, dApps developed based on blockchain act as distributed ledgers for
digital identities and digital assets, which provide unified access for Web
3.0 players.
### II-B Quantum Cryptography Protocols for Web 3.0
In Web 3.0, several cryptography protocols are adopted to ensure the security
and privacy of users, including identity authentication, consensus mechanism,
block verification, and block propagation.
#### II-B1 Quantum Identity Authentication
Quantum identity authentication (QIA) uses quantum cryptography protocols to
verify the identity of blockchain participants and prevent adversaries from
impersonating blockchain participants. To implement information-theoretically
secure authentication in quantum blockchain, QIA protocols can use two types
of methods for authentication based on shared classical keys and shared
entangled states. In QIA protocols with shared classical keys, the two
communicating parties share a predefined message in advance to indicate their
identity. In QIA protocols with shared entanglement, the communicating parties
share a set of entangled particles (i.e., qubits); each party holds one
particle of each entangled pair and performs the corresponding operation on it
to identify the other party. This method requires the long-term storage of a
large number of entangled particles and is therefore not easy to implement.
To achieve information-theoretic security, the key or entangled state shared
in advance by the users of a QIA protocol must not be obtainable by an
eavesdropper during use and generally cannot be reused. In addition, identity
authentication should be performed concurrently with protocols such as QKD to
prevent an eavesdropper from skipping the authentication phase and sharing
keys directly.
#### II-B2 Quantum Consensus Mechanism
The consensus mechanism allows nodes in a blockchain network to agree without
a trusted third party. In classical blockchain, consensus protocols such as
Proof-of-Work (PoW), Proof-of-Stake (PoS), Delegated Proof-of-Stake (DPoS),
and practical Byzantine Fault Tolerance (PBFT) are widely used to protect the
block generation process of blockchain. In quantum blockchain, quantum
cryptographic algorithms, such as quantum voting, can be used to develop
quantum consensus mechanisms. Based on quantum voting, quantum delegated
proof-of-stake (QDPoS) is used to defend against quantum attacks on the
selection of representative nodes participating in the consensus. Meanwhile,
DPoS with node behavior and Borda count (DPoSB) [6] has been proposed to
fairly select witness nodes with low energy consumption.
#### II-B3 Quantum Block Verification
Before a new block is added to the blockchain, block verification must be
performed to check the transaction messages and other related information in
the new block. The quantum blockchain generates verifiable information through
quantum signature algorithms and sends it to other nodes for verification.
After the new block is verified, the quantum miner sends the new block to the
other nodes in the blockchain network for consensus. In the quantum blockchain
protocol proposed in [10], each block is represented by a qubit encoding the
information of a weighted hypergraph state, which decreases the required
storage space of the whole blockchain. This protocol can withstand several
security threats, including external attacks, intercept-measure-repeat
attacks, and entanglement-measure attacks.
#### II-B4 Quantum Block Propagation
In the quantum blockchain, the propagation of blocks can be conducted on both
classical and quantum networks via quantum secure communication [11]. When a
channel in a network uses QKD to secure its communication, a quantum security
key to encrypt the block must be distributed over the quantum channel before
block propagation. In addition, quantum blockchain can propagate the qubit
used to represent the blockchain information through the entanglements in the
quantum network. Both approaches can provide information-theoretic protection
for block propagation in the quantum blockchain. However, as quantum
communication resources remain scarce and expensive, it is necessary to
clarify how to balance the tradeoff of block propagation efficiency and
security of blockchain with minimized network costs.
### II-C Quantum Blockchain-based Services in Web 3.0
With quantum cryptography protocols, a few promising services based on quantum
blockchain are provisioned in Web 3.0, including quantum DDID, quantum NFT
(QNFT), quantum DeFi, and quantum DAO.
#### II-C1 Quantum Decentralized Digital Identity
Users in Web 3.0 prove their ownership of digital assets (e.g., UGC, browsing
records, transaction records, and behavior records) in the blockchain by
digitally signing with their private keys, i.e., decentralized digital
identities. Based on the randomness of quantum physics, quantum signature
algorithms provide tamper-proof, anonymous, and transparent digital identities
for users in Web 3.0. Moreover, quantum DDID can be leveraged to restore the
original value in users’ accounts [11].
#### II-C2 Quantum Non-fungible Token
A non-fungible token is a distinctive Web 3.0 mechanism for establishing
ownership of user-created content. By tagging digital assets with specific
rights, an NFT based on smart contracts generates a unique link that is
recorded immutably on the corresponding blockchain, creating a unique digital
collectible whose ownership can be proven. Instead of physically distributing
the NFT directly to the owner, quantum algorithms, e.g., quantum walks, can
represent the information in the QNFT with quantum bipartite hypergraph states
and attach the quantum state to the blockchain. As a result, quantum
blockchain can provide more reliable and cost-effective QNFTs with guarantees
of uniqueness and proof of ownership.
#### II-C3 Quantum Decentralized Finance
Decentralized Finance (DeFi) in Web 3.0, including stablecoins, decentralized
exchanges, and peer-to-peer lending, can increase the liquidity of digital
assets circulating on the blockchain. Quantum blockchain can provide
information-theoretic security for transactions in DeFi with instantaneous
payment transmission. For example, a hybrid classical-quantum framework is
proposed in [11] to provide a scalable solution to the existing smart
contract-based blockchain. In detail, quantum lightning based on quantum
secure communication is leveraged to enable quantum DeFi in Web 3.0 with
unbounded throughput and instantaneous payments.
#### II-C4 Quantum Decentralized Autonomous Organization
A decentralized autonomous organization (DAO) is a form of social organization
in Web 3.0 in which all members participate in decision-making under a
low-trust model. Participants in a DAO have an equal right to determine the
rules and processes of collaboration in the blockchain openly and
transparently through voting, covering smart contracts, prior commitments,
task assignment, payroll, and organizational governance. In a DAO, the
identities of participating users can be verified through quantum signature
algorithms. Users can then vote on new proposals in the DAO with
information-theoretic security through the quantum voting mechanism. Finally,
incentives in the DAO can be distributed through tokens of the quantum
blockchain [11].
In summary, the proposed quantum blockchain-driven Web 3.0 framework can
provide provable security for quantum blockchain-based services by leveraging
quantum cryptography protocols. To support the sustainable use of quantum
cryptography techniques, efficient networking for blockchain via quantum
networks, together with incentive mechanisms for miners and NFT traders with
quantum resources, must be achieved.
Figure 3: Stages in the development of quantum blockchain-driven Web 3.0.
## III Techniques, Applications, and Challenges of Quantum blockchain-driven
Web 3.0
### III-A Quantum Cryptography Techniques
There are six main stages in the continuous development of quantum
blockchain-driven Web 3.0, as shown in Fig. 3. At each stage, different
quantum cryptography techniques become mainstream to support the quantum
cryptography protocols of that stage.
#### III-A1 Quantum Secure Communication
To achieve communication security in quantum blockchain, QKD and quantum
secure direct communication (QSDC) can be adopted. On the one hand, QKD is a
key exchange protocol for symmetric encryption via quantum communication.
Using the no-cloning theorem and Heisenberg's uncertainty principle, QKD can
provide provable security for communication when combined with the one-time
pad (OTP). In the stage of trusted repeater networks, vanilla protocols, such
as the BB84 protocol [12], are proposed to encode key information into quantum
states and transmit it via experimental quantum networks. Then, in the stage
of prepare-and-measure networks, measurement-device-independent (MDI)-QKD
protocols are proposed to improve the feasible distance of QKD links. Finally,
in the stage of entanglement distribution networks, device-independent
(DI)-QKD protocols are developed to provide a longer entanglement distance of
quantum networks. On the other hand, with the development of quantum routers
with quantum memory, QSDC becomes possible in the stage of quantum memory
networks. The QSDC protocol achieves direct transmission of secret information
through chunking, quantum privacy amplification, and error correction. In this
way, QSDC constitutes not only a new secure communication paradigm but also an
instantaneous communication scheme by transmitting messages in the quantum
channel.
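The QKD workflow can be illustrated with a toy BB84 key-sifting simulation. This is a purely classical stand-in with no channel noise or eavesdropper, intended only to show why matching measurement bases yield a shared key (function name is ours):

```python
import random

def bb84_sift(n_qubits, seed=1):
    """Toy BB84 key sifting without an eavesdropper (illustrative only).

    Alice encodes random bits in random bases ('+' rectilinear, 'x'
    diagonal); Bob measures in random bases. Measuring in the wrong basis
    yields a uniformly random bit, so only positions where the bases
    match are kept (the sifted key)."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.choice("+x") for _ in range(n_qubits)]
    bob_bases = [rng.choice("+x") for _ in range(n_qubits)]
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Publicly compare bases and discard mismatched positions.
    keep = [i for i in range(n_qubits) if alice_bases[i] == bob_bases[i]]
    alice_key = [alice_bits[i] for i in keep]
    bob_key = [bob_bits[i] for i in keep]
    return alice_key, bob_key

a_key, b_key = bb84_sift(64)
```

On average half of the positions survive sifting; in a real protocol, a subset of the sifted key would then be sacrificed to estimate the error rate and detect eavesdropping before OTP use.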
#### III-A2 Quantum Random Number Generators
Based on the uncertainty principle of quantum mechanics, quantum computing can
provide true random number generators for cryptography protocols, i.e.,
quantum random number generators (QRNG), compared to pseudo-random number
generators in classical computing. The advantages of QRNG are manifold: it
exploits fundamental quantum uncertainty, photonic implementations can deliver
high generation rates, and the sources of unpredictability can be understood
and verified, which is a core guarantee for quantum blockchain.
#### III-A3 Quantum Signature Algorithms
Since classical digital signatures based on asymmetric encryption, such as
RSA, will be cracked by Shor’s algorithm, quantum digital signature algorithms
are considered to be an important component of quantum blockchain to ensure
tamper resistance. Based on the computational hardness of distinguishing
quantum states, quantum signature algorithms can secure payment and data
transfer information in quantum blockchain.
#### III-A4 Quantum Hash Functions
In the block header of the quantum blockchain, quantum walks can provide a
hash function for block verification. Quantum computers can be used not only
to crack classical hash functions but also to develop quantum hash functions
using quantum one-way functions or quantum random processes. For example, in
the quantum blockchain protocol proposed in [10], the hash value of each block
is encoded in a qubit via a controlled alternate quantum walk.
#### III-A5 Quantum Voting
Quantum voting protocols [5] achieve information-theoretic security of the
voting process in the quantum blockchain by sharing quantum information with
the requirements of correctness, traceability, verifiability, and anonymity.
At the voting setup stage, the voting information is generated and encoded
into quantum states. The voter then shares the quantum information with other
participants through the quantum channel. At the voting stage, the voting
results are calculated from the quantum states in a transparent and anonymous
manner.
### III-B Potential Digital Transformation Applications
#### III-B1 Smart City
Smart cities aim to develop protocols and technologies to improve the quality
of people's daily lives. However, the benefits of smart cities come with
potential risks. For example, massive numbers of IoT devices are vulnerable to
cyber-attacks. Blockchain is a promising solution that enables decentralized
and secure data management. However, the advent of quantum computers leaves
most of the current encryption algorithms underlying blockchain vulnerable to
attack. In this regard, quantum cryptography helps to eliminate the risks of
data storage and transmission associated with blockchain and IoT. The work in
[13] designs a novel authentication and encryption protocol using quantum
walks (QIQW). The QIQW protocol enables IoT nodes in smart cities to share
data securely and have full control of their records. Meanwhile, the novel
blockchain framework based on QIQW is able to resist probable quantum attacks.
#### III-B2 Smart Healthcare
Smart healthcare aims to provide patient-centric healthcare services by secure
data collection, efficient data processing, and systematic knowledge
extraction [14]. However, maintaining the security and privacy of stakeholders
is a challenge for traditional healthcare systems. To improve the efficiency
of today's healthcare systems, blockchain has emerged as an enabling
technology that also helps to maintain the security and privacy of all
stakeholders [14]. Replacing traditional encrypted signature algorithms with
quantum authentication systems, a quantum electronic medical record (EMR)
protocol is designed in [10]. This protocol tracks every medical record while
guaranteeing the security and privacy of EMRs in medical systems.
#### III-B3 Metaverse
In the Metaverse [15], users are telepresent as avatars that immerse in and
interact with virtual objects in 3D virtual spaces. Quantum blockchain-driven
Web 3.0 forms the economic system of the Metaverse. For example, avatar
identity management can leverage quantum identity authentication for
cost-effective verification with provable security. In addition, digital
assets in immersive games can be minted as QNFTs, which are tamper-proof,
unique, and tradable. Finally, cross-chain mechanisms in quantum
blockchain-driven Web 3.0 can provide instantaneous and
information-theoretically secure interoperability for data and payment
transfers among different Metaverses.
### III-C Main Challenges
In the proposed framework, the main challenges can be identified from two
aspects: networking and incentives.
#### III-C1 Quantum Networking for Blockchain in Web 3.0
Since information-theoretic security is an inherent property of quantum
networking, it can be provided for networking in Web 3.0. Therefore, quantum
networking for blockchain in Web 3.0 is a promising solution. However,
scalability in
blockchain is still a challenge for consensus and recording payment
information in Web 3.0. Currently, three mainstream schemes have been proposed
to improve throughput for classical blockchain, i.e., payment channels,
sharding,
and cross-chain. First, with the hash lock in smart contracts, two nodes in
quantum blockchain can open a payment channel over a quantum channel. Second,
quantum network resources can be leveraged for inter-shard communication for
real-time synchronization among multiple shards in a blockchain. Finally,
quantum information technologies can help to select the relays for cross-chain
transactions among multiple blockchains. However, quantum network resources
are limited and insufficient to satisfy the required throughput for blockchain
scalability.
#### III-C2 Incentive Mechanism Design
In Web 3.0, sufficient incentives are required to ensure liquidity in the
economic ecosystem for long-term sustainability. On the one hand, the
mechanism must provide proper incentives and penalties for achieving consensus
in quantum blockchain based on the devoted quantum computing and communication
resources of miners. On the other hand, the mechanism should provide
sufficient revenues for NFT trading in Web 3.0. In this way, the proposed
framework is sustainable.
Figure 4: An illustration of the quantum deep learning-based auction with
hybrid classic-quantum neural networks.
## IV Sustainable Incentive Mechanism for NFT Trading in Quantum Blockchain-
driven Web 3.0
### IV-A The Quantum NFT Protocol
In quantum blockchain, miners choose the winning node that desires to create a
QNFT using QDPoS. The winner of the QDPoS [8] creates a quantum state to
represent the QNFT. In this quantum state, the first qubit encodes
information about the UGC (e.g., its name), and the second qubit encodes a
random phase determined by the consensus. After the quantum state of the QNFT is
prepared, the creator of the QNFT sends a copy of its state to every
participant in quantum blockchain. After receiving the proposed QNFT, each
node verifies the QNFT by using quantum gates for each qubit of the QNFT. If
the proposed QNFT fails to pass, i.e., if the measurement result does not
match the prepared result, the protocol aborts this QNFT and penalizes the
proposed peer. If the proposed state passes verification, each node adds the
block to its local copy of the blockchain. In this way, the QNFT is
successfully added to the quantum blockchain.
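The verification step can be illustrated with a toy single-qubit model. This is a hedged sketch, not the protocol of [8] (which uses multi-qubit states and dedicated quantum gates): a verifier who knows the consensus phase $\theta$ undoes it and applies a Hadamard, so a matching state yields outcome $|0\rangle$ with probability 1, while a mismatched phase is detected with nonzero probability.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def qnft_state(theta):
    # Toy QNFT qubit: equal superposition carrying the consensus phase theta
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

def verify(state, theta):
    # Undo the expected phase, rotate to the computational basis,
    # and return the probability of measuring |0> (the "pass" probability)
    undo = np.diag([1.0, np.exp(-1j * theta)])
    out = H @ (undo @ state)
    return abs(out[0]) ** 2

theta = 0.7
print(verify(qnft_state(theta), theta))        # matching phase: probability 1
print(verify(qnft_state(theta), theta + 1.0))  # wrong phase: probability < 1
```

A single mismatched copy passes with probability $\cos^{2}(\Delta/2)$, which is why the protocol distributes copies to every participant.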
### IV-B Quantum Deep Learning-based Optimal Auction
To maintain the sustainability of Web 3.0, NFTs can be traded in the market
even after they have been created. The profits generated by NFT
transactions can be used for liquidity in the Web 3.0 economic system. The
goal of the optimal auction with multiple buyers and multiple items is to
maximize the expected revenue while ensuring individual rationality and DSIC
[9]. In the classical deep learning-based optimal auction (DLA), the
auctioneer’s price rule and allocation rule are output by a neural network. To
obtain bids for multiple items, long short-term memory (LSTM) nodes are used
to extract useful information from the bids. After the hidden layer of the
neural network, the ReLU function and the softmax function of the latent
output layer are then processed separately to output the price factors and the
allocation probability. To reduce the number of parameters in the DLA neural
networks, we propose the QDL-based optimal auction (QDLA) by replacing the
hidden layer with quantum circuits. The framework of QDLA is shown in Fig. 4,
where the quantum circuits consist of AngleEmbedding and BasicEntangler
layers.
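The two output heads can be sketched in plain NumPy. This is a hedged skeleton, not the authors' implementation: the LSTM front end is omitted, the weights are random rather than trained, and the hidden layer shown is the classical one that the QDLA would replace with AngleEmbedding/BasicEntangler quantum circuits. It only illustrates how the latent output is split into nonnegative price factors (via ReLU) and per-item allocation probabilities (via a softmax over buyers).

```python
import numpy as np

rng = np.random.default_rng(0)
n_buyers, n_items, hidden = 3, 3, 16

# Random (untrained) weights standing in for the learned network
W_h = rng.normal(size=(n_buyers * n_items, hidden))
W_price = rng.normal(size=(hidden, n_buyers))
W_alloc = rng.normal(size=(hidden, n_buyers * n_items))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def auction_heads(bids):
    # bids: (n_buyers, n_items) matrix of reported valuations in [0, 1]
    h = np.tanh(bids.reshape(-1) @ W_h)           # hidden layer (classical stand-in)
    prices = relu(h @ W_price)                    # nonnegative price factor per buyer
    logits = (h @ W_alloc).reshape(n_buyers, n_items)
    alloc = softmax(logits, axis=0)               # each item's allocation probs sum to 1
    return prices, alloc

bids = rng.uniform(0, 1, size=(n_buyers, n_items))
prices, alloc = auction_heads(bids)
```

In training, these heads would be optimized for expected revenue under regret penalties; here they only fix the interface.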
### IV-C Experimental Results
Figure 5: The achieved revenue and regret in the NFT trading market. (a)
Revenue vs. training epochs. (b) Regret vs. training epochs.
To demonstrate the effectiveness of the proposed QDLA, we evaluate the
performance of DLA and QDLA in the NFT trading market of Web 3.0. We generate
7000 training samples and 3000 testing samples with 3 buyers and 3 items in
the market, and the valuation of NFT is randomly selected from 0 to 1. The
learning rate of DLA is set to 0.001, while the learning rate of QDLA is set
to 0.01. In DLA, the size of the LSTM layer is 32 and the size of the hidden
layer is 32. In classical networks of QDLA, the size of the LSTM layer is 4
and the size of the hidden layer is 16. In quantum networks of QDLA, the
number of qubits is 4 and the number of layers is 6. The experimental results
of the proposed QDLA are shown in Fig. 5. As we observe in Fig. 5(a), the DLA
and the QDLA both converge at about the tenth epoch and outperform the
second-price auction (SPA), which ensures individual rationality (IR) and
incentive compatibility (IC) but does not maximize revenue, in terms of
expected revenue. The regret, i.e., the squared distance between
the achieved utility and the optimal utility, is evaluated in Fig. 5(b). It
can be observed that both mechanisms can achieve a non-negative regret close
to zero, which means that the DLA and the QDLA can achieve DSIC. In summary,
although the QDLA consists of fewer parameters than the DLA, the QDLA can
achieve similar revenue compared with the DLA and is more robust than the DLA.
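The SPA baseline used in this comparison can be sketched per item: each item goes to the highest bidder at the second-highest bid (a minimal single-item sketch; tie-breaking and reserve prices are ignored).

```python
import numpy as np

def second_price_auction(bids):
    # bids: 1-D array of bids for a single item.
    # Returns (winner index, price = second-highest bid).
    order = np.argsort(bids)
    winner = order[-1]
    price = bids[order[-2]]
    return winner, price

winner, price = second_price_auction(np.array([0.2, 0.9, 0.5]))
print(winner, price)  # -> 1 0.5
```

Truthful bidding is a dominant strategy under this rule, which is why SPA serves as a DSIC reference point despite its lower revenue.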
Quantum blockchain effectively synthesizes the security of quantum networks
and the efficiency of quantum computing and quantum algorithms. With this use
case, we demonstrate clearly that the proposed framework for building a
decentralized digital society can achieve not only information-theoretic
security but also long-term sustainability in Web 3.0.
## V Conclusion
In this paper, we have investigated the quantum blockchain-driven Web 3.0
through the implementation of quantum cryptography protocols in blockchain for
decentralization, scalability, and security. In detail, we have proposed the
framework of quantum blockchain-driven Web 3.0, which consists of enabling
infrastructure, quantum cryptography protocols, and quantum blockchain-based
services. Moreover, we have discussed potential applications and challenges in
the proposed platform. Finally, the QNFT protocol based on QDPoS has been
studied as the use case, and a QDLA-based optimal auction has been proposed
for improving liquidity in the NFT trading market. The experimental results
have demonstrated the effectiveness and efficiency of the proposed QDLA.
## References
* [1] C. Chen, L. Zhang, Y. Li, T. Liao, S. Zhao, Z. Zheng, H. Huang, and J. Wu, “When digital economy meets web 3.0: Applications and challenges,” IEEE Open Journal of the Computer Society, 2022.
* [2] X. Wang, X. Ren, C. Qiu, Y. Cao, T. Taleb, and V. C. Leung, “Net-in-ai: A computing-power networking framework with adaptability, flexibility, and profitability for ubiquitous ai,” IEEE Network, vol. 35, no. 1, pp. 280–288, 2020.
* [3] Y. Lin, Z. Gao, H. Du, D. Niyato, J. Kang, R. Deng, and X. S. Shen, “A unified blockchain-semantic framework for wireless edge intelligence enabled web 3.0,” arXiv preprint arXiv:2210.15130, 2022.
* [4] A. K. Fedorov, E. O. Kiktenko, and A. I. Lvovsky, “Quantum computers put blockchain security at risk,” 2018.
* [5] S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, et al., “Advances in quantum cryptography,” Advances in optics and photonics, vol. 12, no. 4, pp. 1012–1236, 2020.
* [6] W. Wang, Y. Yu, and L. Du, “Quantum blockchain based on asymmetric quantum encryption and a stake vote consensus algorithm,” Scientific Reports, vol. 12, no. 1, pp. 1–12, 2022.
* [7] S. S. Pandey, T. Dash, P. K. Panigrahi, and A. Farouk, “Efficient quantum non-fungible tokens for blockchain,” arXiv preprint arXiv:2209.02449, 2022.
* [8] Q. Li, J. Wu, J. Quan, J. Shi, and S. Zhang, “Efficient quantum blockchain with a consensus mechanism qdpos,” IEEE Transactions on Information Forensics and Security, vol. 17, pp. 3264–3276, 2022.
* [9] N. C. Luong, Z. Xiong, P. Wang, and D. Niyato, “Optimal auction for edge computing resource management in mobile blockchain networks: A deep learning approach,” in 2018 IEEE international conference on communications (ICC), pp. 1–6, IEEE, 2018.
* [10] Z. Qu, Z. Zhang, and M. Zheng, “A quantum blockchain-enabled framework for secure private electronic medical records in internet of medical things,” Information Sciences, vol. 612, pp. 942–958, 2022.
* [11] A. Coladangelo and O. Sattath, “A quantum money solution to the blockchain scalability problem,” Quantum, vol. 4, p. 297, 2020.
* [12] Y. Cao, Y. Zhao, Q. Wang, J. Zhang, S. X. Ng, and L. Hanzo, “The evolution of quantum key distribution networks: On the road to the qinternet,” IEEE Communications Surveys & Tutorials, vol. 24, no. 2, pp. 839–894, 2022.
* [13] A. A. Abd El-Latif, B. Abd-El-Atty, I. Mehmood, K. Muhammad, S. E. Venegas-Andraca, and J. Peng, “Quantum-inspired blockchain-based cybersecurity: Securing smart edge utilities in iot-based smart cities,” Information Processing & Management, vol. 58, no. 4, p. 102549, 2021.
* [14] M. Bhavin, S. Tanwar, N. Sharma, S. Tyagi, and N. Kumar, “Blockchain and quantum blind signature-based hybrid scheme for healthcare 5.0 applications,” Journal of Information Security and Applications, vol. 56, p. 102673, 2021.
* [15] M. Xu, W. C. Ng, W. Y. B. Lim, J. Kang, Z. Xiong, D. Niyato, Q. Yang, X. S. Shen, and C. Miao, “A full dive into realizing the edge-enabled metaverse: Visions, enabling technologies, and challenges,” IEEE Communications Surveys & Tutorials, 2022.
Minrui Xu<EMAIL_ADDRESS>received the B.S. degree from Sun Yat-Sen
University, Guangzhou, China, in 2021. He is currently working toward the
Ph.D. degree in the School of Computer Science and Engineering, Nanyang
Technological University, Singapore. His research interests mainly focus on
Metaverse, quantum information technologies, deep reinforcement learning, and
mechanism design.
---
Xiaoxu Ren [S’20]<EMAIL_ADDRESS>is currently pursuing the Ph.D.
degree at the College of Intelligence and Computing, Tianjin University,
Tianjin, China. She received the B.S. degree from the College of Science,
Inner Mongolia University of Technology, China, in 2016. Her current research
interests include machine learning, computing power networking, and
blockchain.
---
Dusit Niyato [M’09, SM’15, F’17]<EMAIL_ADDRESS>is currently a professor
in the School of Computer Science and Engineering, Nanyang Technological
University, Singapore. He received the B.Eng. degree from King Mongkut's
Institute of Technology Ladkrabang (KMITL), Thailand in 1999 and the Ph.D. in
electrical and computer engineering from the University of Manitoba, Canada in
2008. His research interests are in the areas of Internet of Things (IoT),
machine learning, and incentive mechanism design.
---
Jiawen Kang [M’18]<EMAIL_ADDRESS>received the M.S. degree and the
Ph.D. degree from the Guangdong University of Technology, China, in 2015 and
2018, respectively. He is currently a full professor at the Guangdong
University of Technology. He was a postdoc at Nanyang Technological University
from 2018 to 2021, Singapore. His research interests mainly focus on
blockchain, security, and privacy protection in wireless communications and
networking.
---
Chao Qiu [S’15, M’19]<EMAIL_ADDRESS>is currently a lecturer in the
School of Computer Science and Technology, College of Intelligence and
Computing, Tianjin University. She received the B.S. degree from China
Agricultural University in 2013 in communication engineering and the Ph.D.
from Beijing University of Posts and Telecommunications in 2019 in information
and communication engineering. From September 2017 to September 2018, she
visited Carleton University, Ottawa, ON, Canada, as a visiting scholar. Her
current research interests include machine learning, computing power
networking and blockchain.
---
Zehui Xiong [M’20]<EMAIL_ADDRESS>is an Assistant Professor at
Singapore University of Technology and Design. Prior to that, he was a
researcher with Alibaba-NTU Joint Research Institute, Singapore. He received
the Ph.D. degree in Computer Science and Engineering at Nanyang Technological
University, Singapore. He was a visiting scholar with Princeton University and
University of Waterloo. His research interests include wireless
communications, network games and economics, blockchain, and edge
intelligence.
---
Xiaofei Wang [S’06, M’13, SM’18]<EMAIL_ADDRESS>is currently a
professor at Tianjin University, China. He received his master's and doctoral
degrees from Seoul National University in 2006 and 2013, respectively, and was
a postdoctoral fellow with The University of British Columbia from 2014 to 2016. He
was a recipient of the National Thousand Talents Plan (Youth) of China. In
2017, he received the Fred W. Ellersick Prize from the IEEE Communication
Society. His current research interests include social-aware cloud computing,
cooperative cell caching, and mobile traffic offloading.
---
Victor C. M. Leung [S’75, M’89, SM’97, F’03, LF’20]<EMAIL_ADDRESS>is a
distinguished professor of computer science and software engineering at
Shenzhen University. He is also an emeritus professor of electrical and
computer engineering and the Director of the Laboratory for Wireless Networks
and Mobile Systems at the University of British Columbia (UBC). His research
is in the areas of wireless networks and mobile systems. He is a Fellow of the
Royal Society of Canada, a Fellow of the Canadian Academy of Engineering, and
a Fellow of the Engineering Institute of Canada.
---
# Universal quantum computer based on Carbon Nanotube Rotators
Motohiko Ezawa Department of Applied Physics, The University of Tokyo, 7-3-1
Hongo, Tokyo 113-8656, Japan Shun Yasunaga Department of Electrical
Engineering, University of Tokyo, Hongo 7-3-1, 113-8656, Japan Tetsuya Iizuka
Department of Electrical Engineering, University of Tokyo, Hongo 7-3-1,
113-8656, Japan Akio Higo Department of Electrical Engineering, University
of Tokyo, Hongo 7-3-1, 113-8656, Japan Yoshio Mita Department of Electrical
Engineering, University of Tokyo, Hongo 7-3-1, 113-8656, Japan
###### Abstract
We propose a universal quantum computer based on a chain of carbon nanotube
rotators where one metallic plate is attached to each rotator. The dynamical
variable is the rotational angle $\phi$. The attached plate connected to
ground electrostatically interacts with two fixed plates. Two angle positions
$\phi=0,\pi$ are made stable by applying a voltage difference between the
attached plate and the two fixed plates. We assign $\phi=0$ and $\pi$ to the
qubit states $|0\rangle$ and $|1\rangle$. Then, considering a chain of
rotators, we construct the arbitrary phase-shift gate, the NOT gate and the
Ising gate, which constitute a set of universal quantum gates. They are
executed by controlling the voltage between various plates.
## I Introduction
Moore's law is a fundamental roadmap for integrated circuits, dictating that
the number of elements increases exponentially year by year. Equivalently, the
size of each element must shrink exponentially. However, there is an intrinsic
limit set by the size of atoms, of the order of 1 nm, which bounds Moore's
law. The quantum computer is expected to be a solution to overcome this
limit[1, 2, 3]. Quantum computers are based
on qubits, where the superposition of the quantum states $|0\rangle$ and
$|1\rangle$ is used. Various methods have been proposed such as
superconductors[4], photonic systems[5], ion traps[6], nuclear magnetic
resonance[7, 8], quantum dots[9], skyrmions[10, 11] and merons[12].
Nanomechanical systems are also applicable to quantum computers[13, 14, 15,
16].
A quantum algorithm is decomposed into a sequential application of quantum
gates. The Solovay-Kitaev theorem assures that only three quantum gates, the
$\pi/4$ phase-shift gate, the Hadamard gate and the CNOT gate, are enough for
universal quantum computations[17, 18, 19]. Alternatively, the arbitrary
phase-shift gate, the NOT gate and the Ising gate constitute a set of
universal quantum gates as well.
Nano-electromechanical systems (NEMS)[20, 21] have various industrial
applications. A nanorotator based on a carbon nanotube has been experimentally
realized[22, 23, 24, 25]. Especially, a double-wall nanotube structure acts as
a nanomotor[26, 23, 27, 28, 24, 29, 25, 30]. A carbon nanotube can be metallic
depending on its chirality[31]. In addition, it is possible to
attach a metallic plate to a nanotube[22, 27]. Quantum effects are
experimentally observed in NEMS[32, 33, 34].
In this paper, we propose a universal quantum computer by constructing a set
of universal quantum gates. We prepare a rotator based on a double-wall
nanotube as illustrated in Fig.1(a). We attach one metallic plate to the inner
nanotube. It is possible to materialize such a nanorotator by using the
present techniques[22, 27]. Then, we align these rotators along a line with
equal spacing, which is the main configuration of our proposal. We explicitly
design the arbitrary phase-shift gate, the NOT gate and the Ising gate. They
are controlled by the voltage between two plates.
Figure 1: (a) Illustration of a nanotube rotator suspended by the double-wall
nanotube structures at the top and the bottom. One metallic plate (in orange)
is attached to the nanotube. Two metallic plates (in blue) are fixed to the
outer system. We connect the inner plate to ground. When we give a voltage to
the two outer plates, the $\cos 2\phi$ potential is induced. When we give a
voltage to one of the two outer plates, the $\cos\phi$ potential is induced.
On the other hand, when we give a voltage between the two plates attached to
two nanotubes, the Ising interaction is induced. (b) The configuration of a
rotator with $\phi=0$, representing the qubit state $\left|0\right\rangle$.
(c) The configuration of a rotator with $\phi=\pi$, representing the qubit
state $\left|1\right\rangle$. (d) The configuration of a rotator with a
generic angle $\phi$. Dotted circles denote the rotator parts.
## II Model
### II.1 Carbon nanotube rotator
We consider a rotator whose dynamical variable is the rotational angle $\phi$
with the potential energy given by[27]
$W_{2}(\phi)=-A\cos 2\phi.$ (1)
There are two stable angles $\phi=0$ and $\pi$, which we regard as forming the
one-qubit states {$|0\rangle,|1\rangle$}. In addition, we introduce a potential
term given by[27]
$W_{1}(\phi)=-B\cos\phi,$ (2)
which resolves the degeneracy between the states $|0\rangle$ and $|1\rangle$.
The Schrödinger equation for one rotator reads
$i\hbar\frac{d}{dt}\psi\left(t\right)=H\psi\left(t\right),$ (3)
with the Hamiltonian given by
$H=-\frac{\hbar^{2}}{2\mu r^{2}}\frac{\partial^{2}}{\partial\phi^{2}}-A\cos
2\phi-B\cos\phi,$ (4)
where $\mu$ is the inertia of the rotator and $r$ is the radius of the
rotator. The eigenequation reads
$H\psi=E\psi.$ (5)
The Hamiltonian (4) may be materialized by a rotator, which is made of a
nanotube (in green) supported by the double-wall nanotube structures at the
top and the bottom, as illustrated in Fig.1(a). We attach one metal plate (in
orange) to the nanotube, which we call the inner plate. Then, we introduce two
metal plates (in blue) fixed to an outer system, which we call the outer
plates.
1) We connect the inner plate to ground. It materializes the potential energy
(1) when we give a voltage $\varpropto V_{2}$ to the two outer plates. There
are two stable angles $\phi=0$ and $\pi$, as illustrated in Fig.1(b1) and
(c1). We also illustrate a rotor with a generic angle $\phi$ in Fig.1(d1).
2) We connect the inner plate to ground. It materializes the potential energy
(2) when we give a voltage $\varpropto V_{1}$ to one of the two outer plates.
There is one stable angle $\phi=0$, as illustrated in Fig.1(b2). We also
illustrate a rotor with a generic angle $\phi$ in Fig.1(d2).
### II.2 Whittaker–Hill Equation
The Schrödinger equation (3) may be rewritten in a dimensionless form as
$i\frac{d}{d\mathcal{\tau}}\psi\left(\phi,\tau\right)=\mathcal{H}\psi\left(\phi,\tau\right),$
(6)
with the dimensionless Hamiltonian,
$\mathcal{H}=-\frac{d^{2}}{d\phi^{2}}+\mathcal{V}(\phi),$ (7)
and the dimensionless potential,
$\mathcal{V}(\phi)=-\mathcal{V}_{2}\cos
2\phi-\mathcal{V}_{1}\cos\phi,$ (8)
where
$\displaystyle\mathcal{\tau}$ $\displaystyle=\frac{\hbar}{2\mu
r^{2}}t,\quad\varepsilon=\frac{2\mu r^{2}}{\hbar^{2}}E,\quad$
$\displaystyle\mathcal{V}_{2}$ $\displaystyle=\frac{2\mu
r^{2}}{\hbar^{2}}A,\quad\mathcal{V}_{1}=\frac{2\mu r^{2}}{\hbar^{2}}B.$ (9)
The dimensionless quantity $\mathcal{V}_{2}$ ($\mathcal{V}_{1}$) is given in
terms of the voltage difference $V_{2}$ ($V_{1}$) between the inner plate and
the two (one) outer plates,
$\mathcal{V}_{i}=\frac{1}{2}CV_{i}^{2},$ (10)
where $C$ is the capacitance of the inner-outer plate system, as indicated by
$\cos 2\phi$ ($\cos\phi$) in Fig.1(a). We show the potential
$\mathcal{V}(\phi)$ for $\mathcal{V}_{1}=0$ in Fig.2(a1) and for
$\mathcal{V}_{1}=1$ in Fig.2(b1) by setting $\mathcal{V}_{2}=20$.
Figure 2: (a1) Potential energy as a function of $\phi$, where there are two
minima at $\phi=0$ and $\pi$; (a2) wave functions as a function of $\phi$,
where the magenta curve indicates the symmetric ground state $\psi_{+}$ and
the cyan curve indicates the antisymmetric first-excited state $\psi_{-}$.
Here, we have set $\mathcal{V}_{1}=0$ and $\mathcal{V}_{2}=20$. (b1) Potential
energy as a function of $\phi$, where there is only one minimum at $\phi=0$;
(b2) wave functions as a function of $\phi$, where the magenta curve indicates
the eigenfunction $\psi_{0}$ of the state $|0\rangle$ and the cyan curve
indicates $\psi_{1}$ of $|1\rangle$. Here, we have set $\mathcal{V}_{1}=1$ and
$\mathcal{V}_{2}=20$.
The eigenequation $\mathcal{H}\psi=\varepsilon\psi$ reads
$\left[\frac{d^{2}}{d\phi^{2}}+\mathcal{V}_{2}\cos
2\phi+\mathcal{V}_{1}\cos\phi\right]\psi=-\varepsilon\psi.$ (11)
This is the Whittaker–Hill equation, which is reduced to the Mathieu equation
for $\mathcal{V}_{1}=0$.
### II.3 Strong potential limit
As the basic picture of the present model, we require the dominant role of the
cosine potential $\cos 2\phi$ to generate twofold-degenerate ground states
at $\phi=0$ and $\pi$, which we regard as forming the one-qubit states
{$|0\rangle,|1\rangle$}. On the other hand, we use the cosine potential
$\cos\phi$ to make gate operations. Hence, we consider the regime where
$\mathcal{V}_{2}\gg\mathcal{V}_{1}\geq 0$. It is possible to derive the
analytical solutions around these two points for sufficiently large
$\mathcal{V}_{2}$. The Whittaker-Hill equation (11) is approximated as
$\displaystyle\left[\frac{d^{2}}{d\phi^{2}}+\left(\mathcal{V}_{2}+\left(-1\right)^{q}\mathcal{V}_{1}\right)-\left(2\mathcal{V}_{2}+\left(-1\right)^{q}\frac{\mathcal{V}_{1}}{2}\right)\phi_{q}^{2}\right]\psi_{q}$
$\displaystyle\hskip 113.81102pt=\varepsilon_{q}\psi_{q},$ (12)
where $\phi_{q}=\phi-q\pi$ with $q=0,1$.
By solving Eq.(12), the wave function $\psi_{q}$ is given by
$\psi_{q}\left(\phi_{q}\right)=\left(\frac{2\alpha_{q}}{\pi}\right)^{1/4}e^{-\alpha_{q}\phi_{q}^{2}},$
(13)
with
$\alpha_{q}=\sqrt{\frac{\mathcal{V}_{2}}{2}+\left(-1\right)^{q}\frac{\mathcal{V}_{1}}{8}}.$
(14)
The energy of the state (13) is
$\varepsilon_{q}=-\left(\mathcal{V}_{2}+\left(-1\right)^{q}\mathcal{V}_{1}\right)+\sqrt{2\mathcal{V}_{2}+\left(-1\right)^{q}\frac{\mathcal{V}_{1}}{2}}.$
(15)
When $\mathcal{V}_{1}>0$, the ground state is given by $\psi_{0}$, and the
first excited state by $\psi_{1}$. The wave function $\psi_{q}$ describes the
one-qubit state $|q\rangle$ with the qubit variable $q=0,1$.
When $\mathcal{V}_{1}=0$, these two states are degenerate. However, an energy
splitting occurs due to the difference between the Whittaker-Hill equation
(11) and the approximated equation (12). Then, due to the mixing, the ground
state and the first-excited state wave functions turn out to be the symmetric
function $\psi_{+}$ and the antisymmetric function $\psi_{-}$,
$\psi_{+}=\frac{\psi_{0}+\psi_{1}}{\sqrt{2}},\quad\psi_{-}=\frac{\psi_{0}-\psi_{1}}{\sqrt{2}},$
(16)
however small the energy splitting is.
### II.4 Numerical analysis
The Whittaker–Hill equation is solved by making a Fourier series expansion,
$\psi=\sum_{n=-\infty}^{\infty}\alpha_{n}e^{in\phi}.$ (17)
The coefficient $\alpha_{n}$ is determined by solving a set of eigenequations,
$n^{2}\alpha_{n}-\frac{\mathcal{V}_{2}}{2}\left(\alpha_{n+2}+\alpha_{n-2}\right)-\frac{\mathcal{V}_{1}}{2}\left(\alpha_{n+1}+\alpha_{n-1}\right)=\varepsilon\alpha_{n}.$
(18)
This is summarized in the matrix form,
$\sum_{m}M_{nm}\alpha_{m}=\varepsilon\alpha_{n}.$ (19)
We have numerically solved this matrix equation by introducing a cutoff as in
$\psi=\sum_{n=-N}^{N}\alpha_{n}e^{in\phi},$ (20)
with a certain integer $N$. We have checked that it is enough to take $N=8$.
Here we use $N=12$.
Figure 3: (a) Eigenenergy $\varepsilon$ as a function of $\mathcal{V}_{2}$ by
setting $\mathcal{V}_{1}=0$. (b) Eigenenergy $\varepsilon$ as a function of
$\mathcal{V}_{1}$ by setting $\mathcal{V}_{2}=20$. The lowest two energy
levels are colored in red, while the other energy levels are colored in blue.
We show the energy spectrum as a function of $\mathcal{V}_{2}$ by setting
$\mathcal{V}_{1}=0$ in Fig.3(a). We are concerned about the lowest two energy
levels indicated in red, which are well separated from all the others. The
energy is twofold degenerate in the limit of
$\mathcal{V}_{2}\rightarrow\infty$. Their wave functions are given by the
symmetric function $\psi_{+}$ and the antisymmetric function $\psi_{-}$, as
shown in Fig.2(a2).
We show the energy spectrum as a function of $\mathcal{V}_{1}$ by setting
$\mathcal{V}_{2}=20$ in Fig.3(b). The almost twofold-degenerate energy
levels split linearly as a function of $\mathcal{V}_{1}$. The ground state and
the first-excited state wave functions are $\psi_{0}$ and $\psi_{1}$, which
are localized at $\phi=0$ and $\pi$, as shown in Fig.2(b2) for the case of
$\mathcal{V}_{2}=20$ and $\mathcal{V}_{1}=1$.
### II.5 Quantum tunneling
We use the two states $\left|0\right\rangle$ and $\left|1\right\rangle$ as the
one-qubit states. Because these two states are degenerate when
$\mathcal{V}_{1}=0$, one may wonder if they are naturally mixed by quantum
tunneling. Then, the life time of a qubit state is too short. However, this is
not the case. During gate operations we keep $\mathcal{V}_{2}$ quite large to
keeps the cosine potential well defined, as generates a quite large barrier
between these two states.
Quantum tunneling is estimated by means of the WKB approximation. The
tunneling rate $\Gamma$ is given by
$\Gamma=e^{-2\gamma},\quad\gamma\equiv\frac{1}{\hbar}\int_{0}^{\pi}d\phi\,\sqrt{2\mu
r^{2}\left(A+W_{2}\left(\phi\right)\right)}.$
(21)
By setting $W_{2}\left(\phi\right)=-A\cos 2\phi$ as in Eq.(1), we find
$\gamma=\frac{4\sqrt{\mu r^{2}A}}{\hbar}.$ (22)
Hence, the quantum tunneling is exponentially small as a function of the
applied voltage $V_{2}$. It is estimated that $\gamma=10^{6}\sim 10^{8}$ for
$V_{2}=1\,$mV $\sim 100\,$mV, where we have used that the inertia $\mu r^{2}$
is $10^{-30}\,$kg$\,$m${}^{2}$[38]. See Sec.IV with respect to these
parameters.
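This estimate can be checked numerically. In the sketch below the barrier amplitude $A=\frac{1}{2}CV_{2}^{2}$ is computed from an assumed capacitance $C=10^{-18}$ F (a hypothetical value chosen for illustration; the paper does not state $C$ here), and the WKB integral is compared against the dimensionally consistent closed form $4\sqrt{\mu r^{2}A}/\hbar$, written with the barrier amplitude $A$ rather than the dimensionless $\mathcal{V}_{2}$.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
mu_r2 = 1e-30               # inertia mu*r^2 in kg m^2 (value quoted in the text)
C = 1e-18                   # F, hypothetical plate capacitance (illustrative)
V2 = 1e-3                   # applied voltage, 1 mV
A = 0.5 * C * V2**2         # barrier amplitude in J

# WKB exponent: gamma = (1/hbar) * int_0^pi sqrt(2 mu r^2 * A (1 - cos 2phi)) dphi
phi = np.linspace(0.0, np.pi, 100001)
integrand = np.sqrt(2.0 * mu_r2 * A * (1.0 - np.cos(2.0 * phi)))
dphi = phi[1] - phi[0]
gamma_num = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dphi) / hbar  # trapezoid rule

gamma_closed = 4.0 * np.sqrt(mu_r2 * A) / hbar
print(gamma_num, gamma_closed)  # both about 2.7e7, within the quoted 1e6-1e8 range
```

With these illustrative numbers, $\gamma\approx 2.7\times 10^{7}$, consistent with the $10^{6}\sim 10^{8}$ range quoted above.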
## III Qubit operations
### III.1 Initialization
The present system is composed of a chain of nanotube rotators, each of which
is subject to the cosine potential $\cos 2\phi_{n}$ as in Fig.2(a1) by
requiring $\mathcal{V}_{1}=0$. The ground states are $N$-qubit states
$\left|q_{1}q_{2}\cdots q_{N}\right\rangle$ with $q_{n}=0,1$ for
$n=1,2,\cdots,N$. The initialization to the state $\left|00\cdots
0\right\rangle$ is necessary for quantum computations. It is done by the
annealing method. First, we start with a high temperature, where the
rotational angle is random. Then, we cool down the sample slowly by applying a
voltage $\mathcal{V}_{1}$ to all the outer plates in the right-hand side of
the rotators. The rotators tend to the angle $\phi_{n}=0$ in order to minimize
the electrostatic energy. As a result, all of the rotators have the angle
$\phi_{n}=0$, which corresponds to the state $\left|00\cdots 0\right\rangle$.
### III.2 One-qubit gates
We construct quantum gates for universal quantum computations. It is enough to
design the arbitrary phase-shift gate and the NOT gate with respect to one-
qubit gates. These gates are realized by varying the parameters
$\mathcal{V}_{1}$ and $\mathcal{V}_{2}$ in the potential (8). We investigate
the quantum dynamics governed by
$i\frac{d}{d\tau}\psi\left(\phi,\tau\right)=\mathcal{H}\left(\tau\right)\psi\left(\phi,\tau\right),$
(23)
or equivalently,
$i\frac{d}{d\tau}\alpha_{n}\left(\tau\right)=\sum_{m}M_{nm}\left(\tau\right)\alpha_{m}\left(\tau\right),$
(24)
in terms of the coefficients $\alpha_{n}$ in Eq.(17). We numerically solve
these differential equations in the following.
In the present instance, we assume that either $\mathcal{V}_{1}$ or
$\mathcal{V}_{2}$ is time dependent during a gate operation. We consider a
quantum gate operation satisfying
$\mathcal{H}(\tau_{\text{final}})=\mathcal{H}(\tau_{\text{initial}}).$ (25)
Namely, we tune so that
$\mathcal{V}_{2}(\tau_{\text{final}})=\mathcal{V}_{2}(\tau_{\text{initial}})$
and
$\mathcal{V}_{1}(\tau_{\text{final}})=\mathcal{V}_{1}(\tau_{\text{initial}})$.
Then, the wave function after the gate operation is expanded as a
superposition of the two Gaussian functions (13), which enables us to
determine the coefficients of $\left|0\right\rangle$ and
$\left|1\right\rangle$.
This process is represented by a unitary matrix $U$ from the initial state
$(\left|0\right\rangle_{\text{initial}},\left|1\right\rangle_{\text{initial}})$
to the final state
$(\left|0\right\rangle_{\text{final}},\left|1\right\rangle_{\text{final}})$
defined by
$\left(\begin{array}[]{c}\left|0\right\rangle_{\text{final}}\\\
\left|1\right\rangle_{\text{final}}\end{array}\right)=U\left(\begin{array}[]{c}\left|0\right\rangle_{\text{initial}}\\\
\left|1\right\rangle_{\text{initial}}\end{array}\right).$ (26)
This unitary matrix defines a one-qubit gate.
### III.3 Phase-shift gate
We construct the arbitrary phase-shift gate defined by
$U_{Z}(\theta)\equiv\text{diag.}(1,e^{i\theta}),$ (27)
whose action is
$U_{Z}(\theta)\left|0\right\rangle=\left|0\right\rangle,\qquad
U_{Z}(\theta)\left|1\right\rangle=e^{i\theta}\left|1\right\rangle.$ (28)
As is well known, time evolution generates a phase to a state according to the
Schrödinger equation. Hence, the two states $|0\rangle$ and $|1\rangle$
acquire different phases as time evolves, provided their energies are made
different by the presence of $\mathcal{V}_{1}$. This is the basic idea of the
phase-shift gate.
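This mechanism can be checked with the truncated Fourier matrix of Sec. II.4. In the minimal sketch below (our own check, not the authors' code), each of the lowest two eigenstates of $\mathcal{H}$ at $\mathcal{V}_{2}=20$, $\mathcal{V}_{1}=1$ evolves as $e^{-i\varepsilon_{q}\tau}$, so the relative phase grows as $(\varepsilon_{1}-\varepsilon_{0})\tau$.

```python
import numpy as np

def hamiltonian_matrix(V2, V1, N=12):
    # Fourier-basis matrix of H = -d^2/dphi^2 - V2 cos(2 phi) - V1 cos(phi)
    n = np.arange(-N, N + 1)
    M = np.diag(n.astype(float) ** 2)
    for i in range(len(n)):
        for j in range(len(n)):
            if abs(n[i] - n[j]) == 2:
                M[i, j] = -V2 / 2
            elif abs(n[i] - n[j]) == 1:
                M[i, j] = -V1 / 2
    return M

M = hamiltonian_matrix(V2=20.0, V1=1.0)
w, V = np.linalg.eigh(M)

tau = 0.5
U = V @ np.diag(np.exp(-1j * w * tau)) @ V.conj().T  # exact propagator e^{-iM tau}

# Evolve the two lowest eigenstates and read off the phase each acquires
phase0 = np.angle(V[:, 0].conj() @ (U @ V[:, 0]))
phase1 = np.angle(V[:, 1].conj() @ (U @ V[:, 1]))
rel = phase1 - phase0  # relative phase of |1> against |0>
print(rel, -(w[1] - w[0]) * tau)
```

Since the energy splitting is controlled by $\mathcal{V}_{1}$, the accumulated relative phase (and hence $\theta$) is tuned by how long the $\mathcal{V}_{1}$ pulse is held on.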
Figure 4: (a) Time evolution of phase modulation. We have set $\tau_{1}=4$ and
$\tau_{2}-\tau_{1}=0.41p$ with $p=0,1,\cdots,8$. (b) Phase modulation as a
function of $\tau_{2}$, where $4\leq\tau_{2}\leq 8$. We have set
$\mathcal{V}_{2}=20$, $\mathcal{V}_{1}=1$, $\mathcal{T}=2$ and $T=10$.
We temporarily control $\mathcal{V}_{1}$ by tuning an applied voltage difference
according to the formula
$\mathcal{V}_{1}\left(\tau\right)=\frac{\mathcal{\bar{V}}_{1}}{2}\left[\tanh\frac{\tau-\tau_{2}}{\mathcal{T}}-\tanh\frac{\tau-\tau_{1}}{\mathcal{T}}+2\right],$
(29)
with $\tau_{2}\gg\tau_{1}$, while we fix $\mathcal{V}_{2}\neq 0$. We start
either from the state $|0\rangle$ or $|1\rangle$. The absolute value of the
wave function does not change its form but only the phase rotation is
modulated because the state remains in the bottom of the cosine potential. We
show the phase modulation as a function of time for various $\tau_{2}$ in
Fig.4(a). The phase difference between the initial state and the final state
is shown in Fig.4(b), which is linear as a function of $\tau_{2}-\tau_{1}$.
The phase modulations between the two states $|0\rangle$ and $|1\rangle$ are
opposite as in
$U_{\theta}\equiv\text{diag.}(e^{-i\theta/2},e^{i\theta/2}),$ (30)
because the energy splitting is opposite between them. Here,
$\theta/2\pi=\mathcal{\bar{V}}_{1}f(\mathcal{V}_{2})(\tau_{2}-\tau_{1}).$ (31)
We find $f(\mathcal{V}_{2})=-0.3$ in the case of $\mathcal{V}_{2}=20$ by
fitting the line in Fig.4(b). This is equivalent to the phase-shift gate (27),
because the overall phase is irrelevant to quantum gate operations.
### III.4 $\pi/4$ phase-shift gate
The $\pi/4$ phase-shift gate
$U_{T}\equiv\text{diag.}(1,e^{i\pi/4})$ (32)
is realized by setting $\theta=\pi/4$ in the generic phase-shift gate (27).
### III.5 Pauli-Z gate
The Pauli-Z gate is realized by the $z$ rotation with the angle $\pi$ as
$U_{Z}=-iU_{Z}\left(\pi\right)$ (33)
in the generic phase-shift gate (27).
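A quick numerical check (assuming the symmetric convention of Eq. (30)) confirms that $-iU_{\theta}$ at $\theta=\pi$ agrees with the Pauli-Z matrix up to an irrelevant global phase:

```python
import numpy as np

# Check that the Pauli-Z construction of Eq. (33) yields the Pauli-Z matrix
# up to a global phase, using the symmetric convention of Eq. (30).
def U_theta(theta):
    # U_theta = diag(e^{-i theta/2}, e^{i theta/2}), Eq. (30)
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

Z = np.diag([1.0, -1.0])
A = -1j * U_theta(np.pi)   # Eq. (33)

# Two unitaries are equal up to a global phase iff |tr(A^dagger B)| = dim = 2.
overlap = abs(np.trace(A.conj().T @ Z))
print(overlap)  # ~ 2.0
```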
### III.6 NOT gate
We construct the NOT gate defined by
$U_{\text{NOT}}\equiv\left(\begin{array}[]{cc}0&1\\\ 1&0\end{array}\right).$
(34)
This gate exchanges the two states $|0\rangle$ and $|1\rangle$. The operation
is impossible while the potential $\mathcal{V}_{2}$ is kept finite, since the
barrier prohibits the quantum tunneling between the two states.
For this purpose we temporarily control $\mathcal{V}_{2}$ by tuning the applied
voltage to the rotator in such a way that
$\mathcal{V}_{2}\left(\tau\right)=\frac{\mathcal{\bar{V}}_{2}}{2}\left[\tanh\frac{\tau-\tau_{2}}{\mathcal{T}}-\tanh\frac{\tau-\tau_{1}}{\mathcal{T}}+2\right],$
(35)
with $\tau_{2}\gg\tau_{1}$, while we set $\mathcal{V}_{1}=0$. The gate
operation requires that the initial state $|0\rangle$ be transferred to the
final state $|1\rangle$ as in Fig.5(c). We explain how to obtain this time
evolution. Let its time evolution be described by the wave function
$\psi(\phi,\tau)$. The initial condition implies that it satisfies
$\psi(0,\tau)=\psi_{\text{max}}$ and $\psi(\pi,\tau)=0$ at $\tau=0$, where
$\psi_{\text{max}}$ is the maximum value of $|\psi(\phi,\tau)|$. The final
state should satisfy $\psi(0,\tau)=0$ and $\psi(\pi,\tau)=\psi_{\text{max}}$
at $\tau=T\gg\tau_{2}$, because $|0\rangle$ is transformed to the state
$|1\rangle$.
This is a nontrivial problem depending on the parameter $\tau_{2}$ in the
applied voltage $\mathcal{V}_{2}\left(\tau\right)$. We fix $\tau_{1}$
arbitrarily and solve the Schrödinger equation for $\psi(\phi,T)$ as a
function of $\tau_{2}$, whose result we show in Fig.5(a). There is a certain
value of $\tau_{2}$ where $|\psi(0,T)|=0$ and
$|\psi(\pi,T)|=\psi_{\text{max}}$, as is clear in Fig.5(b). Then, we show the
dynamics of $|\psi(\phi,\tau)|$ with the use of this value of $\tau_{2}$ in
Fig.5(c) and (d), where it is seen that the initial state $|0\rangle$
localized at $\phi=0$ is transformed to the final state $|1\rangle$ localized
at $\phi=\pm\pi$. This is the action of the NOT gate.
Figure 5: Time evolution of the NOT gate operation. (a) Bird’s-eye view of
the final state $|\psi(\phi,T)|$ as a function of $\phi$ and $\tau_{2}$, where
$-\pi\leq\phi<\pi$ and $4\leq\tau_{2}\leq 16$. (b) The final state
$|\psi(0,T)|$ colored in red and $|\psi(\pi,T)|$ colored in blue as a function
of $\tau_{2}$, where $4\leq\tau_{2}\leq 16$. (c) Bird’s-eye view of
$|\psi(\phi,\tau)|$, where $-\pi\leq\phi<\pi$ and $0\leq\tau<T$. (d) Top view
of $|\psi(\phi,\tau)|$. We have set $\mathcal{\bar{V}}_{2}=20$,
$\mathcal{V}_{1}=0$, $\tau_{1}=4$, $\mathcal{T}$ $=2$ and $T=20$ in (a) and
(b). We have additionally set $\tau_{2}=12.6$ in (c) and (d).
### III.7 Hadamard gate
The Hadamard gate is defined by
$U_{\text{H}}\equiv\frac{1}{\sqrt{2}}\left(\begin{array}[]{cc}1&1\\\
1&-1\end{array}\right).$ (36)
It is realized by a sequential application of the Pauli-Z gates and the NOT
gate [35] as
$U_{\text{H}}=-iU_{Z}U_{\text{NOT}}U_{Z}.$ (37)
### III.8 Two-qubit gates
A two-qubit system is made of two rotators put along the $x$ axis as in Fig.1.
The two-qubit state is expressed as $\left|q_{1}q_{2}\right\rangle$ with
$q_{n}=0,1$. An example of the state $\left|01\right\rangle$ is given in the
system made of Fig.1(b3) and (c3). A two-qubit gate operation transforms the
initial state $\left|q_{1}q_{2}\right\rangle_{\text{initial}}$ to the final
state $\left|q_{1}q_{2}\right\rangle_{\text{final}}$ as
$\left(\begin{array}[]{c}\left|00\right\rangle_{\text{final}}\\\
\left|01\right\rangle_{\text{final}}\\\
\left|10\right\rangle_{\text{final}}\\\
\left|11\right\rangle_{\text{final}}\end{array}\right)=U\left(\begin{array}[]{c}\left|00\right\rangle_{\text{initial}}\\\
\left|01\right\rangle_{\text{initial}}\\\
\left|10\right\rangle_{\text{initial}}\\\
\left|11\right\rangle_{\text{initial}}\end{array}\right),$ (38)
which defines the two-qubit gate operation $U$.
### III.9 Two-qubit phase-shift gate
We apply the voltage difference $V_{12}$ between the two rotators as in
Fig.1(b3) and (c3). The potential energy is given by
$W\left(\phi_{1},\phi_{2}\right)\equiv
W_{2}\left(\phi_{1}\right)+W_{2}\left(\phi_{2}\right)+\frac{C\left(\phi_{1},\phi_{2}\right)}{2}V_{12}^{2},$
(39)
where $W_{2}\left(\phi\right)$ is the potential energy given by Eq.(1), and
$C(\phi_{1},\phi_{2})$ is the capacitance between the rotators,
$C(\phi_{1},\phi_{2})=\frac{\varepsilon_{0}S}{L\left(\phi_{1},\phi_{2}\right)}.$
(40)
Here, $\varepsilon_{0}$ and $S$ are the permittivity and the area of the
plates, while $L\left(\phi_{1},\phi_{2}\right)$ is the distance between the
two plates attached to the rotators.
When we apply the Ising gate, the absolute values of the wave functions do not
change but only the phases are modulated. The wave functions are $|0\rangle$
or $|1\rangle$, and hence we concentrate on the configurations with
$\phi_{1},\phi_{2}=0,\pi$. There are relations,
$\displaystyle L\left(0,0\right)$ $\displaystyle=L\left(\pi,\pi\right)=\ell,$
(41) $\displaystyle L\left(0,\pi\right)$ $\displaystyle=2R+\ell,\quad
L\left(\pi,0\right)=-2R+\ell,$ (42)
where $R$ is the radius of the rotation of the rotator, and $\ell$ is the
length between the supporting points of two adjacent rotators: See Fig.1(b3)
and (c3).
We calculate
$\displaystyle W\left(0,0\right)$
$\displaystyle=W\left(\pi,\pi\right)=\frac{\varepsilon_{0}S}{\ell}\frac{V_{12}^{2}}{2}\equiv
E_{0},$ (43) $\displaystyle W\left(0,\pi\right)$
$\displaystyle=\frac{\varepsilon_{0}S}{\ell+2R}\frac{V_{12}^{2}}{2}\equiv
E_{+},$ (44) $\displaystyle W\left(\pi,0\right)$
$\displaystyle=\frac{\varepsilon_{0}S}{\ell-2R}\frac{V_{12}^{2}}{2}\equiv
E_{-}.$ (45)
We show that the potential energy (39) may be written in the form of the Ising
model with field $B_{j}$,
$H_{\text{Ising}}=\sum_{j=1}^{N-1}J_{j}s_{j}s_{j+1}+\sum_{j=1}^{N}B_{j}s_{j}+E_{0},$
(46)
where $s_{j}=\pm 1$. We rewrite Eq.(46) as
$H_{\text{Ising}}=\sum_{j=1}^{N-1}H_{j}\left(s_{j},s_{j+1}\right)+\frac{B_{1}}{2}s_{1}+\frac{B_{N}}{2}s_{N},$
(47)
with
$H_{j}\left(s_{j},s_{j+1}\right)=J_{j}s_{j}s_{j+1}+\frac{B_{j}}{2}s_{j}+\frac{B_{j+1}}{2}s_{j+1}+\frac{E_{0}}{N-1}.$
(48)
We realize the term (48) by a system made of two adjacent buckled plates $j$
and $j+1$. There are relations
$\displaystyle H_{j}\left(1,1\right)$ $\displaystyle=W\left(0,0\right),\qquad
H_{j}\left(-1,-1\right)=W\left(\pi,\pi\right),$ (49) $\displaystyle
H_{j}\left(1,-1\right)$ $\displaystyle=W\left(0,\pi\right),\qquad
H_{j}\left(-1,1\right)=W\left(\pi,0\right).$ (50)
The coefficients in the Ising model are given by
$\displaystyle J_{j}$ $\displaystyle=\frac{2E_{0}-E_{+}-E_{-}}{4},$ (51)
$\displaystyle B_{j}$ $\displaystyle=-B_{j+1}=\frac{E_{+}-E_{-}}{4},$ (52)
$\displaystyle E_{0}$ $\displaystyle=\frac{2E_{0}+E_{+}+E_{-}}{4}.$ (53)
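The identification (51)-(53) can be verified numerically: writing the constant of Eq. (46) as a separate symbol (to avoid clashing with the $E_{0}$ of Eq. (43)), the two-spin Ising energy reproduces the four plate energies of Eqs. (43)-(45). The energy values below are arbitrary placeholders:

```python
# Numerical check of the Ising coefficients (51)-(53): with J, B1 = -B2, and
# the constant term E0_ising chosen as in the text, the two-spin Ising energy
# J*s1*s2 + B1*s1 + B2*s2 + E0_ising reproduces W(0,0)=E0, W(0,pi)=E+,
# W(pi,0)=E-, W(pi,pi)=E0. The energy values are hypothetical test values.
E0, Ep, Em = 3.0, 2.0, 5.0           # E_0, E_+, E_-

J = (2 * E0 - Ep - Em) / 4           # Eq. (51)
B1 = (Ep - Em) / 4                   # Eq. (52)
B2 = -B1
E0_ising = (2 * E0 + Ep + Em) / 4    # Eq. (53), constant of Eq. (46)

def ising(s1, s2):
    return J * s1 * s2 + B1 * s1 + B2 * s2 + E0_ising

# (s1,s2)=(+1,+1) <-> W(0,0), (+1,-1) <-> W(0,pi), (-1,+1) <-> W(pi,0), etc.
print(ising(1, 1), ising(1, -1), ising(-1, 1), ising(-1, -1))
# -> 3.0 2.0 5.0 3.0
```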
We start with the Gaussian state
$\Psi_{\sigma_{1}\sigma_{2}}\left(x_{1},x_{2}\right)\equiv\psi_{\sigma_{1}}(x_{1})\psi_{\sigma_{2}}(x_{2})$
with Eq.(13) localized at four points $x_{1}=\sigma_{1}R$ and
$x_{2}=\sigma_{2}R$, where $\sigma_{1}=\pm,\sigma_{2}=\pm$. The absolute value
of this wave function remains almost unchanged, but a phase shift occurs. The
unitary evolution is given by
$U\left(t\right)=\exp[-i\left(E_{0}/\hbar+\omega\right)t]$ (54)
for $\sigma_{1}=\sigma_{2}=+$ and $\sigma_{1}=\sigma_{2}=-$,
$U\left(t\right)=\exp[-i\left(E_{+}/\hbar+\omega\right)t]$ (55)
for $\sigma_{1}=+$ and $\sigma_{2}=-$,
$U\left(t\right)=\exp[-i\left(E_{-}/\hbar+\omega\right)t]$ (56)
for $\sigma_{1}=-$ and $\sigma_{2}=+$, where we have added the zero-point
energy.
It corresponds to the two-qubit phase-shift gate operation,
$\displaystyle U_{\text{2-phase}}\left(t\right)$
$\displaystyle=\text{diag.}\left(e^{-i\frac{E_{0}}{\hbar}t},e^{-i\frac{E_{-}}{\hbar}t},e^{-i\frac{E_{+}}{\hbar}t},e^{-i\frac{E_{0}}{\hbar}t}\right)$
$\displaystyle=e^{-i\frac{E_{0}}{\hbar}t}\text{diag.}\left(1,e^{-i\frac{E_{X}}{\hbar}t},e^{i\frac{E_{X}}{\hbar}t},1\right),$
(57)
by identifying the qubit state
$\left(\left|00\right\rangle,\left|01\right\rangle,\left|10\right\rangle,\left|11\right\rangle\right)^{t}=\left(\left|++\right\rangle,\left|+-\right\rangle,\left|-+\right\rangle,\left|--\right\rangle\right)^{t}$.
### III.10 Ising gate
The Ising gate is defined by $U_{ZZ}\equiv$diag.$(1,-1,-1,1)$, and realized by
setting $E_{X}t/\hbar=\pi$ in Eq.(57) up to the global phase
$\exp\left[-iE_{0}t/\hbar\right]$.
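A one-line check (dropping the global phase factor in Eq. (57)) confirms this claim:

```python
import numpy as np

# The two-qubit phase gate of Eq. (57), with the global phase dropped,
# reduces to the Ising gate diag(1,-1,-1,1) when E_X * t / hbar = pi.
def two_phase(ex_t):  # ex_t = E_X * t / hbar
    return np.diag([1, np.exp(-1j * ex_t), np.exp(1j * ex_t), 1])

U_ZZ = two_phase(np.pi)
print(np.round(U_ZZ.diagonal().real, 12))  # [ 1. -1. -1.  1.]
```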
### III.11 CZ gate
The controlled-Z (CZ) gate is defined by $U_{\text{CZ}}=$diag.$(1,1,1,-1)$. It
is constructed by a sequential application of the Ising gate and the one-qubit
phase-shift gates as[36]
$U_{\text{CZ}}=e^{i\pi/4}U_{Z}\left(\frac{\pi}{2}\right)U_{Z}\left(\frac{\pi}{2}\right)U_{ZZ}.$
(58)
### III.12 CNOT gate
The CNOT is defined by
$U_{\text{CNOT}}^{1\rightarrow 2}\equiv\left(\begin{array}[]{cccc}1&0&0&0\\\
0&1&0&0\\\ 0&0&0&1\\\ 0&0&1&0\end{array}\right),$ (59)
and is constructed by a sequential application of the CZ gate (58) and the
Hadamard gate $U_{\text{H}}^{\left(2\right)}$ in Eq.(37) acting on the second
qubit as $U_{\text{CNOT}}^{1\rightarrow
2}=U_{\text{H}}^{\left(2\right)}U_{\text{CZ}}U_{\text{H}}^{\left(2\right)}$.
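This standard identity is easy to verify with the usual matrix representations of the Hadamard, CZ, and CNOT gates:

```python
import numpy as np

# Check the stated construction: CNOT (control = qubit 1) equals
# Hadamard on qubit 2, then CZ, then Hadamard on qubit 2 again.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
H2 = np.kron(I2, H)               # Hadamard acting on the second qubit
CZ = np.diag([1, 1, 1, -1])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(np.allclose(H2 @ CZ @ H2, CNOT))  # True
```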
### III.13 Readout process
The readout of the plate angle $\phi_{n}$ can be done for all rotators by
using the fact that the capacitance depends on the relative angle of the two
plates[27]. By applying a tiny voltage and by measuring the induced current,
we can read out the capacitance, which is directly related to the angle
$\phi_{n}$.
## IV Discussion
We have proposed a universal quantum computer with the use of a chain of
carbon nanotubes together with metal plates attached to them. One-qubit gate
operations are controlled electrically by applying a voltage difference between
the attached plate and other plates fixed to the outer system. Two-qubit
operations are controlled electrically by applying a voltage difference between
the two attached plates belonging to two adjacent nanotubes. We now discuss the
feasibility of such a quantum computer.
We mention experimentally obtained material parameters of a double-wall
nanotube structure[29]. The nanotube length is 10 nm and the intertube gap is
0.3 nm. The diameter of a nanotube is 10 nm[22]. The Q factor[37] is of the
order of 100. The moment of inertia $\mu r^{2}$ is $10^{-30}\,$kg$\,$m$^{2}$[38].
The rotational frequency ranges from 1 MHz[37] to 100 GHz[29].
If we use a plate of 10 nm square with the distance $L=100\,$nm, the
capacitance $C=\varepsilon_{0}S/L$ is $10^{-21}\,$F. If we apply 1 mV to the
plate, the electrostatic energy $CV^{2}/2$ is $10^{-26}\,$Nm and the operating
time is of the order of 10 $\mu$s. If we apply 100 mV to the plates, the
electrostatic energy is $10^{-22}\,$Nm and the operating time is of the order
of 1 ns. These values are experimentally feasible.
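The quoted capacitance and energy scales follow from the parallel-plate formulas; a back-of-envelope check using the stated plate size and separation:

```python
# Order-of-magnitude check of the feasibility estimates:
# parallel-plate capacitance C = eps0 * S / L and electrostatic
# energy E = C V^2 / 2 for a 10 nm x 10 nm plate at L = 100 nm.
eps0 = 8.854e-12          # vacuum permittivity, F/m
S = (10e-9) ** 2          # plate area, m^2
L = 100e-9                # plate separation, m

C = eps0 * S / L               # ~ 9e-21 F, of order 10^-21 F
E_1mV = C * (1e-3) ** 2 / 2    # ~ 4e-27 J at 1 mV
E_100mV = C * (0.1) ** 2 / 2   # ~ 4e-23 J at 100 mV
print(C, E_1mV, E_100mV)
```

These agree with the orders of magnitude quoted in the text (the Nm energies quoted there are joules).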
This work is supported by CREST, JST (Grants No. JPMJCR20T2).
## References
* [1] R. Feynman, Simulating physics with computers, Int. J. Theor. Phys. 21, 467 (1982).
* [2] D. P. DiVincenzo, Quantum Computation, Science 270, 255 (1995).
* [3] M. Nielsen and I. Chuang, "Quantum Computation and Quantum Information", Cambridge University Press, (2016); ISBN 978-1-107-00217-3.
* [4] Y. Nakamura; Yu. A. Pashkin; J. S. Tsai, Coherent control of macroscopic quantum states in a single-Cooper-pair box, Nature 398, 786 (1999).
* [5] E. Knill, R. Laflamme and G. J. Milburn, A scheme for efficient quantum computation with linear optics, Nature, 409, 46 (2001).
* [6] J. I. Cirac and P. Zoller, Quantum Computations with Cold Trapped Ions, Phys. Rev. Lett. 74, 4091 (1995).
* [7] L. M.K. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood, I. L. Chuang, Nature 414, 883 (2001).
* [8] B. E. Kane, Nature 393, 133 (1998).
* [9] D. Loss and D. P. DiVincenzo, Quantum computation with quantum dots, Phys. Rev. A 57, 120 (1998).
* [10] C. Psaroudaki and C. Panagopoulos, Phys. Rev. Lett. 127, 06720 (2021).
* [11] J. Xia, X. Zhang, X. Liu, Y. Zhou, M. Ezawa, arXiv:2204.04589
* [12] J. Xia, X. Zhang, X. Liu, Y. Zhou, M. Ezawa, Commun. Mater. 3, 88 (2022).
* [13] S. Rips and M. J. Hartmann, Phys. Rev. Lett. 110, 120503 (2013).
* [14] S. Rips, I. Wilson-Rae and M. J. Hartmann, Phys. Rev. A 89, 013854 (2014).
* [15] F. Pistolesi, A. N. Cleland, and A. Bachtold, Phys. Rev. X 11, 031027 (2021).
* [16] M. Ezawa, S. Yasunaga, A. Higo, T. Iizuka, Y. Mita, arXiv:2208.04528
* [17] D. Deutsch, Quantum theory, Proceedings of the Royal Society A. 400, 97 (1985).
* [18] C. M. Dawson and M. A. Nielsen, arXiv:quant-ph/0505030.
* [19] M. Nielsen and I. Chuang, "Quantum Computation and Quantum Information", Cambridge University Press, Cambridge, UK (2010).
* [20] H. G. Craighead, Science 290, 1532 (2000).
* [21] K. L. Ekinci and M. L. Roukes, Rev. Sci. Instruments 76, 6 (2005).
* [22] A. M. Fennimore, T. D. Yuzvinsky, W.-Q. Han, M. S. Fuhrer, J. Cumings and A. Zettl, Nature 424, 408 (2003).
* [23] A. Barreiro, R. Rurali, E. R. Hernández, J. Moser, T. Pichler, L. Forró and A. Bachtold, Science 320, 775 (2008).
* [24] K. Cai, J. Wan, Q. H. Qin and J. Shi, Nanotechnology 27, 055706 (2016).
* [25] K. Cai, J. Yu, J. Wan, H. Yin, J. Shi, Q H. Qin, Carbon, 101, 168 (2016).
* [26] A. N. Kolmogorov and V. H. Crespi, Phys. Rev. Lett. 85, 4727 (2000).
* [27] B. Bourlon, D. C. Glattli, C. Miko, L. Forró and A. Bachtold, Nano Lett. 4, 709 (2004).
* [28] Z. Xu, Q.-S. Zheng and G. Chen, Phys. Rev. B 75, 195445 (2007).
* [29] K. Cai, Y. Li, Q. H. Qin and H. Yin, Nanotechnology 25, 505701 (2014).
* [30] H. A. Zambrano, J. H. Walther and R. L. Jaffe, J. Chem. Phys. 131 (24) 241104 (2009).
* [31] R. Saito, Physical Properties of Carbon Nanotubes, Imperial College Press (1998)
* [32] M. Blencowe, Physics Reports 395, 159 (2004).
* [33] M. Poot, H. S. J. van der Zant, Physics Reports 511, 273 (2012).
* [34] O. Slowik, K. Orlowska, D. Kopiec, P. Janus, P. Grabiec and T. Gotszalk, Measurement Automation Monitoring, Mar., 62, 2450 (2016).
* [35] N. Schuch and J. Seiwert, Phys. Rev. A 67, 032301 (2003).
* [36] Y. Makhlin, Quant. Info. Proc. 1, 243 (2002).
* [37] S. J. Papadakis, A. R. Hall, P. A. Williams, L. Vicci, M. R. Falvo, R. Superfine and S. Washburn, Phys. Rev. Lett. 93, 146101 (2004).
* [38] T. D. Yuzvinsky, A. M. Fennimore, A. Kis and A. Zettl, Nanotechnology 17, 434 (2006)
# Fully Stochastic Trust-Region Sequential Quadratic Programming for Equality-
Constrained Optimization Problems
Yuchen Fang Committee on Computational and Applied Mathematics, The
University of Chicago Sen Na ICSI and Department of Statistics, University
of California, Berkeley Michael W. Mahoney ICSI and Department of
Statistics, University of California, Berkeley Lawrence Berkeley National
Laboratory Mladen Kolar Booth School of Business, The University of Chicago
###### Abstract
We propose a trust-region stochastic sequential quadratic programming
algorithm (TR-StoSQP) to solve nonlinear optimization problems with stochastic
objectives and deterministic equality constraints. We consider a fully
stochastic setting, where in each iteration a single sample is generated to
estimate the objective gradient. The algorithm adaptively selects the trust-
region radius and, compared to the existing line-search StoSQP schemes, allows
us to employ indefinite Hessian matrices (i.e., Hessians without modification)
in SQP subproblems. As a trust-region method for constrained optimization, our
algorithm needs to address an infeasibility issue—the linearized equality
constraints and trust-region constraints might lead to infeasible SQP
subproblems. In this regard, we propose an adaptive relaxation technique to
compute the trial step that consists of a normal step and a tangential step.
To control the lengths of the two steps, we adaptively decompose the trust-
region radius into two segments based on the proportions of the feasibility
and optimality residuals to the full KKT residual. The normal step has a
closed form, while the tangential step is solved from a trust-region
subproblem, to which a solution ensuring the Cauchy reduction is sufficient
for our study. We establish the global almost sure convergence guarantee for
TR-StoSQP, and illustrate its empirical performance on both a subset of
problems in the CUTEst test set and constrained logistic regression problems
using data from the LIBSVM collection.
## 1 Introduction
We consider the constrained stochastic optimization problem:
$\min_{{\bm{x}}\in\mathbb{R}^{d}}\;f({\bm{x}})=\mathbb{E}[F({\bm{x}};\xi)],\quad\text{s.t.}\;\;c({\bm{x}})={\bm{0}},$
(1)
where $f:\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a stochastic objective with
$F(\cdot;\xi)$ being one of its realizations,
$c:\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}$ are deterministic equality
constraints, $\xi$ is a random variable following the distribution
${\mathcal{P}}$, and the expectation $\mathbb{E}[\cdot]$ is taken over the
randomness of $\xi$. Problem (1) appears in various applications including
constrained deep neural networks (Chen et al., 2018), constrained maximum
likelihood estimation (Dupacova and Wets, 1988), optimal control (Birge,
1997), PDE-constrained optimization (Rees et al., 2010), and network
optimization (Bertsekas, 1998).
There are numerous methods for solving constrained optimization problems with
deterministic objectives. Among them, sequential quadratic programming (SQP)
methods are one of the leading approaches and are effective for both small and
large problems. When the objective is stochastic, some stochastic SQP (StoSQP)
methods have been proposed recently (Berahas et al., 2021a, b, 2022b; Na et
al., 2021, 2022a; Curtis et al., 2021b). These works consider the following
two different setups for modeling the objective.
The first setup is called the random model setup (Chen et al., 2017), where
samples with adaptive batch sizes are generated in each iteration to estimate
the objective model (e.g., objective value and gradient). The algorithms under
this setup often require the estimated objective model to satisfy certain
adaptive accuracy conditions with a fixed probability in each iteration. Under
this setup, Na et al. (2022a) proposed an StoSQP algorithm for (1), which
adopts a stochastic line search procedure with an exact augmented Lagrangian
merit function to select the stepsize. Subsequently, Na et al. (2021) further
enhanced the designs and arguments in Na et al. (2022a) and developed an
active-set StoSQP method to enable inequality constraints; and Berahas et al.
(2022b) considered a finite-sum objective and accelerated StoSQP by applying
the SVRG technique (Johnson and Zhang, 2013), which, however, requires one to
periodically compute the full objective gradient. Also, Berahas et al. (2022a)
introduced a norm test condition for StoSQP to adaptively select the batch
sizes.
The second setup is called the fully stochastic setup (Curtis and Shi, 2020),
where a single sample is generated in each iteration to estimate the objective
model. Under this setup, a prespecified sequence is often required as an input
to assist with the step selection. For example, Berahas et al. (2021b)
designed an StoSQP algorithm that uses a random projection procedure to select
the stepsize. The projection procedure uses a prespecified sequence
$\\{\beta_{k}\\}$, together with the estimated Lipschitz constants of the
objective gradient and constraint Jacobian, to construct a projection interval
in each iteration. A random quantity is then computed and projected into the
interval to decide the stepsize, which ensures a sufficient reduction on the
$\ell_{1}$ merit function. Based on Berahas et al. (2021b), some algorithmic
and theoretical improvements have been reported: Berahas et al. (2021a) dealt
with rank-deficient Jacobians; Curtis et al. (2021b) solved Newton systems
inexactly; Curtis et al. (2021a) analyzed the worst-case sample complexity;
and Na and Mahoney (2022) established the local rate and performed statistical
inference for the method in Berahas et al. (2021b).
The existing StoSQP algorithms converge globally either in expectation or
almost surely, and enjoy promising empirical performance under favorable
settings. However, there are three drawbacks that motivate our study. First,
the algorithms are all line-search-based; that is, a search direction is first
computed by solving an SQP subproblem, and then a stepsize is selected, either
by random projection or by stochastic line search along the direction.
However, it is observed that for deterministic problems, computing the search
direction and selecting the stepsize jointly, as is done in trust-region
methods, can lead to better performance in some cases (see Chapter 4 in
Nocedal and Wright, 2006). Second, to make SQP subproblems solvable, the
existing schemes require the approximation of the Lagrangian Hessian to be
positive definite in the null space of constraint Jacobian. Such a condition
is common in the SQP literature (Boggs and Tolle, 1995; Nocedal and Wright,
2006), while it is often achieved by Hessian modification, which excludes
promising choices of the Hessian matrices, such as the unperturbed
(stochastic) Hessian of the Lagrangian. Third, to show global convergence, the
existing literature requires the random merit parameter to be not only
stabilized, but also sufficiently large (or small, depending on the context)
with an unknown threshold. To achieve the latter goal, Na et al. (2021, 2022a)
imposed an adaptive condition on the feasibility error when selecting the
merit parameter, while Berahas et al. (2021a, b, 2022b); Curtis et al. (2021b)
imposed a symmetry condition on the noise distribution. In contrast,
deterministic SQP schemes only require the stability of the merit parameter
(see Boggs and Tolle, 1995, and references therein).
In this paper, we consider the fully stochastic setup, and we design a trust-
region stochastic SQP (TR-StoSQP) method to address the above drawbacks. As a
trust-region method, TR-StoSQP computes the search direction and stepsize
jointly, and, unlike line-search-based methods, it avoids Hessian
modifications in formulating SQP subproblems. Thus, it can explore negative
curvature directions of the Hessian matrix. Further, our analysis only relies
on the stability of the merit parameter (of the $\ell_{2}$ merit function),
which is consistent with deterministic SQP schemes. The design of TR-StoSQP is
inspired by a stochastic trust-region method for solving unconstrained
problems reported in Curtis and Shi (2020), which improves the authors’ prior
design in Curtis et al. (2019) from using linear model to quadratic model to
approximate the objective function. As in Curtis and Shi (2020), our method
inputs a user-specified radius-related sequence $\\{\beta_{k}\\}$ to generate
the trust-region radius at each step. Beyond this similarity, our scheme
differs from Curtis and Shi (2020) in several aspects.
First, it is known that trust-region methods for constrained optimization are
bothered by the infeasibility issue—the linearized constraints and trust-
region constraints might have an empty intersection leading to an infeasible
SQP subproblem. Although some literature on trust-region SQP has been proposed
to address this issue (Celis et al., 1984; Vardi, 1985; Byrd et al., 1987;
Omojokun, 1989), we develop a novel adaptive relaxation technique to compute
the trial step, which is of independent interest even for deterministic
schemes. In particular, we decompose the trial step into a normal step and a
tangential step. Then, we control the lengths of the two steps by decomposing
the trust-region radius into two segments adaptively, based on the proportions
of the estimated feasibility and optimality residuals to the full KKT
residual. Compared to existing relaxation techniques, our relaxation technique
does not require any tuning parameters. See Section 2 for details.
Second, in TR-StoSQP, we properly compute some control parameters using known
or estimable quantities. By the computation, we no longer need to tune the
other two input parameter sequences as in Curtis and Shi (2020) (i.e.,
$\\{\gamma_{1,k},\gamma_{2,k}\\}$ in their notation), except to tune the input
radius-related sequence $\\{\beta_{k}\\}$. Further, we use the control
parameters to adjust the input sequence $\\{\beta_{k}\\}$ when computing the
trust-region radius, so that $\\{\beta_{k}\\}\subseteq(0,\beta_{\max}]$ with
any $\beta_{\max}>0$ is sufficient for our convergence analysis. Our design
simplifies the one in Curtis and Shi (2020), where there are three parameter
sequences to tune whose conditions are highly coupled (see Lemma 4.5 in Curtis
and Shi, 2020). In addition, as the authors stated, Curtis and Shi (2020)
rescaled the Hessian matrix based on the input $\\{\gamma_{1,k}\\}$, which is
not ideal (because the rescaling step modifies the curvature information of
the Hessian). We have removed this step in our design.
To our knowledge, TR-StoSQP is the first trust-region SQP algorithm for
solving constrained optimization problems under fully stochastic setup. With a
stabilized merit parameter, we establish the global convergence property of
TR-StoSQP. In particular, we show that (i) when $\beta_{k}=\beta$, $\forall
k\geq 0$, the expectation of weighted averaged KKT residuals converges to a
neighborhood around zero; (ii) when $\beta_{k}$ decays properly such that
$\sum\beta_{k}=\infty$ and $\sum\beta_{k}^{2}<\infty$, the KKT residuals
converge to zero almost surely. These results are similar to the ones for
unconstrained and constrained problems established under fully stochastic
setup in Berahas et al. (2021a, b); Curtis and Shi (2020); Curtis et al.
(2021b). However, we have weaker conditions on the objective gradient noise
(e.g., we consider a growth condition) and on the sequence $\beta_{k}$ (e.g.,
we only require $\beta_{k}\leq\beta_{\max}$). See the discussions after
Theorem 4.9 and Theorem 4.11 for more details. We also note that a recent
paper (Sun and Nocedal, 2022) studied a noisy trust-region method for
unconstrained deterministic optimization. In that method, the value and
gradient of the objective are evaluated with bounded deterministic noise. The
authors showed that the trust-region iterates visit a neighborhood of the
stationarity infinitely often, with the radius proportional to the noise
magnitude. Given the significant differences between stochastic and
deterministic problems, and between constrained and unconstrained problems,
our algorithm design and analysis are quite different from Sun and Nocedal
(2022). We implement TR-StoSQP on a subset of problems in the CUTEst test set
and on constrained logistic regression problems using data from the LIBSVM
collection. The numerical results demonstrate the promising performance of our
method.
Notation. We use $\|\cdot\|$ to denote the $\ell_{2}$ norm for vectors and the
operator norm for matrices. $I$ denotes the identity matrix and ${\bm{0}}$
denotes the zero matrix (or vector). Their dimensions are clear from the
context. We let $G({\bm{x}})=\nabla^{T}c({\bm{x}})\in\mathbb{R}^{m\times d}$
be the Jacobian matrix of the constraints and
$P({\bm{x}})=I-G^{T}({\bm{x}})[G({\bm{x}})G^{T}({\bm{x}})]^{-1}G({\bm{x}})$ be
the projection matrix to the null space of $G({\bm{x}})$. We use
${\bar{g}}({\bm{x}})=\nabla F({\bm{x}};\xi)$ to denote an estimate of $\nabla
f({\bm{x}})$, and use $\bar{(\cdot)}$ to denote stochastic quantities.
Structure of the paper. We introduce the adaptive relaxation technique in
Section 2. We propose the trust-region stochastic SQP (TR-StoSQP) algorithm in
Section 3 and establish its global convergence guarantee in Section 4.
Numerical experiments are presented in Section 5 and conclusions are presented
in Section 6.
## 2 Adaptive Relaxation
The Lagrangian of Problem (1) is
${\mathcal{L}}({\bm{x}},{\bm{\lambda}})=f({\bm{x}})+{\bm{\lambda}}^{T}c({\bm{x}})$,
where ${\bm{\lambda}}\in\mathbb{R}^{m}$ is the Lagrangian multiplier vector.
Finding a first-order stationary point of (1) is equivalent to finding a pair
$({\bm{x}}^{*},{\bm{\lambda}}^{*})$ such that
$\nabla{\mathcal{L}}({\bm{x}}^{*},{\bm{\lambda}}^{*})=\begin{pmatrix}\nabla_{\bm{x}}{\mathcal{L}}({\bm{x}}^{*},{\bm{\lambda}}^{*})\\\
\nabla_{{\bm{\lambda}}}{\mathcal{L}}({\bm{x}}^{*},{\bm{\lambda}}^{*})\end{pmatrix}=\begin{pmatrix}\nabla
f({\bm{x}}^{*})+G^{T}({\bm{x}}^{*}){\bm{\lambda}}^{*}\\\
c({\bm{x}}^{*})\end{pmatrix}=\begin{pmatrix}{\bm{0}}\\\
{\bm{0}}\end{pmatrix}.$
We call $\|\nabla_{\bm{x}}{\mathcal{L}}({\bm{x}},{\bm{\lambda}})\|$ the
optimality residual,
$\|\nabla_{\bm{\lambda}}{\mathcal{L}}({\bm{x}},{\bm{\lambda}})\|$ (i.e.,
$\|c({\bm{x}})\|$) the feasibility residual, and
$\|\nabla{\mathcal{L}}({\bm{x}},{\bm{\lambda}})\|$ the KKT residual. In the
stochastic setting, $\nabla f({\bm{x}})$ is replaced by its stochastic
estimate ${\bar{g}}({\bm{x}})=\nabla F({\bm{x}};\xi)$, where $\xi$ is sampled
from the underlying distribution ${\mathcal{P}}$ and realizes an objective
$F({\bm{x}};\xi)$. Given $({\bm{x}}_{k},{\bm{\lambda}}_{k})$ in the $k$-th
iteration, we denote $\nabla f_{k}=\nabla f({\bm{x}}_{k})$,
${\bar{g}}_{k}={\bar{g}}({\bm{x}}_{k})$, $c_{k}=c({\bm{x}}_{k})$,
$G_{k}=G({\bm{x}}_{k})$, and define the estimated KKT residual as
$\|{\bar{\nabla}}\mathcal{L}_{k}\|=\|({\bar{\nabla}}_{\bm{x}}\mathcal{L}_{k},c_{k})\|$
with
${\bar{\nabla}}_{\bm{x}}\mathcal{L}_{k}={\bar{g}}_{k}+G_{k}^{T}{\bm{\lambda}}_{k}$.
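For concreteness, these residuals can be computed as follows on a hypothetical toy problem, with the exact gradient standing in for the stochastic estimate $\bar{g}$:

```python
import numpy as np

# Toy illustration of the optimality / feasibility / KKT residuals for
# min 0.5*||x||^2  s.t.  x1 + x2 = 1 (a hypothetical test problem).
def residuals(x, lam):
    grad_f = x                          # gradient of 0.5*||x||^2
    G = np.array([[1.0, 1.0]])          # constraint Jacobian
    c = np.array([x[0] + x[1] - 1.0])   # constraint value
    opt = np.linalg.norm(grad_f + G.T @ lam)   # optimality residual
    feas = np.linalg.norm(c)                   # feasibility residual
    kkt = np.sqrt(opt**2 + feas**2)            # full KKT residual
    return opt, feas, kkt

# At the primal-dual solution (x*, lambda*) = ((0.5, 0.5), -0.5),
# all three residuals vanish.
opt, feas, kkt = residuals(np.array([0.5, 0.5]), np.array([-0.5]))
print(opt, feas, kkt)  # 0.0 0.0 0.0
```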
### 2.1 Preliminaries
Given the iterate ${\bm{x}}_{k}$ and trust-region radius $\Delta_{k}$ in the
$k$-th iteration, we compute an approximation $B_{k}$ of the Lagrangian
Hessian $\nabla^{2}_{{\bm{x}}}{\mathcal{L}}_{k}$, and we aim to obtain the
trial step $\Delta{\bm{x}}_{k}$ by solving a trust-region stochastic SQP
subproblem
$\min_{\Delta{\bm{x}}\in\mathbb{R}^{d}}\
\frac{1}{2}\Delta{\bm{x}}^{T}B_{k}\Delta{\bm{x}}+{\bar{g}}_{k}^{T}\Delta{\bm{x}},\;\quad\text{s.t.}\;\;c_{k}+G_{k}\Delta{\bm{x}}={\bm{0}},\;\|\Delta{\bm{x}}\|\leq\Delta_{k}.$
(2)
However, if
$\\{\Delta{\bm{x}}\in\mathbb{R}^{d}:c_{k}+G_{k}\Delta{\bm{x}}={\bm{0}}\\}\cap\\{\Delta{\bm{x}}\in\mathbb{R}^{d}:\|\Delta{\bm{x}}\|\leq\Delta_{k}\\}=\emptyset$,
then (2) does not have a feasible point. This infeasibility issue happens when
the radius $\Delta_{k}$ is too small. To resolve this issue, one should not
enlarge $\Delta_{k}$, which would make the trust-region constraint useless and
violate the spirit of the trust-region scheme. Instead, one should relax the
linearized constraint $c_{k}+G_{k}\Delta{\bm{x}}={\bm{0}}$.
Before introducing our adaptive relaxation technique, we review some classical
relaxation techniques. To start, Celis et al. (1984) relaxed the linearized
constraint by $\|c_{k}+G_{k}\Delta{\bm{x}}\|\leq\theta_{k}$ with
$\theta_{k}=\|c_{k}+G_{k}\Delta{\bm{x}}_{k}^{CP}\|$, where
$\Delta{\bm{x}}_{k}^{CP}$ is the Cauchy point (i.e., the best steepest descent
step) of the following problem:
$\min_{\Delta{\bm{x}}\in\mathbb{R}^{d}}\
\|c_{k}+G_{k}\Delta{\bm{x}}\|\;\quad\text{s.t.}\quad\|\Delta{\bm{x}}\|\leq\Delta_{k}.$
(3)
However, since one has to minimize a quadratic function over two ellipsoids
after the relaxation, the resulting (stochastic) SQP subproblem is often
expensive to solve. Alternatively, Vardi (1985) relaxed the linearized
constraint by $\gamma_{k}c_{k}+G_{k}\Delta{\bm{x}}={\bm{0}}$, with
$\gamma_{k}\in(0,1]$ chosen to make the trust-region constraint of (2)
inactive. However, Vardi (1985) only showed the existence of an extremely
small $\gamma_{k}$, and did not provide a practical way to choose it.
Subsequently, Byrd et al. (1987) refined the relaxation technique of Vardi
(1985) by a step decomposition. At the $k$-th step, Byrd et al. (1987)
decomposed the trial step $\Delta{\bm{x}}_{k}$ into a normal step
$\bm{w}_{k}\in\text{im}(G_{k}^{T})$ and a tangential step
$\bm{t}_{k}\in\text{ker}(G_{k})$, denoted as
$\Delta{\bm{x}}_{k}=\bm{w}_{k}+\bm{t}_{k}$. By the constraint
$\gamma_{k}c_{k}+G_{k}\Delta{\bm{x}}_{k}={\bm{0}}$, the normal step has a
closed form as (suppose $G_{k}$ has full row rank)
$\bm{w}_{k}\coloneqq\gamma_{k}{\bm{v}}_{k}\coloneqq-\gamma_{k}\cdot
G_{k}^{T}[G_{k}G_{k}^{T}]^{-1}c_{k},$ (4)
and the tangential step is expressed as $\bm{t}_{k}=Z_{k}{\bm{u}}_{k}$ for a
vector ${\bm{u}}_{k}\in\mathbb{R}^{d-m}$. Here, the columns of
$Z_{k}\in\mathbb{R}^{d\times(d-m)}$ form a basis of $\text{ker}(G_{k})$.
Byrd et al. (1987) proposed to choose $\gamma_{k}$ such that
$\theta\Delta_{k}\leq\|\bm{w}_{k}\|\leq\Delta_{k}$ for a tuning parameter
$\theta\in(0,1)$, and solve ${\bm{u}}_{k}$ from
$\min_{{\bm{u}}\in\mathbb{R}^{d-m}}\;\frac{1}{2}{\bm{u}}^{T}Z_{k}^{T}B_{k}Z_{k}{\bm{u}}+({\bar{g}}_{k}+B_{k}\bm{w}_{k})^{T}{\bm{u}}\;\quad\text{s.t.}\;\;\|{\bm{u}}\|^{2}\leq\Delta_{k}^{2}-\|\bm{w}_{k}\|^{2}.$
(5)
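As a concrete illustration, the decomposition in (4)-(5) can be sketched numerically; the problem data and the fixed `gamma` below are hypothetical, and the null-space basis $Z_{k}$ is obtained from an SVD rather than any particular factorization used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 2
G = rng.standard_normal((m, d))   # constraint Jacobian G_k (full row rank a.s.)
c = rng.standard_normal(m)        # constraint value c_k
gamma = 0.5                       # relaxation parameter gamma_k in (0, 1]

# Normal step (4): w = -gamma * G^T (G G^T)^{-1} c, lies in im(G^T)
v = -G.T @ np.linalg.solve(G @ G.T, c)
w = gamma * v

# Null-space basis Z of ker(G): the last d - m right-singular vectors
_, _, Vt = np.linalg.svd(G)
Z = Vt[m:].T                      # d x (d - m), columns span ker(G)

# Any tangential step t = Z u stays in ker(G)
u = rng.standard_normal(d - m)
t = Z @ u

# The relaxed linearized constraint gamma*c + G(w + t) = 0 holds exactly
print(np.allclose(gamma * c + G @ (w + t), 0.0))  # True
```

Here $G\bm{w}=-\gamma c$ by construction and $G\bm{t}={\bm{0}}$, so the relaxed constraint is satisfied for any tangential component.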
Furthermore, Omojokun (1989) combined the techniques of Celis et al. (1984);
Byrd et al. (1987); it solved the normal step $\bm{w}_{k}$ from Problem (3) by
replacing the constraint $\|\Delta{\bm{x}}\|\leq\Delta_{k}$ with
$\|\Delta{\bm{x}}\|\leq\theta\Delta_{k}$ for some $\theta\in(0,1)$; and it
solved the tangential step $\bm{t}_{k}=Z_{k}{\bm{u}}_{k}$ from Problem (5). We
note that the solution of (3) is naturally a normal step (i.e., lies in
$\text{im}(G_{k}^{T})$), because any directions in $\text{ker}(G_{k})$ do not
change the objective in (3).
Although the methods in Byrd et al. (1987); Omojokun (1989) allow one to
employ Cauchy points for trust-region subproblems, they lack guidance for
selecting the user-specified parameter $\theta$, which controls the lengths of
the normal and tangential steps. In fact, an inappropriately specified
parameter $\theta$ may make either step conservative and further affect the
performance of the algorithm. This concern is amplified in stochastic
optimization. As we show in (9) and (10) later, the normal step relates to the
reduction of the feasibility residual, while the tangential step relates to
the reduction of the optimality residual. We have to rescale the two steps
properly so that the model reduction achieved by $\Delta{\bm{x}}_{k}$ is large
enough to dominate the noise in the estimate ${\bar{g}}_{k}$. In the end, a
more delicate control over the lengths of the two steps is desired.
### 2.2 Our adaptive relaxation technique
We propose an adaptive relaxation technique. Compared to Vardi (1985); Byrd et
al. (1987); Omojokun (1989), our relaxation procedure is parameter-free. As
before, we relax the linearized constraint in (2) by
${\bar{\gamma}}_{k}c_{k}+G_{k}\Delta{\bm{x}}={\bm{0}}$ with a (random)
${\bar{\gamma}}_{k}$ defined later, and decompose the trial step by
$\Delta{\bm{x}}_{k}=\bm{w}_{k}+\bm{t}_{k}$. We know that the normal step
$\bm{w}_{k}$ is given by (4), and the tangential step $\bm{t}_{k}$ can always
be expressed as $\bm{t}_{k}=P_{k}{\bm{u}}_{k}$ for some vector
${\bm{u}}_{k}\in\mathbb{R}^{d}$. Recall that
$P_{k}=I-G_{k}^{T}[G_{k}G_{k}^{T}]^{-1}G_{k}$. Here, we slightly abuse the
notation of ${\bm{u}}_{k}$ in (5) to let it be a vector in $\mathbb{R}^{d}$.
To control the lengths of the two steps, we adaptively decompose the trust-
region radius $\Delta_{k}$ into two segments, based on the proportions of the
estimated feasibility and optimality residuals to the estimated KKT residual.
We let
$\breve{\Delta}_{k}=\frac{\|c_{k}\|}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}\cdot\Delta_{k}\quad\quad\text{and}\quad\quad\widetilde{\Delta}_{k}=\frac{\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}\cdot\Delta_{k}.$
(6)
It is implicitly assumed that $\|\bar{\nabla}{\mathcal{L}}_{k}\|>0$; otherwise
we either terminate the algorithm or re-estimate
$\bar{\nabla}{\mathcal{L}}_{k}$. When ${\mathcal{P}}$ has a continuous
distribution, ${\bar{\nabla}}\mathcal{L}_{k}={\bm{0}}$ with probability zero.
We let $\breve{\Delta}_{k}$ control the length of normal step $\bm{w}_{k}$ and
$\widetilde{\Delta}_{k}$ control the length of tangential step
$\bm{t}_{k}=P_{k}{\bm{u}}_{k}$. Specifically, we define ${\bar{\gamma}}_{k}$
as (recall ${\bm{v}}_{k}$ is defined in (4))
${\bar{\gamma}}_{k}\coloneqq\min\{\breve{\Delta}_{k}/\|{\bm{v}}_{k}\|,1\}$
(7)
so that
$\|\bm{w}_{k}\|={\bar{\gamma}}_{k}\|{\bm{v}}_{k}\|\leq\breve{\Delta}_{k}$, and
we compute ${\bm{u}}_{k}$ by solving
$\min_{{\bm{u}}\in\mathbb{R}^{d}}\
m({\bm{u}})\coloneqq\frac{1}{2}{\bm{u}}^{T}P_{k}B_{k}P_{k}{\bm{u}}+{\bar{g}}_{k}^{T}P_{k}{\bm{u}}\;\quad\text{
s.t. }\;\;\|{\bm{u}}\|\leq\widetilde{\Delta}_{k}.$ (8)
When ${\bm{v}}_{k}={\bm{0}}$ (i.e., $c_{k}={\bm{0}}$), there is no need to choose
${\bar{\gamma}}_{k}$, and we simply set $\Delta{\bm{x}}_{k}=P_{k}{\bm{u}}_{k}$.
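To make (6)-(7) concrete, the following minimal sketch (with hypothetical random data and radius `Delta`) splits the trust-region radius by the residual proportions and forms the normal step. We take the KKT residual norm to be the stacked Euclidean norm $\|\bar{\nabla}{\mathcal{L}}_{k}\|=(\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\|c_{k}\|^{2})^{1/2}$, an assumption consistent with the containment $\breve{\Delta}_{k}^{2}+\widetilde{\Delta}_{k}^{2}=\Delta_{k}^{2}$ used below.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 6, 2
G = rng.standard_normal((m, d))
c = rng.standard_normal(m)
gbar = rng.standard_normal(d)     # stochastic gradient estimate
Delta = 0.8                       # trust-region radius Delta_k (hypothetical)

P = np.eye(d) - G.T @ np.linalg.solve(G @ G.T, G)   # projection onto ker(G)
gradL = P @ gbar                  # bar_nabla_x L_k under the LS multiplier
kkt = np.sqrt(gradL @ gradL + c @ c)                # ||bar_nabla L_k|| (assumed stacked norm)

# (6): split the radius by the residual proportions
breve = np.linalg.norm(c) / kkt * Delta
tilde = np.linalg.norm(gradL) / kkt * Delta

# (4) and (7): normal step w = gamma_bar * v with the adaptive gamma_bar
v = -G.T @ np.linalg.solve(G @ G.T, c)
gamma_bar = min(breve / np.linalg.norm(v), 1.0)
w = gamma_bar * v

print(np.linalg.norm(w) <= breve + 1e-12)          # True
print(np.isclose(breve**2 + tilde**2, Delta**2))   # True
```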
Problem (8) is a trust-region subproblem that appears in unconstrained
optimization. In our analysis, we only require a vector ${\bm{u}}_{k}$ that
reduces $m({\bm{u}})$ by at least as much as the Cauchy point, which takes the
direction of $-P_{k}{\bar{g}}_{k}$ and minimizes $m({\bm{u}})$ within the
trust region (see Algorithm 4.2 in Nocedal and Wright, 2006). Such a reduction
requirement can be achieved by various methods, including finding the exact
solution or applying the dogleg or two-dimensional subspace minimization
methods (Nocedal and Wright, 2006).
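The Cauchy-point requirement for (8) can itself be sketched directly; the data below are hypothetical, and the helper follows the textbook steepest-descent formula, including the negative-curvature case that the two-case formula handles implicitly.

```python
import numpy as np

def cauchy_point(H, g, radius):
    """Cauchy point for min 0.5 u^T H u + g^T u, s.t. ||u|| <= radius."""
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    gHg = g @ H @ g
    if gHg <= 0.0:                       # nonpositive curvature: go to the boundary
        tau = 1.0
    else:
        tau = min(gnorm**3 / (radius * gHg), 1.0)
    return -tau * radius / gnorm * g

rng = np.random.default_rng(2)
d, m = 6, 2
G = rng.standard_normal((m, d))
gbar = rng.standard_normal(d)
B = rng.standard_normal((d, d)); B = (B + B.T) / 2   # symmetric Hessian approx.
P = np.eye(d) - G.T @ np.linalg.solve(G @ G.T, G)

H, g = P @ B @ P, P @ gbar               # data of subproblem (8)
u = cauchy_point(H, g, radius=0.5)
m_val = 0.5 * u @ H @ u + g @ u
print(m_val <= 0.0, np.linalg.norm(u) <= 0.5 + 1e-12)   # True True
```

Any ${\bm{u}}_{k}$ that reduces $m({\bm{u}})$ at least as much as this point (e.g., a dogleg or exact solution) satisfies the requirement used in the analysis.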
The following result provides a bound on the reduction in $m({\bm{u}})$ that
is different from the standard analysis of the Cauchy point (e.g., Lemma 4.3
in Nocedal and Wright, 2006).
###### Lemma 2.1.
Let ${\bm{u}}_{k}$ be an approximate solution to (8) that reduces the
objective $m({\bm{u}})$ by at least as much as the Cauchy point. For all
$k\geq 0$, we have
$m({\bm{u}}_{k})-m({\bm{0}})=\frac{1}{2}{\bm{u}}_{k}^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}+{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}\leq-\|P_{k}{\bar{g}}_{k}\|\widetilde{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}.$
###### Proof.
Let ${\bm{u}}_{k}^{CP}$ denote the Cauchy point. Since $m({\bm{u}}_{k})\leq
m({\bm{u}}_{k}^{CP})$, it suffices to analyze the reduction achieved by
${\bm{u}}_{k}^{CP}$. By the formula of ${\bm{u}}_{k}^{CP}$ in (4.12) in
Nocedal and Wright (2006), we know that if
$\|P_{k}{\bar{g}}_{k}\|^{3}\leq\widetilde{\Delta}_{k}{\bar{g}}_{k}^{T}P_{k}B_{k}P_{k}{\bar{g}}_{k}$,
then
${\bm{u}}_{k}^{CP}=-\|P_{k}{\bar{g}}_{k}\|^{2}/{\bar{g}}_{k}^{T}P_{k}B_{k}P_{k}{\bar{g}}_{k}\cdot
P_{k}{\bar{g}}_{k}$. In this case, we apply $P_{k}^{2}=P_{k}$ and have
$m({\bm{u}}_{k}^{CP})-m({\bm{0}})=\frac{1}{2}({\bm{u}}_{k}^{CP})^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}^{CP}+{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}^{CP}=-\frac{1}{2}\frac{\|P_{k}{\bar{g}}_{k}\|^{4}}{{\bar{g}}_{k}^{T}P_{k}B_{k}P_{k}{\bar{g}}_{k}}\leq-\frac{1}{2}\frac{\|P_{k}{\bar{g}}_{k}\|^{2}}{\|B_{k}\|}.$
Otherwise,
${\bm{u}}_{k}^{CP}=-\widetilde{\Delta}_{k}/\|P_{k}{\bar{g}}_{k}\|\cdot
P_{k}{\bar{g}}_{k}$. In this case, we have
$\displaystyle m({\bm{u}}_{k}^{CP})-m({\bm{0}})$
$\displaystyle=\frac{1}{2}({\bm{u}}_{k}^{CP})^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}^{CP}+{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}^{CP}=\frac{1}{2}\frac{{\bar{g}}_{k}^{T}P_{k}B_{k}P_{k}{\bar{g}}_{k}}{\|P_{k}{\bar{g}}_{k}\|^{2}}\widetilde{\Delta}_{k}^{2}-\|P_{k}{\bar{g}}_{k}\|\widetilde{\Delta}_{k}$
$\displaystyle\leq\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}-\|P_{k}{\bar{g}}_{k}\|\widetilde{\Delta}_{k}.$
Combining the above two cases, we have
$m({\bm{u}}_{k}^{CP})-m({\bm{0}})\leq-\min\left\{-\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}+\|P_{k}{\bar{g}}_{k}\|\widetilde{\Delta}_{k},\;\frac{1}{2}\frac{\|P_{k}{\bar{g}}_{k}\|^{2}}{\|B_{k}\|}\right\}.$
Using the fact that
$-\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}+\|P_{k}{\bar{g}}_{k}\|\widetilde{\Delta}_{k}=-\frac{1}{2}\|B_{k}\|\left(\widetilde{\Delta}_{k}-\frac{\|P_{k}{\bar{g}}_{k}\|}{\|B_{k}\|}\right)^{2}+\frac{1}{2}\frac{\|P_{k}{\bar{g}}_{k}\|^{2}}{\|B_{k}\|}\leq\frac{1}{2}\frac{\|P_{k}{\bar{g}}_{k}\|^{2}}{\|B_{k}\|},$
we complete the proof. ∎
It is easy to see that our relaxation technique indeed results in a trial step
that lies in the trust region. We have (noting that $\|P_{k}\|\leq 1$) that
$\|\Delta{\bm{x}}_{k}\|^{2}=\|\bm{w}_{k}\|^{2}+\|\bm{t}_{k}\|^{2}=({\bar{\gamma}}_{k}\|{\bm{v}}_{k}\|)^{2}+\|P_{k}{\bm{u}}_{k}\|^{2}\overset{(7),(8)}{\leq}\breve{\Delta}_{k}^{2}+\widetilde{\Delta}_{k}^{2}\overset{(6)}{=}\Delta_{k}^{2}.$
In fact, the normal step $\bm{w}_{k}$ helps to reduce the feasibility
residual, since
$\|c_{k}+G_{k}\Delta{\bm{x}}_{k}\|-\|c_{k}\|=\|c_{k}+G_{k}\bm{w}_{k}\|-\|c_{k}\|=-{\bar{\gamma}}_{k}\|c_{k}\|\leq
0,$ (9)
where the strict inequality holds as long as $c_{k}\neq{\bm{0}}$. Furthermore,
when we define the least-squares Lagrangian multiplier as
${\bar{\bm{\lambda}}}_{k}=-[G_{k}G_{k}^{T}]^{-1}G_{k}{\bar{g}}_{k}$ (we write
${\bar{\bm{\lambda}}}_{k}$ to highlight its dependence on ${\bar{g}}_{k}$), we have
$P_{k}{\bar{g}}_{k}=\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}$ and can rewrite
the conclusion of Lemma 2.1 as
$m({\bm{u}}_{k})-m({\bm{0}})\leq-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2},$
(10)
indicating that the tangential step relates to the reduction of the optimality
residual.
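Both observations are easy to check numerically on hypothetical data: once the least-squares multiplier is substituted, $P_{k}{\bar{g}}_{k}$ coincides with $\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}$, and the normal step contracts the linearized feasibility residual by exactly the factor $1-{\bar{\gamma}}_{k}$.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 6, 2
G = rng.standard_normal((m, d))
c = rng.standard_normal(m)
gbar = rng.standard_normal(d)

GGt = G @ G.T
lam = -np.linalg.solve(GGt, G @ gbar)        # least-squares multiplier
gradL = gbar + G.T @ lam                     # bar_nabla_x L_k = gbar + G^T lam
P = np.eye(d) - G.T @ np.linalg.solve(GGt, G)
print(np.allclose(P @ gbar, gradL))          # identity P_k gbar = bar_nabla_x L_k

v = -G.T @ np.linalg.solve(GGt, c)
gamma_bar = 0.7                              # any value in (0, 1] (hypothetical)
w = gamma_bar * v
lhs = np.linalg.norm(c + G @ w) - np.linalg.norm(c)
print(np.isclose(lhs, -gamma_bar * np.linalg.norm(c)))   # (9) holds exactly
```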
To end this section, we would like to mention two differences between our
relaxation technique and those in Byrd et al. (1987); Omojokun (1989).
First, we decompose the trust-region radius by an adaptive scheme based on the
proportions of the estimated feasibility and optimality residuals to the
estimated KKT residual (cf. (6)). Such a scheme is parameter-free and novel to
the literature. If the proportion of the feasibility residual is larger than
that of the optimality residual, then decreasing the feasibility residual is
more important. As a result, we assign a larger trust-region radius to the
normal step in order to achieve a larger reduction in the feasibility
residual. Otherwise, we assign a larger radius to the tangential step to
achieve a larger reduction in the optimality residual. In contrast, Byrd
et al. (1987); Omojokun (1989) relied on a fixed proportion constant
$\theta\in(0,1)$ and decomposed the radius by $\theta\Delta_{k}$ and
$(1-\theta)\Delta_{k}$; thus, these methods are less adaptive than ours.
Second, we simplify the objective function to compute the tangential step. We
use ${\bar{g}}_{k}^{T}P_{k}{\bm{u}}$ as the linear term instead of
$({\bar{g}}_{k}+B_{k}\bm{w}_{k})^{T}Z_{k}{\bm{u}}$ (see (5) and (8)). This
simplification enables us to link the tangential step and the optimality
residual, as shown in (10).
In the next section, we use the proposed relaxation scheme to design a
stochastic SQP algorithm for Problem (1).
## 3 A Trust-Region Stochastic SQP Algorithm
We summarize the proposed TR-StoSQP algorithm in Algorithm 1. We now introduce
the algorithm details.
In the $k$-th iteration, we are given the iterate ${\bm{x}}_{k}$, a fixed
scalar $\zeta>0$, and the parameters $(\beta_{k},L_{\nabla
f,k},L_{G,k},{\bar{\mu}}_{k-1})$. Here, $\beta_{k}\in(0,\beta_{\max}]$ with
upper bound $\beta_{\max}>0$ is the input radius-related parameter; $L_{\nabla
f,k}$ and $L_{G,k}$ are the (estimated) Lipschitz constants of $\nabla
f({\bm{x}})$ and $G({\bm{x}})$ (in practice, they can be estimated by standard
procedures in Curtis and Robinson (2018); Berahas et al. (2021b)); and
${\bar{\mu}}_{k-1}$ is the merit parameter of the $\ell_{2}$ merit function
obtained after the $(k-1)$-th iteration. With these parameters, we proceed
with the following three steps.
Step 1: Compute control parameters. We compute a matrix $B_{k}$ to approximate
the Hessian of the Lagrangian $\nabla^{2}_{{\bm{x}}}{\mathcal{L}}_{k}$, and
require it to be deterministic conditional on ${\bm{x}}_{k}$. Then, we
compute several control parameters:
$\eta_{1,k}=\zeta\min\left\{\frac{1}{\|B_{k}\|},\frac{6\beta_{\max}}{\|G_{k}\|}\right\},\qquad\tau_{k}=L_{\nabla f,k}+L_{G,k}{\bar{\mu}}_{k-1}+\|B_{k}\|,\qquad\alpha_{k}=\frac{\beta_{k}}{(4\eta_{1,k}\tau_{k}+6\zeta)\beta_{\max}},\qquad\eta_{2,k}=\eta_{1,k}-\frac{1}{2}\zeta\eta_{1,k}\alpha_{k}.$
(11)
We should emphasize that, compared to the existing line-search-based StoSQP
methods (Berahas et al., 2021a, b, 2022b; Na et al., 2021, 2022a; Na and
Mahoney, 2022), we do not require $B_{k}$ to be positive definite in the null
space $\text{ker}(G_{k})$. This benefit comes from the trust-region framework,
more precisely, from the presence of the trust-region constraint. Thanks to
this flexibility, we can construct different $B_{k}$ to formulate the StoSQP
subproblems. In our experiments in Section 5, we will construct $B_{k}$ by the
identity matrix, the symmetric rank-one (SR1) update, the estimated Hessian
without modification, and the average of the estimated Hessians.
The control parameters in (11) play a critical role in adjusting the input
$\\{\beta_{k}\\}$ and generating the trust-region radius. Compared to Curtis
and Shi (2020), $\\{\eta_{1,k},\eta_{2,k}\\}$ (i.e.,
$\\{\gamma_{1,k},\gamma_{2,k}\\}$ in their notation) are no longer inputs and
$B_{k}$ is not rescaled by the parameters.
Step 2: Compute the trust-region radius. We sample a realization $\xi_{g}^{k}$
and compute an estimate ${\bar{g}}_{k}=\nabla F({\bm{x}}_{k};\xi_{g}^{k})$ of
$\nabla f_{k}$. We then compute the least-squares Lagrangian multiplier as
${\bar{\bm{\lambda}}}_{k}=-[G_{k}G_{k}^{T}]^{-1}G_{k}{\bar{g}}_{k}$ and the
KKT vector $\bar{\nabla}{\mathcal{L}}_{k}$. Furthermore, we define the trust-
region radius as
$\Delta_{k}=\begin{cases}\eta_{1,k}\alpha_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|&\text{if }\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(0,1/\eta_{1,k}),\\ \alpha_{k}&\text{if }\|\bar{\nabla}{\mathcal{L}}_{k}\|\in[1/\eta_{1,k},1/\eta_{2,k}],\\ \eta_{2,k}\alpha_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|&\text{if }\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(1/\eta_{2,k},\infty).\end{cases}$
(12)
We provide the following remark to compare (12) with the line search scheme in
Berahas et al. (2021b).
###### Remark 3.1.
It is interesting to see that the scheme (12) enjoys the same flavor as the
random-projection-based line search procedure in Berahas et al. (2021b). In
particular, Berahas et al. (2021b) updates ${\bm{x}}_{k}$ by
$\alpha_{k}\widetilde{\Delta}{\bm{x}}_{k}$ each step, where
$\widetilde{\Delta}{\bm{x}}_{k}$ is solved from Problem (2) (without trust-
region constraint) and the stepsize $\alpha_{k}$ is selected by projecting a
random quantity into the interval $[\beta_{k},\beta_{k}+\beta_{k}^{2}]$. By
the facts that
$\|\widetilde{\Delta}{\bm{x}}_{k}\|=\mathcal{O}(\|\bar{\nabla}{\mathcal{L}}_{k}\|)$
(i.e., $\widetilde{\Delta}{\bm{x}}_{k}$ and ${\bar{\nabla}}{\mathcal{L}}_{k}$
have the same order of magnitude) and $\alpha_{k}=\mathcal{O}(\beta_{k})$, we
know
$\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|=\|\alpha_{k}\widetilde{\Delta}{\bm{x}}_{k}\|=\mathcal{O}(\beta_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|)$.
This order is preserved by the above trust-region scheme because, by (11) and
(12), we have
$\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|=\|\Delta{\bm{x}}_{k}\|=\mathcal{O}(\beta_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|)$.
Further, the projection in Berahas et al. (2021b) brings some sort of
adaptivity to the scheme since the stepsize $\alpha_{k}$ has a variability of
$\mathcal{O}(\beta_{k}^{2})$. This merit is also preserved by (12), noting
that $(\eta_{1,k}-\eta_{2,k})\alpha_{k}=\mathcal{O}(\beta_{k}^{2})$.
We emphasize that (12) offers adaptivity in selecting the radius $\Delta_{k}$
based on $\alpha_{k}(=\mathcal{O}(\beta_{k}))$. When
$\|\bar{\nabla}{\mathcal{L}}_{k}\|$ is large, the iterate ${\bm{x}}_{k}$ is
likely to be far from the KKT point; thus, we set $\Delta_{k}>\alpha_{k}$ to
take a more aggressive step than $\alpha_{k}$. Conversely, when
$\|\bar{\nabla}{\mathcal{L}}_{k}\|$ is small, the iterate ${\bm{x}}_{k}$ is
likely to be near the KKT point; thus, we set $\Delta_{k}<\alpha_{k}$ to take
a more conservative step than $\alpha_{k}$.
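As a sketch, (11) and (12) translate directly into code; all numerical inputs below are hypothetical placeholders rather than recommended settings.

```python
import numpy as np

def control_params(B, G, mu_prev, beta_k, beta_max, zeta, Lf, LG):
    """Control parameters (11); Lf, LG are (estimated) Lipschitz constants."""
    eta1 = zeta * min(1.0 / np.linalg.norm(B, 2), 6 * beta_max / np.linalg.norm(G, 2))
    tau = Lf + LG * mu_prev + np.linalg.norm(B, 2)
    alpha = beta_k / ((4 * eta1 * tau + 6 * zeta) * beta_max)
    eta2 = eta1 - 0.5 * zeta * eta1 * alpha
    return eta1, tau, alpha, eta2

def radius(kkt_norm, eta1, eta2, alpha):
    """Trust-region radius (12): adaptive in the estimated KKT residual."""
    if kkt_norm < 1.0 / eta1:
        return eta1 * alpha * kkt_norm
    if kkt_norm <= 1.0 / eta2:
        return alpha
    return eta2 * alpha * kkt_norm

eta1, tau, alpha, eta2 = control_params(
    B=np.eye(4), G=np.ones((2, 4)), mu_prev=1.0,
    beta_k=0.5, beta_max=1.0, zeta=1.0, Lf=1.0, LG=1.0)
# Large residual -> radius > alpha; small residual -> radius < alpha
print(radius(10.0 / eta2, eta1, eta2, alpha) > alpha)   # True
print(radius(0.1 / eta1, eta1, eta2, alpha) < alpha)    # True
```

Note that the three branches of `radius` agree at the breakpoints $1/\eta_{1,k}$ and $1/\eta_{2,k}$, so $\Delta_{k}$ is continuous in the estimated KKT residual.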
Step 3: Compute the trial step and update the merit parameter. With
$\Delta_{k}$ from Step 2, we compute the trial step $\Delta{\bm{x}}_{k}$ using
the relaxation technique in Section 2.2. Then, we update the iterate as
${\bm{x}}_{k+1}={\bm{x}}_{k}+\Delta{\bm{x}}_{k}$, and update the merit
parameter ${\bar{\mu}}_{k-1}$ of the $\ell_{2}$ merit function, defined as
${\mathcal{L}}_{{\bar{\mu}}}({\bm{x}})=f({\bm{x}})+{\bar{\mu}}\|c({\bm{x}})\|.$
(13)
In particular, we let ${\bar{\mu}}_{k}={\bar{\mu}}_{k-1}$ and compute the
predicted reduction of $\mathcal{L}_{{\bar{\mu}}_{k}}^{k}$ as
$\text{Pred}_{k}={\bar{g}}_{k}^{T}\Delta{\bm{x}}_{k}+\frac{1}{2}\Delta{\bm{x}}_{k}^{T}B_{k}\Delta{\bm{x}}_{k}+{\bar{\mu}}_{k}(\|c_{k}+G_{k}\Delta{\bm{x}}_{k}\|-\|c_{k}\|).$
(14)
The parameter ${\bar{\mu}}_{k}$ is then iteratively updated as
${\bar{\mu}}_{k}\leftarrow\rho{\bar{\mu}}_{k}$ with some $\rho>1$ until
$\text{Pred}_{k}\leq-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}-\frac{1}{2}\|c_{k}\|\breve{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}+\|B_{k}\|\breve{\Delta}_{k}\widetilde{\Delta}_{k}.$
(15)
The right-hand side ensures that the iterates achieve a sufficient reduction
on the merit function (13). The first two terms are linear in $\Delta_{k}$ and
dominate the last two terms. It can be shown that the right-hand side is
negative under our definition of $\Delta_{k}$.
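The update (14)-(15) can be sketched as follows; the data, the initial ${\bar{\mu}}$, and the simplified boundary steepest-descent tangential step (used here in place of an exact Cauchy point) are all hypothetical. Because the ${\bar{\mu}}$-coefficient in $\text{Pred}_{k}$ equals $-{\bar{\gamma}}_{k}\|c_{k}\|<0$ whenever $c_{k}\neq{\bm{0}}$, the While loop terminates.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 5, 2
G = rng.standard_normal((m, d))
c = rng.standard_normal(m)
gbar = rng.standard_normal(d)
B = np.eye(d)                  # Hessian approximation B_k (hypothetical)
Delta = 0.5
rho, mu = 2.0, 1.0             # rho > 1, initial merit parameter

P = np.eye(d) - G.T @ np.linalg.solve(G @ G.T, G)
gradL = P @ gbar
kkt = np.sqrt(gradL @ gradL + c @ c)
breve = np.linalg.norm(c) / kkt * Delta
tilde = np.linalg.norm(gradL) / kkt * Delta

# Trial step: normal part (4),(7) plus a boundary steepest-descent tangential part
v = -G.T @ np.linalg.solve(G @ G.T, c)
gamma_bar = min(breve / np.linalg.norm(v), 1.0)
u = -tilde / np.linalg.norm(gradL) * gradL
dx = gamma_bar * v + P @ u

nB = np.linalg.norm(B, 2)
rhs = (-np.linalg.norm(gradL) * tilde - 0.5 * np.linalg.norm(c) * breve
       + 0.5 * nB * tilde**2 + nB * breve * tilde)

def pred(mu_):                 # predicted reduction (14)
    return (gbar @ dx + 0.5 * dx @ B @ dx
            + mu_ * (np.linalg.norm(c + G @ dx) - np.linalg.norm(c)))

while pred(mu) > rhs:          # condition (15)
    mu *= rho
print(pred(mu) <= rhs)          # True once the loop exits
```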
Algorithm 1 A Trust Region Stochastic SQP (TR-StoSQP) Algorithm
1:Input: Initial iterate ${\bm{x}}_{0}$, radius-related sequence
$\{\beta_{k}\}\subset(0,\beta_{\max}]$, parameters
$\rho>1,{\bar{\mu}}_{-1},\zeta>0$, (estimated) Lipschitz constants
$\{L_{\nabla f,k}\},\{L_{G,k}\}$.
2:for $k=0,1,\cdots,$ do
3: Compute an approximation $B_{k}$ and control parameters
$\eta_{1,k},\tau_{k},\alpha_{k},\eta_{2,k}$ as (11);
4: Sample $\xi_{g}^{k}$ and compute ${\bar{g}}_{k}$,
${\bar{\bm{\lambda}}}_{k}$, and $\bar{\nabla}{\mathcal{L}}_{k}$;
5: Compute the trust-region radius $\Delta_{k}$ as (12), and compute
$\Delta{\bm{x}}_{k}$ using the relaxation technique in Section 2.2;
6: Update the iterate as ${\bm{x}}_{k+1}={\bm{x}}_{k}+\Delta{\bm{x}}_{k}$;
7: Set ${\bar{\mu}}_{k}={\bar{\mu}}_{k-1}$ and compute the predicted reduction
$\text{Pred}_{k}$ as (14);
8: while (15) does not hold do
9: Set ${\bar{\mu}}_{k}=\rho{\bar{\mu}}_{k}$;
10: end while
11:end for
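Putting the pieces together, the following toy sketch runs one full TR-StoSQP loop on a hypothetical equality-constrained quadratic with exact (zero-noise) gradients, $B_{k}=I$, and the exact Cauchy tangential step; the problem, constants, and iteration count are illustrative, not the paper's recommendations. Since the constraint here is linear, $c_{k+1}=(1-{\bar{\gamma}}_{k})c_{k}$, so the feasibility residual is non-increasing.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 4, 2
G = rng.standard_normal((m, d))      # linear constraint c(x) = G x - b
b = rng.standard_normal(m)
a = rng.standard_normal(d)           # objective f(x) = 0.5 * ||x - a||^2

beta_k, beta_max, zeta, rho = 0.5, 1.0, 1.0, 2.0
mu = 1.0                             # merit parameter bar{mu}_{-1}
x = np.zeros(d)
GGt = G @ G.T
P = np.eye(d) - G.T @ np.linalg.solve(GGt, G)
c_hist = []

for k in range(50):
    c = G @ x - b
    g = x - a                        # exact gradient (zero-noise sketch)
    gradL = P @ g                    # KKT residual parts under the LS multiplier
    kkt = np.sqrt(gradL @ gradL + c @ c)
    c_hist.append(np.linalg.norm(c))
    if kkt < 1e-10:
        break
    # Step 1, (11): control parameters with B_k = I, Lf = 1, LG = 0 (linear c)
    eta1 = zeta * min(1.0, 6 * beta_max / np.linalg.norm(G, 2))
    tau = 1.0 + 0.0 * mu + 1.0
    alpha = beta_k / ((4 * eta1 * tau + 6 * zeta) * beta_max)
    eta2 = eta1 - 0.5 * zeta * eta1 * alpha
    # Step 2, (12): adaptive trust-region radius
    if kkt < 1 / eta1:
        Delta = eta1 * alpha * kkt
    elif kkt <= 1 / eta2:
        Delta = alpha
    else:
        Delta = eta2 * alpha * kkt
    breve = np.linalg.norm(c) / kkt * Delta
    tilde = np.linalg.norm(gradL) / kkt * Delta
    # Step 3: normal step (4),(7) + exact Cauchy tangential step for (8)
    v = -G.T @ np.linalg.solve(GGt, c)
    gamma = min(breve / np.linalg.norm(v), 1.0) if np.linalg.norm(v) > 0 else 0.0
    t = np.zeros(d)
    if np.linalg.norm(gradL) > 0:
        step = min(np.linalg.norm(gradL), tilde)   # exact Cauchy length for B_k = I
        t = -step / np.linalg.norm(gradL) * gradL  # already lies in ker(G)
    dx = gamma * v + t
    # (14)-(15): merit-parameter While loop
    rhs = (-np.linalg.norm(gradL) * tilde - 0.5 * np.linalg.norm(c) * breve
           + 0.5 * tilde**2 + breve * tilde)
    pred = lambda mu_: (g @ dx + 0.5 * dx @ dx
                        + mu_ * (np.linalg.norm(c + G @ dx) - np.linalg.norm(c)))
    while pred(mu) > rhs:
        mu *= rho
    x = x + dx

print(all(c2 <= c1 + 1e-12 for c1, c2 in zip(c_hist, c_hist[1:])))  # True
```

Replacing `g = x - a` with a noisy estimate recovers the stochastic setting analyzed in Section 4.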
We end this section by introducing the randomness in TR-StoSQP. We let
${\mathcal{F}}_{0}\subseteq{\mathcal{F}}_{1}\subseteq{\mathcal{F}}_{2}\cdots$
be a filtration of $\sigma$-algebras with ${\mathcal{F}}_{k-1}$ generated by
$\{\xi_{g}^{j}\}_{j=0}^{k-1}$; thus, ${\mathcal{F}}_{k-1}$ contains all the
randomness before the $k$-th iteration. Let
${\mathcal{F}}_{-1}=\sigma({\bm{x}}_{0})$ be the trivial $\sigma$-algebra for
consistency. It is easy to see that for all $k\geq 0$, we have
$\sigma({\bm{x}}_{k},\eta_{1,k},\tau_{k},\alpha_{k},\eta_{2,k})\subseteq{\mathcal{F}}_{k-1}\quad\text{
and
}\quad\sigma(\Delta{\bm{x}}_{k},{\bar{\bm{\lambda}}}_{k},{\bar{\mu}}_{k})\subseteq{\mathcal{F}}_{k}.$
In the next section, we conduct the global analysis of the proposed algorithm.
## 4 Convergence Analysis
We study the convergence of Algorithm 1 by measuring the decrease of the
$\ell_{2}$ merit function at each step, that is
${\mathcal{L}}_{{\bar{\mu}}_{k}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{k}}^{k}=f_{k+1}-f_{k}+{\bar{\mu}}_{k}(\|c_{k+1}\|-\|c_{k}\|).$
We use ${\bar{\mu}}_{k}$ to denote the merit parameter obtained after the
While loop in Line 8 of Algorithm 1, so that ${\bar{\mu}}_{k}$ satisfies (15).
Following the analysis of Berahas et al. (2021a, b, 2022b); Curtis et al.
(2021b), we will first assume ${\bar{\mu}}_{k}$ stabilizes (but not
necessarily at a large enough value) after a few iterations, and then we will
validate the stability of ${\bar{\mu}}_{k}$ in Section 4.3.
We now state the assumptions for the analysis.
###### Assumption 4.1.
Let $\Omega\subseteq\mathbb{R}^{d}$ be an open convex set containing the
iterates $\\{{\bm{x}}_{k}\\}$. The function $f({\bm{x}})$ is continuously
differentiable and is bounded below by $f_{\inf}$ over $\Omega$. The gradient
$\nabla f({\bm{x}})$ is Lipschitz continuous over $\Omega$ with constant
$L_{\nabla f}>0$, so that the (estimated) Lipschitz constant $L_{\nabla f,k}$
at ${\bm{x}}_{k}$ satisfies $L_{\nabla f,k}\leq L_{\nabla f}$, $\forall k\geq
0$. Similarly, the constraint $c({\bm{x}})$ is continuously differentiable
over $\Omega$; its Jacobian $G({\bm{x}})$ is Lipschitz continuous over
$\Omega$ with constant $L_{G}>0$; and $L_{G,k}\leq L_{G}$, $\forall k\geq 0$.
We also assume there exist positive constants
$\kappa_{B},\kappa_{c},\kappa_{\nabla f},\kappa_{1,G},\kappa_{2,G}>0$ such
that
$\|B_{k}\|\leq\kappa_{B},\;\;\|c_{k}\|\leq\kappa_{c},\;\;\|\nabla
f_{k}\|\leq\kappa_{\nabla f},\;\;\;\kappa_{1,G}\cdot I\preceq
G_{k}G_{k}^{T}\preceq\kappa_{2,G}\cdot I,\;\;\;\forall k\geq 0.$
Assumption 4.1 is standard in the literature on both deterministic and
stochastic SQP methods (see, e.g., Byrd et al., 1987; El-Alem, 1991; Powell
and Yuan, 1990; Berahas et al., 2021a, b, 2022b; Curtis et al., 2021b). In
fact, when one uses a While loop to adaptively increase $L_{\nabla f,k}$ and
$L_{G,k}$ to enforce the Lipschitz conditions (as done in Berahas et al.
(2021b); Curtis and Robinson (2018)), one has $L_{\nabla f,k}\leq L_{\nabla
f}^{\prime}\coloneqq\rho L_{\nabla f}$ for a factor $\rho>1$ (and similarly
for $L_{G,k}$; see Lemma 8 in Berahas et al. (2021b)). We unify the Lipschitz
constant and upper bound of $L_{\nabla f,k}$ as $L_{\nabla f}$ just for
simplicity. In addition, the condition $\kappa_{1,G}\cdot I\preceq
G_{k}G_{k}^{T}\preceq\kappa_{2,G}\cdot I$ implies $G_{k}$ has full row rank;
thus, the least-squares dual iterate
${\bar{\bm{\lambda}}}_{k}=-[G_{k}G_{k}^{T}]^{-1}G_{k}{\bar{g}}_{k}$ is well
defined.
Next, we assume the stability of ${\bar{\mu}}_{k}$. Compared to existing
StoSQP literature (Berahas et al., 2021a, b, 2022b; Curtis et al., 2021b), we
do not require the stabilized value to be large enough. We will revisit this
assumption in Section 4.3.
###### Assumption 4.2.
There exist a (possibly random) iteration threshold ${\bar{K}}<\infty$ and a
deterministic constant $\widehat{\mu}>0$ such that for all $k>{\bar{K}}$,
${\bar{\mu}}_{k}={\bar{\mu}}_{{\bar{K}}}\leq{\widehat{\mu}}$.
Since ${\bar{\mu}}_{k}$ is non-decreasing in TR-StoSQP, we have
${\bar{\mu}}_{k}\leq{\widehat{\mu}}$, $\forall k\geq 0$. The global analysis
only needs to study the convergence behavior of the algorithm after
$k\geq{\bar{K}}+1$ iterations. Next, we impose a condition on the gradient
estimate.
###### Assumption 4.3.
There exist constants $M_{g}\geq 1,M_{g,1}\geq 0$ such that the stochastic
gradient estimate $\bar{g}_{k}$ satisfies $\mathbb{E}_{k}[\bar{g}_{k}]=\nabla
f_{k}$ and $\mathbb{E}_{k}[\|{\bar{g}}_{k}-\nabla f_{k}\|^{2}]\leq
M_{g}+M_{g,1}(f_{k}-f_{\inf})$, $\forall k\geq 0$, where
$\mathbb{E}_{k}[\cdot]$ denotes $\mathbb{E}[\cdot\mid\mathcal{F}_{k-1}]$.
We assume that the variance of the gradient estimate satisfies a growth
condition. This condition is weaker than the usual bounded variance condition
assumed in the StoSQP literature (Curtis and Shi, 2020; Berahas et al., 2021a,
b; Na et al., 2021, 2022a), which corresponds to $M_{g,1}=0$. The growth
condition is more realistic and was recently investigated for stochastic
first-order methods on unconstrained problems (Stich, 2019; Bottou et al.,
2018; Vaswani et al., 2019; Chen et al., 2020), but it is less explored for
StoSQP methods.
### 4.1 Fundamental lemmas
The following result establishes the reduction of the $\ell_{2}$ merit
function achieved by the trial step.
###### Lemma 4.4.
Suppose Assumptions 4.1 and 4.2 hold. For all $k\geq{\bar{K}}+1$, we have
${\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}\leq-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}-\frac{1}{2}\|c_{k}\|\breve{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}+\|B_{k}\|\breve{\Delta}_{k}\widetilde{\Delta}_{k}+{\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\|P_{k}(\nabla f_{k}-{\bar{g}}_{k})\|\widetilde{\Delta}_{k}+\frac{1}{2}\tau_{k}\Delta_{k}^{2}.$
(16)
###### Proof.
By the definitions of ${\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}({\bm{x}})$ and
$\text{Pred}_{k}$ in (13) and (14), we have
${\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-\text{Pred}_{k}=f_{k+1}-f_{k}-{\bar{g}}_{k}^{T}\Delta{\bm{x}}_{k}-\frac{1}{2}\Delta{\bm{x}}_{k}^{T}B_{k}\Delta{\bm{x}}_{k}+{\bar{\mu}}_{{\bar{K}}}(\|c_{k+1}\|-\|c_{k}+G_{k}\Delta{\bm{x}}_{k}\|).$
By the Lipschitz continuity of $\nabla f({\bm{x}})$ and $G({\bm{x}})$, we
further have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-\text{Pred}_{k}$
$\displaystyle\leq(\nabla f_{k}-{\bar{g}}_{k})^{T}\Delta{\bm{x}}_{k}+\frac{1}{2}(L_{\nabla f,k}+\|B_{k}\|+L_{G,k}{\bar{\mu}}_{{\bar{K}}})\|\Delta{\bm{x}}_{k}\|^{2}$
$\displaystyle\overset{(11)}{=}(\nabla f_{k}-{\bar{g}}_{k})^{T}\Delta{\bm{x}}_{k}+\frac{1}{2}\tau_{k}\|\Delta{\bm{x}}_{k}\|^{2}$
$\displaystyle={\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+(\nabla f_{k}-{\bar{g}}_{k})^{T}P_{k}{\bm{u}}_{k}+\frac{1}{2}\tau_{k}\|\Delta{\bm{x}}_{k}\|^{2}\quad(\text{use }\Delta{\bm{x}}_{k}={\bar{\gamma}}_{k}{\bm{v}}_{k}+P_{k}{\bm{u}}_{k})$
$\displaystyle\leq{\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\|P_{k}(\nabla f_{k}-{\bar{g}}_{k})\|\|{\bm{u}}_{k}\|+\frac{1}{2}\tau_{k}\|\Delta{\bm{x}}_{k}\|^{2}.$
Combining the above result with the reduction condition in (15), and noting
that $\|{\bm{u}}_{k}\|\leq\widetilde{\Delta}_{k}$ and
$\|\Delta{\bm{x}}_{k}\|\leq\Delta_{k}$, we complete the proof. ∎
Now, we further analyze the right-hand side of (16). By taking the expectation
conditional on ${\bm{x}}_{k}$, we can show that the term
${\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}$ is upper
bounded by a quantity proportional to the expected error of the gradient
estimate.
###### Lemma 4.5.
Suppose Assumptions 4.1 and 4.3 hold. For all $k\geq 0$, we have
$\mathbb{E}_{k}[{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}]\leq\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\cdot\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|].$
###### Proof.
When ${\bm{v}}_{k}={\bm{0}}$, the result holds trivially. We consider
${\bm{v}}_{k}\neq{\bm{0}}$. By the definitions of $\breve{\Delta}_{k}$ in (6)
and $\Delta_{k}$ in (12), we know
$\gamma_{k,\min}\coloneqq\frac{\|c_{k}\|}{\|{\bm{v}}_{k}\|}\eta_{2,k}\alpha_{k}\overset{(12)}{\leq}\frac{\breve{\Delta}_{k}}{\|{\bm{v}}_{k}\|}\overset{(6)}{=}\frac{\|c_{k}\|}{\|{\bm{v}}_{k}\|}\frac{\Delta_{k}}{\|{\bar{\nabla}}\mathcal{L}_{k}\|}\overset{(12)}{\leq}\frac{\|c_{k}\|}{\|{\bm{v}}_{k}\|}\eta_{1,k}\alpha_{k}\eqqcolon\gamma_{k,\max}.$
(17)
Note that $\sigma(\gamma_{k,\min},\gamma_{k,\max})\subseteq\mathcal{F}_{k-1}$.
If $\gamma_{k,\min}\geq 1$, then ${\bar{\gamma}}_{k}=1$ for any estimate
${\bar{g}}_{k}$, thus the result holds due to Assumption 4.3. We consider
$\gamma_{k,\min}<1$. By (17), we have
$\gamma_{k,\min}\leq{\bar{\gamma}}_{k}\overset{(7)}{=}\min\{\breve{\Delta}_{k}/\|{\bm{v}}_{k}\|,1\}\leq\gamma_{k,\max},\quad\forall k\geq 0.$ (18)
Let $E_{k}$ be the event that $(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\geq 0$, $E_{k}^{c}$ be its complement,
and $\mathbb{P}_{k}[\cdot]$ denote the probability conditional on
$\mathcal{F}_{k-1}$. By the law of total expectation, one finds
$\displaystyle\mathbb{E}_{k}[{\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}]$
$\displaystyle=\mathbb{E}_{k}[{\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\mid E_{k}]\mathbb{P}_{k}[E_{k}]+\mathbb{E}_{k}[{\bar{\gamma}}_{k}(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\mid E_{k}^{c}]\mathbb{P}_{k}[E_{k}^{c}]$
$\displaystyle\overset{(18)}{\leq}\gamma_{k,\max}\mathbb{E}_{k}[(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\mid E_{k}]\mathbb{P}_{k}[E_{k}]+\gamma_{k,\min}\mathbb{E}_{k}[(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\mid E_{k}^{c}]\mathbb{P}_{k}[E_{k}^{c}]$
$\displaystyle=(\gamma_{k,\max}-\gamma_{k,\min})\mathbb{E}_{k}[(\nabla f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}\mid E_{k}]\mathbb{P}_{k}[E_{k}]\quad(\text{by Assumption 4.3})$
$\displaystyle\leq(\gamma_{k,\max}-\gamma_{k,\min})\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|\|{\bm{v}}_{k}\|\mid E_{k}]\mathbb{P}_{k}[E_{k}]$
$\displaystyle\leq(\gamma_{k,\max}-\gamma_{k,\min})\|{\bm{v}}_{k}\|\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|]$
$\displaystyle\overset{(17)}{=}\left(\eta_{1,k}-\eta_{2,k}\right)\alpha_{k}\|c_{k}\|\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|]$
$\displaystyle\overset{(11)}{\leq}\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\cdot\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|]\quad(\text{also by Assumption 4.1}).$
This completes the proof. ∎
We further simplify the result of (16) using the trust-region scheme in (12).
###### Lemma 4.6.
Suppose Assumptions 4.1, 4.2, and 4.3 hold and
$\{\beta_{k}\}\subseteq(0,\beta_{\max}]$. For all $k\geq{\bar{K}}+1$, we have
$\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}]\leq{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-\frac{1}{4}\eta_{2,k}\alpha_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\left\{2\zeta+\eta_{1,k}(\tau_{k}+\|B_{k}\|)\right\}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|^{2}]+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla f_{k}-{\bar{g}}_{k}\|].$
###### Proof.
According to (12), we separate the proof into the following three cases:
$\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(0,1/\eta_{1,k})$,
$\|\bar{\nabla}{\mathcal{L}}_{k}\|\in[1/\eta_{1,k},1/\eta_{2,k}]$, and
$\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(1/\eta_{2,k},\infty)$.
Case 1, $\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(0,1/\eta_{1,k})$. We have
$\Delta_{k}=\eta_{1,k}\alpha_{k}\|{\bar{\nabla}}\mathcal{L}_{k}\|$. By (6), we have
$\displaystyle-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}$
$\displaystyle\overset{(6)}{=}-\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}\|B_{k}\|$
$\displaystyle\overset{(11)}{\leq}-\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\zeta\eta_{1,k}\alpha_{k}^{2}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle=-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}.$
Plugging the above expression into (16) and applying (6) and (12), we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{1,k}\alpha_{k}\|c_{k}\|^{2}+\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\|c_{k}\|$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|+\frac{1}{2}\tau_{k}\eta_{1,k}^{2}\alpha_{k}^{2}\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\leq-\frac{1}{2}\left(1-\alpha_{k}\zeta\right)\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{1,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|c_{k}\|^{2}+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|c_{k}\|^{2}\quad(\text{by
Young's inequality}).$ (19)
Noting from (11) that $\eta_{1,k}\alpha_{k}\leq 1/(4\tau_{k})\leq
1/(4\|B_{k}\|)$, we have
$\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|c_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|c_{k}\|^{2}\leq\frac{1}{4}\eta_{1,k}\alpha_{k}\|c_{k}\|^{2}.$
Plugging the above inequality into (19) and rearranging terms, we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\frac{1}{2}\left\\{1-\alpha_{k}\zeta-\eta_{1,k}\alpha_{k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{4}\eta_{1,k}\alpha_{k}\|c_{k}\|^{2}$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}.$ (20)
Case 2, $\|\bar{\nabla}{\mathcal{L}}_{k}\|\in[1/\eta_{1,k},1/\eta_{2,k}]$. We
have $\Delta_{k}=\alpha_{k}$. By (6), we have
$\displaystyle-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:breve and
tilde_delta_k}}}}{{=}}-\alpha_{k}\frac{\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}+\alpha_{k}^{2}\frac{\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}}{2\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}}\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}-\alpha_{k}\frac{\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}+\frac{\zeta\alpha_{k}^{2}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}}{2\eta_{1,k}\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\alpha_{k}\frac{\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2},$
where the third inequality is due to
$\eta_{1,k}\|\bar{\nabla}{\mathcal{L}}_{k}\|\geq 1$, and the last inequality
is due to $\eta_{2,k}\|\bar{\nabla}{\mathcal{L}}_{k}\|\leq 1$ and
$\alpha_{k}\zeta<2$. Plugging the above expression into (16), we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{\alpha_{k}}{2}\frac{\|c_{k}\|^{2}}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}+\alpha_{k}^{2}\frac{\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\|c_{k}\|}{\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}}$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\alpha_{k}\frac{\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|}{\|\bar{\nabla}{\mathcal{L}}_{k}\|}+\frac{1}{2}\tau_{k}\alpha_{k}^{2}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\|c_{k}\|$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2},$
where the last inequality is due to
$1/\eta_{1,k}\leq\|{\bar{\nabla}}\mathcal{L}_{k}\|\leq 1/\eta_{2,k}$. Similarly to (19), we apply Young's inequality and obtain
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|c_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{1,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\|c_{k}\|^{2}.$
(21)
We note that
$\displaystyle\alpha_{k}$
$\displaystyle\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{=}}\frac{\beta_{k}}{(4\eta_{1,k}\tau_{k}+6\zeta)\beta_{\max}}\Longrightarrow\alpha_{k}\leq\frac{2}{8\eta_{1,k}\tau_{k}+\zeta}\Longrightarrow\eta_{1,k}\alpha_{k}\tau_{k}\leq\frac{1}{4}-\frac{1}{8}\zeta\alpha_{k}$
$\displaystyle\Longrightarrow\eta_{1,k}^{2}\alpha_{k}^{2}\|B_{k}\|\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\eta_{1,k}^{2}\alpha_{k}^{2}\tau_{k}\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\frac{1}{4}\eta_{2,k}\alpha_{k}.$
Plugging the above result into (21) and rearranging terms, we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left\\{\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}-\frac{1}{2}\eta_{1,k}-\frac{1}{2}\alpha_{k}\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|)\right\\}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}.$ (22)
Case 3, $\|\bar{\nabla}{\mathcal{L}}_{k}\|\in(1/\eta_{2,k},\infty)$. We have
$\Delta_{k}=\eta_{2,k}\alpha_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|$. By (6), we
have
$\displaystyle-\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}+\frac{1}{2}\|B_{k}\|\widetilde{\Delta}_{k}^{2}$
$\displaystyle\;\stackrel{{\scriptstyle\mathclap{\eqref{eq:breve and
tilde_delta_k}}}}{{=}}-\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\eta_{2,k}^{2}\alpha_{k}^{2}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}\|B_{k}\|$
$\displaystyle\;\leq-\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\zeta\eta_{2,k}\alpha_{k}^{2}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle=-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2},$
where the inequality is due to
$\|B_{k}\|\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\zeta/\eta_{1,k}\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\zeta/\eta_{2,k}$.
Plugging into (16), we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\eta_{2,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\|c_{k}\|$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\eta_{2,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|+\frac{1}{2}\tau_{k}\eta_{2,k}^{2}\alpha_{k}^{2}\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\leq-\frac{1}{2}\left(1-\alpha_{k}\zeta\right)\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{2}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\eta_{2,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{2,k}^{2}\alpha_{k}^{2}\|B_{k}\|\|c_{k}\|^{2}+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{2,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}$
$\displaystyle\quad+\frac{1}{2}\eta_{2,k}^{2}\alpha_{k}^{2}\tau_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\eta_{2,k}^{2}\alpha_{k}^{2}\tau_{k}\|c_{k}\|^{2}\quad(\text{by
Young's inequality}).$
Noting from (11) that $\eta_{2,k}\alpha_{k}\leq\eta_{1,k}\alpha_{k}\leq
1/(4\tau_{k})\leq 1/(4\|B_{k}\|)$, we further obtain
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\frac{1}{2}\left\\{1-\alpha_{k}\zeta-\eta_{2,k}\alpha_{k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{2,k}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}$
$\displaystyle\quad+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{2,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}.$ (23)
Using $\eta_{2,k}\leq\eta_{1,k}$ and taking an upper bound over the results of the three cases in (20), (22), and (23), we have
$\displaystyle{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left\\{\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}-\frac{1}{2}\eta_{1,k}-\frac{1}{2}\alpha_{k}\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|)\right\\}\alpha_{k}\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+{\bar{\gamma}}_{k}(\nabla
f_{k}-{\bar{g}}_{k})^{T}{\bm{v}}_{k}+\frac{1}{2}\eta_{1,k}\alpha_{k}\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}.$
Taking expectation conditional on ${\bm{x}}_{k}$, applying Lemma 4.5, and
noting that
$\mathbb{E}_{k}[\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}]=\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}+\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}]$, we have
$\displaystyle\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}]-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\left\\{\left(1-\frac{1}{2}\alpha_{k}\zeta\right)\eta_{2,k}-\frac{1}{2}\eta_{1,k}-\frac{1}{2}\alpha_{k}\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|)\right\\}\alpha_{k}\mathbb{E}_{k}[\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}]$
$\displaystyle\quad-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|]+\frac{1}{2}\eta_{1,k}\alpha_{k}(\mathbb{E}_{k}[\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}]-\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2})$
$\displaystyle=\left\\{\eta_{1,k}-\eta_{2,k}+\frac{1}{2}\alpha_{k}(\eta_{2,k}\zeta+\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|))\right\\}\alpha_{k}\mathbb{E}_{k}[\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}]-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|]-\frac{1}{2}\eta_{1,k}\alpha_{k}\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle=-\left\\{\frac{1}{2}\eta_{1,k}-(\eta_{1,k}-\eta_{2,k})-\frac{1}{2}\alpha_{k}\left(\eta_{2,k}\zeta+\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|)\right)\right\\}\alpha_{k}\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad+\left\\{\eta_{1,k}-\eta_{2,k}+\frac{1}{2}\alpha_{k}\left(\eta_{2,k}\zeta+\eta_{1,k}^{2}(\tau_{k}+\|B_{k}\|)\right)\right\\}\alpha_{k}\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}]$
$\displaystyle\quad-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|]$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{def:eta2k}}}}{{=}}-\left\\{\frac{1}{2}-\alpha_{k}\zeta+\frac{1}{4}\zeta^{2}\alpha_{k}^{2}-\frac{1}{2}\eta_{1,k}\alpha_{k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\alpha_{k}\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}$
$\displaystyle\quad+\frac{1}{2}\left\\{2\zeta-\frac{1}{2}\zeta^{2}\alpha_{k}+\eta_{1,k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}]$
$\displaystyle\quad-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|].$
Furthermore, we note that
$\displaystyle\|B_{k}\|\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\tau_{k}$
$\displaystyle\Longrightarrow\alpha_{k}\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{=}}\frac{\beta_{k}}{(4\eta_{1,k}\tau_{k}+6\zeta)\beta_{\max}}\leq\frac{2}{4\eta_{1,k}(\tau_{k}+\|B_{k}\|)+7\zeta}$
$\displaystyle\Longrightarrow\frac{1}{4}-\frac{1}{2}\eta_{1,k}\alpha_{k}(\tau_{k}+\|B_{k}\|)\geq\frac{7}{8}\alpha_{k}\zeta$
$\displaystyle\Longrightarrow\frac{1}{4}\eta_{1,k}-\eta_{1,k}\alpha_{k}\zeta-\frac{1}{2}\eta_{1,k}^{2}\alpha_{k}(\tau_{k}+\|B_{k}\|)\geq-\frac{1}{8}\eta_{1,k}\alpha_{k}\zeta\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{=}}-\frac{1}{4}(\eta_{1,k}-\eta_{2,k})$
$\displaystyle\Longrightarrow\left\\{\frac{1}{2}-\alpha_{k}\zeta+\frac{1}{4}\zeta^{2}\alpha_{k}^{2}-\frac{1}{2}\eta_{1,k}\alpha_{k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\geq\frac{\eta_{2,k}}{4}.$
Combining the above two results, we have
$\displaystyle\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}]-{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}$
$\displaystyle\leq-\frac{1}{4}\eta_{2,k}\alpha_{k}\|\nabla_{{\bm{x}}}{\mathcal{L}}_{k}\|^{2}-\frac{1}{4}\eta_{2,k}\alpha_{k}\|c_{k}\|^{2}+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|]$
$\displaystyle\quad+\frac{1}{2}\left\\{2\zeta-\frac{1}{2}\zeta^{2}\alpha_{k}+\eta_{1,k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}]$
$\displaystyle\leq-\frac{1}{4}\eta_{2,k}\alpha_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|]$
$\displaystyle\quad+\frac{1}{2}\left\\{2\zeta+\eta_{1,k}(\tau_{k}+\|B_{k}\|)\right\\}\eta_{1,k}\alpha_{k}^{2}\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}].$
The conclusion follows by noting that $\mathbb{E}_{k}[\|P_{k}(\nabla
f_{k}-{\bar{g}}_{k})\|^{2}]\leq\mathbb{E}_{k}[\|\nabla
f_{k}-{\bar{g}}_{k}\|^{2}]$. ∎
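The case analysis above rests on the piecewise form of the trust-region radius: $\Delta_{k}=\eta_{1,k}\alpha_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|$ on $(0,1/\eta_{1,k})$, $\Delta_{k}=\alpha_{k}$ on $[1/\eta_{1,k},1/\eta_{2,k}]$, and $\Delta_{k}=\eta_{2,k}\alpha_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|$ on $(1/\eta_{2,k},\infty)$. A minimal Python sketch of this rule, with illustrative parameter values (not computed from (11) and (12)):

```python
def radius(g_norm, eta1, eta2, alpha):
    """Trust-region radius Delta_k as a piecewise function of ||nabla L_k||.

    Mirrors the three cases of the proof: linear with slope eta1*alpha below
    1/eta1, constant at alpha between 1/eta1 and 1/eta2, and linear with
    slope eta2*alpha above 1/eta2 (here eta2 <= eta1, so 1/eta1 <= 1/eta2).
    """
    if g_norm < 1.0 / eta1:
        return eta1 * alpha * g_norm
    if g_norm <= 1.0 / eta2:
        return alpha
    return eta2 * alpha * g_norm

# Illustrative values only (eta2 <= eta1 as required).
eta1, eta2, alpha = 2.0, 0.5, 0.1

# The rule is continuous: both linear pieces equal alpha at their breakpoints.
assert abs(radius(1.0 / eta1, eta1, eta2, alpha) - alpha) < 1e-12
assert abs(radius(1.0 / eta2, eta1, eta2, alpha) - alpha) < 1e-12
```

Continuity at the two breakpoints is what allows the three case bounds to be merged into a single estimate via $\eta_{2,k}\leq\eta_{1,k}$.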
Finally, we present some properties of the control parameters generated in
Step 1 of Algorithm 1.
###### Lemma 4.7.
Let Assumptions 4.1, 4.2 hold and $\\{\beta_{k}\\}\subseteq(0,\beta_{\max}]$.
For all $k\geq 0$,
(a) there exist constants $\eta_{\min},\eta_{\max}>0$ such that
$\eta_{\min}\leq\eta_{2,k}\leq\eta_{1,k}\leq\eta_{\max}$;
(b) there exists a constant $\tau_{\max}>0$ such that
$\tau_{k}\leq\tau_{\max}$;
(c) there exist constants $\alpha_{l},\alpha_{u}>0$ such that
$\alpha_{k}\in[\alpha_{l}\beta_{k},\alpha_{u}\beta_{k}]$.
###### Proof.
(a) By (11), we see that $\eta_{2,k}\leq\eta_{1,k}$. Further, by Assumption
4.1, we have
$\displaystyle\eta_{1,k}$
$\displaystyle\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\frac{6\zeta\beta_{\max}}{\|G_{k}\|}\leq\frac{6\zeta\beta_{\max}}{\sqrt{\kappa_{1,G}}}\eqqcolon\eta_{\max},$
$\displaystyle\eta_{2,k}$
$\displaystyle\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{=}}\eta_{1,k}\left(1-\frac{\zeta\alpha_{k}}{2}\right)\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\geq}}\eta_{1,k}\left(1-\frac{\zeta}{2}\cdot\frac{1}{6\zeta}\right)\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\geq}}\frac{11}{12}\zeta\min\left\\{\frac{1}{\kappa_{B}},\frac{6\beta_{\max}}{\sqrt{\kappa_{2,G}}}\right\\}\eqqcolon\eta_{\min}.$
(b) By Assumptions 4.1 and 4.2, we have $L_{\nabla f,k}\leq L_{\nabla
f},L_{G,k}\leq L_{G}$, $\|B_{k}\|\leq\kappa_{B}$, and
${\bar{\mu}}_{k}\leq\widehat{\mu}$. Thus, we let $\tau_{\max}\coloneqq
L_{\nabla f}+L_{G}{\widehat{\mu}}+\kappa_{B}$ and the result holds. (c) We let
$\alpha_{l}\coloneqq
1/(4\eta_{\max}\tau_{\max}\beta_{\max}+6\zeta\beta_{\max})$ and
$\alpha_{u}\coloneqq 1/(6\zeta\beta_{\max})$, and the result holds. ∎
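To see Lemma 4.7(c) concretely: the definition $\alpha_{k}=\beta_{k}/((4\eta_{1,k}\tau_{k}+6\zeta)\beta_{\max})$ used in the proofs above is linear in $\beta_{k}$ with a coefficient bracketed by $\alpha_{l}$ and $\alpha_{u}$. A quick numerical check, with hypothetical values for $\zeta$, $\beta_{\max}$, and the bounds $\eta_{\max}$, $\tau_{\max}$:

```python
def alpha_k(beta_k, eta1_k, tau_k, zeta, beta_max):
    """alpha_k = beta_k / ((4*eta1_k*tau_k + 6*zeta) * beta_max), as in the text."""
    return beta_k / ((4.0 * eta1_k * tau_k + 6.0 * zeta) * beta_max)

# Hypothetical constants; any eta1_k <= eta_max and tau_k <= tau_max work.
zeta, beta_max = 1.0, 1.0
eta_max, tau_max = 3.0, 2.0
alpha_l = 1.0 / ((4.0 * eta_max * tau_max + 6.0 * zeta) * beta_max)  # as in Lemma 4.7(c)
alpha_u = 1.0 / (6.0 * zeta * beta_max)

for beta in (0.1, 0.5, 1.0):
    for eta1 in (0.5, 1.5, eta_max):
        for tau in (0.1, 1.0, tau_max):
            a = alpha_k(beta, eta1, tau, zeta, beta_max)
            assert alpha_l * beta <= a <= alpha_u * beta  # alpha_k in [alpha_l*beta_k, alpha_u*beta_k]
```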
In the next subsection, we use Lemmas 4.6 and 4.7 to show the global
convergence of TR-StoSQP. We consider both constant and decaying $\beta_{k}$
sequences.
### 4.2 Global convergence
We first consider constant $\beta_{k}$, i.e.,
$\beta_{k}=\beta\in(0,\beta_{\max}]$, $\forall k\geq 0$. We show that the expected weighted average of the squared KKT residuals converges to a neighborhood of zero with radius of order $\mathcal{O}(\beta)$. When the growth
condition parameter $M_{g,1}=0$ (cf. Assumption 4.3), the weighted average
reduces to the uniform average.
###### Lemma 4.8.
Suppose Assumptions 4.1, 4.2, and 4.3 hold and
$\beta_{k}=\beta\in(0,\beta_{\max}]$, $\forall k\geq 0$. For any positive
integer $K>0$, we define $w_{k}=(1+\Upsilon
M_{g,1}\beta^{2})^{{\bar{K}}+K-k}$, ${\bar{K}}\leq k\leq{\bar{K}}+K$, with
$\Upsilon\coloneqq(\zeta+\zeta\kappa_{c}+\eta_{\max}\tau_{\max})\eta_{\max}\alpha_{u}^{2}$.
We have (cf.
$\mathbb{E}_{{\bar{K}}+1}[\cdot]=\mathbb{E}[\cdot\mid\mathcal{F}_{{\bar{K}}}]$)
$\mathbb{E}_{{\bar{K}}+1}\left[\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\right]\leq\frac{4}{\eta_{\min}\alpha_{l}\beta}\cdot\frac{w_{{\bar{K}}}({\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf})}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}+\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta.$
###### Proof.
From Lemma 4.6 and Assumption 4.3, we have for any $k\geq{\bar{K}}+1$,
$\displaystyle\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}]\leq{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-\frac{1}{4}\eta_{2,k}\alpha_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}+\frac{1}{2}\left(2\zeta+\eta_{1,k}(\tau_{k}+\|B_{k}\|)\right)\eta_{1,k}\alpha_{k}^{2}[M_{g}+M_{g,1}(f_{k}-f_{\inf})]$
$\displaystyle\hskip
56.9055pt+\frac{1}{2}\zeta\kappa_{c}\eta_{1,k}\alpha_{k}^{2}\sqrt{M_{g}+M_{g,1}(f_{k}-f_{\inf})}$
$\displaystyle\hskip 42.67912pt\stackrel{{\scriptstyle\mathclap{\text{Lemma
\ref{lemma:4statements}}}}}{{\leq}}\;\;\;{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-\frac{1}{4}\eta_{\min}\alpha_{l}\beta\|\nabla{\mathcal{L}}_{k}\|^{2}+\Upsilon\beta^{2}[M_{g}+M_{g,1}(f_{k}-f_{\inf})]\quad(\text{by
}\tau_{\max}\geq\kappa_{B},M_{g}\geq 1).$
Using the fact that $f_{k}-f_{\inf}\leq
f_{k}-f_{\inf}+{\bar{\mu}}_{{\bar{K}}}\|c_{k}\|={\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-f_{\inf}$,
we obtain
$\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-f_{\inf}]\leq\left(1+\Upsilon
M_{g,1}\beta^{2}\right)({\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-f_{\inf})-\frac{1}{4}\eta_{\min}\alpha_{l}\beta\|\nabla{\mathcal{L}}_{k}\|^{2}+\Upsilon
M_{g}\beta^{2}.$
Taking the expectation conditional on $\mathcal{F}_{{\bar{K}}}$ and
rearranging the terms, we have
$\mathbb{E}_{{\bar{K}}+1}[\|\nabla{\mathcal{L}}_{k}\|^{2}]\leq\frac{4\left(1+\Upsilon
M_{g,1}\beta^{2}\right)}{\eta_{\min}\alpha_{l}\beta}\mathbb{E}_{{\bar{K}}+1}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-f_{\inf}]-\frac{4}{\eta_{\min}\alpha_{l}\beta}\mathbb{E}_{{\bar{K}}+1}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-f_{\inf}]+\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta.$
Multiplying both sides by $w_{k}$ and summing over $k={\bar{K}}+1,\cdots,{\bar{K}}+K$, we have
$\mathbb{E}_{{\bar{K}}+1}\left[\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\right]=\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\mathbb{E}_{{\bar{K}}+1}[\|\nabla{\mathcal{L}}_{k}\|^{2}]}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\\\
\leq\frac{4}{\eta_{\min}\alpha_{l}\beta}\cdot\frac{w_{{\bar{K}}}({\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf})-\mathbb{E}_{{\bar{K}}+1}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+K+1}-f_{\inf}]}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}+\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta,$
where the first equality uses the fact that ${\bar{K}}$ is fixed in the
conditional expectation. Noting that
$\mathbb{E}_{{\bar{K}}+1}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+K+1}-f_{\inf}]\geq
0$, we complete the proof. ∎
The following theorem follows from Lemma 4.8.
###### Theorem 4.9 (Global convergence with constant $\beta_{k}$).
Suppose Assumptions 4.1, 4.2, and 4.3 hold and
$\beta_{k}=\beta\in(0,\beta_{\max}]$, $\forall k\geq 0$. Let us define $w_{k}$
and $\Upsilon$ as in Lemma 4.8. We have
(a) when $M_{g,1}=0$,
$\lim_{K\rightarrow\infty}\mathbb{E}\left[\frac{1}{K}\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}\|\nabla{\mathcal{L}}_{k}\|^{2}\right]\leq\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta;$
(b) when $M_{g,1}>0$,
$\lim_{K\rightarrow\infty}\mathbb{E}\left[\frac{1}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}\right]\leq\frac{4\Upsilon\\{M_{g,1}\mathbb{E}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf}]+M_{g}\\}}{\eta_{\min}\alpha_{l}}\beta.$
###### Proof.
(a) When $M_{g,1}=0$, we have $w_{k}=1$ for ${\bar{K}}+1\leq
k\leq{\bar{K}}+K$. From Lemma 4.8, we have
$\mathbb{E}_{{\bar{K}}+1}\left[\frac{1}{K}\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}\|\nabla{\mathcal{L}}_{k}\|^{2}\right]\leq\frac{4}{\eta_{\min}\alpha_{l}\beta}\cdot\frac{{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf}}{K}+\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta.$
Letting $K\rightarrow\infty$ and using the fact that
$\|\nabla{\mathcal{L}}_{k}\|^{2}\leq\kappa_{\nabla f}^{2}+\kappa_{c}^{2}$ (cf.
Assumption 4.1), we apply Fatou’s lemma and have (the $\lim$ on the left can
be strengthened to $\limsup$)
$\lim_{K\rightarrow\infty}\mathbb{E}\left[\frac{1}{K}\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}\|\nabla{\mathcal{L}}_{k}\|^{2}\right]\leq\mathbb{E}\left[\limsup_{K\rightarrow\infty}\mathbb{E}_{{\bar{K}}+1}\left[\frac{1}{K}\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}\|\nabla{\mathcal{L}}_{k}\|^{2}\right]\right]\leq\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta.$
(b) When $M_{g,1}>0$, we apply Lemma 4.8 and the fact that
$\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}=(w_{{\bar{K}}}-1)/(\Upsilon
M_{g,1}\beta^{2})$, and obtain
$\mathbb{E}_{{\bar{K}}+1}\left[\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\right]\leq\frac{4\Upsilon
M_{g,1}\beta}{\eta_{\min}\alpha_{l}}\cdot\frac{w_{{\bar{K}}}({\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf})}{w_{{\bar{K}}}-1}+\frac{4\Upsilon
M_{g}}{\eta_{\min}\alpha_{l}}\beta.$
Since $w_{{\bar{K}}}/(w_{{\bar{K}}}-1)=(1+\Upsilon
M_{g,1}\beta^{2})^{K}/\\{(1+\Upsilon M_{g,1}\beta^{2})^{K}-1\\}\rightarrow 1$
as $K\rightarrow\infty$, we apply Fatou’s lemma and have (the $\lim$ on the
left can be strengthened to $\limsup$)
$\lim_{K\rightarrow\infty}\mathbb{E}\left[\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\right]\leq\mathbb{E}\left[\limsup_{K\rightarrow\infty}\mathbb{E}_{{\bar{K}}+1}\left[\frac{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}}{\sum_{k={\bar{K}}+1}^{{\bar{K}}+K}w_{k}}\right]\right]\\\
\leq\frac{4\Upsilon\\{M_{g,1}\mathbb{E}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{{\bar{K}}+1}-f_{\inf}]+M_{g}\\}}{\eta_{\min}\alpha_{l}}\beta.$
This completes the proof. ∎
From Theorem 4.9, we note that the radius of the local neighborhood is proportional to $\beta$; thus, to decrease the radius, one should choose a smaller $\beta$. However, the trust-region radius is also proportional to $\beta$ (cf. (12)), so a smaller $\beta$ may result in slow convergence. This suggests a trade-off between convergence speed and convergence precision.
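The trade-off can be illustrated by a one-dimensional analogue (a caricature only, not TR-StoSQP itself): constant-step stochastic gradient descent on a quadratic settles into a noise floor whose size scales with the step size, while a smaller step lengthens the transient phase.

```python
import random

def sgd_floor(beta, steps=20000, noise=0.5, seed=0):
    """Constant-step SGD on f(x) = x^2 / 2 with additive gradient noise.

    Returns the average of x^2 over the second half of the run, a proxy
    for the squared residual at stationarity; it shrinks with beta.
    """
    rng = random.Random(seed)
    x, tail = 5.0, []
    for k in range(steps):
        g = x + noise * rng.gauss(0.0, 1.0)  # unbiased noisy gradient
        x -= beta * g
        if k >= steps // 2:
            tail.append(x * x)
    return sum(tail) / len(tail)

# A smaller step (the analogue of beta) yields a smaller neighborhood,
# at the cost of a longer transient before the floor is reached.
assert sgd_floor(0.01) < sgd_floor(0.1)
```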
For constant $\\{\beta_{k}\\}$, Berahas et al. (2021a, b); Curtis et al. (2021b); Curtis and Shi (2020) established global results similar to Theorem 4.9. However, our analysis has two major differences. (i) Those works require $\beta$ to be upper bounded by complex quantities that may be less than 1, while we do not need such a condition. (ii) Compared to the stochastic trust-region method for unconstrained optimization (Curtis and Shi, 2020), our local neighborhood radius is proportional to the input $\beta$ (i.e., we can control the radius through the input), while the radius in Curtis and Shi (2020) is independent of $\beta$.
Next, we consider decaying $\beta_{k}$. We show in the next lemma that, when $\sum\beta_{k}=\infty$ and $\sum\beta_{k}^{2}<\infty$, the limit inferior of the KKT residuals is zero almost surely. Based on this result, we further show that the KKT residuals themselves converge to zero almost surely.
###### Lemma 4.10.
Suppose Assumptions 4.1, 4.2, and 4.3 hold,
$\\{\beta_{k}\\}\subseteq(0,\beta_{\max}]$, and
$\sum_{k=0}^{\infty}\beta_{k}=\infty$ and
$\sum_{k=0}^{\infty}\beta_{k}^{2}<\infty$. We have
$\liminf_{k\rightarrow\infty}\|\nabla{\mathcal{L}}_{k}\|=0\quad\text{almost
surely}.$
###### Proof.
From the proof of Lemma 4.8, we have for any $k\geq{\bar{K}}+1$ that
$\mathbb{E}_{k}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k+1}-f_{\inf}]\leq\left(1+\Upsilon
M_{g,1}\beta_{k}^{2}\right)({\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-f_{\inf})-\frac{1}{4}\eta_{\min}\alpha_{l}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}+\Upsilon
M_{g}\beta_{k}^{2}.$
Since ${\mathcal{L}}_{{\bar{\mu}}}({\bm{x}})-f_{\inf}$ is bounded below by
zero, $\eta_{\min}\alpha_{l}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}>0$, and
$\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}^{2}<\infty$, it immediately follows
from the Robbins-Siegmund theorem (Robbins and Siegmund, 1971) that
$\sup_{k\geq{\bar{K}}+1}\mathbb{E}_{{\bar{K}}+1}[{\mathcal{L}}_{{\bar{\mu}}_{{\bar{K}}}}^{k}-f_{\inf}]\coloneqq
M_{{\bar{K}}}<\infty,\;\;\;\;\quad\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}\mathbb{E}_{{\bar{K}}+1}[\|\nabla{\mathcal{L}}_{k}\|^{2}]<\infty.$
(24)
The latter part implies that
$P[\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}\|\nabla\mathcal{L}_{k}\|^{2}<\infty\mid\mathcal{F}_{{\bar{K}}}]=1$.
Since this holds for any $\mathcal{F}_{{\bar{K}}}$, we have
$P[\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}\|\nabla\mathcal{L}_{k}\|^{2}<\infty]=1$.
Since $\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}=\infty$ for any run of the algorithm, $\|\nabla{\mathcal{L}}_{k}\|$ cannot remain bounded away from zero, which completes the proof. ∎
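The role of the step-size conditions in Lemma 4.10 can again be seen in a one-dimensional toy example (a sketch, not the constrained algorithm): with $\beta_{k}=1/(k+1)$ we have $\sum\beta_{k}=\infty$ and $\sum\beta_{k}^{2}<\infty$, and noisy gradient descent on a quadratic converges to the minimizer rather than to an $\mathcal{O}(\beta)$ neighborhood.

```python
import random

def decaying_run(steps=100000, noise=0.5, seed=1):
    """Noisy gradient descent on f(x) = x^2 / 2 with beta_k = 1/(k+1).

    sum beta_k diverges (enough total movement to reach the minimizer)
    while sum beta_k^2 converges (noise accumulation stays finite),
    so x_k -> 0 almost surely; returns the final |x_k|.
    """
    rng = random.Random(seed)
    x = 5.0
    for k in range(steps):
        beta = 1.0 / (k + 1)
        x -= beta * (x + noise * rng.gauss(0.0, 1.0))
    return abs(x)

assert decaying_run() < 0.1  # the residual vanishes instead of plateauing
```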
Finally, we establish the global convergence theorem for a decaying $\beta_{k}$ sequence.
###### Theorem 4.11 (Global convergence with decaying $\beta_{k}$).
Suppose Assumptions 4.1, 4.2, and 4.3 hold,
$\\{\beta_{k}\\}\subseteq(0,\beta_{\max}]$, and
$\sum_{k=0}^{\infty}\beta_{k}=\infty$ and
$\sum_{k=0}^{\infty}\beta_{k}^{2}<\infty$. We have
$\lim_{k\rightarrow\infty}\|\nabla{\mathcal{L}}_{k}\|=0\quad\text{almost
surely}.$
###### Proof.
For any run of the algorithm, suppose the statement does not hold; then $\limsup_{k\rightarrow\infty}\|\nabla{\mathcal{L}}_{k}\|\geq 2\epsilon$ for some $\epsilon>0$. For such a run, let us define the set
$\mathcal{K}_{\epsilon}\coloneqq\\{k\geq{\bar{K}}+1:\|\nabla{\mathcal{L}}_{k}\|\geq\epsilon\\}$.
By Lemma 4.10, there exist two infinite index sets $\\{m_{i}\\}$,
$\\{n_{i}\\}$ with ${\bar{K}}<m_{i}<n_{i}$, $\forall i\geq 0$, such that
$\|\nabla{\mathcal{L}}_{m_{i}}\|\geq
2\epsilon,\quad\|\nabla{\mathcal{L}}_{n_{i}}\|<\epsilon,\quad\|\nabla{\mathcal{L}}_{k}\|\geq\epsilon\text{
for }k\in\\{m_{i}+1,\cdots,n_{i}-1\\}.$ (25)
By Assumption 4.1 and the definition $\nabla{\mathcal{L}}_{k}=(P_{k}\nabla
f_{k},c_{k})$, there exists $L_{\nabla{\mathcal{L}}}>0$ such that
$\|\nabla{\mathcal{L}}_{k+1}-\nabla{\mathcal{L}}_{k}\|\leq
L_{\nabla{\mathcal{L}}}\\{\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|+\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|^{2}\\}$.
Thus, (25) implies
$\displaystyle\epsilon$
$\displaystyle\leq\|\nabla{\mathcal{L}}_{m_{i}}\|-\|\nabla{\mathcal{L}}_{n_{i}}\|\leq\|\nabla{\mathcal{L}}_{n_{i}}-\nabla{\mathcal{L}}_{m_{i}}\|\leq\sum_{k=m_{i}}^{n_{i}-1}\|\nabla{\mathcal{L}}_{k+1}-\nabla{\mathcal{L}}_{k}\|$
$\displaystyle\leq
L_{\nabla{\mathcal{L}}}\sum_{k=m_{i}}^{n_{i}-1}\\{\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|+\|{\bm{x}}_{k+1}-{\bm{x}}_{k}\|^{2}\\}\leq
L_{\nabla{\mathcal{L}}}\sum_{k=m_{i}}^{n_{i}-1}(\Delta_{k}+\Delta_{k}^{2})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{Ful_GenerateRadius}}}}{{\leq}}L_{\nabla{\mathcal{L}}}\sum_{k=m_{i}}^{n_{i}-1}(\eta_{\max}\alpha_{u}\beta_{k}\|\bar{\nabla}{\mathcal{L}}_{k}\|+\eta_{\max}^{2}\alpha_{u}^{2}\beta_{k}^{2}\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2})\quad(\text{also
by Lemma }\ref{lemma:4statements}).$
Since
$\|\bar{\nabla}{\mathcal{L}}_{k}\|\leq\|\nabla{\mathcal{L}}_{k}\|+\|{\bar{g}}_{k}-\nabla
f_{k}\|$, $\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}\leq
2(\|\nabla{\mathcal{L}}_{k}\|^{2}+\|{\bar{g}}_{k}-\nabla f_{k}\|^{2})$ and
$\beta_{k}\leq\beta_{\max}$, we have
$\epsilon\leq
L_{\nabla{\mathcal{L}}}\eta_{\max}\alpha_{u}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|+2L_{\nabla{\mathcal{L}}}\eta_{\max}^{2}\alpha_{u}^{2}\beta_{\max}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}\\\
+L_{\nabla{\mathcal{L}}}\eta_{\max}\alpha_{u}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|{\bar{g}}_{k}-\nabla
f_{k}\|+2L_{\nabla{\mathcal{L}}}\eta_{\max}^{2}\alpha_{u}^{2}\beta_{\max}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|{\bar{g}}_{k}-\nabla
f_{k}\|^{2}.$
Multiplying both sides by $\epsilon$ and using $\|\nabla{\mathcal{L}}_{k}\|\geq\epsilon$ for $k\in\\{m_{i},\cdots,n_{i}-1\\}$, we have
$\epsilon^{2}\leq\\{L_{\nabla{\mathcal{L}}}\eta_{\max}\alpha_{u}+2\epsilon
L_{\nabla{\mathcal{L}}}\eta_{\max}^{2}\alpha_{u}^{2}\beta_{\max}\\}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}\\\
+\left\\{\epsilon L_{\nabla{\mathcal{L}}}\eta_{\max}\alpha_{u}+2\epsilon
L_{\nabla{\mathcal{L}}}\eta_{\max}^{2}\alpha_{u}^{2}\beta_{\max}\right\\}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\left(\|{\bar{g}}_{k}-\nabla
f_{k}\|+\|{\bar{g}}_{k}-\nabla f_{k}\|^{2}\right).$ (26)
To derive a contradiction, we will show that the right-hand side of the above expression converges to zero as $i\rightarrow\infty$. By (24), we know that
$\infty>\sum_{k={\bar{K}}+1}^{\infty}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}\geq\sum_{i=0}^{\infty}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}$.
Thus,
$\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\|\nabla{\mathcal{L}}_{k}\|^{2}\rightarrow
0$ as $i\rightarrow\infty$. For the second term, we note that
$\sum_{i=0}^{\infty}\mathbb{E}_{{\bar{K}}+1}\left[\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}(\|{\bar{g}}_{k}-\nabla
f_{k}\|+\|{\bar{g}}_{k}-\nabla
f_{k}\|^{2})\right]=\sum_{i=0}^{\infty}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\mathbb{E}_{{\bar{K}}+1}[\|{\bar{g}}_{k}-\nabla
f_{k}\|+\|{\bar{g}}_{k}-\nabla f_{k}\|^{2}]\\\ \leq
2\sum_{i=0}^{\infty}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}(M_{g}+M_{g,1}\mathbb{E}_{{\bar{K}}+1}[f_{k}-f_{\inf}])\stackrel{{\scriptstyle\mathclap{\eqref{pequ:4}}}}{{\leq}}2(M_{g}+M_{g,1}M_{{\bar{K}}})\sum_{i=0}^{\infty}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}.$
By the definition of $\mathcal{K}_{\epsilon}$ above and by (24), we have
$\sum_{i=0}^{\infty}\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}\leq\sum_{k\in\mathcal{K}_{\epsilon}}\beta_{k}<\infty$.
We apply the Borel-Cantelli lemma, integrate out the randomness of $\mathcal{F}_{{\bar{K}}}$, and obtain
$\sum_{k=m_{i}}^{n_{i}-1}\beta_{k}(\|{\bar{g}}_{k}-\nabla f_{k}\|+\|{\bar{g}}_{k}-\nabla f_{k}\|^{2})\rightarrow 0$ as $i\rightarrow\infty$ almost surely. Thus, the right-hand side of (26) converges to zero, contradicting the fact that its left-hand side is the constant $\epsilon^{2}>0$. This completes the proof. ∎
Our almost sure convergence result matches the ones in Na et al. (2021, 2022a) established for stochastic line-search methods in constrained optimization, and matches the one in Curtis and Shi (2020) established for a stochastic trust-region method in unconstrained optimization. Compared to Curtis and Shi (2020) (cf. Assumption 4.4 there), we do not assume that the variance of the gradient estimates decays as $\beta_{k}$. Such an assumption violates the flavor of fully stochastic methods, since it requires a batch of samples per iteration with the batch size going to infinity. Instead, we assume a growth condition (cf. Assumption 4.3), which is weaker than the usual bounded variance condition. We should also mention that, if one applies the result of Lemma 4.5 in Curtis and Shi (2020), one may be able to show almost sure convergence for decaying $\beta_{k}$ without requiring decaying variance in the setting of Curtis and Shi (2020). However, a new concern appears: one needs to rescale the Hessian matrix at each step, which modifies the curvature information and affects the convergence speed.
### 4.3 Merit parameter behavior
In this subsection, we study the behavior of the merit parameter. We revisit Assumption 4.2 and show that it is satisfied provided ${\bar{g}}_{k}$ is bounded; in other words, the gradient noise is assumed to have bounded support (e.g., when sampling from an empirical distribution). Such a boundedness condition is standard for ensuring a stabilized merit parameter in both deterministic and stochastic SQP methods (Bertsekas, 1982; Berahas et al., 2021a, b, 2022b; Curtis et al., 2021b; Na et al., 2021, 2022a). In contrast to existing stochastic SQP methods, we do not require the stabilized value to be large enough.
###### Assumption 4.12.
For a constant $M_{1}>0$, we have $\|{\bar{g}}_{k}-\nabla f_{k}\|\leq M_{1}$,
$\forall k\geq 0$.
###### Lemma 4.13.
Suppose Assumptions 4.1 and 4.12 hold. There exist a (potentially random)
${\bar{K}}<\infty$ and a deterministic constant $\widehat{\mu}$, such that
${\bar{\mu}}_{k}={\bar{\mu}}_{{\bar{K}}}\leq{\widehat{\mu}}$, $\forall
k>{\bar{K}}$.
###### Proof.
It suffices to show that there exists a deterministic threshold
$\widetilde{\mu}>0$ independent of $k$ such that (15) is satisfied as long as
${\bar{\mu}}_{k}\geq\widetilde{\mu}$. We have
$\displaystyle\text{Pred}_{k}$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:Ful_Pred_k}}}}{{=}}\;{\bar{g}}_{k}^{T}\Delta{\bm{x}}_{k}+\frac{1}{2}\Delta{\bm{x}}_{k}^{T}B_{k}\Delta{\bm{x}}_{k}+{\bar{\mu}}_{k}(\|c_{k}+G_{k}\Delta{\bm{x}}_{k}\|-\|c_{k}\|)$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\eqref{eq:constraint_violation}}}}{{=}}\;{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}+{\bar{\gamma}}_{k}({\bar{g}}_{k}-\nabla
f_{k})^{T}{\bm{v}}_{k}+{\bar{\gamma}}_{k}\nabla
f_{k}^{T}{\bm{v}}_{k}+\frac{1}{2}{\bm{u}}_{k}^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}+{\bar{\gamma}}_{k}{\bm{v}}_{k}^{T}B_{k}P_{k}{\bm{u}}_{k}$
$\displaystyle\quad+\frac{1}{2}{\bar{\gamma}}_{k}^{2}{\bm{v}}_{k}^{T}B_{k}{\bm{v}}_{k}-{\bar{\mu}}_{k}{\bar{\gamma}}_{k}\|c_{k}\|\quad(\text{also
use }\Delta{\bm{x}}_{k}={\bar{\gamma}}_{k}{\bm{v}}_{k}+P_{k}{\bm{u}}_{k})$
$\displaystyle\leq{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}+{\bar{\gamma}}_{k}(M_{1}+\kappa_{\nabla
f})\|{\bm{v}}_{k}\|+\frac{1}{2}{\bm{u}}_{k}^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}+\|B_{k}\|\|{\bar{\gamma}}_{k}{\bm{v}}_{k}\|\|P_{k}{\bm{u}}_{k}\|$
$\displaystyle\quad+\frac{1}{2}\|B_{k}\|\|{\bar{\gamma}}_{k}{\bm{v}}_{k}\|^{2}-{\bar{\mu}}_{k}{\bar{\gamma}}_{k}\|c_{k}\|\quad(\text{by
Assumptions \ref{ass:1-1}, \ref{ass:bound_error}})$
$\displaystyle\stackrel{{\scriptstyle\mathclap{\begin{subarray}{c}\eqref{eq:Sto_gamma_k},\eqref{eq:Sto_tangential_step}\end{subarray}}}}{{\leq}}\quad{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}+\frac{{\bar{\gamma}}_{k}(M_{1}+\kappa_{\nabla
f})}{\sqrt{\kappa_{1,G}}}\|c_{k}\|+\frac{1}{2}{\bm{u}}_{k}^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}+\|B_{k}\|\breve{\Delta}_{k}\widetilde{\Delta}_{k}$
$\displaystyle\quad+\frac{{\bar{\gamma}}_{k}\kappa_{B}\kappa_{c}}{2\kappa_{1,G}}\|c_{k}\|-{\bar{\mu}}_{k}{\bar{\gamma}}_{k}\|c_{k}\|\quad(\text{also
use }\|{\bm{v}}_{k}\|\leq\|c_{k}\|/\sqrt{\kappa_{1,G}}\text{ and
}{\bar{\gamma}}_{k}\leq 1).$
Thus, we have that
$\text{Pred}_{k}\leq{\bar{g}}_{k}^{T}P_{k}{\bm{u}}_{k}+\frac{1}{2}{\bm{u}}_{k}^{T}P_{k}B_{k}P_{k}{\bm{u}}_{k}+\|B_{k}\|\breve{\Delta}_{k}\widetilde{\Delta}_{k}-\frac{1}{2}\breve{\Delta}_{k}\|c_{k}\|$
(27)
provided ${\bar{\mu}}_{k}$ satisfies
${\bar{\mu}}_{k}{\bar{\gamma}}_{k}\|c_{k}\|\geq\frac{1}{2}\breve{\Delta}_{k}\|c_{k}\|+{\bar{\gamma}}_{k}\left(\frac{M_{1}+\kappa_{\nabla
f}}{\sqrt{\kappa_{1,G}}}+\frac{\kappa_{B}\kappa_{c}}{2\kappa_{1,G}}\right)\|c_{k}\|.$
(28)
When ${\bar{\gamma}}_{k}=1$, we apply (6) and (12), and have
$\breve{\Delta}_{k}\leq\eta_{1,k}\alpha_{k}\|c_{k}\|\stackrel{{\scriptstyle\eqref{def:eta2k}}}{{\leq}}\frac{6\zeta\beta_{\max}}{\|G_{k}\|}\cdot\frac{\beta_{k}}{6\zeta\beta_{\max}}\|c_{k}\|\leq\frac{\beta_{\max}\kappa_{c}}{\sqrt{\kappa_{1,G}}},$
where the last inequality uses Assumption 4.1. Thus, (28) holds as long as
${\bar{\mu}}_{k}\geq\frac{\beta_{\max}\kappa_{c}}{2\sqrt{\kappa_{1,G}}}+\frac{M_{1}+\kappa_{\nabla
f}}{\sqrt{\kappa_{1,G}}}+\frac{\kappa_{B}\kappa_{c}}{2\kappa_{1,G}}.$
When ${\bar{\gamma}}_{k}<1$, we apply
$\breve{\Delta}_{k}\stackrel{{\scriptstyle\eqref{eq:Sto_gamma_k}}}{{=}}{\bar{\gamma}}_{k}\|{\bm{v}}_{k}\|\leq{\bar{\gamma}}_{k}\kappa_{c}/\sqrt{\kappa_{1,G}}$.
Thus, (28) holds as long as ${\bar{\mu}}_{k}$ satisfies the above condition
with $\beta_{\max}$ replaced by 1. Combining the two cases, we know (27) holds
if ${\bar{\mu}}_{k}\geq\widetilde{\mu}$ for some $\widetilde{\mu}>0$. By Lemma
2.1 and $P_{k}{\bar{g}}_{k}=\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}$, (27)
implies (15). Since ${\bar{\mu}}_{k}$ is increased by at least a factor of
$\rho$ for each update, we define $\widehat{\mu}\coloneqq\rho\widetilde{\mu}$
and complete the proof. ∎
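The mechanism at the end of the proof, where ${\bar{\mu}}_{k}$ is increased by at least a factor of $\rho$ per update until the predicted-reduction test passes, can be sketched as follows. This is an illustrative sketch, not the paper's implementation; `condition_holds` is a hypothetical stand-in for checking (15) at the current iterate.

```python
def update_merit_parameter(mu_bar, rho, condition_holds, max_updates=100):
    """Increase the merit parameter by a factor of rho until the
    predicted-reduction test passes; once a large enough value is
    reached, no further updates occur and the parameter stabilizes."""
    for _ in range(max_updates):
        if condition_holds(mu_bar):
            return mu_bar
        mu_bar *= rho  # increased by at least a factor of rho per update
    return mu_bar

# Toy illustration: suppose the test passes once mu_bar exceeds a
# deterministic threshold mu_tilde; the stabilized value is then at
# most rho * mu_tilde, mirroring the definition of mu_hat in the proof.
mu_tilde = 7.0
mu = update_merit_parameter(1.0, rho=1.5,
                            condition_holds=lambda m: m >= mu_tilde)
```

Any later call with the stabilized value leaves it unchanged, which is the content of Lemma 4.13.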
The additional requirement of a sufficiently large stabilized value is critical in the analysis of existing StoSQP methods. To satisfy this requirement, Na et al. (2021, 2022a) imposed an adaptive condition on the feasibility error when selecting the merit parameter, while Berahas et al. (2021a, b, 2022b) and Curtis et al. (2021b) imposed a symmetry condition on the noise distribution. In contrast, TR-StoSQP relies on a novel relaxation technique and adjusts the linear term in (5) to the one in (8) when computing $\Delta{\bm{x}}_{k}$. By the step and radius decomposition, we can convert the feasibility and optimality residuals in the reduction (15), i.e., $\|c_{k}\|\breve{\Delta}_{k}$ and $\|\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}\|\widetilde{\Delta}_{k}$, into the squared estimated KKT residual $\|\bar{\nabla}{\mathcal{L}}_{k}\|^{2}$. After taking the conditional expectation and carefully analyzing the error terms, we can then use the true KKT residual to characterize the improvement of the merit function at each step. In the end, we dispense with the condition of a sufficiently large merit parameter.
## 5 Numerical Experiments
In this section, we demonstrate the empirical performance of Algorithm 1. We compare our method with the line-search $\ell_{1}$-StoSQP method designed in (Berahas et al., 2021b, Algorithm 3) under the same fully stochastic setup. In what follows, we first describe the algorithmic settings in Section 5.1; we then report numerical results on a subset of CUTEst problems (Gould et al., 2014) in Section 5.2 and on constrained logistic regression problems in Section 5.3. The implementation of TR-StoSQP is available at https://github.com/ychenfang/TR-StoSQP.
### 5.1 Algorithm setups
For both our method and $\ell_{1}$-StoSQP, we try two constant sequences,
$\beta_{k}\in\\{0.5,1\\}$, and two decaying sequences,
$\beta_{k}\in\\{k^{-0.6},k^{-0.8}\\}$. The sequence $\\{\beta_{k}\\}$ is used
to select the stepsize in $\ell_{1}$-StoSQP. We use the same input since, as discussed in Remark 3.1, $\beta_{k}$ in the two methods shares the same order. For
both methods, the Lipschitz constants of the objective gradients and
constraint Jacobians are estimated around the initialization and kept constant
for the subsequent iterations.
We follow Berahas et al. (2021b) to set up the $\ell_{1}$-StoSQP method, where
we set $B_{k}=I$ and solve the SQP subproblems exactly. We set the parameters
of TR-StoSQP as $\zeta=10$, ${\bar{\mu}}_{-1}=1$, and $\rho=1.5$. We use IPOPT
solver (Wächter and Biegler, 2005) to solve (8), and apply four different
Hessian approximations $B_{k}$ as follows:
(a) Identity (Id). We set $B_{k}=I$, which is widely used in the literature
(Berahas et al., 2021a, b; Na et al., 2021, 2022a).
(b) Symmetric rank-one (SR1) update. We set $H_{-1}=H_{0}=I$ and update $H_{k}$ as
$H_{k}=H_{k-1}+\frac{(\bm{y}_{k-1}-H_{k-1}\Delta{\bm{x}}_{k-1})(\bm{y}_{k-1}-H_{k-1}\Delta{\bm{x}}_{k-1})^{T}}{(\bm{y}_{k-1}-H_{k-1}\Delta{\bm{x}}_{k-1})^{T}\Delta{\bm{x}}_{k-1}},\quad\forall
k\geq 1,$
where
$\bm{y}_{k-1}=\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k}-\bar{\nabla}_{{\bm{x}}}{\mathcal{L}}_{k-1}$
and $\Delta{\bm{x}}_{k-1}={\bm{x}}_{k}-{\bm{x}}_{k-1}$. Since $H_{k}$ depends
on ${\bar{g}}_{k}$, we set $B_{k}=H_{k-1}$ ($B_{0}=H_{-1}=I$) to ensure that
$\sigma(B_{k})\subseteq\mathcal{F}_{k-1}$.
(c) Estimated Hessian (EstH). We set $B_{0}=I$ and
$B_{k}=\bar{\nabla}_{{\bm{x}}}^{2}{\mathcal{L}}_{k-1}$, $\forall k\geq 1$,
where $\bar{\nabla}_{{\bm{x}}}^{2}{\mathcal{L}}_{k-1}$ is estimated using the
same sample used to estimate ${\bar{g}}_{k-1}$.
(d) Averaged Hessian (AveH). We set $B_{0}=I$, set
$B_{k}=\sum_{i=k-100}^{k-1}\bar{\nabla}_{{\bm{x}}}^{2}{\mathcal{L}}_{i}/100$
for $k\geq 100$, and set
$B_{k}=\sum_{i=0}^{k-1}\bar{\nabla}_{{\bm{x}}}^{2}{\mathcal{L}}_{i}/k$ for
$0<k<100$. This Hessian approximation is inspired by Na et al. (2022b), where the authors showed that Hessian averaging helps reduce the noise in stochastic Hessian estimates.
### 5.2 CUTEst
We implement 10 problems in the CUTEst set (Gould et al., 2014): BT4, BT5,
BT8, BT9, MARATOS, HS39, HS40, HS42, HS78, HS79. Singularity of the matrix $G_{k}G_{k}^{T}$ is not encountered in any iteration of any problem. The initial iterate is provided by the CUTEst package. At each step, the estimate
${\bar{g}}_{k}$ is drawn from ${\mathcal{N}}(\nabla
f_{k},\sigma^{2}(I+\bm{1}\bm{1}^{T}))$, where $\bm{1}$ denotes the $d$-dimensional all-one vector and $\sigma^{2}$ denotes the noise level, which varies over $\\{10^{-8},10^{-4},10^{-2},10^{-1}\\}$. When the approximation
EstH or AveH is used, the estimate $(\bar{\nabla}^{2}f_{k})_{i,j}$ (which is
also the $(j,i)$ entry) is drawn from
${\mathcal{N}}((\nabla^{2}f_{k})_{i,j},\sigma^{2})$ with the same $\sigma^{2}$
used for estimating the gradient. We set the maximum iteration budget to
$10^{5}$ and, for each setup of $\beta_{k}$ and $\sigma^{2}$, average the KKT
residuals over 5 runs. We stop the iteration of both methods if
$\|\nabla\mathcal{L}_{k}\|\leq 10^{-4}$ or $k\geq 10^{5}$.
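The noise model ${\mathcal{N}}(\nabla f_{k},\sigma^{2}(I+\bm{1}\bm{1}^{T}))$ can be sampled without forming the covariance matrix, since a draw from ${\mathcal{N}}(0,I+\bm{1}\bm{1}^{T})$ is $e+z\bm{1}$ with independent $e\sim{\mathcal{N}}(0,I)$ and scalar $z\sim{\mathcal{N}}(0,1)$. An illustrative sketch (ours):

```python
import numpy as np

def noisy_gradient(grad_true, sigma2, rng):
    """Draw g_bar ~ N(grad_true, sigma^2 (I + 11^T)).
    Cov(e + z*1) = I + 11^T for independent e ~ N(0, I), z ~ N(0, 1)."""
    d = grad_true.shape[0]
    e = rng.standard_normal(d)
    z = rng.standard_normal()
    return grad_true + np.sqrt(sigma2) * (e + z * np.ones(d))
```

The resulting estimates have variance $2\sigma^{2}$ per coordinate and covariance $\sigma^{2}$ between coordinates.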
We report the KKT residuals of $\ell_{1}$-StoSQP and TR-StoSQP with different
Hessian approximations in Figure 1. We observe that for both constant
$\beta_{k}$ and decaying $\beta_{k}$ with a high noise level, TR-StoSQP
consistently outperforms $\ell_{1}$-StoSQP. We note that $\ell_{1}$-StoSQP
performs better than TR-StoSQP for decaying $\beta_{k}$ with a low noise level
(e.g., $\sigma^{2}=10^{-8}$). However, in that case, TR-StoSQP is not
sensitive to the noise level $\sigma^{2}$ while the performance of
$\ell_{1}$-StoSQP deteriorates rapidly as $\sigma^{2}$ increases. We attribute this robustness to noise to the trust-region constraint, which properly regularizes the SQP subproblem when $\sigma^{2}$ is large. Furthermore, among the four choices of Hessian approximations, TR-StoSQP generally performs best with the averaged Hessian, and second best with the estimated Hessian. Compared to the identity and the SR1 update, the estimated Hessian provides a better approximation to the true Hessian (especially when $\sigma^{2}$ is small); the averaged Hessian further reduces the noise, leading to better performance (especially when $\sigma^{2}$ is large).
Figure 1: KKT residual boxplots for CUTEst problems; panels (a)–(d) correspond to $\beta_{k}=0.5$, $\beta_{k}=1.0$, $\beta_{k}=k^{-0.6}$, and $\beta_{k}=k^{-0.8}$. For each $\sigma^{2}$, there are five boxes. The first four boxes correspond to the proposed TR-StoSQP method with four different choices of $B_{k}$, while the last box corresponds to the $\ell_{1}$-StoSQP method.
We then investigate the adaptivity of the radius selection scheme in (12). As
explained in Remark 3.1, the radius $\Delta_{k}$ can be set larger or smaller
than $\alpha_{k}=\mathcal{O}(\beta_{k})$, depending on the magnitude of the
estimated KKT residual. In Table 1, we report the proportions of the three
cases in (12): $\Delta_{k}<\alpha_{k}$, $\Delta_{k}=\alpha_{k}$, and
$\Delta_{k}>\alpha_{k}$. We average the proportions over 5 runs of all 10
problems in each setup. From Table 1, we make the following three observations. (i) Case 2 has a near-zero proportion in all setups. This is because $\eta_{1,k}-\eta_{2,k}=\mathcal{O}(\beta_{k})$: for constant $\beta_{k}$ this value is small, so few iterations fall in Case 2; for decaying $\beta_{k}$ it even converges to zero, so almost no iterations fall in Case 2. (ii) Case 3 is triggered quite frequently if $\beta_{k}$ decays rapidly. This suggests that the adaptive scheme can generate aggressive steps even when we input a conservative radius-related sequence $\beta_{k}$. (iii) The proportion of Case 1 dominates the other two cases in most setups. This is reasonable since Case 1 is always triggered when the iterates are near a KKT point.
$\beta_{k}$ | $B_{k}$ | $\sigma^{2}=10^{-8}$ (Case 1 / 2 / 3) | $\sigma^{2}=10^{-4}$ (Case 1 / 2 / 3) | $\sigma^{2}=10^{-2}$ (Case 1 / 2 / 3) | $\sigma^{2}=10^{-1}$ (Case 1 / 2 / 3)
---|---|---|---|---|---
0.5 | Id | 90.4 / 0.1 / 9.5 | 91.3 / 0.1 / 8.6 | 95.0 / 0.1 / 4.9 | 54.7 / 0.8 / 44.5
 | SR1 | 94.6 / 0.1 / 5.3 | 93.4 / 0.1 / 6.5 | 95.2 / 0.1 / 4.7 | 58.6 / 0.7 / 40.7
 | EstH | 93.6 / 0.1 / 6.3 | 93.8 / 0.1 / 6.2 | 87.4 / 0.2 / 12.4 | 70.4 / 0.3 / 29.2
 | AveH | 93.7 / 0.1 / 6.2 | 93.5 / 0.1 / 6.4 | 86.1 / 0.2 / 13.8 | 67.6 / 0.3 / 32.1
1.0 | Id | 90.1 / 0.1 / 9.8 | 92.2 / 0.2 / 7.7 | 96.5 / 0.2 / 3.3 | 54.4 / 1.5 / 44.1
 | SR1 | 94.9 / 0.1 / 5.0 | 95.6 / 0.1 / 4.3 | 97.2 / 0.2 / 2.6 | 66.3 / 1.1 / 32.6
 | EstH | 93.6 / 0.1 / 6.3 | 94.3 / 0.1 / 5.6 | 88.0 / 0.3 / 11.7 | 71.2 / 0.6 / 28.2
 | AveH | 93.6 / 0.2 / 6.2 | 94.2 / 0.2 / 5.6 | 87.8 / 0.3 / 11.9 | 68.7 / 0.6 / 30.7
$k^{-0.6}$ | Id | 97.8 / 0.0 / 2.2 | 97.4 / 0.0 / 2.6 | 95.2 / 0.0 / 4.8 | 52.4 / 0.0 / 47.6
 | SR1 | 98.0 / 0.0 / 2.0 | 96.8 / 0.0 / 3.2 | 94.3 / 0.0 / 5.7 | 52.2 / 0.0 / 47.8
 | EstH | 97.5 / 0.0 / 2.5 | 97.5 / 0.0 / 2.5 | 88.3 / 0.0 / 11.7 | 71.3 / 0.0 / 28.6
 | AveH | 97.5 / 0.0 / 2.5 | 97.5 / 0.0 / 2.5 | 87.8 / 0.0 / 12.2 | 68.8 / 0.0 / 31.2
$k^{-0.8}$ | Id | 73.4 / 0.0 / 26.6 | 73.4 / 0.0 / 26.6 | 70.1 / 0.0 / 29.9 | 43.3 / 0.0 / 56.7
 | SR1 | 58.8 / 0.0 / 41.2 | 67.0 / 0.0 / 33.0 | 68.4 / 0.0 / 31.6 | 42.9 / 0.0 / 57.1
 | EstH | 68.0 / 0.0 / 32.0 | 68.0 / 0.0 / 32.0 | 66.3 / 0.0 / 33.7 | 55.8 / 0.0 / 44.2
 | AveH | 68.2 / 0.0 / 31.8 | 68.2 / 0.0 / 31.8 | 66.5 / 0.0 / 33.5 | 55.3 / 0.0 / 44.7
Table 1: Proportions of the three cases in (12) (%). We highlight the
proportion of Case 3 if the value is higher than 25%.
### 5.3 Constrained logistic regression
We consider equality-constrained logistic regression of the form
$\min_{{\bm{x}}\in\mathbb{R}^{d}}\;f({\bm{x}})=\frac{1}{N}\sum_{i=1}^{N}\log\left\\{1+\exp\left(-y_{i}\cdot\langle{\bm{z}}_{i},{\bm{x}}\rangle\right)\right\\}\quad\text{
s.t. }A{\bm{x}}=\bm{b},$
where ${\bm{z}}_{i}\in\mathbb{R}^{d}$ is the sample point,
$y_{i}\in\\{-1,1\\}$ is the label, and $A\in\mathbb{R}^{m\times d}$ and
${\bm{b}}\in\mathbb{R}^{m}$ form the deterministic constraints. We implement 8
datasets from LIBSVM (Chang and Lin, 2011): australian, breast-cancer, diabetes, heart, ionosphere, sonar, splice, and svmguide3. For each dataset,
we set $m=5$ and generate random $A$ and $\bm{b}$ by drawing each element from
a standard normal distribution. We ensure that $A$ has full row rank in all
problems. For both algorithms and all problems, the initial iterate is set to the all-one vector of appropriate dimension. In each iteration, we select one sample at random to estimate the objective gradient (and the Hessian if EstH or AveH is used). A budget of 20 epochs (the number of passes over the dataset) is used for both algorithms and all problems. We stop the iteration if $\|\nabla\mathcal{L}_{k}\|\leq 10^{-4}$ or the epoch budget is consumed.
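The objective above has a closed-form gradient, and the constraint construction can be reproduced in a few lines. A NumPy sketch (function names are ours; the rejection loop for full row rank mirrors the guarantee stated above):

```python
import numpy as np

def logistic_loss_grad(x, Z, y):
    """f(x) = (1/N) sum_i log(1 + exp(-y_i <z_i, x>)) and its gradient
    grad f(x) = -(1/N) sum_i y_i z_i / (1 + exp(y_i <z_i, x>))."""
    margins = y * (Z @ x)
    loss = np.mean(np.log1p(np.exp(-margins)))
    grad = -(Z.T @ (y / (1.0 + np.exp(margins)))) / len(y)
    return loss, grad

def random_constraints(d, m, rng):
    """Draw A in R^{m x d} and b in R^m with standard-normal entries,
    redrawing until A has full row rank."""
    while True:
        A = rng.standard_normal((m, d))
        if np.linalg.matrix_rank(A) == m:
            return A, rng.standard_normal(m)
```

The gradient can be verified against central finite differences of the loss, which is a useful sanity check before running either SQP variant.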
We report the average of the KKT residuals over 5 runs in Figure 2. From the
figure, we observe that TR-StoSQP with all four choices of $B_{k}$
consistently outperforms $\ell_{1}$-StoSQP when $\beta_{k}=0.5$, $1.0$, and
$k^{-0.6}$. When $\beta_{k}=k^{-0.8}$, TR-StoSQP enjoys a better performance
by using the estimated Hessian or averaged Hessian. This experiment further
illustrates the promising performance of our method.
Figure 2: KKT residual boxplots for constrained logistic regression problems; the left panel shows constant $\beta_{k}$ and the right panel shows decaying $\beta_{k}=k^{-s}$. For each setup of $\beta_{k}$, there are five boxes. The first four boxes correspond to the proposed TR-StoSQP method with four different choices of $B_{k}$, while the last box corresponds to the $\ell_{1}$-StoSQP method.
## 6 Conclusion
We designed a trust-region stochastic SQP (TR-StoSQP) algorithm for solving
nonlinear optimization problems with stochastic objective and deterministic
equality constraints. We developed an adaptive relaxation technique to address
the infeasibility issue that bothers the trust-region methods when applied to
constrained optimization. With a stabilized merit parameter, TR-StoSQP
converges in two regimes. (i) When $\beta_{k}=\beta$, $\forall k\geq 0$, the
expectation of weighted averaged KKT residuals converges to a neighborhood
around zero. (ii) When $\beta_{k}$ satisfies $\sum\beta_{k}=\infty$ and
$\sum\beta_{k}^{2}<\infty$, the KKT residuals converge to zero almost surely.
We also showed that the merit parameter is guaranteed to stabilize provided the gradient estimates are bounded. Our numerical experiments on a subset of the CUTEst set and on constrained logistic regression problems demonstrated the promising performance of the proposed method.
### Acknowledgments
We would like to acknowledge the DOE, NSF, and ONR as well as the J. P. Morgan
Chase Faculty Research Award for providing partial support of this work.
## References
* Berahas et al. (2021a) A. S. Berahas, F. E. Curtis, M. J. O’Neill, and D. P. Robinson. A stochastic sequential quadratic optimization algorithm for nonlinear equality constrained optimization with rank-deficient Jacobians. _arXiv preprint arXiv:2106.13015_ , 2021a.
* Berahas et al. (2021b) A. S. Berahas, F. E. Curtis, D. Robinson, and B. Zhou. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization. _SIAM Journal on Optimization_ , 31(2):1352–1379, 2021b.
* Berahas et al. (2022a) A. S. Berahas, R. Bollapragada, and B. Zhou. An adaptive sampling sequential quadratic programming method for equality constrained stochastic optimization. _arXiv preprint arXiv:2206.00712_ , 2022a.
* Berahas et al. (2022b) A. S. Berahas, J. Shi, Z. Yi, and B. Zhou. Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction. _arXiv preprint arXiv:2204.04161_ , 2022b.
* Bertsekas (1998) D. Bertsekas. _Network optimization: continuous and discrete models_ , volume 8. Athena Scientific, 1998.
* Bertsekas (1982) D. P. Bertsekas. _Constrained Optimization and Lagrange Multiplier Methods_. Elsevier, 1982.
* Birge (1997) J. R. Birge. State-of-the-art-survey—stochastic programming: Computation and applications. _INFORMS Journal on Computing_ , 9(2):111–133, 1997.
* Boggs and Tolle (1995) P. T. Boggs and J. W. Tolle. Sequential quadratic programming. _Acta Numerica_ , 4:1–51, 1995.
* Bottou et al. (2018) L. Bottou, F. E. Curtis, and J. Nocedal. Optimization methods for large-scale machine learning. _SIAM Review_ , 60(2):223–311, 2018.
* Byrd et al. (1987) R. H. Byrd, R. B. Schnabel, and G. A. Shultz. A trust region algorithm for nonlinearly constrained optimization. _SIAM Journal on Numerical Analysis_ , 24(5):1152–1170, 1987.
* Celis et al. (1984) M. R. Celis, J. Dennis Jr, and R. A. Tapia. A trust region strategy for equality constrained optimization. Technical report, 1984.
* Chang and Lin (2011) C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. _ACM Transactions on Intelligent Systems and Technology_ , 2(3):1–27, 2011.
* Chen et al. (2018) C. Chen, F. Tung, N. Vedula, and G. Mori. Constraint-aware deep neural network compression. In _Computer Vision – ECCV 2018_ , pages 409–424. Springer International Publishing, 2018.
* Chen et al. (2017) R. Chen, M. Menickelly, and K. Scheinberg. Stochastic optimization using a trust-region method and random models. _Mathematical Programming_ , 169(2):447–487, 2017\.
* Chen et al. (2020) Y.-L. Chen, S. Na, and M. Kolar. Convergence analysis of accelerated stochastic gradient descent under the growth condition. _arXiv preprint arXiv:2006.06782_ , 2020.
* Curtis and Robinson (2018) F. E. Curtis and D. P. Robinson. Exploiting negative curvature in deterministic and stochastic optimization. _Mathematical Programming_ , 176(1-2):69–94, 2018\.
* Curtis and Shi (2020) F. E. Curtis and R. Shi. A fully stochastic second-order trust region method. _Optimization Methods and Software_ , 37(3):844–877, 2020.
* Curtis et al. (2019) F. E. Curtis, K. Scheinberg, and R. Shi. A stochastic trust region algorithm based on careful step normalization. _INFORMS Journal on Optimization_ , 1(3):200–220, 2019.
* Curtis et al. (2021a) F. E. Curtis, M. J. O’Neill, and D. P. Robinson. Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization. _arXiv preprint arXiv:2112.14799_ , 2021a.
* Curtis et al. (2021b) F. E. Curtis, D. P. Robinson, and B. Zhou. Inexact sequential quadratic optimization for minimizing a stochastic objective function subject to deterministic nonlinear equality constraints. _arXiv preprint arXiv:2107.03512_ , 2021b.
* Dupacova and Wets (1988) J. Dupacova and R. Wets. Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems. _The Annals of Statistics_ , 16(4):1517–1549, 1988.
* El-Alem (1991) M. El-Alem. A global convergence theory for the celis–dennis–tapia trust-region algorithm for constrained optimization. _SIAM Journal on Numerical Analysis_ , 28(1):266–290, 1991.
* Gould et al. (2014) N. I. M. Gould, D. Orban, and P. L. Toint. CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. _Computational Optimization and Applications_ , 60(3):545–557, 2014.
* Johnson and Zhang (2013) R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. _Advances in neural information processing systems_ , 26, 2013.
* Na and Mahoney (2022) S. Na and M. W. Mahoney. Asymptotic convergence rate and statistical inference for stochastic sequential quadratic programming. _arXiv preprint arXiv:2205.13687_ , 2022.
* Na et al. (2021) S. Na, M. Anitescu, and M. Kolar. Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming. _arXiv preprint arXiv:2109.11502_ , 2021.
* Na et al. (2022a) S. Na, M. Anitescu, and M. Kolar. An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians. _Mathematical Programming_ , 2022a.
* Na et al. (2022b) S. Na, M. Dereziński, and M. W. Mahoney. Hessian averaging in stochastic Newton methods achieves superlinear convergence. _arXiv preprint arXiv:2204.09266_ , 2022b.
* Nocedal and Wright (2006) J. Nocedal and S. Wright. _Numerical Optimization_. Springer New York, 2006.
* Omojokun (1989) E. O. Omojokun. _Trust region algorithms for optimization with nonlinear equality and inequality constraints_. PhD thesis, University of Colorado, Boulder, CO, 1989.
* Powell and Yuan (1990) M. J. D. Powell and Y. Yuan. A trust region algorithm for equality constrained optimization. _Mathematical Programming_ , 49(1-3):189–211, 1990.
* Rees et al. (2010) T. Rees, H. S. Dollar, and A. J. Wathen. Optimal solvers for PDE-constrained optimization. _SIAM Journal on Scientific Computing_ , 32(1):271–298, 2010.
* Robbins and Siegmund (1971) H. Robbins and D. Siegmund. A convergence theorem for non negative almost supermartingales and some applications. In _Optimizing Methods in Statistics_ , pages 233–257. Elsevier, 1971\.
* Stich (2019) S. U. Stich. Unified optimal analysis of the (stochastic) gradient method. _arXiv preprint arXiv:1907.04232_ , 2019.
* Sun and Nocedal (2022) S. Sun and J. Nocedal. A trust region method for the optimization of noisy functions. _arXiv preprint arXiv:2201.00973_ , 2022.
* Vardi (1985) A. Vardi. A trust region algorithm for equality constrained minimization: Convergence properties and implementation. _SIAM Journal on Numerical Analysis_ , 22(3):575–591, 1985.
* Vaswani et al. (2019) S. Vaswani, F. Bach, and M. Schmidt. Fast and faster convergence of sgd for over-parameterized models and an accelerated perceptron. In _The 22nd international conference on artificial intelligence and statistics_ , pages 1195–1204. PMLR, 2019.
* Wächter and Biegler (2005) A. Wächter and L. T. Biegler. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. _Mathematical Programming_ , 106(1):25–57, 2005\.
# The Effectiveness of World Models for
Continual Reinforcement Learning
Samuel Kessler1, Mateusz Ostaszewski2, Michał Bortkiewicz2, Mateusz Żarski,3
Maciej Wołczyk4, Jack Parker-Holder1, Stephen J. Roberts1, Piotr Miłoś5
1 University of Oxford
2 Warsaw University of Technology
3 Institute of Theoretical and Applied Informatics, PAS
4 Jagiellonian University, Cracow
5 Polish Academy of Sciences and Ideas NCBR
<EMAIL_ADDRESS>
###### Abstract
World models power some of the most efficient reinforcement learning
algorithms. In this work, we showcase that they can be harnessed for continual
learning – a situation when the agent faces changing environments. World
models typically employ a replay buffer for training, which can be naturally
extended to continual learning. We systematically study how different
selective experience replay methods affect performance, forgetting, and
transfer. We also provide recommendations regarding various modeling options
for using world models. The best set of choices, which we call Continual-Dreamer, is task-agnostic and utilizes the world model for continual exploration.
Continual-Dreamer is sample efficient and outperforms state-of-the-art task-
agnostic continual reinforcement learning methods on Minigrid and Minihack
benchmarks.
## 1 Introduction
CRL with world models.
There have been many recent successes in reinforcement learning (RL), such as
in games (Schrittwieser et al., 2019), robotics (OpenAI et al., 2018) and in
scientific applications (Nguyen et al., 2021; Degrave et al., 2022). However,
these successes showcase methods for solving individual tasks. Looking beyond, the field of continual reinforcement learning (CRL) aims to develop agents that can solve many tasks, one after another, while retaining performance on all previously seen ones (Khetarpal et al., 2020). Such capabilities are conjectured to be essential for truly scalable intelligent systems: Ring (1994) and Hassabis et al. (2017) argue that such systems will need to master many tasks in a continual manner.
_World models_ combine generative models with RL and have become one of the most successful paradigms in single-task RL (Ha & Schmidhuber, 2018). World models are typically trained iteratively: first, the policy interacts with the environment; second, the collected experience is used to train the world model; finally, the policy is trained on hallucinated experience generated by the world model (Kaiser et al., 2019). Such approaches are typically sample efficient, offloading the most data-hungry trial-and-error RL training to the imagined world. Further, generative models can be used to create compact representations that facilitate policy training, yielding very good results in challenging pixel-based environments (Hafner et al., 2023).
This paper explores using world models for learning tasks sequentially. We showcase a method that satisfies the traditional CL desiderata: it avoids catastrophic forgetting, achieves transfer, attains high average performance, and is scalable. The proposed method, _Continual-Dreamer_, is built on top of DreamerV2 (Hafner et al., 2020), an RL algorithm that achieves state-of-the-art results on a number of benchmarks. We define the Continual-Dreamer configuration in Section 5. DreamerV2 uses a replay buffer (Lin, 1992), which we extend across many tasks to mitigate forgetting, akin to (Isele & Cosgun, 2018; Rolnick et al., 2019). Additionally, we demonstrate that world models are capable of operating without explicit task identification. This is an important requirement in many CRL scenarios and enables further capabilities. In particular, we implement an adaptive exploration method similar to (Steinparz et al., 2022), which implicitly adapts to task changes. Based on our empirical results, we argue that world models offer a potent approach to CRL and should attract more research attention.
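As a concrete example of extending a replay buffer across tasks without task labels, reservoir sampling, one of the selective experience replay strategies considered in this literature (Isele & Cosgun, 2018), keeps every transition ever seen with equal probability. A minimal sketch (ours, not the paper's implementation):

```python
import random

class ReservoirReplayBuffer:
    """Fixed-capacity, task-agnostic buffer: every transition ever added
    is retained with equal probability (reservoir sampling), so early
    tasks stay represented without task labels."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, transition):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            # Keep the new transition with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = transition

    def sample(self, batch_size):
        return self.rng.sample(self.data, min(batch_size, len(self.data)))
```

Because retention probability is uniform over everything seen, a buffer filled across several tasks holds roughly proportional shares of each task's experience.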
Our contributions are as follows (code available at https://github.com/skezle/continual-dreamer):
* We present the first approach to task-agnostic model-based CRL. We use DreamerV2 as a backbone, and our work is transferable to other world models.
* We evaluate our method's performance on two challenging CRL benchmarks, Minigrid and Minihack, and show that it outperforms state-of-the-art task-agnostic methods, demonstrating that the model-based paradigm is a viable solution to CRL.
* We thoroughly explore different experience replay strategies, which address how we sample from or populate the experience replay buffer to balance preventing the forgetting of previously learned skills against learning new skills from new tasks.
## 2 Preliminaries
### 2.1 Reinforcement Learning
A Partially Observable Markov Decision Process ($\mathrm{POMDP}$ (Kaelbling et
al., 1998)) is the following tuple
$(\mathcal{S},\mathcal{A},P,R,\Omega,\mathcal{O},\gamma)$. Here, $\mathcal{S}$ and $\mathcal{A}$ are the sets of states and actions, respectively; for $s_{t},s_{t+1}\in\mathcal{S}$ and $a_{t}\in\mathcal{A}$, $P(s_{t+1}|s_{t},a_{t})$ is the transition distribution and $R(a_{t},s_{t},s_{t+1})$ is the reward function. Additionally,
$\gamma\in(0,1)$ is the discount factor. Since the environments we consider
are partially observable, the agent does not have access to the environment
state $s\in\mathcal{S}$, but only the observations $o\in\Omega$, where
$\Omega$ is the set of observations and
$\mathcal{O}:\mathcal{S}\times\mathcal{A}\rightarrow P(\Omega)$ is an
observation function that defines a distribution over observations. Actions
$a_{t}$ are chosen using a policy $\pi$ that maps observations to actions:
$\Omega\rightarrow\mathcal{A}$. For the purposes of this introduction, let us
assume we have access to the states $s_{t}$ and we are working with a finite
horizon $H$. Then the return from a state is $R_{t}=\sum_{i=t}^{H}\gamma^{(i-t)}r(s_{i},a_{i})$. In RL the objective
is to maximize the expected return
$J=\mathbb{E}_{a_{i}\sim\pi,s_{0}\sim\rho}[R_{1}|s_{0}]$ where
$s_{0}\sim\rho(s_{0})$ and $\rho(\cdot)$ is the initial state distribution.
One approach to maximizing expected return is to use a _model-free_ approach,
to learn a policy $\pi_{\phi}:\mathcal{S}\rightarrow\mathcal{A}$ with a
parametric model such as a neural network with parameters $\phi$ guided by an
action-value function $Q_{\theta}(s_{t},a_{t})$ with parameters $\theta$.
Alternatively, instead of learning a policy directly from experience we can
employ _model-based_ RL (MBRL) and learn an intermediate model $f$, for
instance, a transition model $s_{t+1}=f(s_{t},a_{t})$ from experience and
learn our policy with additional experience generated from the model $f$
(Sutton, 1991). Since our methods work with observations $o_{t}$ rather than
the actual state $s_{t}$, we employ recurrent policies, action-value
functions, and models to help better estimate the states $s_{t}$ (Hausknecht &
Stone, 2015).
### 2.2 Continual Reinforcement Learning
In continual RL the agent has a budget of $N$ interactions with each task
environment $\mathcal{T}_{\tau}$. The agent is then required to learn a policy
to maximize rewards in this environment before interacting with a new
environment and having to learn a new policy. Each task is defined as a new
$\mathrm{POMDP}$,
$\mathcal{T}_{\tau}=\big{(}\mathcal{S}_{\tau},\mathcal{A}_{\tau},P_{\tau},R_{\tau},\Omega_{\tau},\mathcal{O}_{\tau},\gamma_{\tau}\big{)}$.
See Section A.1 for a definition of CL in the supervised learning setting.
The agent is continually evaluated on all past and present tasks and so it is
desirable for the agent’s policy to transfer to new tasks while not forgetting
how to perform past tasks. CRL is not a new problem setting (Thrun & Mitchell,
1995); however, its definition has evolved over time and differs from paper to
paper. We employ the setting above, which follows recent work in CRL
(Kirkpatrick et al., 2016; Schwarz et al., 2018; Rolnick et al., 2019; Wolczyk
et al., 2021; Kessler et al., 2021; Powers et al., 2021; Caccia et al., 2022).
Assumption on the task distribution. In this work we set out to study how
world models deal with changing state spaces and transition distributions. We
assume that the state spaces of all pairs of tasks $\mathcal{T}_{i}$ and
$\mathcal{T}_{j}$ are disjoint:
$\forall(\mathcal{T}_{i},\mathcal{T}_{j}),\,\mathcal{S}_{i}\cap\mathcal{S}_{j}=\varnothing$
(since we are working with $\mathrm{POMDP}$s, certain observations from
different tasks might still coincide). This setting is popular for evaluating
CRL benchmarks (Wolczyk et al., 2021; Powers et al., 2021).
## 3 Related Work
Continual Reinforcement Learning. Seminal work in CRL, EWC (Kirkpatrick et
al., 2016) enables DQN (Mnih et al., 2015) to continually learn to play
different Atari games with limited forgetting. EWC learns new Q-functions by
regularizing the parameter-weighted L2 distance between the new task’s current
weights and the previous task’s optimal weights. EWC requires additional
supervision informing it of task changes in order to update its objective,
select the task-specific Q-function head, and select a task-specific
$\epsilon$-greedy exploration schedule. Progress and Compress (Schwarz et al., 2018) applies
regularization to policy and value function feature extractors for an actor-
critic approach. Alternatively, LPG-FTW (Mendez et al., 2020) learns an actor-
critic that factorizes into task-specific parameters and shared parameters.
Both methods require task supervision and make use of task-specific parameters
and shared parameters. Task-agnostic methods like CLEAR (Rolnick et al., 2019)
do not require task information to perform CRL. CLEAR leverages experience
replay buffers (Lin, 1992) to prevent forgetting: by using an actor-critic
with V-trace importance sampling (Espeholt et al., 2018) of past experiences
from the replay buffer. Model-based RL approaches to CRL have been
demonstrated where the model weights are generated from a hypernetwork which
itself is conditioned by a task embedding (Huang et al., 2021). Recent work
demonstrates that recurrent policies for POMDPs can obtain good overall
performance on continuous control CRL benchmarks (Caccia et al., 2022).
Another task-aware solution is to expand a subspace of policies, the number of
policies scaling sublinearly with the number of tasks (Gaya et al., 2022). Of
the related works presented, only CLEAR is task-agnostic and so is the primary
baseline under consideration when comparing task-agnostic world model
approaches to CRL. A number of previous works have studied transfer in multi-
task RL settings where the goals within an environment change (Barreto et al.,
2017; Schaul et al., 2015; Barreto et al., 2019). In particular, incorporating
the task definition directly into the value function (Schaul et al., 2015) and
combining this with off-policy learning allows a CRL agent to solve multiple
tasks continually, and generalize to new goals (Mankowitz et al., 2018).
Model-based RL methods have also been assessed by looking at how quickly they
can adapt to changes in reward function (Van Seijen et al., 2020). When only
the reward function changes between tasks then experience from the replay
buffer can interfere to prevent learning a new task or cause forgetting (Wan
et al., 2022). The problem of interference can be mitigated by having separate
policy heads per task (Kessler et al., 2021). See Section A.2 for related
works on continual supervised learning.
Continual Adaptation. Instead of focusing on remembering how to perform all
past tasks, another line of research investigates quick adaptation to changes
in the environment. This can be captured by using a latent variable and off-
policy RL (Xie et al., 2020). Alternatively, one can meta-learn a model such
that it can then adapt quickly to new changes in the environment (Nagabandi et
al., 2018). All these works use small environment changes such as modification
of the reward function or variations in gravity or mass of certain agent limbs
as new tasks. The tasks which we consider in this work contain substantially
different $\mathcal{A}$, $\mathcal{S}$ from one task to the next; for example,
opening doors with keys, avoiding lava, and crossing a river require quite
different skills. Continual exploration strategies
which use curiosity (Pathak et al., 2017) can be added as an intrinsic reward
in the face of non-stationary environments in infinite horizon MDPs (Steinparz
et al., 2022). The disagreement between an ensemble of models that predict the
next state from the current state and action has been shown to be effective
for exploration (Pathak et al., 2019). Our proposed model uses Plan2Explore
which uses forward prediction disagreement and outperforms curiosity-based
methods (Sekar et al., 2020). The tasks themselves can be meta-learned using a
latent variable world model and task similarities can be exploited when
learning a new task (Fu et al., 2022).
Curriculum Learning. Another related area of research is open-ended learning
which aims to build agents that generalize to unseen environments through a
curriculum that starts off with easy tasks and then progresses to harder tasks
thereby creating agents which can generalize (Wang et al., 2019; Team et al.,
2021; Parker-Holder et al., 2022).
## 4 World Models for Continual Reinforcement Learning
We leverage world models for learning tasks sequentially without forgetting.
We use DreamerV2 (Hafner et al., 2020) which introduces a discrete stochastic
and recurrent world model that is state of the art on numerous single-GPU RL
benchmarks. This is a good choice for CRL: since the world model is trained by
reconstructing state, action, and reward trajectories from experience, we can
leverage experience replay buffers that persist across tasks to prevent
_forgetting_ in the world model. Additionally, we can train a policy in the
imagination, i.e. on trajectories generated by the world model, similar to
generative experience replay methods in supervised CL which remember previous
tasks by replaying generated data (Shin et al., 2017); using a world model is
thus also sample efficient. Furthermore, world models are _task-agnostic_:
they require no external supervision about task changes and no signal to the
agent that it is interacting with a new task. Finally, by generating rollouts
in the world model’s imagination, the uncertainty in the world model’s
predictions, more specifically the disagreement between predictions, can be
used as a task-agnostic exploration bonus. To summarize, we propose using
model-based RL with recurrent world models as a viable method for CRL, see
Algorithm 1 for an overview, with DreamerV2 as the world model. Recently,
world models have been shown to collect diamonds in Minecraft with DreamerV3
(Hafner et al., 2023), a very hard skill to achieve which requires the
composition of many other skills; the ideas introduced in this manuscript are
directly applicable to newer world model methods as well.
Algorithm 1 CRL with World Models
1: Input: Tasks (environments) $\mathcal{T}_{1:T}$, world model $M$, policy
$\pi$, experience replay buffer $\mathcal{D}$.
2: for $\tau=1$ to $T$ do
3: Train world model $M$ on $\mathcal{D}$.
4: Train $\pi$ inside world model $M$.
5: Execute $\pi$ in task $\mathcal{T}_{\tau}$ to gather episodes and append to
$\mathcal{D}$.
6: end for
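Algorithm 1 can be sketched in a few lines of Python; the callables below (`train_world_model`, `train_policy`, `rollout`) are hypothetical stand-ins for the corresponding DreamerV2 routines, not its actual API:

```python
def continual_rl(tasks, train_world_model, train_policy, rollout, buffer, budget):
    """One pass over tasks T_1..T_T (Algorithm 1, schematic).
    `buffer` is the experience replay buffer D, persisted across tasks."""
    for env in tasks:
        steps = 0
        while steps < budget:                 # N interactions per task
            train_world_model(buffer)         # line 3: fit M on D
            train_policy()                    # line 4: learn pi inside M
            episode = rollout(env)            # line 5: act in the real task
            buffer.append(episode)            # append new experience to D
            steps += len(episode)
    return buffer
```

Note that the buffer is never reset between tasks; persisting it is what lets the world model rehearse earlier tasks.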
Learning the World Model. DreamerV2 learns a recurrent (latent) state-space
world model (RSSM) which predicts the forward dynamics of the environment. At
each time step $t$ the world model receives an observation $o_{t}$ and is
required to reconstruct the observations, $o_{t}$ conditioned on the previous
actions $a_{<t}$ (in addition to reconstructing rewards and discounts). The
forward dynamics are modeled using an RNN,
$h_{t}=\textrm{GRU}(h_{t-1},z_{t},a_{t})$ (Chung et al., 2014), where $h_{t}$
is the hidden state and $z_{t}$ are the discrete probabilistic latent states (Van
Den Oord et al., 2017) which condition the observation predictions
$p(o_{t}|z_{t},h_{t})$. Trajectories are sampled from an experience replay
buffer and so persisting the replay buffer across different tasks should
alleviate forgetting in the world model (Rolnick et al., 2019).
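The recurrent update $h_{t}=\textrm{GRU}(h_{t-1},z_{t},a_{t})$ can be written out explicitly. The sketch below uses the standard GRU equations in NumPy, where the input $x$ plays the role of the concatenation $[z_{t},a_{t}]$; weight shapes are illustrative, not DreamerV2's actual architecture:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def gru_step(h_prev, x, Wu, Uu, Wr, Ur, Wh, Uh):
    """One GRU update h_t = GRU(h_{t-1}, x_t); in the RSSM, x_t stands for
    the concatenated stochastic latent and action [z_t, a_t]."""
    u = sigmoid(Wu @ x + Uu @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1.0 - u) * h_prev + u * h_tilde
```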
Policy Learning inside the World Model. The policy $\pi$ is learned inside the
world model by using an actor-critic (Sutton & Barto, 2018) while freezing the
weights of the RSSM world model. At each step $t$ of the dream inside the RSSM
world model a latent state $z_{t}$ is sampled, and together with the RNN hidden
state $h_{t}$ it conditions the actor $\hat{a}_{t}\sim\pi(\,\cdot\,|z_{t},h_{t})$. The
reward $\hat{r}_{t+1}$ is predicted by the world model. The policy $\pi$ is
then used to obtain new trajectories in the real environment. These
trajectories are added to the experience replay buffer. An initial observation
$o_{1}$ is used to start generating rollouts for policy learning. This
training regime ensures that the policy generalizes to previously seen
environments through the world model.
Task-agnostic Exploration. The policy learns using the imagined trajectories
from the RSSM world model and the world model’s predicted rewards are used as
a signal for the agent’s policy and critic. The policy is also used to gain
experience inside the real environment. For exploration, the policy
prioritizes regions of the state and action space where the world model
produces uncertain predictions. Hence, the uncertainty in the world model’s
trajectory prediction can be used as an additional intrinsic reward. This idea
underpins Plan2Explore (Sekar et al., 2020) which naturally fits with
DreamerV2.
The world model quantifies the uncertainty in the next latent state prediction
by using a deep ensemble; multiple neural networks with independent weights.
Deep ensembles are a surprisingly robust baseline for uncertainty
quantification (Lakshminarayanan et al., 2017) and the ensemble’s variance is
used as an intrinsic reward. The exploration neural networks in the ensemble
are trained to predict the next RSSM latent features $[z_{t+1},h_{t+1}]$. The
world model is frozen while the ensemble is trained.
The policy $\pi$ observes the reward $r=\alpha_{i}r_{i}+\alpha_{e}r_{e}$,
where $r_{e}$ is the extrinsic reward predicted by the world model and $r_{i}$ is
the intrinsic reward, i.e. the latent disagreement between the next latent state
predictions. The coefficients $\alpha_{i}$ and $\alpha_{e}$ lie in $[0,1]$.
Hence the policy $\pi$ can be trained inside the world model to seek regions
in the state-action space that the world model struggles to predict; when the
policy is deployed in the environment it will seek out these same regions to
obtain new trajectories with which to train the RSSM world model. This
exploration strategy is significant for CRL since it is not task dependent,
unlike DQN, where each task needs an $\epsilon$-greedy schedule (Kirkpatrick
et al., 2016; Kessler et al., 2021), or SAC (Haarnoja et al., 2018), which
needs an entropy regularizer per task (Wolczyk et al., 2021).
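The ensemble-disagreement bonus and the reward mixing above can be sketched as follows (a minimal NumPy illustration with names of our own choosing; the $0.9$ defaults mirror the coefficients reported in the experiments):

```python
import numpy as np

def disagreement_reward(ensemble_preds):
    """Intrinsic reward r_i for one transition: variance across the K
    ensemble predictions of the next latent features, averaged over the
    feature dimension. `ensemble_preds` has shape (K, D)."""
    return float(ensemble_preds.var(axis=0).mean())

def mixed_reward(r_e, r_i, alpha_e=0.9, alpha_i=0.9):
    """Reward observed by the policy: r = alpha_i * r_i + alpha_e * r_e."""
    return alpha_i * r_i + alpha_e * r_e
```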
### 4.1 Selective experience replay methods
Enabling Continual-Dreamer to remember how to solve all previously learned
tasks with a limited replay buffer requires us to select important
trajectories to fill the experience replay buffer and to selectively choose
trajectories to train the world model. DreamerV2 uses a first-in-first-out
(FIFO) replay buffer. It randomly samples trajectories from the replay buffer
to train the world model, and also randomly samples a trajectory from which
the world model starts dreaming so that the policy can learn inside the dream.
In such a scenario, catastrophic forgetting can occur because experience from
previous tasks is lost from the FIFO replay buffer. There is prior work on
managing experience replay buffers from supervised CL (Caccia et al., 2019)
and off-policy RL (Isele & Cosgun, 2018) which we systematically study for
the application of CRL with world models. To ensure a uniform sample of
experience from all tasks in the replay buffer, we explore the following
methods:
* •
Reservoir Sampling (rs) (Vitter, 1985; Isele & Cosgun, 2018): enables a
uniform distribution over all task experience seen so far. This is achieved by
storing new examples in the replay buffer with a decreasing probability of
$\min(n/t,1)$, where $t$ is the number of trajectories seen so far and $n$ is
the size of the replay buffer. This can be paired with any method of sampling
from the replay buffer; by default, sampling from the buffer is random.
Reservoir sampling does not check for duplicate examples when storing
experience. In practice, we found no duplicates when checking the episodes in
the replay buffer using the experimental setup in Section 5.1; this is not a
surprise since our task environments are procedurally generated (see Section 5
for further experimental details).
* •
Coverage Maximization (cm) (Isele & Cosgun, 2018): also attempts to create a
uniform distribution of experience seen so far. Experience is added to the
replay buffer by checking how close it is to trajectories already stored in
the replay buffer. Trajectories are embedded using a fixed convolutional LSTM
architecture (Shi et al., 2015) and we can calculate distances using an
$L^{2}$ distance between the LSTM hidden state with respect to $1000$ randomly
selected trajectories from the replay buffer. The median distance determines
the priority for the sample to be added to the replay buffer.
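The reservoir-sampling rule above can be sketched as follows (a minimal illustration of the $\min(n/t,1)$ acceptance rule; the helper name is ours):

```python
import random

def reservoir_add(buffer, episode, t, capacity, rng=random):
    """Store `episode` (the t-th episode seen, 1-based) with probability
    min(capacity / t, 1), evicting a uniformly random slot when the buffer
    is full; this keeps a uniform sample of all episodes seen so far."""
    if len(buffer) < capacity:
        buffer.append(episode)                     # buffer not full: always keep
    elif rng.random() < capacity / t:
        buffer[rng.randrange(capacity)] = episode  # replace a random slot
```

Each stored episode therefore survives with equal probability regardless of which task produced it, which is what yields the uniform task coverage.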
In addition to methods that populate the experience replay buffer, we can also
consider how to construct the mini-batch for world model and policy learning.
For instance, we can prioritize more informative samples to aid remembering of
old tasks and learning of new ones, i.e. stability and plasticity. We consider
$3$ approaches:
Figure 1: Performance of CRL agents on $3$ Minigrid tasks. Grey-shaded regions
indicate the environment which the agent is currently interacting with. All
learning curves are a median and inter-quartile range across $20$ seeds. On
the right, we pick a random instantiation of the Minigrid environments that
are being evaluated.
* •
Uncertainty sampling (us): we construct a mini-batch of experience where the
probability of sampling an episode corresponds to the trajectory’s
uncertainty, i.e. its intrinsic reward from Plan2Explore. Next-state
uncertainties are generated for each transition, then summed and normalized
per trajectory; the uncertainty is calculated only once, before the trajectory
is added to the replay buffer. This is similar to sampling the replay buffer
according to the size of the temporal-difference error, known as sampling via
“surprise” (Isele & Cosgun, 2018), where the temporal-difference error is
likewise only calculated once, when transitions are added to the experience
replay buffer for DQN.
* •
Reward sampling (rwd) (Isele & Cosgun, 2018): we construct a mini-batch of
experience for world model learning where the probability that an episode is
sampled, corresponds to the reward from the environment.
* •
$50$:$50$ sampling, of past and recent experience. We construct a mini-batch
for world model learning based on a $50$:$50$ ratio of uniform random sampling
from the replay buffer and sampling from a triangular distribution that favors
the most recent experience added so far to help learning more recent tasks.
This idea is similar to the on-policy off-policy ratio of learning in CLEAR
(Rolnick et al., 2019) which also aims to balance stability and plasticity.
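The score-proportional strategies (us, rwd) and the $50$:$50$ scheme can be illustrated with two small helpers (a sketch with hypothetical names; `scores` would hold per-episode intrinsic rewards for us or environment rewards for rwd):

```python
import random

def score_proportional_sample(episodes, scores, k, rng=random):
    """Mini-batch where episode i is drawn with probability proportional
    to scores[i] (uncertainty for `us`, environment reward for `rwd`).
    Sampling is with replacement."""
    return rng.choices(episodes, weights=scores, k=k)

def fifty_fifty_indices(n, k, rng=random):
    """50:50 mini-batch: half uniform over all n stored episodes, half
    from a triangular distribution favoring the most recent episodes."""
    uniform = [rng.randrange(n) for _ in range(k // 2)]
    recent = [min(n - 1, int(rng.triangular(0, n, n)))
              for _ in range(k - k // 2)]
    return uniform + recent
```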
### 4.2 Task-aware baseline
All replay buffer management techniques presented above are task-agnostic,
i.e. they operate without explicit task identification. For comparison, we
also consider a task-aware baseline: $L^{2}$ weight regularization with
respect to the weights from the previous task, a simple regularization-based
approach to CRL. In this scenario, after the first task, we add to each loss
function an additional objective that minimizes the distance between the
current model and policy weights and the optimal weights from the previous
task.
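The task-aware penalty takes this form (a minimal sketch; parameter containers are simplified to lists of arrays, and the coefficient is an assumed hyperparameter):

```python
import numpy as np

def l2_to_previous(params, prev_params, coeff=1.0):
    """L2 regularizer added to each loss after the first task: squared
    distance between the current weights and the previous task's optima."""
    return coeff * sum(float(((p - q) ** 2).sum())
                       for p, q in zip(params, prev_params))
```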
## 5 Experiments
Our results indicate that DreamerV2 and DreamerV2 + Plan2Explore obtain good
out-of-the-box performance for CRL on $3$ Minigrid tasks (Chevalier-Boisvert
et al., 2018). On harder Minihack (Samvelyan et al., 2021) tasks from the
CORA suite (Powers et al., 2021), we find that DreamerV2 and DreamerV2 +
Plan2Explore exhibit forgetting. To address forgetting we systematically study
various selective experience replay methods. The best configuration uses
reservoir sampling (Vitter, 1985) which we name Continual-Dreamer and
Continual-Dreamer + Plan2Explore. We use two primary baselines. First, Impala
which is a powerful deep RL method not designed for CRL (Espeholt et al.,
2018). Second, we consider CLEAR (Rolnick et al., 2019) which uses Impala as a
base RL algorithm and leverages experience replay buffers to prevent
forgetting and is task-agnostic.
Throughout our experiments, we use $3$ different metrics: average performance,
average forgetting, and average forward transfer (Appendix B), to assess the
effectiveness of each method (Wolczyk et al., 2021), in addition to
qualitatively inspecting the learning curves.
Figure 2: Return averaged over all tasks for various CRL agents on $4$
Minihack tasks. All learning curves are IQM from the rliable package across
$10$ seeds and $1000$ bootstrap samples (Agarwal et al., 2021).
### 5.1 Minigrid
| Method | Avg. Performance ($\uparrow$) | Avg. Forgetting ($\downarrow$) | Avg. Forward Transfer ($\uparrow$) |
|---|---|---|---|
| Impala | $0.00\pm 0.00$ | $0.00\pm 0.00$ | $0.00\pm 0.00$ |
| CLEAR | $0.03\pm 0.05$ | $0.01\pm 0.06$ | $0.03\pm 0.03$ |
| Impala$\times 10$ | $0.16\pm 0.16$ | $0.06\pm 0.13$ | - |
| CLEAR$\times 10$ | $0.64\pm 0.20$ | $0.00\pm 0.00$ | - |
| DreamerV2 | $0.72\pm 0.24$ | $-0.11\pm 0.30$ | $0.49\pm 0.83$ |
| DreamerV2 + p2e | $0.46\pm 0.10$ | $0.05\pm 0.18$ | $0.43\pm 0.22$ |
Table 1: Results on $3$ Minigrid tasks. All metrics are an average and
standard deviation over $20$ seeds. We use $0.75$M interactions for each task
and $7.5$M in methods marked with $\times 10$. $\uparrow$ indicates better
performance with higher numbers, and $\downarrow$ the opposite.
We test the out-of-the-box performance of DreamerV2 and DreamerV2 +
Plan2Explore as a CRL baseline on $3$ sequential Minigrid tasks. Minigrid is a
challenging image-based, partially observable, and sparse reward environment.
The agent, in red, gets a reward of $+1$ when it reaches the green goal
(Figure 1). The agent sees a small region of the Minigrid environment as its
observation, $o_{t}$. We use $3$ different tasks from Minigrid: DoorKey-9x9,
SimpleCrossing-SN9 and LavaCrossing-9x9. Each environment has a different
skill and so the tasks are diverse. Each method interacts with each task for
$0.75$M environment interactions, as previously proposed in (Kessler et al.,
2021).
We evaluate CRL agents on all tasks, see Figure 1. The results indicate that
DreamerV2 is able to solve difficult exploration tasks like the DoorKey-9x9.
Additionally, since DreamerV2 trains its policy inside the world model it is
more sample efficient than CLEAR which needs $\times 10$ more environment
interactions to be able to solve the easier Minigrid tasks SimpleCrossing-SN9
and LavaCrossing-9x9, Table 1. The addition of Plan2Explore enables DreamerV2
to solve these environments even more quickly, see Figure 1. DreamerV2 does
exhibit some forgetting of the DoorKey-9x9 task and this indicates that
additional mechanisms to prevent forgetting might be needed.
Figure 3: Metrics on $4$ Minihack tasks using interquartile mean with $20$
runs with different seeds and $1000$ bootstrap samples from the rliable
package (Agarwal et al., 2021).
From the metrics in Table 1 we can see that DreamerV2 has strong forward
transfer. From the learning curves for individual tasks (Figure 7) we can see
that DreamerV2 struggles with independent task learning over the course of
$1$M environment steps. In contrast, when learning continually DreamerV2 is
able to solve all tasks indicating that it transfers knowledge from previous
tasks. This is not entirely surprising since the levels look similar and so
the world model will be able to reconstruct observations of a new task more
quickly compared to reconstruction from scratch.
For DreamerV2 we use the model and hyperparameters from (Hafner et al., 2020)
with an experience replay buffer for world model learning of size $2$M. For
DreamerV2 + Plan2Explore we set the reward coefficients to
$\alpha_{i}=\alpha_{e}=0.9$, found by grid search over $\{0.1,0.5,0.9\}$ on
various single-task Minihack environments. We use the same policy for
exploration and evaluation, and learn the world model by observation
reconstruction only, rather than observation, reward, and discount
reconstruction. We explore these design decisions using the Minihack benchmark
in Section D.4. For CLEAR we also use an experience replay buffer of size
$2$M, like DreamerV2 and DreamerV2 + Plan2Explore.
### 5.2 Minihack
Figure 4: Proportion of episodes from each task in the replay buffer for
different replay buffer construction strategies at $2$M, $3$M, and $4$M
environment steps on $4$ task Minihack. Bar plots are an average of $5$ runs.
We test DreamerV2 and DreamerV2 with Plan2Explore on a set of harder Minihack
tasks (Samvelyan et al., 2021). Minihack is a set of diverse, image-based, and
sparse reward tasks based on the game of Nethack (Küttler et al., 2020) which
have larger state spaces than MiniGrid and require learning more difficult
skills, such as crossing rivers by pushing multiple rocks into the water.
This will test the task-agnostic exploration mechanism from Plan2Explore
further. We use $4$ Minihack tasks: Room-Random-15x15-v0, Room-Trap-15x15-v0,
River-Narrow-v0, and River-Monster-v0, which are a subset of the $12$ Minihack
tasks from the CORA CRL benchmark (Powers et al., 2021). Each task is seen once and
has a budget of $1$M environment interactions. We use the same experimental
setup as in Section 5.1; however, we keep the size of the experience replay
buffer fixed to $1$M for DreamerV2, its variants, and CLEAR.
We perform a comprehensive evaluation of various replay buffer management and
mini-batch selection strategies. We find that using reservoir sampling
together with DreamerV2 and DreamerV2 + Plan2Explore is the best configuration
which we name Continual-Dreamer, see detailed analysis in Section 5.2.1. The
results of the Minihack experiment, as shown in Figure 2, demonstrate that
Continual-Dreamer and Continual-Dreamer + Plan2Explore perform better than the
baselines regarding the average return over all tasks and $10$ seeds. In
particular, Continual-Dreamer and Continual-Dreamer + Plan2Explore exhibit
faster learning on new tasks. Neither of the task-agnostic continual learning
baselines, Impala and CLEAR, can learn the most difficult task,
River-Monster-v0 (Figure 9). These results suggest that world models are effective
for consolidating knowledge across environments and changing task objectives.
We also compare to a task-aware baseline: an $L^{2}$ regularization of the
world model and actor-critic about the previous task’s optimal parameters
(Figure 10). We find that this performs poorly.
#### 5.2.1 Evaluation of Different Selective Replay Methods
Figure 3 presents results for different replay strategies. If we consider
DreamerV2 and DreamerV2 + Plan2Explore we can see that there is some
forgetting of the first Room tasks, see Figure 9. Our main finding is that
reservoir sampling robustly mitigates forgetting, which gives rise to
Continual-Dreamer.
We can increase plasticity by biasing the sampling toward recent experiences.
We can see that if we add $50$:$50$ sampling of the minibatch construction
together with reservoir sampling, this causes inconsistent effects. For
Continual-Dreamer, $50$:$50$ sampling increases forgetting while the
performance remains constant, indicating better learning of harder tasks and
transfer. On the other hand, when $50$:$50$ sampling is applied to Continual-
Dreamer + Plan2Explore performance and forgetting worsen.
We tested DreamerV2’s performance in other variants, including uncertainty
sampling (us), coverage maximization (cm), and reward sampling (rwd). The
results we obtained are consistent with prior works (Isele & Cosgun, 2018). It
can be seen that us performed similarly to Impala, with low performance, low
forward transfer, and high forgetting. Using rwd sampling results in performance that
does not improve over random sampling of the minibatch. Using cm with
DreamerV2 and DreamerV2 + Plan2Explore results in less forgetting. However, it
behaves inconsistently when observing performance; performance increases when
applied to DreamerV2 and it decreases when applied to DreamerV2 +
Plan2Explore. As a baseline, we also tested a naive approach, which is to
increase the size of the replay buffer; this indeed prevents forgetting and
increases performance, see Section D.5. However, this is at the cost of making
it harder to learn new tasks: the harder exploration Minihack tasks are not
solved and forward transfer decreases as we increase the size of the replay
buffer. In Figure 4 we can see the mini-batch task composition at $2$M, $3$M
and $4$M steps for $4$ task Minihack for various replay buffer management
methods. As training progresses the FIFO will lose experience from earlier
tasks, whereas _rs_ and _cm_ are able to retain experience from earlier tasks.
_rs_ retains a more uniform distribution, whereas _cm_, which is
distance-based, retains fewer samples from intermediate tasks, e.g. Task 1 at $4$M
steps, since its embeddings look similar to those of Task $0$. We store entire episodes of
state, action, and rewards in the replay buffer. When a task is mastered,
shorter episodes in which the agent navigates directly to the goal are stored
in the replay buffer. When adding experience from a new task the episodes are
longer (episodes can reach the cut-off of $100$ transitions before the episode
is ended) and thus require more episodes of past data to be removed to make
way for them in the reservoir sampling update. Due to this phenomenon, there are
relatively fewer samples from the earlier tasks in the replay buffer. We
perform a similar analysis for the sampling methods we consider in Section
D.8.
Figure 5: Metrics on $8$ Minihack tasks using interquartile mean with $10$
runs with different seeds and $1000$ bootstrap samples. The results for
DreamerV2 + rs + $50$:$50$ with and without Plan2Explore are an interquartile
mean with $5$ runs with different seeds and $1000$ bootstrap samples.
### 5.3 Scaling to More Tasks
We scale to $8$ Minihack tasks from the CORA suite: Room-Random-15x15-v0,
Room-Monster-15x15-v0, Room-Trap-15x15-v0, Room-Ultimate-15x15-v0, River-
Narrow-v0, River-v0, River-Monster-v0 and HideNSeek-v0. For DreamerV2 and its
variants and for CLEAR we set the size of the experience replay buffer to
$2$M. We can see that using reservoir sampling with $50$:$50$ sampling
prevents forgetting and slightly improves performance for DreamerV2 and
DreamerV2 + Plan2Explore (Figure 5). Performance over CLEAR is also improved by
introducing reservoir sampling and $50$:$50$ sampling. The difficult Minihack
tasks, such as River-v0, River-Monster-v0 and HideNSeek-v0, are solved by
DreamerV2 and its variants but not by CLEAR (Figure 11). We conjecture that
warm-starting the world model by solving easier related tasks enables
DreamerV2 to improve its downstream performance. For instance, the single-task
DreamerV2 (Figure 8) does not solve River-Monster-v0 as often as the continual
agent (Figure 11). Arguably, the skills learned in Room-Monster-15x15-v0 and
River-v0 enable DreamerV2 to solve River-Monster-v0.
## 6 Limitations
Interference. Continual-Dreamer uses a replay buffer of past experience for
retaining skills. Interference from past data occurs when past samples prevent
the learning of new tasks in the face of changing reward functions or goal
locations. This has been shown in world models (Wan et al., 2022) and
off-policy RL (Kessler et al., 2021). Continual-Dreamer and the selective
experience replay methods which are explored are not intended to prevent
interference from past samples. We show interference can affect DreamerV2 on
the Minigrid FourRooms-v0 environment with changing rewards/goals for $2$
different tasks in Section D.6.
Task data imbalances. The selective experience replay methods considered for
managing experience in the replay buffer are not designed to keep an even
distribution of task data in the replay buffer in the face of imbalanced
tasks. To illustrate this, we consider a $2$ task scenario with $0.4$M
interactions of Room-Random-15x15-v0, then $2.4$M interactions of
River-Narrow-v0, and a replay buffer of size $0.4$M. We see that both _rs_ and
_cm_ are unable to retain the performance of the first Room-Random-15x15-v0
task, similarly to a FIFO buffer (Figure 6). For _rs_, experience from the
longer second task saturates the replay buffer. _cm_ is able to retain a small
number of samples from the first task, but not enough to prevent forgetting,
since it uses a distance-based criterion to populate the replay buffer. Thus
_cm_-type approaches to replay buffer management could be a fruitful method
for managing task imbalances.
Figure 6: Top, learning curves for imbalanced number of interactions with
Room-Random-15x15-v0 and River-Narrow-v0. Bottom, the number of episodes from
each task in the replay buffer for _rs_ and _cm_. The grey shaded region
indicates which timesteps the agent interacts with a task. All runs are
interquartile ranges over $20$ random seeds.
## 7 Discussion and Future Works
We have explored the use of world models as a _task-agnostic_ CRL baseline.
World models can be powerful CRL agents as they train the policy inside the
world model and can thus be sample efficient. World models are trained using
experience replay buffers, so we can prevent _forgetting_ of past tasks by
persisting the replay buffer across tasks. Importantly, the world model’s
prediction uncertainty can be used as an
additional intrinsic task-agnostic reward to help exploration and solve
difficult tasks in a task-agnostic fashion (Sekar et al., 2020). Previous CRL
exploration strategies in the literature all require an indication of when the
agent stops interacting with a previous task and starts interacting with a new
task to reset an exploration schedule. Our implementation uses DreamerV2 as
the world model (Hafner et al., 2020), and we demonstrate a selective
experience replay setting which we call Continual-Dreamer which is a powerful
CRL method on two difficult CRL benchmarks.
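The task-agnostic intrinsic reward referred to above is, in Plan2Explore, the disagreement of an ensemble of one-step latent-dynamics predictors. A minimal sketch of this idea (the function signature is illustrative, not DreamerV2’s API):

```python
import numpy as np

def disagreement_reward(ensemble, latent_state, action):
    """Intrinsic reward as ensemble disagreement (Plan2Explore-style).
    `ensemble` is a list of callables mapping (state, action) to a
    predicted next latent feature vector."""
    preds = np.stack([f(latent_state, action) for f in ensemble])  # (K, D)
    # Mean over feature dimensions of the across-ensemble variance:
    # high where the world model is uncertain, regardless of the task reward.
    return float(preds.var(axis=0).mean())
```

Because this signal depends only on the world model, it needs no notion of task boundaries, which is what makes the exploration schedule task-agnostic.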
We show empirically that world models can be a strong task-agnostic baseline for CRL problems compared to state-of-the-art task-agnostic methods (Rolnick et al., 2019): DreamerV2 with Plan2Explore outperforms CLEAR on Minigrid. Our experiments on Minihack test the limits of using world models for CRL and require us to introduce experience replay buffer management methods that aid in retaining skills while still enabling the learning of new ones. We show that reservoir sampling enables even coverage of experience in the replay buffer, mitigating forgetting; we call this configuration of DreamerV2 with reservoir sampling Continual-Dreamer. Future work will explore continuous control CRL benchmarks, such as ContinualWorld (Wolczyk et al., 2021), and other world models.
## 8 Acknowledgements
We would like to thank the anonymous reviewers for their valuable feedback. SK would like to thank the Oxford-Man Institute of Quantitative Finance for funding. This research was funded by the National Science Centre, Poland under grant agreement 2020/39/B/ST6/01511 and by Warsaw University of Technology within the Excellence Initiative: Research University (IDUB) programme. We
gratefully acknowledge Polish high-performance computing infrastructure PLGrid
(HPC Centers: ACK Cyfronet AGH, CI TASK) for providing computer facilities and
support within computational grant no. PLG/2023/016202. Piotr Milos was
supported by the Polish National Science Centre grant 2019/35/O/ST6/03464.
## References
* Agarwal et al. (2021) Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. _Advances in neural information processing systems_ , 34:29304–29320, 2021.
* Aljundi et al. (2019) Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In _Advances in Neural Information Processing Systems_ , 2019.
* Barreto et al. (2017) André Barreto, Will Dabney, Rémi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and David Silver. Successor features for transfer in reinforcement learning. _Advances in neural information processing systems_ , 30, 2017.
* Barreto et al. (2019) André Barreto, Diana Borsa, Shaobo Hou, Gheorghe Comanici, Eser Aygün, Philippe Hamel, Daniel K Toyama, Jonathan J Hunt, Shibl Mourad, David Silver, et al. The option keyboard: Combining skills in reinforcement learning. 2019.
* Benjamin et al. (2019) Ari S Benjamin, David Rolnick, and Konrad P Kording. Measuring and Regularizing Networks in Function Space. In _International Conference on Learning Representations_ , 2019.
* Buzzega et al. (2020) Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. _Advances in neural information processing systems_ , 33:15920–15930, 2020.
* Caccia et al. (2019) Lucas Caccia, Rahaf Aljundi, Eugene Belilovsky, Massimo Caccia, Laurent Charlin, and Tinne Tuytelaars. Online continual learning with maximally interfered retrieval. _Advances in Neural Information Processing (NeurIPS)_ , 2019.
* Caccia et al. (2022) Massimo Caccia, Jonas Mueller, Taesup Kim, Laurent Charlin, and Rasool Fakoor. Task-agnostic continual reinforcement learning: In praise of a simple baseline. _arXiv preprint arXiv:2205.14495_ , 2022.
* Chaudhry et al. (2019) Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip H S Torr, and Marc’Aurelio Ranzato. On Tiny Episodic Memories in Continual Learning. _arXiv preprint arXiv:1902.10486_ , 2019.
* Chevalier-Boisvert et al. (2018) Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment for openai gym. https://github.com/maximecb/gym-minigrid, 2018.
* Chung et al. (2014) Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. _arXiv preprint arXiv:1412.3555_ , 2014.
* Degrave et al. (2022) Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de Las Casas, et al. Magnetic control of tokamak plasmas through deep reinforcement learning. _Nature_ , 602(7897):414–419, 2022.
* Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In _International Conference on Machine Learning_. 2018.
* Fu et al. (2022) Haotian Fu, Shangqun Yu, Michael Littman, and George Konidaris. Model-based lifelong reinforcement learning with bayesian exploration. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), _Advances in Neural Information Processing Systems_ , 2022. URL https://openreview.net/forum?id=6I3zJn9Slsb.
* Gaya et al. (2022) Jean-Baptiste Gaya, Thang Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, and Roberta Raileanu. Building a subspace of policies for scalable continual learning. _arXiv preprint arXiv:2211.10445_ , 2022.
* Ha & Schmidhuber (2018) David Ha and Jürgen Schmidhuber. World models. _arXiv preprint arXiv:1803.10122_ , 2018.
* Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International conference on machine learning_ , pp. 1861–1870. PMLR, 2018.
* Hafner et al. (2020) Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, and Jimmy Ba. Mastering atari with discrete world models. _arXiv preprint arXiv:2010.02193_ , 2020.
* Hafner et al. (2023) Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. _arXiv preprint arXiv:2301.04104_ , 2023.
* Hassabis et al. (2017) Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. _Neuron_ , 95(2):245 – 258, 2017. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2017.06.011.
* Hausknecht & Stone (2015) Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. In _2015 aaai fall symposium series_ , 2015.
* Hsu et al. (2018) Yen-Chang Hsu, Yen-Cheng Liu, Anita Ramasamy, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. _arXiv preprint arXiv:1810.12488_ , 2018.
* Huang et al. (2021) Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, and Florian Shkurti. Continual model-based reinforcement learning with hypernetworks. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , pp. 799–805. IEEE, 2021.
* Isele & Cosgun (2018) David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 32, 2018.
* Kaelbling et al. (1998) Leslie Pack Kaelbling, Michael L Littman, and Anthony R Cassandra. Planning and acting in partially observable stochastic domains. _Artificial intelligence_ , 101(1-2):99–134, 1998.
* Kaiser et al. (2019) Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. Model-based reinforcement learning for atari. _arXiv preprint arXiv:1903.00374_ , 2019.
* Kessler et al. (2021) Samuel Kessler, Jack Parker-Holder, Philip Ball, Stefan Zohren, and Stephen J Roberts. Same state, different task: Continual reinforcement learning without interference. _arXiv preprint arXiv:2106.02940_ , 2021.
* Khetarpal et al. (2020) Khimya Khetarpal, Matthew Riemer, Irina Rish, and Doina Precup. Towards continual reinforcement learning: A review and perspectives. _arXiv preprint arXiv:2012.13490_ , 2020.
* Kirkpatrick et al. (2016) James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. _CoRR_ , abs/1612.00796, 2016.
* Küttler et al. (2020) Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack Learning Environment. In _Proceedings of the Conference on Neural Information Processing Systems (NeurIPS)_ , 2020.
* Lakshminarayanan et al. (2017) Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. _Advances in neural information processing systems_ , 30, 2017.
* Lee et al. (2020) Soochan Lee, Junsoo Ha, Dongsu Zhang, and Gunhee Kim. A neural dirichlet process mixture model for task-free continual learning. _arXiv preprint arXiv:2001.00689_ , 2020.
* Li & Hoiem (2017) Zhizhong Li and Derek Hoiem. Learning without Forgetting. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2017.
* Lin (1992) Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. _Mach. Learn._ , 8(3–4):293–321, May 1992. ISSN 0885-6125. doi: 10.1007/BF00992699.
* Lopez-Paz & Ranzato (2017) David Lopez-Paz and Marc’Aurelio Ranzato. Gradient Episodic Memory for Continual Learning. In _Advances in Neural Information Processing Systems_ , 2017.
* Mankowitz et al. (2018) Daniel J Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver, and Tom Schaul. Unicorn: Continual learning with a universal, off-policy agent. _arXiv preprint arXiv:1802.08294_ , 2018.
* Mendez et al. (2020) Jorge A Mendez, Boyu Wang, and Eric Eaton. Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting. In _Advances in Neural Information Processing Systems_ , 2020.
* Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_ , 2015. doi: 10.1038/nature14236.
* Nagabandi et al. (2018) Anusha Nagabandi, Chelsea Finn, and Sergey Levine. Deep online learning via meta-learning: Continual adaptation for model-based rl. _arXiv preprint arXiv:1812.07671_ , 2018.
* Nguyen et al. (2018) Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. In _International Conference on Learning Representations_ , 2018.
* Nguyen et al. (2021) V Nguyen, SB Orbell, Dominic T Lennon, Hyungil Moon, Florian Vigneau, Leon C Camenzind, Liuqi Yu, Dominik M Zumbühl, G Andrew D Briggs, Michael A Osborne, et al. Deep reinforcement learning for efficient measurement of quantum devices. _npj Quantum Information_ , 7(1):1–9, 2021.
* OpenAI et al. (2018) OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Józefowicz, Bob McGrew, Jakub W. Pachocki, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, and Wojciech Zaremba. Learning dexterous in-hand manipulation. _CoRR_ , abs/1808.00177, 2018.
* Parker-Holder et al. (2022) Jack Parker-Holder, Minqi Jiang, Michael Dennis, Mikayel Samvelyan, Jakob Foerster, Edward Grefenstette, and Tim Rocktäschel. Evolving curricula with regret-based environment design. _arXiv preprint arXiv:2203.01302_ , 2022.
* Pathak et al. (2017) Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In _International conference on machine learning_ , pp. 2778–2787. PMLR, 2017.
* Pathak et al. (2019) Deepak Pathak, Dhiraj Gandhi, and Abhinav Gupta. Self-supervised exploration via disagreement. In _International conference on machine learning_ , pp. 5062–5071. PMLR, 2019.
* Powers et al. (2021) Sam Powers, Eliot Xing, Eric Kolve, Roozbeh Mottaghi, and Abhinav Gupta. Cora: Benchmarks, baselines, and metrics as a platform for continual reinforcement learning agents. _arXiv preprint arXiv:2110.10067_ , 2021.
* Ring (1994) Mark Bishop Ring. _Continual learning in reinforcement environments_. PhD thesis, University of Texas at Austin, 1994.
* Rolnick et al. (2019) David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. In _Advances in Neural Information Processing Systems 32_ , pp. 350–360. 2019.
* Rusu et al. (2016) Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. _CoRR_ , abs/1606.04671, 2016.
* Samvelyan et al. (2021) Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Kuttler, Edward Grefenstette, and Tim Rocktäschel. Minihack the planet: A sandbox for open-ended reinforcement learning research. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)_ , 2021. URL https://openreview.net/forum?id=skFwlyefkWJ.
* Schaul et al. (2015) Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In _International conference on machine learning_ , pp. 1312–1320. PMLR, 2015.
* Schrittwieser et al. (2019) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy P. Lillicrap, and David Silver. Mastering atari, go, chess and shogi by planning with a learned model. _CoRR_ , abs/1911.08265, 2019.
* Schwarz et al. (2018) Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In Jennifer G. Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018_ , volume 80 of _Proceedings of Machine Learning Research_ , pp. 4535–4544. PMLR, 2018.
* Sekar et al. (2020) Ramanan Sekar, Oleh Rybkin, Kostas Daniilidis, Pieter Abbeel, Danijar Hafner, and Deepak Pathak. Planning to explore via self-supervised world models. In _International Conference on Machine Learning_ , pp. 8583–8592. PMLR, 2020.
* Shi et al. (2015) Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. _Advances in neural information processing systems_ , 28, 2015.
* Shin et al. (2017) Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual Learning with Deep Generative Replay. In _Advances in Neural Information Processing Systems_ , 2017.
* Steinparz et al. (2022) Christian Steinparz, Thomas Schmied, Fabian Paischer, Marius-Constantin Dinu, Vihang Patil, Angela Bitto-Nemling, Hamid Eghbal-zadeh, and Sepp Hochreiter. Reactive exploration to cope with non-stationarity in lifelong reinforcement learning. _arXiv preprint arXiv:2207.05742_ , 2022.
* Sutton (1991) Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. _ACM Sigart Bulletin_ , 2(4):160–163, 1991.
* Sutton & Barto (2018) Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_. MIT press, 2018.
* Team et al. (2021) Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, et al. Open-ended learning leads to generally capable agents. _arXiv preprint arXiv:2107.12808_ , 2021.
* Thrun & Mitchell (1995) Sebastian Thrun and Tom M. Mitchell. Lifelong robot learning. _Robotics and Autonomous Systems_ , 15(1):25–46, 1995. ISSN 0921-8890. doi: https://doi.org/10.1016/0921-8890(95)00004-Y. URL https://www.sciencedirect.com/science/article/pii/092188909500004Y. The Biology and Technology of Intelligent Autonomous Agents.
* Van de Ven & Tolias (2019) Gido M Van de Ven and Andreas S Tolias. Three scenarios for continual learning. _arXiv preprint arXiv:1904.07734_ , 2019.
* Van Den Oord et al. (2017) Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. _Advances in neural information processing systems_ , 30, 2017.
* Van Seijen et al. (2020) Harm Van Seijen, Hadi Nekoei, Evan Racah, and Sarath Chandar. The loca regret: a consistent metric to evaluate model-based behavior in reinforcement learning. _Advances in Neural Information Processing Systems_ , 33:6562–6572, 2020.
* Vitter (1985) Jeffrey Scott Vitter. Random sampling with a reservoir. _ACM Trans. Math. Softw._ , 11:37–57, 1985.
* Wan et al. (2022) Yi Wan, Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, and Harm H Van Seijen. Towards evaluating adaptivity of model-based reinforcement learning methods. In _International Conference on Machine Learning_ , pp. 22536–22561. PMLR, 2022.
* Wang et al. (2019) Rui Wang, Joel Lehman, Jeff Clune, and Kenneth O Stanley. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions. _arXiv preprint arXiv:1901.01753_ , 2019.
* Wolczyk et al. (2021) Maciej Wolczyk, Michal Zajac, Razvan Pascanu, Lukasz Kucinski, and Piotr Milos. Continual world: A robotic benchmark for continual reinforcement learning. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan (eds.), _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , pp. 28496–28510, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/ef8446f35513a8d6aa2308357a268a7e-Abstract.html.
* Xie et al. (2020) Annie Xie, James Harrison, and Chelsea Finn. Deep reinforcement learning amidst lifelong non-stationarity. _arXiv preprint arXiv:2006.10701_ , 2020.
* Zenke et al. (2017) Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual Learning Through Synaptic Intelligence. In _International Conference on Machine Learning_ , 2017.
## Appendix
## Appendix A Continual Supervised Learning
### A.1 Definition
Continual Learning (CL) is a setting in which a model must master a set of tasks sequentially while maintaining performance on all previously learned tasks. A further important objective is to develop scalable CL methods that transfer knowledge from previous tasks to aid the learning of new ones, known as forward transfer. Traditionally, in supervised CL, the model is sequentially shown $T$ tasks, denoted $\mathcal{T}_{\tau}$ for $\tau=1,\ldots,T$. Each task $\mathcal{T}_{\tau}$ comprises a dataset $\mathcal{D}_{\tau}=\left\\{({\bm{x}}_{i},y_{i})\right\\}_{i=1}^{N_{\tau}}$ from which a neural network is required to learn. More generally, a task is denoted by a tuple comprised of the conditional and marginal distributions, $\\{p_{\tau}(y|{\mathbf{x}}),p_{\tau}({\mathbf{x}})\\}$. After task $\tau$, the model loses access to the training dataset for $\mathcal{T}_{\tau}$; however, its performance continues to be evaluated on all tasks $\mathcal{T}_{i}$ for $i\leq\tau$. For a comprehensive review of CL scenarios see (Hsu et al., 2018; Van de Ven & Tolias, 2019).
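The protocol above can be sketched as a train-then-evaluate loop (a schematic only; `model.fit` and `evaluate` are placeholders for the learner and a per-task metric, not a real API):

```python
def continual_learning_protocol(model, tasks, evaluate):
    """Supervised CL protocol: train on each task's dataset in sequence,
    then evaluate on every task seen so far."""
    history = []
    for tau, dataset in enumerate(tasks):
        model.fit(dataset)  # access to D_tau only while on task tau
        # After training on task tau, D_tau is discarded, but performance
        # is still measured on all tasks i <= tau.
        history.append([evaluate(model, i) for i in range(tau + 1)])
    return history
```

The growing rows of `history` are exactly the quantities that the forgetting and forward-transfer metrics of Appendix B are computed from in the RL setting.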
### A.2 Related Works
Here, we briefly describe CL methods. One family, _regularization approaches_ , regularizes a NN’s weights to ensure that optimizing for a new task finds a solution that is “close” to the previous task’s (Kirkpatrick et al., 2016; Nguyen et al., 2018; Zenke et al., 2017). Working with functions can be easier than working with NN weights, so task functions can instead be regularized to keep the learned mappings “close” across tasks (Li & Hoiem, 2017; Benjamin et al., 2019; Buzzega et al., 2020). By contrast, _expansion approaches_ add new NN components to enable learning new tasks while preserving components for specific tasks (Rusu et al., 2016; Lee et al., 2020). _Memory approaches_ replay data from previous tasks when learning the current task, either with a generative model (Shin et al., 2017) or with stored samples from previous tasks (_memories_) (Lopez-Paz & Ranzato, 2017; Aljundi et al., 2019; Chaudhry et al., 2019).
## Appendix B Continual Reinforcement Learning Metrics
We describe in detail how to calculate the continual reinforcement learning
metrics used extensively throughout this manuscript.
### B.1 Average Performance
This measures how well a CRL method performs on all tasks at the end of the task sequence. The task performance satisfies $p_{\tau}(t)\in[-1,1]$ for all $\tau$, since we have a reward of $+1$ for completing the task and $-1$ for being killed by a monster or falling into lava. Each task is seen for $N$ environment steps, there are $T$ tasks, and the $\tau$-th task is seen over the interval of steps $[(\tau-1)\times N,\tau\times N]$. The average final performance metric for our continual learning agent is defined as:
$\displaystyle p(t_{f})=\frac{1}{T}\sum_{\tau=1}^{T}p_{\tau}(t_{f}),$ (B.1)
where $t_{f}=N\times T$ is the final timestep.
### B.2 Forgetting
The average forgetting is the difference between the performance just after interacting with a task and the performance at the end of the final task. The average forgetting across all tasks is defined as:
$\displaystyle F=\frac{1}{T}\sum_{\tau=1}^{T}F_{\tau}\quad\textrm{where}\quad
F_{\tau}=p_{\tau}(\tau\times N)-p_{\tau}(t_{f}).$ (B.2)
The forgetting of the final $T$-th task is $F_{T}=0$. If a CRL agent has better performance at the end of the task sequence than just after the $\tau$-th task at time-step $\tau\times N$, then $F_{\tau}<0$. Note that both the average performance and the forgetting metrics are functions of $p_{\tau}(t_{f})$, so we expect anti-correlation between these two metrics.
### B.3 Forward Transfer
The forward transfer is the difference in task performance during continual learning compared to the single-task performance. The forward transfer is defined as:
$\displaystyle FT$
$\displaystyle=\frac{1}{T}\sum_{\tau=1}^{T}FT_{\tau}\quad\textrm{where}\quad
FT_{\tau}=\frac{\textrm{AUC}_{\tau}-\textrm{AUC}_{\textrm{ref}_{\tau}}}{1-\textrm{AUC}_{\tau}},$
(B.3)
where AUC denotes the area under the curve and is defined as:
$\displaystyle\textrm{AUC}_{\tau}$ $\displaystyle=\frac{1}{N}\int^{\tau\times
N}_{(\tau-1)\times
N}p_{\tau}(t)dt\quad\textrm{and}\quad\textrm{AUC}_{\textrm{ref}_{\tau}}=\frac{1}{N}\int_{0}^{N}p_{\textrm{ref}_{\tau}}(t)dt.$
(B.4)
$FT_{\tau}>0$ means that the CRL agent achieves better performance on task $\tau$ during continual learning than in isolation. This metric thus measures how well a CRL agent transfers knowledge from previous tasks when learning a new task.
## Appendix C Single Task experiments
To assess the forward transfer of DreamerV2 for CRL we need the single-task performance of each task as a reference (Equation B.3). Single-task learning curves for Minigrid are shown in Figure 7, and single-task learning curves for all Minihack tasks are shown in Figure 8.
## Appendix D Further Experiments
We introduce further experiments which are referenced in the main paper. In
Section D.1 we show learning curves for each individual task for $4$ task
Minihack. In Section D.2 we show the results of the regularization based task-
aware world model baseline. In Section D.3 we show learning curves for each
individual task from $8$ task Minihack experimental setup. In Section D.4 we
explore various design choices required for DreamerV2 with Plan2Explore to get
the best performance for CRL. In Section D.5 we explore how increasing the experience replay buffer size affects performance on the Minihack CRL benchmark. These experiments are on $8$ Minihack tasks. The $8$ tasks
are a subset of those introduced in the CORA Minihack suite (Powers et al.,
2021) and are a superset of the $4$ tasks in the main paper. The $8$ tasks, in
order, are: Room-Random-15x15-v0, Room-Monster-15x15-v0, Room-Trap-15x15-v0,
Room-Ultimate-15x15-v0, River-Narrow-v0, River-v0, River-Monster-v0, and
HideNSeek-v0.
Figure 7: Single-task performance on individual tasks from the Minigrid CRL
benchmark. All curves are a median and inter-quartile range over $20$ seeds.
Figure 8: Single task performance on individual tasks from the Minihack CRL
benchmark. All curves are a median and inter-quartile range over $10$ seeds.
### D.1 $4$ task Minihack
Figure 9 shows the success rates of the baseline methods and DreamerV2 variants for each of the four tasks in the Minihack experiment of Section 5.2. Continual-Dreamer successfully balances retaining knowledge from previous tasks and learning new ones. In contrast, the CLEAR baseline learns only the first two tasks, with a significant delay, while Impala struggles on all tasks. DreamerV2 and DreamerV2 + Plan2Explore perform poorly on the last task and exhibit more forgetting than Continual-Dreamer.
Figure 9: Detailed per-task comparison of the baselines and DreamerV2 variants presented in Figure 2, with average return. All curves are a median and inter-quartile range over $20$ seeds.
### D.2 Task-aware world-model baseline
Differences between the task-agnostic and task-aware variants of DreamerV2 are shown in Figure 10. The task-aware variant, based on $L^{2}$ regularization, underperforms in comparison to the task-agnostic methods. A plausible explanation is that $L^{2}$ regularization makes the method too rigid to efficiently learn the last two tasks because of the excessive influence of the first tasks. We optimize the $L^{2}$ scaling over the set $\\{10^{-4},10^{-3},10^{-2},10^{-1},1,10,100\\}$.
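The task-aware penalty anchors the world-model weights to a snapshot taken at the previous task boundary. Schematically (a sketch of the regularizer only, not our exact training code; names are ours):

```python
import numpy as np

def l2_anchor_loss(weights, anchor_weights, scale):
    """Task-aware L2 regularizer: penalize deviation of the current
    world-model weights from a snapshot stored at the last task switch.
    `scale` is the coefficient swept over {1e-4, ..., 100} above."""
    return scale * sum(
        float(np.sum((w - a) ** 2)) for w, a in zip(weights, anchor_weights)
    )
```

A large `scale` keeps early-task knowledge but makes later tasks hard to fit, which matches the rigidity observed in Figure 10.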
Figure 10: Comparison of the average return over all tasks between task-aware and task-agnostic approaches based on DreamerV2, on 4 Minihack tasks. All curves are IQMs from the rliable package across 10 seeds and 1000 bootstrap samples.
### D.3 Scaling to $8$ tasks
The results for 8 Minihack tasks are shown in Figure 11. DreamerV2 variants display strong knowledge retention and effective learning on almost every task compared to the baseline methods. The CLEAR method struggles with the last 4 tasks, whereas Impala’s performance is poor on every task. DreamerV2 and its variants display forgetting of the initial tasks, on which CLEAR retains the highest performance. However, CLEAR, in contrast to the DreamerV2 variants, struggles to learn novel tasks.
Figure 11: Learning curves for $8$ Minihack tasks for DreamerV2 and its variants, and the Impala and CLEAR baselines. All curves are a median and interquartile range of $10$ seeds.
### D.4 DreamerV2 Ablation Experiments
We explore various design choices which come from the implementations of
DreamerV2 (Hafner et al., 2020) and Plan2Explore (Sekar et al., 2020).
1. The use of Plan2Explore as an intrinsic reward.
2. World-model learning by reconstructing only the observations $\hat{o}_{t}$, rather than the observations, rewards, and discounts altogether.
3. The use of the exploration policy to evaluate the performance on all current and past tasks, rather than having separate exploration and evaluation policies.
The results are shown in Table 2. We picked the model in the final line of Table 2 to report the results in the main paper, as it produces good results on Minihack with a relatively small standard deviation.
Plan2Explore | $\hat{o}$ reconstruction only | $\pi_{exp}=\pi_{eval}$ | Avg. Performance ($\uparrow$) | Avg. Forgetting ($\downarrow$) | Avg. Forward Transfer ($\uparrow$)
---|---|---|---|---|---
- | - | - | $0.09\pm 0.07$ | $0.37\pm 0.07$ | $0.56\pm 0.86$
✔ | - | - | $0.28\pm 0.13$ | $0.13\pm 0.08$ | $0.11\pm 0.15$
✔ | ✔ | - | $0.39\pm 0.13$ | $0.19\pm 0.16$ | $0.87\pm 0.95$
✔ | ✔ | ✔ | $0.38\pm 0.03$ | $0.22\pm 0.05$ | $0.76\pm 0.25$
Table 2: CRL metrics for different design decisions on DreamerV2 for the
Minihack CRL benchmark of $8$ tasks. All metrics are an average and standard
deviation over $5$ seeds. $\uparrow$ indicates better performance with higher
numbers, and $\downarrow$ the opposite.
### D.5 Stability versus Plasticity: Increasing the Size of the Replay Buffer
By increasing the replay buffer size for world-model learning for DreamerV2 + Plan2Explore, we see that average performance increases and forgetting is reduced. However, the forward transfer simultaneously decreases, Figure 12. Additionally, by inspecting the learning curves we notice that the harder exploration tasks are not learned as the replay buffer size increases. This is an instance of the stability-plasticity trade-off in continual learning: the larger buffer size enables better remembering but simultaneously prevents quickly learning the new tasks.
Figure 12: CRL metrics for DreamerV2 + Plan2Explore on the Minihack benchmark of $8$ tasks versus the experience replay buffer size of the world model. All metrics are an interquartile mean (IQM) over $5$ seeds with $1000$ bootstrap samples from the rliable package.
### D.6 Interference
We observe interference explicitly using DreamerV2 with Plan2Explore on the Minigrid FourRooms-v0 environment, where we change the reward function or goal location from one task to another, Figure 13. We train for $1$M steps on the first task, then $1$M steps on the second task, where the goal location has changed with all else the same. From the learning curves for each individual run, we can see that only one task is ever solved, with simultaneously poor performance on the other task (with the exception of one seed, in blue, which switches between solving and not solving each task).
Figure 13: Top, $2$ FourRooms-v0 environments with fixed agent start location, agent start orientation, and obstacles. Only the goal or reward function changes from one task to the next. Bottom, $4$ separate success-rate learning curves for different random seeds in different colours for DreamerV2 + p2e.
### D.7 Decreasing the Replay Buffer Size
We decrease the size of the replay buffer for DreamerV2 and its variants to see how well they manage environment reconstruction in the face of continual learning with a shrinking replay buffer. We consider the $3$ task Minigrid continual learning problem and decrease the replay buffer size from the $2\times 10^{6}$ transitions used in Section 5.1 to $\\{10^{4},10^{5},10^{6}\\}$ transitions.
From the results in Figure 14, we can see that DreamerV2 and its variants
under-perform with small replay buffers of size $10^{4}$ and $10^{5}$.
DreamerV2 with Plan2Explore is unable to learn with such small replay buffers.
DreamerV2 by itself is better at learning under small replay buffers.
DreamerV2 with Plan2Explore learns to solve the difficult DoorKey-9x9 problem
only when it has a replay buffer size of $10^{6}$. We can also see that reservoir sampling helps against catastrophic forgetting for both DreamerV2 and DreamerV2 with Plan2Explore with replay buffer sizes of $10^{5}$ and $10^{6}$.
Figure 14: Learning curves for continual learning on $3$ different Minigrid
tasks with $1$M environment interactions per task before changing task. The
replay buffer is decreased from the experiments in Section 5.1 to $10^{4}$
transitions in the top row, $10^{5}$ in the middle row, and $10^{6}$ in the
bottom row. All runs are medians and interquartile ranges over $10$ different
runs with different seeds.
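The role reservoir sampling plays under small buffers can be sketched in a few lines of Python. This is a minimal illustration of the generic reservoir-sampling technique, not the code used in these experiments; the class name and the use of integers as stand-ins for episodes are our own assumptions.

```python
import random

class ReservoirBuffer:
    """Minimal reservoir-sampling episode buffer (illustrative sketch).

    Unlike a FIFO buffer, every episode ever offered has the same
    probability capacity/seen of being retained, so experience from
    early tasks can survive long after the task has changed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.episodes = []
        self.seen = 0  # total number of episodes offered so far

    def add(self, episode):
        self.seen += 1
        if len(self.episodes) < self.capacity:
            self.episodes.append(episode)
        else:
            # Replace a uniformly chosen slot with probability capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.episodes[j] = episode

    def sample(self, batch_size):
        return random.sample(self.episodes, min(batch_size, len(self.episodes)))

random.seed(0)
buf = ReservoirBuffer(capacity=100)
for i in range(1000):  # e.g. episodes arriving across sequential tasks
    buf.add(i)
```

Because retention probability is uniform over everything ever seen, a small buffer still holds episodes from early tasks, which is exactly the property that mitigates catastrophic forgetting here.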
### D.8 Analysis of Replay Buffer Sampling Methods
We analyze the workings of the different minibatch construction sampling
methods: random sampling, _us_, _rwd_, and _50:50_ sampling. For $50:50$
sampling we also employ reservoir sampling (since in our experiments in
Section 5 we pair it with _rs_ to add plasticity); all other sampling methods
use a FIFO replay buffer. We plot histograms of the episode indexes sampled
for minibatch construction for world-model learning and for initiating the
Actor-Critic learning. The sampled replay-buffer episode indexes are
normalized to lie in the range $[0,4]$, so that each index indicates which
task the episode corresponds to. All episode indexes sampled while the agent
interacted with a particular task are thus visualized in a distinct histogram
(columns) for the various sampling methods (rows); see Figure 15.
Figure 15 shows that sampling from the replay buffer is similar across all
sampling methods for the first task. The distributions for random sampling
are not uniform because only episodes of length $>50$ are stored in the
replay buffer, so certain indexes are never sampled and the histograms in the
first row are not flat. The distributions for _rwd_ and _us_ are similar to
random sampling for the first task, which suggests that the uncertainty from
Plan2Explore is fairly uniform on the first task, and perhaps even larger for
the earlier episodes in the replay buffer (row 4, column 1 of Figure 15). As
continual learning progresses, sampling from the replay buffer becomes more
uneven compared to random sampling on the later tasks for _rwd_ and _us_. In
particular, for time-steps $3$M to $4$M we can see the detrimental effect of
_rwd_: only previous experience with high rewards is sampled while learning
the final task, causing a lack of plasticity. The distributions for $50:50$
sampling are shifted further to the right than random sampling, which is to
be expected since we explicitly bias the sampling toward more recent
experience so that the world model learns recent tasks more quickly when
$50:50$ is paired with _rs_.
Figure 15: Distribution of episode indexes which are sampled to construct a
minibatch for various sampling methods (rows) while the world-model is
interacting with each task (columns).
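The $50:50$ construction described above can be sketched as follows. This is our own illustrative code, not the authors' implementation; the window size of $50$ episodes and the use of integer episode indexes are assumptions made for the example.

```python
import random

def fifty_fifty_batch(buffer, batch_size, recent_window=50):
    """Draw half the minibatch from the most recent episodes and half
    uniformly from the whole buffer, biasing world-model updates toward
    the current task while still rehearsing older experience."""
    n_recent = batch_size // 2
    recent = buffer[-recent_window:]
    batch = [random.choice(recent) for _ in range(n_recent)]
    batch += [random.choice(buffer) for _ in range(batch_size - n_recent)]
    return batch

random.seed(0)
episodes = list(range(1000))  # episode indexes, oldest first
batch = fifty_fifty_batch(episodes, batch_size=16)
```

The recent half explains the rightward shift of the $50:50$ histograms relative to random sampling, while the uniform half (drawn from the reservoir-sampled buffer) keeps older tasks represented.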
Seven Concepts Attributed to Siméon-Denis Poisson††This paper is a
contribution to the Special Issue on Differential Geometry Inspired by
Mathematical Physics in honor of Jean-Pierre Bourguignon for his 75th
birthday. The full collection is available at
https://www.emis.de/journals/SIGMA/Bourguignon.html
Yvette KOSMANN-SCHWARZBACH
Paris, France<EMAIL_ADDRESS>
Received September 21, 2022, in final form November 25, 2022; Published online
November 29, 2022
Siméon-Denis Poisson was 25 years old when he was appointed Professor of
Mathematics at the École Polytechnique in 1806. Elected to the Paris Académie
des Sciences six years later, he soon became one of its most influential
members. The origin and later developments of the many concepts in mathematics
and physics that bear his name make interesting stories, a few of which we
shall attempt to sketch in this paper.
Keywords: Poisson; École Polytechnique; Académie des Sciences; Poisson’s
equation; Poisson’s ratio; Poisson distribution; Poisson kernel; Poisson
brackets
MSC: 01A55; 01A70; 31A30; 31J05; 60G55
Pour Jean-Pierre Bourguignon
à l’occasion de son 75${}^{\,e}$ anniversaire
## 1 The mathematician and physicist Poisson (1781–1840)
Pithiviers is a small town in France, 50 miles south of Paris, renowned for a
special, tasty kind of pastry, called a “pithiviers”, and for the high-quality
honey from the neighboring countryside. But it has another claim to fame. It
was the birthplace in 1781 of Siméon-Denis Poisson, who would become the
mathematician and mathematical physicist whose name is attached to the Poisson
distribution, the Poisson brackets, Poisson geometry, Poisson algebras and
many other concepts, formulas, equations and theorems.
He was just eight years old when the French Revolution broke out. His father
was a retired soldier who held a modest administrative position. The
Revolution allowed boys from families such as his to obtain a decent
education. In 1794, the École Polytechnique, first called École centrale des
travaux publics, was created to prepare engineers with a solid scientific
background. Then, as now, admission was by a competitive examination.
Encouraged by one of his teachers in high school, and equipped with a
certificate attesting to his deep love of liberty, equality and all the
fundamental beliefs of the Republic, including a “hate for tyrants”, Poisson
sat for the entrance examination in 1798 and took first place. This was the
beginning of a very brilliant career, through many regime changes, first the
revolutionary Republic, then Napoleon’s Empire, the Restauration of the
royalty in 1815, under Louis XVIII until his death in 1824, followed by the
more autocratic Charles X, then the revolution of 1830 and the constitutional
monarchy of Louis-Philippe. Poisson did not live to see France’s later regime,
the short-lived republic of 1848, since he died in 1840 at the age of 58.
Ten years after his death, a lifesize statue of Poisson was installed in his
hometown. To this day, there is a square in the center of Pithiviers that
bears the name “Place Denis Poisson”, but the brass statue has disappeared,
like so many others in France and elsewhere, having been melted down during
the German occupation of Pithiviers in the Second World War.
When Poisson entered the École Polytechnique, the professors were the most
distinguished scientists of the time: Joseph Louis Lagrange (1736–1813) and
Gaspard Monge (1746–1818) taught mathematics, Pierre-Simon Laplace (1749–1827)
was an examiner for mathematics, Jean Baptiste Biot (1774–1862) was an
examiner for physics, and Antoine-François de Fourcroy (1755–1809) was
professor of chemistry.
Volume 4 of the Journal de l’École polytechnique, dated 1801–1802, contains
three papers written by Poisson in 1799 or early 1800, when he was still a
student. One deals with the classification of quadrics and is an “Addition” to
an article by Monge and Jean Nicolas Pierre Hachette (1769–1834) who were both
professors at the École. This 3-page note [7] must have been a remark that he
made on a publication of his teachers, and it is signed by one of them,
Hachette, together with Poisson. It is both the first and last paper ever co-
authored by Poisson.
In the same volume, we find the paper “Essay on elimination in algebraic
equations”,111Mémoire sur l’élimination dans les équations algébriques [11].
an essay on the elimination of variables in systems of algebraic equations,
which contains a new, simplified proof of the theorem of Étienne Bézout
(1730–1783) on the degree of the resultant attached to a pair of algebraic
curves, and the “Essay on the plurality of integrals in the calculus of
differences”,222Mémoire sur la pluralité des intégrales dans le calcul des
différences [10]. that he had read before the Institut National, which had
replaced the Académie des Sciences in 1795, on the 16th of Frimaire in the
year 9 of the French revolutionary calendar, i.e., December 8, 1800. In that
paper, he generalized a remark of Laplace on the solutions of first-order
difference equations. The report by two members of the Academy, Adrien-Marie
Legendre (1752–1833) and Sylvestre François Lacroix (1765–1843), is preserved
in the Archives de l’Académie des Sciences. In the conclusion of their four-
page long “Report on a paper by Citizen Poisson on the number of complete
integrals of which equations of finite differences are susceptible”,333Rapport
sur un Mémoire du Citoyen Poisson relatif au nombre d’intégrales complettes
[sic] dont les équations aux différences finies sont susceptibles. they wrote:
“It follows at a minimum that the theory established by this young geometer is
correct, and even though it may not be susceptible to useful applications in
the problems that lead to this type of equations, one must always regard the
clarification of a problem of analysis which, until the present, remained in
great obscurity as contributing to the progress of science”,444Il résulte au
moins que la théorie établie par ce jeune géomètre est exacte, et quand même
elle ne serait pas susceptible d’applications utiles dans les problèmes qui
conduisent à ce genre d’équations, on doit toujours regarder comme
contribuant au progrès de la science, l’éclaircissement d’un problème
d’analyse qui jusqu’à présent était resté dans une grande obscurité. and they
recommended the publication of Poisson’s paper. Legendre and Lacroix were not
enthusiastic but they certainly did not discourage the promising young
mathematician.
Upon finishing his studies at the École, Poisson was immediately appointed as
an assistant, and in 1802 he was called upon to “take over temporarily the
duties of Citizen Fourier”, Joseph Fourier (1768–1830) who would soon develop
the Fourier series and integrals. Four years later, at the age of 25, he was
appointed “Instituteur d’Analyse”, i.e., full professor of mathematics, to
replace Fourier for whom he had already been substituting for four years. The
archives of the École Polytechnique contain the confirmation of Poisson’s
nomination to replace Fourier, dated 11 Brumaire, year 11 (November 2, 1802),
as well as the covering letter, dated March 17, 1806, of the Emperor’s
official decree naming Poisson “Instituteur d’Analyse” at the École
Polytechnique.
In 1804, Poisson appears as a handsome young professor in a portrait by the
painter E. Marcellot, which is now in the collection of the École Polytechnique.
In 1809, Napoleon decreed the opening of a re-organized “Université
Impériale”, and Poisson was named its first professor of mechanics. A poster
announcing the opening of classes in April 1811 may still be seen in the
collection of the Bibliothèque Nationale de France and it specifies that
Poisson’s lectures at the Sorbonne, to use the old name that has survived all
reforms, student revolts and re-organizations to this day, would be delivered
on Mondays and Fridays. (The other professors were Lacroix, for differential
and integral calculus, Louis-Benjamin Francœur (1773–1849) for advanced
algebra, Hachette for descriptive geometry, and Biot for astronomy.)
When there was finally a vacancy in the Academy of Sciences, in the physics
section in 1812, Poisson was elected to fill it. His role in the
Academy was soon preeminent, particularly in the election of new members. He
was charged with writing numerous reports. One, written in 1816, I find
particularly interesting because it shows Poisson, already a respected member
of the Academy, in a position to judge an unknown, young scientist, just as he
had been judged by Legendre and Lacroix, back in 1800. In it, together with
André-Marie Ampère (1775–1836), he reports on a paper by Claude Pouillet
(1790–1868) on the phenomenon of colored rings. Their conclusion on the work
submitted by this “young physicist” is very reminiscent of that of Legendre
and Lacroix on the “young geometer” Poisson, sixteen years earlier, and they
too recommend the publication of the essay. Their judgment was fair, since
Pouillet went on to teach at Polytechnique and at the Sorbonne, and to be
elected to the Academy. This report in the Archives de l’Académie des Sciences
is an autograph manuscript of Poisson.
Another aspect of Poisson’s role in the Académie was important even if it was
sometimes controversial. He often had to read papers submitted for publication
to the Mémoires de l’Académie. In several instances he rendered a great
service to the scientific community by summarizing in his own terms the main
points of these papers, before, sometimes many years before, they were revised
and published. A controversy arose when Poisson made use of such material,
with insufficient acknowledgement of his source. The best known case is his
bitter dispute with Fourier. Having had access to the manuscript on the theory
of heat that Fourier had submitted to the Academy in 1807, he published a
detailed account of the subject in 1808 in the Bulletin de la Société
Philomatique (Bulletin des Sciences), under the title “On the propagation of
heat in solid bodies, by M. Fourier”555Mémoire sur la propagation de la
chaleur dans les corps solides, par M. Fourier. and signed P, his initial.
While, from 1814 to 1825, Poisson published many essays on trigonometric
series and the theory of heat, Fourier’s text, revised, would be published in
1822 [5], much later than Poisson’s first papers. A virulent debate over
precedence broke out in 1815 between the two scientists, during which Fourier
wrote a letter to Laplace, blaming both Poisson and Biot: “[They] recognize
that they could not obtain up to now any results different from mine […] but
they say that they have another method for formulating them, and that this
method is excellent and the true one. […] But it does not extend the limits of
science to present results that one has not found in a form that one says is
different”.666[Ils] reconnaissent qu’ils n’ont pu donner jusqu’ici aucun
résultat différent des miens […] mais ils disent qu’ils ont une autre manière
de les exposer et que cette manière est excellente et la véritable. […] Mais
ce n’est pas reculer les limites des sciences que de présenter sous une forme
que l’on dit être différente des résultats que l’on n’a pas trouvés soi-même.
When Gaston Darboux (1842–1917) edited the works of Fourier in 1890, he
included Poisson’s 1808 account, explaining, “This article […] is not by
Fourier. Signed with the initial “P”, it was written by Poisson who was an
editor of the mathematical portion of the Bulletin des Sciences. Because of
the historic interest that it presents as the first publication that made
Fourier’s theory known, we believed that we should reproduce it in its
entirety”.777Cet Article […] n’est pas de Fourier. Signé de l’initiale P, il a
été rédigé par Poisson qui était un rédacteur du Bulletin des Sciences pour la
partie mathématique. À raison de l’intérêt historique qu’il présente comme
étant le premier écrit où l’on ait fait connaître la théorie de Fourier, nous
avons cru devoir le reproduire intégralement. In 1808, Fourier had derived the
heat equation and solved it in a particular case, by expanding the sought-
after solution into a trigonometric series. Poisson established the heat
equation for the case of variable conductivity in his “Essay on the
distribution of heat in solid bodies”,888Mémoire sur la distribution de la
chaleur dans les corps solides. published in the Journal de l’École
polytechnique in 1823, and included it in his 1835 book, “Mathematical Theory
of Heat”999Théorie mathématique de la chaleur. [20].
From 1820 until his death, Poisson played an important role in the
organization of education in France, as a member and then, after 1822, as
treasurer of the Royal Council of Public Education.101010Conseil royal de
l’instruction publique. As head of mathematics in France, he wielded
considerable influence and used his authority to do battle to insure the
teaching of mathematics to all students, including those primarily studying
humanities. As president of the jury of the “agrégation” for the selection of
teachers, he strove to maintain a single examination for both mathematics and
physics. He championed mathematics but he also understood that the two fields
had to be developed simultaneously at the middle and high school levels and at
the university, just as they were in his research.
## 2 Some stories
Poisson’s name appears in so many contexts in mathematics and in physics that
discovering the earliest formulation of each of his concepts or theorems in
his more than 200 publications would be a very time-consuming enterprise.
Here, I shall mention only a few of them.
### 2.1 Poisson’s equation in electromagnetism
Poisson’s first important contributions to the theory of electrostatics date
from 1811 to 1813, when he took up the problem of determining the distribution
of electrical charges in charged bodies by applying analytical techniques
such as series expansions; he began treating magnetism in an article
published later. In his “Essay on the theory of magnetism in
motion”111111Mémoire sur la théorie du magnétisme en mouvement. [16], which
appeared in the Mémoires de l’Académie des Sciences of 1823, one finds, on p.
463, “$\Delta V=0$, $=-2k\pi$, $=-4k\pi$, depending on whether point $M$ is
located outside, on the surface of, or inside the volume in
question”.121212$\Delta V=0$, $=-2k\pi$, $=-4k\pi$, selon que le point M sera
situé en dehors, à la surface ou en dedans du volume que l’on considère”. Here
$k$ denotes the constant charge density of the charged body. (It was Friedrich
Gauss (1777–1855) who would later treat the case of variable density.) The
case of the equation $\Delta V=0$ was already well known, being the famous
Laplace equation. Poisson had derived an equation satisfied by the potential
at a point interior to the charged body, but the novelty in the 1823 paper was
to treat the case of a point on the surface of the body. This case became
known as “Poisson’s equation” in electromagnetism. It was indeed discovered by
Poisson.
The importance of these articles was immediately recognized by George Green
(1793–1841), after whom are named the Green function and the Green–Riemann
theorem. In 1828, in the preface to his Essay on the Application of
Mathematical Analysis to the Theories of Electricity and Magnetism, Green
cited Poisson’s work prominently and, speaking of the papers of 1811 and 1812,
he wrote, “Little appears to have been effected in the mathematical theory of
electricity […] when M. Poisson presented to the French Institut two memoirs
of singular elegance, relative to the distribution of electricity on the
surfaces of conducting spheres, previously electrified and put in presence of
each other”.
The fame of Poisson’s mathematical theory of electrostatics is reflected in
the judgment of E.T. Whittaker (1873–1956) in his History of the Theories of
Æther and Electricity (1910). Regarding Poisson’s 1812 essay he wrote,
“Electrostatical theory was, however, suddenly advanced to quite a mature
state of development by Siméon-Denis Poisson, in a memoir which was read to
the French academy in 1812 […]. The rapidity with which in a single memoir
Poisson passed from the barest elements of the subject to such recondite
problems as those just mentioned may well excite admiration”. He concluded:
“His success is, no doubt, partly explained by the high state of development
to which analysis had been advanced by the great mathematicians of the
eighteenth century; but […] Poisson’s investigation must be accounted a
splendid memorial of his genius”. Later, he examined “Poisson’s theory of
magnetic induction”, rejecting his interpretation of the physics of the
situation but noting that the formulas derived by Poisson are valid.
And later, in 1939, the historian of science, Mario Gliozzi, in a paper
analyzing “Poisson’s contribution to electricity theory”,131313Il contributo
del Poisson all’electrologia. concluded that Poisson’s 1813 publication was a
most remarkable paper.
### 2.2 Poisson’s ratio
The story of Poisson’s ratio is that of a concept whose present, everyday
applications are as surprising as they are numerous. I heard a beautiful talk
by Tadashi Tokieda in 2012 in Paris that started with Poisson, continued with
origami, and went on to an astonishing variety of contemporary questions in
materials science, and I later read the article by the physicist George
Neville Greaves (1945–2019), “Poisson’s ratio over two centuries: challenging
hypotheses” [6], which gave both a historical account and a detailed
description of the current theory and practice of this concept, which I shall
briefly summarize here. It started with a “shape versus volume concept”, a
hint already given by Poisson in his early Traité de Mécanique [13], first
published in 1811, where he wrote on page 176 of volume 2:
> For each of the elements into which we have divided the amount of fluid
> matter, its shape will be altered during the time ${\rm d}t$, and also its
> volume will change if the fluid is compressible; but, since its mass must
> remain unaltered, it follows that, if we seek to determine its volume and
> its density at the end of time $t+{\rm d}t$, their product will necessarily
> be the same as after time $t$.141414Chacun des éléments dans lesquels nous
> avons partagé la masse fluide, changera de forme pendant l’instant ${\rm
> d}t$, et il changera même de volume, si le fluide est compressible; mais
> comme sa masse devra toujours rester la même, il s’ensuit que, si nous
> cherchons ce que deviennent son volume et sa densité à la fin du temps
> $t+{\rm d}t$, leur produit devra être le même qu’à la fin du temps $t$.
In a Note in the Annales de chimie et de physique in 1827, “On the extension
of elastic threads and plates”151515Sur l’extension des fils et des plaques
élastiques. [18], Poisson introduced the dimensionless ratio that bears his
name, and, by means of a computation based on Laplace’s theory of molecular
interaction, announced that its value is $\frac{1}{2}$, in accordance with
recent experiments on brass [in French, laiton] by baron Charles Cagniard de
La Tour (1777–1859) and Félix Savart (1791–1841), on the vibrations of plates,
whose results had been recently presented to the Academy. Poisson further
developed the theory of elasticity in several papers that he read before the
Académie des Sciences between 1823 and 1828 and published from 1828 to 1830,
introducing the ratio of the strain in the transverse direction to the strain
in the primary direction.
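In modern terms this ratio is simply minus the transverse strain over the axial strain. A tiny illustration, with hypothetical strain values of our own choosing (picked to give a metal-like value, not measurements from the 1827 experiments):

```python
def poisson_ratio(axial_strain, transverse_strain):
    # nu = -(transverse strain) / (axial strain)
    return -transverse_strain / axial_strain

# A rod stretched by 1% of its length that thins by 0.3% of its width:
nu = poisson_ratio(axial_strain=0.01, transverse_strain=-0.003)
```

The sign convention makes ordinary materials, which contract sideways when stretched, come out with positive $\nu$; the auxetic materials discussed below give negative values.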
About ten years later, as the precision of the measurements in experiments
increased, the constancy of Poisson’s ratio for all materials was proved to be
wrong, but the conflict between the hypothesis of interacting molecules and
the continuum theory of Sophie Germain (1776–1831) and Augustin Cauchy
(1789–1857) was not resolved until much later. In the 1860’s, James Clerk
Maxwell (1831–1879) defended Poisson’s viewpoint, but William Thomson (Lord
Kelvin, 1824–1907) declared that it had already been proved false by George
Stokes (1819–1903) in 1845. Until the 1970’s the variability of Poisson’s
ratio with every kind of material had only been established experimentally by
engineers “for whom macroscopic properties were sovereign” [6]. Still, it was
shown that its variability, unlike that of the other elastic moduli, is
restricted to the range $\big[-1,\frac{1}{2}\big]$. The number of publications
concerning the Poisson ratio increased exponentially after 1970, when it was
discovered that this concept helped to understand “the narrowing of arteries
during hypertension, the resilience of bones and medical implants, the
rheology of liquid crystals, the shaping of ocean floors, the oblateness of
the Earth, and planetary seismology after meteor impact” [6]. This is when
materials with negative Poisson’s ratio began to appear and found countless,
important applications that were aptly evoked in Tokieda’s entertaining
lecture. This was the result of the work of many physicists, beginning with
Roderic Lakes in 1987 and including Greaves, the author of the article that we
have attempted to summarize here, who observed that “Siméon-Denis Poisson is
particularly remembered for a ratio, a dimensionless quantity, which today has
surprisingly acquired a ubiquitous physical significance”.161616See also the
comprehensive article by G.N. Greaves et al., “Poisson’s ratio and modern
materials”, Nature Materials, vol. 10, November 2011.
### 2.3 Poisson’s spot
Poisson’s contribution to optics was not a successful treatment of the general
phenomena of light but a prediction based on his ability to compute. Following
Laplace, he held that all phenomena could be explained by molecular
interaction and was opposed to the theory of Augustin Fresnel (1788–1827),
based on a wave theory. When, in 1817, submissions for a grand prize for a
study of the diffraction of light were being examined by the Academy of
Sciences, Poisson was a member of the committee in charge of examining
Fresnel’s submission. Convinced that Fresnel was wrong, Poisson suggested an
experiment that would prove that a mathematical consequence of Fresnel’s
formulas was contrary to intuition and would disprove his theory. When the
consequence of Fresnel’s theory that Poisson had derived and considered absurd
was tested experimentally, what he had judged to be absurd was actually
observed: a bright spot appeared in the centre of the shadow of a disk lit by
a source situated on its axis. This phenomenon was then derisively called
“Poisson’s spot”. The experiment suggested by Poisson thus yielded a result
in Fresnel’s favor, and Fresnel was awarded the prize.
### 2.4 The Poisson distribution
This is probably the best known occurrence of Poisson’s name in the scientific
literature. In French it is referred to as “la loi de Poisson” (Poisson’s
law).
Poisson was not the first to deal with probabilities. Blaise Pascal
(1623–1662), Christiaan Huygens (1629–1695), John Arbuthnot (1667–1735),
Giovanni Battista Vico (1668–1744), Georges-Louis Leclerc de Buffon
(1707–1788) in his Essay on Moral Arithmetic,171717Essai d’arithmétique
morale. of 1777, had all written on this subject. Even Voltaire had
written a pamphlet, Essay on Probabilities Applied to Justice,181818Essai sur
les probabilités en fait de justice. in 1772, but it was not mathematical. The
revised and augmented fifth edition of Laplace’s Philosophical Essay on
Probabilities191919Essai philosophique sur les probabilités. of 1814 was
published in volume 7 of his works in 1825. In 1981, Bernard Bru, in his essay
on Poisson and probability theory [1], wrote that a precursor of the
probability law that bears the name of Poisson can be found as early as 1718
in the work of the huguenot mathematician working in England, Abraham de
Moivre (1667–1754), The Doctrine of Chances. But in its present form, the
Poisson distribution appeared for the first time on page 262 of Poisson’s
essay of 1829, “Essay on the proportion of new-born females and
males”,202020Mémoire sur la proportion des naissances des filles et des
garçons. which was published in the Mémoires de l’Académie des Sciences [19]
of 1830, and reappeared in his subsequent book, Recherches sur la probabilité
des jugements [21], published in 1837, where one reads on p. 206,
$P=\left(1+\omega+\frac{\omega^{2}}{1\cdot 2}+\frac{\omega^{3}}{1\cdot 2\cdot
3}+\cdots+\frac{\omega^{n}}{1\cdot 2\cdot 3\cdots n}\right){\rm e}^{-\omega}.$
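In modern language, Poisson’s $P$ is the cumulative distribution function of the Poisson law with mean $\omega$: the probability of at most $n$ events. A direct transcription of the formula above (our own sketch, with our own function name):

```python
import math

def poisson_cdf(n, omega):
    """Poisson's P: the bracketed partial sum of the series for
    exp(omega), truncated at n, multiplied by e^(-omega)."""
    partial = sum(omega**k / math.factorial(k) for k in range(n + 1))
    return partial * math.exp(-omega)

p = poisson_cdf(5, 3.0)  # probability of at most 5 events when the mean is 3
```

As $n\to\infty$ the bracketed sum tends to ${\rm e}^{\omega}$, so $P\to 1$, which is the normalization of the distribution.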
Related to the Poisson distribution are the so-called Poisson–Charlier
polynomials, whose sequence first appeared in the work of the Swedish
astronomer and statistician, Carl Vilhelm Ludwig Charlier (1862–1934). It was
the German mathematician Gustav Doetsch (1892–1977) who called them Charlier
polynomials in the title of an article published in Mathematische Annalen in
1933 where he discussed the differential-difference equation they satisfied.
The reviewer of Doetsch’s article for Zentralblatt wrote the definition of
these polynomials in terms of a sequence of functions defined by recursion
from the Poisson distribution (Poissonsche Verteilung) and noted the
orthogonality property which they satisfy with respect to the Poisson density,
whence the present terminology. In a note in the Annals of Mathematical
Statistics in 1947, Clifford Truesdell (1919–2000) derived their properties
from those of the F-functions which he introduced, and he entitled his
article, “A note on the Poisson–Charlier functions” [23]. Thus Poisson’s name
became attached to concepts invented a hundred years after his death.
### 2.5 The Poisson summation formula
The Poisson summation formula is so well known that it is often called,
simply, “the Poisson formula”. Why is the formula
$\sum_{n=-\infty}^{n=\infty}f(n)=\sum_{n=-\infty}^{n=\infty}{\mathcal{F}}f(n),$
where ${\mathcal{F}}f$ is the Fourier transform of $f$, defined by
${\mathcal{F}}f(\xi)=\int_{-\infty}^{\infty}{\rm e}^{-2\pi{\rm
i}x\xi}f(x)\,{\rm d}x$, attributed to Poisson? In the hope of finding
references that would lead us to his original papers, we opened a modern
textbook, Sphere Packings, Lattices and Groups (1999), by John Conway and
N.J.A. Sloane. Introducing the Jacobi theta functions, they write on p. 103,
“These functions are related by a labyrinth of identities […] One may regard
[them] as consequences of the general version of the Poisson summation
formula”.
How did the reference to Poisson reach Conway and Sloane at the end of the
20th century, and what do we find in Poisson? Fortunately, they refer to p.
475 of the classical treatise, A Course of Modern Analysis (1927), by
Whittaker and Watson who in turn refer to Poisson’s article of 1823 in the
Journal de l’École polytechnique, “Continuation of the essay on definite
integrals and the summation of series”212121Suite du mémoire sur les
intégrales définies et sur la sommation des séries. [17]. There, on p. 420, we
read the formula
$\pi+2\pi\sum_{n=1}^{\infty}{\rm
e}^{-4k\pi^{2}n^{2}}=\frac{\sqrt{\pi}}{2\sqrt{k}}+\frac{\sqrt{\pi}}{\sqrt{k}}\sum_{n=1}^{\infty}{\rm
e}^{-\frac{n^{2}}{4k}},$
which Poisson derived when working on a precise evaluation of the remainder in
the summation formula that Leonhard Euler had obtained in 1736 and which Colin
Maclaurin stated in his 1742 “Treatise of Fluxions”. Since the Fourier
transform of $f(x)={\rm e}^{-\alpha x^{2}}$ is
${\mathcal{F}}f(\xi)=\sqrt{\frac{\pi}{\alpha}}{\rm
e}^{-\frac{\pi^{2}\xi^{2}}{\alpha}}$, when the preceding formula is rewritten
as
$\sum_{n=-\infty}^{n=\infty}{\rm e}^{-4k\pi^{2}n^{2}}=\frac{1}{\sqrt{4\pi
k}}\sum_{n=-\infty}^{n=\infty}{\rm e}^{-\frac{n^{2}}{4k}},$
we recognize instances of the summation formula,
$\sum_{n=-\infty}^{n=\infty}f(n)=\sum_{n=-\infty}^{n=\infty}{\mathcal{F}}f(n),$
for each function $f_{k}$ defined by $f_{k}(x)={\rm e}^{-4k\pi^{2}x^{2}}$.
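Because the identity is exact and both Gaussians decay rapidly, the two sides agree to machine precision once the sums are truncated. A quick numerical check of the summation formula for the Gaussian, using the Fourier-transform convention stated above (our own sketch):

```python
import math

def f(x, alpha=1.0):
    return math.exp(-alpha * x * x)

def Ff(xi, alpha=1.0):
    # Fourier transform of exp(-alpha x^2) under the convention
    # Ff(xi) = integral of exp(-2 pi i x xi) f(x) dx.
    return math.sqrt(math.pi / alpha) * math.exp(-math.pi**2 * xi**2 / alpha)

N = 20  # both Gaussians are negligible beyond |n| = 20
lhs = sum(f(n) for n in range(-N, N + 1))
rhs = sum(Ff(n) for n in range(-N, N + 1))
```

With $\alpha=4k\pi^{2}$ this is exactly the instance Poisson derived in 1823.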
What Whittaker and Watson observed was that, setting $4k\pi=-{\rm i}\tau$,
Poisson’s formula can be rewritten as
$\theta_{3}(0|\tau)=\frac{1}{\sqrt{-{\rm
i}\tau}}\theta_{3}\left(0\,\bigg{|}\,{-}\frac{1}{\tau}\right),$
the particular case for $z=0$ of the general transformation formula for the
third theta function,
$\theta_{3}(z|\tau)=(-{\rm i}\tau)^{-\frac{1}{2}}{\rm e}^{\frac{z^{2}}{\pi{\rm
i}\tau}}\theta_{3}\left(\frac{z}{\tau}\,\bigg{|}\,{-}\frac{1}{\tau}\right).$
They also stated that a more general case is to be found in Poisson’s “Essay
on the numerical calculation of definite integrals”,222222Mémoire sur le
calcul numérique des intégrales définies. published in 1827 in the Mémoires de
l’Académie des Sciences, a year before Jacobi published “Continuation of the
notices on elliptical functions”232323Suite des notices sur les fonctions
elliptiques. in Crelle’s Journal, which was followed by his comprehensive
treatment of the identities satisfied by the theta functions in his Fundamenta
nova theoriae functionum ellipticarum, published in Kœnigsberg in 1829.
Today, the summation formula, generalized in the theory of group
representations, has applications to network theory and error-correcting
codes, applications that Poisson could not have anticipated.
### 2.6 The Poisson kernel and the Poisson integral formula
If one opens any modern book on potential theory, one will no doubt find a
definition of “the Poisson kernel” and a proof of “the Poisson integral
formula”, often simply called “the Poisson formula”, for the case of a half-
plane and for a disk in the plane, often also for the sphere in 3-space or in
higher-dimensional spaces. How did these formulas reach the modern authors and
where did they appear in Poisson’s vast mathematical production?
What became known as “the Dirichlet problem” for a domain in $n$-space, which
consists of determining the value of a harmonic function in the interior of
the domain, given the value of the function on the boundary of the domain, was
formulated for a disk in the plane by Peter Gustav Lejeune Dirichlet
(1805–1859) in an article in Crelle’s Journal in 1828.
In a paper on the later history of Fourier series, Jean-Pierre Kahane
(1926–2017) listed, among the advances made in the 19th century on the subject
of the trigonometric series, the solution by Hermann Amandus Schwarz
(1843–1921) of the Dirichlet problem for the circle by means of the Poisson
formula, in 1872. In fact, Schwarz’s article [22] in the Journal für die reine
und angewandte Mathematik of 1872, contains the formula that expresses the
value of a harmonic function in the interior of a disk as an integral
involving only the values of that function on the boundary circle,
$u(r,\phi)=\frac{1}{2\pi}\int_{0}^{2\pi}u(1,\psi)\frac{1-r^{2}}{1-2r\cos(\psi-\phi)+r^{2}}\,{\rm
d}\psi,$
but he writes: “It is easy to recognize the fundamental idea of Poisson’s
proof in the proof that is to be found in Section 5.b” (Man wird auch in dem
Beweise, der in Section 5 unter b enthalten ist, die Grundgedanken des
Poissonschen Beweises leicht wiedererkennen). Schwarz attributed this formula
to Carl Neumann (1832–1925) in his article [9] in volume 59 of the same
journal. One does find this formula on p. 365 of Neumann’s article. While
there is no mention of Poisson in Neumann’s article, Schwarz on the other hand
gave numerous references to Poisson’s articles of 1815 and 1823, and to his
book on the theory of heat, Théorie mathématique de la chaleur, of 1835, as
well as to three other papers published in 1827, 1829 and 1831. He thus gave
us a useful map to enter the maze of sometimes very long essays that Poisson
wrote, many of them with a “Suite” and an “Addition”.
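The integral formula quoted above from Schwarz can be checked numerically: applied to the boundary values of a harmonic function on the unit circle, it must reproduce the function inside the disk. A minimal sketch (our own illustration; the harmonic test function is our choice):

```python
import math

def poisson_integral(boundary, r, phi, n=2000):
    # (1/2pi) * integral over [0, 2pi] of boundary(psi) * (1 - r^2)/(1 - 2 r cos(psi - phi) + r^2) dpsi,
    # approximated by the trapezoidal rule, which converges rapidly for periodic integrands
    h = 2 * math.pi / n
    total = 0.0
    for j in range(n):
        psi = j * h
        kernel = (1 - r * r) / (1 - 2 * r * math.cos(psi - phi) + r * r)
        total += boundary(psi) * kernel
    return total * h / (2 * math.pi)

# u(r, phi) = r^3 cos(3*phi) is harmonic in the unit disk, with boundary value cos(3*psi)
r, phi = 0.6, 1.1
recovered = poisson_integral(lambda psi: math.cos(3 * psi), r, phi)
print(recovered, r**3 * math.cos(3 * phi))
```

The two printed values agree to high accuracy, which is exactly the content of the Poisson integral formula for the disk.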
Poisson’s search started as early as 1813 in his “Essay on definite
integrals” (Mémoire sur les intégrales définies) [14], published in the
Journal de l’École polytechnique. This paper was followed by another essay,
sixty pages long, in 1820, “Essay on the manner of expressing functions by
series of periodic quantities and on the use of this transformation in the
solution of various problems” (Mémoire sur la manière d’exprimer les
fonctions par des séries de quantités périodiques, et sur l’usage de cette
transformation dans la résolution de différents problèmes) [15], published in
the same journal, and a “Continuation of the essay on definite integrals and
the summation of series” (Suite du mémoire sur les intégrales définies et
sur la sommation des séries) [17] in 1823.
Poisson was trying to establish interpolation formulas à la Lagrange. In 1820,
in his essay on series of periodic functions, he proved that, given a finite
sequence of $m-1$ quantities, $y_{1},\dots,y_{m-1}$, setting
$z_{j}=\frac{2}{m}\sum_{k=1}^{m-1}y_{k}\sin\frac{kj\pi}{m}$, it follows that
$y_{n}=\sum_{j=1}^{m-1}z_{j}\sin\frac{nj\pi}{m}.$ He wrote that this formula
is a particular case of Lagrange’s formula in his “Researches on the nature of
sound and its propagation” (Recherches sur la nature, et la propagation du
son), which had appeared in the first volume of the Mémoires de l’Académie de
Turin in 1759. Then, using a limiting procedure and exchanging summation and
integration, he derived the following formula
$f(x)=\frac{2}{l}\int_{0}^{l}\sum_{k=1}^{\infty}\sin\frac{k\pi
x}{l}\sin\frac{k\pi\alpha}{l}\,f(\alpha)\,{\rm d}\alpha,$
which he also attributed to Lagrange. In this long essay, he expressed his aim
of replacing the summation of a series by the computation of an integral or
conversely, and he treated the question of the summation of series of sines
and cosines. He wrote: “It will be advantageous to bring all of them together
under the same point of view and to deduce these values by a uniform
method” (Il ne sera pas inutile de les réunir toutes sous un même point de
vue et de déduire ces valeurs d’une méthode uniforme). On p. 422, he
introduced the evaluation of a function with the help of integration by way of
what would later be called a Poisson kernel, and he gave numerous applications
for it, including to the motion of a vibrating string composed of two parts of
different material and the motion of a heavy body suspended from an elastic
wire.
What is clear from reading Poisson is that he was not trying to solve the so-
called Dirichlet problem, except maybe in the case of some applied problem
related to his more general researches, but that he returned many times to the
theory of trigonometric series and that he was in fact trying to prove the so-
called Fourier theorem, that is, he was working towards a proof of the
convergence of the “Fourier series” of a given function to the function
itself. His attempts at a rigorous treatment of this question, as well as the
later treatment by Cauchy in 1823, were not successful, but it is in the
course of such a research that Poisson introduced the function that became
known as the Poisson kernel and the integral that became known as the Poisson
integral. Both terms are justified, but their appearance in the theory of
harmonic functions and potential theory came later, with Dirichlet and
Schwarz. In conclusion, we can affirm that, on the one hand, the Poisson
kernel was indeed introduced by Poisson in his attempts to prove the
convergence of the Fourier series of general types of functions, and that, on
the other hand, Poisson did not use the corresponding integral formula in the
search for a general solution of the Laplace equation, but only in particular
cases that arose from physical problems.
### 2.7 Poisson brackets
It all began in celestial mechanics. Following Lagrange’s essays of 1808 and
1809 on the variation of the principal axes of the orbits of planets and on a
general theory of the variation of arbitrary constants in mechanics, it was
in Poisson’s famous essay of 1809, “Essay on the variation of arbitrary
constants in questions of mechanics” (Mémoire sur la variation des constantes
arbitraires dans les questions de mécanique) [12], that the Poisson brackets
appeared in their own right. Lagrange had derived their expression by
an involved procedure which amounted to – in modern terms – inverting the
matrix of components of the canonical symplectic 2-form. Poisson denoted the
coordinates of the position of the body by $\phi$, $\psi$, $\theta$ and the
components of its velocity by $s$, $u$, $v$, and he wrote:
> It is clear that the left hand side of this equation is a complete
> differential with respect to $t$; integrating, we thus obtain the very
> simple equation
>
> ${{\rm d}b\over{\rm d}s}\cdot{{\rm d}a\over{\rm d}\phi}-{{\rm d}a\over{\rm
> d}s}\cdot{{\rm d}b\over{\rm d}\phi}+{{\rm d}b\over{\rm d}u}\cdot{{\rm
> d}a\over{\rm d}\psi}-{{\rm d}a\over{\rm d}u}\cdot{{\rm d}b\over{\rm
> d}\psi}+{{\rm d}b\over{\rm d}v}\cdot{{\rm d}a\over{\rm d}\theta}-{{\rm
> d}a\over{\rm d}v}\cdot{{\rm d}b\over{\rm d}\theta}=\text{const}.$
>
> One sees that the constant on the right hand side of this equation will, in
> general, be a function of $a$ and $b$, and of the arbitrary constants
> appearing in the other integrals of the motion; […] but in order to recall
> the origin of this quantity, which represents a certain combination of the
> partial differentials of the values of $a$ and $b$, we shall make use of the
> notation $(b,a)$ to denote it; so that we shall have generally
>
> ${{\rm d}b\over{\rm d}s}\cdot{{\rm d}a\over{\rm d}\phi}-{{\rm d}a\over{\rm d}s}\cdot{{\rm d}b\over{\rm d}\phi}+{{\rm d}b\over{\rm d}u}\cdot{{\rm d}a\over{\rm d}\psi}-{{\rm d}a\over{\rm d}u}\cdot{{\rm d}b\over{\rm d}\psi}+{{\rm d}b\over{\rm d}v}\cdot{{\rm d}a\over{\rm d}\theta}-{{\rm d}a\over{\rm d}v}\cdot{{\rm d}b\over{\rm d}\theta}=(b,a).$
(There is a short but detailed discussion of Poisson’s paper in René Dugas’s
Histoire de la Mécanique [4], in a somewhat modernized notation, which renders
Poisson’s argument easy to understand and facilitates the reading of his
original text.)
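In modern canonical coordinates $(q,p)$ for one degree of freedom, the bracket Poisson denoted $(b,a)$ corresponds, up to sign convention, to $\{a,b\}=\frac{\partial a}{\partial q}\frac{\partial b}{\partial p}-\frac{\partial a}{\partial p}\frac{\partial b}{\partial q}$. A minimal numerical sketch, with illustrative test functions of our own choosing:

```python
def poisson_bracket(a, b, q, p, h=1e-5):
    # {a, b} = da/dq * db/dp - da/dp * db/dq, with partial derivatives
    # approximated by central differences (exact here for low-degree polynomials)
    da_dq = (a(q + h, p) - a(q - h, p)) / (2 * h)
    da_dp = (a(q, p + h) - a(q, p - h)) / (2 * h)
    db_dq = (b(q + h, p) - b(q - h, p)) / (2 * h)
    db_dp = (b(q, p + h) - b(q, p - h)) / (2 * h)
    return da_dq * db_dp - da_dp * db_dq

a = lambda q, p: q * p   # analytically, {q*p, q**2} = -2*q**2
b = lambda q, p: q * q
q0, p0 = 1.3, 0.7
print(poisson_bracket(a, b, q0, p0), -2 * q0**2)   # the bracket itself
print(poisson_bracket(b, a, q0, p0), 2 * q0**2)    # antisymmetry: {b, a} = -{a, b}
```

The antisymmetry visible here, together with the Jacobi identity proved by Jacobi around 1840, is what makes the bracket the prototype of a Lie algebra structure.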
It is in this early article that Poisson introduced the change of variables
from $(q_{i},\dot{q}_{i})$ to $(q_{i},p_{i})$, where the $p_{i}$’s are the
conjugate quantities, or momenta, of the $q_{i}$’s, defined by $\frac{\partial
L}{\partial\dot{q}_{i}}=p_{i}$, when $L$ is the Lagrangian function, paving
the way for the Hamiltonian form of the equations of motion, already implicit
in Lagrange, eventually derived by Cauchy in a lithographed memoir in 1831,
which was only later printed, in 1834 in Italian and in 1835 in French, and
eventually published by William Rowan Hamilton (1805–1865) in his “Second
essay on a general method in dynamics” in 1835. Poisson returned to the
subject in 1816.
Jacobi, in 1850, read a biography of Poisson (maybe Arago’s?), and he re-
discovered what became known as Poisson’s theorem, that the Poisson bracket of
two integrals of the motion is an integral of the motion. He exclaimed that
this theorem was a “truly prodigious theorem” (ce théorème vraiment
prodigieux), and he endeavored to explain what he claimed its author and later
authors had not perceived.
The ubiquity of the concepts of Poisson brackets, Poisson algebras and Poisson
manifolds in mechanics, theoretical physics and an impressive number of fields
of mathematics suggests the question: how did this happen? The story of
Poisson brackets involves, in addition to Lagrange, Cauchy and Hamilton,
mainly Jacobi, Liouville and Sophus Lie (1842–1899), and culminates in the
explanation of the role they played in the development of quantum mechanics.
It is of course too long a story to outline here. Even a short history of
Poisson geometry would imply an excursus into the history of symplectic
geometry. I shall only comment here on the fact that the name “Poisson
brackets” does not seem to have been adopted until Whittaker used it in his
History of the Theories of Æther and Electricity in 1910. They had previously
been called “expressions” generally, while one author, Joseph Graindorge
(1843–1889), wrote in 1872 that he was using “Poisson’s notation” (la notation
de Poisson).
Already in 1857 Arthur Cayley (1821–1895), in his Report of the British
Association for the Advancement of Science, had predicted the importance that
the Poisson brackets would assume later, as opposed to the Lagrange
parentheses: “The theory of Poisson gives rise to developments which seem to
have nothing corresponding to them in the theory of Lagrange”. In fact, while
the Lagrange parentheses are the components of a closed 2-form, a concept that
appeared only with Élie Cartan (1869–1951) just before 1900, the Poisson
brackets satisfy the identity that Jacobi proved ca. 1840 (published
posthumously in 1862), and the Jacobi identity has become the foundation of
the theory of Lie algebras.
## 3 Conclusion
Most, but not all, of the judgments that were passed on Poisson’s œuvre in the
19th century were extremely laudatory. The physicist and astronomer François
Arago (1786–1853), who was then Secrétaire perpétuel (president) de l’Académie
des Sciences, declared at Poisson’s funeral in 1840: “Genius does not die
thus; it survives in its works” (Le génie ne meurt pas ainsi; il se survit
dans ses œuvres), and ten years later he wrote in another eulogy in the form
of a scientific biography that Poisson had “three qualities: genius, devotion
to work and mathematical erudition” (la réunion de trois qualités: le génie,
l’amour du travail et l’érudition mathématique). But the author of a history of
the mathematical and physical sciences in 12 volumes published in the 1880’s,
Maximilien Marie, devoted several pages to a derogatory summary of Poisson’s
activity, “Poisson was very far from accomplishing the promise of his
youth” (Poisson n’a pas tenu, à beaucoup près, les promesses de sa
jeunesse), claiming that his position in any debate would always have been the
wrong one, and Pierre Costabel in [2], although less negative, wrote a harsh
appreciation.
When the first new buildings of the Sorbonne were inaugurated in 1890, fifty
years after Poisson’s death, it was Charles Hermite (1822–1901) – after whom
the “Hermite polynomials” and “Hermitian matrices” are named – who delivered a
speech [8] at the ceremony in the presence of the then French President, Sadi
Carnot. He chose to review the work and the legacy of the professors who held
the first chairs at the Faculté des Sciences at its creation in 1809. He
declared, “Poisson is one of the great geometers of this
century” (Poisson est l’un des grands géomètres de ce siècle), before
reviewing Poisson’s achievements in mathematical physics: “For Laplace and
Poisson, pure analysis is not the object, but the tool” (Pour Laplace et
Poisson, l’Analyse pure n’est point le but, mais l’instrument), and he
continues: “But, having a different object, Poisson and Fourier contributed
to the development of analysis, which they enriched with methods, new
results, and fundamental concepts” (Mais en ayant un autre but, Poisson et
Fourier contribuent au développement de l’Analyse qu’ils enrichissent de
méthodes, de résultats nouveaux, de notions fondamentales). He underlined the fact that
Poisson was a disciple of Laplace, but he also announced the following,
referring to Poisson’s work on what was already known as “the Poisson
theorem”, “But he is also related to contemporary analysis regarding a
question of the utmost importance and of particular interest from the point of
view of mathematical invention” (Mais il se rattache aussi à l’analyse de
notre temps dans une question de la plus grande importance et qui présente un
intérêt singulier au point de vue de l’invention mathématique). And he goes on
to quote a sentence in Latin written by Jacobi which I understand to mean, “We
here have an example that clearly shows that, if problems are not already
formulated in our minds, it may well be that we would not see the most
important discoveries that are set in front of our eyes”.
Was Poisson a mathematician or a physicist? If he was called “a geometer” by
Arago, Hermite and others, it is because “geometer” was the generic term for
mathematician until much later in the 19th century. His ambition was to write
a comprehensive treatise of mathematical physics. Lagrange, Laplace, Gauss,
Cauchy, all contributed both to “pure mathematics” and to the solution of
problems in physics, sometimes very practical problems, such as geodesy,
working towards the latter with the mathematical tools that they forged.
Poisson’s work on the theory of magnetism had important applications to the
navigation instruments for ships. His molecular theory, following Laplace, did
not win against the wave theory advocated by Fresnel, but in the history of
elasticity some of his insights which had been discredited have regained their
importance in the latest development of composite materials.
Poisson never traveled. After he came to Paris at the age of 17 to enter the
École Polytechnique, he left the capital only to visit Laplace at Arcueil, a
few miles from the southern edge of Paris, where he joined the other members
of the Société d’Arcueil, a small circle of renowned scientists, named after
their meeting place. But his publications abroad were numerous and his
influence in Germany, in England, and in Russia was considerable. Several of
his books were translated and extracts or summaries of his articles appeared
in the Zeitschrift für Physik, in the Annalen der Physik, and the
Philosophical Transactions. (These references are not in the autograph
list of his works – now in the Bibliothèque de l’Institut – that Poisson
himself drew up not long before he died; they had escaped the meticulous,
invaluable work of Pierre Dugac in [3].)
But Lie algebra theory had to wait for other geniuses to be developed, and
Poisson geometry had to wait for more than a century and a half to be
developed in various forms in the work of, among others, Wolfgang Pauli,
George W. Mackey, Wlodzimierz Tulczyjew, Vladimir Maslov, Robert Hermann,
Alexander Kirillov, Moshé Flato, André Lichnerowicz and Alan Weinstein.
Poisson’s role in French science was dominant while he lived. His explanations
of physical phenomena were mostly proved wrong by his contemporaries or by
later physicists, but his achievements in mathematical physics remain. Often,
he found the right equations using the wrong physical arguments. He was a
formidable “computer” and his legacy in mathematics is essential. He advanced
mathematics by trying to solve physical problems, sometimes successfully,
sometimes not, and he is to be remembered for concepts, equations, formulas
and theorems. He demonstrated how physical problems can suggest entirely
abstract mathematics, what we now call mathematical physics.
## References
* [1] Bru B., Poisson et le calcul des probabilités [reprinted from Siméon-Denis Poisson et la science de son temps, École Polytech., Palaiseau, 1981], in Siméon-Denis Poisson, Editor Y. Kosmann-Schwarzbach, Hist. Math. Sci. Phys., Ed. École Polytech., Palaiseau, 2013, 333–355.
* [2] Costabel P., S.D. Poisson, in Complete Dictionary of Scientific Biography, Vol. 15, suppl. 1, Charles Scribner’s Sons, 2008, 480–490, French transl.: Siméon-Denis Poisson, aspect de l’homme et de son œuvre [reprinted from Siméon-Denis Poisson et la science de son temps, École Polytech., Palaiseau, 1981], in Siméon-Denis Poisson, Editor Y. Kosmann-Schwarzbach, Hist. Math. Sci. Phys., Ed. École Polytech., Palaiseau, 2013, 21–39.
* [3] Dugac P., Liste des travaux de Siméon-Denis Poisson [reprinted from Siméon-Denis Poisson et la science de son temps, École Polytech., Palaiseau, 1981], in Siméon-Denis Poisson, Editor Y. Kosmann-Schwarzbach, Hist. Math. Sci. Phys., Ed. École Polytech., Palaiseau, 2013, 423–436.
* [4] Dugas R., Histoire de la mécanique [reprint of the 1950 original], Éditions Jacques Gabay, Paris, 1996, English transl.: Maddox J.R., A history of mechanics, Dover, 1988.
* [5] Fourier J., Théorie analytique de la chaleur [reprint of the 1822 original], Éditions Jacques Gabay, Paris, 1988.
* [6] Greaves G.N., Poisson’s ratio over two centuries: challenging hypotheses, Notes and Records 67 (2013), 37–58.
* [7] Hachette C., Poisson S.-D., Addition au mémoire précédent, J. Éc. Polytech. (1800), 11e cahier, an X, 170–173.
* [8] Hermite C., Discours prononcé à l’inauguration de la nouvelle Sorbonne, Bull. Administratif Instruction Publique (1889), no. 867, 17 août, 260–279.
* [9] Neumann C., Ueber die Integration der partiellen Differentialgleichung: $\frac{\partial^{2}\Phi}{\partial x^{2}}\frac{\partial^{2}\Phi}{\partial y^{2}}=0$, J. Reine Angew. Math. 59 (1861), 335–366.
* [10] Poisson S.-D., Mémoire sur la pluralité des intégrales dans le calcul des différences, J. Éc. Polytech. (1800), 11e cahier, an X, 173–181.
* [11] Poisson S.-D., Mémoire sur l’élimination dans les équations algébriques, J. Éc. Polytech. (1800), 11e cahier, an X, 199–203.
* [12] Poisson S.-D., Mémoire sur la variation des constantes arbitraires dans les questions de mécanique, J. Éc. Polytech. (1809), 15e cahier, 8, 266–344.
* [13] Poisson S.-D., Traité de mécanique, Veuve Courcier, 1811.
* [14] Poisson S.-D., Mémoire sur les intégrales définies, J. Éc. Polytech. (1813), 16e cahier, 9, 215–246.
* [15] Poisson S.-D., Mémoire sur la manière d’exprimer les fonctions par des séries de quantités périodiques, et sur l’usage de cette transformation dans la résolution de différents problèmes, J. Éc. Polytech. (1820), 18e cahier, 11, 417–489.
* [16] Poisson S.-D., Mémoire sur la théorie du magnétisme en mouvement, Mém. Acad. R. Sci. Inst. France 6 (1823), 441–570.
* [17] Poisson S.-D., Suite du mémoire sur les intégrales définies et sur la sommation des séries, J. Éc. Polytech. (1823), 19e cahier, 12, 404–509.
* [18] Poisson S.-D., Sur l’extension des fils et des plaques élastiques, Ann. de Chim. et Phys. 36 (1827), 384–387.
* [19] Poisson S.-D., Mémoire sur la proportion des naissances des filles et des garçons, Mém. Acad. Sci. Paris 9 (1830), 239–308.
* [20] Poisson S.-D., Théorie mathématique de la chaleur, Paris, Bachelier, 1835.
* [21] Poisson S.-D., Recherches sur la probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités [reprint of the 1837 original], Éditions Jacques Gabay, Paris, 2003.
* [22] Schwarz H.A., Zur Integration der partiellen Differentialgleichung ${\partial^{2}u\over\partial x^{2}}+{\partial^{2}u\over\partial y^{2}}=0$, J. Reine Angew. Math. 74 (1872), 218–253.
* [23] Truesdell C., A note on the Poisson–Charlier functions, Ann. Math. Statistics 18 (1947), 450–454.
# Radiative corrections to inverse muon decay for accelerator neutrinos
Oleksandr Tomalak (<EMAIL_ADDRESS>), Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA

Kaushik Borah, University of Kentucky, Department of Physics and Astronomy, Lexington, KY 40506, USA; Fermilab, Theoretical Physics Department, Batavia, IL 60510, USA

Richard J. Hill, University of Kentucky, Department of Physics and Astronomy, Lexington, KY 40506, USA; Fermilab, Theoretical Physics Department, Batavia, IL 60510, USA

Kevin S. McFarland, University of Rochester, Department of Physics and Astronomy, Rochester, NY 14627, USA

Daniel Ruterbories, University of Rochester, Department of Physics and Astronomy, Rochester, NY 14627, USA

(August 27, 2024)
Inverse muon decay ($\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$) is a promising tool to
constrain neutrino fluxes with energies $E_{\nu}\geq 10.9\ \mathrm{GeV}$.
Radiative corrections introduce percent-level distortions to
energy spectra of outgoing muons and depend on experimental details. In this
paper, we calculate radiative corrections to the scattering processes
$\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$ and
$\bar{\nu}_{e}e^{-}\to\bar{\nu}_{\mu}\mu^{-}$. We present the muon energy
spectrum for both channels, double-differential distributions in muon energy
and muon scattering angle and in photon energy and photon scattering angle,
and the photon energy spectrum for the dominant
$\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$ process. Our results clarify and extend the
region of applicability of previous results in the literature for the double
differential distribution in muon energy and photon energy, and in the muon
energy spectrum with a radiated photon above a threshold energy. We provide
analytic expressions for single, double and triple differential cross
sections, and discuss how radiative corrections modify experimentally
interesting observable distributions.
###### Contents
1. 1 Introduction
2. 2 Inverse muon decay at tree level
3. 3 Muon energy spectrum and integrated cross section
4. 4 Distortion of experimentally accessed distributions
5. 5 Conclusions and Outlook
6. A Virtual corrections
7. B Real radiation
1. B.1 Soft-photon Bremsstrahlung
2. B.2 Contribution of hard photons
8. C Triple-differential distribution
9. D Double-differential distribution in muon energy and muon angle
10. E Double-differential distribution in photon energy and photon angle
11. F Photon energy spectrum
## 1 Introduction
Scattering of neutrino beams from atomic electrons provides a “standard
candle” for constraining the neutrino fluxes at accelerator-based
experiments. For example, the MINERvA experiment exploits the elastic
scattering channel $\nu_{\ell}e^{-}\to\nu_{\ell}e^{-}$ [1, 2, 3, 4] for the
normalization of all (anti)neutrino-nucleus cross-section measurements.
Another pure-leptonic process, inverse muon decay
$\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$ and
$\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$, requires (anti)neutrinos to
be sufficiently energetic to produce the massive muon in the final state. The
incoming energy must be larger than $10.9\ \mathrm{GeV}$, which is slightly
above the main region of modern accelerator-produced (anti)neutrino fluxes.
Such high-energy tails are a very uncertain part of the (anti)neutrino beam
[2] because the hadroproduction cross sections for forward-going mesons in
the direction of the proton beam are poorly known. Recently, high-energy tails of the
muon component in the incoming neutrino beam were also successfully
constrained with the inverse muon decay (IMD) reaction
$\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$ by the MINERvA experiment [5]. According to
the study in Ref. [6], the future DUNE experiment [7] will have tens of
thousands of elastic neutrino-electron scattering events and more than a few
thousand inverse muon decay events. Consequently, both reactions will be
accessed at the percent level, and radiative corrections become crucial for
the correct interpretation of experimental measurements [8, 9].
A comprehensive theoretical study of radiative corrections and various final-
state distributions in elastic neutrino-electron scattering with error
analysis was recently presented in Ref. [10] and compared to all previous
calculations discussed in Refs. [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30]. Radiative corrections to the inverse muon
decay were discussed in the ultrarelativistic limit in Ref. [14] and
subsequently evaluated in Ref. [31]. Before the above-mentioned measurements
by the MINERvA Collaboration, IMD results from the CHARM-II Collaboration
[32, 33] had already confirmed the predictions of the Standard Model of
particle physics.
The most relevant experimental observable is the muon energy spectrum with or
without restrictions on the energy of the radiated real photon. The muon is
scattered primarily in the forward direction. Although experimental resolution
does not allow a precise determination of the muon scattering angle, the muon
angular distribution can potentially provide better selection criteria for IMD
events. We compare our results for the muon energy spectrum with the Bardin-
Dokuchaeva calculation for the dominant $\nu_{\mu}e^{-}\to\nu_{e}\mu^{-}$
channel [31] and provide a new result for the subdominant
$\bar{\nu}_{e}e^{-}\to\bar{\nu}_{\mu}\mu^{-}$ process. We also provide new
expressions for double-differential distributions in muon energy and muon
scattering angle, in photon energy and photon scattering angle, as well as in
muon energy and photon energy. We discuss how radiative corrections modify the
experimentally sensitive distributions from Ref. [5].
The paper is organized as follows. In Section 2, we discuss the IMD reaction
at tree level. We provide details of virtual radiative corrections in Appendix
A and describe the evaluation of real contributions in Appendix B. We combine
these calculations to obtain the resulting muon-energy spectrum at
$\mathrm{O}\left(\alpha\right)$ precision in Section 3, where we also present
the total cross section. In the following Section 4, we discuss distortions of
experimentally accessed distributions due to $\mathrm{O}\left(\alpha\right)$
radiative corrections. We finish with conclusions and outlook in Section 5. We
provide new expressions for the triple-differential distribution in muon
energy, muon scattering angle and photon energy, the double-differential
distribution in muon energy and muon scattering angle, the double-differential
distribution in photon energy and photon scattering angle, and the photon
energy spectrum in the Supplemental material and Appendixes C, D, E, and F,
respectively.
## 2 Inverse muon decay at tree level
Consider muon production on atomic electrons by a neutrino beam,
$\nu_{\mu}\left(k_{\nu_{\mu}}\right)e^{-}\left(p_{e}\right)\to{\nu}_{e}\left(k_{{\nu}_{e}}\right)\mu^{-}\left(p_{\mu}\right)$
[or
$\bar{\nu}_{e}\left(k_{\bar{\nu}_{e}}\right)e^{-}\left(p_{e}\right)\to\bar{{\nu}}_{\mu}\left(k_{\bar{\nu}_{\mu}}\right)\mu^{-}\left(p_{\mu}\right)$].
This process is governed by the low-energy effective four-fermion interaction
with scale-independent Fermi coupling constant $\mathrm{G}_{\mathrm{F}}$ [34,
35, 36, 37, 38, 39]
$\displaystyle{\cal L}_{\rm
eff}=-2\sqrt{2}\mathrm{G}_{\mathrm{F}}\bar{\nu}_{e}\gamma^{\lambda}\mathrm{P}_{\mathrm{L}}\nu_{\mu}\,\bar{\mu}\gamma_{\lambda}\mathrm{P}_{\mathrm{L}}e+\mathrm{h.c.}.$
(1)
The reaction is kinematically allowed only for sufficiently high energies of
the incoming neutrino $E_{\nu_{\mu}},E_{\bar{\nu}_{e}}\geq
E^{\mathrm{thr}}_{\nu}$, where
$\displaystyle E^{\mathrm{thr}}_{\nu}=\frac{m^{2}_{\mu}-m^{2}_{e}}{2m_{e}}.$
(2)
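Plugging in the physical masses reproduces the $10.9\ \mathrm{GeV}$ threshold quoted in the introduction (a numerical illustration of Eq. (2); the mass values, in GeV, are standard PDG inputs):

```python
# threshold neutrino energy E_thr = (m_mu^2 - m_e^2) / (2 m_e), Eq. (2)
m_mu = 0.1056584  # muon mass, GeV
m_e = 0.0005110   # electron mass, GeV

E_thr = (m_mu**2 - m_e**2) / (2 * m_e)
print(E_thr)  # about 10.9 GeV
```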
In radiation-free kinematics, the muon goes predominantly in the forward
direction with a scattering angle $\theta_{\mu}$:
$\displaystyle\cos\theta_{\mu}=\frac{2\left(E_{\nu}E_{\mu}+m_{e}E_{\mu}-m_{e}E_{\nu}\right)-m^{2}_{\mu}-m^{2}_{e}}{2E_{\nu}\sqrt{E^{2}_{\mu}-m^{2}_{\mu}}}.$
(3)
The corresponding cone size increases with the incoming (anti)neutrino
energy. For example, scattering at angles up to $0.2^{\circ}$ is
kinematically allowed only for incoming (anti)neutrino energies
$E_{\nu}\gtrsim 38\ \mathrm{GeV}$.
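The quoted cone size can be checked by scanning Eq. (3) over the kinematically allowed range of muon energies and recording the largest angle (a rough numerical sketch of ours; masses in GeV are assumed PDG values, and the scan granularity is arbitrary):

```python
import math

m_mu = 0.1056584  # GeV
m_e = 0.0005110   # GeV

def cos_theta_mu(E_nu, E_mu):
    # Eq. (3): cosine of the muon scattering angle in radiation-free kinematics
    num = 2 * (E_nu * E_mu + m_e * E_mu - m_e * E_nu) - m_mu**2 - m_e**2
    den = 2 * E_nu * math.sqrt(E_mu**2 - m_mu**2)
    return num / den

def max_angle_deg(E_nu, steps=20000):
    # scan the kinematically allowed muon energies and record the largest angle
    E_min = (m_mu**2 + m_e**2) / (2 * m_e)
    E_max = ((E_nu + m_e / 2)**2 + m_mu**2 / 4) / (E_nu + m_e / 2)
    best = 0.0
    for i in range(1, steps):
        E_mu = E_min + (E_max - E_min) * i / steps
        c = max(-1.0, min(1.0, cos_theta_mu(E_nu, E_mu)))
        best = max(best, math.degrees(math.acos(c)))
    return best

print(max_angle_deg(38.0))  # close to 0.2 degrees
print(max_angle_deg(15.0))  # a much narrower cone at lower energy
```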
The differential cross section with respect to the muon energy $E_{\mu}$, as a
function of the incoming neutrino energy $E_{\nu}$, is given by
$\displaystyle\frac{\mathrm{d}\sigma_{\mathrm{LO}}\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-},\leavevmode\nobreak\
\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}\right)}{\mathrm{d}E_{\mu}}=\frac{|\mathrm{T}_{\mathrm{LO}}|^{2}}{32\pi
m_{e}E^{2}_{\nu}}\,,$ (4)
where the squared matrix element at leading order is
$|\mathrm{T}_{\mathrm{LO}}|^{2}=64\mathrm{G}_{\mathrm{F}}^{2}p_{\mu}\cdot
k_{{\nu}_{e}}p_{e}\cdot k_{\nu_{\mu}}$ for
$\nu_{\mu}\left(k_{\nu_{\mu}}\right)e^{-}\left(p_{e}\right)\to{\nu}_{e}\left(k_{{\nu}_{e}}\right)\mu^{-}\left(p_{\mu}\right)$
and $|\mathrm{T}_{\mathrm{LO}}|^{2}=64\mathrm{G}_{\mathrm{F}}^{2}p_{\mu}\cdot
k_{\bar{\nu}_{e}}p_{e}\cdot k_{\bar{\nu}_{\mu}}$ for
$\bar{\nu}_{e}\left(k_{\bar{\nu}_{e}}\right)e^{-}\left(p_{e}\right)\to\bar{{\nu}}_{\mu}\left(k_{\bar{\nu}_{\mu}}\right)\mu^{-}\left(p_{\mu}\right)$.
Integration of this distribution over the kinematically allowed range of muon
energies,
$\displaystyle E_{\mu}^{\mathrm{min}}=\frac{m^{2}_{\mu}+m^{2}_{e}}{2m_{e}}\leq
E_{\mu}\leq\frac{\left(E_{\nu}+\frac{m_{e}}{2}\right)^{2}+\frac{m^{2}_{\mu}}{4}}{E_{\nu}+\frac{m_{e}}{2}}=E_{\mu}^{\mathrm{max}}\,,$
(5)
results in the following total cross sections [31]:
$\displaystyle\sigma_{\mathrm{LO}}\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}\right)$
$\displaystyle=\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}\left(E_{\mu}^{\mathrm{max}}-E_{\mu}^{\mathrm{min}}\right)}{\pi}\frac{E_{\nu_{\mu}}-E^{\mathrm{thr}}_{\nu}}{E_{\nu_{\mu}}},$
(6)
$\displaystyle\sigma_{\mathrm{LO}}^{\bar{\nu}}\left(\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}\right)$
$\displaystyle=\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}\left(E_{\mu}^{\mathrm{max}}-E_{\mu}^{\mathrm{min}}\right)}{\pi}\left[\frac{\left(E_{\mu}^{\mathrm{max}}\right)^{2}+E_{\mu}^{\mathrm{max}}E_{\mu}^{\mathrm{min}}+\left(E_{\mu}^{\mathrm{min}}\right)^{2}}{3E^{2}_{\bar{\nu}_{e}}}+\frac{E_{\bar{\nu}_{e}}+m_{e}}{E_{\bar{\nu}_{e}}}\frac{E_{\bar{\nu}_{e}}+E_{\mu}^{\mathrm{min}}}{E_{\bar{\nu}_{e}}}\right.$
$\displaystyle\left.-\frac{E_{\bar{\nu}_{e}}+E_{\mu}^{\mathrm{min}}-\frac{E^{\mathrm{thr}}_{\nu}}{2}}{E_{\bar{\nu}_{e}}}\frac{E_{\mu}^{\mathrm{max}}+E_{\mu}^{\mathrm{min}}}{E_{\bar{\nu}_{e}}}\right].$
(7)
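As a hedged numerical illustration of Eqs. (6) and (7) (assuming standard values for $\mathrm{G}_{\mathrm{F}}$ and the lepton masses, and a $\mathrm{GeV}^{-2}$-to-$\mathrm{cm}^{2}$ conversion factor, none of which are quoted in the text), one can evaluate both leading-order cross sections at $E_{\nu}=15\ \mathrm{GeV}$; the antineutrino channel comes out roughly half the neutrino one at this energy:

```python
import math

G_F = 1.1663787e-5        # GeV^-2, assumed PDG value
M_MU = 0.1056584          # GeV, assumed
M_E = 0.000510999         # GeV, assumed
GEV2_TO_CM2 = 3.894e-28   # assumed conversion factor, cm^2 * GeV^2

def kinematic_limits(e_nu):
    """Allowed muon-energy range of Eq. (5)."""
    e_min = (M_MU**2 + M_E**2) / (2.0 * M_E)
    e_max = ((e_nu + M_E / 2.0)**2 + M_MU**2 / 4.0) / (e_nu + M_E / 2.0)
    return e_min, e_max

def sigma_lo_nu(e_nu):
    """Eq. (6): leading-order nu_mu e -> nu_e mu cross section, GeV^-2."""
    e_thr = (M_MU**2 - M_E**2) / (2.0 * M_E)
    e_min, e_max = kinematic_limits(e_nu)
    return 2.0 * G_F**2 * M_E * (e_max - e_min) / math.pi * (e_nu - e_thr) / e_nu

def sigma_lo_nubar(e_nu):
    """Eq. (7): leading-order anti-nu_e e -> anti-nu_mu mu cross section, GeV^-2."""
    e_thr = (M_MU**2 - M_E**2) / (2.0 * M_E)
    e_min, e_max = kinematic_limits(e_nu)
    bracket = ((e_max**2 + e_max * e_min + e_min**2) / (3.0 * e_nu**2)
               + (e_nu + M_E) / e_nu * (e_nu + e_min) / e_nu
               - (e_nu + e_min - 0.5 * e_thr) / e_nu * (e_max + e_min) / e_nu)
    return 2.0 * G_F**2 * M_E * (e_max - e_min) / math.pi * bracket

s_nu = sigma_lo_nu(15.0) * GEV2_TO_CM2
s_nubar = sigma_lo_nubar(15.0) * GEV2_TO_CM2
print(f"sigma(nu)    = {s_nu:.3e} cm^2")
print(f"sigma(nubar) = {s_nubar:.3e} cm^2")
```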
We provide details of the standard calculation of real and virtual radiative
corrections to the inverse muon decay cross sections in Appendixes A and B,
respectively.
## 3 Muon energy spectrum and integrated cross section
Adding virtual and real corrections, we obtain the muon energy spectrum for inverse muon decay, $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$, including photons of arbitrarily large energy allowed by kinematics. (Note that this inclusive observable is independent of the infrared regulators $\lambda$ and $\Delta E$ from Appendixes A and B.) In the limit $E_{\nu}\gg m_{e}$ (i.e., neglecting order $m_{e}/E_{\nu}$ power corrections), our results are in agreement with the calculation of Bardin and Dokuchaeva [31]:
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}E_{\mu}}$
$\displaystyle=\frac{\mathrm{d}\sigma_{\mathrm{LO}}}{\mathrm{d}E_{\mu}}+\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}}{\pi}\frac{\alpha}{\pi}\left[\frac{1+3x}{2}\left(\mathrm{Li}_{2}\frac{1-\frac{x}{y}}{1-x}-\mathrm{Li}_{2}\frac{y-x}{1-x}-\ln\frac{y}{x}\ln\frac{y-x}{1-x}\right)\right.$
$\displaystyle\left.+\left(1-x\right)\left(\left(\ln\frac{y^{2}}{xr_{e}}-2\right)\ln\frac{y-x}{y}+\ln\frac{y}{x}\ln\left(1-y\right)-\mathrm{Li}_{2}x+\mathrm{Li}_{2}y+\mathrm{Li}_{2}\frac{x-y}{1-y}+\frac{3}{2}\left(1-x\right)\ln\left(1-x\right)\right)\right.$
$\displaystyle\left.-\frac{7x^{3}}{36}y^{-3}+\frac{x^{2}}{12}\left(1+\frac{7x}{2}\right)y^{-2}+\left(-\frac{7x}{12}-\frac{x^{2}}{2}-\frac{x^{3}}{6}\right)y^{-1}-\frac{47}{36}+\frac{25x}{8}+\frac{3x^{2}}{8}-\left(\frac{11}{12}+\frac{x}{4}\right)y+\frac{y^{2}}{24}\right.$
$\displaystyle\left.-\left(\frac{x^{2}}{2}y^{-2}+\left(\frac{x}{2}-2x^{2}\right)y^{-1}+\frac{1}{4}-\frac{3x}{4}+\frac{3x^{2}}{2}+\frac{y}{2}\right)\ln
x+\left(x^{2}y^{-2}+x\left(1-4x\right)y^{-1}+\frac{3x^{2}}{2}+y\right)\ln
y\right.$
$\displaystyle\left.+\left(\frac{x^{3}}{6}y^{-3}-\frac{x^{2}\left(1+x\right)}{4}y^{-2}+\frac{x\left(1+3x\right)}{2}y^{-1}-\frac{23}{12}+\frac{9x}{4}-\frac{3x^{2}}{2}-\frac{y}{2}\right)\ln\left(1-y\right)\right.$
$\displaystyle\left.+\left(\frac{x^{2}}{6}y^{-2}-\frac{x}{4}\left(\frac{1}{3}+x\right)y^{-1}+\frac{5}{4}\left(\frac{1}{3}+x\right)+\frac{y}{2}\right)\frac{y-x}{y}\ln\frac{y-x}{y}\right.$
$\displaystyle\left.-\left(\frac{x^{3}}{6}y^{-3}+\frac{x^{2}\left(1-x\right)}{4}y^{-2}+\left(x-\frac{x^{2}}{2}\right)y^{-1}-\frac{2}{3}\right)\ln
r_{e}\right],$ (8) $\displaystyle x$
$\displaystyle=\frac{m^{2}_{\mu}}{2m_{e}E_{\nu_{\mu}}},\qquad\qquad\qquad
y=\frac{E_{\mu}}{E_{\nu_{\mu}}},\qquad\qquad\qquad
r_{e}=\frac{m_{e}}{2E_{\nu_{\mu}}}.$ (9)
Comparing the double-differential distribution in photon energy and muon energy in Ref. [31] to our numerical evaluation, which starts from the matrix element, Eq. (28) in Appendix B, and performs numerical integration within the allowed phase space of the process, we find agreement only inside the range of photon energies
$\left(m_{e}-\left(E_{\mu}-\sqrt{E_{\mu}^{2}-m^{2}_{\mu}}\right)\right)/2\leq k_{\gamma}\leq\left(m_{e}+2E_{\nu}-\left(E_{\mu}+\sqrt{E_{\mu}^{2}-m^{2}_{\mu}}\right)\right)/2$;
in particular, the double-differential distribution in the calculation of Bardin and Dokuchaeva is not positive-definite in small end-point regions outside this range. (Our result agrees, up to power corrections in the electron mass, with the expression of Ref. [31] [their Eq. (24)] for the contribution to the muon energy spectrum from events with photons above an energy cutoff $\Delta E$ when $\Delta E\gg m_{e}$ and $\left(m_{e}+2E_{\nu}-\left(E_{\mu}+\sqrt{E_{\mu}^{2}-m^{2}_{\mu}}\right)\right)/2-\Delta E\gg m_{e}$.)
The corresponding muon energy spectrum for
$\bar{\nu}_{e}e^{-}\to\bar{\nu}_{\mu}\mu^{-}$ in the limit $E_{\nu}\gg m_{e}$
is given by
$\displaystyle\frac{\mathrm{d}\sigma^{\bar{\nu}}}{\mathrm{d}E_{\mu}}$
$\displaystyle=\frac{\mathrm{d}\sigma^{\bar{\nu}}_{\mathrm{LO}}}{\mathrm{d}E_{\mu}}+\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}}{\pi}\frac{\alpha}{\pi}\left(1-y\right)\left[\left(1+x-y\right)\left(\mathrm{Li}_{2}\frac{1-\frac{y}{x}}{1-y}+\mathrm{Li}_{2}\frac{y-x}{y}+\ln\frac{x-y}{y}\ln\frac{y^{2}}{xr_{e}}+\frac{1}{2}\ln^{2}\frac{y}{x}\right)\right.$
$\displaystyle\left.+\frac{1-2\left(1-y\right)y+x\left(1+y\right)}{2\left(1-y\right)}\left(\mathrm{Li}_{2}\frac{y-1}{y}+\mathrm{Li}_{2}\frac{x-y}{x}-\mathrm{Li}_{2}\frac{x-1}{x}+\ln\frac{y}{x}\ln\left[y\left(y-x\right)\right]-\ln
y\ln\left(1-y\right)\right)\right.$
$\displaystyle\left.+\frac{1-2\left(1-y\right)y+x\left(1+y\right)}{2\left(1-y\right)}\ln
x\ln\left(1-x\right)+\frac{x^{3}}{18}y^{-3}-\frac{x^{2}}{24}\left(7-\frac{5x}{3}\right)y^{-2}-\left(\frac{4x}{3}+\frac{23x^{2}}{24}\right)y^{-1}-\frac{31}{72}\right.$
$\displaystyle\left.+\frac{5x}{24}+\frac{49}{72}y-\left(\frac{x^{2}}{2}y^{-2}+\frac{x\left(1-x\right)}{2}y^{-1}-\frac{1}{2}-\frac{9x}{4}\right)\ln
x+\frac{3}{4}\frac{\left(1-x\right)\left(1-x-2y\right)}{1-y}\ln\left(1-x\right)\right.$
$\displaystyle\left.+\left(x^{2}y^{-2}+x\left(1-2x\right)y^{-1}-\frac{3}{4}-\frac{5}{2}x+\frac{x^{2}}{4}+\left(\frac{1}{2}+\frac{3}{2}x\right)y+y^{2}\right)\frac{\ln
y}{1-y}\right.$
$\displaystyle\left.+\left(\frac{x^{3}}{6}y^{-3}-\frac{x^{2}\left(3-x\right)}{12}y^{-2}+\frac{x\left(2+3x\right)}{4}y^{-1}-\frac{7}{6}-\frac{x}{4}+\frac{5}{3}y\right)\ln\left(1-y\right)\right.$
$\displaystyle\left.-\left(\frac{x^{3}}{6}y^{-3}-\frac{x^{2}\left(3+x\right)}{12}y^{-2}+\frac{x}{2}\left(1+2x-\frac{x^{2}}{6}\right)y^{-1}+\frac{19}{12}-\frac{x}{4}-\left(\frac{8}{3}+\frac{x}{4}\right)y+\frac{y^{2}}{3}\right)\frac{1}{1-y}\ln\frac{y-x}{y}\right.$
$\displaystyle\left.-\left(\frac{x^{3}}{6}y^{-3}+\frac{x^{2}\left(3+x\right)}{12}y^{-2}+\left(x+\frac{x^{2}}{4}\right)y^{-1}-\frac{2}{3}-x+\frac{2}{3}y\right)\ln
r_{e}\right],$ (10)
where $x$, $y$, and $r_{e}$ are obtained by the substitution $E_{\nu_{\mu}}\to E_{\bar{\nu}_{e}}$ in Eq. (9).
Figure 1: Leading-order muon spectrum and $\mathrm{O}\left(\alpha\right)$ corrections for a fixed neutrino energy $E_{\nu}=15$ GeV. The $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$ process at leading order (blue solid line) is compared to the leading-order spectrum in $\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$ (green dashed line) and to the $\mathrm{O}\left(\alpha\right)$ contributions in Eqs. (8) and (10) (red dotted and black dash-dotted lines, respectively). The $\mathrm{O}\left(\alpha\right)$ contribution is negative, i.e., it decreases the total and differential cross sections at all values of the muon energy.
At the fixed illustrative neutrino energy of $E_{\nu}=15$ GeV, Fig. 1 shows muon energy spectra for the tree-level processes $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$ and $\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$, as well as the $\mathrm{O}\left(\alpha\right)$ contribution to $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$ from Eq. (8) and to $\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$ from Eq. (10). We show the latter with the opposite sign for convenience. The radiative corrections reduce the cross section by $3$–$4\%$. Their relative size is largest for backward scattering and decreases going to forward angles.
Integrating the muon energy spectrum over the kinematically allowed range in Eq. (5), we obtain the $\mathrm{O}\left(\alpha\right)$ contribution to the unpolarized inverse muon decay cross section $\sigma$. For illustration, we present two limits of interest. The leading term in the $m_{e}/E_{\nu}$ expansion is given by
$\displaystyle\sigma$ $\displaystyle\underset{r_{e}\ll
1}{\longrightarrow}\sigma_{\mathrm{LO}}+\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}E_{\nu}}{\pi}\frac{\alpha}{\pi}\left[\frac{1}{24}\left(19-4\pi^{2}\left(1-\frac{3}{2}x+\frac{5}{2}x^{2}\right)+16\ln
r_{e}+36x\left(1-2\ln x\right)+x^{2}\left(45-4x\right)\right)\right.$
$\displaystyle\left.+\frac{x}{2}\left(\left(1+3x\right)\mathrm{Li}_{2}x+\left(3-7x\right)\left(\mathrm{Li}_{2}\left(1-\frac{1}{x}\right)+\frac{1}{2}\ln^{2}x\right)-8+x\ln
x\ln r_{e}\right)-x\ln x\left(1-\frac{13}{4}x\right)\right.$
$\displaystyle\left.-\ln\left(1-x\right)\left(\left(1-\frac{1}{2}x-\frac{5}{2}x^{2}\right)\ln
x+\left(1-x\right)^{2}\left(4+\ln
r_{e}\right)\right)-x\left(1-\frac{x\left(3-x\right)}{6}\right)\ln
r_{e}\right],$ (11) $\displaystyle\sigma^{\bar{\nu}}$
$\displaystyle\underset{r_{e}\ll
1}{\longrightarrow}\sigma^{\bar{\nu}}_{\mathrm{LO}}+\frac{2\mathrm{G}_{\mathrm{F}}^{2}m_{e}E_{\nu}}{\pi}\frac{\alpha}{\pi}\left[\frac{1}{72}\left(43-4\pi^{2}\left(1+9x+\frac{3}{2}x^{2}-x^{3}\right)+16\ln
r_{e}+\frac{27x}{2}\left(9-4\ln x\right)-41x^{2}\right)\right.$
$\displaystyle\left.+\frac{25x^{3}}{144}+\frac{x}{2}\left(\left(7+x-x^{2}\right)\mathrm{Li}_{2}x-\left(3+x-\frac{x^{2}}{3}\right)\left(\mathrm{Li}_{2}\left(1-\frac{1}{x}\right)+\frac{1}{2}\ln^{2}x\right)-\frac{34}{9}+\frac{x^{2}}{2}\ln
x\ln r_{e}\right)\right.$ $\displaystyle\left.-\frac{x}{3}\ln
x\left(1+\frac{5}{4}x-\frac{47}{24}x^{2}\right)-\ln\left(1-x\right)\left(\left(1-6x-\frac{3}{2}x^{2}+x^{3}\right)\frac{\ln
x}{3}+\left(1-\frac{3x}{2}+\frac{x^{3}}{2}\right)\frac{\ln
r_{e}}{3}\right)\right.$
$\displaystyle\left.-\left(\frac{17}{9}+\frac{x\left(7-11x\right)}{18}\right)\left(1-x\right)\ln\left(1-x\right)-\frac{x}{3}\left(1-\frac{x\left(3+x\right)}{12}\right)\ln
r_{e}\right],$ (12)
while the high-energy limit, $x\ll 1$, is given by
$\displaystyle\sigma\left(E_{\nu}\right)-\sigma_{\mathrm{LO}}\left(E_{\nu}\right)$
$\displaystyle\underset{x\ll
1}{\longrightarrow}\frac{\mathrm{G}_{\mathrm{F}}^{2}m_{e}E_{\nu}}{12\pi}\frac{\alpha}{\pi}\left(19-4\pi^{2}+16\ln
r_{e}+36x\left(1-2\ln x\right)\right),$ (13)
$\displaystyle\sigma^{\bar{\nu}}\left(E_{\nu}\right)-\sigma^{\bar{\nu}}_{\mathrm{LO}}\left(E_{\nu}\right)$
$\displaystyle\underset{x\ll
1}{\longrightarrow}\frac{\mathrm{G}_{\mathrm{F}}^{2}m_{e}E_{\nu}}{36\pi}\frac{\alpha}{\pi}\left(43-4\pi^{2}\left(1+\frac{9}{2}x\right)+16\ln
r_{e}+\frac{27}{2}x\left(9-4\ln x\right)\right),$ (14)
where the leading terms at $x\to 0$ coincide with the well-known expressions
in Ref. [14].
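The statement that radiative corrections reduce the cross section can be checked against the high-energy limit of Eq. (13). The sketch below (with assumed values for $\mathrm{G}_{\mathrm{F}}$, $\alpha$, and the lepton masses) normalizes the correction by the asymptotic leading-order cross section $\sigma_{\mathrm{LO}}\simeq 2\mathrm{G}_{\mathrm{F}}^{2}m_{e}E_{\nu}/\pi$:

```python
import math

G_F = 1.1663787e-5   # GeV^-2, assumed PDG value
M_MU = 0.1056584     # GeV, assumed
M_E = 0.000510999    # GeV, assumed
ALPHA = 1.0 / 137.035999  # assumed fine-structure constant

def rel_correction_nu(e_nu):
    """Eq. (13) divided by the asymptotic sigma_LO ~ 2 G_F^2 m_e E_nu / pi."""
    x = M_MU**2 / (2.0 * M_E * e_nu)
    r_e = M_E / (2.0 * e_nu)
    delta = (G_F**2 * M_E * e_nu / (12.0 * math.pi) * (ALPHA / math.pi)
             * (19.0 - 4.0 * math.pi**2 + 16.0 * math.log(r_e)
                + 36.0 * x * (1.0 - 2.0 * math.log(x))))
    sigma_lo = 2.0 * G_F**2 * M_E * e_nu / math.pi
    return delta / sigma_lo

# Negative: radiative corrections reduce the cross section.
print(f"relative O(alpha) correction at 500 GeV: {rel_correction_nu(500.0):.4f}")
```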
We provide the total cross section, the double-differential distribution in muon energy and muon scattering angle, the double-differential distribution in muon energy and photon energy, and the triple-differential distribution in muon energy, muon scattering angle, and photon energy in Appendixes C and D and in the Supplemental Material.
## 4 Distortion of experimentally accessed distributions
Experimentally, inverse muon decay events are distinguished from other reactions by looking for high-energy muons, above the $E_{\mu}^{\mathrm{min}}$ of Eq. (5), with no other particles in the final state and directed along the incoming neutrino due to the kinematics of elastic scattering from electrons. Radiative corrections produce events with real photons in the final state and with a different distribution of muon energies and angles than in the tree-level process. This section explores those changes from the tree-level predictions.
Experiments will need to reject events from $\nu_{\mu}$ quasielastic scattering on nucleons in nuclei, which may appear consistent with elastic kinematics but will have a recoiling proton in the final state. Similarly, inelastic processes can produce high-energy forward muons with other particles in the final state. Because there are many possible elastic and inelastic reactions, a common experimental strategy is to remove events with any visible energy other than the muon in the final state. An energetic real photon from radiative processes, even one nearly collinear with the muon, may produce visible energy that causes the event to be vetoed under this requirement.
While this experimental strategy will be common to all measurements, the
details of the effect will be particular to each experimental setup. In its
analysis [5], MINERvA predicted the relative acceptance as a function of
photon energy, and that prediction is shown in Fig. 2.
Figure 2: MINERvA’s probability to accept a radiative IMD event as a function of collinear photon energy [5]. Figure 3: Left panel: the MINERvA medium-energy flux is averaged over the energies above the IMD threshold up to $30$ and $80\ \mathrm{GeV}$. For the DUNE experiment, we present the averaging over the range up to $80\ \mathrm{GeV}$. Right panel: the ratio of the $\mathrm{O}\left(\alpha\right)$ contribution to the leading-order result for the muon energy spectrum above $E_{\mu}^{\mathrm{min}}$, averaged over the anticipated DUNE flux (red dashed line), is compared to this ratio averaged over the medium-energy flux of the MINERvA experiment (blue solid line). The green dashed line shows the further reduction in cross section from vetoing events that contain a real radiated photon in MINERvA’s analysis of IMD events [5]. MINERvA’s probability to accept events with real photons as a function of the photon energy is shown in Fig. 2.
In Fig. 3, we show the effect of the $\mathrm{O}\left(\alpha\right)$ corrections on muon energy spectra averaged over the high-energy tails of the expected flux of the DUNE experiment [40] and of the medium-energy “neutrino” (forward horn current) mode of the MINERvA experiment [2, 41, 3], two representative examples of neutrino experiments that use or will use IMD to constrain their high-energy flux tails. For the MINERvA and DUNE predictions, we average over the (anti)neutrino energy above the threshold value $E_{\nu}^{\mathrm{thr}}$ but below $30$ and $80\ \mathrm{GeV}$, respectively. We illustrate this averaging in the left panel of Fig. 3 and compare the fluxes averaged over the same region in both experiments. The average over the flux decreases the resulting cross section compared to the fixed-energy $E_{\nu}=15\ \mathrm{GeV}$ result shown in Fig. 1, since the flux falls monotonically with (anti)neutrino energy and is convolved with a slowly rising cross section. Distortions of the muon energy spectrum increase as the neutrino energy approaches the inverse muon decay threshold from above.
The effect on the measurable cross section from the removal of events with
real photons by MINERvA is also shown in Fig. 3 and compared with the
$\mathrm{O}\left(\alpha\right)$ correction. It is less than a $1\%$ reduction
in the observed rate, with a larger effect for higher muon energies.
The kinematics of elastic scattering from electrons produces a relationship
between the muon energy and angle with respect to the incoming neutrino
direction. A useful combination is
$\displaystyle{\cal{F}}\left(E_{\mu},\ \theta_{\mu}\right)\equiv E_{\mu}\theta_{\mu}^{2}\approx\left(1-\frac{E_{\mu}}{E_{\nu}}\right)\left(2m_{e}-\frac{m_{\mu}^{2}}{E_{\mu}}\right).$ (15)
When $E_{\nu}\gg E_{\mu}$ and $E_{\mu}\gg E_{\mu}^{\mathrm{min}}$, ${\cal{F}}$
can approach its upper limit of $2m_{e}$.
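A short sketch of the approximate relation in Eq. (15) (with assumed PDG masses): scanning over the allowed muon energies at $E_{\nu}=15\ \mathrm{GeV}$ confirms that ${\cal F}$ stays far below its $2m_{e}\approx 1.02\ \mathrm{MeV}$ upper limit at this energy:

```python
import math

M_MU = 0.1056584   # GeV, assumed PDG value
M_E = 0.000510999  # GeV, assumed PDG value

def f_var(e_nu, e_mu):
    """Approximate F = E_mu * theta_mu^2 from Eq. (15), in GeV."""
    return (1.0 - e_mu / e_nu) * (2.0 * M_E - M_MU**2 / e_mu)

e_nu = 15.0
e_min = (M_MU**2 + M_E**2) / (2.0 * M_E)  # minimum muon energy, Eq. (5)
values = [f_var(e_nu, e_min + (e_nu - e_min) * i / 1000) for i in range(1, 1000)]
f_max_kev = max(values) * 1e6  # GeV -> keV

print(f"max F at E_nu = 15 GeV: {f_max_kev:.1f} keV")  # well below 2 m_e ~ 1022 keV
```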
In measurements of elastic neutrino-electron scattering by the MINERvA experiment [1, 3, 4], the same quantity was used to select events due to elastic scattering from electrons. In that case, $E_{e}\gg E_{e}^{\mathrm{min}}$ for all of the selected events. However, for IMD with the experimental fluxes considered above from DUNE and MINERvA, neither condition above holds for most events, and therefore typically ${\cal{F}}\ll 2m_{e}$. In particular, because the factor $\left(1-\frac{E_{\mu}}{E_{\nu}}\right)$ is usually small, one might want to consider an “idealized” version of ${\cal{F}}$,
$\displaystyle{{\cal{F}}^{\rm\scriptstyle ideal}}\left(E_{\mu},\ \theta_{\mu}\right)\equiv\frac{E_{\mu}\theta_{\mu}^{2}}{1-\frac{E_{\mu}}{E_{\nu}}}\approx 2m_{e}-\frac{m_{\mu}^{2}}{E_{\mu}}.$ (16)
However, this quantity is not accessible since the neutrino energy is not
known on an event-by-event basis.
In the measurement by the MINERvA experiment [5], the analysis enforced elastic kinematics using a “maximum” energy for likely candidate events in its beam. The variable ${\cal{F}}^{\rm\scriptstyle MINERvA}$,
$\displaystyle{{\cal{F}}^{\rm\scriptstyle MINERvA}}\left(E_{\mu},\ \theta_{\mu}\right)\equiv\frac{E_{\mu}\theta_{\mu}^{2}}{1-\frac{E_{\mu}}{E^{\mathrm{max}}}},$ (17)
with $E^{\mathrm{max}}=35\ \mathrm{GeV}$, was used for the selection of signal events by placing a cut on ${{\cal{F}}^{\rm\scriptstyle MINERvA}}\left(E_{\mu},\ \theta_{\mu}\right)$.
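The three definitions in Eqs. (15)–(17) can be compared at a single kinematic point. This illustrative sketch (assumed PDG masses, exact angle from Eq. (3)) shows their ordering at this point, where $E_{\mu}$ is below both $E_{\nu}$ and $E^{\mathrm{max}}$:

```python
import math

M_MU = 0.1056584       # GeV, assumed PDG value
M_E = 0.000510999      # GeV, assumed PDG value
E_MAX_MINERVA = 35.0   # GeV, from Eq. (17)

def theta_mu(e_nu, e_mu):
    """Muon scattering angle in radians from Eq. (3)."""
    num = 2.0 * (e_nu * e_mu + M_E * e_mu - M_E * e_nu) - M_MU**2 - M_E**2
    den = 2.0 * e_nu * math.sqrt(e_mu**2 - M_MU**2)
    return math.acos(min(1.0, num / den))

def f_variants(e_nu, e_mu):
    """Return the three F variants of Eqs. (15), (16), and (17), in GeV."""
    th = theta_mu(e_nu, e_mu)
    f = e_mu * th**2                              # Eq. (15)
    f_ideal = f / (1.0 - e_mu / e_nu)             # Eq. (16)
    f_minerva = f / (1.0 - e_mu / E_MAX_MINERVA)  # Eq. (17)
    return f, f_ideal, f_minerva

f, f_ideal, f_minerva = f_variants(15.0, 13.0)
for name, val in [("F", f), ("F_ideal", f_ideal), ("F_MINERvA", f_minerva)]:
    print(f"{name} = {val * 1e6:.1f} keV")
```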
Figure 4: The variable $\cal{F}$ as defined in Eqs. (15), (16), and (17), presented as a function of the muon energy $E_{\mu}$ at the fixed neutrino energy $E_{\nu}=15$ GeV. Figure 5: Distribution of the variable $\cal{F}$ for tree-level IMD events according to the definitions in Eqs. (15), (16), and (17), shown at the fixed neutrino energy $E_{\nu}=15$ GeV.
To illustrate the various definitions of the variable $\cal{F}$, we present all three variants as a function of the final-state muon energy $E_{\mu}$ for the fixed neutrino energy $E_{\nu}=15$ GeV in Fig. 4. In inverse muon decay, the size of this variable is below the $10$–$100\ \mathrm{keV}$ level. $\cal{F}$ vanishes in both the forward and backward directions for the definitions in Eqs. (15) and (17), in contrast to the forward scattering only for the definition in Eq. (16). In Fig. 5, we also present the tree-level distributions of the variable $\cal{F}$ for the same illustrative neutrino energy, in the region that is kinematically allowed for all three definitions. We observe a significant redistribution of events when moving from one definition of the variable $\cal{F}$ to another.
To illustrate the effect of radiative corrections on the distribution of the ${\cal F}$ variables, we keep MINERvA’s definition in Eq. (17) for applications to MINERvA’s study [2, 41]. However, for a general experiment, including the application to the DUNE flux [40] in this paper, we do not wish to enforce a maximum neutrino energy above which we would drop the constraint, so we instead study the original ${\cal{F}}$ of Eq. (15). In Fig. 6, we present the tree-level distribution of the variable ${\cal{F}}^{\mathrm{MINERvA}}$ for the fixed incoming neutrino energy $E_{\nu}=15\ \mathrm{GeV}$ and compare it to the $\mathrm{O}\left(\alpha\right)$ contribution of radiative corrections, obtained both by integrating the double-differential distribution in muon energy and muon scattering angle and by the naive estimate that assumes the kinematics of the radiation-free process together with Eq. (8). The $\mathrm{O}\left(\alpha\right)$ contributions shift the distribution of the ${\cal{F}}^{\mathrm{MINERvA}}$ variable by a percent-level correction. Note also that all inverse muon decay events from neutrinos of energy $E_{\nu}\leq 30\ \mathrm{GeV}$ belong to the first bin in the variable ${\cal{F}}^{\mathrm{MINERvA}}$ considered in Ref. [5], i.e., $0\leq{\cal{F}}^{\mathrm{MINERvA}}\leq 250\ \mathrm{keV}$.
We provide an analogous comparison for the distributions of the variable ${\cal{F}}$ averaged over the MINERvA medium-energy flux and over the anticipated DUNE flux [40] in Figs. 7 and 8, respectively. In each case, we observe percent-level distortions due to $\mathrm{O}\left(\alpha\right)$ radiative corrections. Moreover, there is a significant difference between a naive calculation (applying corrections from Eq. (8) under the assumption of radiation-free kinematics) and the complete calculation, which properly accounts for the angular distribution.
Figure 6: Comparison of the leading order and the $\mathrm{O}\left(\alpha\right)$ correction in the distribution of the variable ${\cal{F}}^{\mathrm{MINERvA}}$ for a fixed neutrino energy $E_{\nu}=15$ GeV. The ratio of the leading-order processes $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$ and $\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$ is almost constant, as shown by the red dotted line. The $\mathrm{O}\left(\alpha\right)$ correction from Eq. (8), under the assumption that the kinematics is identical to that of radiation-free scattering, is shown by the blue solid line. It is compared to the “true” $\mathrm{O}\left(\alpha\right)$ contribution, obtained by integrating the appropriate double-differential distribution and adding virtual and soft-photon corrections. Note that both $\mathrm{O}\left(\alpha\right)$ contributions are negative and so decrease the cross section. Figure 7: Ratio of the $\mathrm{O}\left(\alpha\right)$ contribution to the leading-order result for the distribution of the variable ${\cal{F}}^{\mathrm{MINERvA}}$, cf. Eq. (17), averaged over the medium-energy flux of the MINERvA experiment. The $\mathrm{O}\left(\alpha\right)$ correction of Eq. (8), assuming the kinematics of radiation-free scattering (blue solid line), is compared to the $\mathrm{O}\left(\alpha\right)$ contribution obtained by integrating the appropriate double-differential distribution and adding virtual and soft-photon corrections (green dashed line). Figure 8: Same as Fig. 7, but for the anticipated DUNE flux and the definition of the variable ${\cal{F}}$ in Eq. (15).
## 5 Conclusions and Outlook
The goal of this paper is to enable percent-level constraints on the incoming (anti)neutrino fluxes through measurements of the inverse muon decay reaction on atomic electrons. To this end, we performed a study of radiative corrections and various distributions for inverse muon decay. We confirmed an analytical expression for the muon energy spectrum in $\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$ and presented a new expression for the spectrum in $\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$. We provided the following new cross sections: the triple-differential distribution in muon energy, muon scattering angle, and photon energy; the double-differential distribution in muon energy and muon scattering angle; the double-differential distribution in photon energy and photon scattering angle; the double-differential distribution in muon energy and photon energy for the dominant muon channel; and the total radiative cross section for both channels.
We investigated the effects of $\mathrm{O}\left(\alpha\right)$ radiative corrections on the muon energy spectrum and on the distribution of the experimentally accessed variable $\cal{F}$. In both cases, the corresponding distortions are at the percent level. We have clarified the definition of the variable $\cal{F}$ that discriminates between inverse muon decay and other neutrino interactions. We noted that there is a significant difference between the complete calculation of ${\cal F}$ from the two-dimensional distribution in muon energy and muon scattering angle and a naive implementation based on the muon energy spectrum alone.
Providing radiative corrections to inverse muon decay paves the way for percent-level constraints on the high-energy flux tails of modern and future neutrino oscillation and cross-section experiments.
## Acknowledgments
O.T. thanks Matthias Heller and Marc Vanderhaeghen for technical discussions
while working on other projects. This work is supported by the US Department
of Energy through the Los Alamos National Laboratory and by LANL’s Laboratory
Directed Research and Development (LDRD/PRD) program under project number
20210968PRD4. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). This work was also
supported by the U.S. Department of Energy, Office of Science, Office of High
Energy Physics, under Awards DE-SC0019095 and DE-SC0008475. K.S.M.
acknowledges support from a Fermilab Intensity Frontier Fellowship during the
early stages of this work, and from the University of Rochester’s Steven Chu
Professorship in Physics. D.R. gratefully acknowledges support from a Cottrell
Postdoctoral Fellowship, Research Corporation for Scientific Advancement Award
No. 27467 and National Science Foundation Award CHE2039044. FeynCalc [42, 43],
LoopTools [44], Mathematica [45], and DataGraph were extremely useful in this
work.
## Appendix A Virtual corrections
To evaluate virtual contributions, it is convenient to express vertex
corrections as a deviation of the charged-lepton current
$\left(\delta\mathrm{J}^{\mathrm{L}}\right)^{\nu}$ from the tree-level
expression
$\left(\mathrm{J}^{\mathrm{L}}\right)^{\nu}=\bar{\mu}\left(p_{\mu}\right)\gamma^{\nu}\mathrm{P}_{\mathrm{L}}e\left(p_{e}\right)$
as
$\displaystyle\left(\delta\mathrm{J}^{\mathrm{L}}\right)^{\nu}=e^{2}\int\frac{\mathrm{d}^{D}L}{(2\pi)^{D}}\frac{\bar{\mu}\left(p_{\mu}\right)\gamma^{\lambda}\left(\slashed{p}_{\mu}-\slashed{L}+m_{\mu}\right)\gamma^{\nu}\mathrm{P}_{\mathrm{L}}\left(\slashed{p}_{e}-\slashed{L}+m_{e}\right)\gamma^{\rho}e\left(p_{e}\right)}{\left[(p_{\mu}-L)^{2}-m_{\mu}^{2}\right]\left[(p_{e}-L)^{2}-m_{e}^{2}\right]}\mathrm{\Pi}_{\lambda\rho}\left(L\right),$ (18)
with the momentum-space photon propagator $\mathrm{\Pi}^{\mu\nu}$:
$\displaystyle\mathrm{\Pi}^{\mu\nu}\left(L\right)=\frac{i}{L^{2}-\lambda^{2}}\left[-g^{\mu\nu}+\left(1-\xi_{\gamma}\right)\frac{L^{\mu}L^{\nu}}{L^{2}-a\xi_{\gamma}\lambda^{2}}\right],$
(19)
where the photon mass $\lambda$ regulates the infrared divergence,
$\xi_{\gamma}$ is the photon gauge-fixing parameter, and $a$ is an arbitrary
constant. The corresponding field renormalization factors for the external
charged leptons are evaluated from the one-loop self-energies as [46, 47]
$\displaystyle Z_{\ell}$
$\displaystyle=1-\frac{\alpha}{4\pi}\frac{\xi_{\gamma}}{\varepsilon}-\frac{\alpha}{4\pi}\left(\ln\frac{\mu^{2}}{m_{\ell}^{2}}+2\ln\frac{\lambda^{2}}{m_{\ell}^{2}}+4\right)+\frac{\alpha}{4\pi}\left(1-\xi_{\gamma}\right)\left(\ln\frac{\mu^{2}}{\lambda^{2}}+1+\frac{a\xi_{\gamma}\ln
a\xi_{\gamma}}{1-a\xi_{\gamma}}\right),$ (20)
with the renormalization scale in dimensional regularization $\mu$, where the
number of dimensions is $D=4-2\varepsilon$. Neglecting Lorentz structures
whose contractions with the (anti)neutrino current vanish at $m_{\nu}=0$ and
denoting the ratio of lepton masses as $r=m_{e}/m_{\mu}$, the resulting
correction to the charged lepton current is expressed as
$\displaystyle\left(\sqrt{Z_{e}Z_{\mu}}-1\right)\left(\mathrm{J}^{\mathrm{L}}\right)^{\nu}+\left(\delta\mathrm{J}^{\mathrm{L}}\right)^{\nu}=\frac{\alpha}{2\pi}\bar{\mu}\left(p_{\mu}\right)\left(g_{M}\gamma^{\nu}-f_{2}\frac{p_{\mu}^{\nu}+rp_{e}^{\nu}}{2m_{\mu}r^{2}}-g^{5}_{M}\gamma^{\nu}\gamma_{5}+f^{5}_{2}\frac{p_{\mu}^{\nu}-rp_{e}^{\nu}}{2m_{\mu}r^{2}}\gamma_{5}\right)e\left(p_{e}\right),$
(21)
where the form factors $g_{M}$, $g^{5}_{M}$, $f_{2}$, and $f^{5}_{2}$ are [39]
$\displaystyle g^{(5)}_{M}\left(\eta,r,\beta\right)$
$\displaystyle=-1+\frac{1}{\beta}\left(\frac{1}{2}\left(2\beta-\ln\frac{1+\beta}{1-\beta}\right)\ln\frac{2m_{e}}{\lambda}+\frac{1}{2}\ln\frac{1+\beta}{1-\beta}\ln\frac{1+\beta}{\beta}-\ln\frac{r\sqrt{1-\beta}-\sqrt{1+\beta}}{r\sqrt{1+\beta}-\sqrt{1-\beta}}\frac{\ln
r}{2}\right.$
$\displaystyle+\left.\frac{3}{8}\ln\frac{1+\beta}{1-\beta}+\frac{\sqrt{1-\beta^{2}}}{8\eta}\ln\frac{1+\beta}{1-\beta}+\frac{1}{4}\ln\frac{1+\beta}{1-\beta}\ln\frac{2r-\left(1+r^{2}\right)\sqrt{1-\beta^{2}}}{1-\beta}+\frac{\pi^{2}}{12}\right.$
$\displaystyle+\left.\frac{1}{2}\mathrm{Li}_{2}\frac{1-\beta}{1+\beta}-\frac{1}{2}\mathrm{Li}_{2}\left(\frac{\sqrt{1-\beta}}{\sqrt{1+\beta}}r\right)-\frac{1}{2}\mathrm{Li}_{2}\left(\frac{\sqrt{1-\beta}}{\sqrt{1+\beta}}\frac{1}{r}\right)-\frac{5}{16}\ln^{2}\frac{1+\beta}{1-\beta}-\frac{1}{4}\ln^{2}r\right)$
$\displaystyle+\frac{\sqrt{1-\beta^{2}}}{8\beta}\frac{\left(r+\eta\right)^{2}\left(1-\eta\sqrt{1-\beta^{2}}\right)}{2r-\left(1+r^{2}\right)\sqrt{1-\beta^{2}}}\ln\frac{1+\beta}{1-\beta}-\frac{12r-\left(7+5r^{2}\right)\sqrt{1-\beta^{2}}}{2r-\left(1+r^{2}\right)\sqrt{1-\beta^{2}}}\frac{\ln
r}{4}-\ln\frac{2}{r},$ (22) $\displaystyle
f^{(5)}_{2}\left(\eta,r,\beta\right)$
$\displaystyle=\frac{r^{2}}{2}\frac{\sqrt{1-\beta^{2}}}{\beta}\frac{1-\eta\sqrt{1-\beta^{2}}}{2r-\left(1+r^{2}\right)\sqrt{1-\beta^{2}}}\ln\frac{1+\beta}{1-\beta}-\frac{r-\eta}{r+\eta}\frac{r^{2}\sqrt{1-\beta^{2}}}{2r-\left(1+r^{2}\right)\sqrt{1-\beta^{2}}}\ln
r\,.$ (23)
Here, $\beta$ is the velocity of the muon in the electron rest frame; $\eta=1$ for the form factors $g_{M}$ and $f_{2}$, and $\eta=-1$ for the form factors $g^{5}_{M}$ and $f^{5}_{2}$.
The expressions above were presented in the literature using this approach in
Ref. [39]. Technically equivalent evaluations of physical observables and
similar quantities with distinct intermediate steps were performed in Refs.
[48, 49, 50]. Refs. [51, 52, 53, 54] have explored higher-order expansions in
$\alpha$ for electromagnetic transitions between fermions of different mass.
## Appendix B Real radiation
Inverse muon decay with single photon emission is described by the
Bremsstrahlung contribution $\mathrm{T}^{1\gamma}$:
$\displaystyle\mathrm{T}^{1\gamma}=-2\sqrt{2}\mathrm{G}_{\mathrm{F}}ie\,\bar{\nu}_{e}\gamma^{\mu}\mathrm{P}_{\mathrm{L}}\nu_{\mu}\,\left[\left(\frac{{p}^{\nu}_{\mu}}{p_{\mu}\cdot k_{\gamma}}-\frac{{p}^{\nu}_{e}}{p_{e}\cdot k_{\gamma}}\right)\bar{\mu}\gamma_{\mu}\mathrm{P}_{\mathrm{L}}e+\frac{1}{2}\bar{\mu}\left(\frac{\gamma^{\nu}\slashed{k}_{\gamma}\gamma_{\mu}}{p_{\mu}\cdot k_{\gamma}}+\frac{\gamma_{\mu}\slashed{k}_{\gamma}\gamma^{\nu}}{p_{e}\cdot k_{\gamma}}\right)\mathrm{P}_{\mathrm{L}}e\right]\varepsilon^{\star}_{\nu},$ (24)
with the photon polarization four-vector $\varepsilon^{\star}_{\nu}$. Let us
consider separately the cases of soft and hard photon emission.
### B.1 Soft-photon Bremsstrahlung
Inverse muon decay with radiation of photons of arbitrarily small energy cannot be experimentally distinguished from the radiation-free process. All events with photons below some energy cutoff $k_{\gamma}\leq\Delta E$ (in the electron rest frame) must therefore be included in measured observables. The corresponding scattering cross section factorizes in terms of the tree-level result of Eqs. (6) and (7) as
$\displaystyle\mathrm{d}\sigma\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}\gamma,\ \bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}\gamma;\ k_{\gamma}\leq\Delta E\right)=\frac{\alpha}{\pi}\delta_{s}\left(\Delta E\right)\mathrm{d}\sigma_{\mathrm{LO}}\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-},\ \bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}\right),$ (25)
with the universal correction $\delta_{s}\left(\Delta E\right)$ [55, 26, 14,
18, 10]:
$\displaystyle\delta_{s}\left(\Delta
E\right)=\frac{1}{\beta}\left(\mathrm{Li}_{2}\frac{1-\beta}{1+\beta}-\frac{\pi^{2}}{6}\right)-\frac{2}{\beta}\left(\beta-\frac{1}{2}\ln\frac{1+\beta}{1-\beta}\right)\ln\frac{2\Delta
E}{\lambda}+\frac{1}{2\beta}\ln\frac{1+\beta}{1-\beta}\left(1+\ln\frac{\beta^{-2}\sqrt{1-\beta^{2}}}{4\left(1+\beta\right)^{-1}}\right)+1\,.$
(26)
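For orientation, the universal soft-photon correction of Eq. (26) can be evaluated numerically. The sketch below is our own illustration (not code from this work): it implements the dilogarithm as a plain power series and takes the lepton velocity $\beta$, the cutoff $\Delta E$, and the fictitious photon mass $\lambda$ as inputs.

```python
import math

def li2(x, tol=1e-15):
    """Dilogarithm Li2(x) = sum_{k>=1} x^k / k^2, valid for |x| < 1."""
    s, term, k = 0.0, x, 1
    while abs(term) / (k * k) > tol:
        s += term / (k * k)
        k += 1
        term *= x
    return s

def delta_s(beta, dE, lam):
    """Soft-photon correction delta_s(Delta E) of Eq. (26)."""
    L = math.log((1.0 + beta) / (1.0 - beta))
    t1 = (li2((1.0 - beta) / (1.0 + beta)) - math.pi ** 2 / 6.0) / beta
    t2 = -(2.0 / beta) * (beta - 0.5 * L) * math.log(2.0 * dE / lam)
    # beta^{-2} sqrt(1-beta^2) / (4 (1+beta)^{-1}) = sqrt(1-beta^2)(1+beta)/(4 beta^2)
    t3 = (L / (2.0 * beta)) * (
        1.0 + math.log(math.sqrt(1.0 - beta ** 2) * (1.0 + beta) / (4.0 * beta ** 2))
    )
    return t1 + t2 + t3 + 1.0
```

Doubling the cutoff shifts $\delta_s$ by exactly $-(2/\beta)\left(\beta-\tfrac{1}{2}\ln\tfrac{1+\beta}{1-\beta}\right)\ln 2$, which provides a quick check that the $\Delta E$ dependence is coded correctly.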
This region of phase space with low-energy photons cancels the infrared-
divergent contributions from the virtual diagrams. As a result, the combined
soft and virtual contributions multiply the tree-level cross sections of Eq. (4)
by an infrared-finite factor, i.e., one independent of the fictitious photon
mass $\lambda$ [56, 57, 58, 59], as
$\displaystyle\frac{\mathrm{d}\sigma\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}\right)+\mathrm{d}\sigma\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}\gamma\left(k_{\gamma}\leq\Delta
E\right)\right)}{\mathrm{d}\sigma_{\mathrm{LO}}\left(\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}\right)}=1+\frac{\alpha}{\pi}\Bigg{\\{}g_{M}+g^{5}_{M}+\delta_{s}\left(\Delta
E\right)$
$\displaystyle+\frac{rm^{2}_{\mu}}{4}\frac{\left(p_{\mu}-p_{e}\right)^{2}}{p_{\mu}\cdot
k_{{\nu}_{e}}p_{e}\cdot
k_{\nu_{\mu}}}\left(g_{M}-g^{5}_{M}\right)-\left(\frac{r^{2}m^{2}_{\mu}}{4}\frac{\left(p_{\mu}-p_{e}\right)^{2}}{p_{\mu}\cdot
k_{{\nu}_{e}}p_{e}\cdot k_{\nu_{\mu}}}+\frac{p_{e}\cdot
k_{{\nu}_{e}}}{p_{\mu}\cdot
k_{{\nu}_{e}}}\right)\left[\left(\frac{1+r}{2r}\right)^{2}f_{2}+\left(\frac{1-r}{2r}\right)^{2}f^{5}_{2}\right]\Bigg{\\}},$
(27)
where we can obtain the contributions in the
$\bar{\nu}_{e}e^{-}\to\bar{{\nu}}_{\mu}\mu^{-}$ reaction by replacing the
momenta of neutrinos with the momenta of antineutrinos.
### B.2 Contribution of hard photons
Here we evaluate the contribution of photons above the energy cutoff
$k_{\gamma}\geq\Delta E$ to the muon energy spectrum. Squaring the matrix
element of Eq. (24) for the inverse muon decay
$\nu_{\mu}e^{-}\to{\nu}_{e}\mu^{-}$, we obtain the result in terms of Lorentz
invariants as
$\displaystyle\frac{|\mathrm{T}^{1\gamma}|^{2}}{e^{2}|\mathrm{T}_{\mathrm{LO}}|^{2}}$
$\displaystyle=-\left(\frac{p_{\mu}}{p_{\mu}\cdot
k_{\gamma}}-\frac{p_{e}}{p_{e}\cdot k_{\gamma}}\right)^{2}+\frac{p_{\mu}\cdot
p_{e}}{p_{e}\cdot k_{\gamma}p_{\mu}\cdot
k_{\gamma}}\left(\frac{k_{{\nu}_{e}}\cdot k_{\gamma}}{k_{{\nu}_{e}}\cdot
p_{\mu}}-\frac{k_{\nu_{\mu}}\cdot k_{\gamma}}{k_{\nu_{\mu}}\cdot
p_{e}}\right)+\frac{k_{{\nu}_{\mu}}\cdot k_{\gamma}}{p_{e}\cdot
k_{{\nu}_{\mu}}p_{e}\cdot k_{\gamma}}-\frac{p_{e}\cdot
k_{\nu_{e}}}{p_{\mu}\cdot k_{\nu_{e}}p_{e}\cdot k_{\gamma}}$
$\displaystyle+\frac{1}{p_{\mu}\cdot k_{\gamma}}-\frac{1}{p_{e}\cdot
k_{\gamma}}+\frac{k_{\nu_{e}}\cdot k_{\gamma}}{p_{\mu}\cdot
k_{\nu_{e}}p_{\mu}\cdot k_{\gamma}}+\frac{p_{\mu}\cdot
k_{{\nu}_{\mu}}}{p_{e}\cdot k_{{\nu}_{\mu}}p_{\mu}\cdot
k_{\gamma}}+\frac{k_{\nu_{\mu}}\cdot k_{\gamma}}{\left(p_{e}\cdot
k_{\gamma}\right)^{2}}\frac{m^{2}_{e}}{p_{e}\cdot
k_{\nu_{\mu}}}-\frac{k_{{\nu}_{e}}\cdot k_{\gamma}}{\left(p_{\mu}\cdot
k_{\gamma}\right)^{2}}\frac{m^{2}_{\mu}}{p_{\mu}\cdot k_{{\nu}_{e}}},$ (28)
while the result for $\bar{\nu}_{e}e^{-}\to\bar{\nu}_{\mu}\mu^{-}$ is given by
the replacement of neutrino momenta by corresponding antineutrino momenta.
We perform the integration following the technique that was introduced in [11]
and further developed in [10, 39, 9, 8]. For the inverse muon decay, the
implementation is slightly more involved because two mass scales are present:
the electron mass and the muon mass.
First, we introduce the four-vector $l$:
$l=p_{e}+k_{\nu}-p_{\mu}=\left(l_{0},\vec{f}\right)$. Working in the rest
frame of the atomic electrons, the components of $l$ are
$\displaystyle l_{0}$ $\displaystyle=m_{e}+E_{\nu}-E_{\mu},$ (29)
$\displaystyle f^{2}$
$\displaystyle=|\vec{f}|^{2}=E_{\nu}^{2}+\beta^{2}E^{2}_{\mu}-2\beta
E_{\nu}E_{\mu}\cos\theta_{\mu}.$ (30)
Accounting for the conservation of energy and momentum, we obtain
$\displaystyle l^{2}$
$\displaystyle=2k_{\gamma}\left(l_{0}-f\cos\gamma\right),$ (31)
where $\gamma$ denotes the angle between the photon direction and the vector
$\vec{f}$. Using energy and momentum conservation to perform the integration
over the final-state neutrino momentum components and the photon energy, we
obtain the muon energy spectrum as
$\displaystyle\frac{\mathrm{d}\sigma^{1\gamma}}{\mathrm{d}E_{\mu}}$
$\displaystyle=\int\frac{|\mathrm{T}^{1\gamma}|^{2}}{256\pi^{4}m_{e}}\frac{k^{2}_{\gamma}f\mathrm{d}f\mathrm{d}\Omega_{k_{\gamma}}}{E^{2}_{\nu}\left(l_{0}^{2}-f^{2}\right)}.$
(32)
It is convenient to split the phase space into two regions with distinct
ranges of integration. There are no restrictions on the photon phase space in
region I: $l^{2}\geq 2\Delta E\left(l_{0}-f\cos\gamma\right)$. In this
region, the ranges of the kinematic variables are given by
$\displaystyle m_{e}+\frac{2\left(\Delta E\right)^{2}}{m_{e}-2\Delta
E}+\frac{\frac{m^{2}_{\mu}-m^{2}_{e}}{2}}{m_{e}-2\Delta E}$ $\displaystyle\leq
E_{\mu}\leq m_{e}+\frac{2\left(E_{\nu}-\Delta
E\right)^{2}}{m_{e}+2\left(E_{\nu}-\Delta
E\right)}+\frac{\frac{m^{2}_{\mu}-m^{2}_{e}}{2}}{m_{e}+2\left(E_{\nu}-\Delta
E\right)},$ (33) $\displaystyle|E_{\nu}-\beta E_{\mu}|$ $\displaystyle\leq
f\leq l_{0}-2\Delta E,$ (34) $\displaystyle\frac{l_{0}-f}{2}$
$\displaystyle\leq k_{\gamma}\leq\frac{l_{0}+f}{2}.$ (35)
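The lab-frame kinematics of Eqs. (29), (30), and (33) can be coded up directly. The sketch below is our own illustration; the PDG-like mass values in GeV are an assumption, not taken from the text. At forward scattering, $\cos\theta_{\mu}=1$, the magnitude $f$ reduces to the lower limit $|E_{\nu}-\beta E_{\mu}|$ of Eq. (34).

```python
import math

M_E, M_MU = 0.000511, 0.105658  # electron and muon masses in GeV (assumed values)

def l_components(E_nu, E_mu, cos_theta_mu):
    """Lab-frame l_0 and f = |f_vec| for l = p_e + k_nu - p_mu, Eqs. (29)-(30)."""
    beta = math.sqrt(1.0 - (M_MU / E_mu) ** 2)  # muon velocity
    l0 = M_E + E_nu - E_mu
    f2 = E_nu ** 2 + beta ** 2 * E_mu ** 2 - 2.0 * beta * E_nu * E_mu * cos_theta_mu
    return l0, math.sqrt(f2)

def region1_muon_energy_range(E_nu, dE):
    """Muon-energy limits of region I, Eq. (33); requires dE < m_e / 2."""
    half_dm2 = 0.5 * (M_MU ** 2 - M_E ** 2)
    lo = M_E + 2.0 * dE ** 2 / (M_E - 2.0 * dE) + half_dm2 / (M_E - 2.0 * dE)
    denom = M_E + 2.0 * (E_nu - dE)
    hi = M_E + 2.0 * (E_nu - dE) ** 2 / denom + half_dm2 / denom
    return lo, hi
```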
In the complementary region II, $l^{2}\leq 2\Delta
E\left(l_{0}-f\cos\gamma\right)$, which is close to the kinematics of the
radiation-free process, the angle between the photon momentum and the vector
$\vec{f}$ is restricted by
$\displaystyle\cos\gamma\geq\frac{1}{f}\left(l_{0}-\frac{l^{2}}{2\Delta
E}\right).$ (36)
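Since $l^{2}=l_{0}^{2}-f^{2}$, Eq. (31) can be solved for $\cos\gamma$, and the photon-energy limits of Eq. (35) correspond exactly to $\cos\gamma=\mp 1$, i.e., the photon emitted antiparallel or parallel to $\vec{f}$. A short numerical check (our own sketch, with arbitrary illustrative values of $l_{0}$ and $f$):

```python
def cos_gamma(l0, f, k_gamma):
    """Solve Eq. (31) for cos(gamma), using l^2 = l0^2 - f^2."""
    l2 = l0 ** 2 - f ** 2
    return (l0 - l2 / (2.0 * k_gamma)) / f

l0, f = 5.0, 2.0  # arbitrary values with l0 > f > 0
# At the photon-energy endpoints of Eq. (35), the photon is (anti)parallel to f_vec:
assert abs(cos_gamma(l0, f, (l0 - f) / 2.0) + 1.0) < 1e-12
assert abs(cos_gamma(l0, f, (l0 + f) / 2.0) - 1.0) < 1e-12
```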
This region gives a factorizable contribution $\delta_{\mathrm{II}}$,
which adds linearly to $\delta_{s}\left(\Delta E\right)$ of Eq. (27) [10]:
$\displaystyle\delta_{\mathrm{II}}$
$\displaystyle=\frac{1}{\beta}\left(\left(\frac{1}{2}+\ln\frac{\rho\left(1+\cos\delta_{0}\right)}{4\beta}\right)\ln\frac{1-\beta}{1+\beta}-\mathrm{Li}_{2}\frac{1-\beta}{1+\beta}-\mathrm{Li}_{2}\frac{\cos\delta_{0}-1}{\cos\delta_{0}+1}+\mathrm{Li}_{2}\left(\frac{\cos\delta_{0}-1}{\cos\delta_{0}+1}\frac{1+\beta}{1-\beta}\right)+\frac{\pi^{2}}{6}\right)$
$\displaystyle+\ln\frac{1-\beta\cos\delta_{0}}{\rho}-1,$ (37)
where the angle $\delta_{0}$ is given by
$\displaystyle\cos\delta_{0}=\frac{E_{\nu}^{2}-\beta^{2}E_{\mu}^{2}-l_{0}^{2}}{2\beta
E_{\mu}l_{0}}\,,$ (38)
and $\rho=\sqrt{1-\beta^{2}}$. Only the first term of Eq. (28) contributes
in this region. The same term generates the $\Delta E$ dependence after
integration over region I. For the muon energy spectrum including both soft
and hard photons, this dependence cancels against the soft-photon contribution
of Eq. (25). For the remaining terms, we can safely set $\Delta E=0$
from Eq. (28) onward.
## Appendix C Triple-differential distribution
In the following appendices, we provide analytic expressions for a few
unpolarized cross sections of interest for the dominant
$\nu_{\mu}e^{-}\to\nu_{\mu}e^{-}$ channel. The triple-differential cross
section with respect to the muon angle, muon energy, and photon energy is
given by
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}E_{\mu}\mathrm{d}f\mathrm{d}k_{\gamma}}=\frac{\mathrm{G}_{\mathrm{F}}^{2}}{2\pi
m_{e}E_{\nu}^{2}}\frac{\alpha}{\pi}\mathrm{I},$ (39) $\displaystyle\mathrm{I}$
$\displaystyle=\frac{\rho
f^{2}}{k_{\gamma}}\frac{\left(k_{\gamma}+E_{\mu}\right)\left(l^{2}\right)^{2}+\left(k_{\gamma}\left(s+m_{\mu}^{2}\right)-E_{\mu}l^{2}\right)\left(2s-m_{\mu}^{2}-m^{2}_{e}\right)+4E_{\mu}E_{\nu}m_{e}\left(s-m^{2}_{\mu}\right)-2k_{\gamma}sl^{2}}{\sqrt{d}}$
$\displaystyle-\frac{2\rho^{2}m_{\mu}^{2}m_{e}\left(s-m^{2}_{\mu}\right)E_{\nu}f^{4}\sigma}{d^{3/2}}-\frac{3}{16}\frac{m_{e}+k_{\gamma}}{m_{e}}\frac{\sigma^{2}}{\rho^{2}f^{4}k_{\gamma}^{2}}+\frac{m_{e}+k_{\gamma}}{m_{e}}\frac{\left(E_{\mu}^{2}+k_{\gamma}^{2}+4m_{e}k_{\gamma}\right)\left(l^{2}\right)^{2}}{4f^{2}k_{\gamma}^{2}}-\frac{2m_{e}^{2}E_{\nu}^{2}l_{0}^{2}}{f^{2}k_{\gamma}^{2}}$
$\displaystyle-\frac{m_{e}+k_{\gamma}}{m_{e}}\frac{\left(l^{2}-k_{\gamma}^{2}-2l_{0}k_{\gamma}\right)\left(m_{\mu}^{2}+m_{e}^{2}\right)^{2}}{4f^{2}k_{\gamma}^{2}}+\frac{2m_{e}E_{\nu}\left(m_{e}+E_{\nu}\right)l_{0}l^{2}}{f^{2}k_{\gamma}^{2}}+\frac{\left(2l^{2}-4E_{\nu}l_{0}+l_{0}^{2}\right)\left(m_{\mu}^{2}-m_{e}^{2}\right)}{f^{2}}$
$\displaystyle-\frac{m_{e}+k_{\gamma}}{m_{e}}\frac{\left(m_{\mu}^{2}-m_{e}^{2}-2m_{e}\left(l_{0}-3\left(m_{e}+E_{\nu}\right)\right)\right)l^{2}-2\left(E_{\nu}^{2}+2E_{\nu}l_{0}-m_{\mu}^{2}\right)m_{e}^{2}}{2f^{2}}-\frac{\left(E_{\nu}^{2}-4m_{e}l_{0}\right)l^{2}}{f^{2}}$
$\displaystyle-\frac{\left(m_{e}\left(9l^{2}-4m_{\mu}^{2}\right)-4l_{0}\left(m_{\mu}^{2}+2m_{e}^{2}\right)\right)l^{2}}{4f^{2}k_{\gamma}^{2}}+\frac{\left(l_{0}-2k_{\gamma}\right)\left(l_{0}-2m_{e}-2E_{\nu}\right)\left(l^{2}\right)^{2}}{2f^{2}k_{\gamma}^{2}}-\frac{m_{e}^{2}\left(l^{2}-4m_{\mu}^{2}\right)l^{2}}{4f^{2}k_{\gamma}^{2}}$
$\displaystyle-\frac{\left(\left(l^{2}\right)^{2}+E_{\nu}\left(l_{0}-m_{e}-k_{\gamma}\right)l^{2}+2\left(l_{0}+E_{\nu}\right)k_{\gamma}^{3}+2l_{0}^{3}k_{\gamma}\right)\left(m_{\mu}^{2}-m_{e}^{2}\right)}{2f^{2}k_{\gamma}^{2}}-\frac{E_{\nu}\left(l^{2}+k_{\gamma}l_{0}\right)l^{2}}{f^{2}k_{\gamma}}$
$\displaystyle-\frac{m_{e}\left(\left(l_{0}-k_{\gamma}\right)\left(l^{2}-4k_{\gamma}l_{0}\right)+\left(2k_{\gamma}^{2}+6k_{\gamma}l_{0}-2l_{0}^{2}\right)E_{\nu}\right)\left(m_{\mu}^{2}-m_{e}^{2}\right)}{2f^{2}k_{\gamma}^{2}}-\frac{l_{0}\left(l_{0}^{2}l^{2}+2m_{\mu}^{2}m_{e}^{2}\right)}{f^{2}k_{\gamma}}$
$\displaystyle+\frac{k_{\gamma}}{m_{e}}\frac{\left(\left(l_{0}^{2}-m_{\mu}^{2}-5k_{\gamma}\left(l_{0}-E_{\nu}\right)-2\left(E_{\nu}-k_{\gamma}\right)l_{0}\right)l^{2}-\left(E_{\nu}l_{0}-k_{\gamma}\left(2l_{0}+3E_{\nu}\right)\right)m_{\mu}^{2}\right)l^{2}}{2f^{2}k_{\gamma}^{2}}$
$\displaystyle+\frac{\left(2E_{\nu}l_{0}\left(3l^{2}-8m_{e}l_{0}+4m_{e}^{2}\right)+4l_{0}^{2}\left(m_{\mu}^{2}+2m_{e}l_{0}-5m_{e}^{2}\right)+l^{2}\left(2m_{\mu}^{2}+7m_{e}l_{0}\right)\right)E_{\nu}}{2f^{2}k_{\gamma}^{2}}$
$\displaystyle-\frac{12E_{\nu}\left(E_{\mu}l^{2}+m_{e}l_{0}\left(l_{0}-E_{\nu}\right)\right)+4m_{e}l_{0}\left(m_{e}l_{0}+m_{\mu}^{2}\right)+m_{e}E_{\nu}\left(3l^{2}-4m_{e}l_{0}\right)}{2f^{2}},$
(40)
with the kinematic notations
$\displaystyle\sigma$
$\displaystyle=\rho\left(E_{\nu}^{2}-f^{2}-E_{\mu}^{2}+m_{\mu}^{2}\right)\left(l^{2}-2k_{\gamma}l_{0}\right)+4k_{\gamma}m_{\mu}f^{2},$
(41) $\displaystyle d$
$\displaystyle=\beta^{2}m_{\mu}^{2}l^{2}E_{\nu}^{2}\left(l^{2}+4k_{\gamma}^{2}-4k_{\gamma}l_{0}\right)\sin^{2}\theta_{\mu}+\frac{\sigma^{2}}{4},$
(42)
and the squared center-of-mass energy
$s=m^{2}_{e}+2m_{e}E_{\nu}$.
## Appendix D Double-differential distribution in muon energy and muon angle
Integrating Eq. (39) over the photon energy $k_{\gamma}$, we obtain the
double-differential cross section with respect to the recoil muon energy and
muon angle. The result is expressed in a similar to elastic neutrino-electron
scattering form [10]:
$\frac{\mathrm{d}\sigma}{\mathrm{d}E_{\mu}\mathrm{d}f}=\frac{\mathrm{G}_{\mathrm{F}}^{2}}{\pi
E_{\nu}^{2}}\frac{m_{\mu}}{m_{e}}\frac{\alpha}{\pi}\left(a+b\frac{f}{l^{2}_{0}-f^{2}}\ln\frac{1+\beta}{1-\beta}+c\ln\frac{l_{0}+f}{l_{0}-f}+\tilde{d}\ln\frac{l_{0}-\beta
f\cos\delta-\sqrt{g}}{l_{0}-\beta f\cos\delta+\sqrt{g}}\right),$ (43)
with $g=\left(f\cos\delta-\beta l_{0}\right)^{2}+\rho^{2}f^{2}\sin^{2}\delta$
and the angle $\delta$ between vectors $\vec{f}$ and $\vec{p}_{\mu}$:
$\cos\delta=\frac{E_{\nu}^{2}-\beta^{2}E_{\mu}^{2}-f^{2}}{2\beta E_{\mu}f}.$
(44)
The coefficients $a,\leavevmode\nobreak\ b,\leavevmode\nobreak\ c,$ and
$\tilde{d}$ are given by
$\displaystyle a$
$\displaystyle=\frac{\beta\cos\delta}{\rho}\left(m^{2}_{\mu}-m^{2}_{e}\right)+\frac{f}{m_{\mu}}\left(s-m^{2}_{\mu}-m^{2}_{e}-\frac{2\left(s-m^{2}_{\mu}\right)\left(s-m^{2}_{e}\right)}{l^{2}}\right)+\left(f+\frac{10m_{\mu}\beta\cos\delta}{\rho}\right)\frac{l_{0}-\beta
f\cos\delta}{2\rho}$
$\displaystyle+\left(1-\frac{l_{0}}{4m_{e}}\right)\frac{m_{\mu}}{f}\frac{\beta^{2}}{\rho^{2}}\left(1-3\cos^{2}\delta\right)l^{2}+\frac{m^{2}_{\mu}+m^{2}_{e}}{m_{e}}\frac{f-\beta
l_{0}\cos\delta}{2\rho}-\frac{m_{\mu}}{m_{e}}\frac{\beta\cos\delta}{\rho}\frac{2l^{2}_{0}+f^{2}}{\rho}$
$\displaystyle+\frac{3l_{0}f}{2}\frac{m_{\mu}}{m_{e}}\frac{1+\beta^{2}\cos^{2}\delta}{\rho^{2}},\,$
(45) $\displaystyle b$
$\displaystyle=\frac{2m_{\mu}l^{2}_{0}}{\beta\rho^{2}}+\frac{m_{e}E_{\nu}}{\beta
m_{\mu}}\left(s-m^{2}_{\mu}\right)+\frac{m^{2}_{\mu}-m^{2}_{e}}{\beta}\frac{l_{0}-\beta
f\cos\delta}{\rho}-4m_{\mu}f\frac{l_{0}-\frac{1}{2}\beta
f\cos\delta}{\rho}\frac{\cos\delta}{\rho},$ (46) $\displaystyle c$
$\displaystyle=\frac{m_{\mu}}{2}\left(l_{0}+\frac{m_{\mu}}{\rho}\right)-\frac{s^{2}+m_{e}^{3}E_{\nu}}{2m_{\mu}m_{e}}-\left(1+\frac{2\beta\cos\delta}{\rho}\frac{m_{\mu}}{f}-\frac{m^{2}_{\mu}}{f^{2}}\frac{\beta^{2}}{\rho^{2}}\frac{1-3\cos^{2}\delta}{2}\right)\frac{l^{2}\left(l^{2}-4m_{e}l_{0}\right)}{4m_{\mu}m_{e}}$
$\displaystyle+\left(1+\frac{\beta\cos\delta}{\rho}\frac{m_{\mu}}{f}\right)\left(\frac{1}{\rho}-\frac{m^{2}_{\mu}+m^{2}_{e}}{4m_{\mu}m_{e}}\right)l^{2}-\frac{\beta\cos\delta}{2\rho}\left(\frac{m_{\mu}}{f}\frac{l^{2}}{\rho}+\frac{l_{0}\left(2s-m^{2}_{\mu}-m^{2}_{e}\right)}{f}\right),$
(47) $\displaystyle\tilde{d}$
$\displaystyle=\frac{\sqrt{g}}{\rho}f+\frac{\rho}{\sqrt{g}}f\left(s-m_{\mu}^{2}+\frac{m_{e}E_{\nu}}{2m_{\mu}^{2}}\left(s+m_{\mu}^{2}\right)-m_{\mu}\frac{l_{0}-\beta
f\cos\delta}{\rho}\right).$ (48)
## Appendix E Double-differential distribution in photon energy and photon
angle
To study cross sections with respect to the photon kinematics, we introduce an
ancillary four-vector $\overline{l}$:
$\overline{l}=k_{\nu}+p_{e}-k_{\gamma}=\left(\overline{l}_{0},\vec{\overline{f}}\right)$.
In the laboratory frame, this vector can be expressed as
$\displaystyle\overline{l}_{0}$ $\displaystyle=m_{e}+E_{\nu}-k_{\gamma},$ (49)
$\displaystyle\overline{f}^{2}$
$\displaystyle=|\vec{\overline{f}}|^{2}=E_{\nu}^{2}+k^{2}_{\gamma}-2E_{\nu}k_{\gamma}\cos\theta_{\gamma},$
(50)
with the photon scattering angle $\theta_{\gamma}$.
The double-differential cross section with respect to the photon angle and
photon energy is given by
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}k_{\gamma}\mathrm{d}\overline{f}}=\frac{\mathrm{G}_{\mathrm{F}}^{2}}{\pi
E_{\nu}}\frac{\alpha}{\pi}\left(\tilde{a}\left(\overline{l}^{2}-m_{\mu}^{2}\right)+\tilde{b}\ln\frac{m_{\mu}^{2}}{\overline{l}^{2}}\right)\frac{\overline{f}}{\left(\overline{l}^{2}-s\right)^{2}},$
(51)
with coefficients $\tilde{a}$ and $\tilde{b}$:
$\displaystyle\tilde{a}$
$\displaystyle=-\frac{m_{e}\left(2\overline{l}_{0}-m_{e}\right)}{\overline{l}^{2}}\left(s+2m^{2}_{\mu}+\left(s-m^{2}_{\mu}-m^{2}_{e}\right)\frac{k_{\gamma}}{E_{\nu}}+m^{2}_{\mu}\left(\frac{\left(\overline{l}_{0}-m_{e}\right)^{2}}{E_{\nu}k_{\gamma}}+\frac{m_{e}\left(s-m_{e}k_{\gamma}\right)}{4E_{\nu}k^{2}_{\gamma}}\right)\right)$
$\displaystyle-\frac{\overline{l}^{2}\left(\overline{l}^{2}-m_{\mu}^{2}\right)}{4E_{\nu}m_{e}k_{\gamma}}\left(\frac{\overline{l}^{2}-2\overline{l}_{0}m_{e}}{m_{e}}+\frac{\overline{l}^{2}-4\overline{l}_{0}m_{e}}{k_{\gamma}}\right)+\frac{m_{e}s^{2}}{4E_{\nu}k_{\gamma}^{2}}-3\left(s-m^{2}_{\mu}\right)+s\left(\frac{E_{\nu}}{k_{\gamma}}+\frac{m_{e}}{2E_{\nu}}\left(1-\frac{3}{2}\frac{m_{e}}{k_{\gamma}}\right)\right)$
$\displaystyle+\frac{\overline{l}^{2}-m_{\mu}^{2}}{E_{\nu}}\left(k_{\gamma}-3E_{\nu}-\frac{3}{4}\frac{\left(\overline{l}_{0}-m_{e}\right)^{2}}{k_{\gamma}}+m_{e}\frac{m^{2}_{e}-\overline{l}_{0}\left(2k_{\gamma}+5\overline{l}_{0}\right)}{4k^{2}_{\gamma}}+\frac{\overline{f}^{2}\left(m_{e}-k_{\gamma}\right)}{4k^{2}_{\gamma}}\right)-\frac{4m_{e}^{2}k^{2}_{\gamma}}{\overline{l}^{2}},$
(52) $\displaystyle\tilde{b}$
$\displaystyle=-\left(\overline{l}^{2}-s\right)^{2}-2\left(s-m^{2}_{\mu}\right)\left(\overline{l}^{2}+m_{\mu}^{2}\right).$
(53)
## Appendix F Photon energy spectrum
Integrating Eq. (51) over the photon scattering angle, we obtain the photon
energy spectrum for photon energies
$k_{\gamma}\geq\frac{m_{e}}{2}-\frac{m^{2}_{\mu}}{2\left(m_{e}+2E_{\nu}\right)}$,
when the photon scatters within a cone around the forward direction:
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}k_{\gamma}}$
$\displaystyle=\frac{\mathrm{G}_{\mathrm{F}}^{2}}{\pi
E_{\nu}}\frac{\alpha}{\pi}\left(\overline{a}+\overline{b}\ln\frac{2m_{e}k_{\gamma}}{s-m^{2}_{\mu}}+\overline{c}\ln\frac{s-2m_{e}k_{\gamma}}{m^{2}_{\mu}}-\overline{d}\ln\frac{2k_{\gamma}}{2E_{\nu}+m_{e}}\ln\frac{s-2m_{e}k_{\gamma}}{m^{2}_{\mu}}\right.$
$\displaystyle\left.+\overline{d}\sum\limits_{\sigma_{1},\sigma_{2}=\pm}{\cal{\Re}}\left(\mathrm{Li}_{2}\frac{\overline{l}_{0}+\sigma_{1}\sqrt{\overline{l}_{0}^{2}-m^{2}_{\mu}}}{\overline{l}_{0}+\sigma_{2}\sqrt{\left(\overline{l}_{0}-m_{\mu}\right)^{2}-2m_{e}k_{\gamma}}}-\mathrm{Li}_{2}\frac{\overline{l}_{0}+\sigma_{1}\left(\overline{l}_{0}-m_{e}\right)}{\overline{l}_{0}+\sigma_{2}\sqrt{\left(\overline{l}_{0}-m_{\mu}\right)^{2}-2m_{e}k_{\gamma}}}\right)\right),$
(54)
with coefficients $\overline{a},\leavevmode\nobreak\
\overline{b},\leavevmode\nobreak\ \overline{c},$ and $\overline{d}$ in Eq.
(54):
$\displaystyle\overline{a}$
$\displaystyle=\frac{s-m^{2}_{\mu}-2m_{e}k_{\gamma}}{12m_{e}E_{\nu}}\left(m_{e}k_{\gamma}-\frac{20s-5m^{2}_{\mu}-14m^{2}_{e}}{2}-\frac{53s^{2}-38m^{2}_{e}s-m^{2}_{\mu}\left(49s-26m^{2}_{e}\right)+2m^{4}_{\mu}-6m^{4}_{e}}{4m_{e}k_{\gamma}}\right)$
$\displaystyle-\frac{s-m^{2}_{\mu}-2m_{e}k_{\gamma}}{12m_{e}E_{\nu}}\frac{s\left(2s-3m^{2}_{e}\right)-m^{2}_{\mu}\left(4s-9m^{2}_{e}\right)+2m^{4}_{\mu}}{4k^{2}_{\gamma}},$
(55) $\displaystyle\overline{b}$
$\displaystyle=-\left(s-m_{\mu}^{2}\right)^{2}\left(\frac{m_{e}+E_{\nu}}{k_{\gamma}s}+\frac{2}{s-m^{2}_{\mu}}+\frac{1}{s-m^{2}_{e}}\right),$
(56) $\displaystyle\overline{c}$
$\displaystyle=\frac{m^{2}_{\mu}}{2}+\frac{s-2m_{e}k_{\gamma}}{2}\left(1+\frac{s}{m_{e}k_{\gamma}}-\frac{m^{4}_{\mu}}{m^{2}_{e}k^{2}_{\gamma}}\frac{k_{\gamma}s-m^{3}_{e}}{4E_{\nu}s}\right),$
(57) $\displaystyle\overline{d}$ $\displaystyle=-\left(s-m_{\mu}^{2}\right).$
(58)
For smaller energies
$k_{\gamma}\leq\frac{m_{e}}{2}-\frac{m^{2}_{\mu}}{2\left(m_{e}+2E_{\nu}\right)}<\frac{m_{e}}{2}$,
for which there are no restrictions on the photon scattering angle, the photon
energy spectrum is given by
$\displaystyle\frac{\mathrm{d}\sigma}{\mathrm{d}k_{\gamma}}$
$\displaystyle=\frac{\mathrm{G}_{\mathrm{F}}^{2}}{\pi
E_{\nu}}\frac{\alpha}{\pi}\left(\overline{e}-\overline{b}\ln\frac{s}{m^{2}_{e}}+\overline{f}\ln\frac{s-2m_{e}k_{\gamma}}{m^{2}_{\mu}}-\overline{d}\left(\ln\frac{2k_{\gamma}}{2E_{\nu}+m_{e}}\ln\frac{s-2m_{e}k_{\gamma}}{m^{2}_{\mu}}-\ln\frac{2k_{\gamma}}{m_{e}}\ln\frac{\left(1-\frac{2k_{\gamma}}{m_{e}}\right)s}{m^{2}_{\mu}}\right)\right.$
$\displaystyle\left.+\overline{g}\ln\frac{1-\frac{2m_{e}k_{\gamma}}{s}}{1-\frac{2k_{\gamma}}{m_{e}}}+\overline{d}\sum\limits_{\sigma_{1},\sigma_{2}=\pm}{\cal{\Re}}\left(\mathrm{Li}_{2}\frac{\overline{l}_{0}+\sigma_{1}\left(\overline{l}_{0}-m_{e}+2k_{\gamma}\right)}{\overline{l}_{0}+\sigma_{2}\sqrt{\left(\overline{l}_{0}-m_{\mu}\right)^{2}-2m_{e}k_{\gamma}}}-\mathrm{Li}_{2}\frac{\overline{l}_{0}+\sigma_{1}\left(\overline{l}_{0}-m_{e}\right)}{\overline{l}_{0}+\sigma_{2}\sqrt{\left(\overline{l}_{0}-m_{\mu}\right)^{2}-2m_{e}k_{\gamma}}}\right)\right),$
(59)
with coefficients $\overline{e},\leavevmode\nobreak\ \overline{f}$, and
$\overline{g}$ in Eq. (59):
$\displaystyle\overline{e}$
$\displaystyle=-\frac{m^{4}_{\mu}}{2m_{e}^{2}}-2\left(1+\frac{E_{\nu}}{m_{e}}\right)\left(s-m^{2}_{\mu}-\frac{m^{2}_{e}}{2}\right)-2k^{2}_{\gamma}\frac{E_{\nu}}{m_{e}}\left(1+\frac{4}{3}\frac{E_{\nu}}{m_{e}}\right)-E_{\nu}k_{\gamma}\left(3+2\frac{m^{2}_{\mu}}{m^{2}_{e}}-\frac{10}{3}\frac{E_{\nu}}{m_{e}}\right)$
$\displaystyle-\frac{m^{4}_{\mu}m_{e}+8E_{\nu}\left(s-m^{2}_{\mu}\right)^{2}}{2k_{\gamma}s},$
(60) $\displaystyle\overline{f}$
$\displaystyle=\frac{E_{\nu}}{k_{\gamma}}\left(s+2k^{2}_{\gamma}-\frac{m^{4}_{\mu}}{s}\right),$
(61) $\displaystyle\overline{g}$
$\displaystyle=\frac{\left(2k_{\gamma}-m_{e}\right)\left(\left(m_{e}+k_{\gamma}\right)\left(m^{4}_{\mu}-4E_{\nu}k_{\gamma}s\right)-2m_{e}m^{4}_{\mu}\right)+2m^{2}_{\mu}E_{\nu}k_{\gamma}\left(m^{2}_{\mu}+2m_{e}k_{\gamma}\right)}{8m_{e}E_{\nu}k^{2}_{\gamma}}.$
(62)
Note that the expressions in Appendix G of Ref. [10] are valid only for
photon energies $k_{\gamma}\geq m_{e}E_{\nu}/\left(m_{e}+2E_{\nu}\right)$.
Below this energy, they should be modified as follows (in the notation of Ref. [10]):
$\displaystyle\tilde{\mathrm{I}}_{i}$
$\displaystyle\to\frac{\pi^{2}}{\omega^{3}}\mathrm{d}k_{\gamma}\Bigg{[}e_{i}-b_{i}\ln\frac{s}{m^{2}}+f_{i}\ln\frac{s-2mk_{\gamma}}{m^{2}}-d_{i}\left(\ln\frac{2k_{\gamma}}{2\omega+m}\ln\frac{2\bar{l}_{0}-m}{m}-\ln\frac{2k_{\gamma}}{m}\ln\frac{\left(1-\frac{2k_{\gamma}}{m}\right)s}{m^{2}}\right)\Bigg{.}$
$\displaystyle+\Bigg{.}g_{i}\ln\frac{s-2mk_{\gamma}}{\left(1-\frac{2k_{\gamma}}{m}\right)s}+d_{i}\sum\limits_{\sigma_{1},\leavevmode\nobreak\
\sigma_{2}=\pm}\Re\Bigg{(}\mathrm{Li}_{2}\frac{\bar{l}_{0}+\sigma_{1}\left(\overline{l}_{0}-m+2k_{\gamma}\right)}{\bar{l}_{0}+\sigma_{2}\sqrt{\left(\bar{l}_{0}-m\right)^{2}-2mk_{\gamma}}}-\mathrm{Li}_{2}\frac{\bar{l}_{0}+\sigma_{1}\left(\bar{l}_{0}-m\right)}{\bar{l}_{0}+\sigma_{2}\sqrt{\left(\bar{l}_{0}-m\right)^{2}-2mk_{\gamma}}}\Bigg{)}\Bigg{]},$
(63)
with coefficients $e_{i},\leavevmode\nobreak\ f_{i}$, and $g_{i}$ in Eq. (63):
$\displaystyle e_{\mathrm{L}}$
$\displaystyle=\omega\left(\frac{5}{6}\frac{k_{\gamma}\omega\left(2\omega-3m\right)}{m^{2}}-\frac{k^{2}_{\gamma}\omega\left(4\omega+3m\right)}{3m^{3}}-\frac{8\omega^{2}+6m\omega-m^{2}}{4m}-\frac{m\left(32\omega^{3}+m^{3}\right)}{4k_{\gamma}s}\right),$
(64) $\displaystyle e_{\mathrm{R}}$
$\displaystyle=-\frac{19}{144}\frac{m^{9}}{k_{\gamma}s^{3}}-\frac{m^{7}\left(m-40k_{\gamma}\right)}{96k^{2}_{\gamma}s^{2}}+\frac{m^{3}\left(m-2k_{\gamma}\right)}{48\left(2\omega-2k_{\gamma}-m\right)^{2}}+\frac{m^{2}\left(24k^{4}_{\gamma}-16k^{3}_{\gamma}m+2k^{2}_{\gamma}m^{2}-4k_{\gamma}m^{3}+m^{4}\right)}{192k^{3}_{\gamma}\left(2\omega-2k_{\gamma}-m\right)}$
$\displaystyle+\frac{m^{3}\left(48k^{4}_{\gamma}-168k^{3}_{\gamma}m-210k^{2}_{\gamma}m^{2}+4k_{\gamma}m^{3}-m^{4}\right)}{192k^{3}_{\gamma}s}+\frac{m\left(36k^{3}_{\gamma}-153k^{2}_{\gamma}m-49k_{\gamma}m^{2}+59m^{3}\right)}{72k_{\gamma}\left(m-k_{\gamma}\right)}$
$\displaystyle-\frac{k^{2}_{\gamma}\omega\left(4\omega^{2}-3m\omega+6m^{2}\right)}{9m^{3}}+\frac{k_{\gamma}\omega\left(10\omega^{2}-81m\omega-132m^{2}\right)}{18m^{2}}-\frac{\omega\left(4\omega^{2}+52m\omega+67m^{2}\right)}{6m}$
$\displaystyle-\frac{\omega\left(68\omega^{2}+48m\omega+51m^{2}\right)}{46k_{\gamma}},$
(65) $\displaystyle e^{\mathrm{L}}_{\mathrm{R}}$
$\displaystyle=\omega\left(\frac{3}{2}\omega-2m+k_{\gamma}\left(3\frac{\omega}{m}-2\right)+\frac{m^{3}\left(24\omega^{3}-16m\omega^{2}-14m^{2}\omega-3m^{3}\right)}{4k_{\gamma}s^{2}}\right),$
(66) $\displaystyle b^{\mathrm{L}}_{\mathrm{R}}$ $\displaystyle\to
2b^{\mathrm{L}}_{\mathrm{R}}+\frac{2k_{\gamma}\omega^{2}}{m},$ (67)
$\displaystyle f_{\mathrm{L}}$
$\displaystyle=\frac{k_{\gamma}\omega^{2}}{m}+\frac{2\omega^{3}m\left(m+\omega\right)}{k_{\gamma}s},$
(68) $\displaystyle f_{\mathrm{R}}$
$\displaystyle=\omega\left(\frac{k_{\gamma}}{s}\left(2\omega^{2}+6m\omega+3m^{2}\right)+\frac{2m^{2}\left(3\omega^{2}+16m\omega\left(m+\omega\right)+4m^{3}\right)}{s^{2}}\right)$
$\displaystyle+\frac{2}{3}\frac{m^{3}\omega\left(m+\omega\right)\left(4\omega^{4}+14m\omega^{3}+25m^{2}\omega^{2}+15m^{3}\omega+3m^{4}\right)}{k_{\gamma}s^{3}},$
(69) $\displaystyle f^{\mathrm{L}}_{\mathrm{R}}$
$\displaystyle\to-b^{\mathrm{L}}_{\mathrm{R}}-\frac{2k_{\gamma}\omega^{2}}{m},$
(70) $\displaystyle g_{\mathrm{L}}$
$\displaystyle=\frac{\left(m-2k_{\gamma}\right)\left(8\omega^{2}k_{\gamma}\left(m+k_{\gamma}\right)+m^{3}\left(m-k_{\gamma}\right)\right)+2mk_{\gamma}\omega\left(3m^{2}-4k^{2}_{\gamma}\right)}{16mk^{2}_{\gamma}},$
(71) $\displaystyle g_{\mathrm{R}}$
$\displaystyle=\frac{m^{4}}{48k_{\gamma}^{2}}-\frac{k_{\gamma}^{2}}{3}-\frac{2\omega^{2}+14m\omega+11m^{2}}{4}-k_{\gamma}\left(\frac{\omega^{2}}{m}+\frac{7}{2}\omega+\frac{15}{4}m\right)+\frac{m\left(24\omega^{2}+132m\omega+109m^{2}\right)}{48k_{\gamma}},$
(72) $\displaystyle g^{\mathrm{L}}_{\mathrm{R}}$
$\displaystyle=\frac{1}{2}m\left(m-2\omega\right)-\frac{3}{16}\frac{m^{4}}{k^{2}_{\gamma}}+k_{\gamma}\left(m-\frac{2\omega^{2}}{m}\right)+\frac{m\left(4\omega^{2}+8m\omega-m^{2}\right)}{8k_{\gamma}},$
(73)
with the soft-photon limit for the small electron mass $m\ll\omega$:
$\displaystyle\tilde{\mathrm{I}}_{\mathrm{L}}$
$\displaystyle=-\frac{\pi^{2}}{k_{\gamma}}\left(4+2\ln\frac{m}{2\omega}\right),$
(74) $\displaystyle\tilde{\mathrm{I}}_{\mathrm{R}}$
$\displaystyle=-\frac{\pi^{2}}{k_{\gamma}}\left(\frac{17}{9}+\frac{2}{3}\ln\frac{m}{2\omega}\right),$
(75) $\displaystyle\tilde{\mathrm{I}}^{\mathrm{L}}_{\mathrm{R}}$
$\displaystyle=\frac{m}{\omega}\frac{\pi^{2}}{k_{\gamma}}\left(\frac{3}{2}+\ln\frac{m}{2\omega}\right).$
(76)
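As a cross-check of the boundary photon energy used above: setting the final-state lepton mass equal to $m_{e}$ in $k_{\gamma}=m_{e}/2-m_{\mu}^{2}/(2(m_{e}+2E_{\nu}))$ reproduces the elastic-channel value $m_{e}E_{\nu}/(m_{e}+2E_{\nu})$ quoted from Ref. [10], since $m/2-m^{2}/(2(m+2E))=mE/(m+2E)$. A minimal numerical sketch of this identity (mass values in GeV are assumed):

```python
def k_gamma_boundary(m_out, m_e, E_nu):
    """Photon energy separating the two angular regimes of the photon spectrum."""
    return m_e / 2.0 - m_out ** 2 / (2.0 * (m_e + 2.0 * E_nu))

M_E, M_MU, E_NU = 0.000511, 0.105658, 20.0  # GeV (assumed values, above IMD threshold)

# Equal-mass limit reproduces the elastic-channel threshold of Ref. [10]:
elastic = M_E * E_NU / (M_E + 2.0 * E_NU)
assert abs(k_gamma_boundary(M_E, M_E, E_NU) - elastic) < 1e-12

# For the muon channel, the boundary lies below m_e / 2, as stated in the text:
assert 0.0 < k_gamma_boundary(M_MU, M_E, E_NU) < M_E / 2.0
```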
## References
* Park _et al._ [2016] J. Park _et al._ (MINERvA), Phys. Rev. D 93, 112007 (2016), arXiv:1512.07699 [physics.ins-det].
* Aliaga _et al._ [2016] L. Aliaga _et al._ (MINERvA), Phys. Rev. D 94, 092005 (2016), [Addendum: Phys.Rev.D 95, 039903 (2017)], arXiv:1607.00704 [hep-ex].
* Valencia _et al._ [2019] E. Valencia _et al._ (MINERvA), Phys. Rev. D 100, 092001 (2019), arXiv:1906.00111 [hep-ex].
* Zazueta _et al._ [2022] L. Zazueta _et al._ (MINERvA), (2022), arXiv:2209.05540 [hep-ex].
* Ruterbories _et al._ [2021] D. Ruterbories _et al._ (MINERvA), Phys. Rev. D 104, 092010 (2021), arXiv:2107.01059 [hep-ex].
* Marshall _et al._ [2020] C. M. Marshall, K. S. McFarland, and C. Wilkinson, Phys. Rev. D 101, 032002 (2020), arXiv:1910.10996 [hep-ex].
* Abi _et al._ [2020] B. Abi _et al._ (DUNE), (2020), arXiv:2002.03005 [hep-ex].
* Tomalak _et al._ [2022a] O. Tomalak, Q. Chen, R. J. Hill, and K. S. McFarland, Nature Commun. 13, 5286 (2022a).
* Tomalak _et al._ [2022b] O. Tomalak, Q. Chen, R. J. Hill, K. S. McFarland, and C. Wret, Phys. Rev. D 106, 093006 (2022b), arXiv:2204.11379 [hep-ph].
* Tomalak and Hill [2020] O. Tomalak and R. J. Hill, Phys. Rev. D 101, 033006 (2020), arXiv:1907.03379 [hep-ph].
* Ram [1967] M. Ram, Phys. Rev. 155, 1539 (1967).
* Weinberg [1967] S. Weinberg, Phys. Rev. Lett. 19, 1264 (1967).
* ’t Hooft [1971] G. ’t Hooft, Phys. Lett. B 37, 195 (1971).
* Sarantakos _et al._ [1983] S. Sarantakos, A. Sirlin, and W. J. Marciano, Nucl. Phys. B 217, 84 (1983).
* Bahcall _et al._ [1995] J. N. Bahcall, M. Kamionkowski, and A. Sirlin, Phys. Rev. D 51, 6146 (1995), arXiv:astro-ph/9502003.
* Bardin and Dokuchaeva [1984a] D. Y. Bardin and V. A. Dokuchaeva, Nucl. Phys. B 246, 221 (1984a).
* Bardin and Dokuchaeva [1986] D. Y. Bardin and V. A. Dokuchaeva, Sov. J. Nucl. Phys. 43, 975 (1986).
* Passera [2001] M. Passera, Phys. Rev. D 64, 113002 (2001), arXiv:hep-ph/0011190.
* Green [1981] M. Green, J. Phys. G 7, 1169 (1981).
* Bardin and Dokuchaeva [1984b] D. Y. Bardin and V. A. Dokuchaeva, Sov. J. Nucl. Phys. 39, 563 (1984b).
* Zhizhin _et al._ [1975] E. D. Zhizhin, R. V. Konoplich, and Y. P. Nikitin, Izv. Vuz. Fiz. 1975, 82 (1975).
* Byers _et al._ [1979] N. Byers, R. Ruckl, and A. Yano, Physica A 96, 163 (1979).
* Salomonson and Ueda [1975] P. Salomonson and Y. Ueda, Phys. Rev. D 11, 2606 (1975).
* Green and Veltman [1980] M. Green and M. J. G. Veltman, Nucl. Phys. B 169, 137 (1980), [Erratum: Nucl.Phys.B 175, 547 (1980)].
* Marciano and Sirlin [1980] W. J. Marciano and A. Sirlin, Phys. Rev. D 22, 2695 (1980), [Erratum: Phys.Rev.D 31, 213 (1985)].
* Aoki _et al._ [1981] K.-i. Aoki, Z. Hioki, R. Kawabe, M. Konuma, and T. Muta, Prog. Theor. Phys. 65, 1001 (1981).
* Aoki and Hioki [1981] K.-i. Aoki and Z. Hioki, Prog. Theor. Phys. 66, 2234 (1981).
* Hioki [1982] Z. Hioki, Prog. Theor. Phys. 67, 1165 (1982).
* Marciano and Parsa [2003] W. J. Marciano and Z. Parsa, J. Phys. G 29, 2629 (2003), arXiv:hep-ph/0403168.
* Sirlin and Ferroglia [2013] A. Sirlin and A. Ferroglia, Rev. Mod. Phys. 85, 263 (2013), arXiv:1210.5296 [hep-ph].
* Bardin and Dokuchaeva [1987] D. Y. Bardin and V. A. Dokuchaeva, Nucl. Phys. B 287, 839 (1987).
* Geiregat _et al._ [1990] D. Geiregat _et al._ (CHARM-II), Phys. Lett. B 247, 131 (1990).
* Vilain _et al._ [1995] P. Vilain _et al._ (CHARM-II), Phys. Lett. B 364, 121 (1995).
* Fermi [1934] E. Fermi, Z. Phys. 88, 161 (1934).
* Feynman and Gell-Mann [1958] R. P. Feynman and M. Gell-Mann, Phys. Rev. 109, 193 (1958).
* Arason _et al._ [1992] H. Arason, D. J. Castano, B. Keszthelyi, S. Mikaelian, E. J. Piard, P. Ramond, and B. D. Wright, Phys. Rev. D 46, 3945 (1992).
* Antonelli and Maiani [1981] F. Antonelli and L. Maiani, Nucl. Phys. B 186, 269 (1981).
* Hill and Tomalak [2020] R. J. Hill and O. Tomalak, Phys. Lett. B 805, 135466 (2020), arXiv:1911.01493 [hep-ph].
* Tomalak [2022] O. Tomalak, Phys. Lett. B 829, 137108 (2022), arXiv:2112.12395 [hep-ph].
* dun [2019] “Dune fluxes,” http://home.fnal.gov/~ljf26/DUNEFluxes/ (2019).
* Devan _et al._ [2016] J. Devan _et al._ (MINERvA), Phys. Rev. D 94, 112007 (2016), arXiv:1610.04746 [hep-ex].
* Mertig _et al._ [1991] R. Mertig, M. Bohm, and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
* Shtabovenko _et al._ [2016] V. Shtabovenko, R. Mertig, and F. Orellana, Comput. Phys. Commun. 207, 432 (2016), arXiv:1601.01167 [hep-ph].
* Hahn and Perez-Victoria [1999] T. Hahn and M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999), arXiv:hep-ph/9807565.
* Inc. [2022] Wolfram Research, Inc., “Mathematica, Version 12.2.0.0,” Champaign, IL (2022).
* Vanderhaeghen _et al._ [2000] M. Vanderhaeghen, J. M. Friedrich, D. Lhuillier, D. Marchand, L. Van Hoorebeke, and J. Van de Wiele, Phys. Rev. C 62, 025501 (2000), arXiv:hep-ph/0001100.
* Heller _et al._ [2018] M. Heller, O. Tomalak, and M. Vanderhaeghen, Phys. Rev. D 97, 076012 (2018), arXiv:1802.07174 [hep-ph].
* Behrends _et al._ [1956] R. E. Behrends, R. J. Finkelstein, and A. Sirlin, Phys. Rev. 101, 866 (1956).
* Arbuzov [2002] A. B. Arbuzov, Phys. Lett. B 524, 99 (2002), [Erratum: Phys.Lett.B 535, 378–378 (2002)], arXiv:hep-ph/0110047.
* Arbuzov _et al._ [2005] A. B. Arbuzov, D. Y. Bardin, and L. V. Kalinovskaya, JHEP 06, 078 (2005), arXiv:hep-ph/0407203.
* Chen [2018] L.-B. Chen, JHEP 02, 066 (2018), arXiv:1801.01033 [hep-ph].
* Engel _et al._ [2019] T. Engel, C. Gnendiger, A. Signer, and Y. Ulrich, JHEP 02, 118 (2019), arXiv:1811.06461 [hep-ph].
* Anastasiou _et al._ [2007] C. Anastasiou, K. Melnikov, and F. Petriello, JHEP 09, 014 (2007), arXiv:hep-ph/0505069.
* Engel _et al._ [2020] T. Engel, A. Signer, and Y. Ulrich, JHEP 01, 085 (2020), arXiv:1909.10244 [hep-ph].
* Lee and Sirlin [1964] T. D. Lee and A. Sirlin, Rev. Mod. Phys. 36, 666 (1964).
* Bloch and Nordsieck [1937] F. Bloch and A. Nordsieck, Phys. Rev. 52, 54 (1937).
* Nakanishi [1958] N. Nakanishi, Prog. Theor. Phys. 19, 159 (1958).
* Kinoshita [1962] T. Kinoshita, J. Math. Phys. 3, 650 (1962).
* Lee and Nauenberg [1964] T. D. Lee and M. Nauenberg, Phys. Rev. 133, B1549 (1964).
# Neural Vocoder Feature Estimation for Dry Singing Voice Separation
Jaekwon Im, Soonbeom Choi, Sangeon Yong, and Juhan Nam
Graduate School of Culture Technology, KAIST, Daejeon, South Korea
E-mail<EMAIL_ADDRESS>
###### Abstract
Singing voice separation (SVS) is a task that separates singing voice audio
from its mixture with instrumental audio. Previous SVS studies have mainly
employed the spectrogram masking method, which requires predicting
high-dimensional binary masks. In addition, they focused on extracting a
vocal stem that retains the wet sound with reverberation effects, which may
hinder the reusability of the isolated singing voice. This paper addresses
these issues by predicting the mel-spectrogram of the dry singing voice from
the mixed audio as neural vocoder features and synthesizing the singing voice
waveform with the neural vocoder. We experimented with two separation
methods: one predicts binary masks in the mel-spectrogram domain, and the
other directly predicts the mel-spectrogram. Furthermore, we add a
singing voice detector to identify the singing voice segments over time more
explicitly. We measured model performance in terms of audio,
dereverberation, separation, and overall quality. The results show that our
proposed model outperforms state-of-the-art singing voice separation models in
both objective and subjective evaluation, except for audio quality.
## 1 Introduction
Singing voice separation (SVS) is the task of isolating singing voice audio from its musical mixture with various instrumental sounds. SVS is an important topic because the separated singing voice can be used not only in music production, such as music remixing, but also in other tasks including singing voice synthesis, singer recognition, lyrics recognition, melody extraction, and note transcription. Early approaches extract a subspace corresponding to the singing voice from the decomposed mixed audio, for example, using non-negative matrix factorization (NMF) [1] or principal component analysis (PCA) [2]. Nowadays, deep learning is the dominant approach, as it has significantly improved the separation performance [3, 4, 5, 6, 7].
A common processing pipeline in the deep learning approach is to predict spectrogram masks of the singing voice from the mixed audio spectrogram and multiply the predicted masks with the mixed audio spectrogram element-wise to obtain the magnitude part of the singing voice. The phase part of the singing voice is obtained from that of the mixed audio spectrogram or predicted using a phase reconstruction algorithm such as Griffin-Lim to convert the estimated singing voice spectrogram into a waveform. However, the phase of the mixed audio spectrogram differs from that of the singing voice spectrogram, and phase reconstruction algorithms also have limitations in predicting the precise phase. In addition, the large dimensionality of the spectrogram requires a neural network to predict the masks over numerous feature bins. Another common practice in the deep learning approach is that the output of the separation is a vocal stem processed with audio effects such as reverberation. This is mainly attributable to the available multi-track datasets such as MusDB [8], which manage individual sound sources in stem units for convenience. This wet singing voice with audio effects may hinder the reuse of the separated singing voice.
Recently, the phase issue has been addressed by predicting the magnitude and phase at the same time within a neural network architecture. For instance, the Complex as Channel framework (CaC) [9] evaluated SVS by creating features with magnitude and phase information, taking the real and imaginary parts of the spectrogram as real-valued features. PhaseNet handled phase prediction as a classification task by discretizing the phase [10]. ResUNetDecouple predicts complex ideal ratio masks (cIRM) as a way of jointly estimating magnitude and phase [11]. Nevertheless, it is still challenging to predict phase and magnitude simultaneously, and these studies also have the limitation that dereverberation was not considered.
An alternative to time-frequency domain masks of the singing voice is vocoder features, which can be directly converted to waveforms using a pre-built vocoder model. SSSynth estimated the WORLD vocoder parameters (F0, harmonic spectral envelope, and aperiodicity envelope of the singing voice) [12] from mixed audio using a neural network and converted them into singing voice audio using the vocoder [13]. This method has the advantages that the predicted features have much lower dimensionality and the separated singing voice has little interference from other instruments. However, in a DSP-based vocoder, F0 has a more considerable effect on the audio quality of the synthesized voice than the other vocoder parameters. SSSynth handles all vocoder parameters uniformly without considering the relative importance of F0, which yields low synthesis quality. Content-based singing voice extraction is another method that predicts the WORLD vocoder parameters of the singing voice from mixed audio [14]. It predicts unprocessed dry vocals from mixtures containing not only accompaniment but also reverberation. However, it has the limitations that the learning process is complex and the predicted quality of the singing voice cannot exceed the synthesis quality of the WORLD vocoder.
In this paper, we propose an SVS model that uses a neural vocoder to address both singing voice separation and dereverberation. The SVS model is composed of two modules: a separation module that estimates the mel-spectrogram of the dry singing voice from the mixed audio as neural vocoder features, and a neural vocoder that synthesizes dry singing voice waveforms from the mel-spectrogram. This model has several advantages over previous work. First, the mel-spectrogram has smaller dimensions than the regular spectrogram, so the neural network model for source separation can be more streamlined. Second, since the neural vocoder directly generates waveforms from the mel-spectrogram, phase estimation is not necessary. Lastly, since the separation module and the neural vocoder are trained separately, performance can be enhanced by augmenting the training data or modifying the structure of one model while leaving the other unchanged.
The use of a neural vocoder in voice separation was recently attempted for speech restoration [15]. In contrast, we explore the approach in singing voice separation with music audio and also focus on the dry voice source. Specifically, we investigate the separation module with two different schemes: one predicts binary masks in the mel-spectrogram domain, and the other directly predicts the mel-spectrogram. Furthermore, we add a singing voice detector to identify the singing voice segments over time more explicitly. Compared to the wet singing voice with reverberation, the dry singing voice has a significant energy difference between the parts with and without the voice. The singing voice detector facilitates estimating a more refined mel-spectrogram by determining the presence of the singing voice based on energy.
We compared our method to SSSynth [13] and ResUNetDecouple [11], evaluating each method with objective metrics. However, it is difficult for the audio synthesized by the neural vocoder to precisely match the target audio at the sample level, which makes it difficult to rely on existing objective evaluation metrics [15]. Therefore, we evaluated the separation quality, dereverberation quality, and audio quality with more emphasis on subjective evaluation. Sound examples of the compared methods are available online: https://jakeoneijk.github.io/mel-svs-demopage/.
## 2 Methodology
Figure 1: The overall architecture of our proposed model.
We assume that the mixed audio is the linear sum of the dry singing voice, the wet singing voice produced by reverberation, and the accompaniment as follows:
$y=x_{d}+x_{r}+x_{a},$ (1)
where $y\in\mathbb{R}^{c\times t}$ is the mixed audio, $x_{d}\in\mathbb{R}^{c\times t}$ is the dry singing voice, $x_{r}\in\mathbb{R}^{c\times t}$ is the wet singing voice, and $x_{a}\in\mathbb{R}^{c\times t}$ is the accompaniment. We define the wet singing voice $x_{r}$ as the convolution of the dry singing voice with a spatial room impulse response (SRIR):
$x_{r}=\alpha(h\ast x_{d}),$ (2)
where $h\in\mathbb{R}^{c\times t}$ is an SRIR, and $\ast$ and $\alpha$ stand for the convolution operation and a reverberation coefficient constant, respectively.
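As a sketch (not the authors' code), the mixture model of Eqs. (1)-(2) can be simulated with NumPy; the signal length, the toy SRIR `h`, and the value of `alpha` are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
t = 24000                                  # 1 second at 24 kHz (illustrative)
x_d = rng.standard_normal(t)               # dry singing voice
x_a = rng.standard_normal(t)               # accompaniment
h = rng.standard_normal(512) * np.exp(-np.arange(512) / 64.0)  # toy decaying SRIR
alpha = 0.5                                # reverberation coefficient in [0, 1]

# Eq. (2): wet voice as convolution of the dry voice with the SRIR
x_r = alpha * np.convolve(x_d, h)[:t]

# Eq. (1): the mixture is the linear sum of dry voice, wet voice, and accompaniment
y = x_d + x_r + x_a
```

The `[:t]` slice truncates the full convolution back to the original signal length.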
Fig. 1 shows the overall architecture. Through the short-time Fourier transform (STFT), the mixed audio waveform is turned into a spectrogram, which is then transformed into a mel-spectrogram by the mel filterbank. The separation module estimates the mel-spectrogram of the dry singing voice from the mel-spectrogram of the mixed audio. From the mel-spectrogram of the singing voice, the singing voice detector predicts the singing voice detection mask, a feature that determines whether a singing voice exists at each time frame. The time-wise masked mel-spectrogram of the singing voice, obtained by element-wise multiplication of the detection mask and the singing voice mel-spectrogram, is converted to the final waveform audio by the neural vocoder.
### 2.1 Separation Module
Figure 2: Deep ResUNet used in our architecture. The numbers in the block
represent (in channels, out channels), (down/upsample ratio).
We employ the UNet [16] architecture for the separation module. UNet has proven to be effective for singing voice separation [3]. It is common to use the spectrogram of the mixed audio as the input to the encoder and predict the time-frequency masks of the singing voice with the decoder. We designed the architecture configuration based on ResUNetDecouple [11]. Since we use the mel-spectrograms of the mixed audio and the dry singing voice in the UNet architecture, we modified the modules that interface with the input and output. Fig. 2 illustrates the structure of the separation module. Deep Residual UNet is a deep structure in which each encoder block and decoder block consists of 4 residual convolutional blocks, and each residual convolutional block contains 2 or 3 convolutional layers. All convolutional layers except those for skip connections are preceded by batch normalization and leaky ReLU activation. The final output of the decoder becomes a feature with values between 0 and 1 through the sigmoid activation function. We employed two methods: predicting a mask of the mel-spectrogram and directly predicting the mel-spectrogram. In the case of predicting the mask, the mel-spectrogram of the singing voice is calculated by element-wise multiplication of the predicted mask with the mixed audio mel-spectrogram. The separation module is optimized with the mean absolute error (MAE) loss between the predicted dry singing voice and the target dry singing voice.
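A minimal NumPy sketch of the mask-prediction variant (shapes and values are illustrative; the actual model is the Deep ResUNet above): the decoder output passes through a sigmoid to lie in (0, 1), is multiplied element-wise with the mixture mel-spectrogram, and is trained with the MAE loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

T, F = 282, 80                                      # time frames x mel bins (illustrative)
rng = np.random.default_rng(1)
mix_mel = np.abs(rng.standard_normal((T, F)))       # mixture mel-spectrogram
target_mel = np.abs(rng.standard_normal((T, F)))    # dry-voice mel (ground truth)
decoder_out = rng.standard_normal((T, F))           # raw decoder output (pre-activation)

mask = sigmoid(decoder_out)                         # mask values in (0, 1)
pred_mel = mask * mix_mel                           # element-wise masking

mae_loss = np.mean(np.abs(pred_mel - target_mel))   # MAE between prediction and target
```

In the direct-prediction variant, `pred_mel` would instead be the sigmoid output itself compared against a min-max-normalized target.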
### 2.2 Singing Voice Detector
Figure 3: Singing voice detector used in our architecture. The numbers in the
block represent (in channels, out channels).
Dry singing voice has persistent energy only while the voice is being produced, whereas wet singing voice has decaying energy even after the voice production has paused. As a consequence, determining the presence of the singing voice by energy level is simple for dry singing voice. Using this attribute, we design a singing voice detector that predicts a time-wise detection mask with a value of 1 if a singing voice exists at each time frame and 0 otherwise. Fig. 3 shows the structure of the singing voice detector. It is a simple structure with three 1D convolutional layers that keep the dimensions along the time axis. The detection mask is created by repeating the predicted 1D feature along the frequency axis according to the frequency dimension size of the mel-spectrogram. The detection mask is multiplied element-wise with the predicted singing voice mel-spectrogram to estimate the masked singing voice mel-spectrogram. The following loss function is used to optimize the separation module with the singing voice detector:
$\mathcal{L}_{sep}=||\hat{X}-X||_{1}+||\hat{X}_{masked}-X_{masked}||_{1}$ (3)
where $X\in\mathbb{R}^{c\times T\times F}$ is the target singing voice, $\hat{X}\in\mathbb{R}^{c\times T\times F}$ is the predicted singing voice, $X_{masked}\in\mathbb{R}^{c\times T\times F}$ is the target masked singing voice, and $\hat{X}_{masked}\in\mathbb{R}^{c\times T\times F}$ is the predicted detection-masked singing voice. They are all mel-spectrograms.
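The loss in Eq. (3) can be sketched as follows (NumPy, illustrative shapes and values); the time-wise detection mask is repeated along the frequency axis before the element-wise multiplication:

```python
import numpy as np

T, F = 282, 80
rng = np.random.default_rng(2)
X = np.abs(rng.standard_normal((T, F)))        # target dry-voice mel-spectrogram
X_hat = np.abs(rng.standard_normal((T, F)))    # predicted mel-spectrogram
m = (rng.random(T) > 0.5).astype(float)        # time-wise detection mask (0/1 per frame)

M = np.repeat(m[:, None], F, axis=1)           # repeat along the frequency axis -> (T, F)
X_masked = M * X                               # target masked singing voice
X_hat_masked = M * X_hat                       # predicted detection-masked singing voice

# Eq. (3): L_sep = ||X_hat - X||_1 + ||X_hat_masked - X_masked||_1
L_sep = np.abs(X_hat - X).sum() + np.abs(X_hat_masked - X_masked).sum()
```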
### 2.3 Neural Vocoder
The neural vocoder synthesizes singing voice waveforms from the mel-spectrogram estimated by the separation module. We used HiFi-GAN [17], a neural vocoder that achieves high-quality voice synthesis by using multi-receptive field fusion in the generator along with multi-period and multi-scale discriminators. Since it can synthesize high-fidelity voice in real time, it has been widely used in both speech and singing voice synthesis [18, 19, 20] in recent years.
## 3 Experiments
### 3.1 Datasets
We used MUSDB18, one of the most widely used datasets for SVS [8]. However, since our experiment requires dry monophonic singing voices, the MUSDB18 dataset, which offers wet polyphonic singing voices, is not appropriate by itself. Therefore, we collected the dry singing voices of songs in the MUSDB18 dataset from MedleyDB [21] and the website Mixing Secrets for The Small Studio (https://www.cambridge-mt.com/ms/mtk/), which are the original sources of the MUSDB18 dataset. The MUSDB18 dataset has 86 songs for training, 14 songs for validation, and 50 songs for testing. We trained the separation module with the accompaniment of the MUSDB18 dataset and the corresponding dry singing voices. The neural vocoder was also trained with the dry singing voices of the same dataset: the same 86 songs used for training the separation module were used to train HiFi-GAN. We used the v1 setting for HiFi-GAN while changing the sampling rate to 24,000 Hz to match the separation module output.
We used the DetmoldSRIR dataset [22] to investigate various reverberation conditions. The dataset provides SRIRs measured in three performance spaces at the Detmold University of Music: the Detmold Konzerthaus (a medium-sized concert hall), the Brahmssaal (a small chamber music room), and the Detmold Sommertheater (a theater). We randomly divided the 937 SRIRs into train, validation, and test sets containing 749, 94, and 94 SRIRs, respectively. In the training stage, an SRIR was randomly selected at every step. For the test, we made 94 pairs of music and SRIR by matching 1 or 2 songs per SRIR.
### 3.2 Data Processing
For the convenience of experiments, all audio was resampled to 24,000 Hz. We adopted mix-audio data augmentation [23] to increase the volume of the training set. Mix-audio data augmentation is a method that generates training data by taking each source from different music tracks. We performed it using two accompaniments and one singing voice; since the vocoder synthesizes a monophonic voice, only the situation with one singing voice was considered. All audio files used in the training phase were randomly segmented into 3-second clips. The reverberation of the singing voice was rendered by the convolution between a randomly selected SRIR and the dry singing voice source. The reverberation coefficient constant was randomly set to a value between 0 and 1 each time.
The audio was converted to a spectrogram by the short-time Fourier transform (STFT) using an FFT size of 1024 samples, a hop size of 256 samples, and the Hann window function. The spectrogram was converted to a mel-spectrogram through an 80-channel mel filterbank, and log-magnitude compression was applied. When the separation module was trained to directly estimate the mel-spectrogram, min-max normalization was applied to map the values of the mel-spectrogram into [0, 1].
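The log-magnitude compression and min-max normalization described above can be sketched as follows (NumPy; the exact compression formula is not stated in the paper, so the clip floor used here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
mel = np.abs(rng.standard_normal((282, 80)))     # 80-bin mel-spectrogram (random placeholder)

# Log-magnitude compression; the 1e-5 floor avoids log(0) (assumed value)
log_mel = np.log(np.clip(mel, 1e-5, None))

# Min-max normalization to [0, 1], used when directly predicting the mel-spectrogram
norm = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min())
```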
The target detection mask was obtained in the following steps. First, the spectrogram was squared element-wise. Second, it was summed over the frequency axis to obtain a one-dimensional feature. Third, values less than a threshold were mapped to 0 and the others to 1. In our experiment, the threshold was set to 4. All training was executed with a batch size of 4 and the Adam optimizer [24]. The learning rate was set to 0.001 and decreased by a factor of 0.9 every 15,000 steps. Each separation module was trained for 1 million steps.
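The three steps for building the target detection mask can be sketched as follows (NumPy; the spectrogram here is a random placeholder, while the threshold of 4 is the value used in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
spec = np.abs(rng.standard_normal((282, 513)))    # magnitude spectrogram (T frames x freq bins)

power = spec ** 2                                 # step 1: element-wise square
energy = power.sum(axis=1)                        # step 2: sum over the frequency axis -> 1D feature
mask = (energy >= 4.0).astype(float)              # step 3: threshold at 4 -> binary time-wise mask
```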
### 3.3 Compared Models
We compared our method with SSSynth and ResUNetDecouple. SSSynth estimates the WORLD vocoder parameters of the singing voice from mixed audio, and ResUNetDecouple estimates spectrogram-domain masks. We trained the two models using the same dataset, data split, and preprocessing explained above.
### 3.4 Objective Evaluation
Since the audio synthesized by the neural vocoder does not perfectly match the target at the sample level, conventional source separation evaluation metrics such as the source-to-distortion ratio (SDR) and the source-to-interference ratio (SIR) [25] are not appropriate. We verified that even the singing voice synthesized from the target mel-spectrogram has significantly low SDR values although it is perceptually of high quality. To address this issue, we used two spectral metrics. First, we used the scale-invariant spectrogram-to-noise ratio (SiSPNR) [15]. SiSPNR is a spectrum-domain metric similar to the scale-invariant signal-to-noise ratio (SiSNR) [26], comparing the energy of a signal and its background noise. The formula of SiSPNR is as follows:
$SiSPNR=10\log_{10}{\frac{||S_{target}||^{2}}{||e_{noise}||^{2}}}$ (4)
$S_{target}={\frac{<\hat{S},S>S}{||S||^{2}}},\quad e_{noise}=\hat{S}-S_{target}$ (5)
where $\hat{S}$ is the magnitude spectrogram of the predicted signal, and $S$ is the magnitude spectrogram of the target signal. As a second metric, we propose the spectrogram-to-distortion ratio (SPDR). SPDR is the same as the signal-to-distortion ratio (SDR) [25], except that it is calculated with the magnitude spectrogram. The formula of SPDR is as follows:
$SPDR=10\log_{10}{\frac{||S||^{2}}{||e_{interf}+e_{noise}+e_{artif}||^{2}}}$
(6)
where $S$ is the magnitude spectrogram of the target signal.
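A sketch of SiSPNR in NumPy under these definitions, following the standard scale-invariant projection used by SiSNR (the spectrograms here are random placeholders, and the `eps` stabilizer is an assumption for numerical safety):

```python
import numpy as np

def sispnr(S_hat, S, eps=1e-8):
    """Scale-invariant spectrogram-to-noise ratio in dB, per Eqs. (4)-(5)."""
    S_hat, S = S_hat.ravel(), S.ravel()
    # Project the estimate onto the target: S_target = (<S_hat, S> / ||S||^2) S
    s_target = (np.dot(S_hat, S) / (np.dot(S, S) + eps)) * S
    e_noise = S_hat - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(4)
S = np.abs(rng.standard_normal((100, 80)))                    # target magnitude spectrogram
S_noisy = np.abs(S + 0.1 * rng.standard_normal((100, 80)))    # imperfect estimate

score_perfect = sispnr(S, S)       # very high: noise term is limited only by eps
score_noisy = sispnr(S_noisy, S)   # lower, finite score
```

Because the projection is scale-invariant, multiplying `S_hat` by any positive constant leaves the score unchanged.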
### 3.5 Subjective Evaluation
We also conducted a subjective evaluation through a listening test with 23 participants. Considering the diversity of music genres and reverberation types, 15 songs from the test set were selected for the listening test. For each song, the singing voices predicted by SSSynth, ResUNetDecouple, and the proposed method without and with the singing voice detector were evaluated. For efficiency of evaluation, we only evaluated the audio estimated by predicting the mask, because it shows no significant performance difference from directly predicting the mel-spectrogram. We also included the target audio in the evaluation set as an upper bound. The evaluation items were audio quality, dereverberation performance, separation performance, and overall quality, each rated from 1 to 5 points.
## 4 Results
### 4.1 Evaluation Results
Table 1: SPDR and SiSPNR comparison of previous and our proposed methods.
| Method | SPDR↑ | SiSPNR↑ |
|---|---|---|
| Target | - | 111.81 |
| HiFi-GAN using target melᵃ | 13.41 | 10.17 |
| SSSynth | 7.53 | 3.52 |
| ResUNetDecouple | 8.66 | 4.87 |
| Proposed (directly predict melᵃ) w/o SVDᵇ | 9.94 | 5.95 |
| Proposed (directly predict melᵃ) w/ SVDᵇ | 10.19 | 6.30 |
| Proposed (predict melᵃ mask) w/o SVDᵇ | 10.13 | 6.29 |
| Proposed (predict melᵃ mask) w/ SVDᵇ | 10.35 | 6.43 |
* ᵃ mel-spectrogram
* ᵇ singing voice detector
Table 1 shows the SiSPNR and SPDR comparison of the proposed and the two compared methods. The SiSPNR and SPDR results reveal similar tendencies. On both evaluation metrics, our proposed method outperforms the previous methods. The method of directly predicting the mel-spectrogram performed worse than the method of predicting the mask of the mel-spectrogram, which implies that the advantage of mask prediction also applies to SVS in the mel-spectrogram domain. The results also show that the singing voice detector effectively improves the performance of the separation module: the models with the singing voice detector are superior to those without it.
Table 2: MOS score comparison of previous and our proposed methods.
| Method | Audio Quality | Dereverb.ᵉ Performance | Separation Performance | Overall Quality |
|---|---|---|---|---|
| Target | 4.81 ± 0.49 | 4.61 ± 0.82 | 4.78 ± 0.62 | 4.79 ± 0.47 |
| Model1ᵃ | 1.32 ± 0.65 | 2.18 ± 1.30 | 1.94 ± 1.14 | 1.43 ± 0.73 |
| Model2ᵇ | 3.41 ± 0.89 | 3.29 ± 1.05 | 2.53 ± 1.04 | 3.19 ± 0.78 |
| Model3ᶜ | 3.07 ± 0.86 | 3.48 ± 2.44 | 3.64 ± 1.02 | 3.24 ± 0.82 |
| Model4ᵈ | 3.17 ± 0.84 | 3.50 ± 1.02 | 3.71 ± 1.00 | 3.35 ± 0.84 |
* ᵃ SSSynth.
* ᵇ ResUNetDecouple.
* ᶜ The proposed method without the singing voice detector.
* ᵈ The proposed method with the singing voice detector.
* ᵉ Dereverberation.
Table 2 shows the Mean Opinion Score (MOS) results of the previous and proposed methods. Our proposed methods scored higher than the previous methods in all areas except audio quality. In particular, our methods significantly outperformed the previous methods in separation performance. The reason our methods scored lower than ResUNetDecouple in audio quality is presumably artifacts caused by HiFi-GAN; these artifacts could be alleviated by training HiFi-GAN on data with more diverse distributions of pitch and timbre. On every metric, the model with the singing voice detector outperformed the model without it, which indicates that the singing voice detector is an effective module for dry singing voice separation.
### 4.2 Spectrogram analysis
Figure 4: The target (ground truth) and estimated singing voice from the three
different methods. The proposed model used the setting that predicts mel-
spectrogram masks with the singing voice detector.
We compared the singing voice separation methods through spectrogram analysis to provide more insight into the results. Fig. 4 depicts the spectrograms of the separated singing voice estimated by SSSynth, ResUNetDecouple, and the proposed model. SSSynth tends not to generate the overall harmonic structure of the sound correctly. Since it is difficult to predict the F0 of a singing voice with reverberation from the mixed audio, and the WORLD vocoder is greatly affected by an incorrect F0, the singing voice separated by SSSynth has many artifacts. Our proposed model outperforms ResUNetDecouple in predicting the harmonic structure of the singing voice. In addition, we found that our model suppresses percussion sounds better than ResUNetDecouple. This is probably because our proposed structure is highly specialized for predicting dry singing voices: since the neural vocoder is optimized to synthesize solely dry singing voices, other instrumental sounds are not likely to be generated. Furthermore, since the singing voice detector helps the separation module learn the presence of the singing voice, the separation module can effectively suppress the instrumental sounds when the singing voice is absent.
## 5 Conclusions
In this study, we proposed an SVS model that estimates neural vocoder features for dry singing voice separation. In objective and subjective evaluations, our methods showed better performance than the compared methods. In addition, we proposed a singing voice detector to improve dry singing voice separation and verified its effectiveness through both objective and subjective evaluation. In the future, we plan to improve the audio quality of the singing voice with more enhanced separation modules and neural vocoders.
## References
* [1] Jean-Louis Durrieu, Alexey Ozerov, Cédric Févotte, Gaël Richard, and Bertrand David. Main instrument separation from stereophonic audio signals using a source/filter model. In 2009 17th European Signal Processing Conference, pages 15–19. IEEE, 2009.
* [2] Tak-Shing Chan, Tzu-Chun Yeh, Zhe-Cheng Fan, Hung-Wei Chen, Li Su, Yi-Hsuan Yang, and Roger Jang. Vocal activity informed singing voice separation with the ikala dataset. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 718–722. IEEE, 2015.
* [3] Andreas Jansson, Eric J. Humphrey, Nicola Montecchio, Rachel M. Bittner, Aparna Kumar, and Tillman Weyde. Singing voice separation with deep u-net convolutional networks. In Proceedings of the 18th International Society for Music Information Retrieval Conference, ISMIR 2017, Suzhou, China, October 23-27, 2017, pages 745–751, 2017.
* [4] Pritish Chandna, Marius Miron, Jordi Janer, and Emilia Gómez. Monoaural audio source separation using deep convolutional neural networks. In Latent Variable Analysis and Signal Separation - 13th International Conference, LVA/ICA 2017, Grenoble, France, February 21-23, 2017, Proceedings, volume 10169 of Lecture Notes in Computer Science, pages 258–266, 2017.
* [5] Fabian-Robert Stöter, Stefan Uhlich, Antoine Liutkus, and Yuki Mitsufuji. Open-unmix - A reference implementation for music source separation. J. Open Source Softw., 4(41):1667, 2019.
* [6] Naoya Takahashi, Nabarun Goswami, and Yuki Mitsufuji. Mmdenselstm: An efficient combination of convolutional and recurrent neural networks for audio source separation. In 16th International Workshop on Acoustic Signal Enhancement, IWAENC 2018, Tokyo, Japan, September 17-20, 2018, pages 106–110. IEEE, 2018.
* [7] Romain Hennequin, Anis Khlif, Félix Voituret, and Manuel Moussallam. Spleeter: a fast and efficient music source separation tool with pre-trained models. J. Open Source Softw., 5(56):2154, 2020.
* [8] Zafar Rafii, Antoine Liutkus, Fabian-Robert Stöter, Stylianos Ioannis Mimilakis, and Rachel Bittner. MUSDB18 - a corpus for music separation. 2017.
* [9] Woo-Sung Choi, Minseok Kim, Jaehwa Chung, Daewon Lee, and Soonyoung Jung. Investigating u-nets with various intermediate blocks for spectrogram-based singing voice separation. In Proceedings of the 21th International Society for Music Information Retrieval Conference, ISMIR 2020, Montreal, Canada, October 11-16, 2020, pages 192–198, 2020.
* [10] Naoya Takahashi, Purvi Agrawal, Nabarun Goswami, and Yuki Mitsufuji. Phasenet: Discretized phase modeling with deep neural networks for audio source separation. In Interspeech, pages 2713–2717, 2018.
* [11] Qiuqiang Kong, Yin Cao, Haohe Liu, Keunwoo Choi, and Yuxuan Wang. Decoupling magnitude and phase estimation with deep resunet for music source separation. In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021, pages 342–349, 2021.
* [12] Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. World: a vocoder-based high-quality speech synthesis system for real-time applications. IEICE TRANSACTIONS on Information and Systems, 99(7):1877–1884, 2016.
* [13] Pritish Chandna, Merlijn Blaauw, Jordi Bonada, and Emilia Gomez. A vocoder based method for singing voice extraction. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 990–994. IEEE, 2019.
* [14] Pritish Chandna, Merlijn Blaauw, Jordi Bonada, and Emilia Gómez. Content based singing voice extraction from a musical mixture. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 781–785. IEEE, 2020.
* [15] Haohe Liu, Qiuqiang Kong, Qiao Tian, Yan Zhao, DeLiang Wang, Chuanzeng Huang, and Yuxuan Wang. Voicefixer: Toward general speech restoration with neural vocoder. CoRR, abs/2109.13731, 2021.
* [16] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
* [17] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. Advances in Neural Information Processing Systems, 33:17022–17033, 2020.
* [18] Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim. Diff-tts: A denoising diffusion model for text-to-speech. In Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czechia, 30 August - 3 September 2021, pages 3605–3609. ISCA, 2021.
* [19] Soonbeom Choi and Juhan Nam. A melody-unsupervision model for singing voice synthesis. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7242–7246. IEEE, 2022.
* [20] Yongmao Zhang, Jian Cong, Heyang Xue, Lei Xie, Pengcheng Zhu, and Mengxiao Bi. Visinger: Variational inference with adversarial learning for end-to-end singing voice synthesis. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7237–7241. IEEE, 2022.
* [21] Rachel M Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello. Medleydb: A multitrack dataset for annotation-intensive mir research. In ISMIR, volume 14, pages 155–160, 2014.
* [22] Sebastia Vicenç Amengual Gari, Banu Sahin, Dusty Eddy, and Malte Kob. Open database of spatial room impulse responses at detmold university of music. In Audio Engineering Society Convention 149. Audio Engineering Society, 2020.
* [23] Xuchen Song, Qiuqiang Kong, Xingjian Du, and Yuxuan Wang. Catnet: music source separation system with mix-audio augmentation. CoRR, abs/2102.09966, 2021.
* [24] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
* [25] Emmanuel Vincent, Rémi Gribonval, and Cédric Févotte. Performance measurement in blind audio source separation. IEEE transactions on audio, speech, and language processing, 14(4):1462–1469, 2006.
* [26] Jonathan Le Roux, Scott Wisdom, Hakan Erdogan, and John R Hershey. Sdr–half-baked or well done? In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 626–630. IEEE, 2019.
# Unitary forms for holomorphic vertex operator algebras of central charge
$24$
Ching Hung Lam Institute of Mathematics, Academia Sinica, Taipei 10617,
Taiwan<EMAIL_ADDRESS>
###### Abstract.
We prove that all holomorphic vertex operator algebras of central charge $24$
with non-trivial weight one subspaces are unitary. The main method is to use
the orbifold construction of a holomorphic VOA $V$ of central charge $24$
directly from a Niemeier lattice VOA $V_{N}$. We show that it is possible to
extend the unitary form for the lattice VOA $V_{N}$ to the holomorphic VOA $V$
by using the orbifold construction and some information of the automorphism
group $\operatorname{Aut}(V)$.
###### 2010 Mathematics Subject Classification:
Primary 17B69; Secondary 20B25
C.H. Lam was partially supported by a research grant AS-IA-107-M02 of Academia
Sinica and MoST grants 110-2115-M-001-011-MY3 of Taiwan
## 1\. Introduction
The classification of strongly regular holomorphic vertex operator algebras
(abbreviated as VOA) of central charge $24$ with non-trivial weight one space
has recently been completed (see [ELMS21, LS20b, MS] and the references given
there). Except for the uniqueness of holomorphic VOAs of moonshine type (i.e.,
with $V_{1}=0$), it was proved that there are exactly $70$ strongly regular
holomorphic VOAs with central charge $24$ and non-zero weight one space;
moreover, their VOA structures are uniquely determined by the Lie algebra
structures of their weight one spaces. The possible Lie algebra structures for
their weight one subspaces are exactly those given in Schellekens’ list
[Sc93]. It is commonly believed that all holomorphic VOAs of central charge
$24$ are unitary (i.e., they have some positive definite invariant Hermitian
forms). In this article, we show that all holomorphic vertex operator algebras
of central charge $24$ with non-trivial weight one subspaces are unitary.
It is well known [Bo86, DLin14, FLM88] that lattice VOAs are unitary. It turns
out that many automorphisms of finite order will preserve the unitary form.
Our main method is to use the orbifold construction of holomorphic VOAs of
central charge $24$ directly from Niemeier lattice VOAs (cf. [HM]). In
[ELMS21], it is proved that any holomorphic VOA of central charge $24$ with a
semisimple weight one Lie algebra can be constructed by a single orbifold
construction from the Leech lattice VOA $V_{\Lambda}$. It is well known [Bo86,
DN99] that any automorphism $\tilde{g}\in\operatorname{Aut}(V_{\Lambda})$ can
be written as $\tilde{g}=\widehat{\tau}\exp(2\pi i\beta(0))$ where $\tau\in
Co.0=O(\Lambda)$, $\beta\in\mathbb{R}\Lambda^{\tau}$ and $\widehat{\tau}$
denotes a standard lift of $\tau$ in $O(\widehat{\Lambda})$. It was first
observed by G. Höhn [Hö2] that the isometry $\tau$ belongs to only $11$
special conjugacy classes in $Co.0=O(\Lambda)$. All these isometries have
positive frame shape and their fixed point sublattices satisfy some duality
properties [LM, MS].
In [HM], other orbifold constructions for holomorphic VOAs of central charge
$24$ are discussed. In particular, it was proved that for any holomorphic VOA
$V$ of central charge $24$ with $V_{1}\neq 0$, there exist a Niemeier lattice
$N$ and an automorphism $g\in\operatorname{Aut}(V_{N})$ of finite order such
that the VOA $\widetilde{V_{N}}(g)$ obtained by an orbifold construction from
$V_{N}$ and $g$ is isomorphic to $V$. Therefore, $V$ contains a subVOA
$V_{N}^{g}$, which is also unitary. In addition, we verify that the
irreducible $g^{i}$-twisted modules of $V_{N}$ are unitary twisted
$V_{N}$-modules for all $i\in\mathbb{Z}$. By using some information about the
automorphism group of $V$, we will show that the unitary form on $V_{N}^{g}$
can be extended to $V$ and $V$ itself is also unitary (cf. Theorem 5.3). An
advantage of using Niemeier lattice VOAs is that the order of $g$ can be chosen
to be relatively small and $(V_{N}^{g})_{1}$ is a relatively large Lie
subalgebra of $V_{1}$. Up to conjugation by an inner automorphism, any
automorphism $g\in\operatorname{Aut}(V_{N})$ can be written as
$g=\hat{\sigma}\exp(2\pi i\gamma(0))$, where $\sigma\in O(N)$,
$\gamma\in\mathbb{Q}\otimes_{\mathbb{Z}}N^{\sigma}$ and $N^{\sigma}$ denotes
the sublattice of $N$ fixed by $\sigma$. It turns out that $\sigma$ can be
chosen such that it has the same frame shape as one of the $11$ conjugacy
classes of $Co_{0}$ discussed above. Moreover, the order of $g$ is the same as
the order of $\sigma$.
The organization of this article is as follows. In Section 2, we review some
basic notions about unitary VOAs and their unitary modules from [DLin14] and
[CKLW]. In Section 3, we recall some facts about lattice VOAs and their
unitary structures. We also show that any irreducible $g$-twisted module
$V_{L}^{\chi}(g)$ for a lattice VOA $V_{L}$ for a finite order automorphism
$g$ is a unitary $g$-twisted module for $V_{L}$. In Section 4, we first review
several facts about the automorphism groups of holomorphic VOAs of central
charge $24$ with non-trivial weight one spaces. We then discuss the orbifold
constructions of holomorphic VOAs of central charge $24$ directly from
Niemeier lattice VOAs. Some explicit choices for the Niemeier lattice $N$ and
the automorphism $g$ are also discussed. In Section 5, we study the unitary
form for holomorphic VOAs of central charge $24$ with non-trivial weight one
spaces. The main theorem is Theorem 5.3. We show that a VOA $V$ is unitary if
it contains a pair of commuting automorphisms $(f,h)$ satisfying some
conditions. Finally, we discuss a method to define the pair $(f,h)$ for each
holomorphic VOA of central charge $24$ with non-trivial weight one space.
##### Acknowledgment.
After this work was completed, we noticed the preprint by Carpi et al. [CGGH],
in which the unitarity of strongly rational holomorphic vertex operator
algebras with central charge $24$ and non-zero weight one subspace is proved;
however, their method uses the theory of tensor categories and is quite
different from our approach.
## 2\. Unitary VOA and unitary modules
We first recall the notion of unitary VOAs and unitary modules from [DLin14]
(see also [CKLW]).
###### Definition 2.1 ([DLin14]).
Let $(V,Y,\mathds{1},\omega)$ be a vertex operator algebra and let $\phi:V\to
V$ be an anti-linear involution of $V$ (i.e., $\phi(\lambda
u)=\bar{\lambda}\phi(u)$, $\phi(\mathds{1})=\mathds{1},\phi(\omega)=\omega$,
$\phi(u_{n}v)=\phi(u)_{n}\phi(v)$ for any $u,v\in V$, $n\in\mathbb{Z}$, and
$\phi$ has order $2$). Then $(V,\phi)$ is said to be unitary if there exists a
positive-definite Hermitian form $(\ ,\ )_{V}:V\times V\to\mathbb{C}$, which
is $\mathbb{C}$-linear on the first vector and anti-$\mathbb{C}$-linear on the
second vector, such that the following invariant property holds for any
$a,u,v\in V$:
$(Y(e^{zL(1)}(-z^{-2})^{L(0)}a,z^{-1})u,v)_{V}=(u,Y(\phi(a),z)v)_{V},$
where $L(n)$ is defined by $Y(\omega,z)=\sum_{n\in\mathbb{Z}}L(n)z^{-n-2}$.
###### Remark 2.2.
By [CKLW, Proposition 5.3], $V$ is self-dual and of CFT-type if $(V,\phi)$ is
a simple unitary VOA with an invariant Hermitian form $(\cdot,\cdot)_{V}$. In
this case, $V$ has a unique invariant symmetric bilinear form
$\langle\cdot,\cdot\rangle$, up to scalar ([Li94]). Normalizing
$(\mathds{1},\mathds{1})_{V}=\langle\mathds{1},\mathds{1}\rangle=1$, we obtain
$(u,v)_{V}=\langle u,\phi(v)\rangle$ for all $u,v\in V$. Note that
$(\phi(u),\phi(v))_{V}=\overline{(u,v)}_{V}=(v,u)_{V}$ for $u,v\in V$.
###### Definition 2.3 ([DLin14]).
Let $(V,\phi)$ be a unitary VOA and $g$ a finite order automorphism of $V$. An
(ordinary) $g$-twisted $V$-module $(M,Y_{M})$ is called a unitary $g$-twisted
$V$-module if there exists a positive-definite Hermitian form $(\ ,\
)_{M}:M\times M\to\mathbb{C}$ such that the following invariant property holds
for $a\in V$ and $w_{1},w_{2}\in M$:
$(Y_{M}(e^{zL(1)}(-z^{-2})^{L(0)}a,z^{-1})w_{1},w_{2})_{M}=(w_{1},Y_{M}(\phi(a),z)w_{2})_{M}.$
(2-1)
We call such a form a positive-definite invariant Hermitian form.
The following lemma follows from an argument similar to that in [FHL93, Remark
5.3.3].
###### Lemma 2.4 (cf. [FHL93, Remark 5.3.3]).
Let $(V,\phi)$ be a unitary VOA. Let $M$ be a $V$-module and let $M^{\prime}$
be the contragredient module of $M$ with a natural pairing
$\langle\cdot,\cdot\rangle$ between $M$ and $M^{\prime}$.
1. (1)
If $M$ has a non-degenerate invariant sesquilinear form $(\cdot,\cdot)$, which
is linear on the first vector and anti-$\mathbb{C}$-linear on the second
vector and satisfies the invariant property (2-1), then the map $\Phi:M\to
M^{\prime}$ defined by $(u,v)=\langle u,\Phi(v)\rangle$, $u,v\in M$, is an
anti-linear bijective map and $\Phi(a_{n}u)=\phi(a)_{n}\Phi(u)$ for $a\in V$
and $u\in M$.
2. (2)
If there exists an anti-linear bijective map $\Phi:M\to M^{\prime}$ such that
$\Phi(a_{n}u)=\phi(a)_{n}\Phi(u)$ for $a\in V$ and $u\in M$, then
$(u,v)=\langle u,\Phi(v)\rangle$, $u,v\in M$, is a non-degenerate invariant
sesquilinear form on $M$.
The proof of the following lemma can be found in [CLS18]. The main point is
that the product of two anti-automorphisms is an automorphism and it acts on
an irreducible $V$-module as a scalar.
###### Lemma 2.5.
Let $(V,\phi)$ be a unitary VOA. Let $M$ be an irreducible $V$-module. Then
there exists at most one non-degenerate invariant sesquilinear form on $M$ (up
to scalar).
### 2.1. Unitary automorphisms and orbifold subVOAs
Let $(V,\phi)$ be a unitary VOA and $(\ ,\ )$ the corresponding positive
definite invariant Hermitian form. We use $\operatorname{Aut}_{(\ ,\ )}(V)$ to
denote the subgroup of $\operatorname{Aut}(V)$ which preserves the Hermitian
form, i.e,
$\operatorname{Aut}_{(\ ,\
)}(V)=\\{g\in\operatorname{Aut}(V)\mid(gx,gy)=(x,y)\text{ for all }x,y\in
V\\}.$
The next lemma follows immediately from the definition (see [CKLW]).
###### Lemma 2.6.
Let $(V,\phi)$ be a unitary VOA. Then
(1) $g\in\operatorname{Aut}_{(\ ,\ )}(V)$ if and only if $g^{-1}\phi g=\phi$.
(2) For any $H<\operatorname{Aut}_{(\ ,\ )}(V)$, $(V^{H},\phi)$ is also a
unitary VOA.
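For the reader's convenience, here is a short argument for Part (1), a sketch assuming $V$ is simple, so that (by Remark 2.2) automorphisms preserve the normalized invariant bilinear form $\langle\cdot,\cdot\rangle$:

```latex
% Sketch: g preserves ( , )_V  iff  g^{-1}\phi g = \phi  (V simple).
% Since g fixes \mathds{1} and the invariant bilinear form is unique up to
% scalar, \langle gx,gy\rangle = \langle x,y\rangle for all x,y \in V.  Then
(gx,gy)_{V}=\langle gx,\phi g(y)\rangle=\langle x,g^{-1}\phi g(y)\rangle,
\qquad
(x,y)_{V}=\langle x,\phi(y)\rangle.
% By non-degeneracy of \langle\cdot,\cdot\rangle, the two expressions agree
% for all x,y exactly when g^{-1}\phi g=\phi.  Part (2) follows: each h\in H
% commutes with \phi, so \phi restricts to V^{H}, and the restriction of the
% Hermitian form to V^{H} remains positive definite and invariant.
```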
## 3\. Lattice VOA
Next we review some facts about lattice VOAs and their unitary structures. Let
$L$ be a positive-definite even lattice. Let
$V_{L}=M(1)\otimes_{\mathbb{C}}\mathbb{C}\\{L\\}$ be the lattice VOA as
defined in [FLM88]. Let $L^{*}$ be the dual lattice of $L$. Then
$V_{L^{*}}=M(1)\otimes_{\mathbb{C}}\mathbb{C}\\{L^{*}\\}$ is a $V_{L}$-module
and for any coset $\lambda+L\in L^{*}/L$,
$V_{\lambda+L}=M(1)\otimes_{\mathbb{C}}\mathbb{C}\\{\lambda+L\\}$ is an
irreducible $V_{L}$-module. It is proved in [KRR13] that there is a positive-
definite Hermitian form on
$M(1)=\operatorname{Span}_{\mathbb{C}}\\{\alpha_{1}(-n_{1})\dots\alpha_{k}(-n_{k})\mathds{1}\mid\alpha_{i}\in
L,\ n_{i}\in\mathbb{Z}_{>0}\\}$ such that $(\mathds{1},\mathds{1})=1$,
$(\alpha(n)u,v)=(u,\alpha(-n)v)$ for $\alpha\in L$ and for any $u,v\in M(1)$.
There also exists a positive-definite Hermitian form on
$\mathbb{C}\\{L^{*}\\}=\operatorname{Span}_{\mathbb{C}}\\{e^{\alpha}\mid\alpha\in
L^{*}\\}$ determined by $(e^{\alpha},e^{\beta})=\delta_{\alpha,\beta}$. Then a
positive-definite Hermitian form on $V_{L^{*}}$ can be defined by
$(u\otimes e^{\alpha},v\otimes e^{\beta})=(u,v)\cdot(e^{\alpha},e^{\beta}),$
where $u,v\in M(1)$ and $\alpha,\beta\in L^{*}$.
Let $\phi:V_{L^{*}}\to V_{L^{*}}$ be an anti-linear map determined by:
$\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha}\mapsto(-1)^{k}\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{-\alpha},$
where $\alpha_{1},\dots,\alpha_{k}\in L,\alpha\in L^{*}$.
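As a toy sanity check (our own illustration, not part of the proofs), the action of $\phi$ on a monomial basis vector can be modelled by the data $(\text{coefficient},k,\alpha)$, since the Heisenberg factors are fixed up to the sign $(-1)^{k}$; one then verifies directly that $\phi$ is anti-linear of order $2$:

```python
# Toy model of phi on monomial basis vectors.  A monomial is encoded as
# (coefficient, k, alpha), where k is the number of Heisenberg factors
# alpha_1(-n_1)...alpha_k(-n_k) and alpha labels e^alpha; the Heisenberg
# factors themselves are unchanged by phi, so only (k, alpha) matter here.

def phi(mono):
    c, k, alpha = mono
    return (c.conjugate() * (-1) ** k, k, -alpha)

m = (2 + 3j, 3, 5)            # coefficient 2+3i, k = 3 factors, e^{5}
assert phi(phi(m)) == m       # phi has order 2

lam = 1 - 4j                  # anti-linearity: phi(lam*u) = conj(lam)*phi(u)
c, k, a = m
assert phi((lam * c, k, a)) == (lam.conjugate() * phi(m)[0], k, -a)
```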
###### Theorem 3.1.
([DLin14, Theorem 4.12]) Let $L$ be a positive-definite even lattice and let
$\phi$ be the anti-linear map of $V_{L}$ defined as above. Then the lattice
vertex operator algebra $(V_{L},\phi)$ is a unitary vertex operator algebra.
Moreover, $V_{\lambda+L}$ is a unitary module of $V_{L}$ for each
$\lambda+L\in L^{*}/L$.
### 3.1. $\operatorname{Aut}_{(\,,\,)}(V_{L})$
Next we consider some automorphisms of $V_{L}$ which preserve the invariant
Hermitian form. First we recall some facts about the automorphism group of
$V_{L}$. Let $L$ be an even lattice with the (positive-definite) bilinear form
$\langle\cdot|\cdot\rangle$. Denote by $\hat{L}=\\{\pm e^{\alpha}\mid\alpha\in
L\\}$ a central extension of $L$ by $\pm 1$ such that
$e^{\alpha}e^{\beta}=(-1)^{\langle\alpha|\beta\rangle}e^{\beta}e^{\alpha}$. Let
$\operatorname{Aut}(\hat{L})$ be the automorphism group of $\hat{L}$ as a
group. We also assume that $e^{\alpha}\cdot
e^{-\alpha}=(-1)^{\langle\alpha|\alpha\rangle/2}e^{0}$. For
$g\in\operatorname{Aut}(\hat{L})$, let $\bar{g}$ be the map $L\to L$ defined
by $g(e^{\alpha})\in\\{\pm e^{\bar{g}(\alpha)}\\}$. Let
$O(\hat{L})=\\{g\in\operatorname{Aut}(\hat{L})\mid\bar{g}\in O(L)\\}$. Then by
[FLM88, Proposition 5.4.1], we have an exact sequence
$1\to\operatorname{Hom}(L,\mathbb{Z}/2\mathbb{Z})\to
O(\hat{L})\xrightarrow{\iota}O(L)\to 1.$ (3-1)
It is known that $O(\hat{L})$ is a subgroup of $\operatorname{Aut}(V_{L})$
(cf. loc. cit.). Let
$\mathrm{Inn}\,(V_{L})=\left\langle\exp(a_{(0)})\mid
a\in(V_{L})_{1}\right\rangle$
be the normal subgroup of $\operatorname{Aut}(V_{L})$ generated by the inner
automorphisms $\exp(a_{(0)})$.
###### Theorem 3.2 ([DN99]).
Let $L$ be a positive definite even lattice. Then
$\operatorname{Aut}(V_{L})=\mathrm{Inn}\,(V_{L})\,O(\hat{L}).$
Moreover, the intersection $\mathrm{Inn}\,(V_{L})\cap O(\hat{L})$ contains the
subgroup $\operatorname{Hom}(L,\mathbb{Z}/2\mathbb{Z})$ and the quotient
$\operatorname{Aut}(V_{L})/\mathrm{Inn}\,(V_{L})$ is isomorphic to a quotient
group of $O(L)$.
The following lemmas can be proved easily from the definition.
###### Lemma 3.3.
Let $g\in O(\hat{L})$. Then $g\in\operatorname{Aut}_{(\ ,\ )}(V_{L})$.
###### Proof.
Let $g\in O(\hat{L})$. Set $g(e^{\alpha})=a_{\alpha}e^{\bar{g}\alpha}$ and
$g(e^{-\alpha})=b_{\alpha}e^{-\bar{g}\alpha}$ for some roots of unity
$a_{\alpha},b_{\alpha}\in\mathbb{C}$. Recall that $e^{0}$ is the identity of
$\hat{L}$ and $e^{\alpha}\cdot
e^{-\alpha}=(-1)^{\langle\alpha,\alpha\rangle/2}e^{0}$. Then
$(-1)^{\langle\alpha,\alpha\rangle/2}e^{0}=g(e^{\alpha}\cdot
e^{-\alpha})=a_{\alpha}b_{\alpha}e^{\bar{g}\alpha}\cdot
e^{-\bar{g}\alpha}=a_{\alpha}b_{\alpha}(-1)^{\langle\bar{g}\alpha,\bar{g}\alpha\rangle/2}e^{0}$
and we have $a_{\alpha}b_{\alpha}=1$. Then
$g\phi(\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha})=(-1)^{k}b_{\alpha}\bar{g}\alpha_{1}(-n_{1})\cdots\bar{g}\alpha_{k}(-n_{k})\otimes
e^{-\bar{g}\alpha}$
and
$\phi g(\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha})=(-1)^{k}\overline{a_{\alpha}}\bar{g}\alpha_{1}(-n_{1})\cdots\bar{g}\alpha_{k}(-n_{k})\otimes
e^{-\bar{g}\alpha}.$
Since $b_{\alpha}=\overline{a_{\alpha}}$, we have $g\phi=\phi g$ as desired. ∎
###### Lemma 3.4.
Let $\beta\in L^{*}$ and $n$ a positive integer. Then $h=\exp(2\pi
i\frac{\beta(0)}{n})\in\operatorname{Aut}_{(\ ,\ )}(V_{L})$.
###### Proof.
Let $h=\exp(2\pi i\frac{\beta(0)}{n})$. Then
$\begin{split}h\phi(\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha})=&(-1)^{k}h(\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{-\alpha})\\\ =&(-1)^{k}\exp(-2\pi
i\langle\beta|\alpha\rangle/n)\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{-\alpha}\\\ =&\phi(\exp(2\pi
i\langle\beta|\alpha\rangle/n)\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha})\\\ =&\phi h(\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\otimes
e^{\alpha})\end{split}$
as desired. Note that $\phi$ is an anti-linear map. ∎
### 3.2. Unitary form on twisted modules
Next we discuss a unitary form on a twisted module. The main idea is similar
to that in [CLS18, DLin14]. First we review the construction of twisted
$V_{L}$-modules from [DL96] and [Le85].
Let $L$ be an even positive-definite lattice with a $\mathbb{Z}$-bilinear form
$\langle\cdot|\cdot\rangle$. Let $\tau$ be an isometry of $L$. Let $p$ be a
positive integer such that $\tau^{p}=1$ but $p$ may not be the order of
$\tau$. Define $\mathfrak{h}=\mathbb{C}\otimes_{\mathbb{Z}}L$ and extend the
$\mathbb{Z}$-form $\langle\cdot|\cdot\rangle$ $\mathbb{C}$-linearly to
$\mathfrak{h}.$ Denote
$\mathfrak{h}_{(n)}=\\{\alpha\in\mathfrak{h}\,|\,\tau\alpha=\xi^{n}\alpha\\}\quad\text{for
}n\in\mathbb{Z},$
where $\xi=\exp({2\pi\sqrt{-1}/p})$. In particular,
$\mathfrak{h}_{(0)}=\mathfrak{h}^{\tau}$ is the fixed point subspace of $\tau$
on $\mathfrak{h}$.
Let
$\hat{\mathfrak{h}}[\tau]=\coprod_{n\in\mathbb{Z}}\mathfrak{h}_{(n)}\otimes
t^{n/p}\oplus\mathbb{C}c$ be the $\tau$-twisted affine Lie algebra of
$\mathfrak{h}$. Denote
$\hat{\mathfrak{h}}[\tau]^{+}=\coprod_{n>0}\mathfrak{h}_{(n)}\otimes
t^{n/p},\quad\hat{\mathfrak{h}}[\tau]^{-}=\coprod_{n<0}\mathfrak{h}_{(n)}\otimes
t^{n/p},\quad\text{and}\quad\hat{\mathfrak{h}}[\tau]^{0}=\mathfrak{h}_{(0)}\otimes
t^{0}\oplus\mathbb{C}c,$
and form an induced module
$S[\tau]=U(\hat{\mathfrak{h}}[\tau])\otimes_{U(\hat{\mathfrak{h}}[\tau]^{+}\oplus\hat{\mathfrak{h}}[\tau]^{0})}\mathbb{C}\cong
S(\hat{\mathfrak{h}}[\tau]^{-})\quad\text{(linearly),}$
where $\coprod_{n>0}\mathfrak{h}_{(n)}\otimes t^{n/p}$ acts trivially on
$\mathbb{C}$ and $c$ acts as $1$, and $U(\cdot)$ and $S(\cdot)$ denote the
universal enveloping algebra and symmetric algebra, respectively. For any
$\alpha\in L$ and $n\in\frac{1}{p}\mathbb{Z}$, let $\alpha_{(pn)}$ be the
natural projection of $\alpha$ in $\mathfrak{h}_{(pn)}$ and we denote
$\alpha(n)=\alpha_{(pn)}\otimes t^{n}$.
Set $s=p$ if $p$ is even and $s=2p$ if $p$ is odd. Following [DL96, Remark
2.2], we define a $\tau$-invariant alternating $\mathbb{Z}$-bilinear map
$c^{\tau}$ from $L\times L$ to $\mathbb{Z}_{s}$ by
$c^{\tau}(\alpha,\beta)=\sum_{i=0}^{p-1}(s/2+si/p)\langle\tau^{i}(\alpha)|\beta\rangle+s\mathbb{Z}.$
(3-2)
For any positive integer $n$, let $\langle\kappa_{n}\rangle$ be a cyclic group
of order $n$ and consider the central extension
$1\ \longrightarrow\ \langle\kappa_{s}\rangle\ \longrightarrow\
\hat{L}_{\tau}\ \bar{\longrightarrow\ }L\longrightarrow\ 1$
such that $aba^{-1}b^{-1}=\kappa_{s}^{c^{\tau}(\bar{a},\bar{b})}$ for
$a,b\in\hat{L}_{\tau}$. Recall that there is a set-theoretic identification
between the central extensions $\hat{L}$ and $\hat{L}_{\tau}$ such that the
respective group multiplications $\times$ and $\times_{\tau}$ are related by
$a\times b=\kappa_{s}^{\varepsilon_{0}(\bar{a},\bar{b})}a\times_{\tau}b,$
(3-3)
where
$\displaystyle\varepsilon_{0}(\alpha,\beta)=\sum_{0<r<p/2}(s/2+rs/p)\langle\tau^{-r}\alpha|\beta\rangle$
(see [DL96, Remark 2.1]).
Now let $\hat{\tau}$ be a standard lift of $\tau$ in $O(\hat{L})$, i.e.,
$\hat{\tau}(e^{\alpha})=e^{\alpha}$ for any $\alpha\in L^{\tau}$. Then
$\hat{\tau}$ is also an automorphism of $\hat{L}_{\tau}$ by the identification
given in (3-3).
Next we recall a construction of an irreducible $\hat{L}_{\tau}$-module on
which $K=\\{a^{-1}\hat{\tau}(a)\mid a\in\hat{L}_{\tau}\\}$ acts trivially and
$\kappa_{s}$ acts as multiplication by $\xi=\exp(2\pi\sqrt{-1}/s)$ (cf. [Le85,
Proposition 6.1]). Let $P_{0}:\mathfrak{h}\to\mathfrak{h}_{(0)}$ be the natural
projection. Set
$N=(1-P_{0})\mathfrak{h}\cap L=\\{\alpha\in
L\mid\langle\alpha|\mathfrak{h}_{(0)}\rangle=0\\},$
$R=\\{\alpha\in N\,|\,c^{\tau}(\alpha,\beta)=0\text{ for }\beta\in N\\}$ and
$M=(1-P_{0})L$. Denote by $\widehat{Q}_{\tau}$ the subgroup of
$\widehat{L}_{\tau}$ obtained by pulling back a subgroup $Q$ of $L$. Then
$\widehat{R}_{\tau}$ is the center of $\widehat{N}_{\tau}$ and
$\widehat{M}_{\tau}\subset\widehat{R}_{\tau}$. Note also that
$K=\\{a^{-1}\hat{\tau}(a)\mid
a\in\hat{L}_{\tau}\\}<\widehat{M}_{\tau}<\widehat{R}_{\tau}$.
Let $\mathcal{A}$ be a maximal abelian subgroup of $\hat{N}_{\tau}$ with
$\mathcal{A}>\widehat{R}_{\tau}>K$. Let $\chi:\mathcal{A}/K\to\mathbb{C}^{\times}$
be a linear character of $\mathcal{A}/K$. Let $\mathbb{C}_{\chi}$ be the
$1$-dimensional module of
$\mathcal{A}$ affording $\chi$. Then we obtain an irreducible
$\widehat{N}_{\tau}$-module $T_{\chi}$ and an irreducible
$\hat{L}_{\tau}$-module $U_{\chi}$ as follows:
$T_{\chi}=\mathbb{C}[\hat{N}_{\tau}]\otimes_{\mathbb{C}[\mathcal{A}]}\mathbb{C}_{\chi}\quad\text{
and }\quad
U_{\chi}=\mathbb{C}[\hat{L}_{\tau}]\otimes_{\mathbb{C}[\mathcal{A}]}\mathbb{C}_{\chi}=\mathbb{C}[P_{0}(L)]\otimes
T_{\chi}.$
The twisted space $V_{L}^{\chi}(\tau)=S[\tau]\otimes U_{\chi}$ forms an
irreducible $\hat{\tau}$-twisted $V_{L}$-module with the vertex operator
$Y^{\tau}(\cdot,z):V_{L}\to\mathrm{End}(V_{L}^{\chi}(\tau))[[z^{1/p},z^{-1/p}]]$
on $V_{L}^{\chi}(\tau)$ defined as follows (cf. [DL96]): For $a\in\hat{L}$, define
$W^{\tau}(a,z)=p^{-\langle\bar{a}|\bar{a}\rangle/2}\sigma(\bar{a})E^{-}(-\bar{a},z)E^{+}(-\bar{a},z)az^{-\langle\bar{a}|\bar{a}\rangle/2},$
(3-4)
where
$E^{\pm}(\alpha,z)=\exp\left(\sum_{n\in\frac{1}{p}\mathbb{Z}^{\pm}}\frac{\alpha(n)}{n}z^{-n}\right)$
and
$\sigma(\alpha)=\begin{cases}\displaystyle\prod_{0<r<p/2}(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}2^{\langle\tau^{p/2}\alpha|\alpha\rangle}&\text{
if }p\in 2\mathbb{Z},\\\
\displaystyle\prod_{0<r<p/2}(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}&\text{
if }p\in 2\mathbb{Z}+1.\end{cases}$ (3-5)
Note that $a\in\hat{L}$ acts on $U_{\chi}$ as an element of $\hat{L}_{\tau}$
via the identification given in (3-3).
For $\alpha_{1},\dots,\alpha_{k}\in\mathfrak{h}$, $n_{1},\dots,n_{k}>0$, and
$v=\alpha_{1}(-n_{1})\cdots\alpha_{k}(-n_{k})\cdot\iota(a)\in V_{L},$ set
$W(v,z)=\mbox{$\circ\atop\circ$}\left(\frac{1}{(n_{1}-1)!}\left(\frac{d}{dz}\right)^{n_{1}-1}\alpha_{1}(z)\right)\cdot\cdot\cdot\left(\frac{1}{(n_{k}-1)!}\left(\frac{d}{dz}\right)^{n_{k}-1}\alpha_{k}(z)\right)W^{\tau}(a,z)\mbox{$\circ\atop\circ$},$
where $\alpha(z)=\sum_{n\in\frac{1}{p}\mathbb{Z}}\alpha(n)z^{-n-1}$ and
$\mbox{$\circ\atop\circ$}\cdots\mbox{$\circ\atop\circ$}$ denotes the normal
ordered product.
Define constants $c_{mn}^{i}\in\mathbb{C}$ for $m,n\geq 0$ and
$i=0,\cdots,p-1$ by the formulas
$\displaystyle\sum_{m,n\geq
0}c_{mn}^{0}x^{m}y^{n}=-\frac{1}{2}\sum_{r=1}^{p-1}{\rm
log}\left(\frac{(1+x)^{1/p}-\xi^{-r}(1+y)^{1/p}}{1-\xi^{-r}}\right),$ (3-6)
$\displaystyle\sum_{m,n\geq 0}c_{mn}^{i}x^{m}y^{n}=\frac{1}{2}{\rm
log}\left(\frac{(1+x)^{1/p}-\xi^{-i}(1+y)^{1/p}}{1-\xi^{-i}}\right)\ \text{
for}\ \ i\neq 0.$ (3-7)
Let $\\{\beta_{1},\dots,\beta_{d}\\}$ be an orthonormal basis of
$\mathfrak{h}$ and set
$\Delta_{z}=\sum_{m,n\geq 0}\displaystyle{\sum^{p-1}_{i=0}}\
\displaystyle{\sum^{d}_{j=1}}c_{mn}^{i}(\tau^{-i}\beta_{j})(m)\beta_{j}(n)z^{-m-n}.$
(3-8)
Then $e^{\Delta_{z}}$ is well-defined on $V_{L}$ since $c_{00}^{i}=0$ for all
$i$, and for $v\in V_{L},$ $e^{\Delta_{z}}v\in V_{L}[z^{-1}].$ Note that
$\Delta_{z}$ is independent of the choice of orthonormal basis and
$\hat{\tau}\Delta_{z}=\Delta_{z}\hat{\tau}\qquad\text{and}\qquad\hat{\tau}e^{\Delta_{z}}=e^{\Delta_{z}}\hat{\tau}\quad\text{
on }V_{L}.$
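The vanishing of the constant terms $c^{i}_{00}$, which makes $e^{\Delta_{z}}$ well defined, can be seen directly by evaluating the right-hand sides of (3-6) and (3-7) at $x=y=0$: every logarithm becomes $\log 1=0$. A quick numeric check of this (ours, for the toy value $p=3$):

```python
import cmath

# Evaluate the generating functions of (3-6)/(3-7) at x = y = 0; the value
# is the constant term c^i_{00}, which should vanish for every i.
p = 3
xi = cmath.exp(2j * cmath.pi / p)

def c00(i, x=0.0, y=0.0):
    if i == 0:   # (3-6)
        return -0.5 * sum(
            cmath.log(((1 + x) ** (1 / p) - xi ** (-r) * (1 + y) ** (1 / p))
                      / (1 - xi ** (-r)))
            for r in range(1, p))
    # (3-7), for i != 0
    return 0.5 * cmath.log(((1 + x) ** (1 / p) - xi ** (-i) * (1 + y) ** (1 / p))
                           / (1 - xi ** (-i)))

assert all(abs(c00(i)) < 1e-12 for i in range(p))
```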
For $v\in V_{L},$ the vertex operator $Y^{\tau}(v,z)$ is defined by
$Y^{\tau}(v,z)=W(e^{\Delta_{z}}v,z).$ (3-9)
Let $\beta\in\mathbb{Q}\otimes L^{\tau}$ such that
$p\langle\beta|L\rangle\in\mathbb{Z}$. Then $g=\hat{\tau}\exp(2\pi i\beta(0))$
also defines an automorphism of $V_{L}$ and $g^{p}=1$. An irreducible
$g$-twisted module is then given by
$V_{L}^{\chi}(g)=S[\tau]\otimes e^{-\beta}\otimes U_{\chi}\cong
S[\tau]\otimes\mathbb{C}[P_{0}^{\tau}(L)-\beta]\otimes T_{\chi},$
as a vector space. The vertex operator is still given by $Y^{\tau}(v,z)$ but
the action of $a\in\hat{L}$ on $U_{\chi}$ is twisted by $e^{-\beta}$. Note
that the alternating map $c^{\tau}(\cdot,\cdot)$ is still well-defined on
$\tilde{L}=\mathrm{Span}_{\mathbb{Z}}\\{L,\beta\\}$ and
$a\cdot(e^{-\beta}\otimes
u)=\xi^{-\langle\bar{a}|\beta\rangle}e^{-\beta}\otimes a\cdot u$ for any
$a\in\hat{L}$ and $u\in U_{\chi}$.
Next we define a Hermitian form on $V_{L}^{\chi}(g)$ as follows. For any
$a,b\in e^{-\beta}\hat{L}_{\tau}$, define
$(t(a),t(b))=\begin{cases}0&\text{ if }b^{-1}a\not\in\mathcal{A},\\\
\chi(b^{-1}a)&\text{ if }b^{-1}a\in\mathcal{A},\end{cases}$ (3-10)
where $t(a)=a\otimes 1\in e^{-\beta}\otimes U_{\chi}$. Using arguments similar
to those in [FLM88, KRR13], there is a positive-definite Hermitian form
$(\ ,\ )$ on $S[\tau]$ such that
$\begin{split}(1,1)&=1,\\\ (\alpha(n)\cdot u,v)&=(u,\alpha(-n)\cdot
v),\end{split}$
for any $u,v\in S[\tau]$ and $\alpha\in L$. Then one can define a positive-
definite Hermitian form on $V_{L}^{\chi}(g)$ by
$(u\otimes r,v\otimes s)=(u,v)\cdot(r,s),\quad\text{ where }u,v\in
S[\tau],r,s\in e^{-\beta}\otimes U_{\chi}.$
###### Lemma 3.5.
For any $u,v\in V_{L}^{\chi}(g)$ and $a\in\hat{L}$, we have $(a\cdot u,a\cdot
v)=(u,v)$.
###### Proof.
It suffices to consider the case for
$u=v_{1}\otimes t(b_{1})\quad\text{ and }\quad v=v_{2}\otimes t(b_{2}),$
where $v_{1},v_{2}\in S[\tau]$ and $b_{1},b_{2}\in e^{-\beta}\hat{L}_{\tau}$.
By definition, we have
$(a\cdot u,a\cdot v)=(v_{1}\otimes t(ab_{1}),v_{2}\otimes
t(ab_{2}))=(v_{1},v_{2})\cdot(t(ab_{1}),t(ab_{2})).$
Moreover, $(ab_{2})^{-1}ab_{1}=b_{2}^{-1}a^{-1}ab_{1}=b_{2}^{-1}b_{1}$.
Therefore, we have $\chi((ab_{2})^{-1}ab_{1})=\chi(b_{2}^{-1}b_{1})$ if
$b_{2}^{-1}b_{1}\in\mathcal{A}$. Hence, we have $(a\cdot u,a\cdot v)=(u,v)$ as
desired. ∎
###### Lemma 3.6.
For any $\alpha\in L$ and $u,v\in V_{L}^{\chi}(g)$, we have
$(e^{\alpha}\cdot u,v)=(u,\mu e^{-\alpha}\cdot v)$
where
$\mu=\begin{cases}\xi^{-\sum_{0<r<p/2}r\langle\tau^{r}\alpha|\alpha\rangle}&\text{
if $p$ is odd},\\\
\xi^{-\sum_{0<r<p/2}r\langle\tau^{r}\alpha|\alpha\rangle}(-1)^{\frac{1}{2}\langle\tau^{p/2}\alpha|\alpha\rangle}&\text{
if $p$ is even}.\end{cases}$
###### Proof.
Recall the set-theoretic identification between $\hat{L}$ and $\hat{L}_{\tau}$
given in (3-3). It follows from $e^{\alpha}\times
e^{-\alpha}=\kappa_{2}^{\langle\alpha|\alpha\rangle/2}e^{0}$ that
$\begin{split}e^{\alpha}\times_{\tau}e^{-\alpha}&=\kappa_{s}^{\varepsilon_{0}(\alpha,\alpha)}\kappa_{2}^{\langle\alpha|\alpha\rangle/2}e^{0},\\\
&=\begin{cases}\kappa_{p}^{\sum_{0<r<p/2}r\langle\tau^{r}\alpha|\alpha\rangle}e^{0}&\text{
if $p$ is odd},\\\
\kappa_{p}^{\sum_{0<r<p/2}r\langle\tau^{r}\alpha|\alpha\rangle}\kappa_{2}^{\frac{1}{2}\langle\tau^{p/2}\alpha|\alpha\rangle}e^{0}&\text{
if $p$ is even}.\end{cases}\end{split}$
Now by Lemma 3.5, we have
$(u,\mu e^{-\alpha}v)=(e^{\alpha}\cdot u,e^{\alpha}\cdot(\mu
e^{-\alpha}v))=(e^{\alpha}\cdot u,v)$
as desired. ∎
###### Lemma 3.7.
For $\alpha\in L$,
$\overline{\sigma(\alpha)}\mu=(-1)^{\langle\alpha|\alpha\rangle/2}\sigma(\alpha)$.
###### Proof.
By definition (see (3-5)), we have
$\begin{split}\overline{\sigma(\alpha)}\mu=&\displaystyle\prod_{0<r<p/2}\overline{(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}}\cdot\xi^{-\sum_{0<r<p/2}r\langle\tau^{r}\alpha,\alpha\rangle}\\\
=&(-1)^{\langle\alpha,\alpha\rangle/2}\prod_{0<r<p/2}(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}=(-1)^{\langle\alpha|\alpha\rangle/2}\sigma(\alpha)\end{split}$
if $p$ is odd and
$\begin{split}\overline{\sigma(\alpha)}\mu=&\displaystyle\prod_{0<r<p/2}\overline{(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}}\cdot
2^{\langle\tau^{p/2}\alpha|\alpha\rangle}\cdot\xi^{-\sum_{0<r<p/2}r\langle\tau^{r}\alpha|\alpha\rangle}\cdot(-1)^{\langle\tau^{p/2}\alpha|\alpha\rangle/2}\\\
=&(-1)^{\langle\alpha|\alpha\rangle/2}\prod_{0<r<p/2}(1-{\xi}^{-r})^{\langle\tau^{r}\alpha|\alpha\rangle}2^{\langle\tau^{p/2}\alpha|\alpha\rangle}=(-1)^{\langle\alpha|\alpha\rangle/2}\sigma(\alpha)\end{split}$
if $p$ is even. ∎
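As a small numeric illustration (ours, not from the paper), the identity of Lemma 3.7 in the odd case can be checked on a concrete isometry, say a Coxeter element of the $A_{2}$ root lattice acting as rotation by $2\pi/3$, so $p=3$:

```python
import cmath
import math

# Numeric check of conj(sigma(alpha)) * mu = (-1)^{<a|a>/2} sigma(alpha)
# for the toy example: tau = rotation by 2*pi/3 on the A2 root lattice,
# p = 3 (odd).  sigma(alpha) is taken from (3-5), mu from Lemma 3.6.
p = 3
xi = cmath.exp(2j * cmath.pi / p)
theta = 2 * math.pi / p

def tau_pow(a, r):                       # rotate a in R^2 by r*theta
    c, s = math.cos(r * theta), math.sin(r * theta)
    return (c * a[0] - s * a[1], s * a[0] + c * a[1])

def ip(a, b):
    return a[0] * b[0] + a[1] * b[1]

alpha = (math.sqrt(2), 0.0)              # a root: <alpha|alpha> = 2

sigma, mu_exp = 1 + 0j, 0.0
for r in range(1, (p + 1) // 2):         # range 0 < r < p/2
    e = ip(tau_pow(alpha, r), alpha)     # <tau^r alpha | alpha> = -1 here
    sigma *= (1 - xi ** (-r)) ** e
    mu_exp += r * e
mu = xi ** (-mu_exp)

lhs = sigma.conjugate() * mu
rhs = (-1) ** round(ip(alpha, alpha) / 2) * sigma
assert abs(lhs - rhs) < 1e-9
```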
The proof of the following lemma is very similar to those of [DLin14, Theorem
4.14] and [CLS18, Lemma 5.6].
###### Lemma 3.8.
For any $\chi$, $V_{L}^{\chi}(g)$ is a unitary $g$-twisted module of
$(V_{L},\phi)$.
###### Proof.
We only need to verify the invariant property. Since the VOA $V_{L}$ is
generated by $\\{\alpha(-1)\cdot\mathds{1}\mid\alpha\in
L\\}\cup\\{e^{\alpha}\mid\alpha\in L\\}$, it is sufficient to check
$(Y^{\tau}(e^{zL(1)}(-z^{-2})^{L(0)}x,z^{-1})u,v)=(u,Y^{\tau}(\phi(x),z)v)$
for $x\in\\{\alpha(-1)\cdot\mathds{1}\mid\alpha\in
L\\}\cup\\{e^{\alpha}\mid\alpha\in L\\}$ and $u,v\in V_{L}^{\chi}(g)$ (cf.
[DLin14, Proposition 2.12]).
Let $u=v_{1}\otimes t(a)$ and $v=v_{2}\otimes t(b)$ for some $v_{1},v_{2}\in
S[\tau]$, $a,b\in e^{-\beta}\hat{L}_{\tau}$. Then
$(\alpha(n)u,v)=(u,\alpha(-n)v)$
for any $\alpha\in L$ and $n\in\frac{1}{p}\mathbb{Z}$. Thus for
$x=\alpha(-1)\cdot\mathds{1}$, we have
$\begin{split}&(Y^{\tau}(e^{zL(1)}(-z^{-2})^{L(0)}\alpha(-1)\cdot\mathds{1},z^{-1})u,v)\\\
=&-z^{-2}(Y^{\tau}(\alpha(-1)\cdot\mathds{1},z^{-1})v_{1}\otimes
t(a),v_{2}\otimes t(b))\\\
=&-z^{-2}\sum_{n\in\frac{1}{p}\mathbb{Z}}(\alpha(n)v_{1},v_{2})(t(a),t(b))z^{n+1}\\\
=&-\sum_{n\in\frac{1}{p}\mathbb{Z}}(v_{1},\alpha(-n)v_{2})(t(a),t(b))z^{n-1}\\\
=&(u,Y^{\tau}(\phi(\alpha(-1)\cdot\mathds{1}),z)v).\end{split}$
Notice that
$e^{\Delta_{z}}(\alpha(-1)\cdot\mathds{1})=\alpha(-1)\cdot\mathds{1}$.
Now take $x=e^{\alpha}$ with $\langle\alpha|\alpha\rangle=2k$. Then we have
$\begin{split}&(Y^{\tau}(e^{zL(1)}(-z^{-2})^{L(0)}e^{\alpha},z^{-1})u,v)\\\
=&(Y^{\tau}(e^{zL(1)}(-z^{-2})^{L(0)}e^{\alpha},z^{-1})v_{1}\otimes
t(a),v_{2}\otimes t(b))\\\
=&(-z^{-2})^{k}(p^{-k}\sigma(\alpha)E^{-}(-\alpha,z^{-1})E^{+}(-\alpha,z^{-1})e^{\alpha}z^{k}v_{1}\otimes
t(a),v_{2}\otimes t(b))\\\ =&(-z^{-2})^{k}(v_{1}\otimes
t(a),p^{-k}\overline{\sigma(\alpha)}E^{-}(\alpha,z)E^{+}(\alpha,z)\mu
e^{-\alpha}z^{k}v_{2}\otimes t(b))\\\ =&(v_{1}\otimes
t(a),p^{-k}\sigma(\alpha)E^{-}(\alpha,z)E^{+}(\alpha,z)e^{-\alpha}z^{-k}v_{2}\otimes
t(b))\\\ =&(v_{1}\otimes t(a),Y^{\tau}(\phi(e^{\alpha}),z)v_{2}\otimes
t(b))\end{split}$
as desired. ∎
## 4\. Holomorphic VOAs of central charge $24$
In this section, we review a construction of holomorphic vertex operator
algebras of central charge $24$ using certain simple current extensions of
lattice vertex operator algebras and some orbifold vertex operator subalgebras
in the Leech lattice vertex operator algebra [Hö2, La20, BLS].
Let $V$ be a strongly regular holomorphic VOA of central charge $24$. Assume
that $V_{1}$ is semisimple and let $\mathfrak{h}$ be a Cartan
subalgebra of $V_{1}$. Let $M(\mathfrak{h})$ be the subVOA generated by
$\mathfrak{h}$ and denote
$W=\mathrm{Comm}_{V}(M(\mathfrak{h}))\quad\text{ and }X=\mathrm{Comm}_{V}(W).$
Then $X$ is isomorphic to a lattice VOA $V_{L}$ and $W$ is isomorphic to an
orbifold VOA $V_{\Lambda_{\tau}}^{\hat{\tau}}$, where $\Lambda_{\tau}$ is the
coinvariant sublattice of the Leech lattice $\Lambda$ associated with an
isometry $\tau\in O(\Lambda)$ (see [Hö2] and [La20]). The possible isometry
$\tau\in O(\Lambda)$ has been described in [Hö2]. It is also proved in [La20]
that all irreducible modules for the fixed point subVOA
$V_{\Lambda_{\tau}}^{\hat{\tau}}$ are simple current modules. Therefore, the
VOA $V$ can be viewed as a simple current extension of $V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}}$.
### 4.1. Automorphism groups and
$\mathrm{Stab}_{\operatorname{Aut}(V)}(V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}})$
Next we describe the subgroup
$\mathrm{Stab}_{\operatorname{Aut}(V)}(V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}})$ for each case by using the methods in
[Sh07]. Note that the automorphism groups for all holomorphic VOAs of central
charge $24$ with $V_{1}\neq 0$ and
$\mathrm{Stab}_{\operatorname{Aut}(V)}(V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}})$ have already been computed in [BLS].
###### Notation 4.1.
For any VOA $U$, $\operatorname{Aut}(U)$ acts on $\mathrm{Irr}(U)$ by module
conjugation: for a $U$-module $\left(M,Y_{M}\right)$ and
$g\in\operatorname{Aut}(U)$, the $g$-conjugate module $\left(g\circ
M,Y_{g\circ M}\right)$ of $(M,Y_{M})$ is defined by $g\circ M=M$ as a vector
space and $Y_{g\circ M}\left(v,z\right)=Y_{M}\left(g^{-1}v,z\right)$ for $v\in
U$. This action preserves the conformal weights. Thus, we have a canonical
group homomorphism
$\mu_{U}:\operatorname{Aut}(U)\to O(\operatorname{Irr}(U),q_{U}),$ (4-1)
where
$O(\operatorname{Irr}(U),q_{U})=\\{h\in\operatorname{Aut}(\operatorname{Irr}(U))\mid
q_{U}(M)=q_{U}(h(M))\ \text{for all}\ M\in\operatorname{Irr}(U)\\}$ is the
orthogonal group of the quadratic space $(\operatorname{Irr}(U),q_{U})$. We
use $\overline{\operatorname{Aut}}(U)$ and $\operatorname{Aut}_{0}(U)$ to
denote $\operatorname{Im}\mu_{U}$ and $\operatorname{Ker}\mu_{U}$,
respectively.
Recall that the irreducible modules for the lattice VOA $V_{L}$ are
parametrized by its discriminant group $\mathcal{D}(L)=L^{*}/L$ and the fusion
rules are given by $V_{\lambda+L}\times V_{\eta+L}=V_{\lambda+\eta+L}$ for
$\lambda,\eta\in L^{*}$. Since $O(L)$ acts naturally on $\mathcal{D}(L)$, we
also have a canonical group homomorphism $\mu_{L}:O(L)\to
O(\mathcal{D}(L),q_{L})$. We use $\overline{O}(L)$ to denote the image of
$\mu_{L}$ in $O(\mathcal{D}(L),q_{L})$.
Set $W=V_{\Lambda_{\tau}}^{\hat{\tau}}$ and let $\varphi$ be a bijection from
$\mathcal{D}(L)$ to $\operatorname{Irr}(W)$ such that
$V\cong\bigoplus_{\lambda+L\in\mathcal{D}(L)}V_{\lambda+L}\otimes\varphi({\lambda+L}).$
For simplicity, we often denote $\varphi({\lambda+L})$ by $W_{\lambda}$.
Set
$S_{\varphi}=\\{(V_{\lambda+L},\varphi(\lambda+L))\mid\lambda+L\in\mathcal{D}({L})\\}\subset\operatorname{Irr}(V_{L})\times\operatorname{Irr}(W)$.
The dual group $S_{\varphi}^{*}={\rm Hom}(S_{\varphi},\mathbb{C}^{\times})$
acts faithfully on $V$ with the action given by
$S_{\varphi}^{*}=\\{\exp(2\pi\sqrt{-1}v_{(0)})\mid v+L\in\mathcal{D}(L)\\}.$
(4-2)
By [Sh07, Theorem 3.3], we know that
$\displaystyle S_{\varphi}^{*}$
$\displaystyle=\\{\sigma\in\operatorname{Aut}(V)\mid\sigma=id\ {\rm on}\
V_{L}\otimes W\\}.$ (4-3)
and
$\displaystyle N_{\operatorname{Aut}(V)}(S_{\varphi}^{*})/S_{\varphi}^{*}$
$\displaystyle\cong\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(S_{\varphi})=\\{\sigma\in\operatorname{Aut}(V_{L}\otimes W)\mid\sigma\circ
S_{\varphi}=S_{\varphi}\\}.$ (4-4)
Note that
$\displaystyle
N_{\operatorname{Aut}(V)}(S_{\varphi}^{*})=\\{\sigma\in\operatorname{Aut}(V)\mid\sigma(V_{L}\otimes
W)=V_{L}\otimes W\\}=\mathrm{Stab}_{\operatorname{Aut}(V)}(V_{L}\otimes W).$
(4-5)
Set ${\rm
Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})=\\{\sigma\in\operatorname{Aut}(V)\mid\sigma(\mathfrak{h})=\mathfrak{h}\\}$
and ${\rm Stab}_{\mathrm{Inn}\,(V)}(\mathfrak{h})={\rm
Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})\cap\mathrm{Inn}\,(V),$ where
$\mathfrak{h}$ is the chosen Cartan subalgebra of $V_{1}$. By [BLS, Lemma
3.14],
$\operatorname{Aut}(V)=\mathrm{Inn}\,(V){\rm
Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})\quad\text{ and }\quad
N_{\operatorname{Aut}(V)}(S^{*}_{\varphi})=\mathrm{Inn}\,(V_{L}){\rm
Stab}_{\operatorname{Aut}(V)}(\mathfrak{h}).$
Moreover,
$\mathrm{Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})/S_{\varphi}^{*}\cong\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(S_{\varphi})\cap\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(\mathfrak{h})$ [BLS, Lemma 3.15].
Recall from [BLS, Theorem 3.4] that $\mu_{W}$ is injective and
$\overline{\operatorname{Aut}}(W)\cong\operatorname{Aut}(W)$. Therefore, the
kernel of the group homomorphism
$\operatorname{Aut}(V_{L}\otimes W)\to
O(\operatorname{Irr}(V_{L}),q_{V_{L}})\times
O(\operatorname{Irr}(W),-q_{W}),\quad\sigma\mapsto(\mu_{V_{L}}(\sigma_{|V_{L}}),{\mu}_{W}(\sigma_{|W}))$
is $\operatorname{Aut}_{0}({V_{L}})\times 1$. It turns out that
$\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes W)}(S_{\varphi})$ may be
viewed as a subgroup of $\operatorname{Aut}(V_{L})$ by considering the
restriction of $\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(S_{\varphi})$ to $V_{L}$. We also have
$\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(S_{\varphi})\cong\operatorname{Aut}_{0}({V_{L}}).(\overline{O}(L)\cap\varphi^{*}(\overline{\operatorname{Aut}}(W)))<\operatorname{Aut}(V_{L}),$
(4-6)
where
$\varphi^{*}(\overline{\operatorname{Aut}}(W))=\varphi^{-1}(\overline{\operatorname{Aut}}(W))\varphi\subset
O(\mathcal{D}(L),q_{L})$ and
$\displaystyle\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(S_{\varphi})\cap\mathrm{Stab}_{\operatorname{Aut}(V_{L}\otimes
W)}(\mathfrak{h})$ $\displaystyle\cong$ $\displaystyle\\{\exp(a_{(0)})\mid
a\in\mathfrak{h}\\}\iota^{-1}(O_{0}(L).(\overline{O}({L})\cap\varphi^{*}(\overline{\operatorname{Aut}}(W)))).$
Let $W(V_{1})$ be the Weyl group of the semisimple Lie algebra $V_{1}$. Since
$V_{1}$ is semisimple, ${\rm Stab}_{\mathrm{Inn}\,(V)}(\mathfrak{h})$ acts
on $\mathfrak{h}$ as $W(V_{1})$.
###### Lemma 4.2 ([BLS, Lemma 3.16]).
1. (1)
${\rm Stab}_{\mathrm{Inn}\,(V)}(\mathfrak{h})/\\{\exp(a_{(0)})\mid
a\in\mathfrak{h}\\}\cong{W}(V_{1})$.
2. (2)
${\rm Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})/\\{\exp(a_{(0)})\mid
a\in\mathfrak{h}\\}\cong\mu_{L}^{-1}(\overline{O}(L)\cap\varphi^{*}(\overline{\operatorname{Aut}}(W)))$.
Therefore, we may regard $W(V_{1})$ as a subgroup of $O(L)$.
### 4.2. Orbifold construction from Niemeier lattice VOAs
It is known that all holomorphic VOAs of central charge $24$ can be constructed
by a single orbifold construction from a lattice VOA. The constructions from
the Leech lattice VOA have been discussed in [ELMS21] (see also [CLM22]). The
constructions from Niemeier lattice VOAs are also discussed in [HM]. In
particular, the following has been proved.
###### Theorem 4.3 (cf. [HM, Proposition 5.7 and Remark 5.8]).
Let $V$ be a holomorphic VOA of central charge $24$ with $V_{1}\neq 0$. Then
there exist a Niemeier lattice $N$ and an automorphism $g=\hat{\tau}\exp(2\pi
i\beta(0))\in\operatorname{Aut}(V_{N})$ such that $V\cong\widetilde{V_{N}}(g)$. Moreover,
1. (1)
$\tau$ has the same frame shape as one of the $11$ conjugacy classes of
$Co_{0}$ as discussed in [Hö2] and $|g|=|\tau|$.
2. (2)
$L\cong N^{\tau}_{\beta}$ and $V_{N_{\tau}}^{\hat{\tau}}\cong
V_{\Lambda_{\tau}}^{\hat{\tau}}$, where $N^{\tau}_{\beta}=\\{x\in
N^{\tau}\mid\langle x,\beta\rangle\in\mathbb{Z}\\}$; in particular,
$V_{N}^{g}>V_{L}\otimes V_{\Lambda_{\tau}}^{\hat{\tau}}$.
3. (3)
$(V_{N}^{g})_{1}$ is non-abelian and has the same Lie rank as $V_{1}$.
We note that the choices for $N$ and $g$ are not unique. It turns out that it
is possible to choose $(N,g)$ so that $(V_{N}^{g})_{1}$ contains a simple Lie
ideal which is a proper Lie subalgebra of a simple ideal of $V_{1}$.
###### Proposition 4.4.
Let $V$ be a holomorphic VOA of central charge $24$ with $V_{1}\neq 0$ and
$\mathrm{rank}\,V_{1}<24$. Then there exist a Niemeier lattice $N$ and an automorphism
$g=\hat{\tau}\exp(2\pi i\beta(0))\in\operatorname{Aut}(V_{N})$ such that
$V\cong\widetilde{V_{N}}(g)$ and Conditions (1), (2), (3) in Theorem 4.3 are
satisfied. Moreover, $(V_{N}^{g})_{1}$ contains a simple Lie ideal which is a
proper Lie subalgebra of a simple ideal of $V_{1}$.
Next we will describe $N$ and $g$ explicitly for the cases where $|g|>2$.
#### 4.2.1. $\mathbb{Z}_{3}$ orbifold construction from Niemeier lattice VOA
First we consider the VOAs that can be obtained by a $\mathbb{Z}_{3}$ orbifold
construction from Niemeier lattice VOA.
Let $N$ be a Niemeier lattice. Then for any automorphism
$g\in\operatorname{Aut}(V_{N})$ of finite order, $g=\hat{\tau}\exp(2\pi
i\beta(0))$ for some $\hat{\tau}\in O(\hat{N})$ and
$\beta\in\mathbb{Q}\otimes_{\mathbb{Z}}N^{\tau}$ with
$|g|\cdot\beta\in(N^{\tau})^{*}$.
Case: $V_{1}\cong A_{2,3}^{6}$.
In this case, $V\cong\widetilde{V_{N}}(g)$, where $N=N(A_{1}^{24})$, $\tau$
acts as a permutation of the $24$ copies of $A_{1}$ with the cycle shape
$1^{6}3^{6}$ and $\beta=\frac{1}{6}(0^{12},\alpha^{12})$, where
$\mathbb{Z}\alpha\cong A_{1}$, i.e., $\langle\alpha,\alpha\rangle=2$. In this
case, $(V_{N}^{g})_{1}\cong A_{1,3}^{6}U(1)^{6}$ and
$U=\sqrt{3}L^{*}\cong\operatorname{Span}_{\mathbb{Z}}\\{A_{2}^{6},(111111)\\}$.
In particular, $O(L)=(W(A_{2})\wr Sym_{6}).\mathbb{Z}_{2}$.
Case: $V_{1}\cong{A_{5,3}}{D_{4,3}}A_{1,1}^{3}$.
In this case, $N=N(A_{5}^{4}D_{4})$, $\tau$ acts on $A_{5}^{4}$ as a $3$-cycle
and as a diagram automorphism $\varphi$ of order $3$ on $D_{4}$. The vector
$\beta=\frac{1}{3}(0,0,0,\lambda,u)$, where $\lambda=(1100-1-1)$ and
$u=(1001)$. In this case, $(V_{N}^{g})_{1}\cong
A_{5,3}A_{2,3}A_{1,1}^{3}U(1)^{2}$ and $U=\sqrt{3}L^{*}$ is an index $8$
overlattice of $A_{5}D_{4}(\sqrt{3}A_{1})^{3}$. Note that $W(D_{4})$ can be viewed
as a subgroup of $\mathrm{Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})$.
Case: $V_{1}\cong{A_{8,3}}A_{2,1}^{2}$.
In this case, $N=N(A_{6}^{4})$, $\tau$ acts on $A_{6}^{4}$ as a $3$-cycle and
$\beta=\frac{1}{3}(0,0,0,(1^{3},-1^{3},0))$. In this case,
$(V_{N}^{g})_{1}\cong A_{6,3}A_{2,1}^{2}U(1)^{2}$ and $U=\sqrt{3}L^{*}$ is an
index $9$ overlattice of $A_{8}(\sqrt{3}A_{2})^{2}$.
Case: $V_{1}\cong E_{6,3}{G_{2,1}}^{3}$.
In this case, $N=N(D_{4}^{6})$, $\tau$ acts as a diagram automorphism of order
$3$ on each $D_{4}$ summand and $\beta=\frac{1}{3}(u,u,u,0,0,0)$, where
$u=(1,2,-1,0)\in D_{4}$. In this case, $(V_{N}^{g})_{1}\cong
A_{2,3}^{3}G_{2,1}^{3}$ and $U=\sqrt{3}L^{*}$ is an index $3^{3}$ overlattice
of $E_{6}(\sqrt{3}A_{2})^{3}$. Note that $W(E_{6})$ can be viewed as a
subgroup of $\mathrm{Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})$.
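As a quick sanity check (our verification, not stated in the source): in the standard model $D_{4}=\\{x\in\mathbb{Z}^{4}\mid\sum_{i}x_{i}\in 2\mathbb{Z}\\}$, the vector $u=(1,2,-1,0)$ indeed lies in $D_{4}$ and is fixed by no sign constraint beyond the even coordinate sum:

```latex
% Verification that u = (1,2,-1,0) belongs to D_4 (standard model of D_4):
\sum_{i} u_{i} = 1 + 2 - 1 + 0 = 2 \in 2\mathbb{Z},
\qquad
\langle u, u\rangle = 1^{2} + 2^{2} + (-1)^{2} + 0^{2} = 6 .
```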
Case: $V_{1}\cong{D_{7,3}}{A_{3,1}}{G_{2,1}}$.
In this case, $N=N(D_{4}^{6})$, $\tau$ acts as a 3-cycle on three copies of
$D_{4}$ and as a diagram automorphism of order $3$ on 2 copies of $D_{4}$;
$\beta=\frac{1}{3}(0^{4},u,(1111))$, where $u=(1,2,-1,0)\in D_{4}$. In this
case, $(V_{N}^{g})_{1}\cong D_{4,3}G_{2,1}A_{2,3}A_{3,1}U(1)$ and
$U=\sqrt{3}L^{*}$ is an index $12$ overlattice of
$D_{7}\sqrt{3}A_{3}\sqrt{3}A_{2}$.
Case: $V_{1}\cong E_{7,3}A_{5,1}$.
In this case, $N=N(E_{6}^{4})$, $\tau$ acts as a 3-cycle on three copies of
$E_{6}$ and $\beta=\frac{1}{3}(\Lambda_{1}+\Lambda_{2})$, where
$\Lambda_{1},\Lambda_{2}$ are fundamental weights. In this case,
$(V_{N}^{g})_{1}\cong E_{6,3}A_{5,1}U(1)$ and $U=\sqrt{3}L^{*}$ is an index
$6$ overlattice of $E_{7}\sqrt{3}A_{5}$.
Table 1. Orbifold construction associated with $3B$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$6$ | $A_{2,3}^{6}$ | $N(A_{1}^{24})$ | $1^{6}3^{6}$ | $\frac{1}{6}(\alpha^{12},0^{12})$ | $A_{1,3}^{6}U(1)^{6}$
$17$ | ${A_{5,3}}{D_{4,3}}A_{1,1}^{3}$ | $N(A_{5}^{4}D_{4})$ | 3-cycle$\times\varphi$ | $1/3(0,0,0,\lambda,u)$ | $A_{5,3}A_{2,3}A_{1,1}^{3}U(1)^{2}$
$27$ | $A_{8,3}A_{2,1}^{2}$ | $N(A_{6}^{4})$ | 3-cycle | $\frac{1}{3}(0,0,0,(1^{3},-1^{3},0))$ | $A_{6,3}A_{2,1}^{2}U(1)^{2}$
$32$ | $E_{6,3}{G_{2,1}}^{3}$ | $N(D_{4}^{6})$ | $\varphi^{\otimes 6}$ | $\frac{1}{3}(u,u,u,0,0,0)$ | $A_{2,3}^{3}G_{2,1}^{3}$
$34$ | ${D_{7,3}}{A_{3,1}}{G_{2,1}}$ | $N(D_{4}^{6})$ | 3-cycle$\times\varphi^{2}$ | $\frac{1}{3}(0^{4},u,(1111))$ | $D_{4,3}G_{2,1}A_{2,3}A_{3,1}U(1)$
$45$ | $E_{7,3}A_{5,1}$ | $N(E_{6}^{4})$ | 3-cycle | $\frac{1}{3}(\Lambda_{1}+\Lambda_{2})$ | $E_{6,3}A_{5,1}U(1)$
#### 4.2.2. $\mathbb{Z}_{5}$ orbifold construction from Niemeier lattice VOA
Next we consider the VOAs that can be obtained by a $\mathbb{Z}_{5}$ orbifold
construction from Niemeier lattice VOA.
Case: $V_{1}\cong A_{4,5}^{2}$.
In this case, $V\cong\widetilde{V_{N}}(g)$, where $N=N(A_{1}^{24})$, $\tau$
acts as a permutation of the $24$ copies of $A_{1}$ with the cycle shape
$1^{4}5^{4}$ and $\beta=\frac{1}{10}(0^{20},\alpha^{2},(2\alpha)^{2})$, where
$\mathbb{Z}\alpha\cong A_{1}$. In this case, $(V_{N}^{g})_{1}\cong
A_{1,5}^{4}U(1)^{4}$ and $U=\sqrt{5}L^{*}\cong A_{4}^{2}$.
Case: $V_{1}\cong D_{6,5}A_{1,1}^{2}$.
In this case, $V\cong\widetilde{V_{N}}(g)$, where $N=N(A_{1}^{24})$, $\tau$
acts as a permutation of the $24$ copies of $A_{1}$ with the cycle shape
$1^{4}5^{4}$ and $\beta=\frac{1}{5}(0^{22},\alpha,2\alpha)$. In this case,
$(V_{N}^{g})_{1}\cong A_{1,5}^{4}A_{1,1}^{2}U(1)^{2}$ and
$U=\sqrt{5}L^{*}$ is an index $4$ overlattice of $D_{6}(\sqrt{5}A_{1})^{2}$.
Table 2. Orbifold construction associated with $5B$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$9$ | $A_{4,5}^{2}$ | $N(A_{1}^{24})$ | $1^{4}5^{4}$ | $\frac{1}{10}(0^{20},\alpha^{2},(2\alpha)^{2})$ | $A_{1,5}^{4}U(1)^{4}$
$20$ | $D_{6,5}A_{1,1}^{2}$ | $N(A_{1}^{24})$ | $1^{4}5^{4}$ | $\frac{1}{5}(0^{22},\alpha,2\alpha)$ | $A_{1,5}^{4}A_{1,1}^{2}U(1)^{2}$
#### 4.2.3. $\mathbb{Z}_{7}$ orbifold construction from Niemeier lattice VOA
When $\tau$ has order $7$, there is only one possible Lie algebra structure
for $V_{1}$.
Case: $V_{1}\cong A_{6,7}$.
In this case, we choose $N=N(A_{3}^{8})$ and $\tau$ acts as a $7$-cycle on the
$8$ copies of $A_{3}$ and $\beta=\frac{1}{7}(0^{7},(3,-2,-1,0))$. In this
case, $V\cong\widetilde{V_{N}}(g)$, $(V_{N}^{g})_{1}\cong
A_{3,7}U(1)^{3}$ and $U=\sqrt{7}L^{*}\cong A_{6}$.
Table 3. Orbifold construction associated with $7B$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$11$ | $A_{6,7}$ | $N(A_{3}^{8})$ | 7-cycle | $\frac{1}{7}(0^{7},(3,-2,-1,0))$ | $A_{3,7}U(1)^{3}$
#### 4.2.4. Remaining cases
For the remaining cases, the order of $\tau$ is not prime.
First we consider the cases where $\tau$ has the same frame shape as a $4C$
element in $Co_{0}$.
Case: $V_{1}\cong C_{7,2}A_{3,1}$. In this case, we choose
$N=N(A_{9}^{2}D_{6})$. $\tau$ acts on $A_{9}^{2}$ as a transposition and acts
as a diagram automorphism on $D_{6}$. The vector $\beta$ is given by
$\beta=(0,0,\frac{1}{2}(211110))$. Moreover, $(V_{N}^{g})_{1}\cong
A_{3,1}C_{5,2}A_{1,2}U(1)$.
Case: $V_{1}\cong E_{6,4}A_{2,1}B_{2,1}$.
In this case, we choose $N=N(A_{9}^{2}D_{6})$. $\tau$ acts on $A_{9}^{2}$ as a
transposition and acts as a diagram automorphism on $D_{6}$. The vector
$\beta$ is given by
$\beta=\frac{1}{8}(\,(1^{5},-1^{5}),(1^{5},-1^{5}),(222000))$. Moreover,
$(V_{N}^{g})_{1}\cong D_{5,4}A_{2,1}B_{2,1}U(1)$.
Case: $V_{1}\cong A_{7,4}A_{1,1}^{3}$.
In this case, we choose $N=N(A_{5}^{4}D_{4})$. $\tau$ acts as a product of two
2-cycles on $A_{5}^{4}$ times the diagram automorphism of $A_{5}$ on two
copies of $A_{5}$ such that $\tau$ has order $4$. The vector $\beta$ is given
by $\beta=(\frac{1}{8}(1^{3},-1^{3})^{4},\frac{1}{4}(1100))$. Moreover,
$(V_{N}^{g})_{1}\cong A_{1,1}^{3}A_{3,4}^{2}U(1)$.
Case: $V_{1}\cong D_{5,4}C_{3,2}A_{1,1}^{2}$.
In this case, we choose $N=N(A_{5}^{4}D_{4})$. $\tau$ acts as a product of two
2-cycles on $A_{5}^{4}$ times the diagram automorphism of $A_{5}$ on two
copies of $A_{5}$ such that $\tau$ has order $4$. The vector $\beta$ is given
by $\beta=\frac{1}{8}((1^{3},-1^{3})^{2},0^{2},(3311))$. Moreover,
$(V_{N}^{g})_{1}\cong A_{1,1}^{2}C_{3,2}A_{3,4}U(1)^{2}$.
Case: $V_{1}\cong A_{3,4}^{3}A_{1,2}$.
In this case, we choose $N=N(A_{1}^{24})$. $\tau$ acts as a permutation with
the frame shape $1^{4}2^{2}4^{4}$. The vector $\beta$ is given by
$\beta=\frac{1}{8}(1^{8},2^{2},0^{14})\alpha$, where
$\langle\alpha|\alpha\rangle=2$. Moreover, $(V_{N}^{g})_{1}\cong
A_{1,2}A_{1,4}^{4}U(1)^{5}$.
Table 4. Orbifold construction associated with $4C$
No. | $\mathfrak{g}=V_{1}$ | Niemeier $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$35$ | $C_{7,2}A_{3,1}$ | $N(A_{9}^{2}D_{6})$ | 2-cycle$\times(1,\delta_{A_{9}})\times\delta_{D_{6}}$ | $(0,0,\frac{1}{2}(2,1,1,1,1,0))$ | $A_{3,1}C_{5,2}A_{1,2}U(1)$
$28$ | $E_{6,4}A_{2,1}B_{2,1}$ | $N(A_{9}^{2}D_{6})$ | 2-cycle$\times(1,\delta_{A_{9}})\times\delta_{D_{6}}$ | $\frac{1}{8}((1^{5},-1^{5}),(1^{5},-1^{5}),(222000))$ | $D_{5,4}A_{2,1}B_{2,1}U(1)$
$18$ | $A_{7,4}A_{1,1}^{3}$ | $N(A_{5}^{4}D_{4})$ | (2-cycle$\times(1,\delta_{A_{5}}))^{2}$ | $(\frac{1}{8}(1^{3},-1^{3})^{4},\frac{1}{4}(1100))$ | $A_{1,1}^{3}A_{3,4}^{2}U(1)$
$19$ | $D_{5,4}C_{3,2}A_{1,1}^{2}$ | $N(A_{5}^{4}D_{4})$ | (2-cycle$\times(1,\delta_{A_{5}}))^{2}$ | $\frac{1}{8}((1^{3},-1^{3})^{2},0^{2},(3311))$ | $A_{1,1}^{2}C_{3,2}A_{3,4}U(1)^{2}$
$7$ | $A_{3,4}^{3}A_{1,2}$ | $N(A_{1}^{24})$ | $1^{4}2^{2}4^{4}$ | $\frac{1}{8}(1^{8},2^{2},0^{14})\alpha$ | $A_{1,2}A_{1,4}^{4}U(1)^{5}$
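As a consistency check (our observation, not stated in the source), in each row of Table 4 the Lie rank of $(V_{N}^{g})_{1}$ equals the Lie rank of $V_{1}$, as required by Theorem 4.3 (3). For instance, for No. $7$:

```latex
\operatorname{rank}\bigl(A_{3,4}^{3}A_{1,2}\bigr) = 3\cdot 3 + 1 = 10,
\qquad
\operatorname{rank}\bigl(A_{1,2}A_{1,4}^{4}U(1)^{5}\bigr) = 1 + 4 + 5 = 10 .
```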
For isometries with the same frame shape as an $8E$ element, there is only one
possible case.
Case: $V_{1}\cong D_{5,8}A_{1,2}$.
In this case, we choose $N=N(A_{1}^{24})$. $\tau$ acts as a permutation with
the frame shape $1^{2}\,2\,4\,8^{2}$; that means $\tau$ has the same frame shape
as an $8E$-element in $Co_{0}$. The vector $\beta$ is given by
$\beta=\frac{1}{16}(3^{3},1^{5},0^{16})\alpha$, where
$\langle\alpha|\alpha\rangle=2$. Moreover, $(V_{N}^{g})_{1}\cong
A_{1,8}^{2}U(1)^{4}$.
Table 5. Orbifold construction associated with $8E$
No. | $\mathfrak{g}=V_{1}$ | Niemeier $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$10$ | $D_{5,8}A_{1,2}$ | $N(A_{1}^{24})$ | $1^{2}\cdot 2\cdot 4\cdot 8^{2}$ | $\frac{1}{16}(3^{3},1^{5},0^{16})\alpha$ | $A_{1,8}^{2}U(1)^{4}$
For isometries with the same frame shape as a $6E$ element, there are two
cases.
Case: $V_{1}\cong A_{5,6}B_{2,3}A_{1,2}$.
In this case, we choose $N=N(A_{3}^{8})$ and $\tau$ acts as the product of two
3-cycles with the diagram automorphism of $A_{3}$ applied on all copies of
$A_{3}$ in $A_{3}^{8}$. The vector $\beta$ is given by
$\beta=\frac{1}{6}(\gamma_{1}^{3},(2\gamma_{1})^{3},-\gamma_{1},0)$, where
$\gamma_{1}=\frac{1}{4}(3,-1,-1,-1)\in A_{3}^{*}$. Moreover,
$(V_{N}^{g})_{1}\cong A_{1,2}A_{1,6}^{2}B_{2,3}U(1)^{3}$.
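As a quick check (ours, not from the source): in the standard model $A_{3}=\\{x\in\mathbb{Z}^{4}\mid\sum_{i}x_{i}=0\\}$, the vector $\gamma_{1}=\frac{1}{4}(3,-1,-1,-1)$ is the first fundamental weight of $A_{3}$, so it lies in $A_{3}^{*}$:

```latex
% gamma_1 pairs integrally with the simple roots e_i - e_{i+1} of A_3:
\langle \gamma_{1}, e_{1}-e_{2}\rangle = \tfrac{1}{4}\bigl(3-(-1)\bigr) = 1,
\qquad
\langle \gamma_{1}, e_{2}-e_{3}\rangle = \langle \gamma_{1}, e_{3}-e_{4}\rangle = 0,
\qquad
\langle \gamma_{1}, \gamma_{1}\rangle = \tfrac{9+1+1+1}{16} = \tfrac{3}{4}.
```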
Case: $V_{1}\cong C_{5,3}G_{2,2}A_{1,1}$.
In this case, we choose $N=N(D_{4}^{6})$. $\tau$ acts as the product of two
2-cycles with the diagram automorphism of $D_{4}$ applied on all copies of
$D_{4}$ in $D_{4}^{6}$. The vector $\beta$ is given by
$\beta=\frac{1}{6}((1100)^{3},(1,2,-1,0),0,0)$. Moreover,
$(V_{N}^{g})_{1}\cong A_{1,1}A_{1,3}^{2}A_{1,6}G_{2,2}U(1)^{2}$.
Table 6. Orbifold construction associated with $6E$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$8$ | $A_{5,6}B_{2,3}A_{1,2}$ | $N(A_{3}^{8})$ | $\delta_{A_{3}}^{8}\times$(3-cycle)$^{2}$ | $\frac{1}{6}(\gamma_{1}^{3},(2\gamma_{1})^{3},-\gamma_{1},0)$ | $A_{1,2}A_{1,6}^{2}B_{2,3}U(1)^{3}$
$21$ | $C_{5,3}G_{2,2}A_{1,1}$ | $N(D_{4}^{6})$ | $\delta_{D_{4}}^{6}\times$(2-cycle)$^{2}$ | $\frac{1}{6}((1100)^{3},(12-10),0,0)$ | $A_{1,1}A_{1,3}^{2}A_{1,6}G_{2,2}U(1)^{2}$
For isometries with the same frame shape as a $6G$ element, there are also two
cases.
Case: $V_{1}\cong D_{4,12}A_{2,6}$.
In this case, we choose $N=N(A_{2}^{12})$. $\tau$ acts as the product of a
3-cycle and a 6-cycle, composed with the diagram automorphism of $A_{2}$ on the
6 copies of $A_{2}$ on which the 6-cycle acts. The vector $\beta$ is given
by $\beta=\frac{1}{6}(0^{9},(10-1)^{3})$. Moreover, $(V_{N}^{g})_{1}\cong
A_{2,6}A_{1,12}U(1)^{3}$.
Case: $V_{1}\cong F_{4,6}A_{2,2}$.
In this case, we choose $N=N(A_{6}^{4})$. $\tau$ acts as the product of a
3-cycle with the diagram automorphism of $A_{6}$ applied on all 4 copies of $A_{6}$.
The vector $\beta$ is given by $\beta=\frac{1}{6}(0^{3},(1^{3},0,-1^{3}))$.
Moreover, $(V_{N}^{g})_{1}\cong B_{3,6}A_{2,2}U(1)$.
Table 7. Orbifold construction associated with $6G$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$3$ | $D_{4,12}A_{2,6}$ | $N(A_{2}^{12})$ | 3-cycle$\cdot$ 6-cycle$\times\delta_{A_{2}}^{6}$ | $\frac{1}{6}(0^{9},(10-1)^{3})$ | $A_{2,6}A_{1,12}U(1)^{3}$
$14$ | $F_{4,6}A_{2,2}$ | $N(A_{6}^{4})$ | 3-cycle$\times\delta_{A_{6}}^{4}$ | $\frac{1}{6}(0^{3},(1^{3},0,-1^{3}))$ | $B_{3,6}A_{2,2}U(1)$
For isometries with the frame shape of $10F$, there is only one possible Lie
algebra.
Case: $V_{1}\cong C_{4,10}$.
In this case, we choose $N=N(A_{4}^{6})$ and $\tau$ acts as the product of a
5-cycle with the diagram automorphism of $A_{4}$ applied on all $6$ copies of
$A_{4}$. The vector $\beta$ is given by
$\beta=\frac{1}{10}(0^{5},(2,1,0,-1,-2))$. Moreover, $(V_{N}^{g})_{1}\cong
C_{2,10}U(1)^{2}$.
Table 8. Orbifold construction associated with $10F$
No. | $\mathfrak{g}=V_{1}$ | Niemeier lattice $N$ | $\tau$ | $\beta$ | $(V_{N}^{g})_{1}$
---|---|---|---|---|---
$4$ | $C_{4,10}$ | $N(A_{4}^{6})$ | 5-cycle$\times\delta_{A_{4}}^{6}$ | $\frac{1}{10}(0^{5},(2,1,0,-1,-2))$ | $C_{2,10}U(1)^{2}$
###### Remark 4.5.
For a root $\alpha$ of $V_{1}$, let $s_{\alpha}$ be the corresponding
reflection in $W(V_{1})$. We use $\psi_{\alpha}$ to denote a lift of
$s_{\alpha}$ in $\mathrm{Stab}_{\operatorname{Aut}(V)}(\mathfrak{h})$. Then
$\psi_{\alpha}^{2}=\exp(2\pi\sqrt{-1}\gamma_{(0)})$ for some
$\gamma\in\mathfrak{h}$. Up to conjugation by an element in
$\\{\exp(a_{(0)})\mid a\in\mathfrak{h}\\}$, we may assume $\gamma$ is fixed by
$s_{\alpha}$ (cf. [LS20a, Lemma 4.5]). Set $u=-\gamma/2$. Then $u$ is also
fixed by $s_{\alpha}$ and
$\begin{split}(\exp(2\pi\sqrt{-1}u_{(0)})\psi_{\alpha})^{2}=&\exp(2\pi\sqrt{-1}u_{(0)})\psi_{\alpha}\exp(2\pi\sqrt{-1}u_{(0)})\psi_{\alpha}\\\
=&\exp(2\pi\sqrt{-1}u_{(0)})\psi_{\alpha}\exp(2\pi\sqrt{-1}u_{(0)})\psi_{\alpha}^{-1}\psi_{\alpha}^{2}\\\
=&\exp(2\pi\sqrt{-1}u_{(0)})\exp(2\pi\sqrt{-1}s_{\alpha}u_{(0)})\exp(2\pi\sqrt{-1}\gamma_{(0)})=1.\end{split}$
Therefore, we may choose a lift such that $\psi_{\alpha}$ is an involution.
For our choices of $(N,g)$, there is always a root $\alpha$ such that
$\psi_{\alpha}((V_{N}^{g})_{1})\neq(V_{N}^{g})_{1}$.
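For the reader's convenience, the final equality in the display of Remark 4.5 can be seen as follows: since $u=-\gamma/2$ is fixed by $s_{\alpha}$,

```latex
u + s_{\alpha}(u) + \gamma = -\tfrac{\gamma}{2} - \tfrac{\gamma}{2} + \gamma = 0,
```

so the product of the three exponentials equals $\exp(2\pi\sqrt{-1}\,(u+s_{\alpha}u+\gamma)_{(0)})=1$.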
## 5\. Unitary form
In this section, we will study the unitary form for holomorphic VOAs of
central charge $24$. First we recall a theorem from [DLin14], which is about
the unitary form for $\mathbb{Z}_{2}$ simple current extensions of unitary
VOAs.
###### Theorem 5.1 ([DLin14, Theorem 3.3]).
Let $(V,\varphi)$ be a rational and $C_{2}$-cofinite unitary self-dual vertex
operator algebra and $M$ a simple current irreducible $V$-module having
integral weights. Assume that $M$ has an anti-linear map $\psi$ such that
$\psi(v_{n}w)=\varphi(v)_{n}\psi(w)$ and $\psi^{2}=id$,
$(\psi(w_{1}),\psi(w_{2}))_{M}=(w_{1},w_{2})_{M}$ and the Hermitian form
$(\cdot,\cdot)_{V}$ on $V$ has the property that
$(\varphi(v_{1}),\varphi(v_{2}))_{V}=(v_{1},v_{2})_{V}$. Then
$(U,\varphi_{U})$ with $U=V\oplus M$ has a unique unitary vertex operator algebra structure,
where $\varphi_{U}:U\to U$ is the anti-linear involution defined by
$\varphi_{U}(v,w)=(\varphi(v),\psi(w))$, for $v\in V,w\in M$. Furthermore, $U$
is rational and $C_{2}$-cofinite.
By Theorem 5.1, all holomorphic VOAs which can be constructed by a
$\mathbb{Z}_{2}$-orbifold construction from a lattice VOA are unitary. As a
consequence, we have the following result.
###### Theorem 5.2.
Let $V$ be a holomorphic VOA of central charge $24$ with the weight one Lie
algebra isomorphic to one of the Lie algebras in Table 9. Then $V$
is unitary.
Table 9. Weight one Lie algebras of holomorphic VOAs of central charge $24$ associated with $\mathbb{Z}_{2}$ orbifolds
Class | $\\#$ of $V$ | Weight one Lie algebra structures
---|---|---
$2A$ | $17$ | $A_{1,2}^{16}$, $A_{3,2}^{4}A_{1,1}^{4}$, $D_{4,2}^{2}B_{2,1}^{4}$, $A_{5,2}^{2}C_{2,1}A_{2,1}^{2}$, $D_{5,2}^{2}C_{2,1}A_{2,1}^{2}$, $A_{7,2}C_{3,1}^{2}A_{3,1}$,
| | $C_{4,1}^{4}$, $D_{6,2}C_{4,1}B_{3,1}^{2}$, $A_{9,2}A_{4,1}B_{3,1}$, $E_{6,2}C_{5,1}A_{5,1}$, $D_{8,2}B_{4,1}^{2}$, $C_{6,1}^{2}B_{4,1}$,
| | $D_{9,2}A_{7,1}$, $C_{8,1}F_{4,1}^{2}$, $E_{7,2}B_{5,1}F_{4,1}$, $C_{10,1}B_{6,1}$, $B_{8,1}E_{8,2}$
$2C$ | $9$ | $A_{1,4}^{12}$, $B_{2,2}^{6}$, $B_{3,2}^{4}$, $B_{4,2}^{3}$, $B_{6,2}^{2}$, $B_{12,2}$, $D_{4,4}A_{2,2}^{4}$, $C_{4,2}A_{4,2}^{2}$, $A_{8,2}F_{4,2}$
### 5.1. Other orbifold constructions
Next we consider other orbifold constructions. The proof of the following
theorem is essentially the same as [CLS18, Theorem 4.8] with some necessary
modifications. For completeness, we include the proof here.
###### Theorem 5.3.
Let $V$ be a self-dual, simple VOA of CFT-type. Assume that $V$ has two
commuting automorphisms $f$ and $h$ of order $p$. For $i,j\in\mathbb{Z}$, set
$V^{i,j}=\\{v\in V\mid f(v)=\xi^{i}v,\ h(v)=\xi^{j}v\\}$, where
$\xi=\exp(2\pi\sqrt{-1}/p)$. Set $V^{i}=\bigoplus_{j=0}^{p-1}V^{i,j}$. Assume
the following:
1. (A)
There exists an anti-linear involution $\phi$ of $V^{0}$ such that
$(V^{0},\phi)$ is a unitary VOA;
2. (B)
For $i\in\\{1,\dots,p-1\\}$, $V^{i}$ is a unitary $(V^{0},\phi)$-module;
3. (C)
There exists an automorphism $\psi\in\operatorname{Aut}(V)$ such that
$\psi^{-1}f\psi=h$;
4. (D)
$\psi(V^{0,0})=V^{0,0}$ and $\psi\phi\psi^{-1}=\phi$ on $V^{0,0}$.
Then there exists an anti-linear involution $\Phi$ of $V$ such that $(V,\Phi)$
is a unitary VOA.
###### Remark 5.4.
Let $i,j,k,\ell\in\mathbb{Z}$. Then
$\operatorname{Span}_{\mathbb{C}}\\{u_{n}v\mid u\in V^{i,j},\ v\in
V^{k,\ell},\ n\in\mathbb{Z}\\}=V^{i+k,j+\ell}$. Note also that
$V^{i,j}=V^{i+pk,j+p\ell}$ and $V=\bigoplus_{0\leq i,j\leq p-1}V^{i,j}$ is
$\mathbb{Z}_{p}^{2}$-graded.
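The span identity in Remark 5.4 reflects that $f$ and $h$ are automorphisms; for instance, for $u\in V^{i,j}$, $v\in V^{k,\ell}$ and $n\in\mathbb{Z}$,

```latex
f(u_{n}v) = f(u)_{n}\,f(v) = \xi^{i+k}\,u_{n}v,
\qquad
h(u_{n}v) = h(u)_{n}\,h(v) = \xi^{j+\ell}\,u_{n}v,
```

which shows the containment $u_{n}v\in V^{i+k,j+\ell}$; that the products span all of $V^{i+k,j+\ell}$ uses the simplicity of $V$.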
Let $V$ be a VOA satisfying the assumptions of Theorem 5.3. Let
$(\cdot,\cdot)_{V^{0}}$ be the positive-definite invariant Hermitian form on
$V^{0}$ normalized so that $(\mathds{1},\mathds{1})_{V^{0}}=1$. Let
$\langle\cdot,\cdot\rangle$ be the normalized symmetric invariant bilinear
form on $V$ such that $\langle\mathds{1},\mathds{1}\rangle=1$. Note that
$(u,v)_{V^{0}}=\langle u,\phi(v)\rangle$ for $u,v\in V^{0}$ (cf. Remark 2.2).
By the assumption (C), $\psi(V^{0})=V^{0,0}\oplus V^{1,0}\oplus\cdots\oplus
V^{p-1,0}$ is also a unitary VOA with the anti-linear automorphism
$\psi\phi\psi^{-1}$ and a positive-definite invariant Hermitian form defined
by
$(a,b)_{\psi(V^{0})}=(\psi^{-1}(a),\psi^{-1}(b))_{V^{0}}\quad\text{for}\quad
a,b\in\psi(V^{0}).$ (5-1)
Note that $\psi\phi\psi^{-1}=\phi$ on $V^{0,0}$ by Assumption (D).
By Lemma 2.5, a positive-definite invariant Hermitian form on the unitary
$(V^{0,0},\phi)$-module $V^{i,0}$ is unique up to scalar for each
$i=1,\dots,p-1$. We may choose a positive-definite invariant Hermitian form
$(\cdot,\cdot)_{V^{i}}$ on $V^{i}$ so that
$(u,v)_{V^{i}}=(u,v)_{\psi(V^{0})}\quad\text{for}\quad u,v\in V^{i,0}.$ (5-2)
By Lemma 2.4, there exists an anti-linear bijective map $\Phi^{i}:V^{i}\to
V^{p-i}$ such that $\Phi^{i}(a_{n}v)=\phi(a)_{n}\Phi^{i}(v)$ for $a\in V^{0}$
and $v\in V^{i}$ and
$(u,v)_{V^{i}}=\langle u,\Phi^{i}(v)\rangle\quad\text{for}\quad u,v\in V^{i}.$
(5-3)
By (5-1), (5-2) and (5-3), for any $u,v\in V^{i,0}$, we have
$\langle
u,\Phi^{i}(v)\rangle=(u,v)_{V^{i}}=(\psi^{-1}(u),\psi^{-1}(v))_{V^{0}}=\langle\psi^{-1}(u),\phi\psi^{-1}(v)\rangle=\langle
u,\psi\phi\psi^{-1}(v)\rangle.$
Hence
$\psi\phi\psi^{-1}=\Phi^{i}\quad\text{on}\quad V^{i,0}.$ (5-4)
Since the order of $\phi$ is $2$, both the composition maps
$\Phi^{p-i}\circ\Phi^{i}$ and $\Phi^{i}\circ\Phi^{p-i}$ are the identity map
on $V^{i,0}$ by (5-4). Viewing $V^{p-i}$ as an irreducible unitary
$(V^{0},\phi)$-module, we have $\Phi^{p-i}=(\Phi^{i})^{-1}$ on $V^{p-i}$ by
the same argument as in the proof of Lemma 2.5.
Now, we define the anti-linear map $\Phi:V\to V$ so that
$\displaystyle\Phi(u)=\begin{cases}\phi(u)&\text{for}\ u\in V^{0},\\\
\Phi^{i}(u)&\text{for}\ u\in V^{i},\ i=1,\dots,p-1,\end{cases}$
and the positive-definite Hermitian form $(\cdot,\cdot)$ on $V$ by
$\displaystyle(u,v)=\begin{cases}(u,v)_{V^{i}}&\text{ if }u,v\in V^{i},\
i=0,1,\dots,p-1,\\\ 0&\text{ if }u\in V^{i},\ v\in V^{j},\ i\neq
j.\end{cases}$
Clearly, $\Phi$ is bijective and $\Phi\circ\Phi$ is the identity of $V$. We
will show that $(V,\Phi)$ is unitary.
###### Lemma 5.5.
1. (1)
For $i,j\in\\{0,1,\dots,p-1\\}$, $\Phi(V^{i,j})=V^{p-i,p-j}$.
2. (2)
For $u,v\in V$, $(u,v)=\langle u,\Phi(v)\rangle$.
###### Proof.
Clearly, $(V^{0,0},\phi)$ is a simple unitary VOA and $V^{i,j}$ is an
irreducible unitary $(V^{0,0},\phi)$-module. Hence by Lemma 2.4 (2) and Lemma
2.5, the map $\Phi$ sends $V^{i,j}$ to the submodule of $V^{p-i,p-j}$
isomorphic to the contragredient module of $V^{i,j}$, which is $V^{p-i,p-j}$
by Remark 5.4 (4). Hence we obtain (1).
Let $u\in V^{i}$, $v\in V^{j}$ with $i\neq j$. Clearly $i\not\equiv j\pmod{p}$. By
(1), we have $\Phi(v)\in V^{p-j}$. Since the contragredient module of $V^{i}$
is not isomorphic to $V^{p-j}$, we have $\langle u,\Phi(v)\rangle=0$. By (5-3)
and the definition of the form $(\cdot,\cdot)$, we obtain (2). ∎
###### Proposition 5.6.
The anti-linear map $\Phi$ is an anti-linear involution of $V$.
###### Proof.
Since $\phi$ is an anti-linear automorphism of $V^{0}$, $\Phi$ fixes the
vacuum vector and the conformal vector of $V$. Since $(V^{0},\phi)$ is
unitary, the equation
$\Phi(u_{n}v)=\Phi(u)_{n}\Phi(v)$ (5-5)
holds for $u,v\in V^{0}$ and $n\in\mathbb{Z}$. By the definition of $\Phi^{i}$
for $i=1,\dots,p-1$, (5-5) holds for $u\in V^{0}$ and $v\in V^{i}$. By the skew
symmetry, we have
$u_{n}v=(-1)^{n+1}v_{n}u+\sum_{i\geq
1}\frac{(-1)^{n+i+1}}{i!}L(-1)^{i}(v_{n+i}u)$
for $u,v\in V$ and $n\in\mathbb{Z}$. Hence the equation (5-5) also holds for
$u\in V^{i}$ and $v\in V^{0}$.
Let $x\in V^{0,j}$, $y\in V^{i,0}$ and $u\in V^{k,\ell}$. By Borcherds’
identity, for $r,q\in\mathbb{Z}$,
$(x_{r}y)_{q}u=\sum_{i=0}^{\infty}(-1)^{i}\binom{r}{i}\left(x_{r-i}(y_{q+i}u)-(-1)^{r}y_{q+r-i}(x_{i}u)\right).$
By the assumptions on $x$ and $y$ and the identity above, we have
$\Phi((x_{r}y)_{q}u)=(\Phi(x)_{r}\Phi(y))_{q}\Phi(u)=\Phi(x_{r}y)_{q}\Phi(u).$
By Remark 5.4 (2), we obtain $\Phi(u_{n}v)=\Phi(u)_{n}\Phi(v)$ for all $u,v\in
V$ and $n\in\mathbb{Z}$. ∎
By Lemma 5.5 (2), the invariant property of $\langle\cdot,\cdot\rangle$ and
Proposition 5.6, we obtain the following proposition:
###### Proposition 5.7.
The positive-definite Hermitian form $(\cdot,\cdot)$ on $V$ satisfies the invariant
property for $(V,\Phi)$.
Combining Propositions 5.6 and 5.7, we have proved Theorem 5.3.
### 5.2. Unitary forms for holomorphic VOAs of central charge $24$
As we discussed in Section 4.2, every holomorphic VOA of central charge $24$
with $V_{1}\neq 0$ can be constructed by a single orbifold construction from a
Niemeier lattice VOA.
Let $(N,g)$ be a pair of a Niemeier lattice and an automorphism of $V_{N}$ as
described in Section 4.2 such that $V\cong\widetilde{V_{N}}(g)$. Then
$V=V_{N}^{g}\oplus V_{N}[g]_{0}\oplus\cdots\oplus V_{N}[g^{p-1}]_{0},$
where $V_{N}[g^{i}]$ denotes the irreducible $g^{i}$-twisted module of
$V_{N}$.
Let $L$ be the even lattice such that
$V_{L}\cong\mathrm{Com}_{V}(\mathrm{Com}_{V}(M(\mathfrak{h})))$, where
$\mathfrak{h}$ is a Cartan subalgebra of $V_{1}$ and suppose
$g=\hat{\tau}\exp(2\pi i\beta(0))\in\operatorname{Aut}(V_{N})$. Then $L\cong
N_{\beta}^{\tau}$ and $V_{N}^{g}>V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}}$.
Recall that
$V_{N}=\bigoplus_{\lambda+N^{\tau}\in(N^{\tau})^{*}/N^{\tau}}V_{\lambda+N^{\tau}}\otimes
V_{\lambda^{\prime}+N_{\tau}}.$
Then
$V_{N}^{g}=\bigoplus_{\lambda+N^{\tau}\in(N^{\tau})^{*}/N^{\tau}}\left(V_{\lambda+N^{\tau}}\otimes
V_{\lambda^{\prime}+N_{\tau}}\right)^{g}=\bigoplus_{\lambda+L\in(N^{\tau})^{*}/L}V_{\lambda+L}\otimes
W_{\lambda}<V,$
where $W_{\lambda}$, $\lambda+L\in(N^{\tau})^{*}/L$, are irreducible
$V_{N_{\tau}}^{\hat{\tau}}$-modules and eigenspaces of $\hat{\tau}$ on
$V_{\lambda^{\prime}+N_{\tau}}$.
Define $f\in\operatorname{Aut}(V)$ so that $f$ acts on $V_{N}[g^{i}]_{0}$ as
multiplication by the scalar $\xi^{i}$. Then $V^{f}=V_{N}^{g}$ and there is a
$\gamma\in\mathbb{Q}\otimes_{\mathbb{Z}}N^{\tau}$ such that
$\langle\gamma|\beta\rangle\notin\mathbb{Z}$ and $f=\exp(2\pi i\gamma(0))$. As
we discussed in Section 4.2, the Lie subalgebra $(V_{N}^{g})_{1}$ is a proper
subalgebra of $V_{1}$ and $(V_{N}^{g})_{1}$ is non-abelian. By Lemma 4.2,
Proposition 4.4 and Remark 4.5, there is a root of $V_{1}$ and a lift
$\psi_{\alpha}\in\mathrm{Stab}_{\operatorname{Aut}(V)}(V_{L}\otimes W)$ of a
reflection $s_{\alpha}\in W(V_{1})$ such that
$\psi_{\alpha}((V_{N}^{g})_{1})\neq(V_{N}^{g})_{1}$ and $\psi_{\alpha}^{2}=1$.
For simplicity, we use $w$ and $\psi$ to denote $s_{\alpha}$ and
$\psi_{\alpha}$, respectively.
Define $h=\psi f\psi^{-1}$. Then $h=\exp(2\pi iw(\gamma)(0))$ and it is clear
that both $f$ and $h$ fix $V_{L}\otimes V_{\Lambda_{\tau}}^{\hat{\tau}}$
pointwise. Since all irreducible modules for $V_{L}\otimes
V_{\Lambda_{\tau}}^{\hat{\tau}}$ are simple current modules, the subgroup of
$\operatorname{Aut}(V)$ that fixes $V_{L}\otimes
$V_{\Lambda_{\tau}}^{\hat{\tau}}$ pointwise is a finite abelian group. In
particular, $[f,h]=1$. Moreover, we have
$V^{0,0}=V^{\langle f,h\rangle}=\bigoplus_{\lambda+L\in J/L}V_{\lambda+L}\otimes
W_{\lambda},$
where $J=\\{\lambda\in
L^{*}\mid\langle\lambda,\gamma\rangle\in\mathbb{Z},\langle\lambda,w(\gamma)\rangle\in\mathbb{Z}\\}$.
###### Lemma 5.8.
We have $w(J)=J$ and $\psi(V^{0,0})=V^{0,0}$.
###### Proof.
Let $\lambda\in J$. Then $\langle
w(\lambda)|w(\gamma)\rangle=\langle\lambda|\gamma\rangle\in\mathbb{Z}$ and
$\langle w(\lambda)|\gamma\rangle=\langle
w^{2}(\lambda)|w(\gamma)\rangle=\langle\lambda|w(\gamma)\rangle\in\mathbb{Z}$.
Thus $w(\lambda)\in J$ for any $\lambda\in J$ and we have the desired result.
∎
###### Lemma 5.9.
Let $X$ be a sublattice of $N$ such that $P_{0}(X)=J$. Then $V^{0,0}<V_{X}$
and $\psi$ can be considered as a lift of an isometry of $X$ in
$\operatorname{Aut}(V_{X})$. In particular, we have $\psi\phi\psi^{-1}=\phi$
on $V^{0,0}<V_{X}$.
###### Proof.
We first note that $J=(N^{\tau})^{*}\cap w((N^{\tau})^{*})>L$. Then
$V_{X}=\bigoplus_{\lambda+L\in J/L}V_{\lambda+L}\otimes
V_{\lambda^{\prime}+N_{\tau}}.$
Since $W_{\lambda}<V_{\lambda^{\prime}+N_{\tau}}$, we have $V^{0,0}<V_{X}$.
Since $w(X)<X$, $w$ defines an isometry of $X$ and thus
$\psi\phi\psi^{-1}=\phi$ on $V_{X}$. We have $\psi\phi\psi^{-1}=\phi$ on
$V^{0,0}$ as desired. ∎
Therefore, $V$, $f$ and $h$ satisfy the conditions in Theorem 5.3 and the main
theorem follows.
###### Theorem 5.10.
Any (strongly regular) holomorphic vertex operator algebra of central charge
$24$ and with non-trivial weight one subspace is unitary.
## References
* [BLS] K. Betsumiya, C.H. Lam and H. Shimakura, Automorphism groups and uniqueness of holomorphic vertex operator algebras of central charge $24$, to appear in _Comm. Math. Phys._ ; arXiv:2203.15992
* [Bo86] R.E. Borcherds, Vertex algebras, Kac-Moody algebras, and the Monster, _Proc. Nat’l. Acad. Sci. U.S.A._ 83 (1986), 3068–3071.
* [CGGH] S. Carpi, T. Gaudio, L. Giorgetti, and R. Hillier, Haploid Algebras in $C^{*}$-tensor categories and the Schellekens list; arXiv:2211.12790.
* [CKLW] S. Carpi, Y. Kawahigashi, R. Longo and M. Weiner, From vertex operator algebras to conformal nets and back, _Mem. Amer. Math. Soc._ , 254 (2018), no. 1213, vi+85 pp.
* [CCN+85] J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson, _Atlas of finite groups_ , Oxford University Press, Eynsham, 1985, Maximal subgroups and ordinary characters for simple groups, With computational assistance from J. G. Thackray.
* [CLS18] H.Y. Chen, C.H. Lam and H. Shimakura, On $\mathbb{Z}_{3}$-orbifold construction of the Moonshine vertex operator algebra, _Math. Z._ 288 (2018), 75–100.
* [CLM22] N. Chigira, C.H. Lam, M. Miyamoto, Orbifold construction and Lorentzian construction of Leech lattice vertex operator algebra, _J. Algebra_ 593 (2022), 26–71; arXiv:2104.03098.
* [CS83] J. H. Conway and N. J. A. Sloane, The Coxeter-Todd lattice, the Mitchell group, and related sphere packings, _Math. Proc. Cambridge Philos. Soc._ 93 (1983), 421–440.
* [DG02] C. Dong, R.L. Griess, Jr., Automorphism groups and derivation algebras of finitely generated vertex operator algebras, _Michigan Math. J._ 50 (2002), 227–239.
* [DGL07] C. Dong, R.L. Griess and C.H. Lam, Uniqueness results for the moonshine vertex operator algebra, _Amer. J. Math._ 129 (2007), 583–609.
* [DL96] C. Dong and J. Lepowsky, The algebraic structure of relative twisted vertex operators, _J. Pure Appl. Algebra_ 110 (1996), 259–295.
* [DLM00] C. Dong, H. Li, and G. Mason, Modular-invariance of trace functions in orbifold theory and generalized Moonshine, _Comm. Math. Phys._ 214 (2000), 1–56.
* [DLin14] C. Dong and X.J. Lin, Unitary vertex operator algebras, _J. Algebra_ 397 (2014), 252–277.
* [DN99] C. Dong and K. Nagatomo, Automorphism groups and twisted modules for lattice vertex operator algebras, in Recent developments in quantum affine algebras and related topics (Raleigh, NC, 1998), 117–133, _Contemp. Math._ , 248, Amer. Math. Soc., Providence, RI, 1999.
* [ELMS21] J. van Ekeren, C.H. Lam, S. Moller and H. Shimakura, Schellekens’ List and the Very Strange Formula, _Adv. Math._ , 380 (2021), 107567.
* [FHL93] I.B. Frenkel, Y. Huang and J. Lepowsky, On axiomatic approaches to vertex operator algebras and modules, _Mem. Amer. Math. Soc._ 104 (1993), viii+64 pp.
* [FLM88] I.B. Frenkel, J. Lepowsky, and A. Meurman, _Vertex operator algebras and the monster_ , Pure and Appl. Math., vol. 134, Academic Press, Boston, 1988.
* [Hö2] G. Höhn, On the Genus of the Moonshine Module, arXiv:1708.05990.
* [HM] G. Höhn and S. Möller, Systematic Orbifold Constructions of Schellekens’ Vertex Operator Algebras from Niemeier Lattices, to appear in _J. Lond. Math. Soc._ ; arXiv:2010.00849.
* [HKL15] Y. Z. Huang, A. Kirillov Jr., J. Lepowsky, Braided tensor categories and extensions of vertex operator algebras, _Commun. Math. Phys._ 337 (2015),1143-1159.
* [KRR13] V.G. Kac, A.K. Raina and N. Rozhkovskaya, Bombay lectures on highest weight representations of infinite dimensional Lie algebras. Second edition. Advanced Series in Mathematical Physics, 29. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2013.
* [La20] C.H. Lam, Cyclic orbifolds of lattice vertex operator algebras having group-like fusions, _Lett. Math. Phys._ 110 (2020), 1081–1112.
* [LM] C. H. Lam and M. Miyamoto, A lattice theoretical interpretation of generalized deep holes of the Leech lattice vertex operator algebra, arXiv:2205.04681.
* [LS20a] C.H. Lam and H. Shimakura, On orbifold constructions associated with the Leech lattice vertex operator algebra, _Mathematical Proceedings of the Cambridge Philosophical Society_ , 168 (2020), 261-285.
* [LS20b] C. H. Lam and H. Shimakura, Inertia groups and uniqueness of holomorphic vertex operator algebras, _Transformation groups_ , 25 (4), 1223-1268, 2020.
* [Le85] J. Lepowsky, Calculus of twisted vertex operators, _Proc. Nat. Acad. Sci. U.S.A._ , 82 (1985) 8295–8299
* [Li94] H. Li, Symmetric invariant bilinear forms on vertex operator algebras, _J. Pure Appl. Algebra_ 96 (1994), 279–297.
* [Mi13] M. Miyamoto, _A $\mathbb{Z}_{3}$-orbifold theory of lattice vertex operator algebra and $\mathbb{Z}_{3}$-orbifold constructions_, Symmetries, integrable systems and representations, Springer Proceedings in Mathematics & Statistics, vol. 40, Springer, Heidelberg, 2013, pp. 319–344.
* [MS] S. Moller, N.R. Scheithauer, Dimension formulae and generalised deep holes of the Leech lattice vertex operator algebra, to appear in _Ann. of Math._ ; arXiv:1910.04947.
* [Sc93] A.N. Schellekens, Meromorphic $c=24$ conformal field theories, _Comm. Math. Phys._ 153 (1993), 159–185.
* [Sh07] H. Shimakura, Lifts of automorphisms of vertex operator algebras in simple current extensions, _Math. Z._ 256 (2007), no. 3, 491–508.
|
# Generalized Face Anti-Spoofing via Multi-Task Learning
and One-Side Meta Triplet Loss
Chu-Chun Chuang1, Chien-Yi Wang2,and Shang-Hong Lai1,2
1 National Tsing Hua University, Taiwan 2 Microsoft AI R&D Center, Taiwan
###### Abstract
With the increasing variations of face presentation attacks, model
generalization becomes an essential challenge for a practical face anti-
spoofing system. This paper presents a generalized face anti-spoofing
framework that consists of three tasks: depth estimation, face parsing, and
live/spoof classification. With the pixel-wise supervision from the face
parsing and depth estimation tasks, the regularized features can better
distinguish spoof faces. While simulating domain shift with meta-learning
techniques, the proposed one-side triplet loss can further improve the
generalization capability by a large margin. Extensive experiments on four
public datasets demonstrate that the proposed framework and training
strategies are more effective than previous works for model generalization to
unseen domains.
979-8-3503-4544-5/23/$31.00 © 2023 IEEE
## I INTRODUCTION
Owing to the progressive development of face recognition techniques,
information security has become integral to recognition systems. Face
anti-spoofing greatly reduces the probability of accessing such systems with
presentation attacks.
Over the past few years, face anti-spoofing methods have roughly been divided
into appearance-based and temporal-based methods. However, the performance of
these works degrades significantly when there are large discrepancies between
the distributions of the testing and training data (sensors, environments,
and spoofing media). Domain generalization therefore becomes crucial for the
face anti-spoofing task. Several previous works [9, 19, 20, 3, 13, 25] have
applied domain generalization techniques to face anti-spoofing. Adversarial
training and meta-learning frameworks were used to find generalized features
across different domains, with demonstrated performance improvements.
Meta-learning achieves generalization by repeatedly simulating domain-shift
scenarios during training, so the model learns to transfer between domains.
In this paper, we conduct a meta-learning process to improve generalization
and strengthen the optimization of inter-domain feature distributions.
Figure 1: Illustrated idea of the proposed face anti-spoofing system.
To find generalized features, exploiting multiple aspects of face information
may help. For example, the model can better understand face composition and
depth conditions with face parsing and depth information. A multi-task
framework is a natural way to realize this idea: the learned features then
encode varied facial information, and with this key facial knowledge we can
find more generalized facial features. Moreover, spoof regions do not always
cover the whole face; many attack types target local regions such as the
eyes, where understanding face parsing also helps. Thus, we employ face
parsing to strengthen the face understanding capability and enrich the
learned representations. Previous works [10, 11, 15] have also employed face
parsing to improve the understanding of faces. Depth is another important
piece of facial information. Pixel-wise binary masks are employed in many
works as supervision for training; [5] employs binary masks that indicate the
real region as pixel-wise supervision for partial attacks. Here we employ
binary depth masks as pixel-wise supervision. Since the depth maps of attacks
like print and replay are flat, leveraging depth features encourages the
model to understand faces better and regularizes the learning direction.
RFMetaFAS [20] conducts a meta-learning process to improve generalization via
domain-shift simulations, and employs depth information as prior knowledge to
regularize the learning direction. Inspired by RFMetaFAS [20] and the ideas
above, we propose a multi-task meta-learning framework that includes depth
estimation, face parsing, and spoof classification tasks. For the face
parsing task, we incorporate a U-net based face parsing module into the
framework, which provides pixel-wise supervision and strengthens the
understanding of different facial parts. However, meta-learning focuses on
domain-shift simulations that optimize the inter-domain distributions, so the
intra-domain distributions may not be well constrained. A triplet loss can
force similar data with different labels apart, which is crucial because
intra-domain distributions are usually close. Moreover, since spoof images
are more dispersed due to the large variation of attack media and collection
environments, our one-side triplet loss samples triplet anchors only from the
features of live images. We therefore apply the one-side triplet loss in the
meta-optimization procedure of the meta learner module. The one-side triplet
loss encourages the aggregation of live features and proves very effective
for domain generalization in face anti-spoofing.
The main contributions of our work are threefold:
1\. We introduce a multi-task meta-learning framework for learning more
generalized features for face anti-spoofing. The framework and training
techniques demonstrate superior performance over previous approaches in
various domain generalization protocols of face anti-spoofing.
2\. A U-net based face parsing module and an attention-based skip connection
are utilized to encode facial priors into the network and further promote the
feature generalization capability.
3\. During the meta-optimization procedure, we propose to use the one-side
triplet loss, which encourages the aggregation of live features and better
discriminates the diverse spoof features in unseen domains.
## II Related Works
### II-A Face Anti-spoofing
Based on the different aspects of face features they exploit, existing face
anti-spoofing methods can be roughly divided into two groups: temporal-based
and appearance-based methods.
#### II-A1 Temporal-based Methods
Temporal-based methods aim to exploit the temporal information from multiple
frames of face videos to distinguish live and spoof faces. Recently, several
temporal-based methods have been proposed. CNN-LSTM [27] extracts temporal
features from multiple face frames with a CNN-LSTM architecture for
anti-spoofing. Furthermore, [14] exploited rPPG signals in face videos as
useful information to distinguish spoof attacks from live faces.
#### II-A2 Appearance-based Methods
Comparatively, appearance-based methods aim to differentiate live and spoof
faces from spatial information in images due to the different textures and
features between live and spoof images. In recent years, CNN based methods
[28] have been employed for face anti-spoofing. Two-stream CNN-based method
[1] combines patch-based CNN and depth-based CNN models for face anti-
spoofing.
However, [1] learns the fused face representations by finding patch and depth
features separately, whereas our work directly leverages the face parsing and
depth tasks to regularize the learning direction. The face representations
extracted in our work contain both face parsing and depth information from
one common feature extractor, yielding more generalized features.
Regardless of whether they are temporal-based or appearance-based, the prior
methods mentioned above generalize poorly to unseen domains whose data
distributions differ from the training data. Thus, several works, including
this paper, exploit the concepts of domain generalization for face
anti-spoofing. We introduce these works in the next subsection.
### II-B Domain Generalization for Face Anti-Spoofing
Domain generalization (DG) aims to learn from several source domains and then
test on an unseen target domain. As mentioned in the introduction, domain
generalization has recently become an important challenge for face
anti-spoofing. Learning methods such as adversarial learning and
meta-learning have been adopted to improve generalization. MADDG [19] uses a
multi-adversarial discriminative deep domain generalization framework to
learn generalized features from multiple domains. SSDG [9] proposes
single-side domain generalization for face anti-spoofing with single-side
adversarial learning and an asymmetric triplet loss. RFMetaFAS [20] learns
generalized features via meta-learning and employs depth information as prior
knowledge. We also address domain generalization for face anti-spoofing with
a meta-learning-based method. Different from the others, our work employs a
multi-task framework to learn more complete and generalized facial features,
and optimizes the inter-domain and intra-domain distributions with
meta-learning and the one-side triplet loss, respectively.
## III Proposed Method
Our work aims to learn more robust features and improve the generalization
capability of the face anti-spoofing model. Meta-learning is effective for
domain generalization thanks to its domain-shift simulations. We propose a
multi-task framework with a one-side triplet loss applied during
meta-optimization. In the following, we first give an overview of the
proposed multi-task framework, followed by its components: multi-task
meta-learning, the U-net based face parsing module, the one-side triplet
loss, and the detailed objective functions.
### III-A Overview
Fig. 2 illustrates the overall framework. The proposed method is a multi-task
meta-learning based approach, which contains a feature extractor, a meta
learner for spoof classification, a depth estimator branch for depth
estimation, and a U-net based face parsing module for semantic segmentation on
faces.
Figure 2: Our framework contains depth estimation, face parsing, and meta-
learned spoof classification branches. Here we denote the feature extractor as
$F$, U-net based face parsing module as $S$, depth estimator as $D$, and meta
learner as $M$. Source images are divided into meta-train set and meta-test
set in each iteration. The depth estimator $D$ is to predict depth maps, which
serves as prior knowledge to regularize the model. U-net based face parsing
module $S$ aims to improve the generalization by encoding the semantic facial
priors with the attention-based skip-connection. In the end, features are fed
into the meta learner $M$ to conduct meta-optimization, which involves the
one-side triplet and classification losses.
We assume each training sample contains a face image _x_ with the
corresponding label _y_ , face depth image _d_ , and face parsing image _s_.
For the network, we denote the feature extractor as $F$, the U-net based face
parsing module as $S$, the depth estimator as $D$, and the meta learner as
$M$. The face image _x_ is fed into the feature extractor $F$ to extract a
feature vector _f_. After feature extraction, the depth estimator $D$
estimates the corresponding depth map _d_ from the feature vector _f_.
Simultaneously, the feature _f_ is fed into the U-net based face parsing
module $S$, which helps the model learn better representations by capturing
and strengthening face parsing information. Lastly, the aggregated feature
vector, concatenated from the feature _f_ and the face parsing feature, is
fed into the meta learner $M$ for live/spoof classification. In addition, we
apply the one-side triplet loss in the meta learner to improve the data
distribution in the feature space. We explain the details in the following
sections.
### III-B Multi-Task Meta Learning
We adopt the fine-grained learning strategy of RFMetaFAS [20]: during each
iteration we randomly choose one of the source domains as the meta-test
domain and use the others as meta-train domains. Assuming N source domains in
total, this gives N-1 meta-train domains and one meta-test domain, denoted
$D_{mtrn}$ and $D_{mtst}$, respectively. The meta-learning process contains
meta-train, meta-test, and meta-optimization stages. The meta learner
conducts meta-learning by exploiting a variety of domain-shift scenarios in
each iteration. Fig. 3 illustrates the detailed gradient flows of our
meta-learning process.
Figure 3: Illustration of the optimization process in meta learner. Given
$D_{1}$, $D_{2}$ as meta-train domains, and $D_{3}$ as meta-test domain.
$L_{mtrn}$, $L_{mtst}$, $L_{Cls}$, and $L_{Trip}$ denote the meta-train,
meta-test, spoof classification, and one-side triplet losses, respectively.
$\theta_{1}^{\prime}$ and $\theta_{2}^{\prime}$ are found by $D_{1}$ and
$D_{2}$, and $D_{3}$ is applied for finding learning directions by both
$\theta_{1}^{\prime}$ and $\theta_{2}^{\prime}$. Every domain division
contains a gradient and meta-gradient from meta-train and meta-test domains.
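The per-iteration domain split described above can be sketched in a few lines of framework-agnostic Python (the domain identifiers below are placeholders, not names from the authors' code):

```python
import random

def split_domains(source_domains, rng=random):
    """Randomly choose one source domain as the meta-test domain;
    the remaining N-1 domains become the meta-train domains."""
    meta_test = rng.choice(source_domains)
    meta_train = [d for d in source_domains if d != meta_test]
    return meta_train, meta_test

# A fresh split is drawn every training iteration, so each source
# domain repeatedly plays the role of the "unseen" domain.
```
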
#### III-B1 Meta-train
In the meta-train stage, data from each meta-train domain are sampled into
batches. We exploit the classification loss and the one-side triplet loss to
find learning directions. The parameters of the meta learner are first
updated by
$\theta_{M_{i}^{\prime}}=\theta_{M}-\alpha((\nabla_{\theta_{M}}L_{Cls(D_{mtrn}(i))}(\theta_{F},\theta_{M}))+(\nabla_{\theta_{M}}L_{Trip(D_{mtrn}(i))}(\theta_{F},\theta_{M})))$
for each meta-train domain, where $\theta_{F}$ and $\theta_{M}$ denote the
parameters of the feature extractor and the meta learner, $i\in{D_{mtrn}}$,
and $L_{Cls}$ and $L_{Trip}$ are the classification loss and the one-side
triplet loss, respectively. In addition, $L_{Dep}$ is adopted to regularize
the feature extractor: for live faces, the depth target is a face-shaped
depth map, while the depth maps of spoof faces are all set to zero. Moreover,
$L_{Seg}$ is adopted to learn face parsing information and find more
generalized representations.
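The update above is a plain gradient step on the summed classification and triplet losses. A minimal sketch, with parameters and gradients flattened to lists of floats (an illustrative simplification of the real tensors):

```python
def meta_train_update(theta_M, grad_cls, grad_trip, alpha=1e-3):
    """theta_M' = theta_M - alpha * (grad of L_Cls + grad of L_Trip),
    computed per meta-train domain."""
    return [t - alpha * (gc + gt)
            for t, gc, gt in zip(theta_M, grad_cls, grad_trip)]
```
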
#### III-B2 Meta-test
Meta-learning further enhances the generalization capability through
domain-shift simulations, in which the meta-test domain plays the role of the
unseen domain. Thus, the model should perform well on the meta-test domain
based on the learning directions found in the meta-train domains. The
parameters of the meta learner here are denoted $\theta_{M_{i}^{\prime}}$.
#### III-B3 Meta-Optimization
In meta-optimization, we optimize the whole model, including $\theta_{F}$,
$\theta_{M}$, $\theta_{S}$, and $\theta_{D}$, with the sum of the meta-train
and meta-test losses.
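Meta-optimization then takes one descent step on the shared parameters using the gradient of the summed meta-train and meta-test losses. A first-order sketch, again with parameters flattened to lists of floats (the real update backpropagates through the inner step as in Fig. 3):

```python
def meta_optimization_step(params, mtrn_grads, mtst_grads, lr=1e-3):
    """Sum the per-domain meta-train and meta-test gradients and take
    one descent step on all shared parameters (first-order sketch)."""
    total = [sum(gs) for gs in zip(*(mtrn_grads + mtst_grads))]
    return [p - lr * g for p, g in zip(params, total)]
```
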
### III-C U-net Based Face Parsing Module
The U-net based face parsing module aims to learn semantic facial priors and
discriminate spoof images with attention to different facial parts. Fig. 4
shows its architecture. The module contains two parts: a face parsing U-net
and an attention-based skip connection.
#### III-C1 Face Parsing U-net
The U-net is a commonly used architecture for semantic segmentation tasks. We
employ a U-net based architecture with an encoder-decoder structure. Both the
encoder and decoder contain three convolutional blocks, and skip connections
are added between them to aggregate segmentation information. The face
parsing U-net predicts the face parsing image _s_. The input of the face
parsing module is the face image _x_ , the same as for the feature extractor.
Specifically, since we want the face parsing information to regularize the
feature extractor, the encoder and the feature extractor share weights. After
obtaining the feature _f_ , the decoder outputs 13-dimensional face parsing
binary maps.
#### III-C2 Attention-Based Skip Connection
We apply the attention-based skip connection to encode semantic facial priors
into the main branch of the network. We take the last stage of the face
parsing decoder as the input to the attention-based module. Since specific
relations may exist between different channels of the face parsing feature,
we apply the channel attention module ECA-Net [24] to learn inter-channel
relations of the feature in the skip-connection branch. Finally, the output
feature is concatenated with the feature _f_ before being fed into the meta
learner.
Figure 4: The architecture of U-net based face parsing module. Our face
parsing module contains two parts: face parsing U-net and attention-based skip
connection. The encoder of the U-net includes three convolutional blocks,
which share weights with the feature extractor. Features after the encoder are
fed into the decoder and upsampled three times, and there are skip connections
to the corresponding convolutional blocks of the encoder. Apart from the
output of the 13-dimensional segmentation maps, there is another path, the
attention-based skip connection, which contains a convolutional block and an
ECA module to transfer important face parsing information to the main
network.
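The channel-attention step inside the skip connection can be sketched without a deep-learning framework as follows; the uniform convolution kernel stands in for ECA's learned weights, so this is only an illustrative approximation of ECA-Net [24], not its reference implementation:

```python
import math

def eca_attention(features, kernel_size=3):
    """ECA-style channel attention sketch. `features` is a list of
    channels, each a flat list of pixel values. Returns the
    channel-reweighted features."""
    # 1) squeeze: global average pooling per channel
    pooled = [sum(c) / len(c) for c in features]
    C = len(pooled)
    pad = kernel_size // 2
    padded = [0.0] * pad + pooled + [0.0] * pad
    # 2) excite: 1-D convolution across channels (uniform weights here
    #    stand in for the learned kernel -- an assumption of this sketch)
    weight = 1.0 / kernel_size
    conv = [sum(padded[i + j] * weight for j in range(kernel_size))
            for i in range(C)]
    gates = [1.0 / (1.0 + math.exp(-v)) for v in conv]
    # 3) rescale every channel by its sigmoid gate
    return [[x * g for x in ch] for ch, g in zip(features, gates)]
```
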
### III-D One-Side Triplet Loss
Here we propose the one-side triplet loss to combine the inter-domain
advantage of meta-learning with the intra-domain advantage of the triplet
loss. First, the triplet loss is suitable for face anti-spoofing because it
aggregates samples of the same class and separates different classes, while
meta-learning simulates domain shift and thus improves generalization in
unseen domains. However, meta-learning mainly focuses on the inter-domain
distribution. Hence, we apply the one-side triplet loss to both the
meta-train and meta-test domains; since meta-learning already emphasizes
domain-shift directions for unseen domains, an inter-domain triplet loss
would be redundant. Second, different from the normal triplet loss, the
one-side triplet loss only aggregates live data, for the following reason.
Since there are countless kinds of attacks with variations in cameras,
illuminations, etc., spoof data are more widely distributed than live data.
Moreover, since spoof domains can keep being divided into smaller spoof
subdomains, the distributions of different spoof datasets should partially
overlap rather than be completely independent, so it is unnecessary to
separate different spoof domains; separating the live and spoof domains is
the critical task. Therefore, different from SSDG [9], we propose the
one-side triplet loss, which takes anchors only from the live domain and
takes positive and negative samples from both the live and spoof domains.
Because the spoof domains are widely distributed, extreme samples are easily
picked when computing the one-side triplet loss. To avoid such interference,
the triplet mining strategy becomes crucial. Hence, we apply a two-stage
mining triplet loss: a smaller margin is used first with the batch-all mining
strategy, optimizing all valid triplets, and then we increase the margin with
the batch-hard mining strategy, which optimizes only the hardest triplet.
This yields more stable results. Equations (3) and (4) give the one-side
triplet loss $L_{Trip}$ in meta-train and meta-test.
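The loss and the two-stage mining can be sketched as follows, in pure Python over flat-list embeddings. We include the hinge max(·, 0) that triplet-loss implementations conventionally apply, and the function and argument names are ours, not the authors':

```python
def one_side_triplet_loss(live, spoof, margin=0.1, mining="batch_all"):
    """One-side triplet loss sketch: anchors come only from live
    embeddings; positives are other live embeddings, negatives are
    spoof embeddings."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    losses = []
    for i, a in enumerate(live):          # anchor: live only
        for j, p in enumerate(live):      # positive: another live sample
            if i == j:
                continue
            for n in spoof:               # negative: any spoof sample
                losses.append(max(sqdist(a, p) - sqdist(a, n) + margin, 0.0))
    if not losses:
        return 0.0
    if mining == "batch_all":             # stage 1: average all valid triplets
        valid = [l for l in losses if l > 0] or [0.0]
        return sum(valid) / len(valid)
    return max(losses)                    # "batch_hard": only the hardest triplet
```
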
### III-E Objective Function
Our network is trained with four different objective functions, i.e., the
classification loss, the one-side triplet loss, the segmentation loss, and
the depth loss, which we describe in detail in this subsection. Here we let
$J_{i}$ $(i=1,2,\ldots,N-1)$ denote the domains belonging to $D_{mtrn}$ and
${K}$ denote the domain belonging to $D_{mtst}$.
#### III-E1 Classification Loss
We adopt the binary cross-entropy classification loss for both the meta-train
and meta-test domains. Equations (1) and (2) give the classification losses
for each meta-train domain and the meta-test domain, respectively.
$L_{Cls(J_{i})}(\theta_{F},\theta_{M})=-\sum_{(x,y)\sim{J_{i}}}(y\log{M(F(x))}+(1-y)\log{(1-M(F(x)))})$ (1)
$\sum\limits_{i=1}^{N-1}L_{Cls({K})}(\theta_{F},\theta_{M_{i}^{\prime}})=-\sum\limits_{i=1}^{N-1}\sum\limits_{(x,y)\sim{K}}(y\log{M_{i}^{\prime}(F(x))}+(1-y)\log{(1-M_{i}^{\prime}(F(x)))})$ (2)
where $\theta_{F}$, $\theta_{M}$, and $\theta_{M_{i}^{\prime}}$ denote the
parameters of the feature extractor, the meta learner, and the meta learner
after the update from the $i$-th domain in meta-train. $x$ and $y$ are the
input and the corresponding label.
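A standard binary cross-entropy sketch matching Eq. (1), written with the conventional leading minus sign and averaged over the batch (the averaging is an implementation choice of this sketch):

```python
import math

def bce_loss(probs, labels, eps=1e-12):
    """Mean binary cross-entropy between the meta learner's live/spoof
    probabilities M(F(x)) and the labels y in {0, 1}."""
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)
```
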
#### III-E2 One-Side Triplet Loss
In order to combine with meta-learning, we adopt the one-side triplet loss in
our method. The one-side triplet losses for each meta-train domain and the
meta-test domain are given in (3) and (4), respectively.
$L_{Trip(J_{i})}(\theta_{F},\theta_{M})=\sum_{(x_{i}^{a},x_{i}^{p},x_{i}^{n})}(\|{M(F(x_{i}^{a}))}-{M(F(x_{i}^{p}))}\|_{2}^{2}-\|M(F(x_{i}^{a}))-{M(F(x_{i}^{n}))}\|_{2}^{2}+\alpha)$ (3)
$\sum\limits_{i=1}^{N-1}L_{Trip({K})}(\theta_{F},\theta_{M_{i}^{\prime}})=\sum\limits_{i=1}^{N-1}\sum\limits_{(x_{t}^{a},x_{t}^{p},x_{t}^{n})}(\|{M_{i}^{\prime}(F(x_{t}^{a}))}-{M_{i}^{\prime}(F(x_{t}^{p}))}\|_{2}^{2}-\|M_{i}^{\prime}(F(x_{t}^{a}))-{M_{i}^{\prime}(F(x_{t}^{n}))}\|_{2}^{2}+\alpha)$ (4)
where $x_{i}\sim{J}_{i}$ and $x_{t}\sim{K}$; $x^{a}$, $x^{p}$, $x^{n}$ denote
anchor, positive, negative sample, respectively, and $\alpha$ is a margin.
#### III-E3 Segmentation Loss
Since we expect the U-net based face parsing module to predict the face
parsing masks corresponding to the source face image, we adopt the
multi-class cross-entropy loss. The segmentation masks are 13-dimensional
binary images, whose pseudo ground truth is pre-computed by [29]. The
segmentation loss is given by (5) as follows:
$L_{Seg(J_{i})}(\theta_{F},\theta_{S})=CE(x,Y)=-\sum_{(x,Y)\sim{J_{i}}}\sum_{C}y\log{S(F(x_{j,k}))}$ (5)
where $\theta_{S}$ denotes the parameters of the face parsing module, $Y$
denotes the ground truth of the face parsing masks corresponding to the
input, $C$ is the channel of the face parsing masks, and $x_{j,k}$ denotes a
pixel in the predicted output. $b$ denotes either a meta-train or a meta-test
domain, depending on the stage.
#### III-E4 Depth Loss
Similar to RFMetaFAS [20], in order to exploit the depth image as domain
knowledge to regularize the feature extractor, we apply the L2 loss between
the prediction and the pseudo depth ground truth obtained from [7], as given
in (6).
$L_{Dep_{b}}(\theta_{F},\theta_{D})=\sum_{(x,I)\sim{b}}\|{D(F(x))-I}\|^{2}$ (6)
where $\theta_{D}$ consists of the parameters in the depth estimator, and $I$
denotes the pre-computed depth image.
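Eq. (6), together with the zero-map convention for spoof faces described in Section III-B1, can be sketched as follows (depth maps flattened to lists; the function names are illustrative):

```python
def depth_target(is_live, pseudo_depth, size):
    """Live faces keep the pre-computed face-shaped pseudo depth map;
    spoof faces regress to an all-zero map."""
    return pseudo_depth if is_live else [0.0] * size

def depth_loss(pred, target):
    """Pixel-wise L2 loss between predicted and target depth maps."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))
```
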
#### III-E5 Overall Loss
Finally, the overall objective function is given as follows:
$L_{all}=\lambda_{mtrn}L_{J}+\lambda_{mtst}L_{{K}}=\lambda_{mtrn}(\lambda_{Cls}L_{Cls(J)}+\lambda_{Dep}L_{Dep(J)}+\lambda_{Seg}L_{Seg(J)}+\lambda_{Trip}L_{Trip(J)})+\lambda_{mtst}(\lambda_{Cls}L_{Cls({K})}+\lambda_{Dep}L_{Dep({K})}+\lambda_{Seg}L_{Seg({K})}+\lambda_{Trip}L_{Trip({K})})$ (7)
where
$\lambda_{mtrn},\lambda_{mtst},\lambda_{Cls},\lambda_{Dep},\lambda_{Seg},\lambda_{Trip}$
are pre-defined hyperparameters. We set $\lambda_{mtrn}=1$,
$\lambda_{mtst}=1$, $\lambda_{Cls}=1$, $\lambda_{Dep}=10$, $\lambda_{Seg}=1$,
and $\lambda_{Trip}=0.5$ in every experiment. To balance the contributions of
the meta-train and meta-test sets, we set $\lambda_{mtrn}=\lambda_{mtst}$.
The values of $\lambda_{Cls}$, $\lambda_{Dep}$, $\lambda_{Seg}$, and
$\lambda_{Trip}$ are chosen to balance the magnitudes of $L_{Cls}$,
$L_{Dep}$, $L_{Seg}$, and $L_{Trip}$, as we find the results are more stable
when these losses are balanced.
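Eq. (7) is then a doubly weighted sum. A minimal sketch (the dict keys are our shorthand; the default weights are the values reported above):

```python
def overall_loss(mtrn, mtst, w=None):
    """Weighted combination of Eq. (7). `mtrn`/`mtst` hold the four
    per-stage loss values for the meta-train and meta-test sets."""
    w = w or {"mtrn": 1.0, "mtst": 1.0,
              "cls": 1.0, "dep": 10.0, "seg": 1.0, "trip": 0.5}
    def stage(L):
        return (w["cls"] * L["cls"] + w["dep"] * L["dep"]
                + w["seg"] * L["seg"] + w["trip"] * L["trip"])
    return w["mtrn"] * stage(mtrn) + w["mtst"] * stage(mtst)
```
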
TABLE I: Experimental comparisons of different face anti-spoofing methods on the four domain generalization experiments.
Method | O&C&I to M | O&M&I to C | O&C&M to I | I&C&M to O
---|---|---|---|---
| HTER(%) | AUC(%) | HTER(%) | AUC(%) | HTER(%) | AUC(%) | HTER(%) | AUC(%)
MADDG[19] | 17.69 | 88.06 | 24.50 | 84.51 | 22.19 | 84.99 | 27.98 | 80.02
RFMetaFAS[20] | 13.89 | 93.98 | 20.27 | 88.16 | 17.30 | 90.48 | 16.45 | 91.16
CCDD[17] | 15.42 | 91.13 | 17.41 | 90.12 | 15.87 | 91.47 | 14.72 | 93.08
PAD-GAN[23] | 17.02 | 90.10 | 19.68 | 87.43 | 20.87 | 86.72 | 25.02 | 81.47
NAS-FAS[30] | 16.85 | 90.42 | 15.21 | 92.64 | 11.63 | 96.98 | 13.16 | 94.18
SSDG-M[9] | 16.67 | 90.47 | 23.11 | 85.45 | 18.21 | 94.61 | 25.17 | 81.83
MT-FAS[16] | 11.67 | 93.09 | 18.44 | 89.67 | 11.93 | 94.95 | 16.23 | 91.18
D2AM[3] | 12.70 | 95.66 | 20.98 | 85.58 | 15.43 | 91.22 | 15.27 | 90.87
ANRL[13] | 10.83 | 96.75 | 17.85 | 89.26 | 16.03 | 91.04 | 15.76 | 91.90
FGHV[12] | 9.17 | 96.92 | 12.47 | 93.47 | 16.29 | 90.11 | 13.58 | 93.55
SSAN-M[25] | 10.42 | 94.76 | 16.47 | 90.81 | 14.00 | 94.58 | 19.51 | 88.17
Ours | 7.38 | 96.66 | 13.20 | 94.27 | 8.07 | 96.85 | 8.75 | 95.95
## IV Experimental Results
### IV-A Experimental Setting
#### IV-A1 Datasets
Here we conduct domain generalization experiments with four public datasets:
Oulu-NPU [2] (denoted as O), Idiap Replay-Attack [4] (denoted as I),
CASIA-FASD [31] (denoted as C), and MSU-MFSD [26] (denoted as M). These four
datasets contain large domain shifts, including variations in illumination,
background, imaging devices, and attack types.
The experiment setting follows the work in [20]. We take one of the four
source datasets as the unseen testing domain and the other three datasets as
training domains in each experiment. Therefore, we conduct four experiments:
O&C&I to M, O&M&I to C, O&C&M to I, I&C&M to O.
#### IV-A2 Implementation Details
Our network is implemented in PyTorch. Adam is adopted as the optimizer with
a momentum of 0.9; the weight decay is set to 5e-5, the learning rate to
1e-3, and the batch size to 20 for each domain. For the hyperparameters in
the objective functions, we use the values given after Eq. (7):
$\lambda_{mtrn}=1$, $\lambda_{mtst}=1$, $\lambda_{Cls}=1$,
$\lambda_{Dep}=10$, $\lambda_{Seg}=1$, and $\lambda_{Trip}=0.5$. We
concatenate RGB and HSV images as input to our network, resized to
$256\times 256\times 6$. The detailed network structure is given in the
supplemental material.
#### IV-A3 Evaluation Metrics
Here we adopt Half Total Error Rate (HTER) and Area Under Curve (AUC) as
evaluation metrics. For visualization, we exploit t-SNE [21] and Grad-CAM [18]
to show the effect of our approach.
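HTER averages the two error types at a decision threshold (in practice the threshold is chosen on development data); a minimal sketch:

```python
def hter(live_scores, spoof_scores, threshold=0.5):
    """Half Total Error Rate: the mean of the false rejection rate
    (live samples scored below threshold) and the false acceptance
    rate (spoof samples scored at or above threshold)."""
    frr = sum(s < threshold for s in live_scores) / len(live_scores)
    far = sum(s >= threshold for s in spoof_scores) / len(spoof_scores)
    return (far + frr) / 2
```
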
### IV-B Experimental Comparisons
In this section, we compare the performance of our approach with several
state-of-the-art face anti-spoofing methods, including MADDG [19], RFMetaFAS
[20], CCDD [17], PAD-GAN [23], NAS-FAS [30], SSDG-M [9], MT-FAS [16], D2AM
[3], ANRL [13], FGHV [12], and SSAN-M [25]. Table I demonstrates significant
improvements of the proposed method over the baseline RFMetaFAS [20] and the
other state-of-the-art methods.
Note that we use the architecture of MADDG [19] as our backbone, the same as
SSDG-M [9]. However, SSDG-R [9] and PatchNet [22] adopt ResNet-18 [8] as the
backbone, and LMFD-PAD [6] adopts ResNet-50. They are therefore not included
in the experimental comparison, because their model sizes are much larger
than MADDG's.
TABLE II: Ablation study of the different components in our method. The upper part concerns the face parsing information, and the lower part the one-side triplet loss. For face parsing, we compare our method with and without the attention-based skip connection (ASC) and the U-net based face parsing module (parsing). For the one-side triplet loss, we compare our method with and without the one-side triplet loss, without meta-learning, and with the normal triplet loss.
Method | O&C&I to M | O&M&I to C | O&C&M to I | I&C&M to O
---|---|---|---|---
| HTER(%) | AUC(%) | HTER(%) | AUC(%) | HTER(%) | AUC(%) | HTER(%) | AUC(%)
Ours w/o parsing | 10.17 | 94.05 | 20.43 | 88.64 | 13.12 | 94.49 | 17.26 | 90.95
Ours w/o ASC | 9.31 | 94.30 | 26.68 | 83.72 | 9.13 | 96.46 | 15.01 | 93.06
Ours w/o one-side trip. | 9.25 | 95.72 | 25.03 | 83.97 | 12.30 | 95.08 | 13.79 | 93.51
Ours w/o meta | 10.94 | 95.30 | 31.05 | 75.78 | 27.79 | 79.26 | 29.24 | 77.70
Ours w/ normal trip. | 9.94 | 95.43 | 22.54 | 87.17 | 10.68 | 95.59 | 14.42 | 92.78
Ours | 7.38 | 96.66 | 13.20 | 94.27 | 8.07 | 96.85 | 8.75 | 95.95
### IV-C Face Parsing Results
Fig. 5 visualizes the output of the U-net based face parsing module. Thirteen
labels are used for face parsing: skin, left/right brow, left/right eye,
eyeglasses, left/right ear, nose, mouth, upper/lower lip, and background. The
module produces accurate pixel-wise semantic facial priors for both real and
spoof images.
### IV-D Ablation Study
#### IV-D1 U-net Based Face Parsing Module
To understand the effect of the U-net based face parsing module, we compare
our approach with two reduced settings: one without the U-net face parsing
module, and one without the attention-based skip connection. The former
isolates the effect of face parsing on the face anti-spoofing task; the latter
tests whether the face parsing information should be encoded and fed back to
the main network through the attention-based connection. Table II reports the
results of the three settings. The U-net based face parsing module is
effective for regularizing the features, and the attention-based skip
connection (ASC) indeed helps the model generalize better to unseen domains
for face anti-spoofing.
Figure 5: Segmentation output from the face parsing module. From top to
bottom: input image, parsing output, and ground truth. The left three columns
are live samples, and the right three columns are spoof ones.
#### IV-D2 One-Side Triplet Loss with Meta-learning
In this section, we examine the combined effect of the one-side triplet loss
and meta-learning. Table II reports results of our approach with and without
the one-side triplet loss, without meta-learning, and with the normal triplet
loss in place of the one-side triplet loss. Notably, HTER and AUC improve by
over 10% on the O&M&I to C protocol. Without the one-side triplet loss or
meta-learning, the AUC on O&M&I to C degrades dramatically. In our view,
O&M&I to C is the hardest protocol because the cut attack appears only in the
target domain, not in any source domain; the improvement brought by the
one-side triplet loss and meta-learning therefore indicates that our model
generalizes well. For O&C&I to M, performance does not decline noticeably
because the data in M are comparatively easy. In summary, removing
meta-learning from our method or replacing the one-side triplet loss with the
ordinary triplet loss causes large degradation of generalization performance
on most protocols.
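To our reading, the one-side triplet loss applies the usual hinge triplet constraint only from one side, e.g., only for live anchors. A minimal NumPy sketch under that assumption, with margin 0.1 as in our setting; the function name and masking scheme are illustrative, not the exact implementation:

```python
import numpy as np

def one_side_triplet_loss(anchor, positive, negative, anchor_is_live, margin=0.1):
    """Hinge triplet loss applied only where the anchor is a live face.

    anchor/positive/negative: (B, d) embedding arrays;
    anchor_is_live: (B,) boolean mask selecting live anchors (the "one side")."""
    d_ap = np.linalg.norm(anchor - positive, axis=1)    # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative, axis=1)    # anchor-negative distance
    per_sample = np.maximum(d_ap - d_an + margin, 0.0)  # standard hinge term
    mask = anchor_is_live.astype(float)                 # drop spoof anchors
    return float((per_sample * mask).sum() / max(mask.sum(), 1.0))
```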
Figure 6: Visualization of Grad-CAM. From top to bottom: input image, our
approach, and ours without U-net based face parsing module. The left four
columns are live images, and the remaining columns are spoof ones. Figure 7:
Visualization results of t-SNE based on our approach. From left to right: our
approach without triplet loss, ours with normal triplet loss, and our
approach. Src means the source domains, and tgt means the target domain.
### IV-E Visualization
#### IV-E1 Grad-CAM Visualization
In addition to the above experimental results, we visualize some examples with
Grad-CAM [18], which localizes the important regions that the network attends
to for a specific class.
Fig. 6 shows the results of two settings: our approach with and without the
U-net based face parsing module. The three rows are the input images, ours,
and ours without the U-net based face parsing module. The first four columns
are live samples, and the others are spoof ones. We encourage our model to
learn discriminative cues from the facial region rather than the background,
because the background contains large variations across datasets. As Fig. 6
shows, our network tends to focus on facial parts for both live and spoof
images.
#### IV-E2 t-SNE Visualization
We adopt t-SNE to visualize the effect of the one-side triplet loss. Fig. 7
shows three settings, from left to right: our approach without triplet loss,
with the normal triplet loss, and with the one-side triplet loss. The margin
of the normal triplet loss is set to 0.1, the same as in our default setting.
As Fig. 7 shows, target-domain samples are best discriminated with the
one-side triplet loss. To achieve generalization, we do not separate the
different domains and thus reduce the domain gaps.
Figure 8: Segmentation output from the face parsing module. Four rows are our
approach without attention-based skip connection, our approach, ground truth,
and the input image.
#### IV-E3 Effect of Attention-Based Skip Connection for Face Parsing
We visualize the segmentation output of the face parsing module, focusing on
comparing our approach with ours without the attention-based skip connection.
As shown in Fig. 8, adding the attention-based skip connection makes the face
parsing results more stable and refined.
## V Conclusion
In this paper, we propose a multi-task framework based on meta learning to
improve the model generalization capability for face anti-spoofing. For the
face parsing task, a U-net based face parsing module is proposed to learn
important face parsing information. For the spoof classification task, a one-
side triplet loss is employed to combine the advantages of meta learning and
triplet loss. Our experiments on cross-domain generalization for face anti-
spoofing demonstrate that our method provides superior performance compared to
the state-of-the-art methods on public datasets.
## References
* [1] Y. Atoum, Y. Liu, A. Jourabloo, and X. Liu. Face anti-spoofing using patch and depth-based cnns. In In Proceeding of International Joint Conference on Biometrics, 2017.
* [2] Z. Boulkenafet, J. Komulainen, L. Li, X. Feng, and A. Hadid. Oulu-npu: A mobile face presentation attack database with real-world variations. In 12th IEEE International Conference on Automatic Face Gesture Recognition (FG), 2017.
* [3] Z. Chen, T. Yao, K. Sheng, S. Ding, Y. Tai, J. Li, F. Huang, and X. Jin. Generalizable representation learning for mixture domain face anti-spoofing. In AAAI, 2021.
* [4] I. Chingovska, A. Anjos, and S. Marcel. On the effectiveness of local binary patterns in face anti-spoofing. In 2012 BIOSIG - Proceedings of the International Conference of Biometrics Special Interest Group (BIOSIG), 2012.
* [5] M. Fang, F. Boutros, A. Kuijper, and N. Damer. Partial attack supervision and regional weighted inference for masked face presentation attack detection. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pages 1–8, 2021.
* [6] M. Fang, N. Damer, F. Kirchbuchner, and A. Kuijper. Learnable multi-level frequency decomposition and hierarchical attention mechanism for generalized face presentation attack detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 3722–3731, January 2022.
* [7] Y. Feng, F. Wu, X. Shao, Y. Wang, and X. Zhou. Joint 3d face reconstruction and dense alignment with position map regression network. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
* [8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
* [9] Y. Jia, J. Zhang, S. Shan, and X. Chen. Single-side domain generalization for face anti-spoofing. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [10] M. M. Kalayeh, B. Gong, and M. Shah. Improving facial attribute prediction using semantic segmentation, 2017.
* [11] K. Khan, M. Attique, R. Khan, I. Syed, and T.-S. Chung. A multi-task framework for facial attributes classification through end-to-end face parsing and deep convolutional neural networks. Sensors, 20, 01 2020.
* [12] S. Liu, S. Lu, H. Xu, J. Yang, S. Ding, and L. Ma. Feature generation and hypothesis verification for reliable face anti-spoofing. In AAAI, 2022.
* [13] S. Liu, K.-Y. Zhang, T. Yao, M. Bi, S. Ding, J. Li, F. Huang, and L. Ma. Adaptive normalized representation learning for generalizable face anti-spoofing. In Proceedings of International Conference on Multimedia. Association for Computing Machinery, 2021.
* [14] Y. Liu, A. Jourabloo, and X. Liu. Learning deep models for face anti-spoofing: Binary or auxiliary supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
* [15] Z. Lu, T. Hu, L. Song, Z. Zhang, and R. He. Conditional expression synthesis with face parsing transformation. In Proceedings of the 26th ACM International Conference on Multimedia, page 1083–1091. Association for Computing Machinery, 2018.
* [16] Y. Qin, Z. Yu, L. Yan, Z. Wang, C. Zhao, and Z. Lei. Meta-teacher for face anti-spoofing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
* [17] S. Saha, W. Xu, M. Kanakis, S. Georgoulis, Y. Chen, D. P. Paudel, and L. Van Gool. Domain agnostic feature learning for image and video based face anti-spoofing. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020.
* [18] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
* [19] R. Shao, X. Lan, J. Li, and P. C. Yuen. Multi-adversarial discriminative deep domain generalization for face presentation attack detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
* [20] R. Shao, X. Lan, and P. C. Yuen. Regularized fine-grained meta face anti-spoofing. In Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), 2020.
* [21] L. van der Maaten and G. Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 2008.
* [22] C.-Y. Wang, Y.-D. Lu, S.-T. Yang, and S.-H. Lai. Patchnet: A simple face anti-spoofing framework via fine-grained patch recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20281–20290, June 2022.
* [23] G. Wang, H. Han, S. Shan, and X. Chen. Cross-domain face presentation attack detection via multi-domain disentangled representation learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [24] Q. Wang, B. Wu, P. Zhu, P. Li, W. Zuo, and Q. Hu. Eca-net: Efficient channel attention for deep convolutional neural networks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [25] Z. Wang, Z. Wang, Z. Yu, W. Deng, J. Li, and S. Li. Domain generalization via shuffled style assembly for face antispoofing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
* [26] D. Wen, H. Han, and A. K. Jain. Face spoof detection with image distortion analysis. IEEE Transactions on Information Forensics and Security, 2015.
* [27] Z. Xu, S. Li, and W. Deng. Learning temporal features using lstm-cnn architecture for face anti-spoofing. 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), 2015.
* [28] J. Yang, Z. Lei, and S. Z. Li. Learn convolutional neural network for face anti-spoofing, 2014.
* [29] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. Bisenet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European conference on computer vision (ECCV), 2018.
* [30] Z. Yu, J. Wan, Y. Qin, X. Li, S. Z. Li, and G. Zhao. Nas-fas: Static-dynamic central difference network search for face anti-spoofing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
* [31] Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, and S. Z. Li. A face antispoofing database with diverse attacks. In 2012 5th IAPR International Conference on Biometrics (ICB), 2012.
# Advisory Tool for Managing Failure Cascades in Systems with Wind Power
††thanks: We thank MIT UROP, MITEI, and the NSF EAGER project #2002570 for
funding, and Dan Wu, Xinyu Wu, and Miroslav Kosanic for discussions.
Siyu Liu Massachusetts Institute of Technology
Cambridge, MA, USA
<EMAIL_ADDRESS>Marija Ilić, IEEE Life Fellow Massachusetts Institute of
Technology
Cambridge, MA, USA
<EMAIL_ADDRESS>
###### Abstract
This paper concerns the resilience of systems with wind power upon wind
reduction by evaluating the potential of corrective actions, such as
generation and load dispatch, on minimizing the effects of transmission line
failures. Three functions (grid, consumer-centric loss, and resilience impact)
are used to statistically evaluate the criticality of initial contingent
failures and wind reductions. Our model is learned with Monte Carlo, convex
optimization, and adaptive selection, illustrated on the IEEE-30 and IEEE-300
bus systems with both AC and DC models. We highlight the impact of wind
reductions and propose physically implementable solutions.
###### Index Terms:
wind power, cascade failure, influence model
## I Introduction
Modern power systems are prone to many unpredictable component failures. Past
events have shown that large-scale blackouts typically result from sequential
failures of transmission lines, called failure cascades. Cascades tend to
evolve quickly, leaving only up to 15 minutes for system operators to take
corrective actions before the failure propagates [5]. Wind power integration
introduces additional unpredictability and potential congestion-induced
failures. Due to the fast-evolving nature of wind power and failure
propagation, it is particularly important to understand cascade patterns and
their relation to sudden wind reduction, in order to advise system operators
during such extreme events. Failure cascades are extremely computationally
expensive to analyze due to the large number of nonlinear relations. Most of
the early work concerns analysis of cascades and holds a pessimistic outlook
on wind penetration due to its unpredictability [6].
In this paper, we go beyond analysis and explore corrective actions to
minimize the effects of equipment failures that may be exacerbated by wind
penetration. We use probabilistic methods to derive statistical information
about the most effective corrective actions, such as generation re-dispatch
and preemptive load shed. These are needed to prevent cascading failures as
events unfold in on-line operations; they can also be used to enhance today's
industry manuals. Most existing studies assess wind penetration risk by
solving a DC power flow model, which underestimates the effects of failures
and does not always provide physically implementable solutions because
reactive power and voltage constraints are ignored [12], [20]-[23]. To
overcome the high complexity of power flow analysis, researchers have also
sought various statistical methods, including random graphs, branching
processes, and flow dynamics models, all of which typically embed non-trivial
constraint relaxations [16]-[18]. However, very few studies consider
physically implementable generation re-dispatch [14, 20] or load shedding, and
those that do aim only to minimize grid-centric cost [9, 10, 11]. Others,
including [12] and [13], implement a constant-factor load shed algorithm
without any "smart scheduling." To the best of our knowledge, there is so far
no statistical model for load shed prediction and management. To remedy these
shortcomings, we adopt a flow-free approach using the influence model (IM).
The IM is a Markovian model that, given the network profile at each time step,
computes the link failure and load shed probability at all links and buses. It
is straightforward to construct and fast to run, reducing prediction to
matrix multiplication and completely eliminating the burden of flow
computation. We borrow insights from [13] and [16] to train the model using
Monte Carlo simulation, convex optimization, and adaptive selection. Our work
drastically extends the scope of previous studies in methodology, results, and
applicability.
In this paper, Section II briefly reviews the IM, and Section III outlines our
simulation process. Section IV proposes three metrics to evaluate various
corrective actions and underlines the significance of our findings for systems
with wind power. Section V summarizes the prediction accuracy and time
complexity of the IM. Section VI demonstrates a comparable effectiveness on a
large-scale system. Finally, Section VII introduces its use as an advisory
tool to system operators.
## II The Influence Model for Loss Prediction
The IM is a Markovian-like model whose dynamics are described by state
variable transitions. We propose two IMs: the first predicts link failure
through matrix $D,$ and the second predicts load shed through matrix $E.$
Both models operate on a $(N_{br}\times 1)$ network state vector $s$ that
stores the status of all links in binary, where $N_{br}$ is the number of
transmission lines (branches) in the network. Given the $i$-th link,
$s_{i}[t]=1$ indicates that link $i$ is alive at time $t,$ and $s_{i}[t]=0$
indicates that it has failed.
### II-A Matrix D for link failure prediction
The link failure IM predicts the subsequent network state $s[t+1]$ given the
current state $s[t]$ and trained parameters, which we define as follows.
* •
Transition probability matrices $A^{01}$ and $A^{11},$ both of size
$(N_{br}\times N_{br}),$ where
$\displaystyle A^{11}_{ji}:=\mathbb{P}(s_{i}[t+1]=1|s_{j}[t]=1),$ (1)
$\displaystyle A^{01}_{ji}:=\mathbb{P}(s_{i}[t+1]=1|s_{j}[t]=0).$ (2)
* •
The weighted influence matrix $D$ of size $(N_{br}\times N_{br}),$ where the
entry $d_{ji}$ represents the proportional effect of link $j$ on link $i.$ It
can be interpreted as the share of the total influence on link $i$ that comes
from link $j.$
* •
The bisection threshold vector $\epsilon$ of size $(N_{br}\times 1).$ We
determine $\epsilon_{i}$ for each initial failure profile by examining the
$s_{i}$ sequence in all samples.
$A^{01}$ and $A^{11}$ are obtained through a Monte Carlo experiment, and the
$D$ entries are obtained by solving an optimization problem as outlined in
[13].
The transition probabilities from $A^{11}$ and $A^{01},$ weighted by the
influence factors in $D,$ give the prediction $\widetilde{s_{i}}[t+1]:$
$\widetilde{s_{i}}[t+1]=\sum_{j=1}^{N_{br}}d_{ji}\left(A^{11}_{ji}s_{j}[t]+A^{01}_{ji}(1-s_{j}[t])\right),$
(3)
where link $i$ is predicted to remain healthy when
$\widetilde{s_{i}}[t+1]\geq\epsilon_{i}.$
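Equation (3) reduces to a single matrix operation. A NumPy sketch of one prediction step, assuming $A^{11}$, $A^{01}$, and $D$ are all indexed $[j, i]$ (influencing link $j$, influenced link $i$) as in eq. (3):

```python
import numpy as np

def predict_links(s, A11, A01, D, eps):
    """One step of the link failure IM, eq. (3).

    s: (N_br,) binary state vector; A11, A01, D: (N_br, N_br) arrays
    indexed [j, i]; eps: (N_br,) bisection thresholds.
    Returns the predicted binary state s[t+1]."""
    # M[j, i] = A11[j, i]*s[j] + A01[j, i]*(1 - s[j])
    M = A11 * s[:, None] + A01 * (1.0 - s)[:, None]
    s_tilde = (D * M).sum(axis=0)  # weighted influence sum over j, eq. (3)
    return (s_tilde >= eps).astype(int)
```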
### II-B Matrix E for load shed prediction
The load shed IM predicts the $(N\times 1)$ binary load vector $l[t]$, where
$l_{i}[t]=1$ indicates full service at bus $i$, and $l_{i}[t]=0$ indicates
load reduction. The prediction is based on the network state $s[t]$ and
trained parameters, defined as follows.
* •
Transition probability matrices $B^{01}$ and $B^{11},$ both of size
$(N_{br}\times N),$ where $N$ is the number of buses, defined as:
$\displaystyle B^{11}_{ji}:=\mathbb{P}(l_{i}[t]=1|s_{j}[t]=1),$ (4)
$\displaystyle B^{01}_{ji}:=\mathbb{P}(l_{i}[t]=1|s_{j}[t]=0).$ (5)
* •
The weighted influence matrix $E$ of size $(N\times N_{br})$ defines the
weighted influences from links to buses. Each entry $e_{ij}$ denotes the
proportional influence of link $j$ on bus $i.$
* •
The bisection threshold vector $\delta$ of size $(N\times 1).$ We determine
$\delta_{i}$ for each initial contingency profile by examining all samples
where load shed has occurred at bus $i.$
$B^{01},B^{11},$ and $E$ are obtained through a Monte Carlo experiment and
convex optimization similar to [13].
The probability of the system being able to serve full load at bus $i$ is
calculated as a weighted sum of the influence from all links, using the
trained parameters $B^{01},$ $B^{11},$ and $E:$
$\widetilde{l_{i}}[t]=\sum_{j=1}^{N_{br}}e_{ij}\left(B^{11}_{ji}s_{j}[t]+B^{01}_{ji}(1-s_{j}[t])\right),$
(6)
where full service is predicted when $\widetilde{l_{i}}[t]\geq\delta_{i}.$
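The load shed prediction in eq. (6) differs from eq. (3) only in shapes: $E$ is $(N \times N_{br})$ while $B^{01}$ and $B^{11}$ are $(N_{br} \times N)$. A NumPy sketch under those conventions:

```python
import numpy as np

def predict_loads(s, B11, B01, E, delta):
    """Load shed IM, eq. (6).

    s: (N_br,) link state; B11, B01: (N_br, N) arrays indexed [j, i];
    E: (N, N_br) with E[i, j] the influence of link j on bus i;
    delta: (N,) thresholds. Returns the binary full-service vector l[t]."""
    # M[j, i] = B11[j, i]*s[j] + B01[j, i]*(1 - s[j])
    M = B11 * s[:, None] + B01 * (1.0 - s)[:, None]
    l_tilde = (E * M.T).sum(axis=1)  # sum over links j for each bus i
    return (l_tilde >= delta).astype(int)
```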
The run time for building the IM is dominated by the optimization step to
obtain $D$ and $E,$ which takes $O(N^{2}N_{br}^{2}).$
## III Sample Pool Generation
To examine the effects of sudden wind reduction, we first simulate the network
under normal conditions. We introduce a random link failure under normal
conditions where the loading level (base load - wind power) is nominal. If the
failure does not lead to a complete blackout, we continue the simulation by
introducing a wind reduction while the system is operating under $(N-2)$
conditions. The reduction level ranges from $10\%$ to $70\%$ of the base load,
causing the net loading level to rise up to $\times 1.7$ the original loading
level. We analyze the additional network congestion and failures caused by
this load increase. Fig. 1 provides a visualization to this process.
Figure 1: Wind Reduction. Before wind power reduction, the net system load is
at $\times.9$ base load. A $30\%$ wind reduction causes the net system load to
rise to $\times 1.2$ base load, causing the second round of cascade failures.
As there is no standard oracle for assessing failure cascades at present, we
base our experiments on the CFS oracle proposed in [7]. The CFS oracle is
similar to the short-term OPA oracle, except that line outages are treated
deterministically and no optimal re-dispatch is applied during a failure
cascade; load is shed or generation is curtailed only if a system-wide power
mismatch occurs. For samples where no corrective actions are taken, we follow
the CFS oracle exactly; for samples where corrective actions are applied, we
follow a relaxed version of the CFS oracle that allows re-dispatch during the
cascade. This relaxation is realistic, as the time between two failures can be
as long as 15 minutes, which is enough to allow re-dispatch [5]. In all our
experiments, we initialize the network as fully functional, randomly select
two initial contingencies, and determine the cascade sequence following the
oracle. Long-term thermal limits are used when all links are fully functional;
once failures occur, we switch to short-term thermal limits (assumed to be
$1.05\times$ the long-term limits). After each failure, we solve the DC/AC
PF/OPF problem using the MATPOWER Toolbox [8]. Our three sets of experiments
are defined as follows.
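The deterministic trip-and-resolve loop of the (relaxed) CFS oracle can be sketched abstractly. Here `solve_flows` stands in for the MATPOWER PF/OPF solve, and the toy equal-sharing flow model is purely illustrative:

```python
def simulate_cascade(solve_flows, capacities, alive, max_steps=50):
    """Deterministic CFS-style cascade: repeatedly solve flows and trip
    every live line whose |flow| exceeds its (short-term) capacity.

    solve_flows(alive) -> list of line flows given the 0/1 alive vector.
    Returns the sequence of network states (tuples of 0/1)."""
    history = [tuple(alive)]
    for _ in range(max_steps):
        flows = solve_flows(alive)
        tripped = [i for i, a in enumerate(alive)
                   if a and abs(flows[i]) > capacities[i]]
        if not tripped:
            break  # cascade has settled
        for i in tripped:
            alive[i] = 0
        history.append(tuple(alive))
    return history

# Toy flow model: 100 MW of demand shared equally by the surviving lines.
def toy_flows(alive):
    n = sum(alive)
    return [100.0 / n if a and n else 0.0 for a in alive]
```

With capacities of 40 MW each, losing one of three parallel lines overloads the remaining two (50 MW > 40 MW), so `simulate_cascade(toy_flows, [40, 40, 40], [0, 1, 1])` ends in blackout, returning `[(0, 1, 1), (0, 0, 0)]`.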
Experiment 1: No corrective action. In this experiment, we simply record the
network status and loading levels at each bus at each step of the cascade
without any corrective actions.
Experiment 2: Generation re-dispatch for full service. We re-dispatch
generation whenever new link failures occur by solving for OPF under both
uniform generation cost and bus-specific generation cost provided by [8]. We
aim to serve all loads in full and only shed load uniformly in scale when
unable to serve in full.
Experiment 3: Generation re-dispatch (smart scheduling). This experiment
resembles Experiment 2, except that instead of aiming for full service, we
find the OPF solution that minimizes cost of shedding load, which we assume to
be either uniform or priority-based. Notably, no links fail in this
experiment, as the optimization step observes link constraints and maximum
service as part of the optimization objective.
Our experiments are done on the IEEE-30 system with the initial loading set to
$c$ times the test case from [8], where $c$ ranges from $0.9$ to $1.8$ in
increments of $0.1.$
## IV Effect of Corrective Actions
We examine the effects of corrective actions under unexpected wind reduction
under DC and AC models. We propose two loss functions to quantify grid-centric
and consumer-centric loss of each cascade sequence, as well as a resilience
impact function to evaluate the loss given the base load and wind reduction in
Section IV-A – Section IV-E analyze the effects of different corrective
actions assessed by these functions and system-wide structural patterns.
### IV-A Grid-Centric Loss
For each cascade sample, grid-centric loss is defined as
$G(p)=\sum_{b=1}^{N_{br}}C(b)\cdot e^{-0.2t_{b}},$ (7)
where $G(p)$ is the link failure loss for initial network profile $p,$ $C(b)$
is the cost of branch $b,$ proportional to its maximum thermal capacity, and
$t_{b}$ is the lifetime of $b.$ The discounting factor $e^{-0.2t_{b}}$
penalizes early failures.
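Equation (7) translates directly into code; the branch costs and failure times below are illustrative:

```python
import math

def grid_loss(branch_costs, fail_times):
    """Grid-centric loss, eq. (7): sum of C(b) * exp(-0.2 * t_b).

    A branch that fails early (small t_b) contributes close to its
    full cost; a branch that survives long contributes little."""
    return sum(c * math.exp(-0.2 * t) for c, t in zip(branch_costs, fail_times))
```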
In Experiments 1 & 2, links fail more frequently and earlier in the cascade at
higher initial loading levels. Even in the only two loading levels where
Experiment 2 successfully initializes, the link failure loss is much greater
than in Experiment 1, which demonstrates that PF models underestimate failure
sizes. There is no observable difference between re-dispatching with actual or
uniform generation cost in Experiments 2 & 3. Fig. 6 illustrates these
results.
### IV-B Consumer-Centric Loss
For each cascade sample, the consumer-centric loss is defined as
$L(p)=\sum_{l=1}^{N}\sum_{t=1}^{T_{k}-1}C(l)\cdot LS_{l}(t)e^{-0.2t},$ (8)
where $L(p)$ is the load shed loss for initial network profile $p,$ $C(l)$ is
the load priority, and $LS_{l}(t)$ is the amount of load shed between time
steps $t$ and $t+1$ at bus $l.$ The expression is similarly time-discounted by
$e^{-0.2t}.$
Our experiments yield one notable finding: if corrective actions are taken
promptly, we can preserve infrastructure integrity in full without significant
service reduction. In particular, load shed loss is minimized under Experiment
3's smart scheduling, when we run OPF with cost-based load shed; the flows on
all links then stay within their capacities and no cascade occurs. As
illustrated in Fig. 6, comparing the load shed across all three experiments,
Experiment 3 reduces the consumer-centric loss to far below that of
Experiments 1 & 2, while the passive, emergency load shed in Experiment 2
incurs the greatest loss. Whether the generation cost varies across buses does
not induce a significant difference in the load shed loss.
Figure 2: Link Fail Loss (DC).
Figure 3: Link Fail Loss (AC).
Figure 4: Load Shed Loss (DC).
Figure 5: Load Shed Loss (AC).
Figure 6: Grid-centric and consumer-centric loss over under various corrective
actions for DC and AC models.
### IV-C Resilience Impact
We propose the following equation to measure network resilience, where
$R(p,\Delta w)$ is network resilience for initial profile $p,$ wind reduction
$\Delta w,$ and $p^{\prime}$ the network profile during the wind reduction
when failure starts propagating.
$\displaystyle R(p,\Delta w)=R^{G}(p,\Delta w)+R^{L}(p,\Delta w),$ (9)
$\displaystyle R^{G}(p,\Delta w)=G(p^{\prime})-G(p),$ (10)
$\displaystyle R^{L}(p,\Delta w)=L(p^{\prime})-L(p),$ (11)
where $R^{G}(p,\Delta w)$ and $R^{L}(p,\Delta w)$ correspond to grid-centric
and consumer-centric resilience, respectively.
### IV-D Corrective Action Analysis
Our experiments find that, under certain net loading levels, full service is
impossible even without contingencies, as our solver fails to converge. This
arises at high loading levels or with non-uniform shedding priorities. In
Experiment 2, DC OPF and AC OPF fail to converge for loading levels greater
than $1.3\times$ and $1.0\times$ the default loading level, respectively,
which underscores the need for smart scheduling. The rest of this section
highlights the analysis for cases where initialization succeeds.

We find that PF solutions are frequently not physically implementable, as can
be seen from the initial voltages under AC PF in Fig. 7: bus voltages drop
significantly as loading increases, falling outside the $(0.95,1.05)$
constraint.
Figure 7: Initial bus voltages solved with AC power flow.
In all three experiments, AC models uncover more link failures and greater
load shed than their DC counterparts. In particular, in Experiment 1, the link
failure loss is only slightly greater under AC than DC models, but the
difference is much greater in Experiment 2. The high losses from AC solutions
render the DC approximation insufficient for studying failure cascades, as it
underestimates the severity of contingencies. The load shed results, however,
present an especially optimistic outlook. In Experiments 1 & 2, AC models
yield much higher load shed than DC simulations, but in Experiment 3, when
cost-based flexible load shed is implemented, the load shed cost is no longer
significant. Since AC simulation results are physically implementable, this
shows that with optimal re-dispatch of both generation and load we can serve
close to full demand without causing congestion, which is especially promising
in practice.
To evaluate the resilience impact $R(p,\Delta w),$ it suffices to know
$G(p),L(p),G(p^{\prime}),$ and $L(p^{\prime}).$ We find that DC models
significantly underestimate the impact of wind reduction for both resilience
measures. We present the analysis of one particular scenario, in which the
initial net load (load minus wind power) equals the system load, and examine
the effects of different corrective actions upon wind power reductions of up
to $70\%$ of the system load. We find that, as a result of non-convergence,
blackout occurs when the wind reduction is $\geq 40\%$ under DC models in
Experiment 2, while any level of wind reduction causes blackout under AC
models. Fig. 10 presents the grid-centric ($R^{G}$) and consumer-centric
($R^{L}$) impact. For both $R^{G}$ and $R^{L},$ all models find the impact to
increase drastically at higher levels of wind reduction. Smart load
re-dispatch (as in Exp. 3) minimizes the impact, reducing it to about
one-tenth of the impact when no action is taken (as in Exp. 1) under both DC
and AC models. This result underlines that, without proactive planning to
prevent blackout, wind penetration is risky, as an unexpected wind reduction
of as little as $10\%$ of the base load can cause large-scale congestion; the
risk can be significantly reduced with smart rescheduling, which ensures
near-full service and avoids congestion altogether.
Figure 8: Grid-Centric.
Figure 9: Consumer-Centric.
Figure 10: Resilience Impact.
### IV-E System-Wide Structures
A few interesting system-wide influence structures arise from our $D$ and $E$
matrices. Heatmaps of selected scenarios for the $D$ and $E$ matrices are
shown in Fig. 17, where darker colors denote higher influence levels. The $E$
matrices display a sparse structure. When no corrective actions are taken, the
$D$ matrix is sparse under DC models and linear under AC models (Figs. 11 and
12). However, when corrective actions are taken, the $D$ matrices display a
linear structure, with high pair-wise influence values concentrated in
particular columns (Figs. 13 and 14). This suggests a uni-directional, strong
influence from one link to many other links.

This linear structure has significant practical value. Identifying the
high-influence links in $D$ can be extremely informative to operators: when
critical links fail, there is higher value in executing the scheduling scheme
of Experiment 3 to preserve infrastructure integrity.
Figure 11: $D$ matrix for DC PF, $1.6\times$ loading
Figure 12: $D$ matrix for AC PF, $1.6\times$ loading
Figure 13: $D$ matrix for DCOPF, $1\times$ loading
Figure 14: $D$ matrix for ACOPF, $1\times$ loading
Figure 15: $E$ matrix for DC PF, $1.6\times$ loading
Figure 16: $E$ matrix for AC PF, $1.6\times$ loading
Figure 17: $D,E$ matrix structures.
## V Prediction Accuracy
Both link failure and load shed prediction reach high prediction accuracy,
$>90\%$ for most cases and $>80\%$ for all. There is no significant difference
between the training and testing sets, which verifies that our model does not
overfit. We find no significant difference between the DC and AC models. The
IM framework provide significant reduction in computational time, especially
for the AC models, which requires solving nonlinear equations. While flow-
based solutions become more inefficient at higher loading level, the
computational cost is identical for all cases under our method. To demonstrate
that our proposed framework scheme captures the failure cascade features, we
compare it to two prediction methods that do not depend on the IM: the
uniform, deterministic prediction and the randomized prediction. The IM
methods gave better performance than both random and uniform predictions with
no significant difference across different loading levels as shown by the mean
error rates in Table II and Table II.
| | IM | Rand. | Unif.
---|---|---|---
Exp. 1 | 0.038 | 0.188 | 0.109
Exp. 2 | 0.019 | 0.093 | 0.049
Exp. 3 | 0.000 | 0.094 | 0.049
TABLE I: Link Failure Prediction Error Rates.
| | IM | Rand. | Unif.
---|---|---|---
Exp. 1 | 0.214 | 0.318 | 0.255
Exp. 2 | 0.043 | 0.082 | 0.043
Exp. 3 | 0.014 | 0.026 | 0.014
TABLE II: Load Shed Prediction Error Rates.
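The exact IM update rule is specified earlier in the paper and in [13], not in this excerpt, so the following Python fragment is only a schematic sketch of how an influence-matrix-driven predictor and its error rate might look. The threshold rule and absorbing-failure assumption are illustrative choices of ours, not the paper's actual model.

```python
import numpy as np

def propagate(D, state, steps=10, thresh=0.5):
    """Schematic IM-style prediction: a link is predicted to fail once
    the influence it receives from already-failed links exceeds a
    threshold; failures are treated as absorbing."""
    s = state.astype(float)
    for _ in range(steps):
        pressure = D.T @ s                      # influence received by each link
        s = np.maximum(s, (pressure > thresh).astype(float))
    return s

def error_rate(pred, actual):
    """Mean fraction of links whose predicted final state is wrong,
    the quantity reported as an error rate in Table I."""
    return float(np.mean(pred != actual))

# Toy example: link 0 fails initially and strongly influences link 1.
D = np.array([[0.0, 1.0],
              [0.0, 0.0]])
pred = propagate(D, np.array([1.0, 0.0]))
```

In the paper, such predictions are compared against the PF/OPF simulation outcomes to produce the mean error rates above.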
## VI Tests in Large-Scale IEEE 300 Bus System
We verify that our methodology is accurate and scalable by experimenting on
the IEEE 300-bus system [8]. In this system, smart scheduling as in Experiment
3 can reduce the cost of load shed by as much as $90\%$ at all loading levels,
while completely avoiding all link failures. Prediction with the IM model
yields even higher accuracy, $>95\%$ in most cases for both link failure and
load shed, with the exception of load shed prediction in Experiment 1, which
still achieves $>75\%$ accuracy and requires further assessment. The influence
model has a significant computational cost advantage. Table III shows an
example at the default loading level on 1000 samples. The influence model can
predict link failures and load shed within 1/10 of the time required for
running PF or OPF (tested with MATLAB R2022a on an Intel Core i5-1135G7
processor with 8 GB of installed memory). Further studies are needed to assess
the economic advantage of our adaptive approach of rescheduling generation
upon contingencies.
| | Simulation (s) | Training (s) | Prediction (s)
---|---|---|---
Exp. 1 | 169.77 | 611.50 | 15.40
Exp. 2 | 183.35 | 305.63 | 10.05
Exp. 3 | 246.23 | 332.68 | 6.76
TABLE III: Time Cost at Default Loading Level.
## VII Conclusion: Advisory Tool for Operators
Based on our study of (1) the impact of wind reduction and the effects of
corrective actions during a failure cascade and (2) the resilience impact
measures and risks of wind reduction, we propose an advisory tool for
operators to assess contingency criticality, predict losses, and strategize
for loss minimization with smart rescheduling. Our tests show promising,
robust results. The advisory tool comprises:
### VII-A Wind Reduction Risk Assessment
The IM helps us identify the most critical links and initial contingencies,
determined by a combination of criticality values as well as the expected
$G(p),L(p),$ and $R(p,\cdot),$ where the profile $p$ embeds the initial
contingencies. Grid- and consumer-centric criticality values are computed as
follows.
$\displaystyle
C^{D}(j)=\sum_{i=1,\dots,N_{br}}d_{ij}(A^{11}_{ji}-A^{01}_{ji}),$ (12)
$\displaystyle C^{E}(j)=\sum_{i=1,\dots,N}e_{ij}(B^{11}_{ji}-B^{01}_{ji}),$
(13)
where $j$ is the link index, and $i$ enumerates over all links for $C^{D}(j)$
or all buses for $C^{E}(j)$. With this information, system operators can
identify the impact of sudden wind reduction when there is a pre-existing link
failure in the network. As greater values of the resilience impact signify
greater cost, this tool informs operators of upcoming risks.
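As a concrete reading of Eq. (12), the grid-centric criticality of each link is a single weighted sum over the influence matrix. The sketch below assumes `D`, `A11`, and `A01` are available as NumPy arrays following the index convention of Eqs. (12)–(13); their construction is described earlier in the paper, not in this excerpt. $C^{E}(j)$ would be computed analogously from $E$ and the $B$ matrices.

```python
import numpy as np

def grid_criticality(D, A11, A01):
    """Eq. (12): C^D(j) = sum_i d_{ij} * (A11[j, i] - A01[j, i]),
    i.e., the influence weights into column j contracted against the
    difference of the IM transition parameters."""
    return np.einsum('ij,ji->j', D, A11 - A01)

# Toy 2-link example with unit influence weights.
C = grid_criticality(np.ones((2, 2)),
                     np.array([[1.0, 2.0], [3.0, 4.0]]),
                     np.zeros((2, 2)))
```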
### VII-B Cascade Management
The failure cascade and load shed predictions given by our models can inform
operators of the best course of action for loss minimization. Operators may
also use this tool to predict the impact of wind reduction and deploy load
reduction accordingly.
## References
* [1] “NorthEast US Failure Cascade,” Boston Globe. 2012.
* [2] “Manhattan, New York Failure Cascade,” The Atlantic, 2019.
* [3] “London Failure Cascade,” Bloomberg, 2019.
* [4] Yamashita, K., Joo, S.-K., Li, J., Zhang, P. and Liu, C.-C., 2008, Analysis, control, and economic impact assessment of major blackout events. Euro. Trans. Electr. Power.
* [5] Ilic, M., Ulerio, R. S., Corbett, E., Austin, E., Shatz, M., & Limpaecher, E., 2020. A Framework for Evaluating Electric Power Grid Improvements in Puerto Rico.
* [6] Dobson I., Carreras B., Lynch V., Newman D., ”Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization”, Chaos 17, 2017.
* [7] M. J. Eppstein and P. D. H. Hines, ”A “Random Chemistry” Algorithm for Identifying Collections of Multiple Contingencies That Initiate Cascading Failure,” in IEEE Transactions on Power Systems, 2012.
* [8] R. D. Zimmerman, C. E. Murillo-Sánchez and R. J. Thomas, ”MATPOWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education,” in IEEE Transactions on Power Systems, 2011.
* [9] M. Sinha, M. Panwar, R. Kadavil, T. Hussain, S. Suryanarayanan, and M. Papic. 2019. Optimal Load Shedding for Mitigation of Cascading Failures in Power Grids. In Proceedings of the Tenth ACM International Conference on Future Energy Systems.
* [10] M. Rahnamay-Naeini, Z. Wang, N. Ghani, A. Mammoli and M. M. Hayat, ”Stochastic Analysis of Cascading-Failure Dynamics in Power Grids,” in IEEE Transactions on Power Systems, 2014.
* [11] B. Shi, J. Liu, Decentralized control and fair load-shedding compensations to prevent cascading failures in a smart grid, International Journal of Electrical Power & Energy Systems, 2015.
* [12] H. Cetinay, S. Soltan, F. A. Kuipers, G. Zussman, and P. Van Mieghem. 2018. Analyzing Cascading Failures in Power Grids under the AC and DC Power Flow Models.
* [13] X. Wu, D. Wu and E. Modiano, ”Predicting Failure Cascades in Large Scale Power Systems via the Influence Model Framework,” in IEEE Transactions on Power Systems, 2021.
* [14] Y. Liu, T. Wang, X. Gu, 2022. A risk-based multi-step corrective control method for mitigation of cascading failures. IET Gener. Transm. Distrib.
* [15] S. Yang, W. Chen, X. Zhang, Y. Jiang, Blocking cascading failures with optimal corrective transmission switching considering available correction time, International Journal of Electrical Power & Energy Systems, 2022.
* [16] J. Song, E. Cotilla-Sanchez, G. Ghanavati and P. D. H. Hines, ”Dynamic Modeling of Cascading Failure in Power Systems,” in IEEE Transactions on Power Systems, 2016.
* [17] X. Zhang, C. Zhan and C. K. Tse, ”Modeling the Dynamics of Cascading Failures in Power Systems,” in IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2017.
* [18] D. Zhang, D. Zhao, Z. Guan, Y. Wu, M. Chi, G. Zheng, Probabilistic analysis of cascade failure dynamics in complex network, Statistical Mechanics and its Applications, 2016.
* [19] Q. -S. Jia, M. Xie and F. F. Wu, ”Ordinal optimization based security dispatching in deregulated power systems,” Proceedings of the 48h IEEE Conference on Decision and Control, 2009.
* [20] M. H. Athari and Z. Wang, ”Impacts of Wind Power Uncertainty on Grid Vulnerability to Cascading Overload Failures,” in IEEE Transactions on Sustainable Energy, 2018.
* [21] Y. Dai, R. Preece, M. Panteli, ”Risk assessment of cascading failures in power systems with increasing wind penetration,” Electric Power Systems Research, 2022.
* [22] Y. Liu, Y. Wang, P. Yong, N. Zhang, C. Kang and D. Lu, ”Fast Power System Cascading Failure Path Searching With High Wind Power Penetration,” in IEEE Transactions on Sustainable Energy, 2020.
* [23] M. H. Athari and Z. Wang, ”Stochastic Cascading Failure Model With Uncertain Generation Using Unscented Transform,” in IEEE Transactions on Sustainable Energy, April 2020.
Remarks on Some Conditional Generalized Borel-Cantelli Lemmas
B.L.S. Prakasa Rao
CR Rao Advanced Institute of Mathematics, Statistics and Computer Science
Hyderabad, India
Abstract : We discuss some conditional generalized Borel-Cantelli lemmas and
investigate their quantitative versions following Arthan and Oliva
(arXiv:2012.09942). Key words : Borel-Cantelli lemma; Quantitative version.
AMS 2020 Subject Classification : Primary 60G70.
## 1 Introduction
Let $(\Omega,\cal A,P)$ be a probability space and $\\{A_{n},n\geq 1\\}$ be a
sequence of events in this probability space. The Borel-Cantelli lemma relates
the convergence or divergence of the series $\sum_{n=1}^{\infty}P(A_{n})$ with
the probability of the event “$A_{n}$ infinitely often” which is the set
defined by
$A_{n}\;i.o=\cap_{n=1}^{\infty}\cup_{i=n}^{\infty}A_{i}.$
The event “$A_{n}$ infinitely often” can also be represented as the event
“$\limsup A_{n}$”. The classical Borel-Cantelli lemma can be stated in two
parts. Theorem 1.1 (First Borel-Cantelli Lemma) Let $\\{A_{n},n\geq 1\\}$ be
an infinite sequence of events such that $\sum_{n=1}^{\infty}P(A_{n})<\infty.$
Then $P(A_{n}\;i.o)=0.$ Theorem 1.2 (Second Borel-Cantelli Lemma) Let
$\\{A_{n},n\geq 1\\}$ be an infinite sequence of mutually independent events
such that $\sum_{n=1}^{\infty}P(A_{n})=\infty.$ Then $P(A_{n}\;i.o)=1.$ Let
${\cal F}$ be a sub-$\sigma$-algebra of $\cal A.$ A sequence of events
$\\{A_{n},n\geq 1\\}$ is said to be conditionally independent given a
sub-$\sigma$-algebra ${\cal F}$ if
(1. 1) $P(\cap_{i=1}^{n}A_{i}|{{\cal F}})=\prod_{i=1}^{n}P(A_{i}|{\cal F})$
almost surely for all $n\geq 1.$ A sequence of random variables
$\\{X_{n},n\geq 1\\}$, defined on the probability space $(\Omega,\cal A,P)$,
is said to be conditionally independent given a sub-$\sigma$-algebra ${\cal
F}$ if
(1. 2) $P(\cap_{i=1}^{n}[X_{i}\leq x_{i}]|{\cal F})=\prod_{i=1}^{n}P(X_{i}\leq
x_{i}|{\cal F})$
almost surely for all $x_{i},1\leq i\leq n,n\geq 1.$ If ${\cal
F}=\\{\phi,\Omega\\},$ then the conditional independence reduces to the usual
notion of independence of random variables. It is known that independence of a
set of events does not imply their conditional independence and conditional
independence of a set of events does not imply their independence (cf. Prakasa
Rao [1]). Properties of sequences of random variables which are conditionally
independent given a sub-$\sigma$-algebra have been studied in Prakasa Rao [1]
and Roussas [2]. Yuan and Yang [3], Liu and Prakasa Rao [4], Yuan et al. [5],
Liu and Zhang [6], Yuan and Li [7] and Chen and Liu [8] investigated properties
of some conditional Borel-Cantelli lemmas and their applications and studied
limit theorems for conditionally independent random variables. An interesting
example of conditionally independent sequence of random variables is a
sequence of exchangeable random variables which become conditionally
independent given a suitable sub-$\sigma$-algebra. Prakasa Rao [1] and Roussas
[2] give other examples of stochastic models in which conditional independence
plays a major role such as non-ergodic models dealing with branching processes
(cf. Basawa and Scott [10]). Proofs of the results presented here are akin to
the corresponding results of the Borel-Cantelli lemmas but they are not
consequences of those results. Hence they have to be stated separately and
proved. Majerek et al. [9] obtained a conditional version of the Borel-
Cantelli lemma. Theorem 1.3 Let $\\{A_{n},n\geq 1\\}$ be a sequence of events
and suppose that ${\cal F}$ is a sub-$\sigma$-algebra such that
$\sum_{n=1}^{\infty}P(A_{n}|{\cal F})<\infty$
almost surely. Then
$P(\limsup A_{n}|{\cal F})=0$
almost surely. The following result is a generalized Borel-Cantelli lemma
originally due to Barndorff-Nielsen but with a corrected proof by Balakrishnan
and Stepanov [11,12]. Let $A^{c}$ denote the complement of a set $A.$ Theorem
1.4 Let $\\{A_{n},n\geq 1\\}$ be a sequence of events such that
$P(A_{n})\rightarrow 0$ as $n\rightarrow\infty.$ If, for some $m\geq 0,$
$\sum_{n=1}^{\infty}P(A_{n}^{c}A_{n+1}^{c}\dots A_{n+m-1}^{c}A_{n+m})<\infty,$
then
$P(\limsup A_{n})=0.$
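The dichotomy between Theorems 1.1 and 1.2 can be illustrated numerically. The sketch below (an illustration of ours, not part of the original text) samples independent events with $P(A_{n})=1/n^{2}$ (summable) and $P(A_{n})=1/n$ (non-summable) and records how often an event of large index still occurs: in the summable case late occurrences are rare, while in the divergent case they happen on nearly every sample path.

```python
import random

def last_hit(probs, rng):
    """Largest index n such that A_n occurs in one sampled trajectory
    of independent events with P(A_n) = probs[n-1]; 0 if none occur."""
    last = 0
    for n, p in enumerate(probs, start=1):
        if rng.random() < p:
            last = n
    return last

rng = random.Random(0)
N, reps = 2000, 500
summable = [1.0 / n ** 2 for n in range(1, N + 1)]   # Theorem 1.1: i.o. has probability 0
divergent = [1.0 / n for n in range(1, N + 1)]       # Theorem 1.2: i.o. has probability 1
late_summable = sum(last_hit(summable, rng) > 100 for _ in range(reps)) / reps
late_divergent = sum(last_hit(divergent, rng) > 100 for _ in range(reps)) / reps
```

For the summable case, the chance of any occurrence beyond index 100 is at most $\sum_{n>100}1/n^{2}\approx 0.01$; for the divergent case it equals $1-\prod_{n=101}^{2000}(1-1/n)=0.95$.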
We now prove a conditional version of this generalized Borel-Cantelli lemma.
Theorem 1.5 Let $\\{A_{n},n\geq 1\\}$ be a sequence of events and ${\cal F}$
be a sub-$\sigma$-algebra such that $P(A_{n}|{{\cal F}})\rightarrow 0$ almost
surely as $n\rightarrow\infty.$ If, for some $m\geq 0,$
$\sum_{n=1}^{\infty}P(A_{n}^{c}A_{n+1}^{c}\dots A_{n+m-1}^{c}A_{n+m}|{\cal
F})<\infty$
almost surely, then
$P(\limsup A_{n}|{\cal F})=0$
almost surely and hence
$P(\limsup A_{n})=0.$
Proof: Note that
$\displaystyle P(\limsup A_{n}|{\cal F})$ $\displaystyle=$ $\displaystyle
P(\cap_{n=1}^{\infty}\cup_{k=n}^{\infty}A_{k}|{\cal F})$ $\displaystyle=$
$\displaystyle\lim_{n\rightarrow\infty}P(\cup_{k=n}^{\infty}A_{k}|{\cal F})$
almost surely. However, for any fixed $m>n\geq 1,$
$\displaystyle P(\cup_{k=n}^{\infty}A_{k}|{\cal F})$ $\displaystyle=$
$\displaystyle P(A_{n}|{\cal F})+P(A_{n}^{c}A_{n+1}|{\cal
F})+P(A_{n}^{c}A_{n+1}^{c}A_{n+2}|{\cal F})+\dots$ $\displaystyle\leq$
$\displaystyle P(A_{n}|{\cal F})+P(A_{n}^{c}A_{n+1}|{\cal
F})+P(A_{n}^{c}A_{n+1}^{c}A_{n+2}|{\cal F})$
$\displaystyle\;\;\;\;+\dots+P(A_{n}^{c}\dots A_{n+m-2}^{c}A_{n+m-1}|{\cal
F})$ $\displaystyle\;\;\;\;+\sum_{k=n}^{\infty}P(A_{k}^{c}\dots
A_{k+m-1}^{c}A_{k+m}|{\cal F}).$
The last term given above tends to zero as $n\rightarrow\infty$ since it is
the tail sum of the infinite series
$\sum_{n=1}^{\infty}P(A_{n}^{c}A_{n+1}^{c}\dots A_{n+m-1}^{c}A_{n+m}|{\cal
F})<\infty$
which converges almost surely by hypothesis.
Furthermore, for a fixed $m\geq 0,$
(1. 3) $\displaystyle P(A_{n}|{\cal F})+P(A_{n}^{c}A_{n+1}|{\cal
F})+P(A_{n}^{c}A_{n+1}^{c}A_{n+2}|{\cal F})+\dots+P(A_{n}^{c}\dots
A_{n+m-2}^{c}A_{n+m-1}|{\cal F})$ $\displaystyle\leq P(A_{n}|{\cal
F})+P(A_{n+1}|{\cal F})+P(A_{n+2}|{\cal F})+\dots+P(A_{n+m-1}|{\cal F})$
and the last term on the right side of the inequality tends to zero almost
surely by hypothesis. Hence
$P(\limsup A_{n}|{\cal F})=0$
almost surely which, in turn, implies that
$E[P(\limsup A_{n}|{\cal F})]=P(\limsup A_{n})=0.$
Remarks: This result is also proved in Lemma 2.4 in Chen and Liu [8]. As a
consequence of Theorem 1.5, we have the following result. Theorem 1.6 Let
$\\{A_{n},n\geq 1\\}$ be a sequence of events and ${\cal F}$ be a
sub-$\sigma$-algebra such that $P(A_{n}|{\cal F})\rightarrow 0$ a.s. as
$n\rightarrow\infty.$ Further suppose that
$\sum_{n=1}^{\infty}P(A_{n}^{c}A_{n+1}|{\cal F})<\infty$
almost surely. Then
$P(\limsup A_{n}|{\cal F})=0$
almost surely and hence
$P(\limsup A_{n})=0.$
Proof: This theorem follows from Theorem 1.5 by choosing $m=1$ in that result.
Theorem 1.7 Let $\\{A_{n},n\geq 1\\}$ be a sequence of events and ${\cal F}$
be a sub-$\sigma$-algebra such that $P(A_{n}|{\cal F})\rightarrow 0$ almost
surely as $n\rightarrow\infty.$ Further suppose that
$\sum_{n=1}^{\infty}P(A_{n}A_{n+1}^{c}|{\cal F})<\infty$
almost surely. Then
$P(\limsup A_{n}|{\cal F})=0$
almost surely and hence
$P(\limsup A_{n})=0.$
Proof: This result is a consequence of Theorem 1.6 and the observation that
$\sum_{n=1}^{\infty}P(A_{n}A_{n+1}^{c}|{\cal F})=P(A_{1}|{\cal
F})+\sum_{n=1}^{\infty}P(A_{n}^{c}A_{n+1}|{\cal F}).$
Remarks: The result in Theorem 1.7 is also obtained in Lemma 2.2 in Chen and
Liu [8].
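The observation behind Theorem 1.7 rests on the algebraic fact that $P(A_{n}A_{n+1}^{c})-P(A_{n}^{c}A_{n+1})=P(A_{n})-P(A_{n+1}),$ which telescopes. For independent events with $P(A_{n})=p_{n}$ these joint probabilities factor, so the telescoping can be checked numerically; the sketch below is an illustration of ours under that independence assumption, not part of the original argument.

```python
def telescoped(probs):
    """For independent events with P(A_n) = probs[n-1], the partial sum
    sum_n [P(A_n A_{n+1}^c) - P(A_n^c A_{n+1})] equals p_1 - p_N,
    since each summand is p_n(1-p_{n+1}) - (1-p_n)p_{n+1} = p_n - p_{n+1}."""
    lhs = sum(p * (1 - q) - (1 - p) * q for p, q in zip(probs, probs[1:]))
    return lhs, probs[0] - probs[-1]

lhs, rhs = telescoped([1.0 / 2 ** n for n in range(1, 12)])
```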
## 2 Conditional version of lemma in Balakrishnan and Stepanov [12]
Balakrishnan and Stepanov [12] generalized the Second Borel-Cantelli Lemma to
some dependent events. Following Balakrishnan and Stepanov [12],
given an event $A,$ and a sub-$\sigma$-algebra ${\cal F},$ we say that the
quantity $\alpha\geq 0$ is the power-$A$ coefficient of the conditional
independence between the event $A$ and another event $B$ if
$P(AB|{\cal F})=[P(A|{\cal F})]^{\alpha}P(B|{\cal F})$
almost surely. It is obvious that if $A$ and $B$ are conditionally independent
given ${\cal F},$ then $\alpha=1.$ Let $\\{A_{n},n\geq 1\\}$ be a sequence of
events and let $A^{*}_{n}=A_{n}^{c}A_{n+1}^{c}\dots.$ Suppose that the
power-$A_{n}^{c}$ coefficient of the conditional independence between the
events $A_{n}^{c}$ and $A^{*}_{n+1}$ given the sub-$\sigma$-algebra ${\cal F}$
is $\alpha_{n}.$ Then the following result holds. Theorem 2.1 Let
$\\{A_{n},n\geq 1\\}$ be a sequence of events as defined above and let ${\cal
F}$ be a sub-$\sigma$-algebra. Then
(2. 1) $P(\limsup A_{n}|{\cal F})=1$
almost surely if and only if
(2. 2) $\sum_{n=1}^{\infty}\alpha_{n}P(A_{n}|{\cal F})=\infty$
almost surely. Proof: Note that
(2. 3) $P(A^{*}_{n}|{\cal F})=(P(A_{n}^{c}|{\cal
F}))^{\alpha_{n}}P(A^{*}_{n+1}|{\cal F})$
almost surely. Repeating the process, we obtain that
(2. 4) $P(A^{*}_{n}|{\cal F})=(P(A_{n}^{c}|{\cal
F}))^{\alpha_{n}}\dots(P(A_{n+k-1}^{c}|{\cal
F}))^{\alpha_{n+k-1}}P(A^{*}_{n+k}|{\cal F})$
almost surely for all $n\geq 1$ and $k\geq 1.$ Applying the inequality
$\log(1-x)\leq-x$ for $0\leq x<1,$ it follows that
$P(A^{*}_{n}|{\cal F})\leq\exp(-\sum_{i=n}^{n+k-1}\alpha_{i}P(A_{i}|{\cal
F}))P(A^{*}_{n+k}|{\cal F})$
almost surely for all $n\geq 1$ and $k\geq 1.$ Taking limit as
$k\rightarrow\infty,$ it follows that
(2. 5) $P(A^{*}_{n}|{\cal
F})\leq\exp(-\sum_{i=n}^{\infty}\alpha_{i}P(A_{i}|{\cal F}))(1-P(\limsup
A_{n}|{\cal F}))$
almost surely. This shows that (2.2) implies (2.1): suppose that $P(\limsup
A_{n}|{\cal F})<1.$ If the series $\sum_{i=n}^{\infty}\alpha_{i}P(A_{i}|{\cal
F})$ is divergent, then (2.5) forces $P(A^{*}_{n}|{\cal F})=0$ for every $n,$
so that $P(\limsup A_{n}|{\cal F})=1,$ a contradiction. The implication from
(2.1) to (2.2) follows from (2.4) by a similar argument. Following
Balakrishnan and Stepanov [12], let the sequence of events
$A_{n},n\geq 1$ be called conditionally Markov given a $\sigma$-algebra ${\cal
F},$ if the corresponding indicator random variables $I_{A_{n}},n\geq 1$ form
a Markov chain conditionally given the $\sigma$-algebra ${\cal F}.$ Suppose
$\beta_{n}$ is the power-$A_{n}^{c}$ coefficient of the conditional
independence between $A_{n}^{c}$ and $A_{n+1}^{c}$ given the $\sigma$-algebra
${\cal F}.$ It can be
checked that $\alpha_{n}$ defined above is equal to $\beta_{n}$ and in fact
$\alpha_{n}=\frac{\log P(A_{n}^{c}A_{n+1}|{\cal F})-\log P(A_{n+1}|{\cal
F})}{\log P(A_{n}^{c}|{\cal F})}.$
## 3 Quantitative version of the Conditional Second Borel-Cantelli Lemma
Kochen and Stone [13] presented a result that generalizes the Second Borel-
Cantelli Lemma, giving a lower bound on $P(A_{n}\;i.o)$ when the events
$A_{n}$ are not mutually independent, and proved that the Second Borel-
Cantelli Lemma holds for pairwise independent events. Yan [14] generalized
this result, leading to the following theorem. Theorem 3.1 (Kochen and Stone
[13], Yan [14]) Let
$\\{A_{n},n\geq 1\\}$ be an infinite sequence of events such that
$\sum_{n=1}^{\infty}P(A_{n})=\infty.$ Then
(3. 1)
$P(A_{n}\;i.o)\geq\limsup_{n\rightarrow\infty}\frac{[\sum_{k=1}^{n}P(A_{k})]^{2}}{\sum_{i,k=1}^{n}P(A_{i}A_{k})}.$
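For independent events, the joint probabilities in (3.1) factor, so the lower bound can be evaluated exactly. The sketch below (an illustration of ours, not from the text) takes $P(A_{k})=1/k,$ for which $\sum_{k}P(A_{k})=\infty,$ and shows the bound approaching 1, consistent with the Second Borel-Cantelli Lemma.

```python
def kochen_stone_bound(probs):
    """Right-hand side of (3.1) for independent events with
    P(A_k) = probs[k-1]: sum_{i,k<=n} P(A_i A_k) = S1^2 - sum_k p_k^2 + S1,
    because P(A_i A_k) = p_i p_k for i != k while P(A_k A_k) = p_k."""
    s1 = sum(probs)
    s2 = s1 * s1 - sum(p * p for p in probs) + s1
    return s1 * s1 / s2

bounds = [kochen_stone_bound([1.0 / k for k in range(1, n + 1)])
          for n in (10, 100, 10000)]
```

The bound increases with $n$ and exceeds $0.9$ already at $n=10^{4}$.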
A related theorem due to Erdos and Renyi [16] is the following result. Theorem
3.2 (Erdos and Renyi [16]) Let $\\{A_{n},n\geq 1\\}$ be an infinite sequence
of events such that $\sum_{n=1}^{\infty}P(A_{n})=\infty$ and
(3. 2)
$\limsup_{n\rightarrow\infty}\frac{\sum_{i,k=1}^{n}P(A_{i}A_{k})}{[\sum_{k=1}^{n}P(A_{k})]^{2}}=1.$
Then $P(A_{n}\;i.o)=1.$ We will now obtain the conditional quantitative
versions of these results following the ideas and techniques in Arthan and
Oliva [17]. Given an infinite sequence of events $\\{A_{n},n\geq 1\\}$ and a
sub-$\sigma$-algebra ${\cal F},$ the first conditional Borel-Cantelli lemma
implies that the probability of the event $\limsup A_{n}$ conditional on
${\cal F}$ is zero almost surely when $\sum_{n=1}^{\infty}P(A_{n}|{\cal
F})<\infty$ almost surely. The assumption that
$\sum_{n=1}^{\infty}P(A_{n}|{\cal F})<\infty$ almost surely is equivalent to
the almost sure convergence of the sequence $v_{k}=\sum_{i=1}^{k}P(A_{i}|{\cal
F}).$ Note that $\\{v_{k},k\geq 1\\}$ is a random sequence. This condition
implies the existence of a random function $\eta(.)$ such that
(3. 3) $|v_{m}-v_{n}|<\frac{1}{2^{\ell}}\quad\mbox{for all }\ell\geq 0\mbox{ and }m,n>\eta(\ell)$
almost surely since the sequence $\\{v_{k},k\geq 1\\}$ is a Cauchy sequence
a.s. In other words, there exists a random function $\phi(\ell)$ such that
(3. 4) $\sum_{i=\phi(\ell)}^{\infty}P(A_{i}|{\cal F})\leq\frac{1}{2^{\ell}}$
almost surely. We now state the quantitative version of the conditional First
Borel-Cantelli Lemma stated in Theorem 1.3. Theorem 3.3 Suppose
$\\{A_{n},n\geq 1\\}$ is a sequence of events such that the sequence
$\sum_{i=1}^{m}P(A_{i}|{\cal F})$ converges almost surely with a rate of
convergence $\phi(.),$ that is, for all $\ell\geq 0$ and $m>\phi(\ell),$
$\sum_{i=\phi(\ell)}^{m}P(A_{i}|{\cal F})\leq\frac{1}{2^{\ell}}$
almost surely. Then the conditional probabilities of the tail unions converge
to 0 almost surely with the same rate, that is, for all $\ell\geq 0$ and
$m>\phi(\ell),$
$P(\cup_{i=\phi(\ell)}^{m}A_{i}|{\cal F})\leq\frac{1}{2^{\ell}}$
almost surely. Proof: This result is an easy consequence of the fact
$P(\cup_{i=\phi(\ell)}^{m}A_{i}|{\cal
F})\leq\sum_{i=\phi(\ell)}^{m}P(A_{i}|{\cal F})a.s.$
for all $\ell\geq 0$ and $m>\phi(\ell)$ almost surely. The following theorem
gives the quantitative version of the second conditional Borel-Cantelli Lemma.
Theorem 3.4 Suppose $\\{A_{n},n\geq 1\\}$ is a sequence of conditionally
independent events given a sub-$\sigma$-algebra ${\cal F}.$ Suppose further
that the sequence $\\{\sum_{i=1}^{n}P(A_{i}|{\cal F}),n\geq 1\\}$ diverges
with rate $\psi(.)$ almost surely, that is, for all $N\geq 1,$
$\sum_{i=1}^{\psi(N)}P(A_{i}|{\cal F})\geq N$
almost surely. Then, for all $n\geq 1,$ and $N\geq 1,$
$P(\cup_{i=1}^{\psi(n+N-1)}A_{i}|{\cal F})\geq 1-e^{-N}$
almost surely. Proof: We choose a fixed $n\geq 1$ and $N\geq 1.$ Let $A^{c}$
denote the complement of an event $A.$ By the conditional independence of the
events $\\{A_{n},n\geq 1\\}$ given the sub-$\sigma$-algebra ${\cal F},$ it
follows that the events $\\{A_{n}^{c},n\geq 1\\}$ are also conditionally
independent given sub-$\sigma$-algebra ${\cal F},$ and hence
$\displaystyle P(\cap_{i=n}^{\psi(n+N-1)}A_{i}^{c}|{\cal F})$ $\displaystyle=$
$\displaystyle\Pi_{i=n}^{\psi(n+N-1)}P(A_{i}^{c}|{\cal F})$ $\displaystyle=$
$\displaystyle\Pi_{i=n}^{\psi(n+N-1)}(1-P(A_{i}|{\cal F}))$
almost surely. Taking logarithms on both sides of the equation given above, it
follows that
$\displaystyle\log(P(\cap_{i=n}^{\psi(n+N-1)}A_{i}^{c}|{\cal F}))$
$\displaystyle=$ $\displaystyle\log(\Pi_{i=n}^{\psi(n+N-1)}(1-P(A_{i}|{\cal
F}))$ $\displaystyle=$
$\displaystyle\sum_{i=n}^{\psi(n+N-1)}\log(1-P(A_{i}|{\cal F}))$
$\displaystyle\leq$ $\displaystyle-\sum_{i=n}^{\psi(n+N-1)}P(A_{i}|{\cal F})$
$\displaystyle\leq$ $\displaystyle-N$
almost surely by the elementary inequality $\log(1+x)\leq x$ for
$x\in(-1,\infty).$ Hence
(3. 7) $P(\cap_{i=n}^{\psi(n+N-1)}A_{i}^{c}|{\cal F})\leq e^{-N}$
almost surely which implies that
(3. 8) $P(\cup_{i=1}^{\psi(n+N-1)}A_{i}|{\cal F})\geq 1-e^{-N}$
almost surely. Remarks: It is easy to check that the Second Conditional Borel-
Cantelli Lemma is a consequence of Theorem 3.4. This can be seen from the
following arguments. Suppose that $\sum_{i=1}^{\infty}P(A_{i}|{\cal
F})=\infty$ almost surely. Then there exists a function $\psi(.)$ such that,
for all $N\geq 1,$
$\sum_{i=1}^{\psi(N)}P(A_{i}|{\cal F})\geq N$
almost surely. Applying Theorem 3.4, it follows that
$P(\cup_{i=n}^{\psi(n+N-1)}A_{i}|{\cal F})\geq 1-e^{-N}$
almost surely which in turn shows that
$P(\cup_{i=n}^{\infty}A_{i}|{\cal F})\geq 1-e^{-N}$
almost surely. Hence
$P(\limsup A_{n}|{\cal F})=1$
almost surely.
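The quantitative bound of Theorem 3.4 can be checked exactly for independent events, where $P(\cup_{i=1}^{m}A_{i})=1-\prod_{i\leq m}(1-p_{i})\geq 1-e^{-\sum_{i\leq m}p_{i}}$ by the inequality $\log(1-x)\leq-x.$ The sketch below (an illustration of ours under unconditional independence, i.e., with ${\cal F}$ trivial) uses $p_{i}=1/(i+1)$ and the natural divergence rate $\psi.$

```python
import math

def psi(probs, N):
    """Divergence rate of Theorem 3.4: smallest m with sum_{i<=m} p_i >= N."""
    total = 0.0
    for m, p in enumerate(probs, start=1):
        total += p
        if total >= N:
            return m
    raise ValueError("sequence too short for this N")

def union_prob(probs, m):
    """P(union_{i<=m} A_i) for independent events."""
    prod = 1.0
    for p in probs[:m]:
        prod *= 1.0 - p
    return 1.0 - prod

probs = [1.0 / n for n in range(2, 100000)]   # p_i = 1/(i+1), divergent sum
for N in (1, 2, 3):
    assert union_prob(probs, psi(probs, N)) >= 1.0 - math.exp(-N)
```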
## 4 Quantitative version of the Conditional Erdos-Renyi theorem
We will now state and prove a lemma which will be used later.
Lemma 4.1: For any sequence of events $\\{A_{n},n\geq 1\\}$ and a
sub-$\sigma$-algebra ${\cal F},$ and for all $n\geq 1,$
(4. 1) $\frac{\sum_{i,k=1}^{n}P(A_{i}A_{k}|{\cal
F})}{(\sum_{k=1}^{n}P(A_{k}|{\cal F}))^{2}}\geq 1$
almost surely. Proof: Let $\alpha_{i}=P(A_{i}|{\cal F})$ and
$\eta_{n}=\sum_{i=1}^{n}\alpha_{i}.$ It is obvious that
$E(\eta_{n}^{2}|{\cal F})\geq(E(\eta_{n}|{\cal F}))^{2}$
from the elementary property that the conditional variance of any random
variable is greater than or equal to zero whenever it exists. Furthermore
$E(\eta_{n}^{2}|{\cal F})=\sum_{i,k=1}^{n}P(A_{i}A_{k}|{\cal F})$
almost surely and
$E(\eta_{n}|{\cal F})=\sum_{k=1}^{n}P(A_{k}|{\cal F})$
almost surely. Hence
$\sum_{i,k=1}^{n}P(A_{i}A_{k}|{\cal F})\geq(\sum_{k=1}^{n}P(A_{k}|{\cal
F}))^{2}$
or equivalently
$\frac{\sum_{i,k=1}^{n}P(A_{i}A_{k}|{\cal F})}{(\sum_{k=1}^{n}P(A_{k}|{\cal
F}))^{2}}\geq 1$
almost surely. Following the ideas of Arthan and Oliva [17] again, we can
obtain the following quantitative version of the conditional Erdos-Renyi
theorem using Lemma 4.1. We omit the details. Theorem 4.2 Consider a sequence
of events $\\{A_{n},n\geq 1\\}$ and a sub-$\sigma$-algebra ${\cal F}.$ Suppose
there exists a random function $\psi(n)$ such that, for all $N\geq 1,$
$\sum_{i=1}^{\psi(N)}P(A_{i}|{\cal F})\geq N$
almost surely and further suppose that there exists a random function
$\phi(\ell,n)$ such that for all $\ell,n$ with $\phi(\ell,n)\geq n,$
$\frac{\sum_{i,k=1}^{\phi(\ell,n)}P(A_{i}A_{k}|{\cal F})}{(\sum_{k=1}^{\phi(\ell,n)}P(A_{k}|{\cal F}))^{2}}\leq 1+\frac{1}{2^{\ell}}$
almost surely. Define $n_{1}=\phi(1,1)$ and for $k>1,$ let
$n_{k}=\phi(k,\max(n_{k-1},k)).$ Then, for all $n\geq 1$ and $\ell\geq 1,$
(4. 2) $P(\cup_{i=n}^{n_{m}}A_{i}|{\cal F})\geq 1-\frac{1}{2^{\ell}}$
almost surely where $m=\max(\psi(2n),\ell+3).$
## 5 Quantitative version of the Conditional Kochen-Stone theorem
We first state a lemma.
Lemma 5.1 For any sequence of real numbers $\\{a_{n},n\geq 1\\},$ events
$\\{A_{n},n\geq 1\\}$ and a sub-$\sigma$-algebra ${\cal F},$ the following
are equivalent.
(5. 1) $P(A_{n}\;i.o|{\cal F})\geq\limsup a_{n}$
almost surely and, for every $m\geq 1,\ell\geq 1,$ there exists $n>m$ such
that for every $j>n$
(5. 2) $P(\cup_{i=m+1}^{n}A_{i}|{\cal F})+\frac{1}{2^{\ell}}\geq a_{j}$
almost surely. Proof of this lemma is along the same lines as the proof of
Lemma A.2 in Arthan and Oliva [17]. The following conditional version of
Kochen-Stone inequality can be obtained by arguments similar to those in
Kochen and Stone [13]. We now state an equivalent quantitative version. Lemma
5.2 Let $\\{A_{n},n\geq 1\\}$ be an infinite sequence of events and ${\cal F}$
be a sub-$\sigma$-algebra such that $\sum_{i=1}^{\infty}P(A_{i}|{\cal
F})=\infty$ almost surely. Then, for every $m\geq 1,\ell\geq 1,$ there exists
$n>m$ such that, for every $j>n,$
(5. 3)
$P(\cup_{i=m+1}^{n}A_{i}|{\cal F})+\frac{1}{2^{\ell}}\geq\frac{(\sum_{k=1}^{j}P(A_{k}|{\cal F}))^{2}}{\sum_{i,k=1}^{j}P(A_{i}A_{k}|{\cal F})}$
almost surely. Lemma 5.3 (Conditional Chung-Erdos [15] inequality) Let
$\\{A_{i},1\leq i\leq n\\}$ be a finite sequence of events and $\cal F$ be a
sub-$\sigma$-algebra. Then
(5. 4) $P(\cup_{k=1}^{n}A_{k}|{\cal F})\geq\frac{(\sum_{k=1}^{n}P(A_{k}|{\cal
F}))^{2}}{\sum_{i,k=1}^{n}P(A_{i}A_{k}|{\cal F})}$
almost surely. The proof of Lemma 5.3 runs along the same lines as in Chung
and Erdos [15] and Yan [14]. We now state the quantitative version of the
conditional Kochen-Stone theorem. We omit the proof, which runs along the same
lines as that of Theorem 4.2 in Arthan and Oliva [17] in the unconditional
case, using Lemmas 5.2 and 5.3. Theorem 5.4 (Quantitative version of the
Conditional Kochen-Stone theorem) Let $\\{A_{n},n\geq 1\\}$ be an infinite
sequence of events and ${\cal F}$ be a sub-$\sigma$-algebra. Suppose there
exists a random function $\phi(n)$ such that, for all $N\geq 1,$
$\sum_{i=1}^{\phi(N)}P(A_{i}|{\cal F})\geq N$
almost surely. Then, for all $m$ and $\ell$ and $g(.)$ such that $g(i)>i$ for
all $i$, there exists $n>m$ such that for all $j\in[n,g(n)],$
(5. 5) $P(\cup_{k=1}^{n}A_{k}|{\cal
F})+\frac{1}{2^{\ell}}\geq\frac{(\sum_{k=1}^{j}P(A_{k}|{\cal
F}))^{2}}{\sum_{i,k=1}^{j}P(A_{i}A_{k}|{\cal F})}$
almost surely. Acknowledgment This work is supported by the scheme “INSA
Senior Scientist” at the CR Rao Advanced Institute of Mathematics, Statistics
and Computer Science, Hyderabad, India. References
1\. Prakasa Rao, B.L.S. (2009) Conditional independence, conditional mixing
and conditional association, Ann. Inst. Statist. Math., 61, 441-460.
2\. Roussas, G.G. (2008) On conditional independence, mixing and association,
Stoch. Anal. Appl., 26, 1274-1309.
3\. Yuan, D. and Yang, Y. (2011) Conditional versions of limit theorems for
conditionally associated random variables, J. Math. Anal. Appl., 376, 282-293.
4\. Liu, J. and Prakasa Rao, B.L.S. (2013) On conditional Borel-Cantelli
lemmas for sequences of random variables, J. Math. Anal. Appl., 399, 156-165.
5\. Yuan, D. Wei, L., and Lei, L. (2014) Conditional central limit theorems
for a sequence of conditionally independent random variables, J. Korean Math.
Soc., 51, 1-15.
6\. Liu, J.C. and Zhang, L.D. (2014) Conditional Borel-Cantelli lemmas and
conditional strong law of large numbers, Acta Math. Appl. Sin., 37, 537-546.
7\. Yuan, D. and Li, S. (2015) From conditional independence to conditionally
negative association: some preliminary results., Comm. Stat.. Theory and
Methods, 44, 3942-3966.
8\. Chen, Q and Liu, J. (2017) The conditional Borel-Cantelli lemma and
applications, J. Korean Math. Soc., 54, 441-460.
9\. Majerak, D., Nowak, W., and Zieba, W. (2005) Conditional strong law of
large numbers, Int. J. Pure Appl. Math., 20, 143-156.
10\. Basawa, I.V. and Scott, D. (1983) Asymptotic Optimal Inference for Non-
ergodic Models, Lecture Notes Stat. Vol. 17, Springer, New York.
11\. Balakrishnan, N. and Stepanov, A. (2010) Generalization of Borel-Cantelli
lemma, The Mathematical Scientist, 35, 61-62.
12\. Balakrishnan, N. and Stepanov, A. (2021) A note on the Borel-Cantelli
lemma, arXiv: 2112.00741v1 [math.PR] 1 Dec 2021.
13\. Kochen, S.B. and Stone, C.J. (1964) A note on the Borel-Cantelli lemma,
Illinois J. Math. 8, 248-251.
14\. Yan, J. (2006) A simple proof of two generalized Borel-Cantelli lemmas, In
Lecture Notes in Math., Vol. 1874, pp. 77-79, Springer.
15\. Chung, K.L. and Erdos, P. (1952) On the application of the Borel-Cantelli
lemma, Trans. Amer. Math. Soc., 72, 179-186.
16\. Erdos, P. and Renyi, A. (1959) On Cantor’s series with convergence $\sum
1/q_{n}.$, Ann. Univ. Sci. Budap. Rolando Eotvos, Sect. Math., 2, 93-109.
17\. Arthan, R. and Oliva, P. (2020) On the Borel-Cantelli lemmas, the Erdos-
Renyi theorem and the Kochen-Stone theorem, arXiv:2012.09942v1 [math.PR] 17
Dec 2020.
# On higher dimensional point sets in general position
Andrew Suk, Department of Mathematics, University of California at San Diego,
La Jolla, CA, 92093 USA. Supported by NSF CAREER award DMS-1800746 and NSF
award DMS-1952786. Email: <EMAIL_ADDRESS>. Ji Zeng, Department of Mathematics,
University of California at San Diego, La Jolla, CA, 92093 USA. Supported by
NSF grant DMS-1800746. Email: <EMAIL_ADDRESS>.
###### Abstract
A finite point set in $\mathbb{R}^{d}$ is in general position if no $d+1$
points lie on a common hyperplane. Let $\alpha_{d}(N)$ be the largest integer
such that any set of $N$ points in $\mathbb{R}^{d}$ with no $d+2$ members on a
common hyperplane, contains a subset of size $\alpha_{d}(N)$ in general
position. Using the method of hypergraph containers, Balogh and Solymosi
showed that $\alpha_{2}(N)<N^{5/6+o(1)}$. In this paper, we also use the
container method to obtain new upper bounds for $\alpha_{d}(N)$ when $d\geq
3$. More precisely, we show that if $d$ is odd, then
$\alpha_{d}(N)<N^{\frac{1}{2}+\frac{1}{2d}+o(1)}$, and if $d$ is even, we have
$\alpha_{d}(N)<N^{\frac{1}{2}+\frac{1}{d-1}+o(1)}$.
We also study the classical problem of determining the maximum number
$a(d,k,n)$ of points selected from the grid $[n]^{d}$ such that no $k+2$
members lie on a $k$-flat. For fixed $d$ and $k$, we show that
$a(d,k,n)\leq
O\left(n^{\frac{d}{2\lfloor(k+2)/4\rfloor}(1-\frac{1}{2\lfloor(k+2)/4\rfloor
d+1})}\right),$
which improves the previously best known bound of
$O\left(n^{\frac{d}{\lfloor(k+2)/2\rfloor}}\right)$ due to Lefmann when $k+2$
is congruent to 0 or 1 mod 4.
## 1 Introduction
A finite point set in $\mathbb{R}^{d}$ is said to be in _general position_ if
no $d+1$ members lie on a common hyperplane. Let $\alpha_{d}(N)$ be the
largest integer such that any set of $N$ points in $\mathbb{R}^{d}$ with no
$d+2$ members on a hyperplane, contains $\alpha_{d}(N)$ points in general
position.
In 1986, Erdős [8] proposed the problem of determining $\alpha_{2}(N)$ and
observed that a simple greedy algorithm shows
$\alpha_{2}(N)\geq\Omega(\sqrt{N})$. A few years later, Füredi [10] showed
that
$\Omega(\sqrt{N\log N})<\alpha_{2}(N)<o(N),$
where the lower bound uses a result of Phelps and Rödl [20] on partial Steiner
systems, and the upper bound relies on the density Hales-Jewett theorem [11,
12]. In 2018, a breakthrough was made by Balogh and Solymosi [3], who showed
that $\alpha_{2}(N)<N^{5/6+o(1)}$. Their proof was based on the method of
hypergraph containers, a powerful technique introduced independently by
Balogh, Morris, and Samotij [1] and by Saxton and Thomason [24], which reveals
an underlying structure of the independent sets in a hypergraph. We refer
interested readers to [2] for a survey of results based on this method.
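To make Erdős's greedy observation concrete: scan the points in any order and keep a point exactly when it forms no collinear triple with two points already kept. If no 4 input points are collinear, each kept pair rules out at most one further point, which yields the $\Omega(\sqrt{N})$ bound. A minimal Python sketch (the helper names and the exact integer collinearity test are ours, not from the paper):

```python
from itertools import combinations

def collinear(p, q, r):
    # Exact integer test: cross product (q - p) x (r - p) == 0 in the plane.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def greedy_general_position(points):
    """Greedily keep points so that no three kept points are collinear."""
    chosen = []
    for p in points:
        if all(not collinear(a, b, p) for a, b in combinations(chosen, 2)):
            chosen.append(p)
    return chosen

# Example on the 10 x 10 grid. (The grid has many collinear points, so the
# Omega(sqrt(N)) guarantee does not apply here, but the invariant still holds.)
pts = [(x, y) for x in range(10) for y in range(10)]
sel = greedy_general_position(pts)
assert all(not collinear(a, b, c) for a, b, c in combinations(sel, 3))
```

The quadratic scan over kept pairs is the naive implementation; it suffices to exhibit the invariant, not to be efficient.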
In higher dimensions, the best lower bound for $\alpha_{d}(N)$ is due to
Cardinal, Tóth, and Wood [5], who showed that $\alpha_{d}(N)\geq\Omega((N\log
N)^{1/d})$, for every fixed $d\geq 2$. For upper bounds, Milićević [18] used
the density Hales-Jewett theorem to show that $\alpha_{d}(N)=o(N)$ for every
fixed $d\geq 2$. However, these upper bounds in [18], just like that in [10],
are still almost linear in $N$. Our main result is the following.
###### Theorem 1.1.
Let $d\geq 3$ be a fixed integer. If $d$ is odd, then
$\alpha_{d}(N)<N^{\frac{1}{2}+\frac{1}{2d}+o(1)}$. If $d$ is even, then
$\alpha_{d}(N)<N^{\frac{1}{2}+\frac{1}{d-1}+o(1)}.$
Our proof of Theorem 1.1 is also based on the hypergraph container method. A
key ingredient in the proof is a new supersaturation lemma for $(k+2)$-tuples
of the grid $[n]^{d}$ that lie on a $k$-flat, which we shall discuss in the
next section. Here, by a _$k$ -flat_ we mean a $k$-dimensional affine subspace
of $\mathbb{R}^{d}$.
One can consider a generalization of the quantity $\alpha_{d}(N)$. We let
$\alpha_{d,s}(N)$ be the largest integer such that any set of $N$ points in
$\mathbb{R}^{d}$ with no $d+s$ members on a hyperplane, contains
$\alpha_{d,s}(N)$ points in general position. Hence,
$\alpha_{d}(N)=\alpha_{d,2}(N)$. A simple argument of Erdős [8] shows that
$\alpha_{d,s}(N)\geq\Omega(N^{1/d})$ for fixed $d$ and $s$ (see Section 6, or
[5] for large $s$). In the other direction, following the arguments in our
proof of Theorem 1.1 with a slight modification, we show the following.
###### Theorem 1.2.
Let $d,s\geq 3$ be fixed integers. If $d$ is odd and
$\frac{2d+s-2}{2d+2s-2}<\frac{d-1}{d}$, then $\alpha_{d,s}(N)\leq
N^{\frac{1}{2}+o(1)}$. If $d$ is even and
$\frac{2d+s-2}{2d+2s-2}<\frac{d-2}{d-1}$, then $\alpha_{d,s}(N)\leq
N^{\frac{1}{2}+o(1)}$.
For example, when we fix $d=3$ and $s\geq 5$, we have $\alpha_{d,s}(N)\leq
N^{\frac{1}{2}+o(1)}$.
We also study the classical problem of determining the maximum number of
points selected from the grid $[n]^{d}$ such that no $k+2$ members lie on a
$k$-flat. The key ingredient of Theorem 1.1 mentioned above can be seen as a
supersaturation version of this Turán-type problem. When $k=1$, this is the
famous _no-three-in-line problem_ raised by Dudeney [7] in 1917: Is it true
that one can select $2n$ points in $[n]^{2}$ such that no three are collinear?
Clearly, $2n$ is an upper bound, since each of the $n$ vertical lines can
contain at most 2 of the selected points. For small values of $n$, many
authors have published solutions to this
problem obtaining the bound of $2n$ (e.g. see [9]), but for large $n$, the
best known general construction is due to Hall et al. [13] with slightly fewer
than $3n/2$ points.
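For very small $n$, Dudeney's question can be settled by brute force. The sketch below (our own helper, not from the paper; feasible only for tiny $n$, since it scans all $\binom{n^{2}}{2n}$ subsets) finds a $2n$-point configuration for $n=3$:

```python
from itertools import combinations

def collinear(p, q, r):
    # Exact integer collinearity test via the cross product.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def no_three_in_line(n):
    """Exhaustively search for 2n points of [n]^2 with no three collinear."""
    grid = [(x, y) for x in range(1, n + 1) for y in range(1, n + 1)]
    for subset in combinations(grid, 2 * n):
        if all(not collinear(a, b, c) for a, b, c in combinations(subset, 3)):
            return subset
    return None

sol = no_three_in_line(3)  # a 6-point solution exists for n = 3
```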
More generally, we let $a(d,k,r,n)$ denote the maximum number of points from
$[n]^{d}$ such that no $r$ points lie on a $k$-flat. Since $[n]^{d}$ can be
covered by $n^{d-k}$ many $k$-flats, we have the trivial upper bound
$a(d,k,r,n)\leq(r-1)n^{d-k}$. For certain fixed values of $d$, $k$, and $r$,
as $n$ tends to infinity, this bound is known to be asymptotically best
possible: many authors [22, 4, 16] observed that $a(d,d-1,d+1,n)=\Theta(n)$ by
considering the modular moment curve over a finite field $\mathbb{Z}_{p}$; in
[21], Pór and Wood proved that $a(3,1,3,n)=\Theta(n^{2})$; and very recently,
Sudakov and Tomon [25] showed that $a(d,k,r,n)=\Theta(n^{d-k})$ when $r>d^{k}$.
We shall focus on the case when $r=k+2$ and write $a(d,k,n):=a(d,k,k+2,n)$.
Surprisingly, Lefmann [16] (see also [15]) showed that $a(d,k,n)$ behaves much
differently than $\Theta(n^{d-k})$. In particular, he showed that
$a(d,k,n)\leq O\left(n^{\frac{d}{\lfloor(k+2)/2\rfloor}}\right).$
Our next result improves this upper bound when $k+2$ is congruent to 0 or 1
mod 4.
###### Theorem 1.3.
For fixed $d$ and $k$, as $n\to\infty$, we have
$a(d,k,n)\leq
O\left(n^{\frac{d}{2\lfloor(k+2)/4\rfloor}(1-\frac{1}{2\lfloor(k+2)/4\rfloor
d+1})}\right).$
For example, we have $a(4,2,n)\leq O(n^{\frac{16}{9}})$ while Lefmann’s bound
in [16] gives us $a(4,2,n)\leq O(n^{2})$, which coincides with the trivial
upper bound. In particular, Theorem 1.3 tells us that, if $4$ divides $k+2$,
then $a(d,k,n)$ only behaves like $\Theta(n^{d-k})$ when $d=k+1$. This is
quite interesting compared to the fact that $a(3,1,n)=\Theta(n^{2})$ proved in
[21]. Lastly, let us note that the current best lower bound for $a(d,k,n)$ is
also due to Lefmann [16], who showed that
$a(d,k,n)\geq\Omega\left(n^{\frac{d}{k+1}-k-\frac{k}{k+1}}\right)$.
For an integer $n>0$, we let $[n]=\\{1,\dots,n\\}$, and
$\mathbb{Z}_{n}=\\{0,1,\dots,n-1\\}$. For the sake of clarity, we
systematically omit floors and ceilings whenever they are not crucial. All
logarithms are in base two.
## 2 $(k+2)$-tuples of $[n]^{d}$ on a $k$-flat
In this section, we establish two lemmas that will be used in the proof of
Theorem 1.1.
Given a set $T$ of $k+2$ points in $\mathbb{R}^{d}$ that lie on a $k$-flat, we
say that $T$ is _degenerate_ if there is a subset $S\subset T$ of size $j$,
where $3\leq j\leq k+1$, such that $S$ lies on a $(j-2)$-flat. Otherwise, we
say that $T$ is _non-degenerate_. We establish a supersaturation lemma for
non-degenerate $(k+2)$-tuples of $[n]^{d}$.
###### Lemma 2.1.
For a real number $\gamma>0$ and fixed positive integers $d,k$ such that $k$
is even and $d-2\gamma>(k-1)(k+2)$, any subset $V\subset[n]^{d}$ of size
$n^{d-\gamma}$ spans at least $\Omega(n^{(k+1)d-(k+2)\gamma})$ non-degenerate
$(k+2)$-tuples that lie on a $k$-flat.
###### Proof.
Let $V\subset[n]^{d}$ such that $|V|=n^{d-\gamma}$. Set $r=\frac{k}{2}+1$ and
$E_{r}=\binom{V}{r}$ to be the collection of $r$-tuples of $V$. Notice that
the sum of an $r$-tuple from $V$ belongs to $[rn]^{d}$. For each
$v\in[rn]^{d}$, we define
$E_{r}(v)=\\{\\{v_{1},\dots,v_{r}\\}\in E_{r}:v_{1}+\dots+v_{r}=v\\}.$
Then for $T_{1},T_{2}\in E_{r}(v)$, where $T_{1}=\\{v_{1},\dots,v_{r}\\}$ and
$T_{2}=\\{u_{1},\dots,u_{r}\\}$, we have
$v_{1}+\dots+v_{r}=v=u_{1}+\dots+u_{r},$
which implies that $T_{1}\cup T_{2}$ lies on a common $k$-flat. Let
$E_{2r}=\bigcup_{v\in[rn]^{d}}\ \bigcup_{T_{1},T_{2}\in
E_{r}(v)}\\{T_{1},T_{2}\\}.$
Hence, for each $\\{T_{1},T_{2}\\}\in E_{2r}$, $T_{1}\cup T_{2}$ lies on a
$k$-flat. Moreover, by Jensen’s inequality, we have
$|E_{2r}|=\sum_{v\in[rn]^{d}}\binom{|E_{r}(v)|}{2}\geq(rn)^{d}\binom{\frac{\sum_{v}|E_{r}(v)|}{(rn)^{d}}}{2}=(rn)^{d}\binom{|E_{r}|/(rn)^{d}}{2}\geq\frac{|E_{r}|^{2}}{4(rn)^{d}}.$
Since $k$ and $d$ are fixed and $r=\frac{k}{2}+1$ and $|V|=n^{d-\gamma}$,
$|E_{r}|^{2}=\binom{|V|}{r}^{2}=\binom{|V|}{(k/2)+1}^{2}\geq\Omega(n^{(k+2)(d-\gamma)}).$
Combining the two inequalities above gives
$|E_{2r}|\geq\Omega(n^{(k+1)d-(k+2)\gamma}).$
We say that $\\{T_{1},T_{2}\\}\in E_{2r}$ is _good_ if $T_{1}\cap
T_{2}=\emptyset$, and the $(k+2)$-tuple $(T_{1}\cup T_{2})$ is non-degenerate.
Otherwise, we say that $\\{T_{1},T_{2}\\}$ is _bad_. In what follows, we will
show that at least half of the pairs (i.e. elements) in $E_{2r}$ are good. To
this end, we will need the following claim.
###### Claim 2.2.
If $\\{T_{1},T_{2}\\}\in E_{2r}$ is bad, then $T_{1}\cup T_{2}$ lies on a
$(k-1)$-flat.
###### Proof.
Write $T_{1}=\\{v_{1},\dots,v_{r}\\}$ and $T_{2}=\\{u_{1},\dots,u_{r}\\}$. Let
us consider the following cases.
_Case 1._ Suppose $T_{1}\cap T_{2}\neq\emptyset$. Then, without loss of
generality, there is an integer $j<r$ such that
$v_{1}+\dots+v_{j}=u_{1}+\dots+u_{j},$
where $v_{1},\dots,v_{j},u_{1},\dots,u_{j}$ are all distinct elements, and
$v_{t}=u_{t}$ for $t>j$. Thus $|T_{1}\cup T_{2}|=2j+(r-j)$. The $2j$ elements
above lie on a $(2j-2)$-flat. Adding the remaining $r-j$ points implies that
$T_{1}\cup T_{2}$ lies on a $(j-2+r)$-flat. Since $r=\frac{k}{2}+1$ and
$j\leq\frac{k}{2},$ $T_{1}\cup T_{2}$ lies on a $(k-1)$-flat.
_Case 2._ Suppose $T_{1}\cap T_{2}=\emptyset$. Then $T_{1}\cup T_{2}$ must be
degenerate, which means there is a subset $S\subset T_{1}\cup T_{2}$ of $j$
elements such that $S$ lies on a $(j-2)$-flat, for some $3\leq j\leq k+1$.
Without loss of generality, we can assume that $v_{1}\not\in S$. Hence,
$(T_{1}\cup T_{2})\setminus\\{v_{1}\\}$, which consists of the $j$ points of
$S$ together with $k+1-j$ further points, lies on a
$(j-2)+(k+1-j)=(k-1)$-flat. On the other hand, we have
$v_{1}=u_{1}+\dots+u_{r}-v_{2}-\dots-v_{r}.$
Hence, $v_{1}$ is in the affine hull of $(T_{1}\cup
T_{2})\setminus\\{v_{1}\\}$ which implies that $T_{1}\cup T_{2}$ lies on a
$(k-1)$-flat. ∎
We are now ready to prove the following claim.
###### Claim 2.3.
At least half of the pairs in $E_{2r}$ are good.
###### Proof.
For the sake of contradiction, suppose at least half of the pairs in $E_{2r}$
are bad. Let $H$ be the collection of all the $j$-flats spanned by subsets of
$V$ for all $j\leq k-1$. Notice that if $S\subset V$ spans a $j$-flat $h$,
then $h$ is also spanned by only $j+1$ elements from $S$. So we have
$|H|\leq\sum_{j=0}^{k-1}|V|^{j+1}\leq kn^{k(d-\gamma)}.$
For each bad pair $\\{T_{1},T_{2}\\}\in E_{2r}$, $T_{1}\cup T_{2}$ lies on a
$j$-flat from $H$ by Claim 2.2. By the pigeonhole principle, there is a
$j$-flat $h$ with $j\leq k-1$ such that at least
$\frac{|E_{2r}|/2}{|H|}\geq\frac{\Omega(n^{(k+1)d-(k+2)\gamma})}{2kn^{k(d-\gamma)}}=\Omega(n^{d-2\gamma})$
bad pairs from $E_{2r}$ have the property that their union lies in $h$. On the
other hand, since $h$ contains at most $n^{k-1}$ points from $[n]^{d}$, $h$
can correspond to at most $O(n^{(k-1)(k+2)})$ bad pairs from $E_{2r}$. Since
we assumed $d-2\gamma>(k-1)(k+2)$, we have a contradiction for $n$
sufficiently large. ∎
Each good pair $\\{T_{1},T_{2}\\}\in E_{2r}$ gives rise to a non-degenerate
$(k+2)$-tuple $T_{1}\cup T_{2}$ that lies on a $k$-flat. On the other hand,
any such $(k+2)$-tuple in $V$ will correspond to at most $\binom{k+2}{r}$ good
pairs in $E_{2r}$. Hence, by Claim 2.3, there are at least
$\left.\frac{|E_{2r}|}{2}\middle/\binom{k+2}{r}\right.=\Omega(n^{(k+1)d-(k+2)\gamma})$
non-degenerate $(k+2)$-tuples that lie on a $k$-flat, concluding the proof. ∎
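The engine of the proof above is the observation that two $r$-tuples with equal coordinate sums impose a single affine relation on their union, so the $2r=k+2$ points lie on a $(2r-2)$-flat, i.e. a $k$-flat. This is easy to check on examples; the Python sketch below (our own helper, using exact rational Gaussian elimination, not code from the paper) computes the dimension of the affine hull of a point set:

```python
from fractions import Fraction

def affine_dim(points):
    """Dimension of the affine hull: rank of the difference vectors,
    computed by exact Gaussian elimination over the rationals."""
    base = points[0]
    rows = [[Fraction(x - b) for x, b in zip(p, base)] for p in points[1:]]
    rank, col, d = 0, 0, len(base)
    while rank < len(rows) and col < d:
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col] != 0:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
        col += 1
    return rank

# k = 2, so r = k/2 + 1 = 2: two disjoint pairs in Z^4 with equal sums.
T1 = [(0, 0, 0, 0), (2, 2, 0, 0)]
T2 = [(1, 0, 1, 0), (1, 2, -1, 0)]   # (1,0,1,0) + (1,2,-1,0) = (2,2,0,0)
assert affine_dim(T1 + T2) <= 2      # the 4 points lie on a 2-flat
```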
In the other direction, we will use the following upper bound.
###### Lemma 2.4.
For a real number $\gamma>0$ and fixed positive integers $d,k,\ell$ with
$\ell<k+2$, suppose $U,V\subset[n]^{d}$ satisfy $|U|=\ell$ and
$|V|=n^{d-\gamma}$. Then $V$ contains at most $n^{(k+1-\ell)(d-\gamma)+k}$
non-degenerate $(k+2)$-tuples that lie on a $k$-flat and contain $U$.
###### Proof.
If $U$ spans a $j$-flat for some $j<\ell-1$, then by definition no
non-degenerate $(k+2)$-tuple contains $U$. Hence we can assume $U$ spans a
$(\ell-1)$-flat. Observe that a non-degenerate $(k+2)$-tuple $T$, which lies
on a $k$-flat and contains $U$, must contain a $(k+1)$-tuple
$T^{\prime}\subset T$ such that $T^{\prime}$ spans a $k$-flat and $U\subset
T^{\prime}$. Then there are at most $n^{(k+1-\ell)(d-\gamma)}$ ways to add
$k+1-\ell$ points to $U$ from $V$ to obtain such $T^{\prime}$. After
$T^{\prime}$ is determined, there are at most $n^{k}$ ways to add a final
point from the affine hull of $T^{\prime}$ to obtain $T$. Multiplying these
two counts concludes the proof. ∎
## 3 The container method: Proof of Theorem 1.1
In this section, we use the hypergraph container method to prove Theorem 1.1.
We follow the method outlined in [3]. Let
$\mathcal{H}=(V(\mathcal{H}),E(\mathcal{H}))$ denote a $(k+2)$-uniform
hypergraph. For any $U\subset V(\mathcal{H})$, its degree $\delta(U)$ is the
number of edges containing $U$. For each $\ell\in[k+2]$, we use
$\Delta_{\ell}(\mathcal{H})$ to denote the maximum $\delta(U)$ among all $U$
of size $\ell$. For a parameter $\tau>0$, we define the following quantity
$\Delta(\mathcal{H},\tau)=\frac{2^{\binom{k+2}{2}-1}|V(\mathcal{H})|}{(k+2)|E(\mathcal{H})|}\sum_{\ell=2}^{k+2}\frac{\Delta_{\ell}(\mathcal{H})}{\tau^{\ell-1}2^{\binom{\ell-1}{2}}}.$
Then we have the following hypergraph container lemma from [3], which is a
restatement of Corollary 3.6 in [24].
###### Lemma 3.1.
Let $\mathcal{H}$ be a $(k+2)$-uniform hypergraph and $0<\epsilon,\tau<1/2$.
Suppose that $\tau<1/(200\cdot(k+2)\cdot(k+2)!)$ and
$\Delta(\mathcal{H},\tau)\leq\epsilon/(12\cdot(k+2)!)$. Then there exists a
collection $\mathcal{C}$ of subsets (containers) of $V(\mathcal{H})$ such that
1. 1.
Every independent set in $\mathcal{H}$ is a subset of some $C\in\mathcal{C}$;
2. 2.
$\log|\mathcal{C}|\leq
1000\cdot(k+2)\cdot((k+2)!)^{3}\cdot|V(\mathcal{H})|\cdot\tau\cdot\log(1/\epsilon)\cdot\log(1/\tau)$;
3. 3.
For every $C\in\mathcal{C}$, the induced subgraph $\mathcal{H}[C]$ has at most
$\epsilon|E(\mathcal{H})|$ many edges.
The main result in this section is the following theorem.
###### Theorem 3.2.
Let $k,r$ be fixed integers such that $r\geq k\geq 2$ and $k$ is even. Then
for any $0<\alpha<1$, there are constants $c=c(\alpha,k,r)$ and
$d=d(\alpha,k,r)$ such that the following holds. For infinitely many values of
$N$, there is a set $V$ of $N$ points in $\mathbb{R}^{d}$ such that no $r+3$
members of $V$ lie on an $r$-flat, and every subset of $V$ of size
$cN^{\frac{r+2}{2(k+1)}+\alpha}$ contains $k+2$ members on a $k$-flat.
Before we prove Theorem 3.2, let us show that it implies Theorem 1.1. In
dimensions $d_{0}\geq 3$ where $d_{0}$ is odd, we apply Theorem 3.2 with
$k=r=d_{0}-1$ to obtain a point set $V$ in $\mathbb{R}^{d}$ with the property
that no $d_{0}+2$ members lie on a $(d_{0}-1)$-flat, and every subset of size
$cN^{\frac{1}{2}+\frac{1}{2d_{0}}+\alpha}$ contains $d_{0}+1$ members on a
$(d_{0}-1)$-flat. By projecting $V$ to a generic $d_{0}$-dimensional subspace
of $\mathbb{R}^{d}$, we obtain $N$ points in $\mathbb{R}^{d_{0}}$ with no
$d_{0}+2$ members on a common hyperplane, and no
$cN^{\frac{1}{2}+\frac{1}{2d_{0}}+\alpha}$ members in general position.
In dimensions $d_{0}\geq 4$ where $d_{0}$ is even, we apply Theorem 3.2 with
$k=d_{0}-2$ and $r=d_{0}-1$ to obtain a point set $V$ in $\mathbb{R}^{d}$ with
the property that no $d_{0}+2$ members lie on a $(d_{0}-1)$-flat, and every
subset of size $cN^{\frac{1}{2}+\frac{1}{d_{0}-1}+\alpha}$ contains $d_{0}$
members on a $(d_{0}-2)$-flat. By adding another point from this subset, we
obtain $d_{0}+1$ members on a $(d_{0}-1)$-flat. Hence, by projecting $V$ to a
generic $d_{0}$-dimensional subspace of $\mathbb{R}^{d}$, we obtain $N$ points in
$\mathbb{R}^{d_{0}}$ with no $d_{0}+2$ members on a common hyperplane, and no
$cN^{\frac{1}{2}+\frac{1}{d_{0}-1}+\alpha}$ members in general position. This
completes the proof of Theorem 1.1.
###### Proof of Theorem 3.2.
We set $d=d(\alpha,k,r)$ to be a sufficiently large integer depending on
$\alpha$, $k$, and $r$. Let $\mathcal{H}$ be the hypergraph with
$V(\mathcal{H})=[n]^{d}$ and $E(\mathcal{H})$ consists of non-degenerate
$(k+2)$-tuples $T$ such that $T$ lies on a $k$-flat. Let $C^{0}=[n]^{d}$,
$\mathcal{C}^{0}=\\{C^{0}\\}$, and $\mathcal{H}^{0}=\mathcal{H}$. In what
follows, we will apply the hypergraph container lemma to $\mathcal{H}^{0}$ to
obtain a family of containers $\mathcal{C}^{1}$. For each
$C_{j}^{1}\in\mathcal{C}^{1}$, we consider the induced hypergraph
$\mathcal{H}_{j}^{1}=\mathcal{H}[C^{1}_{j}]$, and we apply the hypergraph
container lemma to it. The collection of containers obtained from all
$\mathcal{H}_{j}^{1}$ will form another collection of containers
$\mathcal{C}^{2}$. We iterate this process until each container in
$\mathcal{C}^{i}$ is sufficiently small, and moreover, we will only produce a
small number of containers. As a final step, we apply the probabilistic method
to show the existence of the desired point set. We now flesh out the details
of this process.
We start by setting $C^{0}=[n]^{d},\mathcal{C}^{0}=\\{C^{0}\\}$, and set
$\mathcal{H}^{0}=\mathcal{H}[C^{0}]=\mathcal{H}$. Having obtained a collection
of containers $\mathcal{C}^{i}$, for each container
$C_{j}^{i}\in\mathcal{C}^{i}$ with $|C_{j}^{i}|\geq n^{\frac{k}{k+1}d+k}$, we
set $\mathcal{H}^{i}_{j}=\mathcal{H}[C_{j}^{i}]$. Let $\gamma=\gamma(i,j)$ be
defined by $|V(\mathcal{H}^{i}_{j})|=n^{d-\gamma}$. So,
$\gamma\leq\frac{d}{k+1}-k$. We set
$\tau=\tau(i,j)=n^{-\frac{k}{k+1}d+\gamma+\alpha}$ and
$\epsilon=\epsilon(i,j)=c_{1}n^{-\alpha}$, where $c_{1}=c_{1}(d,k)$ is a
sufficiently large constant depending on $d$ and $k$. Then we can verify the
following condition.
###### Claim 3.3.
$\Delta(\mathcal{H}^{i}_{j},\tau)\leq\epsilon/(12\cdot(k+2)!)$.
###### Proof.
Since $|V(\mathcal{H}^{i}_{j})|=n^{d-\gamma}$, $\gamma\leq\frac{d}{k+1}-k$,
and $d$ is sufficiently large, Lemma 2.1 implies that
$|E(\mathcal{H}^{i}_{j})|\geq c_{2}n^{(k+1)d-(k+2)\gamma}$ for some constant
$c_{2}=c_{2}(d,k)$. Hence, we have
$\frac{|V(\mathcal{H}^{i}_{j})|}{|E(\mathcal{H}^{i}_{j})|}\leq\frac{n^{d-\gamma}}{c_{2}n^{(k+1)d-(k+2)\gamma}}=\frac{1}{c_{2}n^{kd-(k+1)\gamma}}.$
On the other hand, by Lemma 2.4, we have
$\Delta_{\ell}(\mathcal{H}_{j}^{i})\leq n^{(d-\gamma)(k+1-\ell)+k}\text{\quad
for $\ell<k+2$},$
and obviously $\Delta_{k+2}(\mathcal{H}_{j}^{i})\leq 1$.
Applying these inequalities together with the definition of $\Delta$, we
obtain
$\displaystyle\Delta(\mathcal{H}^{i}_{j},\tau)$
$\displaystyle=\frac{2^{\binom{k+2}{2}-1}|V(\mathcal{H}^{i}_{j})|}{(k+2)|E(\mathcal{H}_{j}^{i})|}\sum_{\ell=2}^{k+2}\frac{\Delta_{\ell}(\mathcal{H}_{j}^{i})}{\tau^{\ell-1}2^{\binom{\ell-1}{2}}}$
$\displaystyle\leq\frac{c_{3}}{n^{kd-(k+1)\gamma}}\left(\sum_{\ell=2}^{k+1}\frac{n^{(k+1-\ell)(d-\gamma)+k}}{\tau^{\ell-1}}+\frac{1}{\tau^{k+1}}\right)$
$\displaystyle=\sum_{\ell=2}^{k+1}\frac{c_{3}}{\tau^{\ell-1}n^{(\ell-1)d-k-\ell\gamma}}+\frac{c_{3}}{\tau^{k+1}n^{kd-(k+1)\gamma}},$
for some constant $c_{3}=c_{3}(d,k)$. Let us remark that the summation above
is where we determined our $\tau$ and $\gamma$. In order to make the last term
small, we choose $\tau=n^{-\frac{k}{k+1}d+\gamma+\alpha}$. Having determined
$\tau$, in order for the first term in the summation to be small, we choose
$\gamma\leq\frac{d}{k+1}-k$.
By setting $\epsilon=c_{1}n^{-\alpha}$ with $c_{1}=c_{1}(d,k)$ sufficiently
large, we have
$\displaystyle\Delta(\mathcal{H}^{i}_{j},\tau)$ $\displaystyle\leq
c_{3}\left(\sum_{\ell=2}^{k+1}n^{-\frac{\ell-1}{k+1}d+\gamma+k-(\ell-1)\alpha}+n^{-(k+1)\alpha}\right)$
$\displaystyle\leq c_{3}kn^{-\alpha}+c_{3}n^{-(k+1)\alpha}$
$\displaystyle<\frac{\epsilon}{12(k+2)!}.$
This verifies the claimed condition. ∎
Given the condition above, we can apply Lemma 3.1 to $\mathcal{H}_{j}^{i}$
with chosen parameters $\tau$ and $\epsilon$. Hence we obtain a family of
containers $\mathcal{C}_{j}^{i+1}$ such that
$\displaystyle|\mathcal{C}_{j}^{i+1}|$ $\displaystyle\leq
2^{10^{3}(k+2)((k+2)!)^{3}|V(\mathcal{H}_{j}^{i})|\tau\log(1/\epsilon)\log(1/\tau)}
$\displaystyle\leq 2^{c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n},$
for some constant $c_{4}=c_{4}(d,k)$. In the other case where
$|C_{j}^{i}|<n^{\frac{k}{k+1}d+k}$, we just define
$\mathcal{C}_{j}^{i+1}=\\{C_{j}^{i}\\}$. Then, for each container
$C\in\mathcal{C}_{j}^{i+1}$, we have either $|C|<n^{\frac{k}{k+1}d+k}$ or
$|E(\mathcal{H}[C])|\leq\epsilon|E(\mathcal{H}_{j}^{i})|\leq\epsilon^{i}|E(\mathcal{H})|$.
After applying this procedure for each container in $\mathcal{C}^{i}$, we
obtain a new family of containers
$\mathcal{C}^{i+1}=\bigcup\mathcal{C}^{i}_{j}$ such that
$|\mathcal{C}^{i+1}|\leq|\mathcal{C}^{i}|2^{c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}\leq
2^{(i+1)c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}.$
Notice that the number of edges in $\mathcal{H}_{j}^{i}$ shrinks by a factor
of $c_{1}n^{-\alpha}$ whenever $i$ increases by one, while on the other hand,
Lemma 2.1 tells us that every large subset $C\subset[n]^{d}$ induces many
edges in $\mathcal{H}$. Hence, after at most $t\leq c_{5}/\alpha$ iterations,
for some constant $c_{5}=c_{5}(d,k)$, we obtain a collection of containers
$\mathcal{C}=\mathcal{C}^{t}$ such that: each container $C\in\mathcal{C}$
satisfies $|C|<n^{\frac{k}{k+1}d+k}$; every independent set of $\mathcal{H}$
is a subset of some $C\in\mathcal{C}$; and
$|\mathcal{C}|\leq 2^{(c_{5}/\alpha)c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}.$
Before we construct the desired point set, we make the following crude
estimate.
###### Claim 3.4.
The grid $[n]^{d}$ contains at most $O(n^{(r+1)d+2r})$ many $(r+3)$-tuples
that lie on an $r$-flat.
###### Proof.
Let $T$ be an arbitrary $(r+3)$-tuple that spans a $j$-flat. There are at most
$n^{(j+1)d}$ ways to choose a subset $T^{\prime}\subset T$ of size $j+1$ that
spans the affine hull of $T$. After this $T^{\prime}$ is determined, there are
at most $n^{(r+2-j)j}$ ways to add the remaining $r+2-j$ points from the
$j$-flat spanned by $T^{\prime}$. Then the total number of $(r+3)$-tuples that
lie on an $r$-flat is at most
$\sum_{j=1}^{r}n^{(j+1)d+(r+2-j)j}\leq\sum_{j=1}^{r}n^{(j+1)d+(r+2-j)r}\leq
rn^{(r+1)d+2r},$
since we can assume $d>r$. ∎
Now, we randomly select a subset of $[n]^{d}$ by keeping each point
independently with probability $p$. Let $S$ be the set of selected elements.
Then for each $(r+3)$-tuple $T$ in $S$ that lies on an $r$-flat, we delete one
point from $T$. We denote the resulting set of points by $S^{\prime}$. By the
claim above, the number of $(r+3)$-tuples in $[n]^{d}$ that lie on an $r$-flat
is at most $c_{6}n^{(r+1)d+2r}$ for some constant $c_{6}=c_{6}(r)$. Therefore,
$\mathbb{E}[|S^{\prime}|]\geq pn^{d}-c_{6}p^{r+3}n^{(r+1)d+2r}.$
By setting $p=(2c_{6})^{-\frac{1}{r+2}}n^{-\frac{r}{r+2}(d+2)}$, we have
$\mathbb{E}[|S^{\prime}|]\geq\frac{pn^{d}}{2}=\Omega(n^{\frac{2(d-r)}{r+2}}).$
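This choice of $p$ is the one that balances the two terms above (a standard alteration step we spell out, as the paper leaves it implicit): solving
$c_{6}p^{r+3}n^{(r+1)d+2r}=\tfrac{1}{2}pn^{d}$
for $p$ gives $p^{r+2}=(2c_{6})^{-1}n^{-r(d+2)}$, i.e.
$p=(2c_{6})^{-\frac{1}{r+2}}n^{-\frac{r}{r+2}(d+2)}$, and the exponent
simplifies via $d-\frac{r}{r+2}(d+2)=\frac{2(d-r)}{r+2}$.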
Finally, we set $m=(c_{7}/\alpha)n^{\frac{d}{k+1}+2\alpha}$ for some
sufficiently large constant $c_{7}=c_{7}(d,k,r)$. Let $X$ denote the number of
independent sets of size $m$ in $S^{\prime}$. Using the family of containers
$\mathcal{C}$, we have
$\displaystyle\mathbb{E}[X]$
$\displaystyle\leq|\mathcal{C}|\cdot\binom{n^{\frac{k}{k+1}d+k}}{m}p^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}\right)\left(\frac{en^{\frac{k}{k+1}d+k}p}{m}\right)^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}\right)\left(c_{8}\alpha\frac{n^{\frac{k}{k+1}d+k}\cdot
n^{-\frac{r}{r+2}(d+2)}}{n^{\frac{d}{k+1}+2\alpha}}\right)^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{\frac{d}{k+1}+\alpha}\log^{2}n}\right)\left(c_{8}\alpha
n^{\frac{2(k-r-1)d}{(k+1)(r+2)}+k-\frac{2r}{r+2}-2\alpha}\right)^{(c_{7}/\alpha)n^{\frac{d}{k+1}+2\alpha}},$
for some constant $c_{8}=c_{8}(d,k,r)$. Since $r\geq k$, $0<\alpha<1$, and $d$
is large, for $n$ sufficiently large, we have
$c_{8}\alpha n^{\frac{2(k-r-1)d}{(k+1)(r+2)}+k-\frac{2r}{r+2}-2\alpha}<1/2.$
Hence, we have $\mathbb{E}[X]=o(1)$ as $n$ tends to infinity. Notice that
$|S^{\prime}|$ is exponentially concentrated around its mean by Chernoff’s
inequality. Therefore, some realization of $S^{\prime}$ satisfies:
$|S^{\prime}|=N=\Omega(n^{2(d-r)/(r+2)})$; $S^{\prime}$ contains no
$(r+3)$-tuples on an $r$-flat; and $\mathcal{H}[S^{\prime}]$ does not contain
an independent set of size
$m=(c_{7}/\alpha)n^{\frac{d}{k+1}+2\alpha}\leq
cN^{\frac{r+2}{2(k+1)}+\frac{(r+2)r}{2(k+1)(d-r)}+\frac{r+2}{d}2\alpha}\leq
cN^{\frac{r+2}{2(k+1)}+\alpha},$
for some constant $c=c(\alpha,d,k,r)$. Here we assume $d$ is sufficiently
large so that
$\frac{(r+2)r}{2(k+1)(d-r)}+\frac{r+2}{d}2\alpha\leq\alpha.$
This completes the proof. ∎
## 4 No $d+s$ points on a hyperplane: Proof of Theorem 1.2
In this section, we prove Theorem 1.2. The proof is essentially the same as in
the previous section with a different choice of parameters. For the reader’s
convenience, we include the details here. We start by proving the following
theorem.
###### Theorem 4.1.
Let $k,r,s$ be fixed integers such that $r\geq k\geq 2$, $s\geq 2$, $k$ is
even, and $\frac{r+\frac{s-1}{2}}{r+s-1}<\frac{k}{k+1}$. Then for any
$0<\alpha<1$, there are constants $c=c(\alpha,k,r,s)$ and $d=d(\alpha,k,r,s)$
such that the following holds. For infinitely many values of $N$, there is a
set $V$ of $N$ points in $\mathbb{R}^{d}$ such that no $r+s$ members of $V$
lie on an $r$-flat, and every subset of $V$ of size $cN^{\frac{1}{2}+\alpha}$
contains $k+2$ members on a $k$-flat.
###### Proof.
Just as before, let $\mathcal{H}$ be the hypergraph with
$V(\mathcal{H})=[n]^{d}$ and $E(\mathcal{H})$ consists of non-degenerate
$(k+2)$-tuples $T$ such that $T$ lies on a $k$-flat. Let $x=x(k,r,s)$ be a
constant that will be determined later, and set
$C^{0}=[n]^{d},\mathcal{C}^{0}=\\{C^{0}\\}$, and
$\mathcal{H}^{0}=\mathcal{H}[C^{0}]=\mathcal{H}$. Having obtained a collection
of containers $\mathcal{C}^{i}$, for each container
$C_{j}^{i}\in\mathcal{C}^{i}$ with $|C_{j}^{i}|\geq n^{xd+k}$, we set
$\mathcal{H}^{i}_{j}=\mathcal{H}[C_{j}^{i}]$. Let $\gamma=\gamma(i,j)$ be
defined by $|V(\mathcal{H}^{i}_{j})|=n^{d-\gamma}$. So, $\gamma\leq d-xd-k$.
We set $\tau=\tau(i,j)=n^{-xd+\gamma+\alpha}$ and
$\epsilon=\epsilon(i,j)=c_{1}n^{-\alpha}$, where $c_{1}=c_{1}(d,k,x)$ is a
sufficiently large constant. We now make the following claim.
###### Claim 4.2.
If $x\leq\frac{k}{k+1}$, then
$\Delta(\mathcal{H}^{i}_{j},\tau)\leq\epsilon/(12\cdot(k+2)!)$.
###### Proof.
Since $|V(\mathcal{H}^{i}_{j})|=n^{d-\gamma}$, $\gamma\leq d-xd-k$, and $d$ is
sufficiently large, Lemma 2.1 implies that $|E(\mathcal{H}^{i}_{j})|\geq
c_{2}n^{(k+1)d-(k+2)\gamma}$ for some constant $c_{2}=c_{2}(d,k)$. Hence, we
have
$\frac{|V(\mathcal{H}^{i}_{j})|}{|E(\mathcal{H}^{i}_{j})|}\leq\frac{n^{d-\gamma}}{c_{2}n^{(k+1)d-(k+2)\gamma}}=\frac{1}{c_{2}n^{kd-(k+1)\gamma}}.$
On the other hand, by Lemma 2.4, we have
$\Delta_{\ell}(\mathcal{H}_{j}^{i})\leq n^{(d-\gamma)(k+1-\ell)+k}\text{\quad
for $\ell<k+2$},$
and obviously $\Delta_{k+2}(\mathcal{H}_{j}^{i})\leq 1$. Applying these
inequalities together with the definition of $\Delta$, we obtain
$\displaystyle\Delta(\mathcal{H}^{i}_{j},\tau)$
$\displaystyle=\frac{2^{\binom{k+2}{2}-1}|V(\mathcal{H}^{i}_{j})|}{(k+2)|E(\mathcal{H}_{j}^{i})|}\sum_{\ell=2}^{k+2}\frac{\Delta_{\ell}(\mathcal{H}_{j}^{i})}{\tau^{\ell-1}2^{\binom{\ell-1}{2}}}$
$\displaystyle\leq\frac{c_{3}}{n^{kd-(k+1)\gamma}}\left(\sum_{\ell=2}^{k+1}\frac{n^{(k+1-\ell)(d-\gamma)+k}}{\tau^{\ell-1}}+\frac{1}{\tau^{k+1}}\right)$
$\displaystyle=\sum_{\ell=2}^{k+1}\frac{c_{3}}{\tau^{\ell-1}n^{(\ell-1)d-k-\ell\gamma}}+\frac{c_{3}}{\tau^{k+1}n^{kd-(k+1)\gamma}},$
for some constant $c_{3}=c_{3}(d,k,x)$. Let us remark that the summation above
is where we determined our $\tau$ and $\gamma$. In order to make the last term
small, we set $\tau=n^{-xd+\gamma+\alpha}$, and recall that we assumed
$x\leq\frac{k}{k+1}$. Having determined $\tau$, in order for the first term in
the summation to be small, we choose $\gamma\leq d-xd-k$.
By setting $\epsilon=c_{1}n^{-\alpha}$ with $c_{1}=c_{1}(d,k,x)$ sufficiently
large, we have
$\displaystyle\Delta(\mathcal{H}^{i}_{j},\tau)$ $\displaystyle\leq
c_{3}\left(\sum_{\ell=2}^{k+1}n^{-(\ell-1)(1-x)d+k+\gamma-(\ell-1)\alpha}+n^{-(k-x(k+1))d-(k+1)\alpha}\right)$
$\displaystyle\leq c_{3}kn^{-\alpha}+c_{3}n^{-(k+1)\alpha}$
$\displaystyle<\frac{\epsilon}{12(k+2)!}.$
This verifies the claimed condition. ∎
Given the claim above, we can apply Lemma 3.1 to $\mathcal{H}_{j}^{i}$ with
the chosen parameters $\tau$ and $\epsilon$. Hence, we obtain a family of
containers $\mathcal{C}_{j}^{i+1}$ such that
$\displaystyle|\mathcal{C}_{j}^{i+1}|$ $\displaystyle\leq
2^{10^{3}(k+2)((k+2)!)^{3}|V(\mathcal{H}_{j}^{i})|\tau\log(1/\epsilon)\log(1/\tau)}
$\displaystyle\leq 2^{c_{4}n^{d-xd+\alpha}\log^{2}n},$
for some constant $c_{4}=c_{4}(d,k)$. In the other case, where
$|C_{j}^{i}|<n^{xd+k}$, we just define
$\mathcal{C}_{j}^{i+1}=\\{C_{j}^{i}\\}$. Then, for each container
$C\in\mathcal{C}_{j}^{i+1}$, we have either $|C|<n^{xd+k}$ or
$|E(\mathcal{H}[C])|\leq\epsilon|E(\mathcal{H}_{j}^{i})|\leq\epsilon^{i}|E(\mathcal{H})|$.
After applying this procedure for each container in $\mathcal{C}^{i}$, we
obtain a new family of containers
$\mathcal{C}^{i+1}=\bigcup\mathcal{C}^{i}_{j}$ such that
$|\mathcal{C}^{i+1}|\leq|\mathcal{C}^{i}|2^{c_{4}n^{d-xd+\alpha}\log^{2}n}\leq
2^{(i+1)c_{4}n^{d-xd+\alpha}\log^{2}n}.$
Notice that the number of edges in $\mathcal{H}_{j}^{i}$ shrinks by a factor
of $c_{1}n^{-\alpha}$ whenever $i$ increases by one, while on the other hand,
Lemma 2.1 tells us that every large subset $C\subset[n]^{d}$ induces many
edges in $\mathcal{H}$. Hence, after at most $t\leq c_{5}/\alpha$ iterations,
for some constant $c_{5}=c_{5}(d,k,x)$, we obtain a collection of containers
$\mathcal{C}=\mathcal{C}^{t}$ such that: each container $C\in\mathcal{C}$
satisfies $|C|<n^{xd+k}$; every independent set of $\mathcal{H}$ is a subset
of some $C\in\mathcal{C}$; and
$|\mathcal{C}|\leq 2^{(c_{5}/\alpha)c_{4}n^{d-xd+\alpha}\log^{2}n}.$
Now, we randomly select a subset of $[n]^{d}$ by keeping each point
independently with probability $p$. Let $S$ be the set of selected elements.
Then for each $(r+s)$-tuple $T$ in $S$ that lies on an $r$-flat, we delete one
point from $T$. We denote the resulting set of points by $S^{\prime}$. By the
proof of Claim 3.4, the number of $(r+s)$-tuples in $[n]^{d}$ that lie on an
$r$-flat is at most $c_{6}n^{(r+1)d+(s-1)r}$ for some constant
$c_{6}=c_{6}(r)$. Therefore,
$\mathbb{E}[|S^{\prime}|]\geq pn^{d}-c_{6}p^{r+s}n^{(r+1)d+(s-1)r}.$
By setting $p=(2c_{6})^{-\frac{1}{r+s-1}}n^{-\frac{r}{r+s-1}(d+s-1)}$, we have
$\mathbb{E}[|S^{\prime}|]\geq\frac{pn^{d}}{2}=\Omega(n^{\frac{(s-1)(d-r)}{r+s-1}}).$
Finally, we set $m=(c_{7}/\alpha)n^{d-xd+2\alpha}$ for some sufficiently large
constant $c_{7}=c_{7}(d,k,r,x)$. Let $X$ denote the number of independent sets
of size $m$ in $S^{\prime}$. Using the family of containers $\mathcal{C}$, we
have
$\displaystyle\mathbb{E}[X]$
$\displaystyle\leq|\mathcal{C}|\cdot\binom{n^{xd+k}}{m}p^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{d-xd+\alpha}\log^{2}n}\right)\left(\frac{en^{xd+k}p}{m}\right)^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{d-xd+\alpha}\log^{2}n}\right)\left(c_{8}\alpha\frac{n^{xd+k}\cdot
n^{-\frac{r}{r+s-1}(d+s-1)}}{n^{d-xd+2\alpha}}\right)^{m}$
$\displaystyle\leq\left(2^{(c_{5}/\alpha)c_{4}n^{d-xd+\alpha}\log^{2}n}\right)\left(c_{8}\alpha
n^{2xd+k-2\alpha-\frac{r(s-1)}{r+s-1}-\frac{2r+s-1}{r+s-1}d}\right)^{(c_{7}/\alpha)n^{d-xd+2\alpha}},$
for some constant $c_{8}=c_{8}(d,k,r,x)$. Since $r\geq k$, $0<\alpha<1$, and
$d$ is large, for $n$ sufficiently large, we have
$c_{8}\alpha
n^{2xd+k-2\alpha-\frac{r(s-1)}{r+s-1}-\frac{2r+s-1}{r+s-1}d}<1/2,$
as long as
$x\leq\frac{r+\frac{s-1}{2}}{r+s-1}+\frac{1}{2d}\frac{r(s-1)}{r+s-1}-\frac{k}{2d}.$
We now set $x(k,r,s)$ equal to the right-hand side of the inequality above.
Moreover, using our assumption that
$\frac{r+\frac{s-1}{2}}{r+s-1}<\frac{k}{k+1}$, we have $x\leq\frac{k}{k+1}$
for $d=d(\alpha,k,r,s)$ sufficiently large, thus satisfying the condition of
Claim 4.2.
Hence, we have $\mathbb{E}[X]=o(1)$ as $n$ tends to infinity. Notice that
$|S^{\prime}|$ is exponentially concentrated around its mean by Chernoff’s
inequality. Therefore, some realization of $S^{\prime}$ satisfies:
$|S^{\prime}|=N=\Omega(n^{(s-1)(d-r)/(r+s-1)})$, $S^{\prime}$ contains no
$(r+s)$-tuples on an $r$-flat, and $\mathcal{H}[S^{\prime}]$ does not contain
an independent set of size
$m=(c_{7}/\alpha)n^{d-xd+2\alpha}\leq
cN^{\frac{1}{2}+\frac{k/2}{\frac{(s-1)(d-r)}{r+s-1}}+\frac{2\alpha}{\frac{(s-1)(d-r)}{r+s-1}}}\leq
N^{\frac{1}{2}+\alpha},$
for some constant $c=c(\alpha,d,k,r,x)$. Here we assume $d$ is sufficiently
large so that
$\frac{k/2}{\frac{(s-1)(d-r)}{r+s-1}}+\frac{2\alpha}{\frac{(s-1)(d-r)}{r+s-1}}\leq\alpha.$
This completes the proof. ∎
###### Proof of Theorem 1.2.
In dimensions $d_{0}\geq 3$, where $d_{0}$ is odd, we obtain an upper bound
for $\alpha_{d_{0},s_{0}}(N)$, where
$\frac{2d_{0}+s_{0}-2}{2d_{0}+2s_{0}-2}<\frac{d_{0}-1}{d_{0}},$
by applying Theorem 4.1 with $k=r=d_{0}-1$ and $s=s_{0}+1$. Hence, we have
$\frac{r+\frac{s-1}{2}}{r+s-1}<\frac{k}{k+1},$
and therefore, we obtain a point set $V$ in $\mathbb{R}^{d}$ with the property
that no $d_{0}+s_{0}$ members lie on a $(d_{0}-1)$-flat, and every subset of
size $cN^{\frac{1}{2}+\alpha}$ contains $d_{0}+1$ members on a
$(d_{0}-1)$-flat. By projecting $V$ to a generic $d_{0}$-dimensional subspace
of $\mathbb{R}^{d}$, we obtain $N$ points in $\mathbb{R}^{d_{0}}$ with no
$d_{0}+s_{0}$ members on a common hyperplane, and no $cN^{\frac{1}{2}+\alpha}$
members in general position.
In dimensions $d_{0}\geq 4$, where $d_{0}$ is even, we obtain an upper bound
for $\alpha_{d_{0},s_{0}}(N)$, where
$\frac{2d_{0}+s_{0}-2}{2d_{0}+2s_{0}-2}<\frac{d_{0}-2}{d_{0}-1},$
by applying Theorem 4.1 with $k=d_{0}-2$, $r=d_{0}-1$, and $s=s_{0}+1$. Hence,
we have
$\frac{r+\frac{s-1}{2}}{r+s-1}<\frac{k}{k+1},$
and therefore, we obtain a point set $V$ in $\mathbb{R}^{d}$ with the property
that no $d_{0}+s_{0}$ members lie on a $(d_{0}-1)$-flat, and every subset of size
$cN^{\frac{1}{2}+\alpha}$ contains $d_{0}$ members on a $(d_{0}-2)$-flat. By
adding another point from this subset, we obtain $d_{0}+1$ members on a
$(d_{0}-1)$-flat. Hence, by projecting $V$ to a generic $d_{0}$-dimensional
subspace of $\mathbb{R}^{d}$, we obtain $N$ points in $\mathbb{R}^{d_{0}}$
with no $d_{0}+s_{0}$ members on a common hyperplane, and no
$cN^{\frac{1}{2}+\alpha}$ members in general position. This completes the
proof of Theorem 1.2. ∎
## 5 Avoiding non-trivial solutions: Proof of Theorem 1.3
In this section, we will give a proof of Theorem 1.3. Let $V\subset[n]^{d}$
be such that there are no $k+2$ points that lie on a $k$-flat. In [16], Lefmann
showed that $|V|\leq O\left(n^{\frac{d}{\lfloor(k+2)/2\rfloor}}\right)$. To
see this, assume that $k$ is even and consider all elements of the form
$v_{1}+\dots+v_{\frac{k}{2}+1}$, where $v_{i}\neq v_{j}$ and $v_{i}\in V$. All
of these elements are distinct, since otherwise we would have $k+2$ points on
a $k$-flat. In other words, the equation
$\left(\textbf{x}_{1}+\dots+\textbf{x}_{\frac{k}{2}+1}\right)-\left(\textbf{x}_{\frac{k}{2}+2}+\dots+\textbf{x}_{k+2}\right)=\textbf{0},$
does not have a solution with
$\\{\textbf{x}_{1},\dots,\textbf{x}_{\frac{k}{2}+1}\\}$ and
$\\{\textbf{x}_{\frac{k}{2}+2},\dots,\textbf{x}_{k+2}\\}$ being two different
$(\frac{k}{2}+1)$-tuples of $V$. Therefore, we have
$\binom{|V|}{\frac{k}{2}+1}\leq(kn)^{d}$, and this implies Lefmann’s bound.
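Spelling out the last step: each such sum lies in $[kn]^{d}$, and since $\binom{|V|}{\frac{k}{2}+1}\geq\left(\frac{|V|}{\frac{k}{2}+1}\right)^{\frac{k}{2}+1}$, we obtain
$|V|\leq\left(\frac{k}{2}+1\right)(kn)^{\frac{d}{k/2+1}}=O\left(n^{\frac{d}{\lfloor(k+2)/2\rfloor}}\right),$
using that $\lfloor(k+2)/2\rfloor=\frac{k}{2}+1$ for even $k$.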
More generally, let us consider the equation
$c_{1}\textbf{x}_{1}+c_{2}\textbf{x}_{2}+\dots+c_{r}\textbf{x}_{r}=\textbf{0},$
(5.1)
with constant coefficients $c_{i}\in\mathbb{Z}$ and $\sum_{i}c_{i}=0$. Here,
the variables $\textbf{x}_{i}$ take values in $\mathbb{Z}^{d}$. A solution
$(\textbf{x}_{1},\dots,\textbf{x}_{r})$ to equation (5.1) is called _trivial_
if there is a partition
$\mathcal{P}:[r]=\mathcal{I}_{1}\cup\dots\cup\mathcal{I}_{t}$, such that
$\textbf{x}_{j}=\textbf{x}_{\ell}$ if and only if $j,\ell\in\mathcal{I}_{i}$,
and $\sum_{j\in\mathcal{I}_{i}}c_{j}=0$ for all $i\in[t]$. In other words,
being trivial means that, after combining like terms, the coefficient of each
$\textbf{x}_{i}$ becomes zero. Otherwise, we say that the solution
$(\textbf{x}_{1},\dots,\textbf{x}_{r})$ is _non-trivial_. A natural extremal
problem is to determine the maximum size of a set $A\subset[n]^{d}$ with only
trivial solutions to (5.1). When $d=1$, this is a classical problem in
additive number theory, and we refer the interested reader to [23, 19, 17, 6].
By combining the arguments of Cilleruelo and Timmons [6] and Jia [14], we
establish the following theorem.
###### Theorem 5.1.
Let $d,r$ be fixed positive integers. Suppose $V\subset[n]^{d}$ has only
trivial solutions to each equation of the form
$c_{1}\left((\textbf{x}_{1}+\dots+\textbf{x}_{r})-(\textbf{x}_{r+1}+\dots+\textbf{x}_{2r})\right)=c_{2}\left((\textbf{x}_{2r+1}+\dots+\textbf{x}_{3r})-(\textbf{x}_{3r+1}+\dots+\textbf{x}_{4r})\right),$
(5.2)
for integers $c_{1},c_{2}$ such that $1\leq c_{1},c_{2}\leq
n^{\frac{d}{2rd+1}}$. Then we have
$|V|\leq O\left(n^{\frac{d}{2r}\left(1-\frac{1}{2rd+1}\right)}\right).$
Notice that Theorem 1.3 follows from Theorem 5.1. Indeed, when $k+2$ is
divisible by $4$, we set $r=(k+2)/4$. If $V\subset[n]^{d}$ contains $k+2$
points $\\{v_{1},\dots,v_{k+2}\\}$ that form a non-trivial solution to (5.2)
with $\textbf{x}_{i}=v_{i}$, then $\\{v_{1},\dots,v_{k+2}\\}$ must lie on a
$k$-flat. Hence, when $k+2$ is divisible by $4$, we have
$a(d,k,n)\leq
O\left(n^{\frac{d}{(k+2)/2}\left(1-\frac{1}{(k+2)d/2+1}\right)}\right).$
Since we have $a(d,k,n)<a(d,k-1,n)$, this implies that for all $k\geq 2$, we
have
$a(d,k,n)\leq
O\left(n^{\frac{d}{2\lfloor(k+2)/4\rfloor}\left(1-\frac{1}{2\lfloor(k+2)/4\rfloor
d+1}\right)}\right).$
In the proof of Theorem 5.1, we need the following well-known lemma (see e.g.
[6, Lemma 2.1] and [23, Theorem 4.1]). For $U,T\subset\mathbb{Z}^{d}$ and
$x\in\mathbb{Z}^{d}$, we define
$\Phi_{U-T}(x)=\\{(u,t):u-t=x,u\in U,t\in T\\}.$
###### Lemma 5.2.
For finite sets $U,T\subset\mathbb{Z}^{d}$, we have
$\frac{(|U||T|)^{2}}{|U+T|}\leq\sum_{x\in\mathbb{Z}^{d}}|\Phi_{U-U}(x)|\cdot|\Phi_{T-T}(x)|.$
###### Proof of Theorem 5.1.
Let $d$, $r$, and $V$ be as given in the hypothesis. Let $m\geq 1$ be an
integer that will be determined later. We define
$S_{r}=\\{v_{1}+\dots+v_{r}:v_{i}\in V,v_{i}\neq v_{j}\\},$
and a function
$\sigma:\binom{V}{r}\rightarrow S_{r},\ \\{v_{1},\dots,v_{r}\\}\mapsto
v_{1}+\dots+v_{r}.$
Notice that $\sigma$ is a bijection. Indeed, suppose on the contrary that
$v_{1}+\dots+v_{r}=v^{\prime}_{1}+\dots+v^{\prime}_{r}$
for two different $r$-tuples in $V$. Then by setting
$(\textbf{x}_{1},\dots,\textbf{x}_{r})=(v_{1},\dots,v_{r})$,
$(\textbf{x}_{r+1},\dots,\textbf{x}_{2r})=(v^{\prime}_{1},\dots,v^{\prime}_{r})$,
$(\textbf{x}_{2r+1},\dots,\textbf{x}_{3r})=(\textbf{x}_{3r+1},\dots,\textbf{x}_{4r})$
arbitrarily, and $c_{1}=c_{2}=1$, we obtain a non-trivial solution to (5.2),
which is a contradiction. In particular, we have $|S_{r}|=\binom{|V|}{r}$.
For $j\in[m]$ and $w\in\mathbb{Z}_{j}^{d}$, we let
$U_{j,w}=\\{u\in\mathbb{Z}^{d}:ju+w\in S_{r}\\}.$
Notice that for fixed $j\in[m]$, we have
$\sum_{w\in\mathbb{Z}_{j}^{d}}|U_{j,w}|=\sum_{w\in\mathbb{Z}_{j}^{d}}|\\{v\in
S_{r}:v\equiv w\text{ mod $j$}\\}|=|S_{r}|.$
Applying Jensen’s inequality to the above, we have
$\sum_{w\in\mathbb{Z}_{j}^{d}}|U_{j,w}|^{2}\geq|S_{r}|^{2}/j^{d}.$ (5.3)
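Explicitly, (5.3) follows from the mean-square (Jensen) bound applied to the $j^{d}$ summands:
$\sum_{w\in\mathbb{Z}_{j}^{d}}|U_{j,w}|^{2}\geq\frac{1}{j^{d}}\left(\sum_{w\in\mathbb{Z}_{j}^{d}}|U_{j,w}|\right)^{2}=\frac{|S_{r}|^{2}}{j^{d}}.$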
For $i\geq 0$, we define
$\Phi^{i}_{U_{j,w}-U_{j,w}}(x)=\\{(u_{1},u_{2})\in\Phi_{U_{j,w}-U_{j,w}}(x):|\sigma^{-1}(ju_{1}+w)\cap\sigma^{-1}(ju_{2}+w)|=i\\}.$
These sets clearly form a partition of $\Phi_{U_{j,w}-U_{j,w}}(x)$.
We also make the following claims.
###### Claim 5.3.
For a fixed $x\in\mathbb{Z}^{d}$, we have
$\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}|\Phi^{0}_{U_{j,w}-U_{j,w}}(x)|\leq
1.$
###### Proof.
For the sake of contradiction, suppose the summation above is at least two.
Then there exist $(u_{1},u_{2})\in\Phi^{0}_{U_{j,w}-U_{j,w}}(x)$ and
$(u_{3},u_{4})\in\Phi^{0}_{U_{j^{\prime},w^{\prime}}-U_{j^{\prime},w^{\prime}}}(x)$
such that either $(u_{1},u_{2})\neq(u_{3},u_{4})$ or
$(j,w)\neq(j^{\prime},w^{\prime})$.
Let $s_{1},s_{2},s_{3},s_{4}\in S_{r}$ such that $s_{1}=ju_{1}+w$,
$s_{2}=ju_{2}+w$, $s_{3}=j^{\prime}u_{3}+w^{\prime}$,
$s_{4}=j^{\prime}u_{4}+w^{\prime}$ and write
$\sigma^{-1}(s_{i})=\\{v_{i,1},\dots,v_{i,r}\\}$. Notice that
$u_{1}-u_{2}=x=u_{3}-u_{4}$. Putting these equations together gives us
$j^{\prime}((v_{1,1}+\dots+v_{1,r})-(v_{2,1}+\dots+v_{2,r}))=j((v_{3,1}+\dots+v_{3,r})-(v_{4,1}+\dots+v_{4,r})).$
(5.4)
It suffices to show that (5.4) can be seen as a non-trivial solution to (5.2).
The proof now falls into the following cases.
_Case 1._ Suppose $j\neq j^{\prime}$. Without loss of generality we can assume
$j^{\prime}>j$. Notice that $(u_{1},u_{2})\in\Phi^{0}_{U_{j,w}-U_{j,w}}(x)$
implies
$\\{v_{1,1},\dots,v_{1,r}\\}\cap\\{v_{2,1},\dots,v_{2,r}\\}=\emptyset.$
Then after combining like terms in (5.4), the coefficient of $v_{1,1}$ is at
least $j^{\prime}-j$, which means this is indeed a non-trivial solution to
(5.2).
_Case 2._ Suppose $j=j^{\prime}$, then we must have $s_{1}\neq s_{3}$. Indeed,
if $s_{1}=s_{3}$, we must have $w=w^{\prime}$ (as $s_{1}$ modulo $j$ equals
$s_{3}$ modulo $j^{\prime}$) and $s_{2}=s_{4}$ (as
$j^{\prime}(s_{1}-s_{2})=j(s_{3}-s_{4})$). This is a contradiction to either
$(u_{1},u_{2})\neq(u_{3},u_{4})$ or $(j,w)\neq(j^{\prime},w^{\prime})$.
Given $s_{1}\neq s_{3}$, we can assume, without loss of generality,
$v_{1,1}\not\in\\{v_{3,1},\dots,v_{3,r}\\}$. Again, we have
$\\{v_{1,1},\dots,v_{1,r}\\}\cap\\{v_{2,1},\dots,v_{2,r}\\}=\emptyset$. Hence,
after combining like terms in (5.4), the coefficient of $v_{1,1}$ is
positive and we have a non-trivial solution to (5.2).∎
###### Claim 5.4.
For a finite set $T\subset\mathbb{Z}^{d}$, and fixed integers $i,j\geq 1$, we
have
$\sum_{w\in\mathbb{Z}_{j}^{d}}\sum_{x\in\mathbb{Z}^{d}}|\Phi^{i}_{U_{j,w}-U_{j,w}}(x)|\cdot|\Phi_{T-T}(x)|\leq|V|^{2r-i}|T|.$
###### Proof.
The summation on the left-hand side counts all (ordered) quadruples
$(u_{1},u_{2},t_{1},t_{2})$ such that
$(u_{1},u_{2})\in\Phi^{i}_{U_{j,w}-U_{j,w}}(t_{1}-t_{2})$. For each such
quadruple, let $s_{1},s_{2}\in S_{r}$ such that
$s_{1}=ju_{1}+w\text{\quad and\quad}s_{2}=ju_{2}+w.$
There are at most $|V|^{2r-i}$ ways to choose a pair $(s_{1},s_{2})$
satisfying $|\sigma^{-1}(s_{1})\cap\sigma^{-1}(s_{2})|=i$. Such a pair
$(s_{1},s_{2})$ determines $(u_{1},u_{2})$ uniquely. Moreover, $(s_{1},s_{2})$
also determines the quantity
$t_{1}-t_{2}=u_{1}-u_{2}=\frac{s_{1}-w}{j}-\frac{s_{2}-w}{j}=\frac{1}{j}(s_{1}-s_{2}).$
After such a pair $(s_{1},s_{2})$ is chosen, there are at most $|T|$ ways to
choose $t_{1}$, and this also determines $t_{2}$. Multiplying these counts
proves the claim.∎
Now, we set $T=\mathbb{Z}_{\ell}^{d}$ for some integer $\ell$ to be determined
later. Notice that $U_{j,w}+T\subset\\{0,1,\dots,\lfloor
rn/j\rfloor+\ell-1\\}^{d}$, which implies
$|U_{j,w}+T|\leq(rn/j+\ell)^{d}.$ (5.5)
By Lemma 5.2, we have
$\frac{|U_{j,w}|^{2}|T|^{2}}{|U_{j,w}+T|}\leq\sum_{x\in\mathbb{Z}^{d}}|\Phi_{U_{j,w}-U_{j,w}}(x)|\cdot|\Phi_{T-T}(x)|.$
Summing over all $j\in[m]$ and $w\in\mathbb{Z}_{j}^{d}$, and using Claims 5.3
and 5.4, we can compute
$\displaystyle\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}\frac{|U_{j,w}|^{2}|T|^{2}}{|U_{j,w}+T|}$
$\displaystyle\leq\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}\sum_{x\in\mathbb{Z}^{d}}|\Phi_{U_{j,w}-U_{j,w}}(x)|\cdot|\Phi_{T-T}(x)|$
$\displaystyle=\sum_{x\in\mathbb{Z}^{d}}\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}\left(|\Phi^{0}_{U_{j,w}-U_{j,w}}(x)|+\sum_{i=1}^{r}|\Phi^{i}_{U_{j,w}-U_{j,w}}(x)|\right)|\Phi_{T-T}(x)|$
$\displaystyle\leq\sum_{x\in\mathbb{Z}^{d}}|\Phi_{T-T}(x)|\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}|\Phi^{0}_{U_{j,w}-U_{j,w}}(x)|+\sum_{j\in[m]}\sum_{i=1}^{r}|V|^{2r-i}\ell^{d}$
$\displaystyle\leq\sum_{x\in\mathbb{Z}^{d}}|\Phi_{T-T}(x)|+\sum_{j\in[m]}\sum_{i=1}^{r}|V|^{2r-i}\ell^{d}$
$\displaystyle\leq\ell^{2d}+rm|V|^{2r-1}\ell^{d}.$
On the other hand, using (5.3) and (5.5), we can compute
$\displaystyle\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}\frac{|U_{j,w}|^{2}|T|^{2}}{|U_{j,w}+T|}$
$\displaystyle\geq\sum_{j\in[m]}\sum_{w\in\mathbb{Z}_{j}^{d}}\frac{|U_{j,w}|^{2}\ell^{2d}}{(rn/j+\ell)^{d}}$
$\displaystyle\geq\sum_{j\in[m]}\frac{|S_{r}|^{2}\ell^{2d}}{j^{d}(rn/j+\ell)^{d}}$
$\displaystyle=\sum_{j\in[m]}\frac{|S_{r}|^{2}\ell^{2d}}{(rn+j\ell)^{d}}$
$\displaystyle\geq\frac{m|S_{r}|^{2}\ell^{2d}}{(rn+m\ell)^{d}}.$
Combining the two inequalities above gives us
$\displaystyle\frac{m|S_{r}|^{2}\ell^{2d}}{(rn+m\ell)^{d}}\leq\ell^{2d}+rm|V|^{2r-1}\ell^{d}$
$\displaystyle\implies$
$\displaystyle|S_{r}|^{2}\leq\frac{(rn+m\ell)^{d}}{m}+r|V|^{2r-1}\frac{(rn+m\ell)^{d}}{\ell^{d}}.$
By setting $m=n^{\frac{d}{2rd+1}}$ and $\ell=n^{1-\frac{d}{2rd+1}}$, we get
$\binom{|V|}{r}^{2}=|S_{r}|^{2}\leq
cn^{d-\frac{d}{2rd+1}}+c|V|^{2r-1}n^{\frac{d^{2}}{2rd+1}},$
for some constant $c$ depending only on $d$ and $r$. Solving this inequality
for $|V|$, we obtain
$|V|=O\left(n^{\frac{d}{2r}\left(1-\frac{1}{2rd+1}\right)}\right),$
completing the proof.∎
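We remark that, since $\binom{|V|}{r}\geq(|V|/r)^{r}$, the two terms on the right-hand side of the final inequality yield the same exponent: the first term gives $|V|\leq O\left(n^{\frac{1}{2r}\left(d-\frac{d}{2rd+1}\right)}\right)$, while the second, after dividing by $|V|^{2r-1}$, gives
$|V|\leq O\left(n^{\frac{d^{2}}{2rd+1}}\right)=O\left(n^{\frac{d}{2r}\cdot\frac{2rd}{2rd+1}}\right)=O\left(n^{\frac{d}{2r}\left(1-\frac{1}{2rd+1}\right)}\right).$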
## 6 Concluding remarks
1\. It is easy to see that $\alpha_{d,s}(N)\geq\Omega(N^{1/d})$ for any fixed
$d,s\geq 2$. Let $S$ be a set consisting of $N$ points in $\mathbb{R}^{d}$
with no $d+s$ members on a hyperplane. Suppose $V$ is a maximal subset of $S$
in general position. Then $V$ generates at most $\binom{|V|}{d}$ hyperplanes,
and each of them covers at most $s$ points from $S\setminus V$. Hence we have
the inequality
$s\binom{|V|}{d}+|V|\geq|S|=N,$
which justifies the claimed lower bound of $\alpha_{d,s}(N)$.
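Concretely, since $\binom{|V|}{d}\leq|V|^{d}$, the displayed inequality gives $s|V|^{d}+|V|\geq N$, so
$|V|\geq\min\left\{\frac{N}{2},\left(\frac{N}{2s}\right)^{1/d}\right\}=\Omega(N^{1/d}).$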
###### Problem 6.1.
Are there fixed integers $d,s\geq 3$ such that $\alpha_{d,s}(N)\leq
o(N^{\frac{1}{2}})$?
2\. We call a subset $V\subset[n]^{d}$ an _$m$-fold $B_{g}$-set_ if $V$ only
contains trivial solutions to the equations
$c_{1}\textbf{x}_{1}+c_{2}\textbf{x}_{2}+\dots+c_{g}\textbf{x}_{g}=c_{1}\textbf{x}^{\prime}_{1}+c_{2}\textbf{x}^{\prime}_{2}+\dots+c_{g}\textbf{x}^{\prime}_{g},$
with constant coefficients $c_{i}\in[m]$. We call $1$-fold $B_{g}$-sets simply
_$B_{g}$-sets_. By counting distinct sums, we have an upper bound $|V|\leq
O(n^{\frac{d}{g}})$ for any $B_{g}$-set $V\subset[n]^{d}$.
Our Theorem 5.1 can be interpreted as follows: by letting $m$ grow as a
suitable polynomial in $n$, we obtain an upper bound for $m$-fold
$B_{g}$-sets with $g$ even that improves on the trivial $O(n^{\frac{d}{g}})$
bound by a polynomial factor. We believe this phenomenon should also hold
without the parity condition on $g$.
## References
* [1] J. Balogh, R. Morris, and W. Samotij, Independent sets in hypergraphs, _Journal of the American Mathematical Society_ 28 (2015), 669–709.
* [2] J. Balogh, R. Morris, W. Samotij, The method of hypergraph containers, in _Proceedings of the International Congress of Mathematicians: Rio de Janeiro 2018_ , pp. 3059–3092, 2018.
* [3] J. Balogh, J. Solymosi, On the number of points in general position in the plane, _Discrete Analysis_ 16 (2018), 20pp.
* [4] P. Brass, C. Knauer, On counting point-hyperplane incidences, _Computational Geometry_ 25 (2003), 13–20.
* [5] J. Cardinal, C.D. Tóth, D. Wood, General position subsets and independent hyperplanes in $d$-space, _Journal of Geometry_ 108 (2017), 33–43.
* [6] J. Cilleruelo, C. Timmons, $k$-fold Sidon sets, _Electronic Journal of Combinatorics_ 21 (2014), 4–12.
* [7] H.E. Dudeney, _Amusements in Mathematics_ , Nelson, London 1917.
* [8] P. Erdős, On some metric and combinatorial geometric problems, _Discrete Mathematics_ 60 (1986), 147–153.
* [9] A. Flammenkamp, Progress in the no-three-in-line problem, II, _Journal of Combinatorial Theory, Series A_ 81 (1998), 108–113.
* [10] Z. Füredi, Maximal independent subsets in Steiner systems and in planar sets, _SIAM Journal on Discrete Mathematics_ 4 (1991), 196–199.
* [11] H. Furstenberg, Y. Katznelson, A density version of the Hales-Jewett theorem for $k=3$, _Discrete Mathematics_ 75 (1989), 227–241.
* [12] H. Furstenberg, Y. Katznelson, A density version of the Hales-Jewett theorem, _Journal d’Analyse Mathématique_ 57 (1991), 64–119.
* [13] R.R. Hall, T.H. Jackson, A. Sudbery, K. Wild, Some advances in the no-three-in-line problem, _Journal of Combinatorial Theory, Series A_ 18 (1975), 336–341.
* [14] X.D. Jia, On finite Sidon sequences, _Journal of Number Theory_ 49 (1994), 246-249.
* [15] H. Lefmann, No $\ell$ Grid-Points in Spaces of Small Dimension, in _Proceedings of International Conference on Algorithmic Applications in Management_ , pp. 259–270, 2008.
* [16] H. Lefmann, Extensions of the No-Three-In-Line problem, preprint. www.tu-chemnitz.de/informatik/ThIS/downloads/publications/lefmann_no_three_submitted.pdf.
* [17] F. Lazebnik, J. Verstraëte, On hypergraphs of girth five, _Electronic Journal of Combinatorics_ 10 (2003), #R25.
* [18] L. Milićević, Sets in almost general position, _Combinatorics, Probability and Computing_ 26 (2017), 720–745.
* [19] K. O’Bryant, A complete annotated bibliography of work related to Sidon sequences, _Electronic Journal of Combinatorics_ DS 11 (2004).
* [20] K.T. Phelps, V. Rödl, Steiner triple systems with minimum independence number, _Ars combinatoria_ 21 (1986), 167–172.
* [21] A. Pór, D.R. Wood, No-Three-in-Line-in-3D, _Algorithmica_ 47 (2007), 481–488.
* [22] K.F. Roth, On a problem of Heilbronn, _Journal of the London Mathematical Society_ 26 (1951), 198–204.
* [23] I. Ruzsa, Solving a linear equation in a set of integers I, _Acta Arithmetica_ 65 (1993), 259–282.
* [24] D. Saxton and A. Thomason, Hypergraph containers, _Inventiones mathematicae_ 201 (2015), 925–992.
* [25] B. Sudakov, I. Tomon, Evasive sets, covering by subspaces, and point-hyperplane incidences, arxiv:2207.13077.
# Democratizing Machine Learning for Interdisciplinary Scholars:
Report on Organizing the NLP+CSS Online Tutorial Series
Ian Stewart Pacific Northwest National Laboratory<EMAIL_ADDRESS>and
Katherine A. Keith Williams College<EMAIL_ADDRESS>
###### Abstract.
Many scientific fields—including biology, health, education, and the social
sciences—use machine learning (ML) to help them analyze data at an
unprecedented scale. However, ML researchers who develop advanced methods
rarely provide detailed tutorials showing how to apply these methods. Existing
tutorials are often costly to participants, presume extensive programming
knowledge, and are not tailored to specific application fields. In an attempt
to democratize ML methods, we organized a year-long, free, online tutorial
series targeted at teaching advanced natural language processing (NLP) methods
to computational social science (CSS) scholars. Two organizers worked with
fifteen subject matter experts to develop one-hour presentations with hands-on
Python code for a range of ML methods and use cases, from data pre-processing
to analyzing temporal variation of language change. Although live
participation was more limited than expected, a comparison of pre- and post-
tutorial surveys showed an increase in participants’ perceived knowledge of
almost one point on a 7-point Likert scale. Furthermore, participants asked
thoughtful questions during tutorials and engaged readily with tutorial
content afterwards, as demonstrated by over 10K total views of posted tutorial
recordings. In this report, we summarize our organizational efforts and
distill five principles for democratizing ML+X tutorials. We hope future
organizers improve upon these principles and continue to lower barriers to
developing ML skills for researchers of all fields.111NLP+CSS Tutorial
Website: https://nlp-css-201-tutorials.github.io/nlp-css-201-tutorials/
Keywords: machine learning, data science, graduate instruction,
interdisciplinary programs
CCS Concepts: Computing methodologies → Artificial intelligence; Social and
professional topics → Computing education programs
## 1\. Introduction
Interest in incorporating machine learning into scientific analyses has
exploded in the last two decades. Machine learning (ML)—the process of
teaching a machine to predict statistical patterns in data (jordan2015machine,
)—has gained prominence in biology (jones2019setting, ), physics
(karniadakis2021physics, ), health care (beam2018big, ), and the social
sciences (mason2014computational, ) inter alia, yielding many successful
“ML+X” collaborations. While this potential for ML+X is enormous, many
researchers unfamiliar with ML methods face barriers to entry, partly because
implementing complex methods can be challenging for those without strong
mathematical or programming backgrounds (cai2019software, ).
Figure 1. Recording for _Tutorial 2: Extracting Information from Documents_
led by Andrew Halterman. As of October 23, 2022, this video had 5.1K views on
YouTube.
As a starting point for ML newcomers, some ML+X educational material covers
simpler methods such as regression (saunders2018eleven, ). For example, the
Summer Institutes for Computational Social Science (SICSS) have developed
learning materials that focus on basic ML methods for network analysis and
text processing (sharlach2019, ). Computational social science researchers
often leverage these kinds of methods to analyze web-scale data, which can
shed light on complicated social processes such as political polarization
(bail2018exposure, ).
Basic ML methods provide a useful starting place but often lack the analytical
power required to handle the more complex questions that other fields pose.
Social scientists often want to use ML to develop deep semantic
representations of language that can lead to better predictive performance, or
to estimate causal effects. Scientists who seek to advance their
understanding of ML beyond the basics are often left searching for tutorial-
like materials on their own, a difficult and often time-consuming task. On the
other hand, well-meaning ML experts may try to share their expertise through
media such as blog posts, but they run the risk of “parachuting” into
unfamiliar fields with ill-adapted solutions (adame2021meaningful, ;
summers2021artificial, ). Finally, many formal avenues for sharing knowledge
about ML—such as academic conferences—can systematically _exclude_ researchers
outside of ML via high fees to access materials.222Among other issues, social
science research generally receives less funding compared to computer science.
For instance, in 2021, the NSF disbursed $283 million in funding for social,
behavioral and economic sciences, versus $1 billion for computer and
information sciences and engineering (from
https://www.nsf.gov/about/congress/118/highlights/cu21.jsp, accessed 10 August
2022). This lack of funding can often prevent social science researchers from
attending ML conferences where many tutorials are presented.
We take the position that ML researchers can make their methods more
accessible and inclusive to researchers outside the field by creating online
instruction explicitly tailored to the fields of subject matter experts. Using
the NLP+CSS tutorial series we organized in 2021-2022 as a case study, we
argue that these interdisciplinary training sessions should incorporate the
following Principles for Democratizing ML+X Tutorials:
1. P.1
Teach machine learning (ML) methods that are relevant and targeted to specific
non-ML fields—e.g. biology, health, or the social sciences
2. P.2
Teach ML methods that are recent and cutting-edge
3. P.3
Lower start-up costs of programming languages and tooling
4. P.4
Provide open-source code that is clearly written, in context, and easily
adapted to new problems
5. P.5
Reduce both monetary and time costs for participants
#### ML+Social Sciences
Starting in summer 2021, we put our principles into action and created the
_NLP+CSS 201 Online Tutorial Series_. We focused on an applied branch of
machine learning to language data—a field called natural language processing
(NLP)—aimed at early career researchers in the social sciences. This report
reflects on our experience and provides clear takeaways so that others can
generalize our NLP + social sciences tutorials to tutorials targeted at other
ML+X disciplines.
As we describe in Section 3, we incorporated the principles above into our
tutorial series by: (P.1&P.2) inviting experts in computational social science
(CSS) to each lead a tutorial on a cutting edge NLP method; (P.3&P.4) working
with the experts to create a learning experience that is hosted in a self-
contained interactive development environment in Python—Google
CoLaboratory—and uses real-world social science datasets to provide context
for the method; and (P.5) hosting our tutorials live via Zoom and posting the
recordings on YouTube, while providing all the materials and participation
without any monetary costs to participants.
The impact of the inaugural year of our series is tangible. We created twelve
stand-alone tutorials made by fifteen area-expert tutorial hosts, gathered 396
members on an e-mail mailing list, and accumulated over 10K total views on
tutorial recordings posted to YouTube.333As of October 2022, videos available
here: https://www.youtube.com/channel/UCcFcF9DkanjaK3HEk7bsd-A Comparing
surveys pre- and post-tutorial, participants during the live sessions self-
assessed as improving their knowledge of the topic by 0.77 on a 7-point Likert
scale (Section 4). After exploring highlights of the series, we discuss areas
for improvement in Section 5, including a suggestion to frame the tutorials as
a “springboard” for researchers’ own exploration of advanced ML methods.
## 2\. Related work
### 2.1. Interdisciplinary tutorials
Researchers specializing in NLP methods have proposed a variety of
interdisciplinary tutorials to address social science questions, which we
surveyed before we began planning our tutorial series. However, none satisfied
all the principles we listed in Section 1. The tutorials presented at the
conferences for the Association for Computational Linguistics
(ACL)444https://www.aclweb.org/portal/acl_sponsored_events—one of the premiere
venues for NLP research—are on the cutting edge of research ($+$P.2) and often
include code ($+$P.4), but the ACL tutorials are also often geared towards
NLP researchers rather than researchers in fields outside of computer science
($-$P.1), contain code that assumes substantial background knowledge ($-$P.3)
and cost hundreds of dollars to attend ($-$P.5). Other interdisciplinary
conferences such as the International Conference on Computational Social
Science (IC2S2)555https://iscss.org/ic2s2/conference/ also have tutorials that
explain recent NLP methods to computational social scientists
($+$P.1,P.2,P.4), but often the tutorials are presented with inconsistent
formats ($-$P.3) and cost money to attend ($-$P.5). The Summer Institutes in
Computational Social Science (SICSS) (sharlach2019, ) provide free ($+$P.5)
tutorials on NLP methods for social scientists ($+$P.1) with accompanying code
($+$P.3&P.4), but they cover only the basic NLP techniques and not cutting
edge methods ($-$P.2), while also limiting their target audience to people
already involved with CSS research.666NLP methods include word counting and
basic topic modeling: https://sicss.io/curriculum (accessed 11 August 2022).
### 2.2. Online learning
While not without flaws, online learning experiences such as Massive Online
Open Courses (MOOCs) have proven useful in higher education when meeting
physically is impossible or impractical due to students’ geographic
distance (de2011using, ; harasim2000shift, ; marcelino2018learning, ). Online
courses have disrupted traditional education such as in-person college classes
(vardi2012will, ), but they may eventually prove most useful as a supplement
rather than a replacement to traditional education (twigg2003models, ). For
one, computer science students have found online learning useful when it
incorporates interactive components such as hands-on exercises which may not
be possible to execute during a lecture (meerbaum2013learning, ;
tang2014environment, ). Additionally, while the centralized approach to
traditional education can provide useful structure for students new to a
domain, the decentralized approach of many online courses can provide room for
socialization and creativity in content delivery (wallace2013social, ;
wiley2002online, ). We intended our tutorial series to fit into the developing
paradigm of online education as a decentralized and interactive experience,
which would not replace but supplement social science education in machine
learning. However, our tutorial series differs from MOOCs in that we limit the
time commitment for each topic to one hour ($+$P.5) and each tutorial hour is
meant to be stand-alone so that researchers can watch only the topics that are
relevant to them.
## 3\. Methods for Tutorial Series: Process and Timeline
No. | Tutorial Title | Views
---|---|---
Fall 2021
T1 | Comparing Word Embedding Models | 1427
T2 | Extracting Information from Documents | 5386
T3 | Controlling for Text in Causal Inference with Double Machine Learning | 472
T4 | Text Analysis with Contextualized Topic Models | 570
T5 | BERT for Computational Social Scientists | 948
Spring 2022
T6 | Moving from Words to Phrases when Doing NLP | 356
T7 | Analyzing Conversations in Python Using ConvoKit | 430
T8 | Preprocessing Social Media Text | 659
T9 | Aggregated Classification Pipelines | 139
T10 | Estimating Causal Effects of Aspects of Language with Noisy Proxies | 264
T11 | Processing Code-mixed Text | 259
T12 | Word Embeddings for Descriptive Corpus Analysis | 192
Table 1. Tutorial content. Order, title, and number of views of the
corresponding recordings on YouTube as of October 2022. Full abstracts of
each tutorial are provided in the appendix, Table 3.
We describe our process and timeline for creating the tutorial series with the
hope that future ML+X tutorial series organizers can copy or build from our
experience. Throughout our planning process, we based our decisions on the
five principles mentioned earlier (P.1-P.5). Our tutorial series spanned two
semesters: Fall 2021 (August through December) and Spring 2022 (February
through May). The tutorial content is summarized in Table 1.
### 3.1. Interest survey
To identify relevant methods (P.1), for one month before each semester we
distributed a survey via our personal Twitter accounts, via a Google group
mailing list that we created at the beginning of the fall 2021 semester, and
via topically related mailing lists (e.g. a political methods list-serv). We
asked participants to list the methods that they would be most interested in
learning about during a tutorial, which we then grouped into categories based
on underlying similarities.
The distribution of interest categories is shown in Figure 2. As expected, the
responses covered many different NLP applications (nguyen2020we, ), including
data preparation (preprocessing, multilingual), conversion of text to relevant
constructs (information extraction, word embeddings, deep learning), and
downstream analysis (causal inference, application). Most participants
expressed interest in word embeddings, unsupervised learning, and downstream
applications of NLP methods, which aligns with the current popularity of such
methods.
Figure 2. Distribution of NLP methods indicated in initial surveys.
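The grouping of free-text responses into categories was done by hand; the sketch below shows one way such keyword bucketing could be automated in Python. The keyword-to-category map here is hypothetical, not the one we actually used.

```python
from collections import Counter

# Hypothetical keyword-to-category map; the real grouping was manual,
# so these buckets are illustrative only.
CATEGORIES = {
    "embedding": "word embeddings",
    "bert": "deep learning",
    "topic": "unsupervised learning",
    "causal": "causal inference",
    "preprocess": "data preparation",
}

def categorize(response: str) -> str:
    """Assign a free-text survey response to the first matching bucket."""
    text = response.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    return "other"

def tally(responses):
    """Count how many responses fall into each category."""
    return Counter(categorize(r) for r in responses)
```

In practice, hybrid approaches work well: automatic bucketing for a first pass, followed by a manual review of the responses that fall into the "other" bucket.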
#### Lessons learned
Since we typically publish in NLP venues, we took a prescriptive approach to
choosing the tutorial methods to present, in an attempt to more actively shape
the field of computational social science (addressing P.1& P.2). We used the
results of the survey to brainstorm potential topics for each upcoming
semester, but did not restrict ourselves to only the most popular methods.
While useful, the interest surveys revealed a disconnect between our ideal
tutorials, which focused on advanced NLP methods, and the participants’ ideal
tutorials, e.g. entry-level methods with immediate downstream results. For
example, many participants in the Spring 2022 interest survey mentioned
sentiment analysis, a well-studied area of NLP (Birjali et al., 2021) that we
considered to be more introductory-level and sometimes unreliable
(Díaz et al., 2018). This was one source of tension between our
expectations and those of the participants, and future tutorial series
organizers may want to focus their efforts on highly-requested topics to
ensure consistent participation and satisfaction (P.1).
### 3.2. Leader recruitment
Aligning with P.2, we recruited other NLP experts who worked on cutting-edge
methods to lead each individual tutorial. We recruited tutorial leaders
through our own social networks and through mutual acquaintances. We targeted
post-doctoral fellows, early-career professors, and advanced graduate
students. To ensure P.3, we also met with the tutorial hosts to agree on a
common format for the programming platform—Google CoLaboratory with
Python (https://colab.research.google.com/)—and to help them understand the
tutorials’ objectives. The process involved several meetings: an introduction
meeting to scope the tutorial, and at least one planning meeting to review the
slides and code to be presented. Normally, this process was guided by a paper
or project for which the tutorial leader had code available. For example, the
leader of tutorial T4 was able to leverage an extensive code base already
tested by her lab.
#### Lessons learned
During the planning process, we were forced to plan the tutorials one at a
time due to the leaders’ complicated schedules. We spread the planning
meetings across the semester so that each would begin roughly two to three
weeks before the associated tutorial. We strongly
encouraged leaders to provide their code to us at least one week in advance to
give us time to review it, but we found this difficult to enforce due to time
constraints on the leaders’ side (e.g. some leaders had to prioritize other
teaching commitments). Future organizers should set up a consistent schedule
for contacting leaders in advance and agree with leaders on tutorial code that
is relatively new and usable (P.2 & P.4) without presenting an undue burden for
the leader, e.g. re-using existing code bases.
### 3.3. Participant recruitment
Even if we guaranteed P.1-P.5 with the content developed, recruiting social
science participants was essential to the success of our tutorial series. In
September 2021, we set up an official mailing list through Google Groups and
advertised it on social media and other methods-related list-servs. (The
Google Group was only accessible to participants with Google Mail accounts,
which in retrospect likely discouraged some participants who only use
institutional email accounts.) The mailing list eventually hosted 396 unique
participants. For all tutorials, we set up an RSVP system using Google Forms
for participants to sign up, and we provided an RSVP link up to one week
before each tutorial. We chose this “walled garden” approach to discourage
anti-social activity such as Zoom-bombing which is often made easier by open
invitation links (Ling et al., 2021), and to provide tutorial leaders with a
better sense of their participants.
Figure 3. Excerpts from the tutorial on topic modeling (T4), demonstrating (a)
the application of a neural model to (b) text from politicians’ interviews,
which produces (c) word lists for inductively discovered topics.
#### Lessons learned
This process revealed significant drop-out: only 10-30% of the people who
signed up actually attended the tutorial. While the reasons for the drop-out
remained unclear, we reasoned that people signed up for the tutorial as a
back-up and were willing to miss the live session if another obligation arose,
under the assumption that the recording would be available later. Although we
believe in the benefits of asynchronous learning, the low number of live
participants was somewhat discouraging to the live tutorial hosts.
### 3.4. Running the tutorials
During the tutorials, we wanted to ensure low start-up cost of the programming
environment (P.3) and well-written code that participants could use
immediately after the tutorials (P.4). We designed each tutorial to run for
slightly under 60 minutes, to account for time required for introductions,
transitions, and follow-up questions. The tutorial leader began the session
with a presentation to explain the method of interest with minimal math, using
worked examples on toy data and examples of prior research that leveraged the
method.
After 20-30 minutes of presentation, the tutorial leader switched to showing
the code written in a Google CoLaboratory Python notebook (P.3), which is an
internet-based coding environment that allows users to run modular blocks of
Python code. The leader would load or generate a simple text dataset, often no
more than several hundred documents in size, to illustrate the method’s
application. Depending on the complexity of the method, the leader might start
with some basic steps and then show the students increasingly complicated code
snippets. In general, the leaders walked the students through separate modules
that showed different aspects of the method in question. During the topic
modeling session (T4), the leader showed first how to train the topic model,
then provided extensive examples of what the topic output looked like and how
it should be interpreted (e.g. top words per topic, example documents with
high topic probabilities). (Topic models are used to identify latent
groupings of words in a document, e.g. a health-related topic might include
“exercise” and “nutrition”; Blei et al., 2003.) As a point of comparison, the
leader also would often show the output of a simpler “baseline” model to
demonstrate the superior performance of the tutorial’s more advanced method.
We show excerpts from the tutorial notebook on topic modeling in Figure 3,
which includes an overview of the topic model, a sample of the text data which
relates to politics, and the resulting learned “topics” as lists of words.
#### Lessons learned
To encourage critical thinking, some of the tutorial leaders provided
questions or exercises in the Colab notebooks for students to complete at a
later time. The leader of the information extraction tutorial (T2) created an
exercise for students to parse sentences from news text related to military
activity, and then to extract all sentences that described an attack between
armies. Some of these exercises posed challenges to participants who lacked
experience with the data structures or function calls involved in the code.
For future tutorials, leaders should consider walking participants through a
simple exercise (e.g. live-coding) rather than expecting them to attempt the
problem on their own.
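The T2 exercise can be sketched with a heavily simplified, dependency-free stand-in. The actual tutorial used spaCy's named-entity recognition and dependency parses on real news text; here a keyword match over invented sentences conveys only the shape of the task.

```python
# Simplified sketch of the T2 exercise: find sentences in news-like text
# that describe an attack between armed groups. The real tutorial used
# spaCy (NER + dependency parses); this stand-in only keyword-matches.
import re

ATTACK_VERBS = {"attacked", "shelled", "bombed", "ambushed", "struck"}

# Invented example text, not from the tutorial's Syria dataset.
text = (
    "Rebel forces attacked a government outpost near the border. "
    "Aid agencies delivered food to the displaced families. "
    "The army shelled positions held by the militia overnight. "
    "Negotiators met in Geneva to discuss a ceasefire."
)

def attack_sentences(text):
    # Naive sentence split on end punctuation, then keyword filter.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences
            if any(v in s.lower().split() for v in ATTACK_VERBS)]

for s in attack_sentences(text):
    print(s)
```

Even this toy version exposes the data structures (token lists, filtered sentence lists) that tripped up some participants, which is why live-coding the solution may work better than assigning it.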
Tutorial | Question sample
---|---
T1 | Is there a reason that we’re using word2vec rather than other models such as fastText? What does Euclidean distance between embeddings mean? Does word2vec work on short “documents” such as Twitter data?
T4 | How is the bag of words representation combined with contextualized representation? How should someone choose the model to use for this component? What models does your package support?
T6 | Are Phrases mostly Nouns since Nouns are the ones that have multi-words? Have you tried this model in languages other than English? How was the PhraseBert model trained?
T7 | How do you keep track of who’s responding to what previous utterance? How do you create a conversation corpus from scratch? Can the code provide statistics or summary for each speaker or utterance?
T9 | Could you interpret a well-calibrated model as estimating the moral outrage in a post? Do choices in favor of hard modeling and aggregation techniques lead to higher values in outcome measurement? What would you recommend to handle annotator disagreement when the task is to label spans inside the text?
Table 2. Example questions from tutorial sessions. Some wording changed for
clarity.
| $\mu$ | $\sigma$
---|---|---
Post-Q1–Learned from code (1-5) | 4.00 | 0.94
Post-Q2–Learned from content (1-5) | 4.24 | 0.90
Pre- vs. post-survey | E[Post-Q3] - E[Pre-Q3] |
Knowledge about the topic (1-7) | $0.77^{*}$ |
(c)
Figure 4. Participant responses for the survey sent during the live tutorials
(aggregated from T4-T12). Figure (a) indicates participant disciplines
(Pre-Q1) and (b) coding experience (Pre-Q2). Table (c) shows _(top)_ the mean
($\mu$) and standard deviation ($\sigma$) on a 5-point Likert scale for
Post-Q1&2 and _(bottom)_, for the question about participants’ self-rated
knowledge about the topic graded on a 7-point Likert scale, the expected value
of the post-survey minus the expected value of the pre-survey (E[Post-Q3] -
E[Pre-Q3]). ∗ indicates statistical significance with p-value $<10^{-5}$ via a
two-sided t-test.
### 3.5. Participation during tutorials
During each tutorial, we—the authors of the study—acted as facilitators to
help the leaders handle questions and manage time effectively. The leaders
were often unable to see the live chat while presenting, and we therefore
found natural break points in the presentation to answer questions sent to the
chat. While we allowed for written and spoken questions, participants
preferred to ask questions in the chat, possibly to avoid interrupting the
presenter and to allow them to answer asynchronously.
Participants were encouraged to test out the code on their own during the
tutorial, and the code was generally written to execute quickly without
significant lag for e.g. downloads or model training (P.3). This often
required the leaders to run some of the code in advance to automate less
interesting components of the tutorial, e.g. selecting the optimal number of
topics for the topic model.
#### Lessons learned
Based on some of the questions received, participants seemed to engage well
with the code and to follow up with some of the methods. Participants asked
between 1 and 15 questions per tutorial (median 5). We show example questions
from the tutorials with the largest number of questions in Table 2. The
questions cover both simple closed-answer questions (“Can the code provide
statistics”) and more complicated open-ended questions (“How should someone
choose the model to use”). While the number of questions was relatively low
overall, the participants who asked questions were engaged and curious about
the limitations and ramifications of the methods being presented. To improve
participant engagement via questions, future leaders may find it useful to
pose their own questions throughout the code notebook (“what do you think
would happen if we applied method X while setting parameter Z=1?”) as a way to
guide the participants’ curiosity.
## 4. Analysis of Effectiveness
### 4.1. Pre- and post-surveys during live tutorials
During the live portions of the tutorials, we distributed an optional survey
to participants at the beginning and end of the one-hour sessions. (During
T1-T3 we were prototyping the series, so we only distributed the surveys for
T4-T12.) The pre-survey consisted of three questions in a Google form: (Pre-Q1)
_Academic discipline background_ in which participants chose one of the given
disciplines or wrote their own; (Pre-Q2) _How many years of experience in
coding/data analysis do you have?_ which had four options; and (Pre-Q3) _How
much do you currently know about the topic?_ which was judged on a 7-point
Likert scale with 1 described as _I know nothing about the topic_ , 4
described as _I could possibly use the methods in my research, but I’d need
guidance_ and 7 described as _Knowledgeable, I could teach this tutorial._ The
post-survey consisted of four questions: (Post-Q1) _Code: How much did you
learn from the hands-on code aspect of the tutorial?_ ; (Post-Q2) _Content:
How much did you learn from the content part of the tutorial?_ ; (Post-Q3)
_Now, after the tutorial, how much do you currently know about the topic?_ and
(Post-Q4) _Any suggestions or changes we should make for the next tutorial?_.
Questions 1 and 2 were judged on a 5-point Likert scale with 1 described as
_Learned nothing new_ and 5 described as _Learned much more than I could have
on my own_. Question 3 was judged on the same 7-point Likert scale as the
analogous question in the pre-survey.
#### Results
We report aggregated survey responses in Figure 4. Across the eight tutorials
for which we collected data, the pre-surveys had 113 respondents total and the
post-surveys had 63 respondents. Figure 4(a) shows the results of the
breakdown by academic discipline or background (Pre-Q1). The three largest
areas of participation came from the fields of computer science, sociology,
and political science. Figure 4(b) shows that our participants had quite a
lot of experience in coding or data analysis (Pre-Q2): 78.8% of respondents
had three or more years of coding experience.
Analyzing Post-Q1 about how much they learned from code, participants
responded with $\mu=4,\sigma=0.94$. Post-Q2 about learning from content was
similar with $\mu=4.24,\sigma=0.9$. Interpreting these results, many
participants perceived a high degree of learning from attending the live
tutorials. We measure the pre- to post-survey learning by computing the
difference between the mean of Post-Q3 and mean of Pre-Q3, and we find a
difference of 0.77. (Ideally, we would look not at the aggregate participant
responses but instead test the pairwise differences for each individual’s
pre- versus post-survey. However, we found that only 18 participants could be
matched from pre- to post-survey due to drop-out, which is too small for
pairwise significance testing.) We ran a two-sided t-test to
see if the pre- versus post-survey differences were greater than zero with
statistical significance, which produced a t-value of 4.16 and a p-value less
than $10^{-5}$. While seemingly small in aggregate, this change represents a
consistent growth in perceived knowledge among participants that is surprising
considering the relatively short tutorial length of one hour. Manually reading
the responses from (Post-Q4), participants described very positive
experiences, including “very good tutorial,” “Excellent tutorial!!!” and “very
helpful.”
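The mean-difference comparison above can be sketched in a few lines. The Likert scores below are synthetic stand-ins, not the actual survey responses (the paper reports a mean difference of 0.77 with p < 10^-5 on the real data), and the pooled-variance two-sample t statistic mirrors the unpaired test used in the analysis.

```python
# Sketch of the pre- vs. post-survey comparison (Post-Q3 minus Pre-Q3),
# using synthetic 7-point Likert scores in place of the real responses.
import math
import statistics

pre_q3 = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 3, 4, 2, 3]
post_q3 = [s + 1 for s in pre_q3]  # synthetic: each score one point higher

diff = statistics.mean(post_q3) - statistics.mean(pre_q3)

# Unpaired two-sample t statistic with pooled variance, since most pre
# and post responses could not be matched to the same participant.
n1, n2 = len(pre_q3), len(post_q3)
sp2 = ((n1 - 1) * statistics.variance(pre_q3)
       + (n2 - 1) * statistics.variance(post_q3)) / (n1 + n2 - 2)
t_stat = diff / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# The two-sided critical value at alpha=0.05 with 28 df is about 2.048,
# so a t statistic above that indicates a significant mean difference.
print(f"mean difference = {diff:.2f}, t = {t_stat:.2f}")
```

With real data one would report the exact p-value (e.g. via `scipy.stats.ttest_ind`); the stdlib-only version here keeps the sketch self-contained.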
#### Lessons learned
As Figure 4(a) shows, we were successful in recruiting participants from a
wide variety of social science disciplines. However, computer science or data
science—top-most bar in Figure 4(a)—was the most represented field. In
reflection, having another organizer primarily focused on social science,
rather than NLP, would have helped us recruit more CSS-oriented participants
and would have aligned better with P.1. Responses from Post-Q4 also indicated that
the tutorials were not long enough for some participants. One participant said
“It would be great to make something like this into a multi-part tutorial. It
seemed like too much new material for 1 hour.” Some suggestions for future
tutorial organizers could be to make the tutorials 2-3 hours long: the first
hour could provide an overview, followed by more advanced topics or practice
in hours 2-3. It is difficult to balance the trade-off between
(1) audience attention bandwidth and (2) fully explaining a particular method.
We also could have improved how we set audience expectations: introducing the
tutorials as a crash course and explaining that participants should expect to
spend 4-5 hours on their own afterwards to learn the material in depth.
Furthermore, future leaders may want to require or strongly encourage
participation in the surveys to improve data collection as we had relatively
low participation rates. (After all the tutorials were presented, we also
sent a survey to the mailing list to ask how much participants had learned
from the tutorials and whether they used the material in their own work. We
received only five responses in total, so we do not present those statistics
here.)
### 4.2. Downstream impact
Despite the relatively low synchronous participation (roughly 4-30
participants per session), the views on the tutorial videos posted to YouTube
showed consistent growth during the tutorial series and even afterward,
culminating in over 10K total views. In addition, the tutorial materials were
showcased on the website for the Summer Institute for Computational Social
Science (https://sicss.io/overview, accessed 15 October 2022), and several
tutorial leaders presented their tutorials again at an international social
science conference (International Conference on Web and Social Media 2022,
https://www.icwsm.org/2022/index.html/#tutorials-schedule, accessed 15
October 2022), having prepared relevant materials as part of our series
(P.4). The
tutorial series may therefore have the greatest impact not for the synchronous
participants but instead for the large and growing audience of researchers who
discover the materials after the fact and may not have the resources to learn
about the methods through traditional channels (P.5). The success of the tutorials
in other contexts also points to the beginning of a virtuous cycle, in which
tutorial leaders test-drive their work in an informal setting and then present
a more formal version at an academic conference.
## 5. Conclusion
#### Future improvements
Reflecting on this experience report, we suggest the following improvements
for future ML+X tutorial organizers:
* •
Despite the results of the pre-tutorial interest surveys, we made curatorial
decisions about the content and we cannot be sure that we satisfied the needs
of what participants wanted versus what we thought was important. Future
organizers may achieve higher participation by focusing on methods with high
public interest, regardless of their lower perceived utility by subject area
experts.
* •
The two co-organizers were both computer scientists, and we largely leveraged
a computer science professional network for recruitment. Future renditions
would ideally include a social scientist co-organizer to provide better
insight into current ML needs and desires among researchers (P.1), and to
help tutorial participants feel more at ease with complicated ML methods.
* •
Despite high sign-up rates and lack of cost (P.5), participants would often
fail to attend tutorials for which they had signed up. This may reflect a lack
of commitment among participants due to the virtual presence (“just another
Zoom meeting”) (Toney et al., 2021), or a failure to send frequent reminders.
Future tutorial organizers should experiment with other strategies for
encouraging attendance, including more topical data sets, a “hackathon”
setting (Mtsweni and Abdullah, 2015), or a structured community to engage
participants before and after the tutorial (Harasim, 2000).
* •
We found that participants did not consistently engage in the hands-on coding
segments of the tutorials. We recommend that future tutorial leaders either
simplify the hands-on coding for short sessions, or follow up on the tutorial
with additional “office hours” for interested students to try out the code and
ask further questions about the method. Similar to some computer science
courses, this approach might have a lecture component and a separate
“recitation” session for asking questions about the code.
* •
In the early stages of the tutorial series, we focused more on executing the
tutorials rather than collecting quantitative data about the participants’
experience. This makes it difficult to judge some aspects of the tutorials’
success, especially how the tutorials were received by participants with
different backgrounds and expectations. With more extensive evaluation and
participation in surveys, we hope that future organizers will make quicker and
more effective improvements during the course of a tutorial series.
#### Successes
Despite these drawbacks, we believe our tutorial series succeeded in its
goal—to help social scientists advance their skills beyond introductory NLP
methods. We hope other ML+X tutorials can build from our successes:
* •
We accumulated over 10K total views among our public recordings. Thus, we’d
encourage future ML+X organizers to put even more effort into the recordings
rather than live sessions.
* •
Although participants came in skilled—78.8% of respondents had three or more
years of coding experience (Figure 4(b))—they reported an aggregate increase
in perceived knowledge of the methods presented: 0.77 on a 7-point Likert
scale.
* •
We generated educational content for a diverse set of relevant and new NLP
methods (P.1 & P.2) that can accelerate social science research. The subject
matter experts who led the tutorials were able to translate complicated ML
concepts into understandable, step-by-step lessons. We hope future ML+X
organizers can take inspiration from these tutorials’ choice of content and
social organization.
* •
Our tutorials have produced ready-to-use, modular, and freely available Python
code with a low barrier to entry (P.3,P.4,P.5), which will provide
“scaffolding” to future students seeking to start their own projects
(Nam et al., 2010). We envision future ML+X organizers using this codebase as
a template for releasing code in their own domain.
As machine learning methods become more available and more powerful,
scientists may feel encouraged to implement these methods within their own
domain-specific research. We believe tutorial series such as the one described
in this report will help guide these researchers on their journey. Like the
tutorials themselves, we hope that our _Principles for Democratizing ML+X
Tutorials_ (P.1–P.5) will be used as a springboard toward more open and
inclusive learning experiences for all researchers. Rather than wait for top-
down solutions, we encourage other ML practitioners to get involved and shape
the future of applied science by sharing their knowledge directly with
scholars eager to know more.
## Acknowledgments
We are deeply grateful for financial assistance from a Social Science Research
Council (SSRC)/Summer Institutes in Computational Social Science (SICSS)
Research Grant. We thank SIGCSE reviewers and various computer science
education experts for their feedback on initial drafts. We thank the fifteen
organizers who generously donated their expertise and time to making these
tutorials possible: Connor Gilroy, Sandeep Soni, Andrew Halterman, Emaad
Manzoor, Silvia Terragni, Maria Antoniak, Abe Handler, Shufan Wang, Jonathan
Chang, Steve Wilson, Dhanya Sridhar, Monojit Choudhury, Sanad Rizvi, and Neha
Kennard.
## References
* (1) Adame, F. Meaningful collaborations can end “helicopter research”. Nature (2021).
* (2) Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. F., Lee, J., Mann, M., Merhout, F., and Volfovsky, A. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences 115, 37 (2018), 9216–9221.
* (3) Beam, A. L., and Kohane, I. S. Big data and machine learning in health care. JAMA 319, 13 (2018), 1317–1318.
* (4) Birjali, M., Kasri, M., and Beni-Hssane, A. A comprehensive survey on sentiment analysis: Approaches, challenges and trends. Knowledge-Based Systems 226 (2021), 107134.
* (5) Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet Allocation. Journal of machine Learning research 3, Jan (2003), 993–1022.
* (6) Cai, C. J., and Guo, P. J. Software developers learning machine learning: Motivations, hurdles, and desires. In 2019 IEEE symposium on visual languages and human-centric computing (VL/HCC) (2019), IEEE, pp. 25–34.
* (7) De Waard, I., Abajian, S., Gallagher, M. S., Hogue, R., Keskin, N., Koutropoulos, A., and Rodriguez, O. C. Using mLearning and MOOCs to understand chaos, emergence, and complexity in education. The International Review of Research in Open and Distributed Learning 12, 7 (2011), 94–115.
* (8) Díaz, M., Johnson, I., Lazar, A., Piper, A. M., and Gergle, D. Addressing age-related bias in sentiment analysis. In Proceedings of the 2018 chi conference on human factors in computing systems (2018), pp. 1–14.
* (9) Harasim, L. Shift happens: Online education as a new paradigm in learning. The Internet and higher education 3, 1-2 (2000), 41–61.
* (10) Jones, D. T. Setting the standards for machine learning in biology. Nature Reviews Molecular Cell Biology 20, 11 (2019), 659–660.
* (11) Jordan, M. I., and Mitchell, T. M. Machine learning: Trends, perspectives, and prospects. Science 349, 6245 (2015), 255–260.
* (12) Karniadakis, G. E., Kevrekidis, I. G., Lu, L., Perdikaris, P., Wang, S., and Yang, L. Physics-informed machine learning. Nature Reviews Physics 3, 6 (2021), 422–440.
* (13) Ling, C., Balcı, U., Blackburn, J., and Stringhini, G. A First Look at Zoombombing. In 2021 IEEE Symposium on Security and Privacy (SP) (2021), IEEE, pp. 1452–1467.
* (14) Marcelino, M. J., Pessoa, T., Vieira, C., Salvador, T., and Mendes, A. J. Learning computational thinking and Scratch at distance. Computers in Human Behavior 80 (2018), 470–477.
* (15) Mason, W., Vaughan, J. W., and Wallach, H. Computational social science and social computing, 2014.
* (16) Meerbaum-Salant, O., Armoni, M., and Ben-Ari, M. Learning computer science concepts with scratch. Computer Science Education 23, 3 (2013), 239–264.
* (17) Mtsweni, J., and Abdullah, H. Stimulating and maintaining students’ interest in computer science using the hackathon model. The Independent Journal of Teaching and Learning 10, 1 (2015), 85–97.
* (18) Nam, D., Kim, Y., and Lee, T. The effects of scaffolding-based courseware for the Scratch programming learning on student problem solving skill. In Proceedings of the 18th International Conference on Computers in Education (2010), vol. 723, Asia-Pacific Society for Computers in Education Putrajaya, Malaysia, p. 727.
* (19) Nguyen, D., Liakata, M., DeDeo, S., Eisenstein, J., Mimno, D., Tromble, R., and Winters, J. How we do things with words: Analyzing text as social and cultural data. Frontiers in Artificial Intelligence 3 (2020), 62.
* (20) Saunders, T. E., He, C. Y., Koehl, P., Ong, L. S., and So, P. T. Eleven quick tips for running an interdisciplinary short course for new graduate students. PLoS computational biology 14, 3 (2018), e1006039.
* (21) Sharlach, M. Summer institute advances social science in the digital age. Princeton Office of Engineering Communications (2019).
* (22) Summers, R. M. Artificial intelligence of COVID-19 imaging: a hammer in search of a nail. Radiology (2021).
* (23) Tang, T., Rixner, S., and Warren, J. An environment for learning interactive programming. In Proceedings of the 45th ACM technical symposium on Computer science education (2014), pp. 671–676.
* (24) Toney, S., Light, J., and Urbaczewski, A. Fighting Zoom fatigue: Keeping the zoombies at bay. Communications of the Association for Information Systems 48, 1 (2021), 10.
* (25) Twigg, C. A. Models for online learning. Educause review 38 (2003), 28–38.
* (26) Vardi, M. Y. Will MOOCs destroy academia? Communications of the ACM 55, 11 (2012), 5–5.
* (27) Wallace, A. Social learning platforms and the flipped classroom. In 2013 Second International Conference on E-Learning and E-Technologies in Education (ICEEE) (2013), IEEE, pp. 198–200.
* (28) Wiley, D. A., and Edwards, E. K. Online self-organizing social systems: The decentralized future of online learning. Quarterly review of distance education 3, 1 (2002), 33–46.
## Appendix
We provide the full abstracts of the tutorials in Table 3, which the tutorial
leaders wrote in coordination with the organizers.
| Summary
---|---
T1 | We’ll demonstrate an extension of the use of word embedding models by fitting multiple models on a social science corpus (using gensim’s word2vec implementation), then aligning and comparing those models. This method is used to explore group variation and temporal change. We’ll discuss some tradeoffs and possible extensions of this approach.
T2 | This workshop provides an introduction to information extraction for social science–techniques for identifying specific words, phrases, or pieces of information contained within documents. It focuses on two common techniques, named entity recognition and dependency parses, and shows how they can provide useful descriptive data about the civil war in Syria. The workshop uses the Python library spaCy, but no previous experience is needed beyond familiarity with Python.
T3 | Establishing causal relationships is a fundamental goal of scientific research. Text plays an increasingly important role in the study of causal relationships across domains especially for observational (non-experimental) data. Specifically, text can serve as a valuable “control” to eliminate the effects of variables that threaten the validity of the causal inference process. But how does one control for text, an unstructured and nebulous quantity? In this tutorial, we will learn about bias from confounding, motivation for using text as a proxy for confounders, apply a “double machine learning” framework that uses text to remove confounding bias, and compare this framework with non-causal text dimensionality reduction alternatives such as topic modeling.
T4 | Most topic models still use Bag-Of-Words (BoW) document representations as input. These representations, though, disregard the syntactic and semantic relationships among the words in a document, the two main linguistic avenues to coherent text. Recently, pre-trained contextualized embeddings have enabled exciting new results in several NLP tasks, mapping a sentence to a vector representation. Contextualized Topic Models (CTM) combine contextualized embeddings with neural topic models to increase the quality of the topics. Moreover, using multilingual embeddings allows the model to learn topics in one language and predict them for documents in unseen languages, thus addressing a task of zero-shot cross-lingual topic modeling.
T5 | What is BERT? How do you use it? What kinds of computational social science projects would BERT be most useful for? Join for a conceptual overview of this popular natural language processing (NLP) model as well as a hands-on, code-based tutorial that demonstrates how to train and fine-tune a BERT model using HuggingFace’s popular Python library.
T6 | Most people starting out with NLP think of text in terms of single-word units called “unigrams.” But many concepts in documents can’t be represented by single words. For instance, the single words “New” and “York” can’t really represent the concept “New York.” In this tutorial, you’ll get hands-on practice using the phrasemachine package and the Phrase-BERT model to 1) extract multi-word expressions from a corpus of U.S. Supreme Court arguments and 2) use such phrases for downstream analysis tasks, such as analyzing the use of phrases among different groups or describing latent topics from a corpus.
T7 | ConvoKit is a Python toolkit for analyzing conversational data. It implements a number of conversational analysis methods and algorithms spanning from classical NLP techniques to the latest cutting edge, and also offers a database of conversational corpora in a standardized format. This tutorial will walk through an example of how to use ConvoKit, starting from loading a conversational corpus and building up to running several analyses and visualizations.
T8 | hmm howwww should we think about our #NLProc preprocessing pipeline when it comes to informal TEXT written by social media users?!? In this tutorial, we’ll discuss some interesting features of social media text data and how we can think about handling them when doing computational text analyses. We will introduce some Python libraries and code that you can use to process text and give you a chance to experiment with some real data from platforms like Twitter and Reddit.
T9 | NLP has helped massively scale-up previously small-scale content analyses. Many social scientists train NLP classifiers and then measure social constructs (e.g. sentiment) for millions of unlabeled documents which are then used as variables in downstream causal analyses. However, there are many points when one can make hard (non-probabilistic) or soft (probabilistic) assumptions in pipelines that use text classifiers: (a) adjudicating training labels from multiple annotators, (b) training supervised classifiers, and (c) aggregating individual-level classifications at inference time. In practice, propagating these hard versus soft choices down the pipeline can dramatically change the values of final social measurements. In this tutorial, we will walk through data and Python code of a real-world social science research pipeline that uses NLP classifiers to infer many users’ aggregate “moral outrage” expression on Twitter. Along the way, we will quantify the sensitivity of our pipeline to these hard versus soft choices.
T10 | Does the politeness of an email or a complaint affect how quickly someone responds to it? This question requires a causal inference: how quickly would someone have responded to an email had it not been polite? With observational data, causal inference requires ruling out all the other reasons why polite emails might be correlated with fast responses. To complicate matters, aspects of language such as politeness are not labeled in observed datasets. Instead, we typically use lexicons or trained classifiers to predict these properties for each text, creating a (probably noisy) proxy of the linguistic aspect of interest. In this talk, I’ll first review the challenges of causal inference from observational data. Then, I’ll use the motivating example of politeness and response times to highlight the specific challenges to causal inference introduced by working with text and noisy proxies. Next, I’ll introduce recent results that establish assumptions and a methodology under which valid causal inference is possible. Finally, I’ll demonstrate this methodology: we’ll use semi-synthetic data and adapt a text representation method to recover causal effect estimates.
T11 | Code-mixing, i.e., the mixing of two or more languages in a single utterance or conversation, is an extremely common phenomenon in multilingual societies. It is amply present in user-generated text, especially in social media. Therefore, CSS research that handles such text requires processing code-mixing; there are also interesting CSS and socio-linguistic questions around the phenomenon of code-mixing itself. In this tutorial, we will equip you with some basic tools and techniques for processing code-mixed text, starting with hands-on experiments with word-level language identification, all the way up to methods for building code-mixed text classifiers using massively multilingual language models.
T12 | Word embeddings such as word2vec have recently garnered attention as potentially useful tools for analysis in social science. They promise an unsupervised method to quantify the connotations of words, and compare these across time or different subgroups. However, when training or using word embeddings, researchers may find that they don’t work as well as expected, or produce unreplicable results. We focus on three subtle issues in their use that could result in misleading observations: (1) indiscriminate use of analogical reasoning, which has been shown to underperform on many types of analogies; (2) the surprising prevalence of polysemous words and distributional similarity of antonyms, both leading to counterintuitive results; and (3) instability in nearest-neighbor distances caused by sensitivity to noise in the training process. Through demonstrations, we will learn how to detect, understand, and most importantly mitigate the effects of these issues.
Table 3. Tutorial abstracts, provided by leaders.
|
# Interlayer ferromagnetism and insulator-metal transition in element-doped
CrI3 thin films
Shiyang Sun‡ College of Physics and Hebei Advanced Thin Film Laboratory, Hebei
Normal University, Shijiazhuang, Hebei 050024, China Xuyan Chen‡ School of
Gifted Young, University of Science and Technology of China, Hefei, Anhui
230026, China Xuqi Li‡ College of Physics and Hebei Advanced Thin Film
Laboratory, Hebei Normal University, Shijiazhuang, Hebei 050024, China Huihui
Zhang College of Physics and Hebei Advanced Thin Film Laboratory, Hebei
Normal University, Shijiazhuang, Hebei 050024, China Haidan Sang College of
Physics and Hebei Advanced Thin Film Laboratory, Hebei Normal University,
Shijiazhuang, Hebei 050024, China Shifei Qi<EMAIL_ADDRESS>College of
Physics and Hebei Advanced Thin Film Laboratory, Hebei Normal University,
Shijiazhuang, Hebei 050024, China International Center for Quantum Design of
Functional Materials, CAS Key Laboratory of Strongly-Coupled Quantum Matter
Physics, and Department of Physics, University of Science and Technology of
China, Hefei, Anhui 230026, China Zhenhua Qiao<EMAIL_ADDRESS>International Center for Quantum Design of Functional Materials, CAS Key
Laboratory of Strongly-Coupled Quantum Matter Physics, and Department of
Physics, University of Science and Technology of China, Hefei, Anhui 230026,
China
###### Abstract
The exploration of magnetism in two-dimensional layered materials has
attracted extensive research interest. For the monoclinic phase CrI3 with
interlayer antiferromagnetism, finding a static and robust way of realizing
the intrinsic interlayer ferromagnetic coupling is desirable. In this Letter,
we study the electronic structure and magnetic properties of the nonmagnetic
element (e.g., O, S, Se, N, P, As, and C) doped bi- and trilayer CrI3
systems via first-principles calculations. Our results demonstrate that O, P,
S, As, and Se doped CrI3 bilayer can realize interlayer ferromagnetism.
Further analysis shows that the interlayer ferromagnetic coupling in the doped
few-layer CrI3 is closely related to the formation of localized spin-polarized
state. This finding indicates that insulating interlayer ferromagnetism can be
realized at high doping concentrations (larger than 8.33$\%$). When the doping
concentration is less than 8.33$\%$ but larger than 2.08$\%$, an insulator-
metal phase transition can occur as the localized spin-polarized states
percolate to form contiguous grids in few-layer CrI3.
Introduction—. Two-dimensional(2D) magnetic semiconductors have attracted
extensive attention due to the enormous potential for novel magneto-optic
Huang1 ; Gong ; Deng ; Gibertini ; Zhong ; Seyler , magnetoelectronic Zollner
; Song1 ; Klein ; Cardoso ; Wang1 ; Ghazaryan ; Wang2 ; Wang3 , and spintronic
devices Cummings ; Kim1 ; Karpiak . As a representative 2D layered material,
CrI3 possesses its own unique physical properties. The bulk CrI3 has two
different structures, i.e., high-temperature monoclinic phase and low-
temperature rhombohedral phase. The bilayer CrI3 with rhombohedral stacking
exhibits interlayer ferromagnetic coupling, while that with monoclinic
stacking exhibits interlayer antiferromagnetic coupling Sivadas ; Jiang1 ;
Jang ; Soriano1 ; Thiel ; Ubrig ; Kim2 , and their phase transition
temperature is 220 K McGuire . Thus, the magnetic order of CrI3 is susceptible
to the variation of layer thickness and stacking order. Previous first-
principles calculations have predicted that interlayer magnetic coupling can
be effectively modulated by stacking order in bilayer Sivadas ; Jiang1 ; Jang
. Later experiments Thiel ; Ubrig confirmed that different stacking orders can
affect the observed magnetic states in both bulk and few-layer CrI3
systems.
In van der Waals layered systems, the relatively weak interlayer coupling
indicates that the interlayer magnetic order can be easily tuned via external
means. Indeed, it was experimentally reported that monoclinic bilayer CrI3 can
be transformed from interlayer antiferromagnetic to ferromagnetic coupling by
applying electric gating Jiang2 ; Jiang3 ; Huang2 ; Xu . A possible physical
mechanism describing this magnetic transition is the formation of magnetic
polarons, which has been theoretically confirmed Soriano2 . Beyond such external
electric gating, a natural question arises: is it possible to find a
static and robust way of realizing intrinsic interlayer ferromagnetic
coupling in few-layer CrI3? In addition, for semiconducting materials, the
carriers introduced by doping may destroy the physical properties. It is
therefore desirable to realize interlayer ferromagnetically coupled few-layer
CrI3 while maintaining its semiconducting characteristics, i.e., without
introducing additional carriers.
In this Letter, we perform a systematic study on the magnetic and electronic
properties of nonmagnetic-element doped few-layer CrI3 by using first-
principles calculation methods. We first show that the interlayer
ferromagnetic coupling can be established in bilayer CrI3 doped with C, N, O,
P, S, As, or Se. We then find that the interlayer ferromagnetic coupling is
intimately related to the formation of magnetic polarons. In particular, As-doped
bi- and tri-layer CrI3 achieve higher Curie temperatures without introducing
extra carriers as the doping concentration increases within a certain range,
thereby maintaining the system’s semiconducting properties. In addition, an insulator-metal phase transition
occurs with the help of percolated spin-polarized states at low doping
concentration.
Figure 1: (a) Side and top views of crystal structures of high-temperature
monoclinic bilayer CrI3 phase and the substitution sites are labeled as I1 and
I2. Formation energies of (b) O, S, Se, N and (c) P, As or C element-doped
bilayer CrI3 as a function of the host element chemical potentials.
Calculation Methods—. Our first-principles calculations were performed by using
the projected augmented-wave method Blochl as implemented in the Vienna ab
initio simulation package (VASP) Kresse1 ; Kresse2 . The generalized gradient
approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) type was used to treat the
exchange-correlation interaction Perdew . In our calculations, the lattice
constant of the high-temperature phase of CrI3 was chosen to be $a_{0}$=6.92 Å
Jiang1 . A vacuum buffer space of 15 Å was used to prevent the coupling between
adjacent slabs. The kinetic energy cutoff was set to be 340 eV. With fixed
supercells, all structures were fully relaxed. The van der Waals (vdW) force
was taken into account by employing the Grimme’s method (DFT-D2) Grimme . The
Brillouin-zone integration was carried out by using $5\times 5\times 1$
Monkhorst-Pack grids. Unless mentioned otherwise, GGA+U Anisimov ; Dudarev
method was used with the on-site repulsion parameter $U=3.9~{}eV$ and the
exchange parameter $J=1.1~{}eV$ Jiang1 , where $U$ is for the more localized
$3d$ orbitals of Cr atoms. The Curie temperature $T_{C}$ was estimated
within the mean-field approximation by using $k_{B}T_{C}=\frac{2}{3}Jx$
Bergqvist , where $k_{B}$ is the Boltzmann constant, $x$ is the dopant
concentration, and $J$ is the exchange parameter obtained from the total
energy difference between the ferromagnetic and antiferromagnetic configurations.
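As a minimal numerical sketch of the mean-field estimate above, the snippet below evaluates $k_{B}T_{C}=\frac{2}{3}Jx$; the J and x values passed in at the end are illustrative placeholders, not values reported in this work.

```python
# Mean-field Curie temperature estimate: k_B * T_C = (2/3) * J * x,
# with J the exchange parameter (eV) obtained from the FM/AFM
# total-energy difference and x the dopant concentration.
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def curie_temperature(J_eV, x):
    """Return the mean-field T_C in kelvin."""
    return (2.0 / 3.0) * J_eV * x / K_B_EV

# Placeholder values for illustration only (not from this work):
print(curie_temperature(0.05, 0.0833))
```

In this approximation T_C grows linearly with both the exchange parameter and the dopant concentration.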
Figure 2: (a) Energy difference between interlayer ferromagnetic (FM) and
antiferromagnetic (AFM) states. (b) Difference of interlayer distance between
doping configuration and pristine CrI3. (c) Charge difference between Cr atoms
near doping site in the doped and pristine bilayer CrI3. (d) Schematic
illustration of spin-polarized state-mediated interlayer ferromagnetic
coupling in doped bilayer CrI3.
Experimental Possibility of Element Doping—. We first study the possibility of
element doping in bilayer CrI3, i.e., substituting I by nonmagnetic dopants.
Some typical candidates of nonmagnetic dopants including O, S, Se, N, P, As
and C are considered. As displayed in Fig. 1(a), there are two types of
I-doping sites labelled as I1 (at the surface) and I2 (inside the interlayer).
The formation energy was obtained by using the expression Zhang ; Qi ; Han
$\Delta H_{F}=E_{tot}^{D}-E_{tot}-\sum_{i} n_{i}\mu_{i}$, where
$E_{tot}^{D}$ is the total energy of the system including one nonmagnetic
impurity, $E_{tot}$ is the total energy of the pristine system, $\mu_{i}$ is the
chemical potential of species $i$ (host atoms or dopants), and $n_{i}$ is the
number of atoms of species $i$ added to or removed from the system.
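The formation-energy bookkeeping can be sketched as follows; all energies and chemical potentials below are illustrative placeholders, not values computed in this work.

```python
# Defect formation energy: dH_F = E_tot^D - E_tot - sum_i n_i * mu_i,
# where n_i > 0 counts atoms added and n_i < 0 counts atoms removed.
def formation_energy(E_doped, E_pristine, species):
    """species: iterable of (n_i, mu_i) pairs, all energies in eV."""
    return E_doped - E_pristine - sum(n * mu for n, mu in species)

# Placeholder example: one dopant atom added (n = +1),
# one host I atom removed (n = -1).
dH = formation_energy(E_doped=-250.3, E_pristine=-248.1,
                      species=[(+1, -4.6), (-1, -1.5)])
print(dH)
```

A smaller (more negative) formation energy indicates an energetically easier substitution, which is the criterion used to compare the I1 and I2 sites.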
As displayed in Fig. 1(b), for O, S, Se, N substitutions at two I sites in the
same CrI3 layer, the formation energy is within the range of $-0.4\sim
1.5~{}eV$. This indicates that the I1 substitutional site is preferred, owing to
its smaller formation energy compared with the I2 site. For example, N
substitution leads to smaller formation energy (about $-0.6\sim 0.2eV$) than
those from O (about $-0.2\sim 0.2eV$), S (about $0.4\sim 0.9eV$) and Se (about
$1.1\sim 1.4eV$) substitution in the whole range of the accessible host
element chemical potentials. In contrast, P, As, and C substitutions [see Fig.
1(c)] have larger formation energies than the O, S, Se, and N
dopants. In addition, we find that the I1 substitutional site is preferred by
P and C, while the I2 site is preferred by As. The formation energies
show that all candidate elements (except As) are more stable at the I1 position.
The formation energy of As substituted I2 site is positive (about $2.3\sim
2.7eV$). It is noteworthy that C-doped ZnO has been experimentally fabricated
even though its estimated formation energy is about 5.3 eV Pan , much larger
than that of any element-doped CrI3 considered here. Therefore, it is reasonable to believe that O, S,
Se, N, P, and As doped bilayer CrI3 could be experimentally fabricated.
Magnetic Properties—. We now move to investigate the interlayer magnetic
coupling of the doped CrI3. Figure 2(a) displays the energy difference
$\Delta E=E_{FM}-E_{AFM}$ between the interlayer ferromagnetic and
antiferromagnetic states for different element-doped bilayer CrI3. As
reported, the pristine bilayer CrI3 exhibits interlayer antiferromagnetic
coupling Sivadas ; Jiang1 ; Jang . The introduction of dopants except C and N
leads to $\Delta E<0$, indicating the formation of interlayer ferromagnetism.
For the O, P, S, As, and Se substitution at I1(I2) site [see Fig. 2(a)],
$\Delta E$ are respectively -7.7(-2.2), -3.6(-6.3), -6.7(-4.0), -49.7(-113.9),
and -5.4(-12.3) meV, indicating interlayer ferromagnetic coupling. In
contrast, the C- and N-doped cases retain interlayer antiferromagnetism.
Below, we choose the I2-site As-doped bilayer CrI3 as an example to
analyze the origin of the interlayer ferromagnetism.
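The sign convention $\Delta E = E_{FM} - E_{AFM}$ used above maps directly onto the coupling type; as a small sketch, the dictionary below holds the I1-site values quoted in this paragraph.

```python
# Delta E = E_FM - E_AFM (meV): negative favors interlayer FM coupling,
# positive favors interlayer AFM coupling.
def coupling(delta_E_meV):
    return "FM" if delta_E_meV < 0 else "AFM"

# I1-site energy differences quoted in the text (meV):
delta_E_I1 = {"O": -7.7, "P": -3.6, "S": -6.7, "As": -49.7, "Se": -5.4}
for dopant, dE in delta_E_I1.items():
    print(dopant, coupling(dE))
```

Among these I1-site values, As shows the most negative $\Delta E$, i.e., the strongest ferromagnetic preference of the five dopants.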
Figure 3: Differential charge density of (a) pristine and (b) As-doped bilayer
CrI3. Spin density of (c) pristine and (d) As-doped bilayer CrI3. Local
density of states of (e) pristine and (f) As-doped bilayer CrI3. Yellow and
blue isosurfaces represent respectively charge accumulation and reduction. Red
and green isosurfaces represent respectively spin up and spin down. Cr-d, I-p
and As-p orbitals in each layer of CrI3 are displayed.
For vdW magnetic materials, many studies have shown that the interlayer
distance plays a crucial role in determining the interlayer magnetic coupling
Li1 ; Song2 ; Xia ; Zhu1 . Thus, we first investigate the relationship between
the interlayer distance and the energy difference $\Delta E$. Figure 2(b)
displays the difference of interlayer distances between doped and pristine
CrI3. One can find that the interlayer distances in nearly all doped systems
[except the P-doped configuration at the I1 site] decrease with respect to
the pristine case. In particular, the interlayer distances in the C, N, and O doped
systems at the I1 site shrink by about 0.14, 0.17, and 0.18
Å, respectively, much larger shrinkages than in the As-doped system. These results together show
that there is no obvious correlation between the strength of ferromagnetic
coupling and the interlayer distance.
It was reported that the interlayer magnetism of vdW materials is closely
related to different 3d electron occupation between different layers Xiao ;
Li2 ; Zhu2 . In Fig. 2(c), we display the charge difference of Cr atoms near
the dopants between the doped and pristine bilayer CrI3. It shows that the
charge of the Cr atoms near the dopants increases for all doped systems. However, in
the As-doped bilayer, the change in 3d electron occupation due to doping is much
smaller than in the C and N doped bilayers. Therefore, the difference in 3d
electron occupation between the two layers cannot explain the formation of the
interlayer ferromagnetism.
Figure 4: (a-c-e-g) Schematic plots of two spin-polarized states at different
distances (dP-P) in As-doped bilayer CrI3. (b-d-f-h) The density of states of
ferromagnetic state and the energy difference between interlayer ferromagnetic
(FM) and antiferromagnetic (AFM) states at different dP-P in As-doped bilayer.
The shadow part indicates the formation of spin-polarized state.
We now move to calculate the differential charge densities of pristine and As-
doped systems [see Figs. 3(a) and 3(b)] Jiang1 ; Liu . One can see that As-
doping indeed leads to obvious change of charge distribution inside the
interlayer space. Figures 3(c) and 3(d) display the spin densities of pristine
and doped cases, respectively. The pristine case exhibits intralayer
ferromagnetism and interlayer antiferromagnetism. After As doping, the
interlayer antiferromagnetism transitions to ferromagnetism, accompanied by a
strongly localized spin-polarized state near the doping site. To confirm this
finding, the local density of states of pristine and As doped bilayers are
displayed in Figs. 3(e) and 3(f). In Fig. 3(f), one can see that an extremely
sharp peak in the local density of states appears near the Fermi level, arising from the
hybridization among the As p-orbital, the bonded Cr d-orbital in the same layer, and the I
p-orbital in the other layer. Altogether, these results indicate that As doping produces
a bound spin-polarized state, which potentially mediates the interlayer
ferromagnetism.
To further confirm this physical origin, in Fig. S1, we display the density of
states of the C, N, O, P, S, and Se doped bilayers, where C and N doped
systems exhibit interlayer antiferromagnetism, but O, P, S, and Se doped
systems exhibit interlayer ferromagnetism SM . Figures S1(a) and S1(b) show
that no spin-polarized bound states appear near the Fermi level for the C and N
doped systems, whereas for the other doped systems, spin-polarized bound states
arise near the Fermi level [see Figs. S1(c)-S1(f)]. These results provide direct
evidence that the interlayer ferromagnetic coupling in doped bilayer CrI3 is a
consequence of the formation of a spin-polarized bound state.
Another striking transport feature is that the systems remain insulating after doping.
It is known that doping or gating can induce ferromagnetism in
semiconductors, but may also destroy the semiconducting property through
carrier injection. Surprisingly, for the O, P, S, As, and Se doped bilayer CrI3,
our results show that all exhibit both ferromagnetic and insulating behavior.
Together with the experimental finding that different gate doping levels
mainly affect the magnetic properties of bilayer CrI3 without leading to
n- or p-type conduction Huang2 , this suggests that insulating
interlayer ferromagnetism in few-layer CrI3 can be observed owing to the
formation of spin-polarized bound states at certain doping concentrations.
When localized spin-polarized states percolate to form contiguous grids, an
insulator-metal transition can occur at certain doping concentrations. As shown
in Figs. 4(a-c-e-g), we study the electronic structure and magnetic property
of two-As-atom doped bilayer CrI3 at different As-As distances (i.e., various
doping concentrations). For the nearest As-As distance dP-P = 6.92 Å, in
comparison with the density of states of the bilayer CrI3 doped with a single As atom (see
Fig. 3(f)), the spin-polarized state in Fig. 4(b) is more delocalized and
only one spin-polarized state is formed [a larger spin-polarized state
is schematically plotted in Fig. 4(a)]. In this configuration (estimated
doping concentration of about 8.33$\%$), the system is a ferromagnetic insulator, and
the interlayer ferromagnetic coupling ($\Delta E$ = -177.0 meV) becomes
stronger than that ($\Delta E$ = -113.9 meV) with only one As dopant. When the
As doping concentration decreases, from Figs. 4(c) and 4(e) we find that two
spin-polarized states can form and percolate. Such percolated
systems are p-type semiconductors with interlayer ferromagnetism (see
Figs. 4(d) and 4(f)), but the ferromagnetic coupling becomes weaker than in
the insulating case. For longer As-As distances, two independent spin-
polarized states are formed and the interlayer magnetic coupling almost
disappears. For example, when the As-As distance dP-P is 13.17 Å, the extent of one spin-
polarized state is about 6.6 Å. If the doping concentration is further
decreased to 2.08$\%$, our results indicate that an insulating and
ferromagnetic bilayer CrI3 forms again owing to the independent spin-
polarized states, similar to the case in Fig. 3(f).
So far, we have shown that element doping in bilayer CrI3 can induce
interlayer ferromagnetism. Trilayer CrI3 has weak interlayer ferromagnetic
coupling Huang1 ; can element doping enhance this interlayer
ferromagnetism? Taking As doping as an example [Fig. S2(a)], three I
substitution sites, I1, I2, and I3, are considered. The interlayer
magnetic couplings and their relative stabilities are respectively displayed
in Fig. S2(b) and Table S1. One can see that the state with interlayer
ferromagnetic coupling is more stable than the other three states with
interlayer antiferromagnetic coupling, which agrees well with the experimental
observation Huang1 . One can find that the total energy of As substitution at
I3 site is the lowest, indicating the most stable structure. Therefore, we
show that the strong interlayer ferromagnetic coupling can also be established
in trilayer CrI3 via As doping.
Conclusions—. In summary, we demonstrate that the interlayer ferromagnetic
coupling can be realized in both bilayer and trilayer CrI3 by doping
nonmagnetic elements. Our findings provide new evidence that the interlayer
ferromagnetic coupling in CrI3 thin films may be related to the formation of
spin-polarized states, and also provide an alternative scheme for the
realization of a CrI3-based ferromagnetic insulator.
Acknowledgements—. This work was financially supported by the NNSFC (11974098,
11974327, and 12004369), Natural Science Foundation of Hebei
Province (A2019205037), and Science Foundation of Hebei Normal University
(2019B08), Fundamental Research Funds for the Central Universities
(WK2030020032 and WK2340000082), Anhui Initiative in Quantum Information
Technologies. The supercomputing services of AM-HPC and USTC are gratefully
acknowledged.
‡ These authors contributed equally to this work.
## References
* (1) B. Huang, G. Clark, E. Navarro-Moratalla, D. R. Klein, R. Cheng, K. L. Seyler, D. Zhong, E. Schmidgall, M. A. McGuire, D. H. Cobden, W. Yao, D. Xiao, P. Jarillo-Herrero, and X. Xu, Nature (London) 546, 270 (2017).
* (2) C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, and X. Zhang, Nature (London) 546, 265 (2017).
* (3) Y. Deng, Y. Yu, Y. Song, J. Zhang, N. Z. Wang, Z. Sun, Y. Yi,Y. Z. Wu, S. Wu, J. Zhu, J. Wang, X. H. Chen, and Y. Zhang,Nature (London) 563, 94 (2018).
* (4) M. Gibertini, M. Koperski, A. F. Morpurgo, and K. S. Novoselov,Nat. Nanotechnol. 14, 408 (2019).
* (5) D. Zhong, K. L. Seyler, X. Linpeng, R. Cheng, N. Sivadas, B. Huang, E. Schmidgall, T. Taniguchi, K. Watanabe, M. A. McGuire, W. Yao, D. Xiao, K.-M. C. Fu, and X. Xu,Sci. Adv. 3, e1603113 (2017).
* (6) K. L. Seyler, D. Zhong, B. Huang, X. Linpeng, N. P. Wilson, T. Taniguchi, K. Watanabe, W. Yao, D. Xiao, M. A. McGuire, K.-M. C. Fu, and X. Xu, Nano Lett. 18, 3823 (2018).
* (7) K. Zollner, M. Gmitra, and J. Fabian,New J. Phys. 20, 073007(2018).
* (8) T. Song, X. Cai, M. W.-Y. Tu, X. Zhang, B. Huang, N. P. Wilson, K. L. Seyler, L. Zhu, T. Taniguchi, K. Watanabe, M. A. McGuire, D. H. Cobden, D. Xiao, W. Yao, and X. Xu, Science 360, 1214 (2018).
* (9) D. R. Klein, D. MacNeill, J. L. Lado, D. Soriano, E. Navarro-Moratalla, K. Watanabe, T. Taniguchi, S. Manni, P. Canfield, J. Fern$\acute{a}$ndez-Rossier, and P. Jarillo-Herrero, Science 360, 1218(2018).
* (10) C. Cardoso, D. Soriano, N. A. Garc$\acute{i}$a-Mart$\acute{i}$nez, and J. Fern$\acute{a}$ndez-Rossier, Phys. Rev. Lett. 121, 067701 (2018).
* (11) Z. Wang, I. Guti$\acute{e}$rrez-Lezama, N. Ubrig, M. Kroner, M. Gibertini, T. Taniguchi, K. Watanabe, A. Imamo$\breve{g}$lu, E. Giannini, and A. F. Morpurgo, Nat. Commun. 9, 2516(2018)
* (12) D. Ghazaryan, M. T. Greenaway, Z. Wang, V. H. GuarochicoMoreira, I. J. Vera-Marun, J. Yin, Y. Liao, S. V. Morozov, O. Kristanovski, A. I. Lichtenstein, M. I. Katsnelson, F. Withers, A. Mishchenko, L. Eaves, A. K. Geim, K. S. Novoselov, and A. Misra, Nat. Electron. 1, 344 (2018).
* (13) Z. Wang, T. Zhang, M. Ding, B. Dong, Y. Li, M. Chen, X. Li, J. Huang, H. Wang, X. Zhao, Y. Li, D. Li, C. Jia, L. Sun, H. Guo, Y. Ye, D. Sun, Y. Chen, T. Yang, J. Zhang, S. Ono, Z. Han and Z. Zhang, Nat. Nanotechnol. 13, 544 (2018).
* (14) Z. Wang, M. Gibertini, D. Dumcenco, T. Taniguchi, K. Watanabe, E. Giannini, and A. F. Morpurgo, Nat. Nanotechnol. 14, 1116 (2019).
* (15) A. W. Cummings, J. Phys. Mater. 2, 045007 (2019).
* (16) M. Kim, P. Kumaravadivel, J. Birkbeck, W. Kuang, S. G. Xu, D. G. Hopkinson, J. Knolle, P. A. McClarty, A. I. Berdyugin, M. Ben Shalom, R. V. Gorbachev, S. J. Haigh, S. Liu, J. H. Edgar, K. S. Novoselov, I. V. Grigorieva, and A. K. Geim, Nat. Electron. 2, 457 (2019).
* (17) B. Karpiak, A. W. Cummings, K. Zollner, M. Vila, D. Khokhriakov, A. M. Hoque, A. Dankert, P. Svedlindh, J. Fabian, S. Roche, and S. P. Dash, 2D Mater. 7, 015026 (2019).
* (18) N. Sivadas, S. Okamoto, X. Xu, C. J. Fennie, and D. Xiao,Nano Lett. 18, 7658 (2018).
* (19) P. Jiang, C. Wang, D. Chen, Z. Zhong, Z. Yuan, Z.-Y. Lu, and W. Ji, Phys. Rev. B 99, 144401 (2019).
* (20) S. W. Jang, M. Y. Jeong, H. Yoon, S. Ryee, and M. J. Han, Phys. Rev. Mater. 3, 031001 (2019).
* (21) D. Soriano, C. Cardoso, and J. Fern$\acute{a}$ndez-Rossier, Solid State Commun. 299, 113662 (2019).
* (22) L. Thiel, Z. Wang, M. A. Tschudin, D. Rohner, I. Guti$\acute{e}$rrez-Lezama, N. Ubrig, M. Gibertini, E. Giannini, A. F. Morpurgo, and P. Maletinsky, Science 364, 973 (2019).
* (23) N. Ubrig, Z. Wang, J. Teyssier, T. Taniguchi, K. Watanabe, E. Giannini, A. F. Morpurgo, and M. Gibertini, 2D Mater. 7, 015007 (2019).
* (24) H. H. Kim, B. Yang, T. Patel, F. Sfigakis, C. Li, S. Tian, H. L. Lei, and A. W. Tsen,Nano Lett. 18, 4885 (2018)
* (25) M. A. McGuire, H. Dixit, V. R. Cooper, and B. C. Sales,Chem.Mater. 27, 612 (2015).
* (26) S. Jiang, L. Li, Z. Wang, K. F. Mak, and J. Shan, Nat. Nanotechnol. 13, 549 (2018).
* (27) S. Jiang, J. Shan, and K. F. Mak, Nat. Mater. 17, 406 (2018).
* (28) B. Huang, G. Clark, D. R. Klein, D. MacNeill, E. Navarro-Moratalla, K. L. Seyler, N. Wilson, M. A. McGuire, D. H. Cobden, D. Xiao, W. Yao, P. Jarillo-Herrero, and X. Xu, Nat. Nanotechnol. 13, 544 (2018).
* (29) R. Xu and X. Zou, J. Phys. Chem. Lett. 11, 3152(2020)
* (30) D. Soriano and M. I. Katsnelson,Phys. Rev. B 101, 041402(R) (2020)
* (31) P. E. Bl$\ddot{o}$chl, Phys. Rev. B 50, 17953 (1994).
* (32) G. Kresse and J. Furthmuller, Phys. Rev. B 54, 11169 (1996).
* (33) G. Kresse and D. Joubert,Phys. Rev. B 59, 1758 (1999).
* (34) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
* (35) S. Grimme, J. Comput. Chem. 27, 1787 (2006).
* (36) V. I. Anisimov, J. Zaanen, and O. K. Anderson, Phys. Rev. B 44, 943 (1991).
* (37) S. L. Dudarev, G. A. Botton, S. Y. Savrasov, C. J. Humphreys, and A. P. Sutton, Phys. Rev. B 57, 1505 (1998).
* (38) L. Bergqvist, O. Eriksson, J. Kudrnovsky, V. Drchal, A. Bergman, L. Nordstrom, and I. Turek, Phys. Rev. B 72, 195210 (2005).
* (39) J. M. Zhang, W. G. Zhu, Y. Zhang, D. Xiao, and Y. G. Yao, Phys. Rev. Lett. 109, 266405 (2012).
* (40) S. Qi, R. Gao, M. Chang, T. Hou, Y. Han, and Z. Qiao,Phys. Rev. B 102, 085419 (2020).
* (41) Y. Han, S. Sun, S. Qi, X. Xu,and Z. Qiao, Phys. Rev. B 103, 245403 (2021)
* (42) H. Pan, J. B. Yi, L. Shen, R. Q. Wu, J. H. Yang, J. Y. Lin, Y. P. Feng, J. Ding, L. H. Van, and J. H. Yin, Phys. Rev. Lett. 99, 127201 (2007).
* (43) T. Li, S. Jiang, N. Sivadas, Z. Wang, Y. Xu, D. Weber, J. E. Goldberger, K. Watanabe, T. Taniguchi, C. J. Fennie, K. F. Mak and J. Shan, Nat. Mater. 18, 1303(2019)
* (44) T. Song, Z. Fei, M. Yankowitz, Z. Lin, Q. Jiang, K. Hwangbo, Q. Zhang, B. Sun, T. Taniguchi, K. Watanabe, M. A. McGuire, D. Graf, T. Cao, J-H Chu, D. H. Cobden, C. R. Dean, D. Xiao and X. Xu, Nat. Mater. 18, 1298(2019)
* (45) J. Xia, J. Yan, Z. Wang, Y. He, Y. Gong, W. Chen, T. C. Sum, Z. Liu, P. M. Ajayan and Z. Shen, Nat. Phys.17,92 (2021).
* (46) W. Zhu, C. Song, Y. Zhou, Q. Wang, H. Bai, and F. Pan, Phys. Rev. B 103, 224404 (2021).
* (47) J. W. Xiao and B. H. Yan, 2D Mater. 7, 045010 (2020).
* (48) Z. Li, J. Li, K. He, X. Wan, W. Duan, and Y. Xu, Phys. Rev. B 102, 081107(R) (2020).
* (49) W. Zhu, C. Song, L. Liao, Z. Zhou, H. Bai, Y. Zhou, and F. Pan, Phys. Rev. B 102, 085111 (2020).
* (50) N. Liu, S. Zhou and J. Zhao, Phys. Rev. Mater. 4, 094003 (2020).
* (51) See Supplemental Material at [URL will be inserted by publisher] for [the density of states of the C, N, O, P, S, and Se doped bilayer CrI3, four magnetic states related interlayer magnetic coupling and their relative stability.].
|
# End-to-End Neural Discourse Deixis Resolution in Dialogue
Shengjie Li Vincent Ng
Human Language Technology Research Institute
University of Texas at Dallas
Richardson, TX 75083-0688
<EMAIL_ADDRESS>
###### Abstract
We adapt Lee et al.’s Lee et al. (2018) span-based entity coreference model to
the task of end-to-end discourse deixis resolution in dialogue, specifically
by proposing extensions to their model that exploit task-specific
characteristics. The resulting model, dd-utt, achieves state-of-the-art
results on the four datasets in the CODI-CRAC 2021 shared task.
## 1 Introduction
Discourse deixis (DD) resolution, also known as abstract anaphora resolution,
is an under-investigated task that involves resolving a deictic anaphor to its
antecedent. A deixis is a reference to a discourse entity such as a
proposition, a description, an event, or a speech act Webber (1991). DD
resolution is arguably more challenging than the extensively-investigated
entity coreference resolution task. Recall that in entity coreference, the
goal is to cluster the entity mentions in narrative text or dialogue, which
are composed of pronouns, names, and nominals, so that the mentions in each
cluster refer to the same real-world entity. Lexical overlap is a strong
indicator of entity coreference, both among names (e.g., “President Biden”,
“Joe Biden”) and in the resolution of nominals (e.g., linking “the president”
to “President Biden”). DD resolution, on the other hand, can be viewed as a
generalized case of event coreference involving the clustering of deictic
anaphors, which can be pronouns or nominals, and clauses, such that the
mentions in each cluster refer to the same real-world proposition/event/speech
act, etc. The first example in Figure 1 is an example of DD resolution in
which the deictic anaphor “the move” refers to Salomon’s act of issuing
warrants on shares described in the preceding sentence. DD resolution is
potentially more challenging than entity coreference resolution because (1) DD
resolution involves understanding clause semantics since antecedents are
clauses, and clause semantics are arguably harder to encode than noun phrase
semantics; and (2) string matching plays little role in DD resolution, unlike
in entity coreference.
Salomon Brothers International Ltd. announced it will issue warrants on shares
of Hong Kong Telecommunications Ltd. The move closely follows a similar offer
by Salomon of warrants for shares of Hongkong & Shanghai Banking Corp.
A: Would you donate to Save the Children?
B: Yes, I will do $10 to both.
B: I am of a tight budget, but I do make room for good causes.
A: Thank you very much.
A: The children will appreciate it.
Figure 1: Examples of discourse deixis resolution. In each example, the
deictic anaphor is italicized and the antecedent is boldfaced.
In this paper, we focus on end-to-end DD resolution in dialogue. The second
example in Figure 1 shows a dialogue between A and B in which the deictic
anaphor “it” refers to the utterance by B in which s/he said s/he would donate
$10. While the deictic anaphors in dialogue are also composed of pronouns and
nominals, the proportion of pronominal deictic anaphors in dialogue is much
higher than that in narrative text. For instance, while 76% of the deictic
anaphors in two text corpora (ARRAU RST and GNOME) are pronominal, the
corresponding percentage rises to 93% when estimated based on seven dialogue
corpora (TRAINS91, TRAINS93, PEAR, and the four CODI-CRAC 2021 development
sets). In fact, the three pronouns “that”, “this”, and “it” alone comprise 89%
of the deictic anaphors in these dialogue corpora. The higher proportion of
pronominal deictic anaphors potentially makes DD resolution in dialogue more
challenging than that in text: since a pronoun is semantically empty, the
successful resolution of a pronominal deictic anaphor depends entirely on
proper understanding of its context. It also makes DD recognition
more challenging in dialogue. For instance, while the head of a non-pronominal
phrase can often be exploited to determine whether it is a deictic anaphor
(e.g., “the man” cannot be a deictic anaphor, but “the move” can), such cues
are absent in pronouns.
Since DD resolution can be cast as a generalized case of event coreference, a
natural question is: how successful would a state-of-the-art entity
coreference model be when applied to DD resolution? Recently, Kobayashi et al.
(2021) applied Xu and Choi's (2020) re-implementation of Lee et al.'s (2018)
span-based entity coreference model to resolve the deictic anaphors
in the DD track of the CODI-CRAC 2021 shared task after augmenting it with a
type prediction model (see Section 4). Not only did they achieve the highest
score on each dataset, but they beat the second-best system Anikina et al.
(2021), which is a non-span-based neural approach combined with hand-crafted
rules, by a large margin. These results suggest that a span-based approach to
DD resolution holds promise.
Our contributions in this paper are three-fold. First, we investigate whether
task-specific observations can be exploited to extend a span-based model
originally developed for entity coreference to improve its performance for
end-to-end DD resolution in dialogue. Second, we show that our extensions are
effective in improving model performance, allowing our model to achieve
state-of-the-art results on the CODI-CRAC 2021 shared task datasets. Finally,
we present an
empirical analysis of our model, which, to our knowledge, is the first
analysis of a state-of-the-art span-based DD resolver.
## 2 Related Work
Broadly, existing approaches to DD resolution can be divided into three
categories, as described below.
#### Rule-based approaches.
Early systems that resolve deictic expressions are rule-based (Eckert and
Strube, 2000; Byron, 2002; Navarretta, 2000). Specifically, they use
predefined rules to extract anaphoric mentions and select an antecedent for
each extracted anaphor based on the dialogue act type of each candidate
antecedent.
#### Non-neural learning-based approaches.
Early non-neural learning-based approaches to DD resolution use hand-crafted
feature vectors to represent mentions (Strube and Müller, 2003; Müller, 2008).
A classifier is then trained to determine whether a pair of mentions is a
valid antecedent-anaphor pair.
#### Deep learning-based approaches.
Deep learning has been applied to DD resolution. For instance, Marasović et
al. (2017) and Anikina et al. (2021) use a Siamese neural network, which takes
as input the embeddings of two sentences, one containing a deictic anaphor and
the other a candidate antecedent, to score each candidate antecedent and
subsequently rank the candidate antecedents based on these scores. In
addition, motivated by the recent successes of Transformer-based approaches to
entity coreference (e.g., Kantor and Globerson (2019)), Kobayashi et al.
(2021) have recently proposed a Transformer-based approach to DD resolution,
which is an end-to-end coreference system based on SpanBERT (Joshi et al.,
2019, 2020). Their model jointly learns mention extraction and DD resolution
and has achieved state-of-the-art results in the CODI-CRAC 2021 shared task.
## 3 Corpora
We use the DD-annotated corpora provided as part of the CODI-CRAC 2021 shared
task. For training, we use the official training corpus from the shared task
(Khosla et al., 2021), ARRAU (Poesio and Artstein, 2008), which consists of
three conversational sub-corpora (TRAINS-93, TRAINS-91, PEAR) and two non-
dialogue sub-corpora (GNOME, RST). For validation and evaluation, we use the
official development sets and test sets from the shared task. The shared task
corpus is composed of four well-known conversational datasets: AMI (McCowan et
al., 2005), LIGHT (Urbanek et al., 2019), Persuasion (Wang et al., 2019), and
Switchboard (Godfrey et al., 1992). Statistics on these corpora are provided
in Table 1.
| Corpus | Split | Total #docs | Total #sents | Total #turns | Avg. #sents | Avg. #toks per sent | Avg. #turns | Avg. #ana | Avg. #ante | Avg. #speakers per doc
---|---|---|---|---|---|---|---|---|---|---
ARRAU | train | 552 | 22406 | - | 40.6 | 15.5 | - | 2.9 | 4.8 | -
LIGHT | dev | 20 | 908 | 280 | 45.4 | 12.7 | 14.0 | 3.1 | 4.2 | 2.0
LIGHT | test | 21 | 923 | 294 | 44.0 | 12.8 | 14.0 | 3.8 | 4.6 | 2.0
AMI | dev | 7 | 4139 | 2828 | 591.3 | 8.2 | 404.0 | 32.9 | 42.0 | 4.0
AMI | test | 3 | 1967 | 1463 | 655.7 | 9.3 | 487.7 | 39.3 | 47.3 | 4.0
Pers. | dev | 21 | 812 | 431 | 38.7 | 11.3 | 20.5 | 4.5 | 4.5 | 2.0
Pers. | test | 28 | 1139 | 569 | 40.7 | 11.1 | 20.3 | 4.4 | 4.8 | 2.0
Swbd. | dev | 11 | 1342 | 715 | 122.0 | 11.2 | 65.0 | 11.5 | 15.9 | 2.0
Swbd. | test | 22 | 3652 | 1996 | 166.0 | 9.6 | 90.7 | 12.0 | 14.7 | 2.0
Table 1: Statistics on the datasets.
## 4 Baseline Systems
We employ three baseline systems.
The first baseline, coref-hoi, is Xu and Choi's (2020) re-implementation of
Lee et al.'s (2018) widely used end-to-end entity coreference model. The model
ranks all text spans of up to a predefined length
based on how likely they correspond to entity mentions. For each top-ranked
span $x$, the model learns a distribution $P(y)$ over its antecedents
$y\in\mathcal{Y}(x)$, where $\mathcal{Y}(x)$ includes a dummy antecedent
$\epsilon$ and every preceding span:
$\displaystyle
P(y)=\frac{e^{s(x,y)}}{\sum_{y^{\prime}\in\mathcal{Y}(x)}e^{s(x,y^{\prime})}}$
where $s(x,y)$ is a pairwise score that incorporates two types of scores: (1)
$s_{m}(\cdot)$, which indicates how likely a span is a mention, and (2)
$s_{c}(\cdot)$ and $s_{a}(\cdot)$, which indicate how likely two spans refer
to the same entity (see Lee et al. (2018) for a description of the differences
between $s_{c}(\cdot)$ and $s_{a}(\cdot)$; $s_{a}(x,\epsilon)=0$ for dummy
antecedents):
$\displaystyle s(x,y)=s_{m}(x)+s_{m}(y)+s_{c}(x,y)+s_{a}(x,y)\quad(1)$

$\displaystyle s_{m}(x)=\texttt{FFNN}_{m}(g_{x})\quad(2)$

$\displaystyle s_{c}(x,y)=g_{x}^{\top}W_{c}g_{y}\quad(3)$

$\displaystyle s_{a}(x,y)=\texttt{FFNN}_{c}(g_{x},g_{y},g_{x}\circ g_{y},\phi(x,y))\quad(4)$
where $g_{x}$ and $g_{y}$ are the vector representations of $x$ and $y$,
$W_{c}$ is a learned weight matrix for bilinear scoring, FFNN($\cdot$) is a
feedforward neural network, and $\phi(\cdot)$ encodes features. Two features
are used, one encoding speaker information and the other the segment distance
between two spans.
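To make the scoring concrete, here is a minimal, illustrative sketch of Equations (1)-(4). The learned components (FFNN_m, FFNN_c, the weight matrix $W_{c}$, and the feature vector $\phi$) are replaced by hand-written stand-ins, and all function names are our own, not from the coref-hoi codebase:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def s_m(g):
    # Eq. (2): mention score; stand-in for the learned FFNN_m
    return sum(g)

def s_c(gx, gy):
    # Eq. (3): bilinear score, with W_c taken as the identity
    return dot(gx, gy)

def s_a(gx, gy):
    # Eq. (4): stand-in for the concatenation-based FFNN_c (features omitted)
    return dot(gx, [a * b for a, b in zip(gx, gy)])

def s(gx, gy):
    # Eq. (1): full pairwise score
    return s_m(gx) + s_m(gy) + s_c(gx, gy) + s_a(gx, gy)

def antecedent_distribution(gx, candidates):
    """P(y): softmax over a dummy antecedent (score 0) and all candidates."""
    scores = [0.0] + [s(gx, gy) for gy in candidates]  # dummy first
    z = sum(math.exp(v) for v in scores)
    return [math.exp(v) / z for v in scores]
```

The softmax over candidate scores plus a fixed-score dummy is the key mechanism: a span is effectively left unresolved when the dummy receives the most probability mass.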
The second baseline, UTD_NLP (for an analysis of this and other resolvers
competing in the CODI-CRAC 2021 shared task, see Li et al. (2021)), is the
top-performing system in the DD track of the shared task (Kobayashi et al.,
2021). It extends coref-hoi with a set of modifications. Two
of the most important modifications are: (1) the addition of a sentence
distance feature to $\phi(\cdot)$, and (2) the incorporation into coref-hoi of a
type prediction model, which predicts the type of a span. The possible types
of a span $i$ are: Antecedent (if $i$ corresponds to an antecedent), Anaphor
(if $i$ corresponds to an anaphor), and Null (if it is neither an antecedent
nor an anaphor). The types predicted by the model are then used by coref-hoi
as follows: only spans predicted as Anaphor can be resolved, and they can only
be resolved to spans predicted as Antecedent. Details of how the type
prediction model is trained can be found in Kobayashi et al. (2021).
The third baseline, coref-hoi-utt, is essentially the first baseline except
that we restrict the candidate antecedents to be utterances. This restriction
is motivated by the observation that the antecedents of the deictic anaphors
in the CODI-CRAC 2021 datasets are all utterances. To see what an utterance
is, consider again the second example in Figure 1. Each line in this dialogue
is an utterance. As can be seen, an utterance roughly corresponds to a
sentence, although it can also be a text fragment or simply an interjection
(e.g., “uhhh”). While by definition the antecedent of a deictic anaphor can be
any clause, the human annotators of the DD track of the CODI-CRAC 2021 shared
task decided to restrict the unit of annotation to utterances because (1)
based on previous experience it was difficult to achieve high inter-annotator
agreement when clauses are used as the annotation unit (Poesio and Artstein,
2008); and (2) unlike the sentences in narrative text, which can be composed
of multiple clauses and therefore can be long, the utterances in these
datasets are relatively short and can reliably be used as annotation units.
From a modeling perspective, restricting candidate antecedents also has
advantages. First, it substantially reduces the number of candidate
antecedents and therefore memory usage, allowing our full model to fit into
memory. Second, it allows us to focus on resolution rather than recognition of
deictic anaphors, as the recognition of clausal antecedents remains a
challenging task, especially since existing datasets for DD resolution are
relatively small compared to those available for entity coreference (e.g.,
OntoNotes; Hovy et al., 2006).
## 5 Model
Next, we describe our resolver, dd-utt, which augments coref-hoi-utt with 10
extensions.
#### E1. Modeling recency.
Unlike in entity coreference, where two coreferent names (e.g., “Joe Biden”,
“President Biden”) can be far apart from each other in the corresponding
document (because names are non-anaphoric), the distance between a deictic
anaphor and its antecedent is typically much smaller. To model recency, we
restrict the set of candidate antecedents of an anaphor to be the utterance
containing the anaphor as well as the preceding 10 utterances, the choice of
which is based on our observation of the development data, where the 10
closest utterances already cover 96–99% of the antecedent-anaphor pairs.
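The recency restriction is a simple windowing operation. A minimal sketch (function name ours, not from the implementation):

```python
def candidate_antecedents(utterances, anaphor_utt_idx, window=10):
    """E1 sketch: the candidate antecedents of an anaphor are the utterance
    containing it plus the `window` preceding utterances (10 in the paper)."""
    start = max(0, anaphor_utt_idx - window)
    return utterances[start:anaphor_utt_idx + 1]
```

With the default window this yields at most 11 candidates per anaphor, which is what makes the later full model fit into memory.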
#### E2. Modeling distance.
While the previous extension allows us to restrict our attention to candidate
antecedents that are close to the anaphor, it does not model the fact that the
likelihood of being the correct antecedent tends to increase as its distance
from the anaphor decreases. To model this relationship, we subtract the term
$\gamma_{1}Dist(x,y)$ from $s(x,y)$ (see Equation (1)), where $Dist(x,y)$ is
the utterance distance between anaphor $x$ and candidate antecedent $y$ and
$\gamma_{1}$ is a tunable parameter that controls the importance of utterance
distance in the resolution process. Since $s(x,y)$ is used to rank candidate
antecedents, modeling utterance distance by updating $s(x,y)$ will allow
distance to have a direct impact on DD resolution.
#### E3. Modeling candidate antecedent length.
Some utterances are pragmatic in nature and do not convey any important
information. Therefore, they cannot serve as antecedents of deictic anaphors.
Examples include “Umm”, “Ahhhh… okay”, “that’s right”, and “I agree”. Ideally,
the model can identify such utterances and prevent them from being selected as
antecedents. We hypothesize that we could help the model by modeling such
utterances. To do so, we observe that such utterances tend to be short and
model them by penalizing shorter utterances. Specifically, we subtract the
term $\gamma_{2}\frac{1}{Length(y)}$ from $s(x,y)$, where $Length(y)$ is the
number of words in candidate antecedent $y$ and $\gamma_{2}$ is a tunable
parameter that controls the importance of candidate antecedent length in
resolution.
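Extensions E2 and E3 amount to two subtractive terms on the pairwise score. A minimal sketch (the function name is ours; the defaults mirror the $\gamma_{1}=\gamma_{2}=1$ settings reported later in Table 4):

```python
def adjusted_score(base_score, utt_distance, ante_length,
                   gamma1=1.0, gamma2=1.0):
    """E2/E3 sketch: subtract a distance penalty (gamma1 * Dist(x, y)) and a
    short-antecedent penalty (gamma2 / Length(y)) from the pairwise score."""
    return base_score - gamma1 * utt_distance - gamma2 * (1.0 / ante_length)
```

Because candidate antecedents are ranked by this score, nearby and longer (more contentful) utterances are favored without any hard filtering.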
#### E4. Extracting candidate anaphors.
As mentioned before, the deictic anaphors in dialogue are largely composed of
pronouns. Specifically, in our development sets, the three pronouns “that”,
“this”, and “it” alone account for 74–88% of the anaphors. Consequently, we
extract candidate deictic anaphors as follows: instead of allowing each span
of length $n$ or less to be a candidate anaphor, we only allow a span to be a
candidate anaphor if its underlying word/phrase has appeared at least once in
the training set as a deictic anaphor.
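The extraction step in E4 is essentially a lexicon filter over spans. A minimal sketch, assuming spans are available as plain strings (names ours):

```python
def build_anaphor_lexicon(training_anaphors):
    """E4 sketch: the surface forms observed as deictic anaphors in training."""
    return {a.lower() for a in training_anaphors}

def extract_candidate_anaphors(spans, lexicon):
    """Keep only spans whose text appeared at least once in training
    as a deictic anaphor."""
    return [sp for sp in spans if sp.lower() in lexicon]
```

This replaces exhaustive span enumeration on the anaphor side, which is viable precisely because the anaphor vocabulary in dialogue is so concentrated.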
#### E5. Predicting anaphors.
Now that we have the candidate anaphors, our next extension involves
predicting which of them are indeed deictic anaphors. To do so, we retrain the
type prediction model in UTD_NLP, which is a FFNN that takes as input the
(contextualized) span representation $g_{i}$ of candidate anaphor $i$ and
outputs a vector $ot_{i}$ of dimension 2 in which the first element denotes
the likelihood that $i$ is a deictic anaphor and the second element denotes
the likelihood that $i$ is not a deictic anaphor. $i$ is predicted as a
deictic anaphor if and only if the first element of $ot_{i}$ is larger than
the second:
$\displaystyle ot_{i}=\texttt{FFNN}(g_{i})$

$\displaystyle t_{i}=\operatorname*{arg\,max}_{x\in\{\text{A},\text{NA}\}}ot_{i}(x)$
where A (Anaphor) and NA (Non-Anaphor) are the two possible types. Following
UTD_NLP, this type prediction model is jointly trained with the resolution
model. Specifically, we compute the cross-entropy loss using $ot_{i}$,
multiply it by a type loss coefficient $\lambda$, and add it to the loss
function of coref-hoi-utt. $\lambda$ is a tunable parameter that controls the
importance of type prediction relative to DD resolution.
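The type decision and the joint objective can be sketched as follows. This is illustrative only: the real model operates on contextualized span representations, whereas here the 2-way logits are given directly (function names ours):

```python
import math

def type_decision(ot):
    """E5 sketch: ot = [score(Anaphor), score(Non-Anaphor)]; predict a
    deictic anaphor iff the first element is larger."""
    return "A" if ot[0] > ot[1] else "NA"

def joint_loss(resolution_loss, type_logits, gold_is_anaphor, lam):
    """Joint objective: resolution loss plus lambda times the 2-way
    cross-entropy loss of the type prediction model."""
    z = math.log(math.exp(type_logits[0]) + math.exp(type_logits[1]))
    log_p = type_logits[0 if gold_is_anaphor else 1] - z  # log-softmax
    return resolution_loss + lam * (-log_p)
```

Note that with the large $\lambda$ values selected on development data (800 in Table 4), the type prediction term dominates the combined loss.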
#### E6. Modeling the relationship between anaphor recognition and resolution.
In principle, the model should resolve a candidate anaphor to a non-dummy
candidate antecedent if it is predicted to be a deictic anaphor by the type
prediction model. However, type prediction is not perfect, and enforcing this
consistency constraint, which we will refer to as C1, will allow errors in
type prediction to be propagated to DD resolution. For example, if a non-
deictic anaphor is misclassified by the type prediction model, then it will be
(incorrectly) resolved to a non-dummy antecedent. To alleviate error
propagation, we instead enforce C1 in a soft manner. To do so, we define a
penalty function $p_{1}$, which imposes a penalty on span $i$ if C1 is
violated (i.e., a deictic anaphor is resolved to the dummy antecedent), as
shown below:
$p_{1}(i)=\begin{cases}0&\text{if }t_{i}=\text{NA}\\ ot_{i}(\text{A})-ot_{i}(\text{NA})&\text{otherwise}\end{cases}$
Intuitively, $p_{1}$ estimates the minimum amount to be adjusted so that span
$i$’s type is not Anaphor.
We incorporate $p_{1}$ into the model as a penalty term in $s$ (Equation (1)).
Specifically, we redefine $s(i,j)$ when $j=\epsilon$, as shown below:
$s(i,\epsilon)=s(i,\epsilon)-[\gamma_{3}p_{1}(i)]$
where $\gamma_{3}$ is a positive constant that controls the hardness of C1.
The smaller $\gamma_{3}$ is, the softer C1 is. Intuitively, if C1 is violated,
$s(i,\epsilon)$ will be lowered by the penalty term, and the dummy antecedent
will less likely be selected as the antecedent of $i$.
#### E7. Modeling the relationship between non-anaphor recognition and
resolution.
Another consistency constraint that should be enforced is that the model
should resolve a candidate anaphor to the dummy antecedent if it is predicted
as a non-deictic anaphor by the type prediction model. As in Extension E6, we
will enforce this constraint, which we will refer to as C2, in a soft manner
by defining a penalty function $p_{2}$, as shown below:
$p_{2}(i)=\begin{cases}ot_{i}(\text{NA})-ot_{i}(\text{A})&\text{if }t_{i}=\text{NA}\\ 0&\text{otherwise}\end{cases}$
Then we redefine $s(i,j)$ when $j\neq\epsilon$ as follows:
$s(i,j)=s(i,j)-[\gamma_{4}p_{2}(i)]$
where $\gamma_{4}$ is a positive constant that controls the hardness of C2.
Intuitively, if C2 is violated, $s(i,j)$ will be lowered by the penalty term,
and $j$ will less likely be selected as the antecedent of $i$.
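The two soft constraints of E6 and E7 can be sketched together. Here `ot` is the pair [score(Anaphor), score(Non-Anaphor)] and `t` the predicted type; function names are ours, and the $\gamma_{3}=\gamma_{4}=5$ defaults mirror typical values from Table 4:

```python
def p1(ot, t):
    """Penalty for resolving a predicted Anaphor to the dummy antecedent (C1)."""
    return 0.0 if t == "NA" else ot[0] - ot[1]

def p2(ot, t):
    """Penalty for resolving a predicted Non-Anaphor to a real antecedent (C2)."""
    return ot[1] - ot[0] if t == "NA" else 0.0

def penalized_score(s, is_dummy, ot, t, gamma3=5.0, gamma4=5.0):
    """Subtract gamma3 * p1 on the dummy antecedent, gamma4 * p2 otherwise."""
    if is_dummy:
        return s - gamma3 * p1(ot, t)
    return s - gamma4 * p2(ot, t)
```

Both penalties scale with the margin of the type decision, so confident type predictions steer resolution strongly while borderline ones barely affect it.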
#### E8. Encoding candidate anaphor context.
Examining Equation (1), we see that $s(x,y)$ is computed based on the span
representations of $x$ and $y$. While these span representations are
contextualized, the contextual information they encode is arguably limited. As
noted before, most of the deictic anaphors in dialogue are pronouns, which are
semantically empty. As a result, we hypothesize that we could improve the
resolution of these deictic anaphors if we explicitly modeled their contexts.
Specifically, we represent the context of a candidate anaphor using the
embedding of the utterance in which it appears and add the resulting embedding
as features to both the bilinear score $s_{c}(x,y)$ and the concatenation-
based score $s_{a}(x,y)$:
$\displaystyle s_{c}(x,y)=g_{x}^{\top}W_{c}g_{y}+g_{s}^{\top}W_{s}g_{y}$

$\displaystyle s_{a}(x,y)=\texttt{FFNN}_{c}(g_{x},g_{y},g_{x}\circ g_{y},g_{s},\phi(x,y))$
where $W_{c}$ and $W_{s}$ are learned weight matrices, $g_{s}$ is the
embedding of the utterance $s$ in which candidate anaphor $x$ appears, and
$\phi(x,y)$ encodes the relationship between $x$ and $y$ as features.
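The context-augmented bilinear term can be sketched with toy vectors, taking both $W_{c}$ and $W_{s}$ as the identity for illustration (names ours, not from the implementation):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def s_c_with_context(gx, gy, gs):
    """E8 sketch: bilinear score augmented with an utterance-context term,
    where gs is the embedding of the utterance containing the anaphor."""
    return dot(gx, gy) + dot(gs, gy)
```

The added term lets a candidate antecedent be scored against the anaphor's whole utterance, compensating for the near-empty semantics of pronominal anaphors.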
Filling words
---
yeah, okay, ok, uh, right, so, hmm, well, um, oh, mm, yep, hi, ah, whoops,
alright, shhhh, yes, ay, hello, aww, alas, ye, aye, uh-huh, huh, wow, www, no,
and, but, again, wonderful, exactly, absolutely, actually, sure thanks,
awesome, gosh, ooops
Reporting verbs
command, mention, demand, request, reveal, believe, guarantee, guess, insist,
complain, doubt, estimate, warn, learn, realise, persuade, propose, announce,
advise, imagine, boast, suggest, remember, claim, describe, see, understand,
discover, answer, wonder, recommend, beg, prefer, suppose, comment, think,
argue, consider, swear, ask, agree, explain, report, know, tell, decide,
discuss, repeat, invite, reply, expect, forget, add, fear, hope, say, feel,
observe, remark, confirm, threaten, teach, forbid, admit, promise, deny,
state, mean, instruct
Table 2: Lists of filtered words.
#### E9. Encoding the relationship between candidate anaphors and antecedents.
As noted in Extension E8, $\phi(x,y)$ encodes the relationship between
candidate anaphor $x$ and candidate antecedent $y$. In UTD_NLP, $\phi(x,y)$ is
composed of three features, including two features from coref-hoi-utt (i.e.,
the speaker id and the segment distance between $x$ and $y$) and one feature
that encodes the utterance distance between them. Similar to the previous
extension, we hypothesize that we could better encode the relationship between
$x$ and $y$ using additional features. Specifically, we incorporate an
additional feature into $\phi(x,y)$ that encodes the utterance distance
between $x$ and $y$. Unlike the one used in UTD_NLP, this feature aims to more
accurately capture proximity by ignoring unimportant sentences (i.e., those
that contain only interjections, filling words, reporting verbs, and
punctuation) when computing utterance distance. The complete list of filling
words and reporting verbs that we filter can be found in Table 2.
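The filtered distance of E9 can be sketched as follows. `FILLERS` is an abridged subset of Table 2, and the heuristic for "unimportant" is simplified to utterances consisting only of fillers (the paper also filters interjections, reporting verbs, and punctuation); all names are ours:

```python
FILLERS = {"yeah", "okay", "ok", "uh", "right", "hmm", "um", "oh", "yep"}

def is_unimportant(utterance):
    """True if every token (after stripping punctuation) is a filler."""
    toks = [t.strip(".,!?").lower() for t in utterance.split()]
    toks = [t for t in toks if t]
    return bool(toks) and all(t in FILLERS for t in toks)

def filtered_distance(utterances, ante_idx, ana_idx):
    """E9 sketch: utterance distance that skips unimportant utterances."""
    between = utterances[ante_idx + 1:ana_idx + 1]
    return sum(1 for u in between if not is_unimportant(u))
```

Under this measure, an antecedent separated from its anaphor only by backchannels ("yeah", "okay") still counts as adjacent.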
#### E10. Encoding candidate antecedents.
In coref-hoi-utt, a candidate antecedent is simply encoded using its span
representation. We hypothesize that we could better encode a candidate
antecedent using additional features. Specifically, we employ seven features
to encode a candidate antecedent $y$ and incorporate them into $\phi(x,y)$:
(1) the number of words in $y$; (2) the number of nouns in $y$; (3) the number
of verbs in $y$; (4) the number of adjectives in $y$; (5) the number of
content word overlaps between $y$ and the portion of the utterance containing
the anaphor that precedes the anaphor; (6) whether $y$ is the longest among
the candidate antecedents; and (7) whether $y$ has the largest number of
content word overlaps (as computed in Feature #5) among the candidate
antecedents. As in Extension E3, some features implicitly encode the length of
a candidate antecedent. Despite this redundancy, we believe the redundant
information could be exploited by the model differently and may therefore have
varying degrees of impact on it.
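A few of the seven E10 features can be sketched as below. POS-based counts (Features #2-#4) are omitted since they require a tagger, and content words are approximated with a small illustrative stoplist; names and the stoplist are ours:

```python
def antecedent_features(candidate, anaphor_prefix):
    """E10 sketch: length (Feature #1) and content-word overlap with the
    pre-anaphor portion of the anaphor's utterance (Feature #5)."""
    stop = {"the", "a", "an", "of", "to", "will", "on", "and", "it"}
    cand = [w.lower() for w in candidate.split()]
    pre = {w.lower() for w in anaphor_prefix.split()}
    overlap = sum(1 for w in cand if w not in stop and w in pre)
    return {"length": len(cand), "content_overlap": overlap}
```

The comparative features (#6 and #7) would then be computed by ranking these values across all candidate antecedents of the anaphor.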
## 6 Evaluation
### 6.1 Experimental Setup
| Model | LIGHT | AMI | Pers. | Swbd. | Avg. | LIGHT | AMI | Pers. | Swbd. | Avg.
---|---|---|---|---|---|---|---|---|---|---
UTD_NLP | 42.7 | 35.4 | 39.6 | 35.4 | 38.3 | 70.1 | 61.0 | 69.9 | 68.1 | 67.3
coref-hoi | 42.7 | 30.7 | 49.7 | 35.4 | 39.6 | 70.9 | 49.3 | 67.8 | 61.9 | 62.5
coref-hoi-utt | 42.3 | 35.0 | 53.3 | 34.1 | 41.2 | 70.3 | 52.4 | 71.0 | 60.6 | 63.6
dd-utt | 48.2 | 43.5 | 54.9 | 47.2 | 48.5 | 71.3 | 56.9 | 71.4 | 65.2 | 66.2
Table 3: Resolution (left five columns, CoNLL score) and recognition (right
five columns, F-score) results on the four test sets.
#### Evaluation metrics.
We obtain the results of DD resolution using the Universal Anaphora Scorer (Yu
et al., 2022b). Since DD resolution is viewed as a generalized case of event
coreference, the scorer reports performance in terms of CoNLL score, which is
the unweighted average of the F-scores of three coreference scoring metrics,
namely MUC (Vilain et al., 1995), B3 (Bagga and Baldwin, 1998), and CEAFe
(Luo, 2005). In addition, we report the results of deictic anaphor
recognition. We express recognition results in terms of Precision (P), Recall
(R) and F-score, considering an anaphor correctly recognized if it has an
exact match with a gold anaphor in terms of boundary.
| LIGHT | AMI | Pers. | Swbd.
---|---|---|---|---
Type loss coefficient $\lambda$ | 800 | 800 | 800 | 800
$\gamma_{1}$ | 1 | 1 | 1 | 1
$\gamma_{2}$ | 1 | 1 | 1 | 1
$\gamma_{3}$ | 5 | 10 | 10 | 5
$\gamma_{4}$ | 5 | 5 | 5 | 5
Table 4: Parameter values enabling dd-utt to achieve the best CoNLL score on
each development set.
#### Model training and parameter tuning.
For coref-hoi and coref-hoi-utt, we use SpanBERT${}_{\text{Large}}$ as the
encoder and reuse the hyperparameters from Xu and Choi (2020) with the only
exception of the maximum span width: for coref-hoi, we increase the maximum
span width from 30 to 45 in order to cover more than 97% of the antecedent
spans; for coref-hoi-utt we use 15 as the maximum span width, which covers
more than 99% of the anaphor spans in the training sets. For UTD_NLP, we
simply take the outputs produced by the model on the test sets and report the
results obtained by running the scorer on the outputs.333Since the shared task
participants were allowed to submit their system outputs multiple times to the
server to obtain results on the test sets, UTD_NLP’s results could be viewed
as results obtained by tuning parameters on the test sets. For dd-utt, we use
SpanBERT${}_{\text{Large}}$ as the encoder. Since we do not rely on span
enumerate to generate candidate spans, the maximum span width can be set to
any arbitrary number that is large enough to cover all candidate antecedents
and anaphors. In our case, we use 300 as our maximum span width. We tune the
parameters (i.e., $\lambda$, $\gamma_{1}$, $\gamma_{2}$, $\gamma_{3}$,
$\gamma_{4}$) using grid search to maximize CoNLL score on development data.
For the type loss coefficient, we search out of {0.2, 0.5, 1, 200, 500, 800,
1200, 1600}, and for $\gamma$, we search out of {1, 5, 10}. We reuse other
hyperparameters from Xu and Choi (2020).
All models are trained for 30 epochs with a dropout rate of 0.3 and early
stopping. We use $1\times 10^{-5}$ as our BERT learning rate and $3\times
10^{-4}$ as our task learning rate. Each experiment is run using a random seed
of 11 and takes less than three hours to train on an NVIDIA RTX A6000 48GB.
#### Train-dev partition.
Since we have four test sets, we use ARRAU and all dev sets other than the one
to be evaluated on for model training and the remaining dev set for parameter
tuning. For example, when evaluating on AMI${}_{\text{test}}$, we train models
on ARRAU, LIGHT${}_{\text{dev}}$, Persuasion${}_{\text{dev}}$ and
Switchboard${}_{\text{dev}}$ and use AMI${}_{\text{dev}}$ for tuning.
### 6.2 Results
Recall that our goal is to perform end-to-end DD resolution, which corresponds
to the Predicted evaluation setting in the CODI-CRAC shared task.
#### Overall performance.
Recognition results (expressed in F-score) and resolution results (expressed
in CoNLL score) of the three baselines and our model on the four test sets are
shown in Table 3, where the Avg. columns report the macro-averages of the
corresponding results on the four test sets, and the parameter settings that
enable our model to achieve the highest CoNLL scores on the development sets
are shown in Table 4. Since coref-hoi and coref-hoi-utt do not explicitly
identify deictic anaphors, we assume that all but the first mentions in each
output cluster are anaphors when computing recognition precision; and while
UTD_NLP (the top-performing system in the shared task) does recognize
anaphors, we still make the same assumption when computing its recognition
precision since the anaphors are not explicitly marked in the output (recall
that we computed results of UTD_NLP based on its outputs).
| Anaphor | Count | UTD_NLP | coref-hoi | coref-hoi-utt | dd-utt | UTD_NLP | coref-hoi | coref-hoi-utt | dd-utt
---|---|---|---|---|---|---|---|---|---
that | 402 | 48.7 | 49.1 | 50.4 | 60.7 | 79.1 | 72.7 | 72.9 | 76.8
it | 95 | 21.8 | 27.5 | 28.5 | 37.0 | 36.5 | 36.4 | 37.0 | 35.5
this | 25 | 20.7 | 25.1 | 26.9 | 25.8 | 51.7 | 49.2 | 49.3 | 56.5
which | 10 | 4.5 | 8.2 | 14.9 | 14.4 | 33.3 | 28.6 | 42.1 | 35.3
Others | 52 | 33.0 | 39.6 | 41.2 | 48.5 | 33.8 | 28.1 | 40.0 | 10.3
Table 5: Per-anaphor recognition and resolution results on the test sets
(left four model columns: resolution, CoNLL score; right four: recognition,
F-score).
We test the statistical significance among the four models using two-tailed
Approximate Randomization (Noreen, 1989). For recognition, the models are
statistically indistinguishable from each other w.r.t. their Avg. score (at
the 0.05 level). For resolution, dd-utt is highly significantly better than the
baselines w.r.t. Avg. ($p<0.001$), while the three baselines are statistically
indistinguishable from each other. These results suggest that (1) dd-utt’s
superior resolution performance stems from better antecedent selection, not
better anaphor recognition; and (2) the restriction of candidate antecedents
to utterances in coref-hoi-utt does not enable the resolver to yield
significantly better resolution results than coref-hoi.
#### Per-anaphor results.
Next, we show the recognition and resolution results of the four models on the
most frequently occurring deictic anaphors in Table 5 after micro-averaging
them over the four test sets. Not surprisingly, “that” is the most frequent
deictic anaphor, appearing 402 times on the test sets and accounting for
68.8% of the anaphors. This is followed by “it”
(16.3%) and “this” (4.3%). Only 8.9% of the anaphors are not among the top
four anaphors.
Consider first the recognition results. As can be seen, “that” has the highest
recognition F-score among the top anaphors. This is perhaps not surprising
given the comparatively larger number of “that” examples the models are
trained on. While “it” occurs more frequently than “this” as a deictic
anaphor, its recognition performance is lower than that of “this”. This is not
surprising either: “this”, when used as a pronoun, is more likely to be
deictic than “it”, although both of them can serve as a coreference anaphor
and a bridging anaphor. In other words, it is comparatively more difficult to
determine whether a particular occurrence of “it” is deictic. Overall, UTD_NLP
recognizes more anaphors than the other models.
Next, consider the resolution results. To obtain the CoNLL scores for a given
anaphor, we retain all and only those clusters containing the anaphor in both
the gold partition and the system partition and apply the official scorer to
them. Generally, the more frequently occurring an anaphor is, the better its
resolution performance is. Interestingly, for the “Others” category, dd-utt
achieves the highest resolution results despite having the lowest recognition
performance. In contrast, while UTD_NLP achieves the best recognition
performance on average, its resolution results are among the worst.
#### Per-distance results.
To better understand how resolution results vary with the utterance distance
between a deictic anaphor and its antecedent, we show in Table 6 the number of
correct and incorrect links predicted by the four models for each utterance
distance on the test sets. For comparison purposes, we show at the top of the
table the distribution of gold links over utterance distances. Note that a
distance of 0 implies that the anaphor refers to the utterance in which it
appears.
A few points deserve mention. First, the distribution of gold links is
consistent with our intuition: a deictic anaphor most likely has the
immediately preceding utterance (i.e., distance = 1) as its referent. In
addition, the number of links drops as distance increases, and more than 90%
of the antecedents are at most four utterances away from their anaphors.
Second, although UTD_NLP recognizes more anaphors than the other models, it is
the most conservative w.r.t. link identification, predicting the smallest
number of correct and incorrect links for almost all of the utterance
distances. Third, dd-utt is better than the other models at (1) identifying
short-distance anaphoric dependencies, particularly when distance $\leq 1$,
and (2) positing fewer erroneous long-distance anaphoric dependencies. These
results provide suggestive evidence of dd-utt’s success at modeling recency
and distance explicitly. Finally, these results suggest that resolution
difficulty increases with distance: except for UTD_NLP, none of the models can
successfully recognize a link when distance $>5$.
| Distance | 0 | 1 | 2 | 3 | 4 | 5 | >5
---|---|---|---|---|---|---|---
Distribution of gold links over utterance distances
Gold | 90 | 209 | 97 | 46 | 21 | 8 | 19
Distribution of correctly predicted links
UTD_NLP | 28 | 64 | 23 | 9 | 4 | 2 | 1
coref-hoi | 22 | 108 | 42 | 13 | 5 | 3 | 0
coref-hoi-utt | 23 | 118 | 49 | 11 | 6 | 2 | 0
dd-utt | 40 | 142 | 45 | 17 | 5 | 3 | 0
Distribution of incorrectly predicted links
UTD_NLP | 16 | 56 | 31 | 24 | 9 | 6 | 52
coref-hoi | 83 | 148 | 55 | 33 | 22 | 13 | 58
coref-hoi-utt | 64 | 123 | 66 | 36 | 21 | 10 | 48
dd-utt | 43 | 131 | 57 | 24 | 6 | 7 | 7
Table 6: Distribution of links over the utterance distances between the
anaphor and the antecedents.
#### Ablation results.
To evaluate the contribution of each extension presented in Section 5 to dd-
utt’s resolution performance, we show in Table 7 ablation results, which we
obtain by removing one extension at a time from dd-utt and retraining it. For
ease of comparison, we show in the first row of the table the CoNLL scores of
dd-utt.
A few points deserve mention. First, when E1 (Modeling recency) is ablated, we
use as candidate antecedents the 10 highest-scoring candidate antecedents for
each candidate anaphor according to $s_{c}(x,y)$ (Equation (3)). Second, when
one of E2, E3, E6, and E7 is ablated, we set the corresponding $\gamma$ to
zero. Third, when E4 is ablated, candidate anaphors are extracted in the same
way as in coref-hoi and coref-hoi-utt, where the top spans learned by the
model will serve as candidate anaphors. Fourth, when E5 is ablated, E6 and E7
will also be ablated because the penalty functions $p_{1}$ and $p_{2}$ need to
be computed based on the output of the type prediction model in E5.
| LIGHT | AMI | Pers. | Swbd. | Avg.
---|---|---|---|---|---
dd-utt | 48.2 | 43.5 | 54.9 | 47.2 | 48.5
$-$ E1 | 47.1 | 36.4 | 53.6 | 46.2 | 45.8
$-$ E2 | 47.4 | 40.0 | 56.5 | 48.9 | 48.2
$-$ E3 | 45.3 | 44.6 | 53.7 | 49.0 | 48.1
$-$ E4 | 46.4 | 42.8 | 56.7 | 45.6 | 47.9
$-$ E5 | 43.6 | 43.9 | 50.2 | 47.4 | 46.3
$-$ E6 | 46.1 | 43.1 | 50.8 | 47.2 | 46.8
$-$ E7 | 48.6 | 43.6 | 56.0 | 49.4 | 49.4
$-$ E8 | 43.3 | 39.4 | 52.5 | 50.1 | 46.3
$-$ E9 | 44.6 | 43.2 | 52.1 | 47.7 | 46.9
$-$ E10 | 47.3 | 39.3 | 57.5 | 49.7 | 48.5
Table 7: Resolution results of ablated models.
We use two-tailed Approximate Randomization to determine which of these
ablated models is statistically different from dd-utt w.r.t. the Avg. score.
Results show that, except for the model in which E1 is ablated, all of the
ablated models are statistically indistinguishable from dd-utt at the
$p<0.05$ significance level.
Note that these results do not imply that nine of the extensions fail to
contribute positively to dd-utt’s resolution performance: it only means that
none of them is useful in the presence of other extensions w.r.t. Avg. We
speculate that (1) some of these extensions model overlapping phenomena (e.g.,
both E2 and E9 model utterance distance); (2) when the model is retrained, the
learner manages to adjust the network weights so as to make up for the
potential loss incurred by ablating an extension; and (3) large fluctuations
in performance can be observed on individual datasets in some of the
experiments, but they may simply disappear after averaging. Further
experiments are needed to determine the reason.
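The significance test used above works by randomly swapping the two systems' paired scores and measuring how often the shuffled score difference is at least as extreme as the observed one. A minimal paired implementation (the unit over which scores are paired — documents or datasets — is not specified in the text, so the sketch is generic):

```python
import random

def approximate_randomization(scores_a, scores_b, trials=10_000, seed=0):
    """Two-tailed paired Approximate Randomization test (Noreen, 1989).

    scores_a, scores_b: paired per-unit scores of two systems.
    Returns an estimate of the p-value for the observed mean difference.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    at_least_as_extreme = 0
    for _ in range(trials):
        total_a = total_b = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:       # swap the pair with probability 1/2
                a, b = b, a
            total_a += a
            total_b += b
        if abs(total_a - total_b) / n >= observed:
            at_least_as_extreme += 1
    # Add-one smoothing keeps the estimate strictly positive.
    return (at_least_as_extreme + 1) / (trials + 1)

# Synthetic example: two clearly different systems yield a small p-value.
print(approximate_randomization([0.8] * 50, [0.6] * 50))
```

An ablated model counts as statistically different from dd-utt when this p-value falls below 0.05.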
### 6.3 Error Analysis
Below we analyze the errors made by dd-utt.
#### DD anaphora recognition precision errors.
A common type of recognition precision errors involves misclassifying a
coreference anaphor as a deictic anaphor. Consider the first example in Figure
2, in which the pronoun “that” is a coreference anaphor with “voice
recognition” as its antecedent but is misclassified as a deictic anaphor with
the whole sentence as its antecedent. This type of error occurs because
virtually all of the frequently occurring deictic anaphors, including “that”,
“it”, “this”, and “which”, appear as a coreference anaphor in some contexts
and as a deictic anaphor in other contexts, and distinguishing between the two
different uses of these anaphors could be challenging.
A: The design should minimize R_S_I and be easy to locate and we were still
slightly ambivalent as to whether to use voice recognition there, though that
did seem to be the favored strategy, but there was also, on the sideline, the
thought of maybe having a beeper function.
A: Sounds like a blessed organization.
B: Yes, it does.
A: Did you know they’ve won over 7 different awards for their charitable work?
A: As a former foster kid, it makes me happy to see this place bring such
awareness to the issues and needs of our young.
B: I am not surprised to hear that at all.
Figure 2: Examples illustrating the three major types of errors made by dd-
utt.
#### DD anaphor recognition recall errors.
Consider the second example in Figure 2, in which “it” is a deictic anaphor
that refers to the boldfaced utterance, but dd-utt fails to identify this and
many other occurrences of “it” as deictic, probably because “it” is more
likely to be a coreference anaphor than a deictic anaphor: in the dev sets,
80% of the occurrences of “it” are coreference anaphors while only 5% are
deictic anaphors.
#### DD resolution precision errors.
A major source of DD resolution precision errors can be attributed to the
model’s failure in properly understanding the context in which a deictic
anaphor appears. Consider the third example in Figure 2, in which “that” is a
deictic anaphor that refers to the boldfaced utterance. While dd-utt correctly
identifies “that” as a deictic anaphor, it erroneously posits the italicized
utterance as its antecedent. This example is interesting in that without
looking at the boldfaced utterance, the italicized utterance is a plausible
antecedent for “that” because “I am not surprised to hear that at all” can be
used as a response to almost every statement. However, when both the boldfaced
utterance and the italicized utterance are taken into consideration, it is
clear that the boldfaced utterance is the correct antecedent for “that”
because winning over seven awards for some charitable work is certainly more
surprising than seeing a place bring awareness to the needs of the young.
Correctly resolving this anaphor, however, requires modeling the emotional
implication of its context.
### 6.4 Further Analysis
Next, we analyze the deictic anaphors correctly resolved by dd-utt but
erroneously resolved by the baseline resolvers.
A: You want your rating to be a two?
A: Is that what you’re saying?
B: Yeah, I just got it the other way.
B: Uh in Yep, I just got
A: Okay.
A: So, I’ll work out the average for that again at the end.
A: It’s very slightly altered. Okay, and we’re just waiting for your rating.
B: two point five
C: Its just two point five for that one.
A: Two point five, okay.
D: Yeah.
A: Losing one decimal place, that is okay.
Figure 3: Example in which the correct antecedent is identified by dd-utt but
not by the baseline resolvers.
The example shown in Figure 3 is one such case. In this example, dd-utt
successfully extracts the anaphor “that” and resolves it to the correct
antecedent “Losing one decimal place, that is okay”. UTD_NLP fails to extract
“that” as a deictic anaphor. While coref-hoi correctly extracts the anaphor,
it incorrectly selects “You want your rating to be a two?” as the antecedent.
Even a cursory look at this example suggests that this candidate
antecedent is highly unlikely to be the correct antecedent since it is 10
utterances away from the anaphor. As for coref-hoi-utt, the resolver
successfully extracts the anaphor but incorrectly selects “Its just two point
five for that one” as the antecedent, which, like the antecedent chosen by
coref-hoi, is farther away from the anaphor than the correct antecedent is. We
speculate that coref-hoi and coref-hoi-utt fail to identify the correct
antecedent because they do not explicitly model distance and therefore may not
have an idea about how far a candidate antecedent is from the anaphor under
consideration. The additional features that dd-utt has access to, including
the features that encode sentence distance as well as those that capture
contextual information, may have helped dd-utt choose the correct antecedent,
but additional analysis is needed to determine the reason.
## 7 Conclusion
We presented an end-to-end discourse deixis resolution model that augments
Lee et al.'s (2018) span-based entity coreference model with 10
extensions. The resulting model achieved state-of-the-art results on the CODI-
CRAC 2021 datasets. We employed a variant of this model in our recent
participation in the discourse deixis track of the CODI-CRAC 2022 shared task
Yu et al. (2022a) and achieved the best results (see Li et al. (2022) for
details). To facilitate replicability, we make our source code publicly
available at https://github.com/samlee946/EMNLP22-dd-utt.
## Limitations
Below we discuss several limitations of our work.
#### Generalization to corpora with clausal antecedents.
As mentioned in the introduction, the general discourse deixis resolution task
involves resolving a deictic anaphor to a clausal antecedent. The fact that
our resolver can only resolve anaphors to utterances raises the question of
whether it can be applied to resolve deictic anaphors in texts where
antecedents can be clauses. To apply our resolver to such datasets, all we
need to do is to expand the set of candidate antecedents of an anaphor to
include those clauses that precede it. While corpora annotated with clausal
antecedents exist (e.g., TRAINS-91 and TRAINS-93), we note that the decision
made by the CODI-CRAC 2021 shared task organizers to use utterances as the
unit of annotation has to do with annotation quality, as the inter-annotator
agreement on the selection of clausal antecedents tends to be low Poesio and
Artstein (2008).
#### Discourse deixis resolution in dialogue vs. narrative text.
Whether our model will generalize well to non-dialogue datasets (e.g.,
narrative text) is unclear. Given the differences between dialogue and non-
dialogue datasets (e.g., the percentage of pronominal anaphors), we speculate
that the performance of our resolver will take a hit when applied to resolving
deictic anaphors in narrative text.
#### Size of training data.
We believe that the performance of our resolver is currently limited in part
by the small amount of data on which it was trained. The annotated corpora
available for training a discourse deictic resolver are much smaller than those
available for training an entity coreference resolver (e.g., OntoNotes Hovy et
al. (2006)).
#### Data biases.
Generally, our work should not cause any significant risks. However, a model
trained on data that underrepresents certain language varieties may handle
those varieties poorly, which can amplify existing inequalities and
contribute to misunderstandings.
## Acknowledgments
We thank the three anonymous reviewers for their insightful comments on an
earlier draft of the paper. This work was supported in part by NSF Grant
IIS-1528037. Any opinions, findings, conclusions or recommendations expressed
in this paper are those of the authors and do not necessarily reflect the
views or official policies, either expressed or implied, of the NSF.
## References
* Anikina et al. (2021) Tatiana Anikina, Cennet Oguz, Natalia Skachkova, Siyu Tao, Sharmila Upadhyaya, and Ivana Kruijff-Korbayova. 2021. Anaphora resolution in dialogue: Description of the DFKI-TalkingRobots system for the CODI-CRAC 2021 shared-task. In _Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 32–42, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Bagga and Baldwin (1998) Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In _Proceedings of the LREC Workshop on Linguistics Coreference_ , pages 563–566.
* Byron (2002) Donna K. Byron. 2002. Resolving pronominal reference to abstract entities. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 80–87, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
* Eckert and Strube (2000) Miriam Eckert and Michael Strube. 2000. Dialogue acts, synchronizing units, and anaphora resolution. _Journal of Semantics_ , 17:51–89.
* Godfrey et al. (1992) John J. Godfrey, Edward C. Holliman, and Jane McDaniel. 1992. Switchboard: telephone speech corpus for research and development. In _Proceedings of the 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-92)_ , volume 1, pages 517–520.
* Hovy et al. (2006) Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In _Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers_ , pages 57–60, New York City, USA. Association for Computational Linguistics.
* Joshi et al. (2020) Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. _Transactions of the Association for Computational Linguistics_ , 8:64–77.
* Joshi et al. (2019) Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel Weld. 2019. BERT for coreference resolution: Baselines and analysis. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5803–5808, Hong Kong, China. Association for Computational Linguistics.
* Kantor and Globerson (2019) Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 673–677, Florence, Italy. Association for Computational Linguistics.
* Khosla et al. (2021) Sopan Khosla, Juntao Yu, Ramesh Manuvinakurike, Vincent Ng, Massimo Poesio, Michael Strube, and Carolyn Rosé. 2021. The CODI-CRAC 2021 shared task on anaphora, bridging, and discourse deixis in dialogue. In _Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 1–15, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Kobayashi et al. (2021) Hideo Kobayashi, Shengjie Li, and Vincent Ng. 2021. Neural anaphora resolution in dialogue. In _Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 16–31, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics.
* Li et al. (2021) Shengjie Li, Hideo Kobayashi, and Vincent Ng. 2021. The CODI-CRAC 2021 shared task on anaphora, bridging, and discourse deixis resolution in dialogue: A cross-team analysis. In _Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 71–95, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Li et al. (2022) Shengjie Li, Hideo Kobayashi, and Vincent Ng. 2022. Neural anaphora resolution in dialogue revisited. In _Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 32–47, Gyeongju, Republic of Korea. Association for Computational Linguistics.
* Luo (2005) Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In _Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing_ , pages 25–32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
* Marasović et al. (2017) Ana Marasović, Leo Born, Juri Opitz, and Anette Frank. 2017. A mention-ranking model for abstract anaphora resolution. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 221–232, Copenhagen, Denmark. Association for Computational Linguistics.
* McCowan et al. (2005) Iain McCowan, Jean Carletta, Wessel Kraaij, Simone Ashby, Sebastien Bourban, Mike Flynn, Maël Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Melissa Kronenthal, Guillaume Lathoud, Mike Lincoln, Agnes Lisowska, Wilfried Post, Dennis Reidsma, and Pierre D. Wellner. 2005. The AMI meeting corpus. The AMI Project Consortium, www.amiproject.org.
* Müller (2008) Christoph Müller. 2008. _Fully Automatic Resolution of “it”, “this”, and “that” in Unrestricted Multi-Party Dialog_. Ph.D. thesis, Universität Tübingen, Tübingen, Germany.
* Navarretta (2000) Costanza Navarretta. 2000. Abstract anaphora resolution in Danish. In _1st SIGdial Workshop on Discourse and Dialogue_ , pages 56–65, Hong Kong, China. Association for Computational Linguistics.
* Noreen (1989) Eric W. Noreen. 1989. _Computer-Intensive Methods for Testing Hypotheses: An Introduction_. John Wiley & Sons Inc.
* Poesio and Artstein (2008) Massimo Poesio and Ron Artstein. 2008. Anaphoric annotation in the ARRAU corpus. In _Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08)_ , Marrakech, Morocco. European Language Resources Association (ELRA).
* Strube and Müller (2003) Michael Strube and Christoph Müller. 2003. A machine learning approach to pronoun resolution in spoken dialogue. In _Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics_ , pages 168–175, Sapporo, Japan. Association for Computational Linguistics.
* Urbanek et al. (2019) Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. 2019. Learning to speak and act in a fantasy text adventure game. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 673–683, Hong Kong, China. Association for Computational Linguistics.
* Vilain et al. (1995) Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In _Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in Columbia, Maryland, November 6-8, 1995_.
* Wang et al. (2019) Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Persuasion for good: Towards a personalized persuasive dialogue system for social good. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5635–5649, Florence, Italy. Association for Computational Linguistics.
* Webber (1991) Bonnie Lynn Webber. 1991. Structure and ostension in the interpretation of discourse deixis. _Language and Cognitive Processes_ , 6(2):107–135.
* Xu and Choi (2020) Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8527–8533, Online. Association for Computational Linguistics.
* Yu et al. (2022a) Juntao Yu, Sopan Khosla, Ramesh Manuvinakurike, Lori Levin, Vincent Ng, Massimo Poesio, Michael Strube, and Carolyn Rosé. 2022a. The CODI-CRAC 2022 shared task on anaphora, bridging, and discourse deixis in dialogue. In _Proceedings of the CODI-CRAC 2022 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue_ , pages 1–14, Gyeongju, Republic of Korea. Association for Computational Linguistics.
* Yu et al. (2022b) Juntao Yu, Sopan Khosla, Nafise Sadat Moosavi, Silviu Paun, Sameer Pradhan, and Massimo Poesio. 2022b. The universal anaphora scorer. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 4873–4883, Marseille, France. European Language Resources Association.
| MUC | B3 | CEAFe |
---|---|---|---|---
| P | R | F | P | R | F | P | R | F | CoNLL
LIGHT
UTD_NLP | 44.6 | 31.3 | 36.8 | 56.2 | 37.0 | 44.6 | 55.3 | 40.5 | 46.7 | 42.7
coref-hoi | 37.2 | 36.3 | 36.7 | 48.9 | 42.0 | 45.2 | 58.2 | 38.5 | 46.3 | 42.7
coref-hoi-utt | 36.5 | 38.8 | 37.6 | 46.7 | 42.3 | 44.4 | 55.3 | 38.0 | 45.0 | 42.3
dd-utt | 52.4 | 41.3 | 46.2 | 62.0 | 41.6 | 49.8 | 69.0 | 37.6 | 48.7 | 48.2
AMI
UTD_NLP | 45.5 | 21.2 | 28.9 | 52.4 | 29.5 | 37.8 | 44.9 | 35.1 | 39.4 | 35.4
coref-hoi | 21.7 | 30.5 | 25.4 | 28.7 | 36.3 | 32.1 | 39.0 | 31.0 | 34.6 | 30.7
coref-hoi-utt | 25.5 | 33.1 | 28.8 | 34.6 | 39.0 | 36.7 | 43.4 | 36.1 | 39.4 | 35.0
dd-utt | 41.2 | 39.8 | 40.5 | 48.9 | 42.8 | 45.6 | 54.4 | 37.5 | 44.4 | 43.5
Persuasion
UTD_NLP | 45.5 | 20.3 | 28.1 | 65.0 | 30.2 | 41.2 | 61.0 | 41.8 | 49.6 | 39.6
coref-hoi | 48.6 | 42.3 | 45.2 | 57.5 | 45.9 | 51.1 | 66.2 | 44.0 | 52.9 | 49.7
coref-hoi-utt | 50.0 | 49.6 | 49.8 | 56.8 | 51.7 | 54.1 | 64.4 | 49.4 | 55.9 | 53.3
dd-utt | 56.7 | 48.0 | 52.0 | 63.8 | 49.9 | 56.0 | 72.1 | 46.9 | 56.8 | 54.9
Switchboard
UTD_NLP | 35.2 | 21.3 | 26.5 | 52.3 | 30.4 | 38.5 | 50.5 | 34.9 | 41.3 | 35.4
coref-hoi | 31.5 | 30.4 | 31.0 | 40.9 | 34.0 | 37.1 | 51.4 | 30.2 | 38.0 | 35.4
coref-hoi-utt | 30.6 | 29.3 | 29.9 | 39.5 | 32.7 | 35.8 | 49.5 | 29.2 | 36.7 | 34.1
dd-utt | 46.3 | 43.4 | 44.8 | 54.9 | 44.5 | 49.2 | 63.4 | 38.3 | 47.7 | 47.2
Table 8: Resolution results on the test sets.
| | LIGHT | AMI | Persuasion | Switchboard
---|---|---|---|---|---
| | P | R | F | P | R | F | P | R | F | P | R | F
Overall | UTD_NLP | 65.2 | 46.9 | 54.6 | 60.2 | 39.1 | 47.4 | 72.3 | 41.6 | 52.8 | 64.4 | 42.2 | 51.0
coref-hoi | 62.9 | 49.5 | 55.4 | 40.5 | 42.7 | 41.5 | 68.6 | 52.0 | 59.2 | 55.3 | 41.2 | 47.2
coref-hoi-utt | 59.3 | 50.0 | 54.2 | 43.9 | 45.2 | 44.5 | 66.2 | 57.6 | 61.6 | 53.3 | 39.6 | 45.5
dd-utt | 72.6 | 46.9 | 57.0 | 57.8 | 46.6 | 51.6 | 73.9 | 54.7 | 62.8 | 66.9 | 49.6 | 57.0
Anaphor | UTD_NLP | 71.4 | 68.8 | 70.1 | 58.0 | 64.4 | 61.0 | 76.7 | 64.2 | 69.9 | 65.7 | 70.7 | 68.1
coref-hoi | 71.8 | 70.0 | 70.9 | 42.2 | 59.3 | 49.3 | 72.9 | 63.4 | 67.8 | 63.0 | 60.8 | 61.9
coref-hoi-utt | 68.2 | 72.5 | 70.3 | 46.4 | 60.2 | 52.4 | 71.3 | 70.7 | 71.0 | 61.9 | 59.3 | 60.6
dd-utt | 81.0 | 63.8 | 71.3 | 57.9 | 55.9 | 56.9 | 77.9 | 65.9 | 71.4 | 67.5 | 63.1 | 65.2
Antecedent | UTD_NLP | 50.8 | 27.7 | 35.8 | 66.0 | 20.5 | 31.3 | 59.6 | 21.2 | 31.3 | 60.8 | 21.5 | 31.7
coref-hoi | 52.7 | 34.8 | 41.9 | 38.3 | 30.4 | 33.9 | 63.9 | 42.5 | 51.0 | 46.3 | 27.2 | 34.3
coref-hoi-utt | 49.4 | 33.9 | 40.2 | 41.0 | 34.2 | 37.3 | 60.7 | 46.6 | 52.7 | 43.3 | 25.5 | 32.1
dd-utt | 63.9 | 34.8 | 45.1 | 57.7 | 39.8 | 47.1 | 69.5 | 45.2 | 54.8 | 66.2 | 40.0 | 49.8
Table 9: Mention extraction results on the test sets.
## Appendix A Detailed Experimental Results
We report the resolution results of the four resolvers (UTD_NLP, coref-hoi,
coref-hoi-utt, and dd-utt) on the CODI-CRAC 2021 shared task test sets in
terms of MUC, B3, and CEAFe scores in Table 8 and their mention extraction
results in terms of recall (R), precision (P), and F-score (F) in Table 9.
Consider first the resolution results in Table 8. As can be seen, not only
does dd-utt achieve the best CoNLL scores on all four datasets, but it does so
via achieving the best MUC, B3, and CEAFe F-scores. In terms of MUC F-score,
the performance difference between dd-utt and the second best resolver on each
dataset is substantial (2.2%–14.9% points). These results suggest that better
link identification, which is what the MUC F-score reveals, is the primary
reason for the superior performance of dd-utt. Moreover, Persuasion appears to
be the easiest of the four datasets, as this is the dataset on which three of
the four resolvers achieve their highest CoNLL scores. Note that Persuasion is
also the dataset on which the differences in CoNLL score between dd-utt and
the other resolvers are the smallest. These results seem to suggest that the
performance gap between dd-utt and the other resolvers tends to widen as the
difficulty of a dataset increases.
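The CoNLL scores compared here are, as is conventional, the unweighted average of the MUC, B3, and CEAFe F-scores, which can be checked against any row of Table 8:

```python
def conll_score(muc_f, b3_f, ceafe_f):
    """CoNLL score: unweighted mean of the MUC, B3, and CEAFe F-scores."""
    return (muc_f + b3_f + ceafe_f) / 3

# dd-utt on LIGHT (Table 8): MUC F = 46.2, B3 F = 49.8, CEAFe F = 48.7
print(round(conll_score(46.2, 49.8, 48.7), 1))  # → 48.2, matching the table
```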
Next, consider the anaphor extraction results in Table 9. In terms of F-score,
dd-utt lags behind UTD_NLP on two datasets, AMI and Switchboard. Nevertheless,
the anaphor extraction precision achieved by dd-utt is often one of the
highest in each dataset.
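The F-scores in Table 9 are the usual harmonic means of precision and recall, e.g. for dd-utt's overall LIGHT entry:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# dd-utt, Overall row, LIGHT (Table 9): P = 72.6, R = 46.9
print(round(f_score(72.6, 46.9), 1))  # → 57.0, matching the table
```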
# Extremal black holes have external light rings
Shahar Hod The Ruppin Academic Center, Emeq Hefer 40250, Israel The Hadassah
Institute, Jerusalem 91010, Israel
###### Abstract
It is proved that spherically symmetric extremal black holes possess at least
one external light ring. Our remarkably compact proof is based on the
dominant energy condition which characterizes the external matter fields in
the non-vacuum extremal black-hole spacetimes.
## I Introduction
The non-linearly coupled Einstein-matter field equations predict, under
plausible physical assumptions Bar ; Chan ; Shap ; Hodub ; Herne , that curved
spacetimes of highly compact objects are characterized by the presence of
closed null circular trajectories (external light rings). The fact that
massless particles can perform orbital motions along closed circular geodesics
plays a key role in understanding many of the fundamental physical properties
of the corresponding central compact objects Bar ; Chan ; Shap ; Herne ; Hodns
; Mash ; Goeb ; Hod1 ; Dec ; Hodhair ; Hodfast ; YP ; Hodm ; Hodub ; Lu1 ;
Hodlwd ; Pod ; Ame ; Ste .
In particular, the intriguing physical phenomenon of strong gravitational
lensing, which can be used as an important observational tool to identify the
existence of cosmological black holes, is a direct outcome of the presence of
closed null geodesics in the highly curved near-horizon regions of the
corresponding black-hole spacetimes Pod ; Ame ; Ste . In addition, it is well
established (see Mash ; Goeb ; Hod1 ; Dec and references therein) that the
relaxation rates of perturbed black-hole spacetimes are closely related to the
instability timescales that characterize the geodesic motions of massless
particles along the null circular geodesics of the corresponding curved
spacetimes.
Interestingly, it has been proved Hodfast ; YP that, as judged by far away
asymptotic observers, the innermost null circular geodesic of a black-hole
spacetime provides the fastest way to travel around the central black hole. In
addition, it has been revealed Hodhair ; Hodub ; Hodlwd ; Hod1 that, in
spherically symmetric hairy (non-vacuum) black-hole spacetimes, the effective
lengths of the external matter fields are bounded from below by the radii of
the innermost null circular geodesics that characterize the corresponding
curved spacetimes.
The fact that null circular geodesics have a significant role in determining
many of the fundamental physical properties of black-hole spacetimes naturally
raises the following important question: Do the Einstein-matter field
equations guarantee the existence of external null circular trajectories
(light rings) in all black-hole spacetimes? Intriguingly, the existence of
closed null circular geodesics in the external regions of asymptotically flat
non-extremal spherically symmetric black-hole spacetimes has been proved in
Hodub . A general (and mathematically elegant) proof for the existence of null
circular geodesics in non-extremal stationary axi-symmetric black-hole
spacetimes has recently been presented in the physically interesting work
Herne .
It is important to point out that the theorems presented in Hodub ; Herne for
the existence of external null circular geodesics in generic black-hole
spacetimes seem to fail for extremal hairy (non-vacuum) black-hole
configurations. Motivated by this fact, it has recently been proved Hoddo
that extremal black-hole spacetimes with positive tangential pressures on
their horizons [$p_{\text{T}}(r=r_{\text{H}})>0$, see Eq. (2) below] possess
external light rings. However, as emphasized in Hoddo , the existence theorem
presented in Hoddo is not valid for extremal black holes with non-positive
horizon tangential pressures. This fact leaves open the possibility of
finding extremal black-hole spacetimes with non-positive horizon tangential
pressures that do not have external light rings.
The main goal of the present compact paper is to complete our knowledge about
the (in)existence of external null circular geodesics in extremal black-hole
spacetimes. In particular, using analytical techniques, we shall explicitly
prove below that the non-linearly coupled Einstein-matter field equations
guarantee that spherically symmetric hairy (non-vacuum) extremal black-hole
spacetimes whose external matter fields respect the dominant energy condition
are always characterized by the presence of external null circular geodesics
(closed light rings).
## II Description of the system
We consider spherically symmetric extremal black-hole spacetimes which, using
the familiar Schwarzschild spacetime coordinates $\\{t,r,\theta,\phi\\}$, are
characterized by the curved line element Hodfast ; Hodm ; Noteunit
$ds^{2}=-e^{-2\delta}\mu
dt^{2}+\mu^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\ ,$ (1)
where the radially-dependent functions $\mu=\mu(r)$ and $\delta=\delta(r)$ are
determined by the matter content of the non-vacuum spacetime.
Using the line element (1) and the notations Bond1
$\rho\equiv-T^{t}_{t}\ \ \ \ ,\ \ \ \ p\equiv T^{r}_{r}\ \ \ \ ,\ \ \ \
p_{T}\equiv T^{\theta}_{\theta}=T^{\phi}_{\phi}\ $ (2)
for the radially-dependent energy density $\rho$, radial pressure $p$, and
tangential pressure $p_{T}$ of the external static matter configurations, one
finds that the Einstein-matter field equations $G^{\mu}_{\nu}=8\pi
T^{\mu}_{\nu}$ yield the radial differential equations Hodfast ; Hodm
${{d\mu}\over{dr}}=-8\pi r\rho+{{1-\mu}\over{r}}\ $ (3)
and
${{d\delta}\over{dr}}=-{{4\pi r(\rho+p)}\over{\mu}}\ ,$ (4)
which relate the metric functions to the external matter sources.
Extremal black-hole spacetimes are characterized by the horizon boundary
conditions Bekreg :
$\mu(r=r_{\text{H}})=0\ ,$ (5)
$\Big{[}{{d\mu}\over{dr}}\Big{]}_{r=r_{\text{H}}}=0\ ,$ (6)
$\Big{[}{{d^{2}\mu}\over{dr^{2}}}\Big{]}_{r=r_{\text{H}}}>0\ ,$ (7)
$\delta(r=r_{\text{H}})<\infty\ \ \ ;\ \ \
\Big{[}{{d\delta}\over{dr}}\Big{]}_{r=r_{\text{H}}}<\infty\ ,$ (8)
and
$p(r=r_{\text{H}})=-\rho(r=r_{\text{H}})=-(8\pi r^{2}_{\text{H}})^{-1}\ .$ (9)
In addition, the radially-dependent metric functions of asymptotically flat
black-hole spacetimes are characterized by the functional relations
$\mu(r\to\infty)\to 1\ $ (10)
and
$\delta(r\to\infty)\to 0\ .$ (11)
Taking cognizance of the Einstein equation (3), one finds the expression
$\mu(r)=1-{{2m(r)}\over{r}}\ $ (12)
for the dimensionless metric function $\mu(r)$, where
$m(r)={{r_{\text{H}}}\over{2}}+\int_{r_{\text{H}}}^{r}4\pi r^{2}\rho(r)dr\ $
(13)
is the gravitational mass contained within a sphere of radius $r$ [here
$m(r=r_{\text{H}})=r_{\text{H}}/2$ is the mass contained within the black-hole
horizon]. Taking cognizance of Eqs. (10), (12), and (13), one deduces the
characteristic functional relation
$r^{3}\rho(r)\to 0\ \ \ \ \text{for}\ \ \ \ r\to\infty\ $ (14)
for the external energy density in asymptotically flat black-hole spacetimes.
Our proof, to be presented below, for the existence of external null circular
geodesics in extremal black-hole spacetimes is based on the well known
dominant energy condition which, for a given density profile of the matter
fields, bounds from above the (radial and tangential) pressure components of
the corresponding matter distribution Bekreg :
$|p|,|p_{T}|\leq\rho\ .$ (15)
## III The proof for the existence of external null circular geodesics in
extremal black-hole spacetimes
In the present section we shall explicitly prove that extremal hairy (non-
vacuum) black-hole spacetimes that respect the dominant energy condition (15)
necessarily possess at least one external light ring whose radius satisfies
the inequality $r_{\gamma}>r_{\text{H}}$.
To this end, we shall analyze the radial functional behavior of the
dimensionless function
${\cal N}(r)\equiv 3\mu-1-8\pi r^{2}p\ $ (16)
in the extremal black-hole spacetime (1). It has been explicitly proved
Hodhair that, in spherically symmetric black-hole spacetimes, the radii of
null circular geodesics are determined by the mathematically compact relation
${\cal N}(r=r_{\gamma})=0\ .$ (17)
Taking cognizance of Eqs. (5), (9), and (16), one deduces the boundary
relation
${\cal N}(r=r_{\text{H}})=0\ $ (18)
on the outer horizon of the extremal black hole. In addition, the functional
relation (14), which characterizes asymptotically flat black-hole spacetimes,
together with the assumed dominant energy condition (15), imply the asymptotic
functional behavior
$r^{3}p(r)\to 0\ \ \ \ \text{for}\ \ \ \ r\to\infty\ $ (19)
for the external radial pressure. From Eqs. (10) and (19) one finds the
characteristic large-$r$ behavior
${\cal N}(r\to\infty)\to 2\ $ (20)
of the dimensionless radial function (16).
Defining the dimensionless function
${\cal F}\equiv r{{d\mu}\over{dr}}\ $ (21)
and using Eqs. (6) and (7), one finds the characteristic relation
$\Big{[}{{d{\cal
F}}\over{dr}}\Big{]}_{r=r_{\text{H}}}=r_{\text{H}}\Big{[}{{d^{2}\mu}\over{dr^{2}}}\Big{]}_{r=r_{\text{H}}}>0\
$ (22)
for extremal black holes. In addition, from the Einstein equation (3) and the
boundary condition (6) one obtains the horizon relation
$\Big{[}{{d{\cal F}}\over{dr}}\Big{]}_{r=r_{\text{H}}}=-{{d}\over{dr}}[8\pi
r^{2}\rho]_{r=r_{\text{H}}}\ $ (23)
for the extremal black-hole spacetime (1). From Eqs. (22) and (23) one deduces
that the dimensionless function $r^{2}\rho$ decreases in the vicinity of the
extremal black-hole horizon:
$\Big{[}{{d{(r^{2}\rho)}}\over{dr}}\Big{]}_{r=r_{\text{H}}}<0\ .$ (24)
Taking cognizance of the horizon boundary relation (9) and the assumed
dominant energy condition (15), one deduces from (24) that the radial
expression $r^{2}p$ is a negative increasing function in the vicinity of the
black-hole horizon:
$[r^{2}p]_{r=r_{\text{H}}}<0\ \ \ \ \text{and}\ \ \ \ \Big{[}{{d{(r^{2}p)}}\over{dr}}\Big{]}_{r=r_{\text{H}}}>0\ .$ (25)
From the analytically derived functional relation (25) and the horizon
boundary condition (6) for extremal black holes, one finds the characteristic
inequality [see Eq. (16)]
$\Big{[}{{d{\cal N}}\over{dr}}\Big{]}_{r=r_{\text{H}}}=-8\pi\Big{[}{{d{(r^{2}p)}}\over{dr}}\Big{]}_{r=r_{\text{H}}}<0\ ,$ (26)
which, together with the relation (18), implies that the dimensionless radial
function (16) is non-positive in the vicinity of the black-hole horizon. In
particular, the radial function ${\cal N}(r)$ is characterized by the near-
horizon property:
${\cal N}(r/r_{\text{H}}\to 1^{+})\to 0^{-}\ .$ (27)
Finally, taking cognizance of the analytically derived near-horizon relation
(27) and the characteristic asymptotic behavior (20) of the dimensionless
radial function (16), one deduces that spherically symmetric extremal black-
hole spacetimes whose external matter fields respect the dominant energy
condition (15) possess at least one external null circular geodesic (closed
light ring) which is characterized by the functional relation
${\cal N}(r=r_{\gamma})=0\ \ \ \ \text{with}\ \ \ \ r_{\gamma}>r_{\text{H}}\ .$ (28)
## IV Summary
Null circular geodesics play important roles in fundamental as well as
observational studies of the physics of curved black-hole spacetimes (see Bar
; Chan ; Shap ; Herne ; Hodns ; Mash ; Goeb ; Hod1 ; Dec ; Hodhair ; Hodfast ;
YP ; Hodub ; Lu1 ; Hodlwd ; Pod ; Ame ; Ste and references therein).
Interestingly, the existence of closed light rings in asymptotically flat non-
extremal black-hole spacetimes has been proved, using the Einstein-matter
field equations, in Hodub for spherically symmetric non-vacuum (hairy) black-
hole configurations. A mathematically elegant proof for the existence of
external null circular geodesics in non-extremal stationary axi-symmetric
black-hole spacetimes has been provided in the highly important work Herne .
Intriguingly, the existence theorems presented in Hodub ; Herne seem to fail
for extremal black-hole spacetimes. Motivated by this observation, we have
raised the following physically important question Hoddo : Do extremal black-
hole spacetimes always possess external light rings?
In the present paper we have presented a remarkably compact theorem Noteafc ,
which is based on the non-linearly coupled Einstein-matter field equations,
that reveals the physically important fact that spherically symmetric extremal
hairy (non-vacuum) black holes whose external matter fields respect the
dominant energy condition necessarily possess at least one external light ring
(closed null circular geodesic).
## Acknowledgments
This research is supported by the Carmel Science Foundation. I thank Yael
Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for stimulating
discussions.
## References
* (1) J. M. Bardeen, W. H. Press and S. A. Teukolsky, Astrophys. J. 178, 347 (1972).
* (2) S. Chandrasekhar, The Mathematical Theory of Black Holes, (Oxford University Press, New York, 1983).
* (3) S. L. Shapiro and S. A. Teukolsky, Black holes, white dwarfs, and neutron stars: The physics of compact objects (Wiley, New York, 1983).
* (4) S. Hod, Phys. Lett. B 727, 345 (2013) [arXiv:1701.06587].
* (5) P. V. P. Cunha, and C. A. R. Herdeiro, Phys. Rev. Lett. 124, 181101 (2020).
* (6) M. A. Podurets, Astr. Zh. 41, 1090 (1964) [English translation in Sovet Astr.-AJ 8, 868 (1965)].
* (7) W. L. Ames and K. S. Thorne, Astrophys. J. 151, 659 (1968).
* (8) I. Z. Stefanov, S. S. Yazadjiev, and G. G. Gyulchev, Phys. Rev. Lett. 104, 251103 (2010).
* (9) B. Mashhoon, Phys. Rev. D 31, 290 (1985).
* (10) C. J. Goebel, Astrophys. J. 172, L95 (1972).
* (11) S. Hod, Phys. Rev. D 80, 064004 (2009) [arXiv:0909.0314]; S. Hod, Phys. Rev. D 78, 084035 (2008) [arXiv:0811.3806]; S. Hod, Phys. Rev. D 75, 064013 (2007) [arXiv:gr-qc/0611004]; S. Hod, Class. Quant. Grav. 24, 4235 (2007) [arXiv:0705.2306]; S. Hod, Phys. Lett. B 715, 348 (2012) [arXiv:1207.5282].
* (12) Y. Décanini, A. Folacci, and B. Raffaelli, Phys. Rev. D 81, 104039 (2010); Phys. Rev. D 84, 084035 (2011).
* (13) S. Hod, Phys. Rev. D 84, 104024 (2011) [arXiv:1201.0068].
* (14) Y. Peng, Phys. Lett. B 792, 1 (2019).
* (15) S. Hod, Phys. Rev. D 84, 124030 (2011) [arXiv:1112.3286].
* (16) S. Hod, Phys. Rev. D 101, 084033 (2020) [arXiv:2012.03962].
* (17) S. Hod, Phys. Lett. B 718, 1552 (2013) [arXiv:1210.2486].
* (18) H. Lu and H. D. Lyu, Phys. Rev. D 101, 044059 (2020).
* (19) S. Hod, Phys. Lett. B 657, 255 (2007) [arXiv:0711.4541]; S. Hod, Class. Quant. Grav. 24, 6019 (2007) [arXiv:0712.1988]; S. Hod, Phys. Lett. B 661, 175 (2008) [arXiv:0803.0608].
* (20) S. Hod, The Euro. Phys. Jour. C 82, 663 (2022).
* (21) We use natural units in which $G=c=1$.
* (22) H. Bondi, Mon. Not. Roy. Astr. Soc. 259, 365 (1992).
* (23) A. E. Mayo and J. D. Bekenstein, Phys. Rev. D 54, 5059 (1996); N. E. Mavromatos, arXiv:gr-qc/9606008.
* (24) It is interesting to mention that after the completion of this work, an alternative and elegant proof for the existence of external null circular geodesics in spherically symmetric black-hole spacetimes has been presented by Y. Peng in arXiv:2211.14463 .
# Genuine multipartite entanglement measures based on multi-party
teleportation capability
Minjin Choi, Division of National Supercomputing, Korea Institute of Science and Technology Information, Daejeon 34141, Korea, and Qunova Computing, Inc., Daejeon 34051, Korea
Eunok Bae, School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea, and Department of Mathematics, Research Institute for Natural Sciences, Hanyang University, Seoul 04763, Korea <EMAIL_ADDRESS>
Soojoon Lee, School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea, and Department of Mathematics and Research Institute for Basic Sciences, Kyung Hee University, Seoul 02447, Korea <EMAIL_ADDRESS>
###### Abstract
Quantifying entanglement is vital to understand entanglement as a resource in
quantum information processing, and many entanglement measures have been
suggested for this purpose. When mathematically defining an entanglement
measure, we should consider the distinguishability between entangled and
separable states, the invariance under local transformations, the monotonicity
under local operations and classical communication, and the convexity. These
are reasonable requirements but may be insufficient, in particular when the usefulness of quantum states in multi-party quantum information processing is taken into account. Therefore, if we want to investigate multipartite entanglement as a resource, it can be necessary to build this usefulness into the definition of a multipartite entanglement measure. In this paper, we define
new multipartite entanglement measures for three-qubit systems based on the
three-party teleportation capability, and show that these entanglement
measures satisfy the requirements for being genuine multipartite entanglement
measures. We also generalize our entanglement measures for $N$-qubit systems,
where $N\geq 4$, and argue that these quantities are good candidates for measuring genuine multipartite entanglement.
## Introduction
Entanglement is a crucial resource in quantum computing and quantum
information tasks that cannot be explained classically. Typical two-party
quantum applications of entanglement as a resource include quantum
teleportation [1] and quantum key distribution [2], which are performed on
bipartite entangled states. However, there also exist multi-party quantum
applications, such as multi-party quantum teleportation [3, 4], conference key
agreement [5], and quantum secret sharing [6], where multipartite entanglement
is considered as a resource. In particular, genuine multipartite entanglement
(GME) is a significant concept for multipartite systems, since it plays an
essential role in quantum communication [3, 7] and quantum cryptography [5, 6,
8]. GME is also a critical resource in measurement-based quantum computing
[9], quantum-enhanced measurements [10], quantum phase transitions [11, 12],
and quantum spin chains [13]. Therefore, in order to make use of GME as a
resource, its quantification is necessary.
Entanglement measures are mathematical tools to quantify entanglement. For
bipartite systems, concurrence is one of the well-known entanglement measures
[14, 15, 16]. It can distinguish between entangled and separable states and
does not increase under local operations and classical communication (LOCC),
which are important requirements for quantifying entanglement. For
multipartite systems, it is more complicated to investigate entanglement, in
particular GME. Even for three-qubit pure states, we should consider three
different bipartition scenarios, $A|BC$, $B|AC$, and $C|AB$. In addition,
there are two inequivalent classes, the Greenberger-Horne-Zeilinger (GHZ)
class and the W class, differentiated by stochastic LOCC [17]. A
straightforward approach to define entanglement measures for quantifying GME
is to deal with bipartite entanglement measures for all bipartitions. For
instance, the minimum and the geometric mean of concurrences for all
bipartitions [18, 19] satisfy the conditions for being a GME measure [20, 18];
the distinguishability between genuinely multipartite entangled states and
biseparable states, the invariance under local transformations, the
monotonicity under LOCC, and the convexity. The concurrence fill [21], which
is the square root of the area of the three-qubit concurrence triangle, was
also proposed as a GME measure, but it has recently been shown that this
measure does not satisfy the monotonicity under LOCC [22].
We now ask whether a GME measure can compare the usefulness of any pure states
in some specific multipartite quantum information processing. This question is
natural when we use GME as a resource. For example, a monotonic relationship
exists between bipartite entanglement and teleportation fidelity for pure
states [23, 24]. Indeed, such a notion of a proper GME measure has been discussed [21], one that ranks the GHZ state as more entangled than the W state. This notion stems from the fact that the GHZ state can be more useful
than the W state in three-qubit teleportation [25]. However, teleportation
capabilities for other arbitrary pure states have not been taken into account.
In fact, the minimum and the geometric mean of concurrences for all
bipartitions are proper GME measures, but it is not difficult to find quantum
states for which these GME measures and the three-qubit teleportation
capability [26] give different orders.
In order to appropriately utilize GME as a resource, we need a GME measure
that can compare the usefulness of quantum states in a given quantum
information processing. In this paper, we first take account of three-qubit
teleportation, and propose novel GME measures for three-qubit systems based on
three-qubit teleportation capability. To this end, we consider the maximal
average teleportation fidelity of resulting states on the other parties
obtained after a measurement by one of the parties, and prove that our
measures based on this fidelity can be used to detect separability on three-qubit systems and do not increase on average under LOCC. By comparing our
GME measures with other GME measures, we show that there are quantum states
such that their usefulness in three-qubit teleportation cannot be explained by
the other GME measures, while it can naturally be done from ours. We also show
that our GME measures can be defined by using only two of the possible three
fidelities, whereas the minimum and the geometric mean of concurrences must take into account concurrences for all bipartitions. In other words, we can construct GME measures that have a simpler form.
This paper is organized as follows. We first introduce the maximal average
teleportation fidelity obtained after one of the parties measures his/her
system, and look into its properties. After defining entanglement measures
based on the three-qubit teleportation capability, we prove that they fulfill
the conditions for the GME measures. We give examples to show that our newly
defined GME measures are more appropriate than the other GME measures to
compare the capability of three-qubit teleportation. We finally generalize our
entanglement measures to $N$-qubit systems, and discuss that these $N$-partite
entanglement measures have the potential to be GME measures by showing that
GME is related to $N$-qubit teleportation capability when $N\geq 4$.
## Main results
### Three-qubit teleportation capability and its properties
The three-qubit teleportation we consider proceeds as follows. Suppose that three
parties, Alice, Bob, and Charlie, share a three-qubit state. After one
performs an orthogonal measurement on his/her system, the rest carry out the
standard teleportation [1] over the resulting state with the measurement
outcome. For instance, if the initial state is
$\ket{\rm{GHZ}}_{ABC}=\frac{1}{\sqrt{2}}\left(\ket{000}_{ABC}+\ket{111}_{ABC}\right)$,
then having one of them measure his/her system in the $X$ basis
$\left\\{\ket{0_{x}},\ket{1_{x}}\right\\}$, where
$\ket{t_{x}}=\frac{1}{\sqrt{2}}\left(\ket{0}+(-1)^{t}\ket{1}\right)$ for $t=0$
or $1$, makes it possible for them to perfectly accomplish three-qubit
teleportation since it can be written as
$\ket{\rm{GHZ}}_{ABC}=\frac{1}{2}\left(\ket{0_{x}0_{x}0_{x}}_{ABC}+\ket{0_{x}1_{x}1_{x}}_{ABC}+\ket{1_{x}0_{x}1_{x}}_{ABC}+\ket{1_{x}1_{x}0_{x}}_{ABC}\right)$.
Let us first look at the maximal fidelity of two-qubit teleportation. For a
given teleportation scheme $\Lambda_{\rho_{AB}}$ over a two-qubit state
$\rho_{AB}$, the fidelity of two-qubit teleportation is defined as [23]
$F\left(\Lambda_{\rho_{AB}}\right)=\int
d\xi\bra{\xi}\Lambda_{\rho_{AB}}\left(\ket{\xi}\bra{\xi}\right)\ket{\xi}.$ (1)
It has been proven that when $\Lambda_{\rho_{AB}}$ represents the standard
teleportation scheme over $\rho_{AB}$ to attain the maximal fidelity, the
following equation holds [27, 28]:
$F\left(\Lambda_{\rho_{AB}}\right)=\frac{2f\left(\rho_{AB}\right)+1}{3},$ (2)
where $f$ is the fully entangled fraction [14], which is given by
$f\left(\rho_{AB}\right)=\rm{max}\bra{e}\rho_{AB}\ket{e},$ (3)
where the maximum is over all maximally entangled states $\ket{e}$ of two
qubits. The given state $\rho_{AB}$ is said to be useful for teleportation if
and only if $F(\Lambda_{\rho_{AB}})>2/3$ [23, 29, 24].
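As an illustration of Eqs. (2) and (3), the sketch below (not from the paper) computes $f$ for the one-parameter family of Werner-like states $p\ket{\Phi^{+}}\bra{\Phi^{+}}+(1-p)I/4$ and checks the usefulness threshold $F>2/3$. It relies on the known two-qubit fact that $f(\rho)$ equals the largest eigenvalue of the real part of $\rho$ expressed in the "magic" Bell basis; the basis convention and the test family are assumptions of this sketch.

```python
import numpy as np

# "Magic" Bell basis (columns), in which every maximally entangled two-qubit
# state is, up to a global phase, a REAL unit vector.  Basis order: 00,01,10,11.
s = 1 / np.sqrt(2)
MAGIC = np.array([
    [s,  1j * s,      0,  0],
    [0,       0, 1j * s,  s],
    [0,       0, 1j * s, -s],
    [s, -1j * s,      0,  0],
])

def fully_entangled_fraction(rho):
    """f(rho) = max_e <e|rho|e>, Eq. (3), computed as the largest
    eigenvalue of the real part of rho written in the magic basis."""
    M = MAGIC.conj().T @ rho @ MAGIC
    return float(np.max(np.linalg.eigvalsh(M.real)))

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

def werner(p):
    """Illustrative family: p |Phi+><Phi+| + (1 - p) I/4."""
    return p * np.outer(phi_plus, phi_plus) + (1 - p) * np.eye(4) / 4

for p in (0.2, 0.5, 0.8):
    f = fully_entangled_fraction(werner(p))
    F = (2 * f + 1) / 3                      # Eq. (2)
    print(p, round(f, 4), round(F, 4), "useful" if F > 2 / 3 else "not useful")
```

For this family $f=(3p+1)/4$, so $F=(1+p)/2$ and the state is useful for teleportation exactly when $p>1/3$.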
We now consider three-qubit teleportation capability. Let $F_{ij}$ be the
maximal average fidelity of teleportation over the resulting state in the
systems $i$ and $j$ after measuring the system $k$, where $i$, $j$, and $k$
are distinct systems in $\\{A,B,C\\}$. Since three-qubit teleportation
consists of a one-qubit measurement and two-qubit teleportation, it is
straightforwardly obtained that [26]
$F_{ij}\left(\rho_{ABC}\right)=\frac{2f_{ij}\left(\rho_{ABC}\right)+1}{3},$
(4)
where
$f_{ij}\left(\rho_{ABC}\right)=\max_{U_{k}}\sum_{t=0}^{1}\bra{t}U_{k}\rho_{k}{U_{k}}^{\dagger}\ket{t}f\left(\rho_{ij}^{U_{k},t}\right).$
(5)
Here, $U_{k}$ is a unitary operator, that is,
$\\{{U_{k}}^{\dagger}\ket{0},{U_{k}}^{\dagger}\ket{1}\\}$ describes a one-qubit
orthogonal measurement on the system $k$, and $\rho_{ij}^{U_{k},t}$ is the
resulting state with the outcome $t$. We say that a given state $\rho_{ABC}$
is useful for three-qubit teleportation if and only if
$\min\\{F_{AB}\left(\rho_{ABC}\right),F_{BC}\left(\rho_{ABC}\right),F_{CA}\left(\rho_{ABC}\right)\\}>2/3$.
Before showing the properties of the maximal average teleportation fidelity
$F_{ij}$, let us first consider the two-qubit maximal teleportation fidelity
$F$. Hereafter, we denote $\ket{\psi}_{S_{1}S_{2}}\in SEP(S_{1}:S_{2})$ when
$\ket{\psi}_{S_{1}S_{2}}$ is separable between the systems $S_{1}$ and $S_{2}$
for simplicity. By the Schmidt decomposition, any two-qubit pure state
$\ket{\phi}_{AB}$ can be written in the form
$\ket{\phi}_{AB}=\sqrt{a}\ket{u_{0}v_{0}}_{AB}+\sqrt{1-a}\ket{u_{1}v_{1}}_{AB}$,
where $0\leq a\leq 1$ and $\\{\ket{u_{0}},\ket{u_{1}}\\}$,
$\\{\ket{v_{0}},\ket{v_{1}}\\}$ are orthonormal sets. Thus, by calculating the
fully entangled fraction $f(\ket{\phi}_{AB})$, we get
$F\left(\Lambda_{\ket{\phi}_{AB}}\right)=\frac{2}{3}+\frac{2}{3}\sqrt{a(1-a)}$.
Note that the concurrence [15, 16] for a pure state $\ket{\psi}_{S_{1}S_{2}}$
is defined as
$C\left(\ket{\psi}_{S_{1}S_{2}}\right)=\sqrt{2\left(1-\mathrm{Tr}\left(\varrho_{S_{1}}^{2}\right)\right)},$
(6)
where $\varrho_{S_{1}}$ is the reduced density operator of
$\ket{\psi}_{S_{1}S_{2}}$, so we have
$C(\ket{\phi}_{AB})=3F\left(\Lambda_{\ket{\phi}_{AB}}\right)-2.$ (7)
From this equation, we can see that $\ket{\phi}_{AB}\in SEP(A:B)$ if and only
if $F\left(\Lambda_{\ket{\phi}_{AB}}\right)=\frac{2}{3}$, and, for two-qubit pure states, the more entangled a state is with respect to the concurrence, the higher its maximal teleportation fidelity $F$. Moreover, since the concurrence
satisfies the monotonicity under LOCC on pure states, so does the maximal
teleportation fidelity $F$.
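These pure-state relations are straightforward to check numerically. The sketch below (illustrative, not from the paper) builds Schmidt-form states $\sqrt{a}\ket{00}+\sqrt{1-a}\ket{11}$, evaluates the concurrence via Eq. (6), and confirms Eq. (7), $C=3F-2$:

```python
import numpy as np

def concurrence_pure(psi):
    """Concurrence of a two-qubit pure state via Eq. (6):
    C = sqrt(2 (1 - Tr rho_A^2)), with rho_A the reduced state."""
    psi = psi / np.linalg.norm(psi)
    M = psi.reshape(2, 2)          # amplitudes indexed by (qubit A, qubit B)
    rho_A = M @ M.conj().T
    return float(np.sqrt(max(0.0, 2 * (1 - np.trace(rho_A @ rho_A).real))))

def max_fidelity(a):
    """Maximal teleportation fidelity for sqrt(a)|00> + sqrt(1-a)|11>."""
    return 2 / 3 + (2 / 3) * np.sqrt(a * (1 - a))

for a in (0.0, 0.1, 0.5, 0.9):
    psi = np.array([np.sqrt(a), 0, 0, np.sqrt(1 - a)])
    print(a, concurrence_pure(psi), 3 * max_fidelity(a) - 2)  # Eq. (7)
```

Both columns agree with $2\sqrt{a(1-a)}$, vanishing exactly for the product states $a=0,1$.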
We now show that the maximal average teleportation fidelity $F_{ij}$ on three-
qubit pure states has similar properties. For three-qubit pure states, it has
been shown that the following equation holds [26]:
$F_{ij}\left(\ket{\phi}_{ABC}\right)=\frac{\sqrt{\left(\tau+C_{ij}^{2}\right)\left(\ket{\phi}_{ABC}\right)}+2}{3},$
(8)
where $\tau$ is the three-tangle [30] and $C_{ij}$ is the concurrence for the
reduced density operator $\rho_{ij}$ of $\ket{\phi}_{ABC}$. We note that the
three-tangle $\tau$ satisfies
$\tau=C^{2}_{i(jk)}-C^{2}_{ij}-C^{2}_{ik}$ (9)
for any distinct $i$, $j$, and $k$, where $C_{i(jk)}$ denotes the concurrence
between $i$ and the other system $jk$. For mixed states, the concurrence is
defined by means of the convex roof extension. In particular, for two-qubit
systems, the concurrence of a mixed state can be computed by [16]
$C(\rho)=\mathrm{max}\\{0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\\},$
(10)
where the $\lambda_{l}$s are eigenvalues of the matrix
$X=\sqrt{\sqrt{\rho}(\sigma_{y}\otimes\sigma_{y})\rho^{*}(\sigma_{y}\otimes\sigma_{y})\sqrt{\rho}}$
in decreasing order, $\sigma_{y}$ is the Pauli $Y$ operator, and $\rho^{*}$ is
the conjugate of $\rho$. Hence, it is not difficult to calculate $F_{ij}$ for
a given three-qubit pure state.
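The Wootters formula in Eq. (10) takes only a few lines of NumPy. The sketch below is illustrative; instead of forming the matrix square roots in $X$ directly, it uses the standard equivalence that the $\lambda_{l}$ are the square roots of the eigenvalues of $\rho\tilde{\rho}$, where $\tilde{\rho}=(\sigma_{y}\otimes\sigma_{y})\rho^{*}(\sigma_{y}\otimes\sigma_{y})$:

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])   # Pauli Y
YY = np.kron(SY, SY)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix, Eq. (10).
    The lambda_l of Eq. (10) are obtained as square roots of the
    eigenvalues of rho @ rho_tilde (same spectrum as X^2)."""
    rho_tilde = YY @ rho.conj() @ YY
    ev = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.clip(ev.real, 0.0, None)))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence(np.outer(bell, bell)))     # maximally entangled pure state
print(concurrence(np.eye(4) / 4))            # maximally mixed state

# Reduced GHZ state on two qubits: an equal mixture of |00> and |11>,
# which is separable even though the GHZ state is genuinely entangled.
rho_ghz_pair = 0.5 * np.diag([1.0, 0.0, 0.0, 1.0])
print(concurrence(rho_ghz_pair))
```

The Bell state returns $1$, while both the maximally mixed state and the two-qubit marginal of the GHZ state return $0$, consistent with the later remark that $C_{ij}(\ket{\rm{GHZ}})=0$.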
By using Eq. (8), we obtain the following two lemmas, which are important properties when we define our GME measures. First, let $\ket{\phi}_{ABC}$ be a three-qubit pure state. Then, for any distinct $i$, $j$, and $k$ in $\\{A,B,C\\}$, $F_{ij}\left(\ket{\phi}_{ABC}\right)=\frac{2}{3}$ if and only if $\ket{\phi}_{ABC}\in SEP(i:jk)$ or $\ket{\phi}_{ABC}\in SEP(j:ik)$. Moreover, if $F_{ij}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$ and $F_{ik}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$, then $F_{jk}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$.
Second, for three-qubit pure states, the maximal average teleportation fidelities $F_{AB}$, $F_{BC}$, and $F_{CA}$ do not increase on average under LOCC.
We remark that the two lemmas above are not directly derived from the properties of the three-tangle $\tau$ and the concurrence $C_{ij}$, although we use Eq. (8) to prove them. If
$\ket{\phi}_{ABC}\in SEP(i:jk)$ for some $i$, $j$, and $k$, then
$\tau(\ket{\phi}_{ABC})=0$, but the converse is not true. For example,
$\tau(\ket{W}_{ABC})=0$, where
$\ket{W}_{ABC}=\frac{1}{\sqrt{3}}(\ket{001}_{ABC}+\ket{010}_{ABC}+\ket{100}_{ABC})$.
However, $\ket{W}_{ABC}\notin SEP(i:jk)$ for any distinct $i$, $j$ and $k$. In
addition, the concurrence $C_{ij}$ can increase under LOCC on three-qubit
states. Indeed, $C_{ij}(\ket{\rm{GHZ}})=0$, but after measuring the system $k$
in the $X$ basis, $C_{ij}$ of the resulting state becomes $1$.
### GME measures based on three-qubit teleportation capability
Let
$\ket{\psi}\in\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\cdots\otimes\mathcal{H}_{N}$
be an $N$-partite pure state. The state $\ket{\psi}$ is called biseparable if
it can be written as $\ket{\psi}=\ket{\phi_{G}}\otimes\ket{\phi_{\bar{G}}}$,
where
$\ket{\phi_{G}}\in\mathcal{H}_{j_{1}}\otimes\cdots\otimes\mathcal{H}_{j_{k}}$
and
$\ket{\phi_{\bar{G}}}\in\mathcal{H}_{j_{k+1}}\otimes\cdots\otimes\mathcal{H}_{j_{N}}$.
Here, $k<N$ and $\\{j_{1},...,j_{k}|j_{k+1},...,j_{N}\\}$ is a bipartition of
the whole system. An $N$-partite mixed state $\rho$ is called biseparable if
it can be written as a convex sum of biseparable pure states
$\rho=\sum_{i}p_{i}\ket{\psi_{i}}\bra{\psi_{i}}$, where the biseparable pure
states $\\{\ket{\psi_{i}}\\}$ can be biseparable with respect to different
bipartitions. If an $N$-partite state is not biseparable, then it is a
genuinely $N$-partite entangled state.
Note that minimal conditions for being a good entanglement measure have been
suggested as follows [20, 18]:
* (i)
$E(\rho)>0$ if and only if $\rho$ is a nonbiseparable state.
* (ii)
$E$ is invariant under local unitary transformations.
* (iii)
$E$ is not increasing on average under LOCC. That is, if we have states
$\\{\rho_{k}\\}$ with probabilities $\\{p_{k}\\}$ after applying a LOCC
transformation to $\rho$, then $\sum_{k}p_{k}E(\rho_{k})\leq E(\rho)$.
* (iv)
$E$ is convex.
If a multipartite entanglement measure satisfies these conditions, then we
call it a GME measure. Our approach is to define a multipartite entanglement
measure on pure states and generalize it to mixed states through the convex
roof extension
$E(\rho)=\min_{\\{p_{l},\psi_{l}\\}}\sum_{l}p_{l}E\left(\ket{\psi_{l}}\right),$
(11)
where the minimum is over all possible decompositions
$\rho=\sum_{l}p_{l}\ket{\psi_{l}}\bra{\psi_{l}}$. This approach has the
advantage that it suffices to define an entanglement measure on pure states
that satisfies the conditions (i), (ii), and (iii) in order to construct a GME
measure.
Let us now define entanglement measures based on the three-qubit teleportation
fidelity. For a three-qubit pure state $\ket{\phi}_{ABC}$, let
$\mathcal{T}_{ij}\left(\ket{\phi}_{ABC}\right)=3F_{ij}\left(\ket{\phi}_{ABC}\right)-2$,
where $F_{ij}$ is the maximal average teleportation fidelity in Eq. (8). We
define multipartite entanglement measures $\mathcal{T}_{min}$ and
$\mathcal{T}_{GM}$ as
$\displaystyle\mathcal{T}_{min}\equiv\min\\{\mathcal{T}_{AB},\mathcal{T}_{BC},\mathcal{T}_{CA}\\},$
$\displaystyle\mathcal{T}_{GM}\equiv\sqrt[3]{\mathcal{T}_{AB}\mathcal{T}_{BC}\mathcal{T}_{CA}},$
(12)
respectively, on three-qubit pure states. For three-qubit mixed states, we
generalize them via the convex roof extension. The reason why we use
$\mathcal{T}_{ij}$ instead of $F_{ij}$ itself is to set the values of
$\mathcal{T}_{min}$ and $\mathcal{T}_{GM}$ between 0 and 1. It directly follows from the first lemma above that $\mathcal{T}_{min}$ and $\mathcal{T}_{GM}$ satisfy condition (i). From the definition of $F_{ij}$, we know that they are invariant under local transformations, which is condition (ii). From the second lemma above, we can also prove that they fulfill condition (iii). Condition (iv) is guaranteed by the convex roof extension. Therefore, we have the following theorem: the entanglement measures $\mathcal{T}_{min}$ and $\mathcal{T}_{GM}$ are GME measures.
In the first lemma above, we also
showed that for a three-qubit pure state $\ket{\phi}_{ABC}$, if
$F_{ij}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$ and
$F_{ik}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$, then
$F_{jk}\left(\ket{\phi}_{ABC}\right)>\frac{2}{3}$ for any distinct $i$, $j$,
and $k$ in $\\{A,B,C\\}$. Therefore, only two quantities $\mathcal{T}_{ij}$
and $\mathcal{T}_{ik}$ are enough to define a GME measure. For any distinct
$i$, $j$, and $k$ in $\\{A,B,C\\}$, we define multipartite entanglement
measures $\mathcal{T}_{min}^{(i)}$ and $\mathcal{T}_{GM}^{(i)}$ as
$\displaystyle\mathcal{T}_{min}^{(i)}\equiv\min\\{\mathcal{T}_{ij},\mathcal{T}_{ik}\\},$
$\displaystyle\mathcal{T}_{GM}^{(i)}\equiv\sqrt{\mathcal{T}_{ij}\mathcal{T}_{ik}},$
(13)
on three-qubit pure states. For three-qubit mixed states, we generalize them
through the convex roof extension. Applying the same proof method as for the previous theorem, we obtain the following: the entanglement measures $\mathcal{T}_{min}^{(i)}$ and $\mathcal{T}_{GM}^{(i)}$ are GME measures for any $i\in\\{A,B,C\\}$. We can
interpret entanglement measures $\mathcal{T}_{min}^{(i)}$ and
$\mathcal{T}_{GM}^{(i)}$ as the minimum and the average teleportation
capability of the system $i$, respectively. Remark that if one defines an
entanglement measure with concurrence in this way, then it cannot be a GME
measure. For example, let us think of the biseparable state
$\ket{\xi}_{ABC}=\frac{1}{\sqrt{2}}(\ket{000}_{ABC}+\ket{110}_{ABC}).$ We can
clearly see that $C_{A(BC)}=C_{B(CA)}=1$, but $C_{C(AB)}=0$. Thus, if we
define $G_{min}\equiv\min\\{C_{A(BC)},C_{B(CA)}\\}$, it cannot be a GME
measure since $G_{min}(\ket{\xi}_{ABC})\neq 0$.
The following examples show that our GME measures are more suitable to capture
the usefulness of a given state for three-qubit teleportation. We note that
GME measures $C_{min}$ and $C_{GM}$ in References [18, 19] are given by
$C_{min}\equiv\min\\{C_{A(BC)},C_{B(CA)},C_{C(AB)}\\}$ and
$C_{GM}\equiv\sqrt[3]{C_{A(BC)}C_{B(CA)}C_{C(AB)}}$, respectively, on three-
qubit pure states.
Figure 1: Graphs of the GME measures $C_{min}$, $\mathcal{T}_{min}$,
$\mathcal{T}^{(A)}_{min}$, $C_{GM}$, $\mathcal{T}_{GM}$, and
$\mathcal{T}^{(A)}_{GM}$ for the state $\ket{\psi(r)}$ of the form in Eq.
(16). For any two of these measures, we can choose $r$ for which these
measures have different values. Note that for the state $\ket{\phi(t)}$ of the
form in Eq. (14), these measures have the same value $2t\sqrt{1-t^{2}}$ for any
$t\in[0,1]$, and these values vary from $0$ to $1$. Hence, we can easily find
states in which the values of the measures have different orders. For
instance, let $t^{\prime}$ be a value such that
$C_{GM}(\phi(t^{\prime}))=\mathcal{T}_{GM}(\phi(t^{\prime}))=0.8$. Then
$C_{GM}(\psi(0.7))>C_{GM}(\phi(t^{\prime}))$, but
$\mathcal{T}_{GM}(\psi(0.7))<\mathcal{T}_{GM}(\phi(t^{\prime}))$. This means
that $C_{GM}$ and $\mathcal{T}_{GM}$ rank the states $\psi(0.7)$ and
$\phi(t^{\prime})$ differently.
###### Example 0.1.
We calculate $\mathcal{T}_{min}$, $\mathcal{T}_{GM}$,
$\mathcal{T}^{(A)}_{min}$, $\mathcal{T}^{(A)}_{GM}$, $C_{min}$, and $C_{GM}$
for some states, and show that they give the opposite order for the states,
which means that these GME measures are distinct from one another. For $0\leq
t\leq 1$, let
$\ket{\phi(t)}_{ABC}=t\ket{000}_{ABC}+\sqrt{1-t^{2}}\ket{111}_{ABC}.$ (14)
Then all GME measures return the same value $2t\sqrt{1-t^{2}}$ on the state
$\ket{\phi(t)}_{ABC}$. Note that $h(t)\equiv 2t\sqrt{1-t^{2}}$ is a continuous
function of $t$, which has the minimum value $0$ at $t=0,1$ and the maximum
value $1$ at $t=1/\sqrt{2}$. Hence, if there is a state $\ket{\psi}_{ABC}$
such that $E\left(\psi\right)>E^{\prime}\left(\psi\right)$, where
$E,E^{\prime}\in\left\\{\mathcal{T}_{min},\mathcal{T}_{GM},\mathcal{T}^{(A)}_{min},\mathcal{T}^{(A)}_{GM},C_{min},C_{GM}\right\\}$,
then it follows from the intermediate value theorem that there exists
$t^{\prime}$ with
$E\left(\psi\right)>E\left(\phi(t^{\prime})\right)=E^{\prime}\left(\phi(t^{\prime})\right)>E^{\prime}\left(\psi\right),$
(15)
which means that these GME measures $E$ and $E^{\prime}$ provide the opposite
order for the quantum states $\ket{\psi}_{ABC}$ and
$\ket{\phi(t^{\prime})}_{ABC}$. Indeed, let us consider the following state
$\ket{\psi(r)}_{ABC}=r\ket{000}_{ABC}+\frac{\sqrt{1-r^{2}}}{2}\ket{101}_{ABC}+\frac{\sqrt{1-r^{2}}}{\sqrt{2}}\ket{110}_{ABC}+\frac{\sqrt{1-r^{2}}}{2}\ket{111}_{ABC},$
(16)
where $0\leq r\leq 1$. Then for any
$E,E^{\prime}\in\left\\{\mathcal{T}_{min},\mathcal{T}_{GM},\mathcal{T}^{(A)}_{min},\mathcal{T}^{(A)}_{GM},C_{min},C_{GM}\right\\}$,
it is easy to find $r^{\prime}$ such that
$E\left(\psi(r^{\prime})\right)>E^{\prime}\left(\psi(r^{\prime})\right)$ as
seen in FIG. 1. Hence, they are all different GME measures.
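On pure states, Eq. (8) together with the relation (9) reduces each $\mathcal{T}_{ij}=3F_{ij}-2=\sqrt{\tau+C_{ij}^{2}}$ to concurrence computations. The sketch below (illustrative; qubit labels $0,1,2$ stand for $A,B,C$) verifies that all three $\mathcal{T}_{ij}$ equal $2t\sqrt{1-t^{2}}$ on $\ket{\phi(t)}$:

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
YY = np.kron(SY, SY)

def wootters(rho):
    """Two-qubit mixed-state concurrence, Eq. (10)."""
    ev = np.linalg.eigvals(rho @ (YY @ rho.conj() @ YY))
    lam = np.sort(np.sqrt(np.clip(ev.real, 0.0, None)))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

def reduced(psi, keep):
    """Reduced density operator of a 3-qubit pure state on the qubits in keep."""
    T = psi.reshape(2, 2, 2)
    drop = [q for q in range(3) if q not in keep]
    M = np.transpose(T, list(keep) + drop).reshape(2 ** len(keep), -1)
    return M @ M.conj().T

def C_cut(psi, q):
    """Bipartite concurrence C_{q(rest)}, Eq. (6)."""
    r = reduced(psi, [q])
    return float(np.sqrt(max(0.0, 2 * (1 - np.trace(r @ r).real))))

def T_measures(psi):
    """T_ij = 3 F_ij - 2 = sqrt(tau + C_ij^2) on pure states, Eqs. (8)-(9)."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    Cij = {p: wootters(reduced(psi, list(p))) for p in pairs}
    tau = C_cut(psi, 0) ** 2 - Cij[(0, 1)] ** 2 - Cij[(0, 2)] ** 2  # Eq. (9)
    return {p: float(np.sqrt(max(0.0, tau + Cij[p] ** 2))) for p in pairs}

t = 0.6
phi = np.zeros(8)
phi[0], phi[7] = t, np.sqrt(1 - t ** 2)      # |phi(t)> of Eq. (14)
print(T_measures(phi), 2 * t * np.sqrt(1 - t ** 2))
```

Here every pairwise concurrence $C_{ij}$ vanishes and $\tau=4t^{2}(1-t^{2})$, so each $\mathcal{T}_{ij}=2t\sqrt{1-t^{2}}$, matching the common value of all six GME measures on this state.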
Figure 2: If we calculate GME measures $C_{min}$ and $\mathcal{T}_{min}$ for
the states $\ket{\psi(r)}$ in Eq. (16) and $\ket{\xi(r)}$ in Eq. (17), then
$C_{min}\left(\xi(r)\right)>C_{min}\left(\psi(r)\right)$ but
$\mathcal{T}_{min}\left(\xi(r)\right)<\mathcal{T}_{min}\left(\psi(r)\right)$
for $0.7\leq r\leq 0.9$. Hence, we can say that $C_{min}$ is not appropriate
for comparing teleportation capabilities.
###### Example 0.2.
The GME measure $\mathcal{T}_{min}$ is defined based on three-qubit
teleportation capability. Thus, we can say that if $\ket{\psi}$ is more
entangled than $\ket{\xi}$ with respect to $\mathcal{T}_{min}$, then
$\ket{\psi}$ is more useful than $\ket{\xi}$ in three-qubit teleportation. Let
$\ket{\xi(r)}_{ABC}=\frac{\sqrt{1-r^{2}}}{\sqrt{2}}\ket{001}_{ABC}+\frac{\sqrt{1-r^{2}}}{\sqrt{2}}\ket{010}_{ABC}+r\ket{100}_{ABC},$
(17)
where $0\leq r\leq 1$. In FIG. 2, we can see that for $0.7<r<0.9$,
$\displaystyle C_{min}\left(\xi(r)\right)>C_{min}\left(\psi(r)\right),$
$\displaystyle\mathcal{T}_{min}\left(\xi(r)\right)<\mathcal{T}_{min}\left(\psi(r)\right),$
(18)
where $\ket{\psi(r)}_{ABC}$ is the state in Eq. (16). In other words, although
$\ket{\psi(r)}_{ABC}$ is more valuable for three-qubit teleportation than
$\ket{\xi(r)}_{ABC}$ in this case, $C_{min}$ does not catch this fact. Similar
examples can be readily found for other GME measures as well.
### GME and $N$-qubit teleportation capability
We here discuss the relation between GME and $N$-qubit teleportation
capability, where $N\geq 4$. Note that $f_{ij}$ in Eq. (5) can be generalized
in the following two ways. Let $\rho_{A_{1}\cdots A_{N}}$ be a $N$-qubit
quantum state and
$K=\\{k_{1},k_{2},\dots,k_{N-2}\\}=\\{A_{1},A_{2},\dots,A_{N}\\}\setminus\\{i,j\\}$.
One is
$f_{ij}^{(N)}(\rho_{A_{1}\cdots
A_{N}})=\max_{U_{K}}\sum_{J\in\\{0,1\\}^{N-2}}\bra{J}U_{K}\rho_{K}U_{K}^{\dagger}\ket{J}f\left(\rho^{U_{K},J}_{ij}\right),$
(19)
where
$\rho_{K}=\rho_{k_{1}}\otimes\rho_{k_{2}}\otimes\cdots\otimes\rho_{k_{N-2}}$
and $U_{K}=U_{k_{1}}\otimes U_{k_{2}}\otimes\cdots\otimes U_{k_{N-2}}$ is a
product of local unitary operators. The other is
$\bar{f}_{ij}^{(N)}(\rho_{A_{1}\cdots A_{N}})=\max_{k_{l}\in
K}\left(\max_{U_{k_{l}}}\sum_{t=0}^{1}\bra{t}U_{k_{l}}\rho_{A_{k_{l}}}U_{k_{l}}^{\dagger}\ket{t}f_{ij}^{(N-1)}\left(\rho_{K_{l}}^{U_{k_{l}},t}\right)\right),$
(20)
where $K_{l}=K\setminus\\{k_{l}\\}$ and $f^{(3)}_{ij}=f_{ij}$ in Eq. (5). The
difference between these two definitions is whether or not communication
between assistants is allowed. Hence, we obtain two different maximal average
teleportation fidelities as follows:
$\displaystyle F_{ij}^{(N)}\left(\rho_{A_{1}\cdots
A_{N}}\right)=\frac{2f_{ij}^{(N)}\left(\rho_{A_{1}\cdots A_{N}}\right)+1}{3},$
$\displaystyle\bar{F}_{ij}^{(N)}\left(\rho_{A_{1}\cdots
A_{N}}\right)=\frac{2\bar{f}_{ij}^{(N)}\left(\rho_{A_{1}\cdots
A_{N}}\right)+1}{3}.$ (21)
Let us define new quantities based on these $N$-qubit teleportation capabilities in a way similar to the three-qubit case. For an $N$-qubit pure state $\ket{\phi}_{A_{1}\dots A_{N}}$, let
$\mathcal{T}_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\dots
A_{N}}\right)=3F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\dots A_{N}}\right)-2$. We
define $\mathcal{T}_{min}^{(N)}$ and $\mathcal{T}_{GM}^{(N)}$ as
$\displaystyle\mathcal{T}_{min}^{(N)}\equiv\min_{i<j}\\{\mathcal{T}_{ij}^{(N)}\\},$
$\displaystyle\mathcal{T}_{GM}^{(N)}\equiv\sqrt[m]{\prod_{i<j}\mathcal{T}_{ij}^{(N)}},$
(22)
respectively, on $N$-qubit pure states, where $m=\binom{N}{2}$. For $N$-qubit
mixed states, we generalize them via the convex roof extension. In the same
way, we define $\bar{\mathcal{{T}}}_{min}^{(N)}$ and
$\bar{\mathcal{T}}_{GM}^{(N)}$ by using $\bar{F}_{ij}^{(N)}$.
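Given the pairwise quantities $\mathcal{T}_{ij}^{(N)}$, computing $\mathcal{T}_{min}^{(N)}$ and $\mathcal{T}_{GM}^{(N)}$ from Eq. (22) is straightforward; a minimal sketch in Python for $N=4$, where the pairwise values are illustrative placeholders rather than quantities derived from a physical state:

```python
import math

# Hypothetical pairwise quantities T_ij = 3 F_ij - 2 for a 4-qubit state;
# the numbers are illustrative, not derived from an actual state.
T_pair = {("A", "B"): 0.5, ("A", "C"): 0.4, ("A", "D"): 0.6,
          ("B", "C"): 0.3, ("B", "D"): 0.7, ("C", "D"): 0.5}

m = len(T_pair)                               # m = C(4, 2) = 6 pairs
T_min = min(T_pair.values())                  # minimum over all pairs
T_GM = math.prod(T_pair.values()) ** (1 / m)  # geometric mean, Eq. (22)
```

Both quantities vanish as soon as any single pair has $\mathcal{T}_{ij}^{(N)}=0$, consistent with the separability discussion below.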
Now, we argue that the quantities proposed in Definition GME and $N$-qubit teleportation capability are good candidates for GME measures on $N$-qubit systems, where $N\geq 4$. As we have already shown in Lemma Three-qubit teleportation capability and its properties, for a three-qubit pure
state $\ket{\phi}_{ABC},$ $F_{ij}\left(\ket{\phi}_{ABC}\right)=\frac{2}{3}$ if
and only if $\ket{\phi}_{ABC}\in SEP(i:jk)$ or $\ket{\phi}_{ABC}\in SEP(j:ik)$
for any distinct $i$, $j$, and $k$ in $\\{A,B,C\\}$. Here, we have a similar
argument for an $N$-qubit pure state.
Let us now assume that for an $N$-qubit pure state $\ket{\phi}_{A_{1}\cdots
A_{N}}$, there exists a bipartition $\\{G_{i}|G_{j}\\}$ of the whole system
with $i\in G_{i}$ and $j\in G_{j}$ such that $\ket{\phi}_{A_{1}\cdots
A_{N}}\in SEP(G_{i}:G_{j})$. If $k\in G_{i}$, where $k\notin\\{i,j\\}$, then
we clearly have that the resulting state after performing an orthogonal
measurement in the system $k$ is a pure state which belongs to
$SEP(\tilde{G}_{i}:G_{j})$, where $\tilde{G}_{i}=G_{i}\setminus\\{k\\}$. By
continuing this process, we can see that the quantum state obtained after
measuring all systems except $i$ and $j$ is a pure state and separable between
systems $i$ and $j$. Therefore, we have the following proposition. Let
$\ket{\phi}_{A_{1}\cdots A_{N}}$ be an $N$-qubit pure state. If
$\ket{\phi}_{A_{1}\cdots A_{N}}\in SEP(G_{i}:G_{j})$ for a bipartition
$\\{G_{i}|G_{j}\\}$ of the whole system with $i\in G_{i}$ and $j\in G_{j}$,
then $F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)=\bar{F}_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)=\frac{2}{3}$. We note that
$F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)\leq\bar{F}_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)$ for any $N$. Therefore,
$F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots A_{N}}\right)>\frac{2}{3}$ for all
distinct $i,j$ implies that $\ket{\phi}_{A_{1}\cdots A_{N}}$ is a genuinely
$N$-partite entangled state. However, we do not need to check all possible
fidelities to verify that a given state is a genuinely $N$-partite entangled
state. Indeed, it suffices to check
$\min_{j}\left\\{F_{ij}^{(N)}\right\\}>\frac{2}{3}$ for a fixed $i$ since if
$\ket{\phi}_{A_{1}\cdots A_{N}}\in SEP(P:P^{\prime})$ for a bipartition
$\\{P|P^{\prime}\\}$ of the whole system, then we can always choose a system
$k$ from the party that does not contain $i$ and
$F_{ik}^{(N)}\left(\ket{\phi}_{A_{1}\cdots A_{N}}\right)=\frac{2}{3}$ by
Proposition GME and $N$-qubit teleportation capability. For example, when
$N=4$, if $F_{AB}^{(4)}\left(\ket{\phi}_{ABCD}\right)>\frac{2}{3}$,
$F_{AC}^{(4)}\left(\ket{\phi}_{ABCD}\right)>\frac{2}{3}$ and
$F_{AD}^{(4)}\left(\ket{\phi}_{ABCD}\right)>\frac{2}{3}$, then the remaining
fidelities are also greater than $\frac{2}{3}$ and so, $\ket{\phi}_{ABCD}$ is
a genuinely quadripartite entangled state.
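The fixed-party shortcut above can be sketched as a simple check. The fidelity values below are hypothetical placeholders, not computed from an actual state; the logic only encodes the sufficiency argument for $N=4$ with party $A$ fixed:

```python
# For N = 4 with fixed party A, genuine quadripartite entanglement is
# certified once F_AB, F_AC, F_AD all exceed 2/3.
def certifies_gme(fidelities, threshold=2 / 3):
    """True when every fidelity paired with the fixed party beats 2/3."""
    return all(f > threshold for f in fidelities.values())

certified = certifies_gme({"B": 0.70, "C": 0.68, "D": 0.75})
not_certified = certifies_gme({"B": 0.70, "C": 2 / 3, "D": 0.75})
```

The second call fails because $F_{AC}^{(4)}$ only attains, rather than exceeds, the separable bound $\frac{2}{3}$.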
Conversely, does $F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)=\frac{2}{3}$ or $\bar{F}_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)=\frac{2}{3}$ imply separability for a bipartition
$\\{G_{i}|G_{j}\\}$ with $i\in G_{i}$ and $j\in G_{j}$? If this holds for
$F_{ij}^{(N)}$, then this also holds for $\bar{F}_{ij}^{(N)}$ since
$F_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)\leq\bar{F}_{ij}^{(N)}\left(\ket{\phi}_{A_{1}\cdots
A_{N}}\right)$ for any $N$. For $N=4$, by checking all the cases, we get the
following proposition. For $N=4$, if
$F_{ij}^{(4)}\left(\ket{\phi}_{ABCD}\right)=\frac{2}{3}$, then
$\ket{\phi}_{ABCD}\in SEP(G_{i}:G_{j})$ for some bipartition
$\\{G_{i}|G_{j}\\}$ with $i\in G_{i}$ and $j\in G_{j}$.
To sum up, we can distinguish genuinely quadripartite entangled states from biseparable states by means of $F_{ij}^{(4)}$ or $\bar{F}_{ij}^{(4)}$. By
definition, $\mathcal{T}_{min}^{(4)}$, $\mathcal{T}_{GM}^{(4)}$,
$\bar{\mathcal{{T}}}_{min}^{(4)}$, and $\bar{\mathcal{T}}_{GM}^{(4)}$ satisfy
GME conditions (ii) and (iv), and hence we have the following corollary.
$\mathcal{T}_{min}^{(4)}$, $\mathcal{T}_{GM}^{(4)}$,
$\bar{\mathcal{{T}}}_{min}^{(4)}$, and $\bar{\mathcal{T}}_{GM}^{(4)}$ satisfy
GME conditions (i), (ii), and (iv).
We expect the same argument to hold, with more complicated calculations, when $N\geq 5$. Therefore, $\mathcal{T}_{min}^{(4)}$,
$\mathcal{T}_{GM}^{(4)}$, $\bar{\mathcal{{T}}}_{min}^{(4)}$, and
$\bar{\mathcal{T}}_{GM}^{(4)}$ have the potential to be GME measures.
## Conclusion
In this paper, we have introduced GME measures for three-qubit states based on
three-qubit teleportation capability. In order to do that, we have considered
the maximal average teleportation fidelity of the resulting states on the
other parties obtained after one of the parties measures his/her system. We
have shown that this fidelity detects separability and does not increase on average under LOCC for three-qubit pure states, and by using these
properties, we have proven that our entanglement measures defined using the
fidelity satisfy the conditions for the GME measure.
For three-qubit mixed states, we have defined our entanglement measures by
means of the convex roof extension. This extension guarantees that our measures satisfy the conditions for a good entanglement measure on mixed states, but an exact value is hard to obtain because the extension is defined as a minimum over all possible ensembles. For a deeper understanding of multipartite entanglement, it is necessary to compute this value, or at least a lower bound on it. Furthermore, it would be important to see how this value relates to multipartite teleportation capability.
We have shown that the maximal average fidelity of four-qubit teleportation
can be used to distinguish genuinely quadripartite entangled states from
biseparable states. This could be generalized to $N$-qubit systems, where
$N\geq 5$. Hence, the quantities defined in Definition GME and $N$-qubit
teleportation capability have the potential to be GME measures. It is not easy
to show that the entanglement measures satisfy the conditions for GME
measures, in particular the monotonicity under LOCC, because no analytic form such as Eq. (8) is known for $N\geq 4$. Our future work is to rigorously prove that these quantities are GME measures. Moreover, it would also be intriguing to explore entanglement measures for $N$-qudit systems using a similar approach, since $N$-qudit teleportation capability can be defined analogously to the $N$-qubit case.
We note that there are other quantum information tasks that use GME, such as
conference key agreement or secret sharing. It has been shown that any
multipartite private state, which is the general form of quantum state capable
of conference key agreement, is a genuinely multipartite entangled state [8].
Hence, it could be interesting to see if we can define GME measures based on
those quantum information tasks. In addition, quantum information tasks such as quantum secure direct communication [31] and controlled quantum teleportation based on quantum walks [32] have recently been studied. Another direction for future work is to see how these tasks relate to GME measures and how entanglement measures could be defined based on them.
## Methods
### Proof of Lemma Three-qubit teleportation capability and its properties
If $\ket{\phi}_{ABC}\in SEP(i:jk)$, which means
$C_{i(jk)}(\ket{\phi}_{ABC})=0$, then we obtain
$\left(\tau+C^{2}_{ij}\right)(\ket{\phi}_{ABC})=0$ and
$\left(\tau+C^{2}_{ik}\right)(\ket{\phi}_{ABC})=0$ from Eq. (9). In other
words, $F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$ and
$F_{ik}(\ket{\phi}_{ABC})=\frac{2}{3}$. Similarly, if $\ket{\phi}_{ABC}\in
SEP(j:ik)$, we have $F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$ and
$F_{jk}(\ket{\phi}_{ABC})=\frac{2}{3}$. For both cases,
$F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$. Conversely, let us assume that
$F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$. Then $\tau(\ket{\phi}_{ABC})=0$ and
$C^{2}_{ij}(\ket{\phi}_{ABC})=0$ since both are nonnegative. Note that any
three-qubit pure state $\ket{\phi}_{ABC}$ can be written as [33]
$\ket{\phi}_{ABC}=\alpha_{0}\ket{000}_{ABC}+\alpha_{1}e^{\mathbf{i}\theta}\ket{100}_{ABC}+\alpha_{2}\ket{101}_{ABC}+\alpha_{3}\ket{110}_{ABC}+\alpha_{4}\ket{111}_{ABC},$
(23)
where $\mathbf{i}=\sqrt{-1}$, $0\leq\theta\leq\pi$, $\alpha_{l}\geq 0$, and
$\sum_{l}\alpha^{2}_{l}=1$. Hence, it follows from straightforward
calculations that
$\displaystyle\tau\left(\ket{\phi}_{ABC}\right)=4\alpha_{0}^{2}\alpha_{4}^{2},$
$\displaystyle
C^{2}_{AB}\left(\ket{\phi}_{ABC}\right)=4\alpha_{0}^{2}\alpha_{3}^{2},$
$\displaystyle
C^{2}_{BC}\left(\ket{\phi}_{ABC}\right)=4\alpha_{1}^{2}\alpha_{4}^{2}+4\alpha_{2}^{2}\alpha_{3}^{2}-8\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}\cos\theta,$
$\displaystyle
C^{2}_{CA}\left(\ket{\phi}_{ABC}\right)=4\alpha_{0}^{2}\alpha_{2}^{2}.$ (24)
From $\tau(\ket{\phi}_{ABC})=0$, we get $\alpha_{0}=0$ or $\alpha_{4}=0$. If
$\alpha_{0}=0$, then
$C^{2}_{AB}(\ket{\phi}_{ABC})=C^{2}_{CA}(\ket{\phi}_{ABC})=0$, and if
$\alpha_{0}\neq 0$, then $C^{2}_{BC}(\ket{\phi}_{ABC})=0$ if and only if
$C^{2}_{AB}(\ket{\phi}_{ABC})=0$ or $C^{2}_{CA}(\ket{\phi}_{ABC})=0$.
Therefore, from Eq. (9), we can see that
$F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$ implies $C_{i(jk)}(\ket{\phi}_{ABC})=0$
or $C_{j(ki)}(\ket{\phi}_{ABC})=0$, that is, $\ket{\phi}_{ABC}\in SEP(i:jk)$
or $\ket{\phi}_{ABC}\in SEP(j:ik)$. Furthermore, we also have that if
$F_{ij}(\ket{\phi}_{ABC})=\frac{2}{3}$, then
$F_{ik}(\ket{\phi}_{ABC})=\frac{2}{3}$ or
$F_{jk}(\ket{\phi}_{ABC})=\frac{2}{3}$.
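The closed forms in Eq. (24) are easy to evaluate numerically. As a sanity check (a sketch, not part of the proof), the GHZ state, i.e., $\alpha_{0}=\alpha_{4}=1/\sqrt{2}$ with all other amplitudes zero, gives $\tau=1$ and vanishing pairwise concurrences:

```python
import math

def tangle_and_concurrences(a0, a1, a2, a3, a4, theta):
    """Closed forms from Eq. (24) for the canonical state in Eq. (23)."""
    tau = 4 * a0**2 * a4**2
    C2_AB = 4 * a0**2 * a3**2
    C2_BC = (4 * a1**2 * a4**2 + 4 * a2**2 * a3**2
             - 8 * a1 * a2 * a3 * a4 * math.cos(theta))
    C2_CA = 4 * a0**2 * a2**2
    return tau, C2_AB, C2_BC, C2_CA

# GHZ state: alpha_0 = alpha_4 = 1/sqrt(2), all other amplitudes zero
s = 1 / math.sqrt(2)
tau, C2_AB, C2_BC, C2_CA = tangle_and_concurrences(s, 0.0, 0.0, 0.0, s, 0.0)
```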
### Proof of Lemma Three-qubit teleportation capability and its properties
It is sufficient to show that $\sqrt{\tau+C^{2}_{AB}}$,
$\sqrt{\tau+C^{2}_{BC}}$, and $\sqrt{\tau+C^{2}_{CA}}$ do not increase on
average under LOCC, thanks to Eq. (8). One important observation is that it is
always possible to decompose any local protocol into positive operator-valued
measures (POVMs) such that only one party implements operations on his/her
system. Moreover, we also remark that a generalized local POVM can be carried
out by a sequence of POVMs with two outcomes [34, 17]. Without loss of
generality, let us assume that Alice performs a POVM consisting of two elements,
say $A_{0}$ and $A_{1}$. By using the singular value decomposition, they can
be written as $A_{t}=U_{t}D_{t}V$, where $U_{t}$ and $V$ are unitary matrices,
and $D_{t}$ are diagonal matrices with entries $(a,b)$ and
$\left(\sqrt{1-a^{2}},\sqrt{1-b^{2}}\right)$, respectively, for some
$a,b\in[0,1]$. Here, the same unitary operation $V$ can be chosen for both $A_{0}$ and $A_{1}$ because together they form a POVM, i.e., $A_{0}^{\dagger}A_{0}+A_{1}^{\dagger}A_{1}=I$.
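The decomposition $A_{t}=U_{t}D_{t}V$ with a shared $V$ can be checked numerically. Below is a small sketch (the random unitaries are illustrative) verifying the completeness relation $A_{0}^{\dagger}A_{0}+A_{1}^{\dagger}A_{1}=I$ and the bound $ab+\sqrt{(1-a^{2})(1-b^{2})}\leq 1$ used later in Eq. (29):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(rng):
    # 2x2 unitary from the QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

a, b = rng.uniform(0, 1, size=2)
U0, U1, V = (random_unitary(rng) for _ in range(3))
D0 = np.diag([a, b])
D1 = np.diag([np.sqrt(1 - a**2), np.sqrt(1 - b**2)])
A0, A1 = U0 @ D0 @ V, U1 @ D1 @ V

completeness = A0.conj().T @ A0 + A1.conj().T @ A1   # should equal I
bound = a * b + np.sqrt((1 - a**2) * (1 - b**2))     # <= 1, cf. Eq. (29)
```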
Let $\ket{\phi}_{ABC}$ be an initial state of the form in Eq. (23). After
Alice’s POVM, we have
$\ket{\psi_{t}}_{ABC}=A_{t}\ket{\phi}_{ABC}/\sqrt{p_{t}}$, where
$p_{t}=\bra{\phi}A_{t}^{\dagger}A_{t}\ket{\phi}$. Let us calculate the three-
tangle and the concurrence of $\ket{\psi_{t}}_{ABC}$. Since the three-tangle
and the concurrence are invariant under local transformations, it suffices to
consider those of $D_{t}V\ket{\phi}_{ABC}/\sqrt{p_{t}}$ instead of
$\ket{\psi_{t}}_{ABC}$ itself. If we denote $v_{ij}=\bra{i}V\ket{j}$ and
$d_{t,i}=\bra{i}D_{t}\ket{i}$ for $i,j,t\in\\{0,1\\}$, then
$\displaystyle D_{t}V\ket{\phi}_{ABC}=$ $\displaystyle
d_{t,0}\left(v_{00}\alpha_{0}+v_{01}\alpha_{1}e^{\mathbf{i}\theta}\right)\ket{000}_{ABC}+d_{t,0}v_{01}\alpha_{2}\ket{001}_{ABC}+d_{t,0}v_{01}\alpha_{3}\ket{010}_{ABC}+d_{t,0}v_{01}\alpha_{4}\ket{011}_{ABC}$
(25)
$\displaystyle+d_{t,1}\left(v_{10}\alpha_{0}+v_{11}\alpha_{1}e^{\mathbf{i}\theta}\right)\ket{100}_{ABC}+d_{t,1}v_{11}\alpha_{2}\ket{101}_{ABC}+d_{t,1}v_{11}\alpha_{3}\ket{110}_{ABC}+d_{t,1}v_{11}\alpha_{4}\ket{111}_{ABC}.$
By definition, with some tedious calculations, we can obtain
$\displaystyle\tau\left(\ket{\psi_{t}}_{ABC}\right)=\frac{4}{p_{t}^{2}}\cdot
d_{t,0}^{2}d_{t,1}^{2}\alpha_{0}^{2}\alpha_{4}^{2},$ $\displaystyle
C^{2}_{A(BC)}\left(\ket{\psi_{t}}_{ABC}\right)=\frac{4}{p_{t}^{2}}\cdot
d_{t,0}^{2}d_{t,1}^{2}\alpha_{0}^{2}\left(\alpha_{2}^{2}+\alpha_{3}^{2}+\alpha_{4}^{2}\right),$
$\displaystyle
C^{2}_{B(CA)}\left(\ket{\psi_{t}}_{ABC}\right)=\frac{4}{p_{t}^{2}}\cdot\left(d_{t,0}^{2}d_{t,1}^{2}\alpha_{0}^{2}\alpha_{3}^{2}+g\left(\ket{\psi_{t}}_{ABC}\right)\right),$
$\displaystyle
C^{2}_{C(AB)}\left(\ket{\psi_{t}}_{ABC}\right)=\frac{4}{p_{t}^{2}}\cdot\left(d_{t,0}^{2}d_{t,1}^{2}\alpha_{0}^{2}\alpha_{2}^{2}+g\left(\ket{\psi_{t}}_{ABC}\right)\right),$
(26)
where
$g\left(\ket{\psi_{t}}_{ABC}\right)\equiv\left(d_{t,0}^{2}|v_{01}|^{2}+d_{t,1}^{2}|v_{11}|^{2}\right)\cdot\sum_{i=0}^{1}d_{t,i}^{2}\left|\alpha_{4}\left(v_{i0}\alpha_{0}+v_{i1}\alpha_{1}e^{\mathbf{i}\theta}\right)-v_{i1}\alpha_{2}\alpha_{3}\right|^{2}$
(27)
From Eqs. (9) and (26), we have
$\displaystyle\left(\tau+C^{2}_{AB}\right)\left(\ket{\psi_{t}}_{ABC}\right)=\frac{1}{p_{t}^{2}}\cdot
d_{t,0}^{2}d_{t,1}^{2}\cdot\left(\tau+C^{2}_{AB}\right)\left(\ket{\phi}_{ABC}\right),$
$\displaystyle\left(\tau+C^{2}_{BC}\right)\left(\ket{\psi_{t}}_{ABC}\right)=\frac{4}{p_{t}^{2}}\cdot
g\left(\ket{\psi_{t}}_{ABC}\right),$
$\displaystyle\left(\tau+C^{2}_{CA}\right)\left(\ket{\psi_{t}}_{ABC}\right)=\frac{1}{p_{t}^{2}}\cdot
d_{t,0}^{2}d_{t,1}^{2}\cdot\left(\tau+C^{2}_{CA}\right)\left(\ket{\phi}_{ABC}\right).$
(28)
One can readily check that
$\sqrt{\left(\tau+C^{2}_{ij}\right)\left(\ket{\phi}_{ABC}\right)}\geq\sum_{t=0}^{1}p_{t}\sqrt{\left(\tau+C^{2}_{ij}\right)\left(\ket{\psi_{t}}_{ABC}\right)},$
(29)
for $ij\in\\{AB,CA\\}$, since
$\sum_{t=0}^{1}d_{t,0}d_{t,1}=ab+\sqrt{(1-a^{2})(1-b^{2})}\leq 1$. Hence,
$\sqrt{\tau+C^{2}_{AB}}$ and $\sqrt{\tau+C^{2}_{CA}}$ do not increase on
average under LOCC for three-qubit pure states. Now, it remains to show that
$\sqrt{\tau+C^{2}_{BC}}$ does not increase on average under LOCC for three-
qubit pure states, or equivalently,
$\sqrt{\left(\tau+C^{2}_{BC}\right)\left(\ket{\phi}_{ABC}\right)}\geq\sum_{t=0}^{1}p_{t}\sqrt{\left(\tau+C^{2}_{BC}\right)\left(\ket{\psi_{t}}_{ABC}\right)}=\sum_{t=0}^{1}2\sqrt{g\left(\ket{\psi_{t}}_{ABC}\right)}.$
(30)
Observe that the modulus appearing in Eq. (27) can be written as
$\left|\alpha_{4}\left(v_{i0}\alpha_{0}+v_{i1}\alpha_{1}e^{\mathbf{i}\theta}\right)-v_{i1}\alpha_{2}\alpha_{3}\right|=\left|\left<w|v_{i}\right>\right|,$
(31)
where
$\ket{w}=\alpha_{0}\alpha_{4}\ket{0}+(\alpha_{1}\alpha_{4}e^{-\mathbf{i}\theta}-\alpha_{2}\alpha_{3})\ket{1}$
is an unnormalized vector and $\ket{v_{i}}=v_{i0}\ket{0}+v_{i1}\ket{1}$ for
$i=0,1$. In addition, since $\ket{v_{0}}$ and $\ket{v_{1}}$ are orthonormal,
we get
$\left|\left<w|v_{0}\right>\right|^{2}+\left|\left<w|v_{1}\right>\right|^{2}=||\ket{w}||^{2}=\frac{1}{4}\left(\tau+C^{2}_{BC}\right)\left(\ket{\phi}_{ABC}\right).$
(32)
Hence, we can reduce the desired inequality in Eq. (30) to
$||\ket{w}||\geq\sqrt{g\left(\ket{\psi_{0}}_{ABC}\right)}+\sqrt{g\left(\ket{\psi_{1}}_{ABC}\right)}.$
(33)
With simple algebra, it can be readily shown that this inequality is
equivalent to $\left(T||\ket{w}||^{2}-S\right)^{2}\geq 0$, where
$T=a^{2}|v_{01}|^{2}+b^{2}|v_{11}|^{2}$ and
$S=a^{2}\left|\left<w|v_{0}\right>\right|^{2}+b^{2}\left|\left<w|v_{1}\right>\right|^{2}$.
Therefore, Eq. (30) always holds, and so $\sqrt{\tau+C^{2}_{BC}}$ does not
increase on average under LOCC for three-qubit pure states.
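The inequality in Eq. (33) (equivalently, Eq. (30)) can also be spot-checked numerically. The sketch below samples random $V$, $a$, $b$, and an unnormalized $\ket{w}$, evaluates $g$ via Eqs. (27) and (31), and confirms $||\ket{w}||\geq\sqrt{g_{0}}+\sqrt{g_{1}}$; this is an illustration, not a substitute for the algebraic proof:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(rng):
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(z)
    return q

def g_value(d0, d1, V, w):
    # g from Eq. (27), with the modulus rewritten via Eq. (31);
    # rows of V are v_{ij} = <i|V|j>, so |v_i> = (v_i0, v_i1)
    pref = d0**2 * abs(V[0, 1])**2 + d1**2 * abs(V[1, 1])**2
    d2 = (d0**2, d1**2)
    return pref * sum(d2[i] * abs(np.vdot(w, V[i]))**2 for i in range(2))

violations = 0
for _ in range(300):
    V = random_unitary(rng)
    a, b = rng.uniform(0, 1, size=2)
    w = rng.normal(size=2) + 1j * rng.normal(size=2)   # unnormalized |w>
    g0 = g_value(a, b, V, w)
    g1 = g_value(np.sqrt(1 - a**2), np.sqrt(1 - b**2), V, w)
    if np.sqrt(g0) + np.sqrt(g1) > np.linalg.norm(w) + 1e-9:
        violations += 1
```

No violations occur, in line with the equivalence to $\left(T||\ket{w}||^{2}-S\right)^{2}\geq 0$.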
### Proof of Theorem GME measures based on three-qubit teleportation
capability
We here provide a rigorous proof that $\mathcal{T}_{min}$ and
$\mathcal{T}_{GM}$ satisfy the condition (iii). As in the proof of Lemma
Three-qubit teleportation capability and its properties, we assume that one
party performs a two-outcome POVM. If we have $\ket{\psi_{t}}_{ABC}$ with
probability $p_{t}$ after applying the POVM on $\ket{\phi}_{ABC}$, then it
follows from Lemma Three-qubit teleportation capability and its properties
that
$\mathcal{T}_{ij}\left(\ket{\phi}_{ABC}\right)\geq\sum_{t=0}^{1}p_{t}\mathcal{T}_{ij}\left(\ket{\psi_{t}}_{ABC}\right),$
(34)
for any distinct $i$ and $j$ in $\\{A,B,C\\}$. Without loss of generality, let
$\mathcal{T}_{AB}$ take the minimum, that is,
$\mathcal{T}_{min}\left(\ket{\phi}_{ABC}\right)=\mathcal{T}_{AB}\left(\ket{\phi}_{ABC}\right)$.
Then
$\mathcal{T}_{min}\left(\ket{\phi}_{ABC}\right)\geq\sum_{t=0}^{1}p_{t}\mathcal{T}_{AB}\left(\ket{\psi_{t}}_{ABC}\right)\geq\sum_{t=0}^{1}p_{t}\mathcal{T}_{min}\left(\ket{\psi_{t}}_{ABC}\right).$
(35)
Thus, $\mathcal{T}_{min}$ does not increase on average under LOCC for three-
qubit pure states. For $\mathcal{T}_{GM}$, we have
$\displaystyle\mathcal{T}_{GM}\left(\ket{\phi}_{ABC}\right)$
$\displaystyle\geq$
$\displaystyle\sqrt[3]{\sum_{t=0}^{1}p_{t}\mathcal{T}_{AB}\left(\ket{\psi_{t}}_{ABC}\right)\sum_{s=0}^{1}p_{s}\mathcal{T}_{BC}\left(\ket{\psi_{s}}_{ABC}\right)\sum_{r=0}^{1}p_{r}\mathcal{T}_{CA}\left(\ket{\psi_{r}}_{ABC}\right)}$
(36) $\displaystyle\geq$
$\displaystyle\sum_{t=0}^{1}\sqrt[3]{p_{t}\mathcal{T}_{AB}\left(\ket{\psi_{t}}_{ABC}\right)}\sqrt[3]{p_{t}\mathcal{T}_{BC}\left(\ket{\psi_{t}}_{ABC}\right)}\sqrt[3]{p_{t}\mathcal{T}_{CA}\left(\ket{\psi_{t}}_{ABC}\right)}$
$\displaystyle=$
$\displaystyle\sum_{t=0}^{1}p_{t}\mathcal{T}_{GM}\left(\ket{\psi_{t}}_{ABC}\right).$
The second inequality in Eq. (36) comes from Mahler’s inequality. Hence,
$\mathcal{T}_{GM}$ also does not increase on average under LOCC for three-
qubit pure states.
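Mahler's inequality, used in the second step of Eq. (36), states that the geometric mean is superadditive: $\sqrt[n]{\prod_{k}(x_{k}+y_{k})}\geq\sqrt[n]{\prod_{k}x_{k}}+\sqrt[n]{\prod_{k}y_{k}}$ for nonnegative entries. A quick numerical spot-check (not a proof) with $n=3$ factors, as in Eq. (36):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 3                      # three pair terms AB, BC, CA, as in Eq. (36)
violations = 0
for _ in range(1000):
    x = rng.uniform(0, 1, size=n)
    y = rng.uniform(0, 1, size=n)
    lhs = math.prod(x + y) ** (1 / n)                          # GM of sums
    rhs = math.prod(x) ** (1 / n) + math.prod(y) ** (1 / n)    # sum of GMs
    if lhs < rhs - 1e-12:
        violations += 1
```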
Now, let $\rho$ be a three-qubit mixed state and $\mathcal{T}$ represent
$\mathcal{T}_{min}$ or $\mathcal{T}_{GM}$. Assume that an ensemble
${\\{p_{l},\psi_{l}\\}}$ of $\rho$ attains the minimum of $\mathcal{T}(\rho)$,
that is,
$\mathcal{T}(\rho)=\sum_{l}p_{l}\mathcal{T}\left(\ket{\psi_{l}}\right).$ (37)
If we obtain
$\ket{\phi_{l,t}}=\frac{A_{t}\ket{\psi_{l}}}{\|A_{t}\ket{\psi_{l}}\|}$ (38)
after a POVM, it follows from the monotonicity of $\mathcal{T}$ on pure states
that
$\mathcal{T}(\rho)\geq\sum_{l,t}p_{l}q_{l,t}\mathcal{T}\left(\ket{\phi_{l,t}}\right),$
(39)
where $q_{l,t}=\|A_{t}\ket{\psi_{l}}\|^{2}$. By the linearity of the POVM, the resulting state is
$\sigma_{t}=\frac{1}{r_{t}}\sum_{l}p_{l}q_{l,t}\ket{\phi_{l,t}}\bra{\phi_{l,t}}$
(40)
with probability $r_{t}=\sum_{l}p_{l}q_{l,t}$. By the definition of the convex
roof extension, we finally have
$\mathcal{T}(\sigma_{t})\leq\frac{1}{r_{t}}\sum_{l}p_{l}q_{l,t}\mathcal{T}\left(\ket{\phi_{l,t}}\right),$
(41)
which completes the proof.
### Proof of Proposition GME and $N$-qubit teleportation capability
To prove the proposition, we first introduce the following lemma. Let
$\ket{\phi_{0}}$, $\ket{\phi_{1}}$, $\ket{\psi_{0}}$, and $\ket{\psi_{1}}$ be qubit states with $|\inn{\phi_{0}}{\phi_{1}}|\neq 1$. Define
$\ket{\xi}_{AB}\equiv\frac{1}{\sqrt{P}}\left(\alpha_{0}\ket{\phi_{0}}_{A}\ket{\psi_{0}}_{B}+\alpha_{1}\ket{\phi_{1}}_{A}\ket{\psi_{1}}_{B}\right),$
(42)
where $\alpha_{0},\alpha_{1}\in\mathbb{C}$ with
$|\alpha_{0}|^{2}+|\alpha_{1}|^{2}=1$, $|\alpha_{i}|\neq 0$ for $i=0,1$, and
$P\equiv
1+2Re\left(\alpha_{0}^{*}\alpha_{1}\inn{\phi_{0}}{\phi_{1}}\inn{\psi_{0}}{\psi_{1}}\right)$
is the normalization factor. If $\ket{\xi}_{AB}$ is separable, then
$\ket{\psi_{0}}$ is equivalent to $\ket{\psi_{1}}$ up to a global phase.
#### Proof of Lemma Proof of Proposition
Since $\ket{\xi}_{AB}$ is separable, we have
$\mathrm{Tr}\left(\sigma_{B}^{2}\right)=1$, where $\sigma_{B}$ is the reduced density matrix of $\ket{\xi}_{AB}$ on system $B$. Let us decompose $\ket{\phi_{1}}$ as
$\ket{\phi_{1}}=\inn{\phi_{0}}{\phi_{1}}\ket{\phi_{0}}+\inn{\phi_{0}^{\perp}}{\phi_{1}}\ket{\phi_{0}^{\perp}}$,
where $\inn{\phi_{0}^{\perp}}{\phi_{0}}=0$. Let
$\beta=\inn{\phi_{0}}{\phi_{1}}$. Then
$P\sigma_{B}=|\alpha_{0}|^{2}\ket{\psi_{0}}\bra{\psi_{0}}+|\alpha_{1}|^{2}\ket{\psi_{1}}\bra{\psi_{1}}+\alpha_{0}^{*}\alpha_{1}\beta\ket{\psi_{1}}\bra{\psi_{0}}+\alpha_{0}\alpha_{1}^{*}\beta^{*}\ket{\psi_{0}}\bra{\psi_{1}}.$
(43)
By straightforward calculations, we have
$\displaystyle P^{2}$ $\displaystyle=$ $\displaystyle
P^{2}\mathrm{Tr}\left(\sigma_{B}^{2}\right)$ (44) $\displaystyle=$
$\displaystyle
P^{2}-2|\alpha_{0}|^{2}|\alpha_{1}|^{2}(1-|\beta|^{2})(1-|\gamma|^{2}),$
where $\gamma=\inn{\psi_{0}}{\psi_{1}}$. Since $|\alpha_{i}|\neq 0$ for
$i=0,1$ and $|\beta|\neq 1$, we have $|\gamma|=1$. Therefore, $\ket{\psi_{0}}$
is equivalent to $\ket{\psi_{1}}$ up to a global phase.
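The purity relation in Eq. (44) is easy to confirm numerically. The sketch below builds a random $\ket{\xi}_{AB}$ as in Eq. (42), traces out system $A$, and compares $\mathrm{Tr}(\sigma_{B}^{2})$ with the closed form:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_qubit(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

phi0, phi1, psi0, psi1 = (random_qubit(rng) for _ in range(4))
alpha = rng.normal(size=2) + 1j * rng.normal(size=2)
alpha /= np.linalg.norm(alpha)

# |xi> from Eq. (42): alpha_0 |phi_0>|psi_0> + alpha_1 |phi_1>|psi_1>, normalized
ket = alpha[0] * np.kron(phi0, psi0) + alpha[1] * np.kron(phi1, psi1)
P = np.vdot(ket, ket).real                    # normalization factor P
xi = ket / np.sqrt(P)

rho = np.outer(xi, xi.conj())
sigma_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # partial trace over A
purity = np.trace(sigma_B @ sigma_B).real

beta = np.vdot(phi0, phi1)                    # <phi_0|phi_1>
gamma = np.vdot(psi0, psi1)                   # <psi_0|psi_1>
# Eq. (44): P^2 Tr(sigma_B^2) = P^2 - 2|a_0|^2 |a_1|^2 (1-|beta|^2)(1-|gamma|^2)
predicted = (1 - 2 * abs(alpha[0])**2 * abs(alpha[1])**2
             * (1 - abs(beta)**2) * (1 - abs(gamma)**2) / P**2)
```

Since the sampled states are generic, $|\beta|<1$ and $|\gamma|<1$, so the computed purity falls strictly below 1, consistent with the lemma.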
#### Proof of Proposition GME and $N$-qubit teleportation capability
Without loss of generality, let us assume that
$F^{(4)}_{AB}(\ket{\phi}_{ABCD})=\frac{2}{3}$. Then, it can be written as
$\ket{\phi}_{ABCD}=\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\alpha_{1}\ket{\eta_{1}}_{A}\ket{\zeta_{1}}_{B}\ket{01}_{CD}+\alpha_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{2}}_{B}\ket{10}_{CD}+\alpha_{3}\ket{\eta_{3}}_{A}\ket{\zeta_{3}}_{B}\ket{11}_{CD},$
(45)
where $\alpha_{i}\in\mathbb{C}$ with $\sum_{i=0}^{3}|\alpha_{i}|^{2}=1$. If
$|\alpha_{i}|=1$ for some $i$, then we clearly have $\ket{\phi}_{ABCD}\in
SEP(G_{A}:G_{B})$ for any bipartition $\\{G_{A}|G_{B}\\}$.
If $|\alpha_{i}|^{2}+|\alpha_{j}|^{2}=1$ and $\alpha_{i}\alpha_{j}\neq 0$, where $(i,j)\in\\{(0,1),(0,2),(1,3),(2,3)\\}$, then the results for the three-qubit system can be applied immediately. For example, let
$|\alpha_{0}|^{2}+|\alpha_{2}|^{2}=1$, then
$\ket{\phi}_{ABCD}=\ket{\phi^{\prime}}_{ABC}\ket{0}_{D}$, where
$\ket{\phi^{\prime}}_{ABC}=\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{0}_{C}+\alpha_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{2}}_{B}\ket{1}_{C}$.
Since $F^{(4)}_{AB}(\ket{\phi}_{ABCD})=\frac{2}{3}$ and $\ket{\phi}_{ABCD}\in
SEP(ABC:D)$, we have $F^{(3)}_{AB}(\ket{\phi^{\prime}}_{ABC})=\frac{2}{3}$.
Hence, $\ket{\phi^{\prime}}_{ABC}\in SEP(A:BC)$ or
$\ket{\phi^{\prime}}_{ABC}\in SEP(AC:B)$, and thus $\ket{\phi}_{ABCD}\in
SEP(G_{A}:G_{B})$ for some bipartition $\\{G_{A}|G_{B}\\}$. If
$|\alpha_{0}|^{2}+|\alpha_{3}|^{2}=1$ and $\alpha_{0}\alpha_{3}\neq 0$, then
$\ket{\phi}_{ABCD}=\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\alpha_{3}\ket{\eta_{3}}_{A}\ket{\zeta_{3}}_{B}\ket{11}_{CD}.$
(46)
In this case, after measuring each system $C$ and $D$ in the $X$ basis, we
have $\rho_{AB}^{H_{C}H_{D},J_{CD}=00}=\ket{\psi}_{AB}\bra{\psi}$ with
$\ket{\psi}_{AB}=\frac{1}{\sqrt{P}}\left(\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}+\alpha_{3}\ket{\eta_{3}}_{A}\ket{\zeta_{3}}_{B}\right)$,
where $H$ is the Hadamard operator and $P$ is the normalization factor. Since
$\rho_{AB}^{H_{C}H_{D},J_{CD}=00}$ is separable due to the fact that
$F^{(4)}_{AB}(\ket{\phi}_{ABCD})=\frac{2}{3}$, we apply Lemma Proof of Proposition to show separability of $\ket{\phi}$ with respect to some bipartition $\\{G_{A}|G_{B}\\}$. In the case of $|\alpha_{1}|^{2}+|\alpha_{2}|^{2}=1$ with
$\alpha_{1}\alpha_{2}\neq 0$, we can apply the same logic.
Next, consider the case $|\alpha_{i}|^{2}+|\alpha_{j}|^{2}+|\alpha_{k}|^{2}=1$
with $\alpha_{i}\alpha_{j}\alpha_{k}\neq 0$. From now on, for simplicity, we write $\ket{\phi}\sim\ket{\phi^{\prime}}$ to mean that $\ket{\phi}$ and $\ket{\phi^{\prime}}$ are equivalent up to a global phase. By
symmetry, it is enough to deal with the case of
$\ket{\phi}_{ABCD}=\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\alpha_{1}\ket{\eta_{1}}_{A}\ket{\zeta_{1}}_{B}\ket{01}_{CD}+\alpha_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{2}}_{B}\ket{10}_{CD}.$
(47)
Since $\rho_{AB}^{I_{C}H_{D},J_{CD}=00}$ and
$\rho_{AB}^{H_{C}I_{D},J_{CD}=00}$ are separable, where $I$ is the identity
operator, we have ($\ket{\eta_{0}}\sim\ket{\eta_{1}}$ or
$\ket{\zeta_{0}}\sim\ket{\zeta_{1}}$) and ($\ket{\eta_{0}}\sim\ket{\eta_{2}}$
or $\ket{\zeta_{0}}\sim\ket{\zeta_{2}}$), respectively, by Lemma Proof of
Proposition. If $\ket{\eta_{0}}\sim\ket{\eta_{1}}$ and
$\ket{\eta_{0}}\sim\ket{\eta_{2}}$, then $\ket{\phi}_{ABCD}\in SEP(A:BCD)$.
Similarly, $\ket{\zeta_{0}}\sim\ket{\zeta_{1}}$ and
$\ket{\zeta_{0}}\sim\ket{\zeta_{2}}$ implies $\ket{\phi}_{ABCD}\in
SEP(ACD:B)$. Let $\ket{\eta_{0}}\sim\ket{\eta_{1}}$ and
$\ket{\zeta_{0}}\sim\ket{\zeta_{2}}$. Then there exist real numbers $r_{1}$
and $s_{2}$ such that $\ket{\eta_{0}}=e^{ir_{1}}\ket{\eta_{1}}$ and
$\ket{\zeta_{0}}=e^{is_{2}}\ket{\zeta_{2}}$. Hence, it can be written as
$\ket{\phi}_{ABCD}=\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\beta_{1}\ket{\eta_{0}}_{A}\ket{\zeta_{1}}_{B}\ket{01}_{CD}+\beta_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B}\ket{10}_{CD},$
(48)
where $|\beta_{i}|=|\alpha_{i}|$ for $i=1,2$. Note that
$\rho_{AB}^{H_{C}H_{D},J_{CD}=00}=\ket{\psi}_{AB}\bra{\psi}$, where
$\ket{\psi}_{AB}=\frac{1}{\sqrt{P}}(\alpha_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}+\beta_{1}\ket{\eta_{0}}_{A}\ket{\zeta_{1}}_{B}+\beta_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B})$
(49)
with the normalization factor $P$. We can rewrite
$\ket{\psi}_{AB}=\frac{1}{\sqrt{P^{\prime}}}(\gamma_{0}\ket{\eta_{0}}_{A}\ket{\zeta^{\prime}}_{B}+\gamma_{1}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B})$
(50)
with some nonzero values $\gamma_{0},\gamma_{1}\in\mathbb{C}$ and
$P^{\prime}\in\mathbb{R}$. By applying Lemma Proof of Proposition once again,
we have $\ket{\eta_{0}}\sim\ket{\eta_{2}}$ or
$\ket{\zeta^{\prime}}\sim\ket{\zeta_{0}}$. In the former case, we get
$\ket{\phi}_{ABCD}\in SEP(A:BCD)$. In the latter case,
$\ket{\zeta_{0}}\sim\ket{\zeta_{1}}$ since $\ket{\zeta^{\prime}}$ is a linear
combination of $\ket{\zeta_{0}}$ and $\ket{\zeta_{1}}$, and thus
$\ket{\phi}_{ABCD}\in SEP(ACD:B)$.
Let us consider the case of $\alpha_{i}\neq 0$ for all $i$. Since
$\rho_{AB}^{I_{C}H_{D},J_{CD}=00}$, $\rho_{AB}^{H_{C}I_{D},J_{CD}=00}$,
$\rho_{AB}^{H_{C}I_{D},J_{CD}=01}$, and $\rho_{AB}^{I_{C}H_{D},J_{CD}=10}$ are
separable, we have
$\displaystyle a_{1}:\ket{\eta_{0}}\sim\ket{\eta_{1}}\leavevmode\nobreak\
\leavevmode\nobreak\ \rm{or}\leavevmode\nobreak\ \leavevmode\nobreak\
b_{1}:\ket{\zeta_{0}}\sim\ket{\zeta_{1}},$ $\displaystyle
a_{2}:\ket{\eta_{0}}\sim\ket{\eta_{2}}\leavevmode\nobreak\
\leavevmode\nobreak\ \rm{or}\leavevmode\nobreak\ \leavevmode\nobreak\
b_{2}:\ket{\zeta_{0}}\sim\ket{\zeta_{2}},$ $\displaystyle
a_{3}:\ket{\eta_{1}}\sim\ket{\eta_{3}}\leavevmode\nobreak\
\leavevmode\nobreak\ \rm{or}\leavevmode\nobreak\ \leavevmode\nobreak\
b_{3}:\ket{\zeta_{1}}\sim\ket{\zeta_{3}},$ $\displaystyle
a_{4}:\ket{\eta_{2}}\sim\ket{\eta_{3}}\leavevmode\nobreak\
\leavevmode\nobreak\ \rm{or}\leavevmode\nobreak\ \leavevmode\nobreak\
b_{4}:\ket{\zeta_{2}}\sim\ket{\zeta_{3}},$ (51)
respectively. Hence, there are 16 possible cases.
For a case in
$\\{a_{1}a_{2}a_{3}a_{4},a_{1}a_{2}a_{3}b_{4},a_{1}a_{2}b_{3}a_{4},a_{1}b_{2}a_{3}a_{4},b_{1}a_{2}a_{3}a_{4}\\}$,
we have $\ket{\eta_{0}}\sim\ket{\eta_{i}}$ for all $i$, and so $\ket{\phi}_{ABCD}\in
SEP(A:BCD)$. If a case is in
$\\{b_{1}b_{2}b_{3}b_{4},b_{1}b_{2}b_{3}a_{4},b_{1}b_{2}a_{3}b_{4},b_{1}a_{2}b_{3}b_{4},a_{1}b_{2}b_{3}b_{4}\\}$,
we obtain $\ket{\phi}_{ABCD}\in SEP(ACD:B)$. For the case
$a_{1}a_{2}b_{3}b_{4}$, we have
$\ket{\eta_{0}}\sim\ket{\eta_{1}}\sim\ket{\eta_{2}}$ and
$\ket{\zeta_{1}}\sim\ket{\zeta_{2}}\sim\ket{\zeta_{3}}$. In this case, after
applying the method used in the case of
$|\alpha_{i}|^{2}+|\alpha_{j}|^{2}+|\alpha_{k}|^{2}=1$ with
$\alpha_{i}\alpha_{j}\alpha_{k}\neq 0$ to $\rho_{AB}^{H_{C}H_{D},J_{CD}=00}$,
we can show separability. For the cases $a_{1}b_{2}a_{3}b_{4}$,
$b_{1}b_{2}a_{3}a_{4}$, and $b_{1}a_{2}b_{3}a_{4}$, it can be proven in the
same way.
The remaining cases are $a_{1}b_{2}b_{3}a_{4}$ and $b_{1}a_{2}a_{3}b_{4}$. By
symmetry, it suffices to consider the case $a_{1}b_{2}b_{3}a_{4}$. In this
case, the state can be written as
$\ket{\phi}_{ABCD}=\beta_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\beta_{1}\ket{\eta_{0}}_{A}\ket{\zeta_{1}}_{B}\ket{01}_{CD}+\beta_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B}\ket{10}_{CD}+\beta_{3}\ket{\eta_{2}}_{A}\ket{\zeta_{1}}_{B}\ket{11}_{CD}$
(52)
for some $\beta_{i}\in\mathbb{C}$ with $|\alpha_{i}|=|\beta_{i}|$ for all $i$.
Then $\rho_{AB}^{H_{C}H_{D},J_{CD}=00}=\ket{\psi}_{AB}\bra{\psi},$ where
$\ket{\psi}_{AB}=\frac{1}{\sqrt{P}}(\beta_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}+\beta_{1}\ket{\eta_{0}}_{A}\ket{\zeta_{1}}_{B}+\beta_{2}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B}+\beta_{3}\ket{\eta_{2}}_{A}\ket{\zeta_{1}}_{B})$
(53)
with a normalization factor $P$. We can rewrite
$\ket{\psi}_{AB}=\frac{1}{\sqrt{P^{\prime}}}(\gamma_{0}\ket{\eta_{0}}_{A}\ket{\zeta^{\prime}}_{B}+\gamma_{2}\ket{\eta_{2}}_{A}\ket{\zeta^{\prime\prime}}_{B})$
(54)
with nonzero values $\gamma_{0}$, $\gamma_{2}$, and $P^{\prime}$, where
$\ket{\zeta^{\prime}}$ and $\ket{\zeta^{\prime\prime}}$ are linear
combinations of $\ket{\zeta_{0}}$ and $\ket{\zeta_{1}}$. Since
$\rho_{AB}^{H_{C}H_{D},J_{CD}=00}$ is separable, by applying the Lemma Proof
of Proposition, we have $\ket{\eta_{0}}\sim\ket{\eta_{2}}$ or
$\ket{\zeta^{\prime}}\sim\ket{\zeta^{\prime\prime}}$. In the former case, we
have $\ket{\phi}_{ABCD}\in SEP(A:BCD)$. Let us consider the latter case. We
note that
$\ket{\zeta^{\prime}}=\delta_{0}\ket{\zeta_{0}}_{B}+\delta_{1}\ket{\zeta_{1}}_{B}$
and
$\ket{\zeta^{\prime\prime}}=\delta_{2}\ket{\zeta_{0}}_{B}+\delta_{3}\ket{\zeta_{1}}_{B}$
for some $\delta_{i}$. Since
$\ket{\zeta^{\prime}}\sim\ket{\zeta^{\prime\prime}}$, there exists $\theta$
such that
$\delta_{0}\ket{\zeta_{0}}_{B}+\delta_{1}\ket{\zeta_{1}}_{B}=e^{i\theta}(\delta_{2}\ket{\zeta_{0}}_{B}+\delta_{3}\ket{\zeta_{1}}_{B})$.
If $\ket{\zeta_{0}}\sim\ket{\zeta_{1}}$, then $\ket{\phi}_{ABCD}\in SEP(ACD:B)$. In the
other case, we have $\delta_{0}=e^{i\theta}\delta_{2}$ and
$\delta_{1}=e^{i\theta}\delta_{3}$ since $\ket{\zeta_{0}}$ and
$\ket{\zeta_{1}}$ are linearly independent. Hence, it can be rewritten as
$\ket{\phi}_{ABCD}=\tilde{\gamma}_{0}\delta_{0}\ket{\eta_{0}}_{A}\ket{\zeta_{0}}_{B}\ket{00}_{CD}+\tilde{\gamma}_{0}\delta_{1}\ket{\eta_{0}}_{A}\ket{\zeta_{1}}_{B}\ket{01}_{CD}+\tilde{\gamma}_{1}\delta_{0}\ket{\eta_{2}}_{A}\ket{\zeta_{0}}_{B}\ket{10}_{CD}+\tilde{\gamma}_{1}\delta_{1}\ket{\eta_{2}}_{A}\ket{\zeta_{1}}_{B}\ket{11}_{CD}$
(55)
with some complex values $\tilde{\gamma}_{i}$ and $\delta_{j}$. Reordering the systems, we obtain
$(\tilde{\gamma}_{0}\ket{\eta_{0}}_{A}\ket{0}_{C}+\tilde{\gamma}_{1}\ket{\eta_{2}}_{A}\ket{1}_{C})(\delta_{0}\ket{\zeta_{0}}_{B}\ket{0}_{D}+\delta_{1}\ket{\zeta_{1}}_{B}\ket{1}_{D}),$
(56)
which means $\ket{\phi}_{ABCD}\in SEP(AC:BD)$.
## Data availability
No datasets were generated or analysed during the current study. The results
were calculated by hand. Correspondence and requests for materials should be addressed to E.B. or S.L.
## References
* [1] Bennett, C. H. _et al._ Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. _Phys. Rev. Lett._ 70, 1895 (1993).
* [2] Ekert, A. K. Quantum cryptography based on Bell’s theorem. _Phys. Rev. Lett._ 67, 661 (1991).
* [3] Karlsson, A. & Bourennane, M. Quantum teleportation using three-particle entanglement. _Phys. Rev. A_ 58, 4394 (1998).
* [4] Harraz, S., Cong, S. & Nieto, J. J. Optimal tripartite quantum teleportation protocol through noisy channels. _Quantum Information Processing_ 22 (2023).
* [5] Chen, K. & Lo, H.-K. Conference key agreement and quantum sharing of classical secrets with noisy GHZ states. In _Proceedings. International Symposium on Information Theory, 2005. ISIT 2005._ , 1607–1611 (IEEE, 2005).
* [6] Hillery, M., Bužek, V. & Berthiaume, A. Quantum secret sharing. _Phys. Rev. A_ 59, 1829 (1999).
* [7] Yeo, Y. & Chua, W. K. Teleportation and dense coding with genuine multipartite entanglement. _Phys. Rev. Lett._ 96, 060502 (2006).
* [8] Das, S., Bäuml, S., Winczewski, M. & Horodecki, K. Universal limitations on quantum key distribution over a network. _Phys. Rev. X_ 11, 041016 (2021).
* [9] Briegel, H. J., Browne, D. E., Dür, W., Raussendorf, R. & Van den Nest, M. Measurement-based quantum computation. _Nature Physics_ 5, 19–26 (2009).
* [10] Giovannetti, V., Lloyd, S. & Maccone, L. Quantum-enhanced measurements: beating the standard quantum limit. _Science_ 306, 1330–1336 (2004).
* [11] de Oliveira, T. R., Rigolin, G. & de Oliveira, M. C. Genuine multipartite entanglement in quantum phase transitions. _Physical Review A_ 73, 010305 (2006).
* [12] Montakhab, A. & Asadian, A. Multipartite entanglement and quantum phase transitions in the one-, two-, and three-dimensional transverse-field ising model. _Physical Review A_ 82, 062313 (2010).
* [13] Bruß, D., Datta, N., Ekert, A., Kwek, L. C. & Macchiavello, C. Multipartite entanglement in quantum spin chains. _Physical Review A_ 72, 014301 (2005).
* [14] Bennett, C. H., DiVincenzo, D. P., Smolin, J. A. & Wootters, W. K. Mixed-state entanglement and quantum error correction. _Phys. Rev. A_ 54, 3824 (1996).
* [15] Hill, S. A. & Wootters, W. K. Entanglement of a pair of quantum bits. _Phys. Rev. Lett._ 78, 5022 (1997).
* [16] Wootters, W. K. Entanglement of formation of an arbitrary state of two qubits. _Phys. Rev. Lett._ 80, 2245 (1998).
* [17] Dür, W., Vidal, G. & Cirac, J. I. Three qubits can be entangled in two inequivalent ways. _Phys. Rev. A_ 62, 062314 (2000).
* [18] Ma, Z.-H. _et al._ Measure of genuine multipartite entanglement with computable lower bounds. _Phys. Rev. A_ 83, 062325 (2011).
* [19] Li, Y. & Shang, J. Geometric mean of bipartite concurrences as a genuine multipartite entanglement measure. _Phys. Rev. Research_ 4, 023059 (2022).
* [20] Gühne, O. & Tóth, G. Entanglement detection. _Phys. Rep._ 474, 1–75 (2009).
* [21] Xie, S. & Eberly, J. H. Triangle measure of tripartite entanglement. _Phys. Rev. Lett._ 127, 040403 (2021).
* [22] Ge, X., Liu, L. & Cheng, S. Tripartite entanglement measure under local operations and classical communication. _Phys. Rev. A_ 107, 032405 (2023).
* [23] Popescu, S. Bell’s inequalities versus teleportation: What is nonlocality? _Phys. Rev. Lett._ 72, 797 (1994).
* [24] Horodecki, R., Horodecki, M. & Horodecki, P. Teleportation, bell’s inequalities and inseparability. _Phys. Lett. A_ 222, 21 (1996).
* [25] Joo, J., Park, Y.-J., Oh, S. & Kim, J. Quantum teleportation via a w state. _New J. Phys._ 5, 136 (2003).
* [26] Lee, S., Joo, J. & Kim, J. Entanglement of three-qubit pure states in terms of teleportation capability. _Phys. Rev. A_ 72, 024302 (2005).
* [27] Horodecki, M., Horodecki, P. & Horodecki, R. General teleportation channel, singlet fraction, and quasidistillation. _Phys. Rev. A_ 60, 1888 (1999).
* [28] Badziag, P., Horodecki, M., Horodecki, P. & Horodecki, R. Local environment can enhance fidelity of quantum teleportation. _Phys. Rev. A_ 62, 012311 (2000).
* [29] Massar, S. & Popescu, S. Optimal extraction of information from finite quantum ensembles. _Phys. Rev. Lett._ 74, 1259 (1995).
* [30] Coffman, V., Kundu, J. & Wootters, W. K. Distributed entanglement. _Phys. Rev. A_ 61, 052306 (2000).
* [31] Zhou, L. & Sheng, Y.-B. One-step device-independent quantum secure direct communication. _Sci. China Phys. Mech. Astron._ 65 (2022).
* [32] Shi, W.-M., Bai, M.-X., Zhou, Y.-H. & Yang, Y.-G. Controlled quantum teleportation based on quantum walks. _Quantum Information Processing_ 22 (2023).
* [33] Acín, A. _et al._ Generalized schmidt decomposition and classification of three-quantum-bit states. _Phys. Rev. Lett._ 85, 1560 (2000).
* [34] Acín, A., Jané, E., Dür, W. & Vidal, G. Optimal distillation of a greenberger-horne-zeilinger state. _Phys. Rev. Lett._ 85, 4811 (2000).
## Acknowledgements
M.C. acknowledges support from the National Research Foundation (NRF) of Korea
grant funded by the Korea Government (Grant No. NRF-2022M3K2A1083890), E.B.
acknowledges support from the NRF of Korea grant funded by the Ministry of
Science and ICT (MSIT) (Grant No. NRF-2022R1C1C2006396), and S.L. acknowledges
support from the NRF grant funded by the MSIT (Grant No.
NRF-2022R1F1A1068197), the MSIT, Korea, under the Information Technology
Research Center support program (Grant No. IITP-2023-2018-0-01402) supervised
by the Institute for Information and Communications Technology Planning and
Evaluation and the Creation of the Quantum Information Science R$\&$D
Ecosystem (Grant No. 2022M3H3A106307411) through the NRF funded by the MSIT.
## Author contributions statement
M.C., E.B. and S.L. conceived the idea. M.C. performed the calculations and
the proofs, and E.B. and S.L. checked them. M.C. wrote the main manuscript,
and E.B. and S.L. improved the manuscript. All authors contributed to the
discussion and reviewed the manuscript.
## Competing interests
The authors declare no competing interests.
Characterizing Engagement Dynamics across Topics on Facebook
Gabriele Etta1, Emanuele Sangiorgio2, Niccolò Di Marco3, Michele Avalle1,
Antonio Scala4, Matteo Cinelli1, Walter Quattrociocchi1
1 Center of Data Science and Complexity for Society, Department of Computer
Science, Sapienza Università di Roma
2 Department of Social and Economic Sciences, Sapienza Università di Roma
3 Department of Mathematics and Computer Science, University of Florence,
Italy
4 ISC-CNR UoS Sapienza, Rome, Italy
*<EMAIL_ADDRESS>
## Abstract
Social media platforms have profoundly changed how users consume and digest
information and, thus, how the popularity of topics evolves. In this paper, we
explore the interplay between the virality of controversial topics and how
they may trigger heated discussions and eventually increase users’
polarization. We perform a quantitative analysis on Facebook by collecting
$\sim 57M$ posts from $\sim 2M$ pages and groups between 2018 and 2022,
focusing on engaging topics involving scandals, tragedies, and social and
political issues. Using logistic functions, we quantitatively assess the
evolution of these topics, finding similar patterns in their engagement
dynamics. Finally, we show that initial burstiness may predict the rise of
users’ future adverse reactions, regardless of the discussed topic.
## Introduction
The advent of social media platforms changed how users consume information
online [1, 2, 3, 4]. The micro-blogging features on Twitter and Facebook,
combined with a direct interaction between news producers and consumers, have
remarkably affected how people get informed, shape their own opinions, and
debate with other peers online [5, 6, 7]. Over the years, following the
business model of social media platforms, news outlets and producers attempted
to maximize the time spent by users on their content [8, 9], giving birth to
the concept of attention economy [10]. The term refers to the users’ limited
capability and time to process all information they interact with [11, 12,
13]. The transition toward a news ecosystem shaped by social media platforms
unveiled patterns in information consumption at multiple scales [14, 15],
which contributed to the emergence of polarization and the formation of
like-minded groups called echo chambers [16, 17, 18]. Within echo chambers,
which are characterized by homophily in the interaction network and a bias in
information diffusion towards like-minded peers, selective exposure [19] is a
significant driver of news consumption [17]. The combination of echo chambers
and selective exposure makes users more likely to ignore dissenting
information [20] and to interact with narratives adhering to their point of
view [15, 21].
Several studies explored the existence of these mechanisms in many topics
concerning political elections, public health, climate change, and
trustworthiness of the news sources [15, 21, 22, 23, 24, 25, 26, 27, 28, 29].
Findings indicate that neither the topic nor the quality of information
explains the users’ opinion-formation process. Instead, several studies observed how
the virality of discussions can increase the likelihood of inducing
polarization, hate speech, and toxic behaviors [30, 31, 32], highlighting how
recommendation algorithms may have a role in shaping the news diet of users.
Therefore, it is necessary to provide a better understanding of how user
interest evolves in online debates. To achieve this goal, we provide a
quantitative assessment of the dynamics underlying user interest in news
articles about different topics. In this paper, we analyze the engagement
patterns produced by $\sim 57M$ posts on Facebook related to $\sim 300$
topics, involving a total of $\sim 2M$ posting pages and groups over a period
that ranges from $2018$ to $2022$. We first provide a quantitative assessment
of topics’ resonance through time, extracting insightful parameters from their
engagement evolution. Then, we exploit the obtained parameters by assessing
relationships with the sentiment expressed by users through their positive and
negative reactions. Our results show that topics are generally characterized
by an interest that steadily increases from the appearance of the first
post. We find that topics’ interactions grow with sustained intensity, even
over prolonged periods, indicating that interest is a cumulative process that
takes time. We statistically validate this result by comparing parameters
across topic categories, discovering no differences in the evolution of the
engagement. Indeed, regardless of their category, topics keep users engaged
steadily over time, and their lifetime progression thus seems unrelated to
their thematic field. Finally, we find that topics with sudden virality tend
to trigger more controversial and heterogeneous interactions. In turn, topics
with a steady evolution exhibit more positive and homogeneous reaction types.
This difference in the sentiment of reactions, and the protracted duration of
topics’ lifetimes, are both outcomes consistent with the emergence of
selective exposure as a driver of news consumption.
## Materials and Methods
This section describes the data collection process, the topic extraction
process, the models and the metrics employed in assessing collective
attention.
### Overview of the data collection process
The data collection process comprises several parts, as described in Fig. 1.
We start by creating a sample of news articles from the GDELT event database
[33], and then we process the articles’ text to obtain a set of representative
terms. Consequently, we apply the Louvain community detection algorithm [34]
on the co-occurrence term network to identify the topics of interest. The
terms representing these topics will serve as input for the collection of
posts from Facebook.
Figure 1: Summary of the analysis workflow followed in the current study. News
articles are collected from the GDELT Database, and their corpus is extracted,
cleaned and analyzed to retrieve the most representative terms. The co-
occurrence network built upon these terms serves as an input for the Louvain
community detection algorithm to identify keyword clusters. Independent
labellers then analyze these clusters to identify the subset of words that
represent the topic under consideration, which are then used on CrowdTangle to
retrieve the Facebook posts relating to those events.
#### News Extraction from GDELT
The GDELT (Global Database of Events, Language, and Tone) Project [35],
powered by Google Jigsaw, is a database of global human society which monitors
the world’s broadcast, print, and web news from nearly every corner of every
country in more than 100 languages. It identifies the people, locations,
organisations, themes, sources, emotions, counts, quotes, images and events
driving our global society every second of every day [36]. We gathered news
articles from the GDELT 2.0 Event Database [33], which records the world’s
breaking events every 15 minutes and translates the corresponding news
articles from 65 languages, representing 98.4% of its daily non-English
monitoring volume [37]. The analysis covers the period between $1/1/2018$ and
$13/5/2022$, collecting $50$ news articles each week for a total of $\sim
79K$ articles.
#### Extracting representative keywords from news articles
To clean and extract the most representative keywords of each news article, we
employed the newspaper3k Python package [38]. We initially extracted words
from the body of the article, excluding stopwords and numbers. Then, we
computed the word frequency $f(w,i)$ for each word $w$ in article $i$.
Finally, we sorted words in descending order according to their frequency,
keeping the top 10 most frequent words.
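The frequency-based keyword step above can be sketched in a few lines of Python; `top_keywords` is a hypothetical helper (the authors use newspaper3k for text extraction), and the stopword set here is a tiny illustrative stand-in for a full list:

```python
import re
from collections import Counter

# Illustrative stopword subset; a real pipeline would use a full list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is", "for"}

def top_keywords(body: str, k: int = 10) -> list[str]:
    """Return the k most frequent non-stopword words of an article body.
    Numbers are excluded by matching alphabetic tokens only."""
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", body)]
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

article = "The election results surprised analysts; election coverage dominated the news."
print(top_keywords(article, k=3))
```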
#### Topic Extraction from News Article’s Keywords
The list of terms with the corresponding news articles can be formalised as a
bipartite graph $G=(T,A,E)$ whose partitions $T$ and $A$ represent the set of
terms $t\in T$ and the articles $a\in A$ respectively, for which an edge
$(t,a)\in E$ exists if a term $t$ is present in an article $a$. By projecting
graph G on its terms $T$ we obtain an undirected graph $P$ made up of nodes
$t\in T$, which are connected if they share at least one news article.
We perform community detection on the nodes of $P$ by employing the Louvain
algorithm [34]. As a result, we obtain a set of clusters $C$, where each
cluster $c\in C$ contains a list of keywords that are assumed to be
semantically related to a topic. We then asked a pool of three human labellers
to select, for each community, the two or three terms they considered most
representative, so as to identify the topic unambiguously.
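The bipartite projection described above can be sketched with plain dictionaries; `project_terms` is a hypothetical helper, and the subsequent Louvain step (e.g. `networkx.community.louvain_communities`) would then run on the resulting term graph:

```python
from itertools import combinations

def project_terms(article_terms: dict[str, set[str]]) -> set[frozenset[str]]:
    """Project the bipartite article-term graph onto its terms: two terms
    are connected if they co-occur in at least one article."""
    edges: set[frozenset[str]] = set()
    for terms in article_terms.values():
        for u, v in combinations(sorted(terms), 2):
            edges.add(frozenset((u, v)))
    return edges

docs = {"a1": {"election", "vote"}, "a2": {"vote", "fraud"}}
print(sorted(tuple(sorted(e)) for e in project_terms(docs)))
# → [('election', 'vote'), ('fraud', 'vote')]
```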
#### Data collection of Facebook posts
The news articles obtained from the GDELT Event Database do not contain
information helpful in estimating the attention they generate online. To
include the dimension of user engagement, we employ each topic’s set of
representative terms to collect Facebook data over a period that goes from
$01/01/2018$ to $05/05/2022$. The data was obtained using CrowdTangle [39], a
Facebook-owned tool that tracks interactions on public content from Facebook
pages, groups, and verified profiles. CrowdTangle does not include paid ads
unless those ads began as organic, non-paid posts that were subsequently
“boosted” using Facebook’s advertising tools. CrowdTangle also does not store
data regarding the activity of private accounts or posts made visible only to
specific groups of followers.
The collection process produced a total of $\sim 57M$ posts from $\sim 2M$
unique pages and groups, generating $\sim 8B$ interactions. The result of the
data collection process is described in Table 1.
| Total News Articles from GDELT | Total Posts from Facebook | Total Interactions | Total Groups and Pages | Number of Topics | Collected Period |
|---|---|---|---|---|---|
| 79 650 | 57 031 026 | 8 015 177 602 | 2 224 430 | 296 | 1/1/2018 - 13/5/2022 |

Table 1: Data breakdown of the study, including the total number of news
articles and posts collected from GDELT and Facebook respectively, together
with the number of topics and the analysis period.
#### Topic Categorization
To provide a correspondence between topics and their area of interest, we
performed a categorization activity under the following labels: Art-Culture-
Sport (ACS), Economy, Environment, Health, Human Rights, Labor, Politics,
Religion, Social and Tech-Science. Three human labellers connected topics and
categories; a category was chosen as representative only when selected by at
least two of the three labellers.
### Metrics
We begin by describing a measure for fitting the cumulative engagement
evolution. Then, based on the previous step, we outline an index to evaluate
the sharpness of the topic’s diffusion. Finally, we introduce a sentiment
score to assess the topic’s controversy by using Facebook’s reactions.
#### Fitting cumulative engagement evolution
The diffusion of new ideas has been widely studied in the past [40, 41, 42,
43, 44, 45], indicating how the logistic function can effectively model the
diffusion of innovations. Therefore, to model the evolution of the engagement
received by posts, we fit the cumulative distribution of the overall
engagement (i.e., the number of likes, shares and comments) over time
employing a function $f_{\alpha,\beta}(t)$, with $\alpha,\beta\in\mathbb{R}$,
defined as
$f_{\alpha,\beta}(t)=\frac{1}{1+e^{-\alpha\left(t-\beta\right)}}.$ (1)
From a mathematical point of view, Eq. 1 defines a general sigmoid function
that depends on the parameters $\alpha$ and $\beta$. The $\alpha$ parameter
represents the slope of the function, describing the steepness of the
engagement evolution. On the other hand, $\beta$ is the point at which the
function reaches the value $0.5$ and quantifies the time required for a topic
to reach half its total interactions.
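The defining properties of Eq. (1) can be checked numerically. In practice the fit in the Results section would be done with a routine such as `scipy.optimize.curve_fit`; this sketch only illustrates the roles of the two parameters:

```python
import math

def f(t: float, alpha: float, beta: float) -> float:
    """Eq. (1): logistic model of normalized cumulative engagement."""
    return 1.0 / (1.0 + math.exp(-alpha * (t - beta)))

# beta is the half-engagement time: f(beta) = 0.5 regardless of alpha
assert abs(f(800, alpha=0.004, beta=800) - 0.5) < 1e-12
# a larger alpha gives a steeper rise around t = beta
assert f(900, 0.01, 800) > f(900, 0.004, 800)
```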
Figure 2: Four sample topics shown through their normalized cumulative
engagement evolution and the corresponding fitted curves. The effect of the
$\alpha$ parameter can be seen in the sharpness of the fitted curves, while
the $\beta$ parameter regulates the shift of the function along the $x$
axis: the higher its value, the longer the delay from $t_{0}$ before the
sigmoid produces its increment.
To provide a representation of the impact that $\alpha$ and $\beta$ can have
on topic engagement evolution, Fig. 2 displays four topics with peculiar
configurations. Fig. 2(a) shows a sigmoid in which the high values of $\alpha$
and $\beta$ produce a sharp increment relatively far from $t_{0}$. Such
behaviour corresponds to those topics that require some time before gaining
resonance with the public. Fig. 2(b) instead provides a fit where the sigmoid
produces low values for $\alpha$ and $\beta$, resulting in a smoother
increment in the proximity of $t_{0}$ than the one described in Fig. 2(a).
Finally, Fig. 2(c) and 2(d) provide an example of how two curves that share
similar values of $\beta$ parameters can have a different evolution of their
increase by slightly modifying the values for $\alpha$ parameter.
#### Speed Index
To model the evolution of a topic by taking into account the joint
contribution of $\alpha$ and $\beta$ parameters, we define a measure called
the Speed Index $SI(f_{\alpha,\beta})$ as
$SI(f_{\alpha,\beta})=\frac{\int_{0}^{T}f_{\alpha,\beta}(t)dt}{T},$ (2)
where $T$ represents the time of the last observed value for
$f_{\alpha,\beta}(t)$. Note that $SI$ is the mean integral value of
$f_{\alpha,\beta}$, i.e. the normalised area under the curve of
$f_{\alpha,\beta}$ (therefore $SI(f_{\alpha,\beta})\in[0,1]$). The rationale
behind this definition is that high speed values are obtained by sigmoids
that reach their plateau in a short time, as in the behaviour represented in
Fig. 2(b).
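Eq. (2) is the mean value of the fitted sigmoid over the observation window; below is a sketch using the trapezoidal rule (the choice of numerical integrator is our assumption, not specified in the text):

```python
import math

def speed_index(alpha: float, beta: float, T: float, n: int = 10_000) -> float:
    """Eq. (2): normalized area under f_{alpha,beta} on [0, T],
    approximated with the trapezoidal rule; lies in [0, 1]."""
    f = lambda t: 1.0 / (1.0 + math.exp(-alpha * (t - beta)))
    h = T / n
    area = sum((f(i * h) + f((i + 1) * h)) * h / 2 for i in range(n))
    return area / T

# a sharp early rise (large alpha, small beta) gives SI close to 1,
# while a late, slow rise gives a much smaller value
print(speed_index(0.5, 10, 1000), speed_index(0.004, 800, 1000))
```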
#### Love-Hate Score
To quantify the level of sentiment that a Facebook post produces, we define a
measure of controversy called Love-Hate Score $LH(i)\in\left[-1,1\right]$ as
$LH(i)=\frac{l_{i}-h_{i}}{l_{i}+h_{i}},$ (3)
where $h_{i}$ and $l_{i}$ are respectively the total numbers of Angry and Love
reactions collected by a post $i$. A value of $LH$ equal to $-1$ indicates
that the post received only Angry reactions from the users, while a value
equal to $1$ indicates that the post received only Love reactions.
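Eq. (3) translates directly into code; the value returned for a post with neither Love nor Angry reactions is our convention, since the text leaves that case undefined:

```python
def love_hate(love: int, angry: int) -> float:
    """Eq. (3): (l - h) / (l + h), in [-1, 1]. Posts with neither
    reaction are assigned 0 by convention (an assumption)."""
    total = love + angry
    return (love - angry) / total if total else 0.0

print(love_hate(30, 10))  # → 0.5, a mostly positive post
```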
## Results and Discussion
### Quantifying topic resonance
We first provide a quantitative assessment of the topics’ resonance on social
media. To do so, we perform a Non-linear Least Squares (NLS) regression by
fitting the sigmoid function $f_{\alpha,\beta}(t)$ to the cumulative evolution
of the engagement for each topic.
Figure 3: Joint distribution of the $\alpha$ and $\beta$ parameters obtained
from the NLS regression for each topic. We observe that topics are generally
characterized by small values of $\alpha$ and relatively large values of
$\beta$, indicating that user interest in a topic does not increase all of a
sudden but is the result of a process that evolves over time.
The distribution of the $\alpha$ parameter provided in Fig. 3 describes how
the majority of topics have a value of $\alpha$ belonging to the
$\left[0,0.0047\right]$ interval. This result demonstrates how user interest
in a topic does not suddenly increase but results from a long-term process.
The distribution of the $\beta$ parameter, instead, describes a prevalence of
topics in the $\left[600,1000\right]$ interval, identifying the tendency of
topics to become a matter of interest with some delay with respect to the
first post covering them.
### Evaluating the relationship between topic resonance and controversy
To quantify the interplay between user interest in a topic and the controversy
it produces, we compute the Spearman correlation between the Speed Index and
the Love-Hate (LH) Score for each topic. Results from the upper panel of Fig.
4 show a general tendency of users to react with adverse sentiment
when a topic gains engagement faster ($\rho=-0.26$), leaving positive
reactions to those topics that require time to gain resonance. Results
described in the lower panel of Fig. 4 provide further characterization of the
interplay between the Speed Index and the Love-Hate score after classifying
the topics according to the four most frequent categories analyzed, i.e.,
Politics, Labor, Human Rights and Health. We observe how the Politics and
Health categories have the lowest correlation scores ($\rho=-0.36$ and
$\rho=-0.45$), providing further evidence of their intrinsic polarizing
attitude (see Table 5 for the complete list of correlation coefficients).
Furthermore, the correlation between $\alpha$ and LH Score produces similar
results as with the Speed Index (see Fig. 5 in SI for more details).
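The SI-vs-LH analysis boils down to a Spearman rank correlation; here is a stdlib sketch (rank transform followed by Pearson correlation, with no tie correction — `scipy.stats.spearmanr` is the usual tool, and the toy data below is illustrative, not the paper's):

```python
def spearman(x: list[float], y: list[float]) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling, which suffices for distinct values)."""
    def ranks(v: list[float]) -> list[float]:
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# toy data: topics that spread faster (higher SI) collect lower LH scores
si = [0.2, 0.4, 0.6, 0.8, 0.9]
lh = [0.7, 0.5, 0.6, 0.1, -0.2]
print(round(spearman(si, lh), 2))  # → -0.9
```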
Figure 4: Upper panel: correlation between $SI$ and $LH$ score for each of the
topics identified. Lower panel: correlation between $SI$ and $LH$ score for
the top $4$ most frequent topics. Overall, we observe how users react
negatively as topics become sharply viral.
### Assessing the differences of engagement behaviors across topic categories
To conclude our analysis, we investigate the differences in the evolution of
engagement across topic categories. In particular, for each parameter
distribution ($\alpha$, $\beta$ and $SI$), we apply a two-tailed Mann–Whitney
U test [46] to each pair of categories. Table 2 provides the percentages of
significant p-values for the four parameters. Since multiple tests are
performed, we apply a Bonferroni correction to our standard significance
level of $0.05$, rejecting the null hypothesis only if the p-value satisfies
$p<0.001$. Our results show that the resulting p-values generally do not lead
to rejecting the null hypothesis. This corroborates the hypothesis that, on
average, users are characterized by homogeneous engagement patterns that are
not influenced by the consumed topic. We further extend the statistical
assessment by performing the same test between the Love-Hate Score
distributions of the different categories.
| | $\alpha$ | $\beta$ | Speed Index | Love-Hate |
|---|---|---|---|---|
| $p<0.001$ | 2.22% | 0% | 0% | 20% |
| $p>0.001$ | 97.78% | 100% | 100% | 80% |

Table 2: Percentage of p-values resulting from the two-sided Mann–Whitney U
test between each pair of categories, based on their $\alpha$, $\beta$, Speed
Index and Love-Hate Score distributions.
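The testing protocol (a two-sided Mann–Whitney U test for every pair of categories, with a Bonferroni-corrected threshold) can be sketched as follows; the normal approximation without tie correction is a simplification of what a library routine such as `scipy.stats.mannwhitneyu` computes:

```python
import math
from itertools import combinations

def mann_whitney_u_p(x: list[float], y: list[float]) -> float:
    """Two-sided Mann-Whitney U p-value via the normal approximation
    (no tie or continuity correction; a sketch, not the exact test)."""
    n1, n2 = len(x), len(y)
    u = sum(1.0 if a > b else 0.5 if a == b else 0.0 for a in x for b in y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|)

# Bonferroni: with 10 categories there are C(10, 2) = 45 pairwise tests,
# so the 0.05 level becomes 0.05 / 45 ~ 0.0011, i.e. reject when p < 0.001
n_tests = len(list(combinations(range(10), 2)))
print(n_tests, round(0.05 / n_tests, 4))  # → 45 0.0011
```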
In contrast to the engagement-evolution results, the topic’s category explains
differences in the sentiment of reactions in 20% of cases. Such findings
reveal that some categories comprise significantly more negative and
controversial topics, indicating that elicited reactions vary according to
the specific subject. Knowing that some categories are more prone to induce
negative feedback from users could inform how their related topics are
introduced into the online debate.
## Conclusions
In this work, we perform a quantitative analysis of user interest on a total
of $\sim 57M$ Facebook posts referring to $\sim 300$ different topics ranging
from $2018$ to $2022$. We initially quantify the distribution of topics’
resonance throughout the analysis. Then, we evaluate the relationship between
engagement and controversy. Ultimately, we assess the differences in
engagement across different categories of topics. Our findings show that, on
average, user interest in topics does not increase exponentially right after
their appearance but instead grows steadily until it reaches a saturation
point. From a sentiment perspective, topics that gained resonance right after
their initial appearance are more likely to collect negative or controversial
reactions, whilst topics with a steadier growth tend to attract positive user
interactions. This result suggests that recommendation algorithms should
introduce topics carefully, since sudden rises in topic resonance tend to
reinforce polarization mechanisms. Finally,
we find no statistical difference between user interest across different
categories of topics, providing evidence that, on a relatively large time
window, the evolution of engagement with posts is primarily unrelated to their
subject. By contrast, we observe differences in the sentiment generated by
the different topics, providing evidence of how polarization drives people to
perceive the content they consume online in different ways, according to
their framing and system of beliefs.
User interest and engagement evolution in the online debate are both aspects
of human behaviour on social media whose underlying dynamics still need to be
discovered from an individual point of view. Our findings provide an aggregate
perspective of the interplay between major emerging behavioral dynamics and
topics’ lifetime progression, deepening the relationship between diffusion
patterns and users’ reactions. Understanding that topics with an early burst
in virality are associated with primarily adverse reactions from users sheds
light on their tendency to react instinctively to new content. This approach
enables the identification of highly polarizing topics from their initial
stage of diffusion, by observing the heterogeneity of users’ reactions. The
present study has some limitations. In data collection, CrowdTangle
provides only posts from public Facebook pages with more than 25K Page Likes
or Followers, public Facebook groups with at least 95K members, all US-based
public groups with at least 2K members, and all verified profiles. These
restrictions affect the sample composition of our dataset and the generality
of our findings.
Moreover, we did not have access to removed posts, groups, and pages, which
could have been a meaningful proxy to characterize the attention dynamics of
retracted content. Finally, since CrowdTangle does not provide information
about users interacting with posts, we cannot assess their engagement from an
individual perspective and model the possible relationship between users and
topics employing a network approach.
Future work may extend the application of the proposed methodology to
additional social media platforms to assess the role of the algorithms in the
attention economy. Extensions to further platforms would also allow
researchers to assess the attention dynamics of users concerning specific
topics.
## Acknowledgements
We acknowledge the 100683EPID Project “Global Health Security Academic
Research Coalition” SCH-00001-3391.
## References
* 1. Taha Yasseri, Patrick Gildersleve, and Lea David. Collective memory in the digital age. Collective Memory, Elsevier, 2022.
* 2. George Lazaroiu. The role of social media as a news provider. Review of Contemporary Philosophy, 13:78–84, 2014.
* 3. Ali Nobil Ahmad. Is twitter a useful tool for journalists? Journal of media practice, 11(2):145–155, 2010.
* 4. Daniele Notarmuzi, Claudio Castellano, Alessandro Flammini, Dario Mazzilli, and Filippo Radicchi. Universality, criticality and complexity of information propagation in social media. Nature communications, 13(1):1–8, 2022.
* 5. Jo Brown, Amanda J Broderick, and Nick Lee. Word of mouth communication within online communities: Conceptualizing the online social network. Journal of interactive marketing, 21(3):2–20, 2007.
* 6. Richard Kahn and Douglas Kellner. New media and internet activism: from the ‘battle of seattle’to blogging. New media & society, 6(1):87–95, 2004.
* 7. Shannon C McGregor. Social media as public opinion: How journalists use social media to represent public opinion. Journalism, 20(8):1070–1086, 2019.
* 8. Roope Jaakonmäki, Oliver Müller, and Jan Vom Brocke. The impact of content, context, and creator on user engagement in social media marketing. Proceedings of the 50th Hawaii International Conference on System Sciences, 2017.
* 9. Paul M Di Gangi and Molly M Wasko. Social media engagement theory: Exploring the influence of user engagement on social media usage. Journal of Organizational and End User Computing (JOEUC), 28(2):53–73, 2016.
* 10. Herbert A Simon et al. Designing organizations for an information-rich world. Computers, communications, and the public interest, 72:37, 1971.
* 11. Stephen C Kies et al. Social media impact on attention span. Journal of Management & Engineering Integration, 11(1):20–27, 2018.
* 12. Kristoffer Holt, Adam Shehata, Jesper Strömbäck, and Elisabet Ljungberg. Age and the effects of news media attention and social media use on political interest and participation: Do social media function as leveller? European journal of communication, 28(1):19–34, 2013.
* 13. Stoney Brooks. Does personal social media usage affect efficiency and well-being? Computers in Human Behavior, 46:26–37, 2015.
* 14. Matteo Cinelli, Emanuele Brugnoli, Ana Lucia Schmidt, Fabiana Zollo, Walter Quattrociocchi, and Antonio Scala. Selective exposure shapes the facebook news diet. PloS one, 15(3):e0229129, 2020.
* 15. Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H Eugene Stanley, and Walter Quattrociocchi. The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3):554–559, 2016.
* 16. Seth Flaxman, Sharad Goel, and Justin M Rao. Filter bubbles, echo chambers, and online news consumption. Public opinion quarterly, 80(S1):298–320, 2016.
* 17. Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9):e2023301118, 2021.
* 18. J Anthony Cookson, Joseph Engelberg, and William Mullins. Echo chambers. Available at SSRN 3603107, 2022.
* 19. Joseph T Klapper. The effects of mass communication. 1960.
* 20. Fabiana Zollo, Alessandro Bessi, Michela Del Vicario, Antonio Scala, Guido Caldarelli, Louis Shekhtman, Shlomo Havlin, and Walter Quattrociocchi. Debunking in a world of tribes. PloS one, 12(7):e0181821, 2017.
* 21. Alessandro Bessi, Antonio Scala, Luca Rossi, Qian Zhang, and Walter Quattrociocchi. The economy of attention in the age of (mis) information. Journal of Trust Management, 1(1):1–13, 2014.
* 22. Delia Mocanu, Luca Rossi, Qian Zhang, Marton Karsai, and Walter Quattrociocchi. Collective attention in the age of (mis) information. Computers in Human Behavior, 51:1198–1204, 2015.
* 23. Matteo Cinelli, Walter Quattrociocchi, Alessandro Galeazzi, Carlo Michele Valensise, Emanuele Brugnoli, Ana Lucia Schmidt, Paola Zola, Fabiana Zollo, and Antonio Scala. The covid-19 social media infodemic. Scientific reports, 10(1):1–10, 2020.
* 24. Gabriele Etta, Alessandro Galeazzi, Jamie Ray Hutchings, Connor Stirling James Smith, Mauro Conti, Walter Quattrociocchi, and Giulio Valentino Dalla Riva. Covid-19 infodemic on facebook and containment measures in italy, united kingdom and new zealand. PloS one, 17(5):e0267022, 2022.
* 25. Max Falkenberg, Alessandro Galeazzi, Maddalena Torricelli, Niccolò Di Marco, Francesca Larosa, Madalina Sas, Amin Mekacher, Warren Pearce, Fabiana Zollo, Walter Quattrociocchi, and Andrea Baronchelli. Nature Climate Change, pages 50–60, 2022.
* 26. Cristian Candia, C Jara-Figueroa, Carlos Rodriguez-Sickert, Albert-László Barabási, and César A Hidalgo. The universal decay of collective memory and attention. Nature human behaviour, 3(1):82–91, 2019.
* 27. Sylvie C Briand, Matteo Cinelli, Tim Nguyen, Rosamund Lewis, Dimitri Prybylski, Carlo M Valensise, Vittoria Colizza, Alberto Eugenio Tozzi, Nicola Perra, Andrea Baronchelli, et al. Infodemics: A new challenge for public health. Cell, 184(25):6010–6014, 2021.
* 28. Alexandre Bovet and Hernán A Makse. Influence of fake news in twitter during the 2016 us presidential election. Nature communications, 10(1):1–14, 2019.
* 29. Carlo M Valensise, Matteo Cinelli, Matthieu Nadini, Alessandro Galeazzi, Antonio Peruzzi, Gabriele Etta, Fabiana Zollo, Andrea Baronchelli, and Walter Quattrociocchi. Lack of evidence for correlation between covid-19 infodemic and vaccine acceptance. arXiv preprint arXiv:2107.07946, 2021.
* 30. Fatemeh Tahmasbi, Leonard Schild, Chen Ling, Jeremy Blackburn, Gianluca Stringhini, Yang Zhang, and Savvas Zannettou. “go eat a bat, chang!”: On the emergence of sinophobic behavior on web communities in the face of covid-19. In Proceedings of the Web Conference 2021, WWW ’21, page 1122–1133, New York, NY, USA, 2021. Association for Computing Machinery.
* 31. Matteo Cinelli, Andraž Pelicon, Igor Mozetič, Walter Quattrociocchi, Petra Kralj Novak, and Fabiana Zollo. Dynamics of online hate and misinformation. Scientific reports, 11(1):1–12, 2021.
* 32. Social Media and Democracy: The State of the Field, Prospects for Reform. SSRC Anxieties of Democracy. Cambridge University Press, 2020.
* 33. GDELT. Gdelt 2.0: Our global world in realtime.
* 34. Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008, 2008.
* 35. GDELT. The gdelt project.
* 36. Kalev Leetaru and Philip A Schrodt. Gdelt: Global data on events, location, and tone, 1979–2012. In ISA annual convention, volume 2, pages 1–49. Citeseer, 2013\.
* 37. GDELT. Gdelt 2.0: Our global world in realtime.
* 38. Lucas Ou-Yang. Newspaper3k, 2013.
* 39. CrowdTangle Team. Crowdtangle. Facebook, Menlo Park, California, United States, 2020.
* 40. Gabriel De Tarde. The laws of imitation. H. Holt, 1903.
* 41. Everett M. Rogers. New Product Adoption and Diffusion. Journal of Consumer Research, 2(4):290–301, 03 1976.
* 42. Arnulf Grubler. The rise and fall of infrastructures: dynamics of evolution and technological change in transport. Physica-Verlag, 1990.
* 43. Carlota Perez. Technological revolutions and financial capital. Edward Elgar Publishing, 2003.
* 44. Les Robinson. Changeology. How to enable groups, communities and societies to do things they’ve never done before. 272p, 2012.
* 45. Orakanya Kanjanatarakul, Komsan Suriya, et al. Comparison of sales forecasting models for an innovative agro-industrial product: Bass model versus logistic function. The Empirical Econometrics and Quantitative Economics Letters, 1(4):89–106, 2012.
* 46. Henry B Mann and Donald R Whitney. Growing polarization around climate change on social media. The annals of mathematical statistics, pages 50–60, 1947.
## Supporting Information
### List of topics employed
Topic Keywords | First Post Date | Last Post Date | Categories
---|---|---|---
Amyotrophic_lateral_sclerosis | 2018-01-02 | 2021-12-31 | Social, Health
DeleteUber | 2018-01-01 | 2021-12-19 | Labor, Social
Roy_Moore_sexual_misconduct | 2018-01-02 | 2021-12-13 | Human_Rights, Politics, Social
abilene_zoo | 2018-01-08 | 2022-01-01 | Art_Culture_Sport, Environment
abu_sayyaf | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Religion
action_news_jax | 2018-01-01 | 2021-12-31 | Art_Culture_Sport
afghan_refugees | 2018-01-01 | 2021-12-31 | Human_Rights
afghanistan_pakistani_militant | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Religion
afghanistan_war | 2018-01-02 | 2022-01-01 | Human_Rights, Politics, Religion
afp_paedophile_ring | 2018-08-15 | 2021-11-02 | Human_Rights
agent_skripal_spy | 2018-03-05 | 2021-12-29 | Politics
aids_hiv | 2018-01-02 | 2021-12-31 | Social, Health
al_aqsa_jerusalem_raid | 2018-01-15 | 2021-12-20 | Human_Rights, Politics, Religion, Social
alaska_pipeline | 2018-01-02 | 2021-12-31 | Economy, Environment
alex_jones | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Politics, Social
alshabab_mogadishu_somalia | 2018-01-24 | 2021-12-31 | Human_Rights, Politics, Religion
aluminium_steel_tariffs | 2018-01-11 | 2021-12-31 | Economy, Labor, Politics
andhra_pradesh_uttarandhra | 2018-02-05 | 2021-12-03 | Economy, Politics, Social
animal_conservation | 2018-01-01 | 2021-12-31 | Environment
animal_cruelty | 2018-01-02 | 2021-12-31 | Environment, Health
animal_sanctuary_tiverton | 2018-03-19 | 2021-12-18 | Environment, Labor
antarctic_ice_melting | 2018-01-02 | 2021-12-31 | Environment, Social
antisemitic_jewish_orthodox | 2018-02-02 | 2021-12-26 | Human_Rights, Religion, Social
apc_pdp_sheriff | 2018-01-03 | 2021-12-30 | Politics
armenia_azerbaijan_border | 2018-01-09 | 2021-12-31 | Politics, Social
arvind_kejriwal | 2018-01-02 | 2020-12-31 | Human_Rights, Politics, Social
ashland_fundraising | 2018-01-03 | 2021-12-31 | Art_Culture_Sport, Economy, Social
asian_hate | 2018-01-01 | 2021-12-31 | Human_Rights, Social
aung_san_suu_kyi | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Social
australian_refugees | 2018-01-02 | 2022-01-01 | Human_Rights, Labor, Social
baghdad_shiites | 2018-01-04 | 2021-11-29 | Religion, Social
ballistic_missile_test | 2018-01-01 | 2021-12-31 | Environment, Politics, Social, Tech_Sci
band_debut_album | 2018-01-01 | 2021-12-31 | Art_Culture_Sport
benghazi_libya_militias | 2018-01-04 | 2021-12-21 | Human_Rights, Politics, Religion
bilateral_cooperation | 2018-01-01 | 2021-12-31 | Economy, Politics
birds_invasive_population_species_conservation | 2018-01-05 | 2021-12-31 | Environment
black_racism | 2018-01-01 | 2021-12-31 | Human_Rights, Social
blacklivesmatter | 2019-05-21 | 2021-12-31 | Human_Rights, Social
blue_whale_challenge | 2018-01-01 | 2021-12-31 | Social, Health
boat_sinks_die | 2018-01-01 | 2021-12-25 | Human_Rights, Social
boeing_737_max_crash | 2018-01-01 | 2021-12-31 | Social
boko_haram | 2018-01-02 | 2022-01-01 | Human_Rights, Politics, Religion
bollywood_celebrities | 2018-01-01 | 2021-12-31 | Art_Culture_Sport
bolsonaro_brazil | 2018-01-03 | 2021-12-31 | Human_Rights, Politics, Social
bomber_commits_suicide | 2018-01-24 | 2021-12-14 | Social
boris_hunt_tory_debate | 2018-01-31 | 2021-09-04 | Politics, Social
boris_johnson | 2018-01-01 | 2021-12-31 | Politics
bowe_bergdahl | 2018-01-03 | 2021-12-26 | Human_Rights
brain_cells_tumour | 2018-01-03 | 2021-12-31 | Social, Tech_Sci, Health
breast_cancer | 2018-01-01 | 2021-12-31 | Social, Health
britain_bridge_collapse | 2018-01-03 | 2021-12-23 | Art_Culture_Sport, Environment
bsf_jammu_kashmir | 2018-01-02 | 2021-12-31 | Politics
buckingham_palace | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Politics, Social
burqa | 2018-01-01 | 2021-12-31 | Human_Rights, Religion, Social
bus_accident | 2018-01-01 | 2021-12-31 | Labor, Social
california_wildfire | 2018-01-01 | 2021-12-31 | Environment
cameron_outcome_referendum | 2018-01-02 | 2021-12-20 | Politics, Social
capital_punishment | 2018-01-01 | 2021-12-31 | Human_Rights, Social
cathedral_notre_dame | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Environment, Social
charlie_hebdo | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Human_Rights, Politics, Religion, Social
charlottesville_rally_unite | 2018-01-02 | 2021-12-31 | Human_Rights, Social
chemtrails | 2018-01-01 | 2021-12-31 | Environment, Social, Tech_Sci
climate_warming | 2018-01-01 | 2021-12-31 | Environment, Social
co2_emissions | 2018-01-01 | 2021-12-31 | Environment, Politics, Tech_Sci
coach_k | 2018-01-01 | 2021-12-31 | Art_Culture_Sport
colombia_farc | 2018-01-01 | 2021-12-31 | Politics
colorado_shooting | 2018-01-02 | 2021-12-31 | Social
confederate_statue_removed | 2018-01-04 | 2021-12-31 | Art_Culture_Sport, Human_Rights, Social
contest_nobel_prize_winner | 2018-01-17 | 2021-12-16 | Art_Culture_Sport, Tech_Sci
correctional_prisons | 2018-01-02 | 2021-12-31 | Human_Rights
crypto_currency_exchange | 2018-01-02 | 2021-12-31 | Economy, Labor, Tech_Sci
cuban_embargo | 2018-01-02 | 2021-12-31 | Economy, Labor, Politics
cultural_heritage | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Environment, Human_Rights, Religion, Social
cyberbullying | 2018-01-01 | 2021-12-31 | Social, Tech_Sci
cybersecurity | 2018-01-01 | 2021-12-31 | Politics, Social, Tech_Sci
cybersquatting | 2018-01-09 | 2021-12-23 | Economy, Labor, Social, Tech_Sci
dakota_pipeline | 2018-01-01 | 2021-12-31 | Economy, Environment
dakota_standing_rock | 2018-01-01 | 2021-12-31 | Art_Culture_Sport, Environment, Human_Rights
delhi_pollution | 2018-01-01 | 2021-12-31 | Environment, Social, Tech_Sci
democracy_threat | 2018-01-01 | 2021-12-31 | Politics, Social
democrat_min_wage | 2021-01-02 | 2021-12-31 | Economy, Human_Rights, Labor, Politics
dieselgate | 2018-01-01 | 2021-12-31 | Economy, Environment, Labor, Tech_Sci
diplomatic_immunity | 2018-01-01 | 2021-12-31 | Politics
divorce_equality | 2018-01-01 | 2021-12-31 | Economy, Politics, Social
draft_nfl | 2018-01-02 | 2021-12-31 | Art_Culture_Sport
duncan_dallas_ebola | 2018-04-12 | 2021-08-01 | Social, Health
duterte_philippines | 2018-01-02 | 2021-01-01 | Human_Rights, Politics, Social
e-cigarettes | 2018-01-02 | 2021-12-31 | Economy, Environment, Tech_Sci, Health
early_late_voter | 2018-01-03 | 2021-12-31 | Politics, Social
earthquake_nepal | 2019-01-02 | 2022-01-01 | Environment
efcc_alleged_fraud | 2018-01-03 | 2021-12-30 | Economy, Labor
el_chapo_guzman | 2018-01-02 | 2021-12-31 | Economy, Social, Health
elon_musk_tesla | 2018-01-02 | 2021-12-31 | Economy, Environment, Labor, Tech_Sci
endangered_species | 2018-01-01 | 2021-12-31 | Environment, Tech_Sci, Health
epa_effort | 2018-01-03 | 2021-12-30 | Environment, Social, Health
erdogan_coup_d_etat_attempt | 2018-01-21 | 2021-12-06 | Politics, Social
erdogan_turkey | 2018-01-01 | 2021-12-31 | Human_Rights, Politics
eruption_volcanic_ash | 2018-01-02 | 2021-12-31 | Environment
european_commission | 2018-01-01 | 2021-12-31 | Economy, Labor, Politics, Social
fact-checking | 2018-01-01 | 2021-12-31 | Social
factory_farming | 2018-01-01 | 2021-12-31 | Economy, Environment, Labor
fadnavis_maharashtra | 2018-01-02 | 2021-12-31 | Environment, Politics, Social
farmers_irrigation_scheme | 2018-01-02 | 2021-12-31 | Economy, Environment, Labor, Tech_Sci
fashion_runway | 2018-01-01 | 2021-12-31 | Art_Culture_Sport
fiscal_cuts | 2018-01-01 | 2021-12-31 | Economy, Labor, Politics
flat_earth | 2018-01-01 | 2021-12-31 | Environment, Social, Tech_Sci
football_galbraith | 2019-01-07 | 2021-12-27 | Art_Culture_Sport
ford_kavanaugh | 2018-02-13 | 2021-12-30 | Human_Rights, Social
forest_wildfire | 2018-01-02 | 2021-12-31 | Environment
garda_dublin | 2018-01-03 | 2021-12-31 | Labor, Social
gay_marriages_ban | 2018-01-05 | 2021-12-30 | Human_Rights, Politics, Social
gdpr | 2018-01-02 | 2022-01-01 | Human_Rights, Politics, Social
geert_wilders_netherlands | 2018-01-03 | 2021-12-05 | Human_Rights, Politics, Religion
gender_bathroom | 2018-01-01 | 2021-12-31 | Social
gender_gap | 2018-01-01 | 2021-12-31 | Economy, Human_Rights, Labor, Politics, Social
gender_identity | 2018-01-02 | 2021-12-31 | Human_Rights, Social
george_bush | 2018-01-02 | 2021-12-31 | Politics
germany_nazi_merkel | 2018-01-02 | 2021-12-24 | Human_Rights, Politics, Religion, Social
grace_mugabe | 2018-01-02 | 2021-12-30 | Economy, Environment, Human_Rights, Politics
greek_bailout_tsipras | 2018-01-08 | 2021-07-16 | Economy, Labor, Politics, Social
hackers_disinformation | 2018-01-03 | 2021-12-28 | Social, Tech_Sci
haftar_lybia | 2018-01-02 | 2021-12-31 | Politics
hajj_pilgrimage | 2018-01-02 | 2021-12-31 | Religion, Social
halifax_mass_shooting | 2018-01-25 | 2021-12-06 | Social
hamas | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Religion
harvey_weinstein_sexual_abuse | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Human_Rights
hate_speech | 2018-01-01 | 2021-12-31 | Social
hezbollah_lebanon | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Religion, Social
hijab_ban | 2018-01-01 | 2021-12-31 | Human_Rights, Religion, Social
holocaust | 2018-01-01 | 2021-12-31 | Human_Rights, Religion, Social
homeless_shelter | 2018-01-01 | 2021-12-31 | Human_Rights, Social, Health
hong_kong_protest | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Social
honolulu_civil_beat | 2021-11-02 | 2021-12-31 | Art_Culture_Sport, Labor, Politics, Social
houthi_yemen | 2018-01-02 | 2021-12-31 | Politics, Religion, Social
humanitarian_aid | 2018-01-01 | 2021-12-31 | Human_Rights
hurricane_dorian | 2021-01-02 | 2021-12-31 | Environment
hydrogen_vehicles | 2018-01-01 | 2021-12-31 | Economy, Environment, Labor, Tech_Sci
illegal_immigration | 2018-01-01 | 2021-12-31 | Human_Rights, Politics
imran_khan | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Politics, Social
india_foreign_investment | 2018-01-02 | 2021-12-31 | Economy, Labor, Politics
intensive_animal_farming | 2018-01-04 | 2021-12-31 | Economy, Environment, Labor
iran_foreign_minister_zarif | 2018-01-02 | 2021-12-13 | Politics
iraqi_kurdish_mosul | 2018-01-03 | 2021-12-27 | Politics, Religion, Social
ireland_sinn_fein | 2018-01-01 | 2021-12-31 | Politics
jakarta_flood | 2018-01-03 | 2021-12-27 | Environment
jamal_khashoggi | 2018-01-01 | 2021-12-31 | Art_Culture_Sport, Human_Rights, Politics
jeffrey_epstein | 2018-01-01 | 2021-12-31 | Human_Rights
jeremy_corbyn_labour | 2018-01-02 | 2021-12-31 | Labor, Politics, Social
john_mccain | 2018-01-01 | 2021-12-31 | Politics
joyce_marcel | 2018-01-04 | 2021-12-31 | Art_Culture_Sport
julian_assange_wikileaks | 2018-01-02 | 2021-12-31 | Art_Culture_Sport, Economy, Politics, Social
kabila_congo | 2018-01-01 | 2021-12-31 | Politics
karnataka_assembly_poll | 2018-01-01 | 2021-12-30 | Politics
kayapo | 2018-01-03 | 2021-12-31 | Environment, Human_Rights
kenney_elected_mayor_philadelphia | 2018-01-20 | 2021-12-31 | Politics, Social
kiev_donetsk_separatists | 2018-01-13 | 2021-12-31 | Politics, Social
kiir_machar_south_sudan | 2018-01-03 | 2021-12-31 | Politics
kilauea_eruption | 2018-01-01 | 2021-12-31 | Environment, Tech_Sci
kim_jong_un | 2018-01-01 | 2021-12-31 | Human_Rights, Politics
klopp_liverpool | 2018-01-02 | 2021-12-31 | Art_Culture_Sport
labor_movement | 2018-01-02 | 2021-12-31 | Economy, Labor, Politics
lahore_rape | 2018-01-01 | 2021-12-26 | Human_Rights
lee_kuala_lumpur | 2018-01-02 | 2021-12-31 | Social
legalize_prostitution | 2018-01-05 | 2021-12-30 | Social
leo_varadkar_taoiseach | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Social
lgbt_discrimination | 2018-01-02 | 2021-12-31 | Human_Rights, Social
london_mayor_re-election_bid | 2018-02-01 | 2021-11-15 | Politics, Social
louisiana_parish_arrested | 2018-01-02 | 2021-12-31 | Social
lung_cancer | 2018-01-01 | 2021-12-31 | Social, Tech_Sci, Health
macron_france | 2018-01-01 | 2021-12-31 | Politics
maduro_venezuela | 2018-01-02 | 2021-12-29 | Politics
marco_rubio_debate | 2021-01-03 | 2021-12-25 | Politics
marijuana_legalization | 2018-01-02 | 2022-01-01 | Economy, Politics, Social, Health
marine_corps | 2018-01-01 | 2021-12-31 | Labor, Social
marine_le_pen | 2018-01-02 | 2021-12-31 | Politics
mars_mission | 2018-01-01 | 2021-12-31 | Environment, Tech_Sci
mars_spacecraft_mission | 2018-01-02 | 2021-12-31 | Environment, Tech_Sci
martin_luther_king | 2018-01-02 | 2022-01-01 | Human_Rights, Politics, Social
maryam_nawaz | 2018-01-01 | 2021-12-31 | Politics
mass_shootings | 2018-01-02 | 2022-01-01 | Social
measles | 2018-01-01 | 2021-12-31 | Social, Tech_Sci, Health
meghan_harry | 2018-01-02 | 2022-01-01 | Politics, Social
merkel_germany | 2018-01-01 | 2021-12-31 | Politics
metoo | 2018-01-01 | 2021-12-31 | Human_Rights, Labor, Social
metric_tonnes_waste | 2018-01-02 | 2021-12-30 | Environment, Tech_Sci
mexican_migrants | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Social
mexico_wall | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Social
mh370 | 2019-01-02 | 2021-12-29 | Social
michael_brown_shooting | 2018-01-03 | 2021-12-31 | Human_Rights, Social
michael_cohen_lawyer | 2018-01-05 | 2021-12-31 | Politics
migration_pact | 2018-01-05 | 2021-12-31 | Human_Rights, Politics, Social
mike_pence_indiana | 2018-01-02 | 2021-12-20 | Politics
mindanao_martial_law | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Social, Health
minimum_wage | 2018-01-02 | 2021-12-31 | Economy, Human_Rights, Labor, Politics, Social
mission_moon | 2018-01-01 | 2021-12-31 | Environment, Tech_Sci
modis_narendra | 2018-01-05 | 2021-12-29 | Politics
morsi_sisi_egypt | 2018-01-09 | 2021-12-30 | Politics, Religion
mueller_probe | 2018-01-02 | 2021-12-30 | Politics
muslim_brotherhood | 2018-01-01 | 2021-12-31 | Religion
nafta_trade | 2018-01-01 | 2021-12-31 | Economy, Labor, Politics
native_american_indigenous | 2018-01-01 | 2021-12-31 | Human_Rights
natural_gas_prices | 2018-01-01 | 2021-12-31 | Economy, Environment, Labor, Politics
nauru_refugees | 2018-01-02 | 2021-12-31 | Human_Rights
nelson_bay_cup | 2018-01-13 | 2021-12-31 | Art_Culture_Sport
netanyahu | 2018-01-01 | 2021-12-31 | Politics
nicola_sturgeon_scotland_independence | 2018-01-02 | 2021-12-31 | Politics, Social
nigel_farage_ukip | 2018-01-02 | 2021-12-28 | Politics, Social
nikolas_cruz | 2018-01-09 | 2021-12-31 | Social
nitish_kumar_bihar | 2018-01-02 | 2021-12-31 | Politics
npa_rebels | 2018-01-01 | 2021-12-31 | Politics
nuclear_war | 2018-01-01 | 2021-12-31 | Environment, Politics, Social, Tech_Sci
obrador_mexico | 2018-01-01 | 2021-12-31 | Politics
ocean_fishing | 2018-01-02 | 2021-12-31 | Economy, Environment, Labor
off_peak_season_travel | 2018-01-01 | 2021-12-31 | Art_Culture_Sport, Economy, Environment
offshore_wind | 2018-01-01 | 2021-12-31 | Economy, Environment, Tech_Sci
operation_varsity_blues | 2018-06-16 | 2021-12-31 | Art_Culture_Sport, Labor, Social
opioid_drug_crisis | 2018-01-01 | 2021-12-31 | Social, Health
organ_trade | 2018-01-01 | 2021-12-31 | Economy, Human_Rights, Health
pacific_solution | 2018-01-01 | 2021-12-31 | Human_Rights, Politics
panama_papers | 2018-01-02 | 2021-12-31 | Economy
paul_manafort | 2018-08-02 | 2021-12-30 | Politics
pension_retirement_age | 2018-01-01 | 2021-12-31 | Economy, Labor, Politics, Social, Health
pkk | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Social
planned_pregnancy | 2018-01-02 | 2022-01-01 | Human_Rights, Social, Tech_Sci, Health
plastic_surgery | 2018-01-02 | 2021-12-31 | Social, Tech_Sci, Health
ponte_morandi | 2018-01-03 | 2021-12-31 | Social
portland_standoff | 2018-01-03 | 2021-12-12 | Human_Rights, Social
protest_sign_mou | 2018-01-09 | 2021-12-22 | Social
rajapaksa_sri_lanka | 2018-01-02 | 2021-12-31 | Politics
rajasthan_mps | 2018-01-07 | 2021-12-31 | Politics
rajya_sabha_elections_bjp | 2018-01-02 | 2021-12-30 | Politics, Social
rakhine_rohingya_myanmar | 2018-01-01 | 2021-12-31 | Human_Rights, Religion, Social
ramaphosa_south_africa | 2018-01-01 | 2021-12-31 | Human_Rights, Labor, Politics, Social
randolph_holhut | 2018-02-21 | 2021-09-30 | Art_Culture_Sport
ransomware | 2018-01-01 | 2021-12-31 | Economy, Tech_Sci
rauner_illinois | 2018-01-01 | 2021-12-22 | Politics
recreational_cannabis | 2018-01-01 | 2021-12-31 | Politics, Social, Health
religion_freedom | 2018-01-01 | 2021-12-31 | Human_Rights, Religion
reynolds_mourned | 2018-07-04 | 2021-12-31 | Labor, Social
rock_n_roll_savannah_marathon | 2018-01-04 | 2021-12-14 | Art_Culture_Sport, Social
roe_v._wade_case | 2018-01-02 | 2021-12-31 | Human_Rights, Social
ryanair_pilot_strike | 2018-01-04 | 2019-10-03 | Labor
salman_saudi_arabia | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Religion
santos_colombia | 2018-01-02 | 2021-12-31 | Politics
sargsyan_armenia | 2018-01-02 | 2021-12-31 | Politics
scientist_hansen_nasa | 2018-01-06 | 2021-12-29 | Art_Culture_Sport, Environment, Labor, Tech_Sci
scott_morrison_kabul | 2018-02-18 | 2021-12-30 | Human_Rights, Politics
scott_walker_wisconsin | 2018-01-02 | 2021-12-31 | Politics
self-driving_car | 2018-01-01 | 2021-12-31 | Economy, Labor, Social, Tech_Sci
shiv_sena_maharashtra | 2018-01-02 | 2021-12-31 | Human_Rights, Politics
sirisena_sri_lanka | 2018-01-01 | 2021-12-30 | Politics
smart_cities | 2018-01-01 | 2021-12-31 | Economy, Environment, Social, Tech_Sci
smith_scandal_resigns | 2018-03-25 | 2021-11-26 | Art_Culture_Sport
snowden | 2018-01-02 | 2021-12-31 | Human_Rights, Politics, Social, Tech_Sci
society_civil_servants | 2018-01-01 | 2021-12-31 | Labor, Politics
solar_panels | 2018-01-01 | 2021-12-31 | Economy, Environment, Tech_Sci
sonia_rahul_gandhi | 2018-01-02 | 2021-12-31 | Politics
spacex_moon_mission | 2018-01-01 | 2021-12-30 | Environment, Labor, Tech_Sci
spending_cuts | 2018-01-01 | 2021-12-31 | Economy, Labor, Politics, Social
stolen_identity | 2018-01-01 | 2021-12-31 | Social
syria_kurdish_war | 2018-01-02 | 2021-12-30 | Human_Rights, Politics, Social
syrian_refugees | 2018-01-01 | 2021-12-31 | Human_Rights
tamil_ltte | 2018-01-04 | 2021-12-31 | Human_Rights, Politics, Social
telangana_rao_trs | 2018-01-02 | 2021-12-31 | Politics
theresa_may_brexit | 2018-01-02 | 2021-12-30 | Politics, Social
tiger_woods_win | 2018-01-02 | 2021-12-31 | Art_Culture_Sport
tony_abbott_malcolm_turnbull | 2018-01-02 | 2022-01-01 | Politics
tourism_boost | 2018-01-01 | 2021-12-31 | Economy, Environment, Labor
tourists_overrun | 2018-01-03 | 2021-12-31 | Environment, Social
tpp | 2018-01-02 | 2021-12-31 | Economy, Politics
truck_highway_crashes | 2018-01-02 | 2021-12-31 | Labor
trudeau_canada | 2018-01-02 | 2021-12-31 | Politics
trump_impeachment | 2018-01-01 | 2021-12-31 | Politics
tsai_ing-wen_taiwan | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Social
tshisekedi_congo | 2018-01-01 | 2021-12-31 | Politics
tupac | 2018-01-26 | 2021-12-09 | Art_Culture_Sport
uber_ride_sharing | 2018-01-02 | 2021-12-31 | Economy, Labor
uhuru_kenyatta | 2018-01-01 | 2021-12-31 | Politics
undercover_comey_fbi | 2018-02-07 | 2021-11-08 | Politics
undp_procurement | 2018-01-04 | 2021-12-31 | Economy, Human_Rights, Labor, Politics, Social
unhcr_refugees | 2018-01-01 | 2021-12-31 | Human_Rights
vatican_abuse | 2018-01-03 | 2022-01-01 | Human_Rights, Religion
white_supremacist | 2018-01-01 | 2021-12-31 | Human_Rights, Social
william_barr_attorney_general | 2018-04-30 | 2021-12-31 | Politics
william_kate_middleton | 2018-01-02 | 2021-12-31 | Politics, Social
williams_plead_guilty | 2018-01-03 | 2021-12-25 | Social
wirecard_scandal | 2019-01-02 | 2022-05-05 | Economy, Labor
women_abortion | 2018-01-02 | 2022-01-01 | Human_Rights, Social, Health
xi_jinping | 2018-01-01 | 2021-12-31 | Politics
xinhua_silk_road | 2018-01-19 | 2021-12-30 | Economy, Labor
yakubu_dogara_sacked | 2018-01-15 | 2021-12-05 | Politics
yanukovych_crimea | 2018-01-03 | 2021-12-30 | Politics
ywca | 2018-01-02 | 2021-12-31 | Human_Rights, Religion, Social
zayed_uae | 2018-01-01 | 2021-12-31 | Economy, Politics, Social
zika_virus | 2018-01-02 | 2021-12-31 | Social, Health
zuma_south_africa | 2018-01-01 | 2021-12-31 | Human_Rights, Politics, Social
Table 3: List of terms employed to perform the research for each topic
together with the first and last date when a post related to each topic was
found.
### Evaluating the relationship between topic resonance and controversy
Figure 5: Correlation between $\alpha$ and $LH$ score for each of the topics
identified
### Goodness of the fitting procedure
We fit the cumulative evolution of engagement for the topics listed in Section
"List of topics employed" with the function $f_{\alpha,\beta}$. For each topic
$i$, the fitting procedure produces the estimated parameters
$\left(\hat{\alpha}_{i},\hat{\beta}_{i}\right)$ together with their standard
errors $\left(SE(\hat{\alpha}_{i}),SE(\hat{\beta}_{i})\right)$. Fig. 6 shows
the joint distribution of the errors
$\left(SE(\hat{\alpha}_{i}),SE(\hat{\beta}_{i})\right)$ for each topic in
relation to the number of posts the topic produced. The $SE(\hat{\alpha}_{i})$
errors follow a log-normal distribution, while the $SE(\hat{\beta}_{i})$
errors follow a normal one, and both errors shrink as the number of posts per
topic increases. We formally assess this relationship by computing the
Spearman correlation between each standard error and the number of posts per
topic, obtaining $\rho(SE(\hat{\alpha}_{i}),posts_{i})=-0.44$ and
$\rho(SE(\hat{\beta}_{i}),posts_{i})=-0.25$. We therefore conclude that the
error of our fitting procedure decreases as the number of observations
increases.
Figure 6: Joint distribution of the errors $SE(\hat{\alpha}_{i})$ and
$SE(\hat{\beta}_{i})$ for each topic $i$, whose cumulative curve was estimated
by means of $f_{\alpha,\beta}$. The colour of each point represents the number
of posts produced by topic $i$.
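The fitting procedure described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the specific logistic form chosen for $f_{\alpha,\beta}$, the parameter values, and the synthetic data are all assumptions made for the example. The standard errors $SE(\hat{\alpha})$ and $SE(\hat{\beta})$ are read off the diagonal of the covariance matrix returned by the optimizer.

```python
# Sketch of the per-topic fit: a cumulative engagement curve is fitted with a
# logistic-type function and the standard errors of the estimates are taken
# from the covariance matrix. The functional form and all numbers below are
# illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def f(t, alpha, beta, c):
    # c: asymptotic total engagement; alpha: adoption speed; beta: midpoint day
    return c / (1.0 + np.exp(-alpha * (t - beta)))

rng = np.random.default_rng(42)
t = np.arange(0, 1460.0)                          # four years of daily data
y = f(t, 0.02, 700.0, 1000.0) + rng.normal(0, 5.0, t.size)

popt, pcov = curve_fit(f, t, y, p0=[0.01, 600.0, 900.0])
alpha_hat, beta_hat, c_hat = popt
se = np.sqrt(np.diag(pcov))                       # SE(alpha), SE(beta), SE(c)
```

Repeating this fit per topic and correlating each standard error with the topic's post count (e.g. via `scipy.stats.spearmanr`) reproduces the kind of analysis summarized in Fig. 6.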
### Assessing the differences of engagement behaviors across topic categories
Category | $\alpha$ | $\beta$ | SI
---|---|---|---
Art_Culture_Sport | 0.043 (0.1647) | 693.8 (236) | 0.49 (0.13)
Economy | 0.0045 (0.0022) | 752.36 (241.45) | 0.48 (0.14)
Environment | 0.0215 (0.1214) | 761.41 (207.81) | 0.47 (0.11)
Human_Rights | 0.0244 (0.113) | 765.48 (223.24) | 0.47 (0.14)
Labor | 0.0137 (0.0618) | 715.41 (286.41) | 0.49 (0.15)
Politics | 0.0192 (0.0953) | 711.78 (243.12) | 0.5 (0.15)
Religion | 0.0405 (0.1906) | 786.5 (184.07) | 0.46 (0.12)
Social | 0.024 (0.1182) | 728.58 (204.3) | 0.49 (0.12)
Tech_Sci | 0.004 (0.0013) | 801.76 (187.67) | 0.46 (0.11)
Health | 0.0045 (0.0016) | 692.03 (128.7) | 0.52 (0.08)
Table 4: Summary of $\alpha$, $\beta$ and Speed Index mean values (and SD) per topic category.
Category | Rho | p.value
---|---|---
All | -0.26 | 0
Art_Culture_Sport | -0.2 | 0.2631
Economy | -0.2 | 0.1665
Environment | -0.21 | 0.1379
Human_Rights | -0.29 | 0.0067
Labor | -0.35 | 0.0162
Politics | -0.36 | 0
Religion | -0.12 | 0.5308
Social | -0.23 | 0.0057
Tech_Sci | -0.21 | 0.2256
Health | -0.45 | 0.0267
Table 5: Spearman's Rho between Speed Index and Love-Hate Score per category
(CI = 0.95). For readability, 0 denotes values lower than 0.0001.
| A_C_S | Econ | Env | H_R | Labor | Politics | Religion | Social | Tech | Health
---|---|---|---|---|---|---|---|---|---|---
A_C_S | | 0.3621 | 0.9226 | 0.3671 | 0.7594 | 0.5024 | 0.8931 | 0.8646 | 0.1195 | 0.9688
Econ | 0.3621 | | 0.2994 | 0.0104 | 0.4292 | 0.0146 | 0.3788 | 0.0793 | 0.3247 | 0.3496
Env | 0.9226 | 0.2994 | | 0.1287 | 0.9135 | 0.2025 | 0.992 | 0.5873 | 0.0546 | 0.9688
H_R | 0.3671 | 0.0104 | 0.1287 | | 0.134 | 0.6761 | 0.2437 | 0.2236 | 0.0009 | 0.2014
Labor | 0.7594 | 0.4292 | 0.9135 | 0.134 | | 0.1818 | 0.8575 | 0.4937 | 0.0874 | 0.9322
Politics | 0.5024 | 0.0146 | 0.2025 | 0.6761 | 0.1818 | | 0.335 | 0.3465 | 0.0018 | 0.2755
Religion | 0.8931 | 0.3788 | 0.992 | 0.2437 | 0.8575 | 0.335 | | 0.6836 | 0.0976 | 0.9347
Social | 0.8646 | 0.0793 | 0.5873 | 0.2236 | 0.4937 | 0.3465 | 0.6836 | | 0.011 | 0.6151
Tech | 0.1195 | 0.3247 | 0.0546 | 0.0009 | 0.0874 | 0.0018 | 0.0976 | 0.011 | | 0.08
Health | 0.9688 | 0.3496 | 0.9688 | 0.2014 | 0.9322 | 0.2755 | 0.9347 | 0.6151 | 0.08 |
Table 6: p-values of the two-tailed Mann–Whitney U tests performed on the
average $\alpha$ parameter value between categories (CI = 0.95). Bold values
indicate the comparisons for which the null hypothesis was rejected.
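The pairwise comparisons behind Tables 6–9 can be illustrated with a single two-sided Mann–Whitney U test between two categories. The samples below are synthetic stand-ins for the per-topic $\alpha$ values of two hypothetical categories; only the test itself reflects the procedure described by the tables.

```python
# One cell of the pairwise comparison matrices: a two-sided Mann-Whitney U
# test on the per-topic alpha values of two categories. The data are
# synthetic placeholders, not the paper's measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
alpha_cat_a = rng.lognormal(mean=-5.5, sigma=0.5, size=40)  # hypothetical category A
alpha_cat_b = rng.lognormal(mean=-5.5, sigma=0.5, size=35)  # hypothetical category B

u_stat, p_value = mannwhitneyu(alpha_cat_a, alpha_cat_b, alternative="two-sided")
reject = p_value < 0.05   # rejection at the 0.95 confidence level used in the tables
```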
| A_C_S | Econ | Env | H_R | Labor | Politics | Religion | Social | Tech | Health
---|---|---|---|---|---|---|---|---|---|---
A_C_S | | 0.1688 | 0.171 | 0.2124 | 0.5789 | 0.7218 | 0.2055 | 0.5985 | 0.0532 | 0.8082
Econ | 0.1688 | | 0.952 | 0.7537 | 0.6159 | 0.1358 | 0.9875 | 0.181 | 0.5199 | 0.0567
Env | 0.171 | 0.952 | | 0.718 | 0.6412 | 0.0932 | 0.9759 | 0.1386 | 0.4838 | 0.0315
H_R | 0.2124 | 0.7537 | 0.718 | | 0.6522 | 0.1274 | 0.7229 | 0.1808 | 0.2939 | 0.0599
Labor | 0.5789 | 0.6159 | 0.6412 | 0.6522 | | 0.5506 | 0.5529 | 0.7086 | 0.2948 | 0.2793
Politics | 0.7218 | 0.1358 | 0.0932 | 0.1274 | 0.5506 | | 0.1676 | 0.7814 | 0.0274 | 0.3969
Religion | 0.2055 | 0.9875 | 0.9759 | 0.7229 | 0.5529 | 0.1676 | | 0.1816 | 0.5318 | 0.0418
Social | 0.5985 | 0.181 | 0.1386 | 0.1808 | 0.7086 | 0.7814 | 0.1816 | | 0.0338 | 0.2407
Tech | 0.0532 | 0.5199 | 0.4838 | 0.2939 | 0.2948 | 0.0274 | 0.5318 | 0.0338 | | 0.0076
Health | 0.8082 | 0.0567 | 0.0315 | 0.0599 | 0.2793 | 0.3969 | 0.0418 | 0.2407 | 0.0076 |
Table 7: p-values of the two-tailed Mann–Whitney U tests performed on the
average $\beta$ parameter value between categories (CI = 0.95). Bold values
indicate the comparisons for which the null hypothesis was rejected.
| A_C_S | Econ | Env | H_R | Labor | Politics | Religion | Social | Tech | Health
---|---|---|---|---|---|---|---|---|---|---
A_C_S | | 0.4121 | 0.3312 | 0.3731 | 0.5789 | 0.869 | 0.4838 | 0.8942 | 0.1862 | 0.4756
Econ | 0.4121 | | 0.9253 | 0.9877 | 0.9856 | 0.2284 | 0.9212 | 0.3115 | 0.5664 | 0.0496
Env | 0.3312 | 0.9253 | | 0.8325 | 0.908 | 0.1166 | 0.9037 | 0.1748 | 0.6437 | 0.0223
H_R | 0.3731 | 0.9877 | 0.8325 | | 0.9216 | 0.1215 | 0.9618 | 0.2021 | 0.5007 | 0.0447
Labor | 0.5789 | 0.9856 | 0.908 | 0.9216 | | 0.3042 | 0.8918 | 0.4106 | 0.5814 | 0.0888
Politics | 0.869 | 0.2284 | 0.1166 | 0.1215 | 0.3042 | | 0.2611 | 0.7012 | 0.0617 | 0.3407
Religion | 0.4838 | 0.9212 | 0.9037 | 0.9618 | 0.8918 | 0.2611 | | 0.3424 | 0.6332 | 0.0733
Social | 0.8942 | 0.3115 | 0.1748 | 0.2021 | 0.4106 | 0.7012 | 0.3424 | | 0.0784 | 0.1822
Tech | 0.1862 | 0.5664 | 0.6437 | 0.5007 | 0.5814 | 0.0617 | 0.6332 | 0.0784 | | 0.0103
Health | 0.4756 | 0.0496 | 0.0223 | 0.0447 | 0.0888 | 0.3407 | 0.0733 | 0.1822 | 0.0103 |
Table 8: p-values of the two-tailed Mann–Whitney U tests performed on the
average Speed Index between categories (CI = 0.95). Bold values indicate the
comparisons for which the null hypothesis was rejected.
| A_C_S | Econ | Env | H_R | Labor | Politics | Religion | Social | Tech | Health
---|---|---|---|---|---|---|---|---|---|---
A_C_S | | 0.0369 | 0.2472 | 0.0001 | 0.009 | 0.0001 | 0.0048 | 0.0005 | 0.3624 | 0.0656
Econ | 0.0369 | | 0.1372 | 0.0023 | 0.4167 | 0.0089 | 0.0792 | 0.0501 | 0.0777 | 0.9081
Env | 0.2472 | 0.1372 | | 0 | 0.0238 | 0 | 0.004 | 0.0003 | 0.7374 | 0.1817
H_R | 0.0001 | 0.0023 | 0 | | 0.0401 | 0.4758 | 0.5167 | 0.1535 | 0 | 0.0275
Labor | 0.009 | 0.4167 | 0.0238 | 0.0401 | | 0.107 | 0.3184 | 0.3601 | 0.0121 | 0.5761
Politics | 0.0001 | 0.0089 | 0 | 0.4758 | 0.107 | | 0.9014 | 0.4024 | 0 | 0.0708
Religion | 0.0048 | 0.0792 | 0.004 | 0.5167 | 0.3184 | 0.9014 | | 0.7623 | 0.0013 | 0.1672
Social | 0.0005 | 0.0501 | 0.0003 | 0.1535 | 0.3601 | 0.4024 | 0.7623 | | 0.0002 | 0.1763
Tech | 0.3624 | 0.0777 | 0.7374 | 0 | 0.0121 | 0 | 0.0013 | 0.0002 | | 0.1064
Health | 0.0656 | 0.9081 | 0.1817 | 0.0275 | 0.5761 | 0.0708 | 0.1672 | 0.1763 | 0.1064 |
Table 9: p-values of the two-tailed Mann–Whitney U tests performed on the average Love-Hate Score between categories (CI = 0.95). Bold values indicate the comparisons for which the null hypothesis was rejected.
| A_C_S | Env | Tech | Politics | Social | H_R
---|---|---|---|---|---|---
A_C_S | | | | 0.0001 | 0.0003 | 0
Env | | | | 0 | 0.0002 | 0
Tech | | | | 0 | 0.0001 | 0
Politics | 0.9999 | 1 | 1 | | |
Social | 0.9998 | 0.9998 | 0.9999 | | |
H_R | 1 | 1 | 1 | | |
Table 10: p-values of the one-tailed Mann–Whitney U tests on the LH mean value
between the categories for which the null hypothesis was rejected in Table 9
(H1: $\mu_{r}>\mu_{c}$, where $r$ and $c$ denote the row and column category;
confidence level = 0.95). Bold values indicate the comparisons for which the
null hypothesis was rejected.
# Positivity properties for spherical functions of maximal Young subgroups
R. M. Green
Department of Mathematics
University of Colorado Boulder, Campus Box 395
Boulder, Colorado, USA 80309
<EMAIL_ADDRESS>
###### Abstract.
Let $S_{k}\times S_{n-k}$ be a maximal Young subgroup of the symmetric group
$S_{n}$. We introduce a basis ${\mathcal{B}}_{n,k}$ for the coset space
$S_{n}/S_{k}\times S_{n-k}$ that is naturally parametrized by the set of
standard Young tableaux with $n$ boxes, at most two rows, and at most $k$
boxes in the second row. The basis ${\mathcal{B}}_{n,k}$ has positivity
properties that resemble those of a root system, and there is a composition
series of the coset space in which each term is spanned by the basis elements
that it contains. We prove that the spherical functions of the associated
Gelfand pair are nonnegative linear combinations of the ${\mathcal{B}}_{n,k}$.
###### Key words and phrases:
maximal Young subgroup, Gelfand pair, spherical function, canonical basis
###### 2020 Mathematics Subject Classification:
Primary: 20C30; Secondary: 05E10.
To appear in the Annals of Combinatorics
## Introduction
A pair $(G,K)$ of finite groups is called a Gelfand pair if $K\leq G$ and the
permutation representation of $G$ on the set of left cosets $X=G/K$ of $K$ in
$G$ is multiplicity-free as a ${\mathbb{C}}G$-module. One source of examples
of Gelfand pairs arises from the action of a group $G$ on a finite metric
space $(X,d)$. Such an action is said to be distance transitive if for all
$(x_{1},y_{1}),(x_{2},y_{2})\in X\times X$, we have
$d(x_{1},y_{1})=d(x_{2},y_{2})$ if and only if there exists a $g\in G$
satisfying $g(x_{1})=x_{2}$ and $g(y_{1})=y_{2}$; this condition implies that
$G$ acts transitively as a group of isometries of $X$. If $K$ is the
stabilizer of $x_{0}\in X$ under a distance transitive action of $G$, then
$(G,K)$ is a Gelfand pair [2, Lemma 4.3.4, Example 4.3.7]. Furthermore, in
this case, the number of orbits of $K$ on $X$ (i.e., the rank of $G$ acting on
$X$ as a permutation group as in [8, Definition 8.2.4]) is equal to the number
of irreducible direct summands of the permutation module on the cosets $G/K$
[2, Corollary 4.4.3 (iii)].
The action of a Weyl group on the weights of a minuscule representation of a
simple Lie algebra satisfies the conditions of the previous paragraph with
respect to Euclidean distance by [8, Theorem 8.2.22 (ii)], so each minuscule
representation of a simple Lie algebra gives rise to a Gelfand pair. In this
paper, we concentrate on the special case where the Lie algebra has type
$A_{n-1}$, which means that the Gelfand pair $(G,K)$ is given by
$(S_{n},S_{k}\times S_{n-k})$ for some $0<k<n$. We will assume that $n\geq 2$
throughout, and without loss of generality that $k\leq n/2$. In this case,
each left coset of $X=G/K$ can be naturally identified with a squarefree
monomial of degree $k$ in the commuting indeterminates
$x_{1},x_{2},\ldots,x_{n}$, where the action of $G$ is the natural action on
subscripts, and where the identity coset $K$ is identified with the monomial
$x_{1}x_{2}\cdots x_{k}$. If we denote the set of ${\mathbb{C}}$-valued
functions on $X$ by $L(X)$, then $L(X)$ decomposes as a ${\mathbb{C}}G$-module
into a direct sum of pairwise nonisomorphic irreducible representations
$L(X)\cong V_{0}\oplus V_{1}\oplus\cdots\oplus V_{k}.$
We will identify the vector space $L(X)$ with the linear span, $V_{n,k}$, of
the squarefree monomials of degree $k$ in the commuting indeterminates
$\\{x_{1},x_{2},\ldots,x_{n}\\}$. We will refer to both these versions of the
coset basis as the monomial basis, and denote it by ${\mathcal{M}}_{n,k}$.
It follows from Frobenius reciprocity that each of the $V_{j}$ has a
$1$-dimensional $K$-invariant submodule. For each $0\leq j\leq k$, the $j$-th
spherical function ${\Phi(n,k,j)}\in L(X)$ is defined to be the element of
this $1$-dimensional submodule that is normalized so that ${\Phi(n,k,j)}$
sends the identity coset $K=x_{1}x_{2}\cdots x_{k}$ to $1$. The value of the
spherical function ${\Phi(n,k,j)}$ on a coset $gK$ turns out to be a function
of the distance $d$ between $gK$ and $K$ in the natural metric on $X$. These
spherical functions are known explicitly [2, Theorem 6.1.10] and are sometimes
called dual Hahn polynomials. They have applications to random walks and the
Bernoulli–Laplace diffusion model [3, §3]. We will not use the metric in this
paper, and instead view the spherical functions ${\Phi(n,k,j)}$ as homogeneous
polynomials of degree $k$. Because the irreducible representations of $S_{n}$
over ${\mathbb{C}}$ are defined over ${\mathbb{Q}}$, we will work over the
field ${\mathbb{Q}}$ unless stated otherwise, but scalars can be extended if
necessary.
If $B$ is a basis for an $F$-vector space $V$ with $F\leq{\mathbb{R}}$, we say
that an element $v=\sum_{b\in B}\lambda_{b}b$ is $B$-positive with
coefficients $\\{\lambda_{b}\\}_{b\in B}$ if we have $\lambda_{b}\geq 0$ for
all $b\in B$. The spherical functions ${\Phi(n,k,j)}$ are generally not
${\mathcal{M}}_{n,k}$-positive elements of $V_{n,k}$, but in this paper, we
will introduce a basis ${\mathcal{B}}_{n,k}$ for $V_{n,k}$ with respect to
which the spherical functions are ${\mathcal{B}}_{n,k}$-positive. To
illustrate this, consider the case $n=4$ and $k=1$, where we have
${\mathcal{M}}_{4,1}=\\{x_{1},x_{2},x_{3},x_{4}\\}$ and
${\mathcal{B}}_{4,1}=\\{x_{1}-x_{2},x_{2}-x_{3},x_{3}-x_{4},x_{3}+x_{4}\\}.$
If we identify the $x_{i}$ with an orthonormal basis of ${\mathbb{R}}^{n}$,
the latter basis corresponds to the basis of simple roots of type $D_{4}$, as
in [10, §2.10]. In this case, $\Phi(4,1,0)$ is both
${\mathcal{M}}_{4,1}$-positive and ${\mathcal{B}}_{4,1}$-positive:
$\Phi(4,1,0)=x_{1}+x_{2}+x_{3}+x_{4}=(x_{1}-x_{2})+2(x_{2}-x_{3})+(x_{3}-x_{4})+2(x_{3}+x_{4}),$
whereas $\Phi(4,1,1)$ is not ${\mathcal{M}}_{4,1}$-positive, but is
${\mathcal{B}}_{4,1}$-positive:
$\Phi(4,1,1)=x_{1}-\frac{1}{3}x_{2}-\frac{1}{3}x_{3}-\frac{1}{3}x_{4}=(x_{1}-x_{2})+\frac{2}{3}(x_{2}-x_{3})+\frac{1}{3}(x_{3}-x_{4}).$
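Both expansions can be verified with exact rational arithmetic. The following Python sketch (purely illustrative; vectors are coordinates over the monomial basis $x_{1},x_{2},x_{3},x_{4}$) checks the two identities above.

```python
from fractions import Fraction as F

def vec(*coeffs):
    # Coordinate vector over the monomial basis (x1, x2, x3, x4).
    return [F(c) for c in coeffs]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, v):
    return [F(c) * a for a in v]

# The basis B_{4,1} as coordinate vectors.
b1 = vec(1, -1, 0, 0)   # x1 - x2
b2 = vec(0, 1, -1, 0)   # x2 - x3
b3 = vec(0, 0, 1, -1)   # x3 - x4
b4 = vec(0, 0, 1, 1)    # x3 + x4

# Phi(4,1,0) = x1 + x2 + x3 + x4 = b1 + 2*b2 + b3 + 2*b4
assert add(add(b1, scale(2, b2)), add(b3, scale(2, b4))) == vec(1, 1, 1, 1)

# Phi(4,1,1) = x1 - (1/3)(x2 + x3 + x4) = b1 + (2/3)*b2 + (1/3)*b3
phi = vec(1, F(-1, 3), F(-1, 3), F(-1, 3))
assert add(add(b1, scale(F(2, 3), b2)), scale(F(1, 3), b3)) == phi
```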
The purpose of this paper is to study ${\mathcal{B}}_{n,k}$-positivity for
arbitrary $n\geq 2$ and $0<k\leq n/2$. We replace the root system of type
$D_{n}$ by a generalization called $k$-roots. Using a suitable total order, we
can construct a canonical basis ${\mathcal{B}}_{n,k}$ of $k$-roots analogous
to the simple roots in the case $k=1$. The basis ${\mathcal{B}}_{n,k}$ is
naturally parametrized by the set of lattice words in the alphabet $\\{1,2\\}$
that have length $n$ and at most $k$ occurrences of $2$, or equivalently (see
Remark 2.2) by the set of standard Young tableaux with $n$ boxes that have at
most two rows and at most $k$ boxes in the second row. The basis
${\mathcal{B}}_{n,k}$ may be constructed in other ways, for example by using
Kazhdan–Lusztig theory (see Remark 2.13), but the $k$-root approach has the
advantage that it is easy to deal with computationally.
The results of this paper are largely self-contained, although the key result
Proposition 1.7 is implicit in recent work of the author and T. Xu [9] on the
case $k=2$ in a much more general setting. In [9], a $k$-root is defined to be
a symmetrized tensor product of $k$ mutually orthogonal roots in the sense of
Lie theory. The cases we study in this paper correspond to performing this
construction with a root system of type $D$, where convenient Euclidean
coordinates are available. We therefore usually dispense with the root system
point of view, and instead think of $k$-roots as polynomials in these
Euclidean coordinates. It should be noted that the polynomials corresponding
to certain pairs of orthogonal roots, such as $(x_{1}-x_{2})(x_{1}+x_{2})$, do
not appear in the construction because they are not linear combinations of
squarefree monomials, and such polynomials are not counted as $k$-roots for
the purposes of this paper.
We develop the combinatorial tools needed to define and study the canonical
basis ${\mathcal{B}}_{n,k}$ in sections 1 and 2. Although the initial
definition of ${\mathcal{B}}_{n,k}$ in Definition 1.6 may seem ad hoc, we will
prove in Theorem 2.10 that ${\mathcal{B}}_{n,k}$ has a simple characterization
as the set of positive $k$-roots that are minimal in the sense of being
indecomposable into sums of other positive $k$-roots. Section 3 explores some
applications of $k$-roots to representation theory. Theorem 3.5 gives a closed
formula for the spherical functions in terms of $k$-roots; the formula does
not involve the metric on the coset space $G/K$. We use this formula to prove
the main result of this paper, which is that the spherical functions are
${\mathcal{B}}_{n,k}$-positive. Theorem 3.8 gives a sufficient condition for a
monomial basis element to be ${\mathcal{B}}_{n,k}$-positive.
## 1\. k-roots
Let $n\geq 2$ and $0<k\leq n/2$ be integers, and let $V_{n,k}$ be the
${\mathbb{Q}}$-vector space of dimension $\binom{n}{k}$ with basis consisting
of the set ${\mathcal{M}}_{n,k}$ of all squarefree monomials of degree $k$ in
the commuting indeterminates $\\{x_{1},x_{2},\ldots,x_{n}\\}$.
The set ${\mathcal{M}}_{n,k}$ can be ordered lexicographically, as follows.
Let $I=\\{i_{1},i_{2},\ldots,i_{k}\\}$ and $J=\\{j_{1},j_{2},\ldots,j_{k}\\}$
be two distinct subsets of $\\{1,2,\ldots,n\\}$ of size $k$, and let
$x_{I}=x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}$ and $x_{J}=x_{j_{1}}x_{j_{2}}\cdots
x_{j_{k}}$ be the corresponding squarefree monomials. If $t$ is the smallest
element of the symmetric difference $I\ \Delta\ J$, then we say $x_{I}\prec
x_{J}$ if $t\in I$, and $x_{J}\prec x_{I}$ if $t\in J$. (For example, we have
$x_{1}x_{2}x_{5}\prec x_{1}x_{3}x_{4}$, with $t=2$.)
We order the ${\mathbb{Q}}$-vector space $V_{n,k}$ by saying that a nonzero
vector $v\in V_{n,k}$ satisfies $v>0$ (respectively, $v<0$) if the
lexicographically minimal monomial appearing in $v$ has positive
(respectively, negative) coefficient. We then say that $v_{1}<v_{2}$ if
$v_{2}-v_{1}>0$. This makes $V_{n,k}$ into a totally ordered vector space.
(Note that we have $x_{1}x_{2}x_{5}>x_{1}x_{3}x_{4}$ in this case.)
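Both orders are straightforward to implement. The sketch below (Python, purely illustrative; a monomial is encoded as a set of indices and a vector as a dictionary from monomials to coefficients) reproduces the two parenthetical examples above.

```python
def prec(I, J):
    """x_I precedes x_J iff the smallest index in the symmetric difference lies in I."""
    t = min(set(I) ^ set(J))
    return t in set(I)

def sign(v):
    """Sign of a nonzero vector v, given as {frozenset of indices: coefficient}:
    positive iff the lexicographically minimal monomial of v has positive coefficient."""
    mons = [m for m, c in v.items() if c != 0]
    m0 = mons[0]
    for m in mons[1:]:
        if prec(m, m0):
            m0 = m
    return 1 if v[m0] > 0 else -1

assert prec({1, 2, 5}, {1, 3, 4})       # x1x2x5 precedes x1x3x4, with t = 2
# x1x2x5 - x1x3x4 > 0, so x1x3x4 < x1x2x5 in the total order on V_{n,k}.
v = {frozenset({1, 2, 5}): 1, frozenset({1, 3, 4}): -1}
assert sign(v) == 1
```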
###### Definition 1.1.
Let ${\mathcal{C}}_{n,k}$ be the subset of $V_{n,k}$ consisting of all
elements of the form
$\prod_{r=1}^{k}(\pm x_{i_{2r-1}}\pm x_{i_{2r}}),$
where the signs are chosen independently and where the set
$\\{i_{1},i_{2},\ldots,i_{2k}\\}$ is a set of $2k$ distinct indices from the
set $\\{1,2,\ldots,n\\}$. An element of ${\mathcal{C}}_{n,k}$ is called a
$k$-root. A $k$-root is called positive if it is positive in the ordering on
$V_{n,k}$, and negative otherwise.
###### Remark 1.2.
The definitions imply that if $\lambda$ is a scalar and $\alpha$ is a
$k$-root, then $\lambda\alpha$ is also a $k$-root if and only if $\lambda=\pm
1$. Note that for each $k$-root $\alpha$, the factors $(\pm x_{i}\pm x_{j})$
are well-defined up to order and multiplication by nonzero scalars, because
they are the irreducible factors of the $k$-root in the unique factorization
domain ${\mathbb{Q}}[x_{1},x_{2},\ldots,x_{n}]$.
###### Lemma 1.3.
A $k$-root is positive if and only if it can be written in the form
$\prod_{r=1}^{k}(x_{i_{2r-1}}\pm x_{i_{2r}}),$
where the indices satisfy $i_{2r-1}<i_{2r}$ for all $r$.
###### Proof.
Let $\alpha$ be a $k$-root. We can reorder each factor of $\alpha$ to be of
the form $(\pm x_{i}\pm x_{j})$ where $i<j$, and commute the negative signs on
$x_{i}$ to the front to obtain
$\alpha=\pm\prod_{r=1}^{k}(x_{i_{2r-1}}\pm x_{i_{2r}}),$
where $i_{2r-1}<i_{2r}$ for all $r$. The lexicographically minimal monomial
appearing in $\alpha$ is
$x_{i_{1}}x_{i_{3}}\cdots x_{i_{2k-1}},$
which occurs with coefficient $1$ if $\alpha$ is in the form given in the
statement, and with coefficient $-1$ if $-\alpha$ is in the form given in the
statement. The conclusion now follows. ∎
###### Definition 1.4.
We say that a positive $k$-root is in normal form if it is in the form given
by Lemma 1.3. If $\alpha$ is a positive $k$-root in normal form, then we call
a factor of $\alpha$ antisymmetric if it is of the form $(x_{i}-x_{j})$ for
$i<j$, and symmetric if it is of the form $(x_{i}+x_{j})$. If $x_{i}$ does not
appear in the factorization of $\alpha$, then we say that $i$ is an unused
index of $\alpha$.
Note that the normal form of a $k$-root is unique up to reordering the
factors.
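Lemma 1.3 gives a concrete positivity test: expand the $k$-root and inspect the coefficient of the lexicographically minimal monomial. A computational sketch (the 4-tuple encoding $(i,j,a,b)$ for a factor $a\,x_{i}+b\,x_{j}$ is our own, not from the text):

```python
from itertools import product

def expand(factors):
    """Expand a product of binomials a*x_i + b*x_j, given as tuples
    (i, j, a, b), into a dict {frozenset of indices: integer coefficient}."""
    poly = {}
    for choice in product(*[((i, a), (j, b)) for (i, j, a, b) in factors]):
        mono = frozenset(t for t, _ in choice)
        c = 1
        for _, v in choice:
            c *= v
        poly[mono] = poly.get(mono, 0) + c
    return poly

def prec(I, J):
    # x_I precedes x_J iff the smallest index in the symmetric difference lies in I.
    t = min(set(I) ^ set(J))
    return t in set(I)

def is_positive(factors):
    """Lemma 1.3: a k-root is positive iff the coefficient of its
    lexicographically minimal monomial is positive."""
    poly = expand(factors)
    m0 = None
    for m in poly:
        if m0 is None or prec(m, m0):
            m0 = m
    return poly[m0] > 0

alpha = [(1, 4, 1, -1), (2, 3, 1, -1)]         # (x1 - x4)(x2 - x3), normal form
assert is_positive(alpha)
assert expand(alpha)[frozenset({1, 2})] == 1   # lex-min monomial x1*x2, coeff +1
assert not is_positive([(1, 4, -1, 1), (2, 3, 1, -1)])   # -(x1 - x4)(x2 - x3)
```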
In the next result, $\chi^{(n-i,i)}$ refers to the character of the
irreducible ${\mathbb{C}}S_{n}$-module corresponding to the two part partition
$(n-i,i)$ of $n$, as in [6, §4]. Part (ii) is well known.
###### Lemma 1.5.
* (i)
The set ${\mathcal{C}}_{n,k}$ is a spanning set for $V_{n,k}$ over
${\mathbb{Q}}$.
* (ii)
The character of $V_{n,k}$ as a ${\mathbb{C}}S_{n}$-module is
$\sum_{i=0}^{k}\chi^{(n-i,i)}$.
###### Proof.
Let $x_{I}=x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}$ be an arbitrary monomial basis
element. Because $k\leq n/2$, we can choose indices $j_{1},j_{2},\ldots,j_{k}$
so that the set $\\{i_{1},i_{2},\ldots,i_{k},j_{1},j_{2},\ldots,j_{k}\\}$ has
cardinality $2k$. By substituting
$(x_{i_{r}}+x_{j_{r}})+(x_{i_{r}}-x_{j_{r}})$ for $2x_{i_{r}}$, the monomial
$2^{k}x_{I}$ can be expressed as a sum of $2^{k}$ distinct $k$-roots, and (i)
follows.
Since $V_{n,k}$ is induced from the trivial module of $S_{k}\times S_{n-k}$,
its character corresponds to the product of the Schur functions $s_{(n-k)}$
and $s_{(k)}$. The Pieri rule shows that we have
$s_{(n-k)}s_{(k)}=\sum_{i=0}^{k}s_{(n-i,i)},$
which proves (ii). ∎
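The substitution of $(x_{i_{r}}+x_{j_{r}})+(x_{i_{r}}-x_{j_{r}})$ for $2x_{i_{r}}$ in the proof of (i) can be checked by direct expansion. The sketch below verifies the case $k=2$, $x_{I}=x_{1}x_{2}$, with partner indices $j_{1}=3$ and $j_{2}=4$: the four $2$-roots sum to $2^{2}x_{I}$.

```python
from itertools import product

def expand(factors):
    # (i, j, s) encodes the factor (x_i + s*x_j); expand the product into
    # a dict {frozenset of indices: integer coefficient}.
    poly = {}
    for choice in product(*[((i, 1), (j, s)) for (i, j, s) in factors]):
        mono = frozenset(t for t, _ in choice)
        c = 1
        for _, v in choice:
            c *= v
        poly[mono] = poly.get(mono, 0) + c
    return poly

# Sum the four 2-roots (x1 + s1*x3)(x2 + s2*x4) over all sign choices.
total = {}
for s1 in (1, -1):
    for s2 in (1, -1):
        for m, c in expand([(1, 3, s1), (2, 4, s2)]).items():
            total[m] = total.get(m, 0) + c
total = {m: c for m, c in total.items() if c != 0}

# The cross terms cancel, leaving 2^2 * x1*x2.
assert total == {frozenset({1, 2}): 4}
```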
###### Definition 1.6.
Let $\alpha\in{\mathcal{C}}_{n,k}$ be a positive $k$-root. We say that
$\alpha$ has a defect if the normal form of $\alpha$ has any of the following
features, where in each case we have $i<j<r<s$:
* (i)
two factors of the form $(x_{i}\pm x_{r})$ and $(x_{j}\pm x_{s})$;
* (ii)
a factor of the form $(x_{i}\pm x_{s})$ and a symmetric factor of the form
$(x_{j}+x_{r})$;
* (iii)
a factor of the form $(x_{i}\pm x_{r})$, and an unused index $j$;
* (iv)
a symmetric factor of the form $(x_{i}+x_{j})$, and an unused index $r$.
In case (i), we say that $\alpha$ has a crossing; in case (ii), we say that
$(x_{j}+x_{r})$ is a nested symmetric factor; in case (iii), we say that $j$
is a nested unused index; and in case (iv), we say that $(x_{i}+x_{j})$ is an
obstructed symmetric factor.
We define ${\mathcal{B}}_{n,k}$ to be the set of positive $k$-roots with no
defects.
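Definition 1.6 translates directly into a mechanical defect checker. In the sketch below (the triple encoding $(i,j,s)$ for a factor $(x_{i}+s\,x_{j})$ with $i<j$ is our own), each assertion exercises one defect type or a defect-free element of ${\mathcal{B}}_{n,k}$.

```python
def defects(factors, n):
    """List the defects (Definition 1.6) of a positive k-root in normal form.
    Encoding: (i, j, s) stands for the factor (x_i + s*x_j) with i < j;
    s = -1 is antisymmetric, s = +1 symmetric."""
    used = {t for (i, j, _) in factors for t in (i, j)}
    unused = [u for u in range(1, n + 1) if u not in used]
    out = []
    for a, (i, r, sa) in enumerate(factors):
        for b, (j, s, sb) in enumerate(factors):
            if a == b:
                continue
            if i < j < r < s:                     # (i) crossing
                out.append(("crossing", (i, r), (j, s)))
            if sb == 1 and i < j and s < r:       # (ii) nested symmetric factor
                out.append(("nested symmetric", (j, s)))
        for u in unused:
            if i < u < r:                         # (iii) nested unused index
                out.append(("nested unused", u))
        if sa == 1 and any(u > r for u in unused):
            out.append(("obstructed symmetric", (i, r)))   # (iv)
    return out

assert defects([(1, 3, -1), (2, 4, -1)], 4)        # crossing
assert defects([(1, 4, -1), (2, 3, 1)], 4)         # nested symmetric factor
assert defects([(1, 2, 1)], 4)                     # obstructed symmetric factor
assert not defects([(1, 4, -1), (2, 3, -1)], 4)    # (x1-x4)(x2-x3), in B_{4,2}
assert not defects([(3, 4, 1)], 4)                 # (x3+x4), in B_{4,1}
```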
The next result is implicit in [9, §5], and will be very useful in the sequel.
###### Proposition 1.7.
Let $\alpha\in{\mathcal{C}}_{n,k}$ be a positive $k$-root, written in normal
form, and suppose that $\alpha$ has a defect. Then $\alpha$ can be written as
a sum of positive $k$-roots that are strictly lower than $\alpha$ in the total
order on $V_{n,k}$, by replacing the factor(s) involved in the defect
according to the following rules, where $i<j<r<s$ in all cases:
(1.1) $\displaystyle(x_{i}-x_{r})(x_{j}-x_{s})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})(x_{r}-x_{s})+(x_{i}-x_{s})(x_{j}-x_{r});$
(1.2) $\displaystyle(x_{i}+x_{r})(x_{j}-x_{s})$
$\displaystyle\quad\longrightarrow\quad(x_{i}+x_{j})(x_{r}-x_{s})+(x_{i}+x_{s})(x_{j}-x_{r});$
(1.3) $\displaystyle(x_{i}-x_{r})(x_{j}+x_{s})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})(x_{r}+x_{s})+(x_{i}+x_{s})(x_{j}-x_{r});$
(1.4) $\displaystyle(x_{i}+x_{r})(x_{j}+x_{s})$
$\displaystyle\quad\longrightarrow\quad(x_{i}+x_{j})(x_{r}+x_{s})+(x_{i}-x_{s})(x_{j}-x_{r});$
(1.5) $\displaystyle(x_{i}-x_{s})(x_{j}+x_{r})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})(x_{r}+x_{s})+(x_{i}+x_{j})(x_{r}-x_{s})+(x_{i}+x_{s})(x_{j}-x_{r});$
(1.6) $\displaystyle(x_{i}+x_{s})(x_{j}+x_{r})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})(x_{r}-x_{s})+(x_{i}+x_{j})(x_{r}+x_{s})+(x_{i}-x_{s})(x_{j}-x_{r});$
(1.7) $\displaystyle(x_{i}-x_{r})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})+(x_{j}-x_{r});$ (1.8)
$\displaystyle(x_{i}+x_{r})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})+(x_{j}+x_{r});$ (1.9)
$\displaystyle(x_{i}+x_{j})$
$\displaystyle\quad\longrightarrow\quad(x_{i}-x_{j})+(x_{j}-x_{r})+(x_{j}+x_{r}).$
The index $j$ in relations 1.7 and 1.8 and the index $r$ in relation 1.9 are
assumed to be unused indices of $\alpha$, and they may be chosen arbitrarily
subject to the constraint that $i<j<r$.
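Each of the nine rules asserts a polynomial identity, and all nine can be checked mechanically by expanding both sides with $i,j,r,s=1,2,3,4$; a sketch:

```python
from itertools import product

def expand(factors):
    # (i, j, s) encodes the factor (x_i + s*x_j).
    poly = {}
    for choice in product(*[((i, 1), (j, s)) for (i, j, s) in factors]):
        mono = frozenset(t for t, _ in choice)
        c = 1
        for _, v in choice:
            c *= v
        poly[mono] = poly.get(mono, 0) + c
    return {m: c for m, c in poly.items() if c != 0}

def poly_sum(roots):
    out = {}
    for root in roots:
        for m, c in expand(root).items():
            out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c != 0}

i, j, r, s = 1, 2, 3, 4
relations = [  # (left-hand side, right-hand side) of rules 1.1-1.9
    ([(i, r, -1), (j, s, -1)], [[(i, j, -1), (r, s, -1)], [(i, s, -1), (j, r, -1)]]),
    ([(i, r, +1), (j, s, -1)], [[(i, j, +1), (r, s, -1)], [(i, s, +1), (j, r, -1)]]),
    ([(i, r, -1), (j, s, +1)], [[(i, j, -1), (r, s, +1)], [(i, s, +1), (j, r, -1)]]),
    ([(i, r, +1), (j, s, +1)], [[(i, j, +1), (r, s, +1)], [(i, s, -1), (j, r, -1)]]),
    ([(i, s, -1), (j, r, +1)],
     [[(i, j, -1), (r, s, +1)], [(i, j, +1), (r, s, -1)], [(i, s, +1), (j, r, -1)]]),
    ([(i, s, +1), (j, r, +1)],
     [[(i, j, -1), (r, s, -1)], [(i, j, +1), (r, s, +1)], [(i, s, -1), (j, r, -1)]]),
    ([(i, r, -1)], [[(i, j, -1)], [(j, r, -1)]]),
    ([(i, r, +1)], [[(i, j, -1)], [(j, r, +1)]]),
    ([(i, j, +1)], [[(i, j, -1)], [(j, r, -1)], [(j, r, +1)]]),
]
for lhs, rhs in relations:
    assert expand(lhs) == poly_sum(rhs)
```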
###### Proof.
If $\alpha$ has a defect, then the normal form factorization of $\alpha$ must
contain the factor(s) on the left hand side of one of these nine identities,
as follows. If $\alpha$ has a crossing, then one of the relations 1.1, 1.2,
1.3, or 1.4 is applicable. If $\alpha$ has a nested symmetric factor, then one
of the relations 1.5 or 1.6 is applicable. If $\alpha$ has a nested unused
index, then one of the relations 1.7 or 1.8 is applicable. Finally, if
$\alpha$ has an obstructed symmetric factor, then relation 1.9 is applicable.
A routine verification shows that in each of the nine cases in the statement,
the polynomial on the left hand side is equal to the polynomial on the right
hand side. By inspection, all of the factors appearing are of the correct type
to appear in a normal form factorization. It follows that making the
substitutions indicated will express the normal form of the positive $k$-root
$\alpha$ as a sum of other positive $k$-roots that are also in normal form.
It remains to show that if we have $\alpha=\sum_{p=1}^{m}\beta_{p}$ as above,
where the $\beta_{p}$ are positive $k$-roots and $m>1$, then we have
$\beta_{p}<\alpha$ for all $p$. This follows from the definition of the total
order on $V_{n,k}$, because we have
$\alpha-\beta_{p}=\sum_{q:1\leq q\leq m,q\neq p}\beta_{q}.$
The right hand side is positive because it is a nontrivial sum of positive
elements of $V_{n,k}$, which means that $\alpha-\beta_{p}>0$. By definition,
this means that we have $\beta_{p}<\alpha$, as required. ∎
###### Remark 1.8.
It may be convenient to visualize the relations in Proposition 1.7 as
(singular) skein relations, in which the factors of the form $(x_{i}-x_{j})$
(respectively, $(x_{i}+x_{j})$) are represented by an undecorated
(respectively, decorated) arc from $i$ to $j$. Figure 1 shows the pictorial
version of relation 1.6, and Remark 2.13 gives some more details on the
relationship between $k$-roots and diagram algebras.
Figure 1. Relation 1.6 interpreted as a skein relation
###### Lemma 1.9.
* (i)
Every positive $k$-root in ${\mathcal{C}}_{n,k}$ can be written as a linear
combination of elements of ${\mathcal{B}}_{n,k}$ with nonnegative integer
coefficients.
* (ii)
The set ${\mathcal{B}}_{n,k}$ is a spanning set for $V_{n,k}$ over
${\mathbb{Q}}$, and we have $|{\mathcal{B}}_{n,k}|\geq\binom{n}{k}$.
###### Proof.
To prove (i), suppose that $\alpha$ is a positive $k$-root. If we have
$\alpha\in{\mathcal{B}}_{n,k}$, then we are done, so suppose that $\alpha$ has
a defect. We then apply Proposition 1.7 repeatedly to $\alpha$, applying the
nine types of reduction in any order. This process must eventually terminate
because the poset ${\mathcal{C}}_{n,k}$ is finite, and it will result in an
expression for $\alpha$ as a positive integral linear combination of elements
of ${\mathcal{B}}_{n,k}$.
The first assertion of (ii) follows from (i) and Lemma 1.5, and the second
assertion holds because $V_{n,k}$ has dimension $\binom{n}{k}.$ ∎
###### Remark 1.10.
If $1\leq i<j\leq n$ are integers, then the signed transposition
$\overline{(i,j)}$ of the $2n$ symbols $\\{\pm x_{1},\pm x_{2},\ldots,\pm
x_{n}\\}$ is the permutation that sends $\pm x_{i}$ to $\mp x_{j}$, and fixes
$\pm x_{r}$ for $r\neq i,j$. For each $k$-root $\alpha$, there is a natural
way to assign a transposition $(i,j)$ to each antisymmetric factor
$(x_{i}-x_{j})$ and a signed transposition $\overline{(i,j)}$ to each
symmetric factor $(x_{i}+x_{j})$ to give a set
$\\{t_{1},t_{2},\ldots,t_{k}\\}$ of distinct, mutually commuting signed and
unsigned transpositions with the property that $t_{i}(\alpha)=-\alpha$ for all
$i$. Up to multiplying by nonzero scalars, the vector $\alpha\in V_{n,k}$ can
be characterized as the unique common eigenvector of the
$\\{t_{1},t_{2},\ldots,t_{k}\\}$ with common eigenvalue $-1$.
## 2\. The canonical basis
The main result of this section is Theorem 2.10, which proves that the set
${\mathcal{B}}_{n,k}$ of $k$-roots without defects is a ${\mathbb{Q}}$-basis
for $V_{n,k}$, and that ${\mathcal{B}}_{n,k}$ can naturally be parametrized by
a certain set of lattice words. Recall that a lattice word is a sequence
$a_{1}a_{2}\cdots a_{n}$ of positive integers with the property that for each
positive integer $i$, each initial segment of the sequence contains at least
as many occurrences of $i$ as of $i+1$.
###### Definition 2.1.
Let $\alpha\in{\mathcal{B}}_{n,k}$ be a positive $k$-root with no defects. We
define the label, $\lambda(\alpha)$, of $\alpha$ to be the word of length $n$
in the alphabet $\\{1,2\\}$ with the property that $\lambda(\alpha)_{j}=2$ if
and only if the normal form of $\alpha$ has a factor of the form
$(x_{i}-x_{j})$ for some $i<j$. We define ${\Lambda_{n,k}}$ to be the set of
lattice words of length $n$ that have entries in the set $\\{1,2\\}$, and at
most $k$ occurrences of $2$.
###### Remark 2.2.
The set ${\Lambda_{n,k}}$ is in canonical bijection with the set of all
standard Young tableaux with $n$ boxes having at most two rows and at most $k$
boxes in the second row. The positions of the occurrences of $2$ in the
lattice word correspond to the labels of the boxes in the second row of the
tableau.
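The set ${\Lambda_{n,k}}$ is easy to enumerate by brute force, and its cardinality agrees with $\dim V_{n,k}=\binom{n}{k}$, anticipating Theorem 2.10; a sketch:

```python
from itertools import product
from math import comb

def lattice_words(n, k):
    """Lattice words over {1,2} of length n with at most k occurrences of 2:
    every initial segment has at least as many 1s as 2s."""
    out = []
    for w in product((1, 2), repeat=n):
        if sum(c == 2 for c in w) > k:
            continue
        bal = 0
        ok = True
        for c in w:
            bal += 1 if c == 1 else -1
            if bal < 0:
                ok = False
                break
        if ok:
            out.append(w)
    return out

# |Lambda_{n,k}| = C(n, k), matching dim V_{n,k}.
for n, k in [(4, 1), (4, 2), (6, 3), (8, 2)]:
    assert len(lattice_words(n, k)) == comb(n, k)
```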
Because each negative term $-x_{j}$ in the normal form is paired with a
distinct term $x_{i}$ with $i<j$, the following result is immediate.
###### Lemma 2.3.
If $\alpha\in{\mathcal{B}}_{n,k}$ is a positive $k$-root with no defects, then
we have $\lambda(\alpha)\in{\Lambda_{n,k}}$. ∎
###### Definition 2.4.
Let $\alpha\in{\mathcal{C}}_{n,k}$ be a positive $k$-root. We define the
height, $h(\alpha)$, of $\alpha$ to be the number of symmetric factors
appearing in the normal form of $\alpha$. We define ${\mathcal{B}}_{n,k,h}$ to
be the subset of ${\mathcal{B}}_{n,k}$ consisting of $k$-roots of height $h$.
Note that if $\alpha\in{\mathcal{C}}_{n,k}$ then we always have $0\leq
h(\alpha)\leq k$.
###### Lemma 2.5.
* (i)
There is a function
$f:{\mathcal{B}}_{n,k,h}\rightarrow{\mathcal{B}}_{n,k-h,0}$, where $f(\alpha)$
is defined to be the homogeneous polynomial of degree $k-h$ obtained by
removing the $h=h(\alpha)$ symmetric factors from the normal form of $\alpha$.
* (ii)
The label $\lambda(\alpha)$ has $k-h(\alpha)$ occurrences of $2$, and
satisfies $\lambda(\alpha)=\lambda(f(\alpha))$.
* (iii)
The function $f$ is injective.
###### Proof.
It follows from the definition of normal form that removing $h$ factors from
the normal form of a positive $k$-root will give the normal form of a positive
$(k-h)$-root. It remains to show that the resulting $(k-h)$-root has no
defects.
Because $f(\alpha)$ has no symmetric factors by construction, it can have no
defects of types (ii) or (iv) in Definition 1.6. It is also immediate that the
removal of factors cannot create new crossings, which means that $f(\alpha)$
has no defects of type (i) in Definition 1.6. The only way $f(\alpha)$ can
have a defect is if we are in the situation of Definition 1.6 (iii), and
because $f(\alpha)$ has no symmetric factors, we must be in the more specific
situation of relation 1.7 of Proposition 1.7.
We may now assume that $f(\alpha)$ has a factor of the form $(x_{i}-x_{r})$
and an unused index $j$ with $i<j<r$. Because $\alpha$ has no defects, the
index $j$ must be involved in a symmetric factor $(x_{j}+x_{m})$ of $\alpha$,
for some $m\neq i,j,r$. We cannot have $m<i$ or $m>r$ because the factors
$(x_{i}-x_{r})$ and $(x_{j}+x_{m})$ of $\alpha$ would create a crossing, and
we cannot have $i<m<r$ because the factors $(x_{i}-x_{r})$ and $(x_{j}+x_{m})$
of $\alpha$ would create a nested symmetric factor. This completes the proof
of (i).
Part (ii) follows from (i) and Definition 2.1.
To prove (iii), we need to show that the $h$ symmetric factors of $\alpha$ are
uniquely determined by $h$ and $f(\alpha)$. Because $\alpha$ has no obstructed
symmetric factors, it must be the case that the largest $2h$ unused indices of
$f(\alpha)$ are precisely the indices of the symmetric factors of $\alpha$.
Let us denote these indices by $\\{i_{1},i_{2},\ldots,i_{2h}\\}$, where
$i_{1}<i_{2}<\cdots<i_{2h}$. Because $\alpha$ has no crossings and no nested
symmetric factors, the symmetric factors of $\alpha$ must be
$(x_{i_{1}}+x_{i_{2}}),(x_{i_{3}}+x_{i_{4}}),\ldots,(x_{i_{2h-1}}+x_{i_{2h}}).$
This completes the proof of (iii). ∎
###### Example 2.6.
Consider the case $n=12$, $k=5$, $h=2$. Let $\alpha$ be the element of
${\mathcal{B}}_{12,5,2}$ given by
$\alpha=(x_{2}-x_{3})(x_{5}+x_{10})(x_{6}-x_{9})(x_{7}-x_{8})(x_{11}+x_{12}),$
so that $f(\alpha)$ is the element of ${\mathcal{B}}_{12,3,0}$ given by
$f(\alpha)=(x_{2}-x_{3})(x_{6}-x_{9})(x_{7}-x_{8}).$
Both $\alpha$ and $f(\alpha)$ have the label $112111122111$, where the $2$s
appear in positions $3$, $8$, and $9$. The largest $2h(=4)$ indices not
appearing in $f(\alpha)$ are $\\{5,10,11,12\\}$, from which it follows that
the symmetric factors appearing in $\alpha$ are $(x_{5}+x_{10})$ and
$(x_{11}+x_{12})$.
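The reconstruction in Example 2.6 follows the recipe from the proof of Lemma 2.5 (iii): the symmetric factors pair off the largest $2h$ unused indices of $f(\alpha)$ in increasing order. A computational check of the example:

```python
# Reconstruct the symmetric factors of alpha in Example 2.6 from f(alpha):
# pair off the largest 2h unused indices in increasing order.
n, k, h = 12, 5, 2
f_alpha_factors = [(2, 3), (6, 9), (7, 8)]   # antisymmetric factors of f(alpha)
used = {t for pair in f_alpha_factors for t in pair}
unused = sorted(u for u in range(1, n + 1) if u not in used)
top = unused[-2 * h:]                         # largest 2h unused indices
sym = [(top[m], top[m + 1]) for m in range(0, 2 * h, 2)]
assert top == [5, 10, 11, 12]
assert sym == [(5, 10), (11, 12)]             # (x5+x10) and (x11+x12)
```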
###### Lemma 2.7.
Let $\alpha\in{\mathcal{B}}_{n,k,0}$ be a $k$-root of height $0$ that has no
defects, and let $p(\alpha)$ be the normal form of $\alpha$. For each index
$j$ satisfying $\lambda(\alpha)_{j}=2$, define $g(j)$ to be the unique index
for which $(x_{g(j)}-x_{j})$ is a factor in $p(\alpha)$. Then $i=g(j)$ is the
largest index $i<j$ such that both $\lambda(\alpha)_{i}=1$ and $p(\alpha)$
contains no factor of the form $(x_{i}-x_{m})$ for any $m<j$.
###### Remark 2.8.
If one replaces the $1$s in $\lambda(\alpha)$ by open parentheses and the $2$s
by close parentheses, then the map $g$ in the statement locates the open
parenthesis that matches a given close parenthesis. Lemma 2.3 shows that it is
always possible to find a match.
###### Proof of Lemma 2.7.
We know that there is at least one index $i<j$ such that both
$\lambda(\alpha)_{i}=1$ and $p(\alpha)$ contains no factor of the form
$(x_{i}-x_{m})$ for any $m<j$, because $g(j)$ itself satisfies these
conditions. Suppose for a contradiction that there exists such an $i$ for
which $g(j)<i<j$.
If $i$ is an unused index in $\alpha$, then it is a nested unused index
relative to the factor $(x_{g(j)}-x_{j})$, which contradicts the assumption
that $\alpha\in{\mathcal{B}}_{n,k,0}\subseteq{\mathcal{B}}_{n,k}$. Because
$\lambda(\alpha)_{i}=1$, the only other possibility is for $x_{i}$ to be
involved in a factor of the form $(x_{i}-x_{m})$ with $g(j)<i<j<m$. In this
case, the pair $(x_{g(j)}-x_{j})$, $(x_{i}-x_{m})$ forms a crossing, which
also contradicts the assumption $\alpha\in{\mathcal{B}}_{n,k,0}$, completing
the proof. ∎
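The parenthesis-matching description in Remark 2.8 yields a stack-based algorithm for recovering the antisymmetric factors $(x_{g(j)}-x_{j})$ from a label; a sketch (for labels of positive height, the leftover stack entries are the indices available for symmetric factors and unused indices):

```python
def match_pairs(label):
    """Lemma 2.7 / Remark 2.8: read 1 as '(' and 2 as ')'; g(j) is the
    matching open parenthesis, giving the antisymmetric factors (x_{g(j)} - x_j)."""
    stack, pairs = [], []
    for pos, c in enumerate(label, start=1):
        if c == "1":
            stack.append(pos)
        else:
            pairs.append((stack.pop(), pos))
    return sorted(pairs)

# Label 1122 in B_{4,2} corresponds to (x1 - x4)(x2 - x3) (Example 2.12).
assert match_pairs("1122") == [(1, 4), (2, 3)]
# Label 1212 corresponds to (x1 - x2)(x3 - x4).
assert match_pairs("1212") == [(1, 2), (3, 4)]
```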
###### Lemma 2.9.
Maintain the above notation.
* (i)
For each $0\leq h\leq k$, the restriction of $\lambda$ to
${\mathcal{B}}_{n,k,h}$ is injective.
* (ii)
The labelling function $\lambda:{\mathcal{B}}_{n,k}\rightarrow{\Lambda_{n,k}}$
is injective.
###### Proof.
Lemma 2.7 shows that if $h=0$ then we can use induction on $j$ to reconstruct
$\alpha$ from $\lambda(\alpha)$. This proves (i) in the case $h=0$. The
general result of (i) now follows by combining the result for $h=0$ with Lemma
2.5.
For (ii), observe that the set ${\mathcal{B}}_{n,k}$ is the disjoint union of
the sets ${\mathcal{B}}_{n,k,h}$ for $0\leq h\leq k$. If
$\alpha\in{\mathcal{B}}_{n,k,h}$, then the number of occurrences of $2$ in
$\lambda(\alpha)$ is $k-h$. It follows that the images of the sets
${\mathcal{B}}_{n,k,h}$ for $0\leq h\leq k$ are pairwise disjoint, which
completes the proof. ∎
###### Theorem 2.10.
Let $n\geq 2$ and $0<k\leq n/2$ be integers, and let ${\mathcal{B}}_{n,k}$
be the set of positive $k$-roots with no defects.
* (i)
The labelling function $\lambda:{\mathcal{B}}_{n,k}\rightarrow{\Lambda_{n,k}}$
is a bijection.
* (ii)
The set ${\mathcal{B}}_{n,k}$ is a basis for $V_{n,k}$ over ${\mathbb{Q}}$.
* (iii)
Every positive $k$-root is ${\mathcal{B}}_{n,k}$-positive, with integer
coefficients.
* (iv)
The elements ${\mathcal{B}}_{n,k}$ are the only positive $k$-roots that cannot
be written as positive linear combinations of other positive $k$-roots.
###### Proof.
Let $T(n,j)$ be the number of lattice words of length $n$ in the alphabet
$\\{1,2\\}$ where there are precisely $j$ occurrences of $2$. It is known (see
Sequence A008315 of [12]) that
$|T(n,j)|=\binom{n}{j}-\binom{n}{j-1},$
where we interpret $\binom{n}{-1}$ to be zero.
Lemma 2.5 now implies that we have $|{\mathcal{B}}_{n,k,h}|\leq T(n,k-h)$, and
summing over $h$ gives
$|{\mathcal{B}}_{n,k}|=\sum_{h=0}^{k}|{\mathcal{B}}_{n,k,h}|\leq\sum_{h=0}^{k}T(n,k-h)=\sum_{h=0}^{k}T(n,h)=\sum_{h=0}^{k}\left[\binom{n}{h}-\binom{n}{h-1}\right]=\binom{n}{k}.$
Lemma 1.9 (ii) now implies that the inequality of the last paragraph is an
equality, and that the injective maps of Lemma 2.9 are bijective, proving (i).
Part (ii) follows because we have
$|{\mathcal{B}}_{n,k}|=\binom{n}{k}=\dim(V_{n,k})$, and part (iii) follows by
combining (ii) with Lemma 1.9 (i).
If $\alpha$ is a positive $k$-root that is not an element of
${\mathcal{B}}_{n,k}$, then $\alpha$ has a defect, and it can be written as a
positive integral linear combination of other positive $k$-roots by
Proposition 1.7. On the other hand, if $\alpha\in{\mathcal{B}}_{n,k}$ and
$\alpha$ is a positive linear combination of other positive $k$-roots, it
follows from (iii) that each of these positive $k$-roots must be a scalar
multiple of $\alpha$. By Remark 1.2, each of the positive $k$-roots must equal
$\alpha$, which is a contradiction and proves (iv). ∎
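The telescoping ballot-number identity used in the proof can be confirmed numerically; a sketch:

```python
from math import comb

def T(n, j):
    """Number of lattice words of length n over {1,2} with exactly j twos
    (OEIS A008315), with C(n,-1) read as 0."""
    return comb(n, j) - (comb(n, j - 1) if j >= 1 else 0)

# Telescoping identity from the proof: sum_{h=0}^{k} T(n,h) = C(n,k).
for n in range(2, 10):
    for k in range(0, n // 2 + 1):
        assert sum(T(n, h) for h in range(k + 1)) == comb(n, k)
```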
From now on, we will call ${\mathcal{B}}_{n,k}$ the canonical basis of
$V_{n,k}$.
###### Remark 2.11.
Parts (iii) and (iv) of Theorem 2.10 are familiar in the context of root
systems. They show that the canonical basis may be characterized purely in
terms of the vector space ordering on $V_{n,k}$, without relying on the
concept of defects at all.
###### Example 2.12.
An example of the basis ${\mathcal{B}}_{n,k}$ that does not come from a root
system is the case $n=4$, $k=2$. There are 12 positive and 12 negative
$2$-roots, and the elements of ${\mathcal{B}}_{4,2}$ and their labels are as
follows.
Canonical basis element | Label
---|---
$(x_{1}+x_{2})(x_{3}+x_{4})$ | $1111$
$(x_{1}+x_{2})(x_{3}-x_{4})$ | $1112$
$(x_{1}+x_{4})(x_{2}-x_{3})$ | $1121$
$(x_{1}-x_{2})(x_{3}+x_{4})$ | $1211$
$(x_{1}-x_{4})(x_{2}-x_{3})$ | $1122$
$(x_{1}-x_{2})(x_{3}-x_{4})$ | $1212$
The six positive roots that are not basis elements come from the left hand
sides of relations 1.1–1.6 of Proposition 1.7, taking $i=1$, $j=2$, $r=3$ and
$s=4$.
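Theorem 2.10 (ii) can be verified directly in this example by expanding the six basis elements over the monomial basis and computing the rank of the resulting $6\times 6$ matrix with exact rational arithmetic; a sketch:

```python
from fractions import Fraction
from itertools import combinations, product

def expand(factors):
    # (i, j, s) encodes the factor (x_i + s*x_j); expand the product into
    # a dict {frozenset of indices: integer coefficient}.
    poly = {}
    for choice in product(*[((i, 1), (j, s)) for (i, j, s) in factors]):
        mono = frozenset(t for t, _ in choice)
        c = 1
        for _, v in choice:
            c *= v
        poly[mono] = poly.get(mono, 0) + c
    return poly

basis = [
    [(1, 2, +1), (3, 4, +1)],   # label 1111
    [(1, 2, +1), (3, 4, -1)],   # label 1112
    [(1, 4, +1), (2, 3, -1)],   # label 1121
    [(1, 2, -1), (3, 4, +1)],   # label 1211
    [(1, 4, -1), (2, 3, -1)],   # label 1122
    [(1, 2, -1), (3, 4, -1)],   # label 1212
]
monomials = [frozenset(c) for c in combinations(range(1, 5), 2)]
M = [[Fraction(expand(b).get(m, 0)) for m in monomials] for b in basis]

def rank(rows):
    # Gaussian elimination over the rationals.
    rows = [row[:] for row in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank(M) == 6 == len(monomials)   # B_{4,2} is a basis of V_{4,2}
```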
###### Remark 2.13.
There are other constructions of the basis ${\mathcal{B}}_{n,k}$. One of these
comes from the Kazhdan–Lusztig basis $\\{C_{w}\\}$ from [11], specifically,
the basis of the module arising from the left cell containing the permutation
$(1,2)(3,4)\cdots(k-1,k)$ in type $D_{n}$ with $q=1$. One can also construct
${\mathcal{B}}_{n,k}$ from a basis for the generalized Temperley–Lieb algebra
of type $D_{n}$, after specializing $q$ to $1$ and twisting by sign. The
latter basis may be defined in terms of monomials as in [4, §6.2], or in terms
of diagrams as in [7], and the definition of “defect” in this paper is closely
related to the diagrammatic rules in [7]. When $k=n/2$, it is necessary in all
these constructions to take the union of two cells: the one just described,
and its image under the automorphism that sends $x_{n}$ to $-x_{n}$ and fixes
$x_{i}$ for $i<n$. One can find module isomorphisms between these various
constructions by using the characterization of Remark 1.10.
The $k$-root approach has a significant advantage over these other
constructions, which is that Proposition 1.7 makes it easy (a) to work out the
effect of applying an arbitrary (signed) permutation $w$ to a basis element
$\alpha$ and then (b) to express the result as a linear combination of basis
elements.
## 3\. Main results
In Section 3, we explore some applications of $k$-roots and the basis
${\mathcal{B}}_{n,k}$ in representation theory. We first show how
${\mathcal{B}}_{n,k}$ naturally gives rise to a composition series of
$V_{n,k}$ as an $S_{n}$-module. We refer the reader to Fulton and Harris [6]
for background information on the character theory of the symmetric groups.
###### Definition 3.1.
Let $V_{n,k,t}$ be the ${\mathbb{Q}}$-linear span of all $k$-roots in
${\mathcal{B}}_{n,k}$ that have height at most $t$; that is,
$V_{n,k,t}:=\text{\rm
Span}\left(\bigsqcup_{h=0}^{t}{\mathcal{B}}_{n,k,h}\right).$
###### Proposition 3.2.
* (i)
The subspaces $V_{n,k,t}$ of Definition 3.1 are ${\mathbb{Q}}S_{n}$-submodules
of $V_{n,k}$.
* (ii)
The chain
$V_{n,k,-1}:=0<V_{n,k,0}<V_{n,k,1}<\cdots<V_{n,k,k}=V_{n,k}$
is a composition series of $V_{n,k}$ as a ${\mathbb{Q}}S_{n}$-module.
###### Proof.
Let $\alpha\in{\mathcal{B}}_{n,k}$ be a canonical basis element of height $h$,
and let $w\in S_{n}$ be a permutation. One of $\pm w(\alpha)$ is a positive
$k$-root of height $h$. A routine case-by-case check shows that the reduction
rules in Proposition 1.7 all express a positive $k$-root as a linear
combination of positive $k$-roots of the same, or lower heights. It follows
that $w(\alpha)$ is a linear combination of canonical basis elements of height
at most $h$, and this proves part (i).
Note that for each $j$ satisfying $0\leq j\leq k$, there exists an element of
${\Lambda_{n,k}}$ with precisely $j$ occurrences of $2$; for example, the word
$1^{n-j}2^{j}$. Each such word corresponds via Theorem 2.10 (i) to an element
of ${\mathcal{B}}_{n,k}$ of height $k-j$, so there exist basis elements of all
possible heights $h$ in the range $0\leq h\leq k$. It follows that each
inclusion in the chain in (ii) is strict, and thus that the series has $k+1$
nontrivial quotients $V_{n,k,h}/V_{n,k,h-1}$.
Lemma 1.5 (ii) implies that $V_{n,k}$ is the direct sum of $k+1$ irreducible
$S_{n}$-submodules. Since this is the same as the number of nontrivial
quotients in the series of (ii), it follows both that $V_{n,k}$ is a direct
sum of $k+1$ irreducible submodules over ${\mathbb{Q}}$, and that the series
in (ii) is a composition series. ∎
The next result is useful for determining when a positive $k$-root stays
positive after a permutation acts on it.
###### Lemma 3.3.
Let $\alpha\in{\mathcal{C}}_{n,k}$ be a positive $k$-root, and let $w\in
S_{n}$ be a permutation. If $w(\alpha)$ is negative, then the normal form of
$\alpha$ must contain a factor $(x_{i}-x_{j})$ for which $w(i)>w(j)$.
###### Proof.
If $w(\alpha)$ is negative, but there is no factor in $\alpha$ of the form
$(x_{i}-x_{j})$ satisfying $w(i)>w(j)$, then each factor in the normal form of
$\alpha$ is sent by $w$ to another factor in normal form. It follows from
Lemma 1.3 that $w(\alpha)$ is positive, which is a contradiction. ∎
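The contrapositive of Lemma 3.3 gives a cheap positivity certificate: if no antisymmetric factor of the normal form is reversed by $w$, the image is guaranteed positive. A minimal sketch, with an assumed encoding of normal forms as triples $(i,j,s)$ for the factor $(x_{i}+sx_{j})$ and of permutations as dicts (all names are ours, not from the paper):

```python
def surely_positive(factors, w):
    """Sufficient condition from Lemma 3.3 (contrapositive): if no antisymmetric
    factor (x_i - x_j) of the normal form satisfies w(i) > w(j), then w(alpha)
    is a positive k-root.  factors: list of (i, j, s) encoding (x_i + s*x_j)
    with i < j; w: dict mapping indices to indices."""
    return all(not (s == -1 and w[i] > w[j]) for i, j, s in factors)

# (x1 - x2)(x3 + x4): the transposition (3 4) reverses no antisymmetric factor,
# so its image is guaranteed positive; (1 2) reverses (x1 - x2), so the lemma
# gives no guarantee there.
print(surely_positive([(1, 2, -1), (3, 4, 1)], {1: 1, 2: 2, 3: 4, 4: 3}))  # True
print(surely_positive([(1, 2, -1), (3, 4, 1)], {1: 2, 2: 1, 3: 3, 4: 4}))  # False
```

Note that the condition is only sufficient: `False` here means the lemma offers no guarantee, not that $w(\alpha)$ is necessarily negative.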
Although we know that the irreducible components of $V_{n,k}$ have characters
$\chi^{(n-i,i)}$, we will need to be able to match these to the composition
factors of the series in Proposition 3.2. The next result helps with this.
###### Lemma 3.4.
Let $A$ be a subset of $\\{1,2,\ldots,n\\}$ of cardinality $a$, and let
$S_{A}$ be the full symmetric group on $A$ considered as a subgroup of $S_{n}$
of order $a!$. Define
$x_{A}:=\sum_{w\in S_{A}}w.$
* (i)
If $V_{i}$ is an irreducible ${\mathbb{C}}S_{n}$-module with character
$\chi^{(n-i,i)}$ for some $i\leq n/2$, then we have $x_{A}.V_{i}\neq 0$ if and
only if $a\leq n-i$.
* (ii)
If $1\leq j\leq k$, then the $k$-root
$\beta_{j}=\prod_{i=1}^{j}(x_{i}-x_{k+i})\prod_{i=j+1}^{k}(x_{i}+x_{k+i})$
lies in $V_{n,k,k-j}$.
* (iii)
If $V_{n,k,t}$ is as in Proposition 3.2, then we have $x_{A}.V_{n,k,t}\neq 0$
if and only if $a\leq n-k+t$.
* (iv)
The composition factor $V_{n,k,t}/V_{n,k,t-1}$ has character
$\chi^{(n-k+t,k-t)}$, and $V_{n,k,t}$ has character
$\sum_{i=k-t}^{k}\chi^{(n-i,i)}.$
###### Proof.
If we define $B=\\{1,2,\ldots,a\\}$, then we have $x_{A}=gx_{B}g^{-1}$ for
some $g\in S_{n}$, which implies that $x_{A}$ and $x_{B}$ annihilate the same
modules. It is therefore enough to consider the case where
$A=\\{1,2,\ldots,a\\}$.
The condition that $x_{A}.V_{i}\neq 0$ is equivalent to the condition that
$V_{i}$, when regarded as an $S_{A}$-module, contains a copy of the trivial
representation, that is, that
$\langle 1,V_{i}\downarrow^{S_{n}}_{S_{A}}\rangle\neq 0$
in the usual inner product on characters. By Frobenius reciprocity, this is
equivalent to
$\langle 1\uparrow_{S_{A}}^{S_{n}},V_{i}\rangle\neq 0.$
Since $S_{A}$ is a Young subgroup of $S_{n}$ of type $S_{a}\times
S_{1}\times\cdots\times S_{1}$, where there are $n-a$ copies of $S_{1}$, it
follows that the character of $1\uparrow_{S_{A}}^{S_{n}}$ corresponds to the
product of Schur functions
$s_{(a)}s_{(1)}\cdots s_{(1)},$
where again there are $n-a$ copies of $s_{(1)}$. This corresponds to adding
$n-a$ boxes, one at a time, to the partition $(a)$. This will result in at
least one copy of $s_{(n-i,i)}$ if and only if we have $n-a\geq i$; otherwise,
there are not enough single boxes to fill the second row. This proves (i).
Let the $k$-root $\beta_{j}$ be as in the statement of (ii). Observe that
$\beta_{j}$ is a $k$-root of height $k-j$, and it is in the same $S_{n}$-orbit
as any element of ${\mathcal{B}}_{n,k,k-j}$, for example, the element of
${\mathcal{B}}_{n,k}$ whose label is $1^{n-j}2^{j}$. Since $\beta_{j}$ is in
the same $S_{n}$-orbit as an element of $V_{n,k,k-j}$, it follows that
$\beta_{j}\in V_{n,k,k-j}$, proving (ii).
To prove (iii), note that any basis element $\alpha$ of $V_{n,k,t}$ has at
least $k-t$ antisymmetric factors. If we have $a>n-(k-t)$, then it is
inevitable that at least one of the antisymmetric factors of $\alpha$ is of
the form $(x_{i}-x_{j})$ where both of $i$ and $j$ lie in $A$. It follows that
$\alpha$ is annihilated by $1+w$, where $w$ is the transposition $(i,j)$. By
summing over a set of left coset representatives of $\langle w\rangle$ in
$S_{A}$, we can factorize $x_{A}$ as $x^{\prime}(1+w)$, from which it follows
that $x_{A}$ annihilates $\alpha$. Since $\alpha$ was arbitrary, we deduce
that $x_{A}$ annihilates $V_{n,k,t}$ if $a>n-k+t$.
It remains to show that if $a\leq n-k+t$, then $x_{A}$ does not annihilate
$V_{n,k,t}$. Let $\beta=\beta_{k-t}$ be the $k$-root defined in (ii). By
construction, no antisymmetric factor of $\beta$ has both endpoints in the set
$A$. Lemma 3.3 now implies that any $w\in S_{A}$ has the property that
$w(\beta)$ is a positive $k$-root. It follows that $x_{A}(\beta)$ is a
nontrivial sum of positive $k$-roots, and Theorem 2.10 (iii) shows that
$x_{A}(\beta)$ is a nontrivial sum of canonical basis elements. In particular,
$x_{A}$ does not annihilate $\beta$, and therefore $x_{A}$ does not annihilate
$V_{n,k,t}$, proving (iii).
If we set $A=\\{1,2,\ldots,n-k+t\\}$ and $B=\\{1,2,\ldots,n-k+t+1\\}$, then
(iii) implies that $x_{A}$ annihilates $V_{n,k,t-1}$, but not $V_{n,k,t}$, and
that $x_{B}$ annihilates $V_{n,k,t}$. The character of $V_{n,k,t}/V_{n,k,t-1}$
is therefore the character of the form $\chi^{(n-i,i)}$ that is annihilated by
$x_{B}$ but not by $x_{A}$. By (i), we find the solution is to take $i=k-t$,
which proves the first assertion of (iv). The second assertion of (iv) follows
by summing over all the composition factors of $V_{n,k,t}$. ∎
Recall (for example, from sections 2 and 3 of [3] or the proof of [2,
Corollary 4.6.4 (ii)]) that the spherical functions ${\Phi(n,k,j)}$ are
characterized by the following properties:
* (i)
${\Phi(n,k,j)}$ lies in the irreducible summand of $L(X)$ with character
$\chi^{(n-j,j)}$;
* (ii)
${\Phi(n,k,j)}$ is fixed pointwise by the subgroup $K=S_{k}\times S_{n-k}$;
* (iii)
${\Phi(n,k,j)}$ takes the value $1$ at the identity coset; in other words, the
coefficient of $x_{1}x_{2}\cdots x_{k}$ in ${\Phi(n,k,j)}$ is $1$.
We are now ready to give a construction of these spherical functions in terms
of $k$-roots.
###### Theorem 3.5.
Let $n\geq 2$ and $0\leq k\leq n/2$, and $0\leq j\leq k$ be integers. Let
$A=\\{1,2,\ldots,k\\}$ and let $B=\\{j+1,j+2,\ldots,n\\}$.
* (i)
As a homogeneous polynomial in $x_{1},x_{2},\ldots,x_{n}$, the $j$-th
spherical function ${\Phi(n,k,j)}$ of the Gelfand pair $(S_{n},S_{k}\times
S_{n-k})$ is given by
$\frac{(n-2k)!}{k!(n-k)!2^{k-j}(k-j)!}\left(\sum_{v\in
S_{A}}v\right)\left(\sum_{w\in S_{B}}w\right)\cdot\beta_{j},$
where $\beta_{j}$ is the $k$-root defined by
$\beta_{j}=\prod_{i=1}^{j}(x_{i}-x_{k+i})\prod_{i=j+1}^{k}(x_{i}+x_{k+i}).$
* (ii)
The function ${\Phi(n,k,j)}$ is ${\mathcal{B}}_{n,k}$-positive, and its
coefficients are nonnegative integer multiples of $1/N$, where $N$ is the
integer
$(k)_{j}(n-k)_{k}=\frac{k!(n-k)!}{(k-j)!(n-2k)!}.$
###### Proof.
Let ${\Psi(n,k,j)}$ be the polynomial given in the formula; we will show that
${\Psi(n,k,j)}$ is equal to the $j$-th spherical function, ${\Phi(n,k,j)}$.
Any permutation in $S_{A}$ will fix ${\Psi(n,k,j)}$, because we have already
symmetrized over $S_{A}$. Any permutation of $\\{k+1,k+2,\ldots,n\\}$ will
commute with each element of $S_{A}$, and be equal to an element of $S_{B}$,
so these too will fix ${\Psi(n,k,j)}$. It follows that ${\Psi(n,k,j)}$ is
fixed by the subgroup $K=S_{k}\times S_{n-k}$.
We next prove that ${\Psi(n,k,j)}$ lies in the unique irreducible submodule
$V_{j}$ of $V_{n,k}$ with character $\chi^{(n-j,j)}$. The $k$-root $\beta_{j}$
lies in $V_{n,k,k-j}$ by Lemma 3.4 (ii), and $V_{n,k,k-j}$ has character
$\sum_{i=j}^{k}\chi^{(n-i,i)}$ by Lemma 3.4 (iv). It is enough to show that
$\sum_{w\in S_{B}}w\cdot\beta_{j}$ lies in $V_{j}$. Since $B$ has cardinality
$n-j$, it follows from Lemma 3.4 (i) that $\sum_{w\in S_{B}}w$ will annihilate
every submodule whose character is in the set
$\\{\chi^{(n-i,i)}:j\leq i\leq k\\}$
except the one with character $\chi^{(n-j,j)}$. It follows that $\sum_{w\in
S_{B}}w\cdot\beta_{j}$ lies in $V_{j}$, as required.
In order to complete the proof of (i), it remains to show that
$x_{1}x_{2}\ldots x_{k}$ appears in ${\Psi(n,k,j)}$ with coefficient $1$.
Observe that the polynomial $\prod_{i=j+1}^{k}(x_{i}+x_{k+i})$ is stabilized
by a subgroup $U\leq S_{B}$, generated by transpositions $(i,k+i)$ that fix
each factor, together with permutations of the $k-j$ factors. It follows that
$U$ has order $|U|=2^{k-j}(k-j)!$. By summing over a set of left coset
representatives, $X_{B}$, for the left cosets $S_{B}/U$ of $U$ in $S_{B}$, we
can obtain an expression for ${\Psi(n,k,j)}$ that is equivalent to the one in
the statement but has fewer terms, as follows:
${\Psi(n,k,j)}=\frac{(n-2k)!}{k!(n-k)!}\left(\sum_{v\in
S_{A}}v\right)\left(\sum_{w\in X_{B}}w\right)\cdot\beta_{j}.$
Note that every antisymmetric factor of $\beta_{j}$ contains precisely one
index from the set $\\{1,2,\ldots,j\\}$, and these indices are fixed pointwise
by every element in $S_{B}$. Lemma 3.3 shows that the $k$-roots
$\\{w\cdot\beta_{j}:w\in X_{B}\\}$ are all positive, and because we are
summing over cosets of the stabilizer, each element of $X_{B}$ gives a
different positive $k$-root $w\cdot\beta_{j}$. It is convenient to separate
the $k$-roots $w\cdot\beta_{j}$ into three mutually exclusive types.
* Type 1:
positive $k$-roots containing at least one antisymmetric factor
$(x_{p}-x_{q})$ where $1\leq p<q\leq k$;
* Type 2:
positive $k$-roots that are not of type 1, but that contain at least one
symmetric factor $(x_{p}+x_{q})$ where either $1\leq p<q\leq k$ or $k+1\leq
p<q\leq n$;
* Type 3:
positive $k$-roots where each factor contains precisely one $x_{p}$ where
$1\leq p\leq k$.
The $k$-roots of type 1 are annihilated by elements of the form $1+w$ where
$w$ is the transposition $(p,q)$. As in the proof of Lemma 3.4 (iii), it
follows that $k$-roots of type $1$ are annihilated by $\sum_{v\in S_{A}}v$,
and thus that they make no net contribution to the sum. These terms may be
ignored from now on.
The $k$-roots of types 2 and 3 contain no antisymmetric factors
$(x_{p}-x_{q})$ with $1\leq p<q\leq k$. If $\alpha$ is a $k$-root of type 2 or
3 and $v\in S_{A}$, it follows by Lemma 3.3 that $v\cdot\alpha$ is also a
positive $k$-root.
If $\alpha$ has type 2, then the $k$-root $v\cdot\alpha$ will have a factor
$(x_{v(p)}+x_{v(q)})$ with two indices in the range $1\leq v(p),v(q)\leq k$ or
in the range $k+1\leq v(p),v(q)\leq n$, and this means that $x_{1}x_{2}\cdots
x_{k}$ appears in $v\cdot\alpha$ with coefficient zero.
A $k$-root $\alpha$ is of type $3$ if and only if it has the form
$\alpha=\prod_{i=1}^{j}(x_{i}-x_{\iota(i)})\prod_{i=j+1}^{k}(x_{i}+x_{\iota(i)})$
for some injective function
$\iota:\\{1,2,\ldots,k\\}\rightarrow\\{k+1,k+2,\ldots,n\\}$. The monomial
$x_{1}x_{2}\cdots x_{k}$ appears in each such $k$-root with coefficient $1$,
and the number of $k$-roots of type $3$ is the same as the number of functions
$\iota$, which is $(n-k)_{k}=(n-k)!/(n-2k)!$. The action of a permutation
$v\in S_{A}$ leaves invariant the coefficient of $x_{1}x_{2}\cdots x_{k}$,
which implies that the coefficient of $x_{1}x_{2}\cdots x_{k}$ in
$\left(\sum_{v\in S_{A}}v\right)\left(\sum_{w\in X_{B}}w\right)\cdot\beta_{j}$
is $k!(n-k)!/(n-2k)!$, proving (i).
The above argument has also shown that $\beta^{\prime}_{j}:=\left(\sum_{v\in
S_{A}}v\right)\left(\sum_{w\in X_{B}}w\right)\cdot\beta_{j}$ is a sum of
positive $k$-roots: the terms of type 1 all cancel, and the terms of types 2
and 3 lead to sums of positive $k$-roots. Theorem 2.10 (iii) implies that
$\beta^{\prime}_{j}$ is a linear combination of canonical basis elements with
nonnegative integer coefficients. Now let $C=A\cap B$, so that $|C|=k-j$, and
let $X_{A}$ be a set of left coset representatives of $S_{C}$ in $S_{A}$. The
left $S_{B}$-invariance of $\sum_{w\in X_{B}}w\cdot\beta_{j}$ then implies
that
$\beta^{\prime}_{j}=\left(\sum_{v\in X_{A}}v\right)\left(\sum_{u\in
S_{C}}u\left(\sum_{w\in X_{B}}w.\beta_{j}\right)\right)=(k-j)!\left(\sum_{v\in
X_{A}}v\right)\left(\sum_{w\in X_{B}}w\right)\cdot\beta_{j}.$
It follows that the coefficients of the canonical basis elements in
$\beta^{\prime}_{j}$ are all integer multiples of $(k-j)!$, and dividing by
the factor of $k!(n-k)!/(n-2k)!$ from the previous paragraph then proves (ii).
∎
###### Remark 3.6.
The bound on the denominator given in Theorem 3.5 (ii) is sharp in some
cases; for example, when $n=4$ and $k=j=1$, the denominator $N=3$ given by the
theorem is the best possible.
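The characterization of ${\Phi(n,k,j)}$ by properties (i)–(iii) above suggests a brute-force numerical check that sidesteps the closed-form constant: symmetrize $\beta_{j}$ over $S_{A}\times S_{B}$ and rescale so that $x_{1}x_{2}\cdots x_{k}$ has coefficient $1$. A pure-Python sketch for the case $n=4$, $k=j=1$ of Remark 3.6, representing multilinear polynomials as dicts from monomials (frozensets of variable indices) to coefficients; the representation and helper names are ours:

```python
from itertools import permutations
from fractions import Fraction

def expand(factors):
    """Expand a product of linear factors (i, j, s), meaning (x_i + s*x_j),
    into a dict {frozenset(variable indices): integer coefficient}."""
    poly = {frozenset(): 1}
    for i, j, s in factors:
        nxt = {}
        for mono, c in poly.items():
            for idx, t in ((i, 1), (j, s)):
                key = mono | {idx}
                nxt[key] = nxt.get(key, 0) + c * t
        poly = nxt
    return poly

def act(perm, poly):
    """Relabel the variable indices of a multilinear polynomial by perm."""
    out = {}
    for mono, c in poly.items():
        key = frozenset(perm.get(i, i) for i in mono)
        out[key] = out.get(key, 0) + c
    return out

n, k, j = 4, 1, 1
A, B = list(range(1, k + 1)), list(range(j + 1, n + 1))  # disjoint when j = k

# beta_j = prod_{i<=j} (x_i - x_{k+i}) * prod_{j<i<=k} (x_i + x_{k+i})
beta = expand([(i, k + i, -1) for i in range(1, j + 1)]
              + [(i, k + i, +1) for i in range(j + 1, k + 1)])

# Symmetrize over S_A x S_B, then rescale so x_1...x_k has coefficient 1
sym = {}
for pa in permutations(A):
    for pb in permutations(B):
        perm = dict(zip(A + B, list(pa) + list(pb)))
        for mono, c in act(perm, beta).items():
            sym[mono] = sym.get(mono, 0) + c

top = frozenset(range(1, k + 1))
phi = {m: Fraction(c, sym[top]) for m, c in sym.items() if c}
print(phi[frozenset({2})])   # -1/3, matching the denominator of Remark 3.6
```

The result is $\Phi(4,1,1)=x_{1}-\tfrac{1}{3}(x_{2}+x_{3}+x_{4})$, which is visibly fixed by $K=S_{1}\times S_{3}$ and has the $1/3$ denominators of Remark 3.6.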
Finally, we consider the problem of expressing elements of
${\mathcal{M}}_{n,k}$ as linear combinations of the canonical basis
${\mathcal{B}}_{n,k}$. The basis ${\mathcal{B}}_{n,k}$ only has one element
that is a positive linear combination of the natural basis of monomials,
namely the basis element whose label is $1^{n}$. However, it often happens
that a squarefree monomial can be written as a positive linear combination of
the ${\mathcal{B}}_{n,k}$. The following definition is helpful for
understanding this.
###### Definition 3.7.
Let $x_{I}=x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}$ be a squarefree monomial of
degree $k$ in the indeterminates $x_{1},\ldots,x_{n}$. Define the label,
$\mu(x_{I})$ of $x_{I}$ to be the sequence of length $n$ in the alphabet
$\\{1,2\\}$ with the property that $\mu(x_{I})_{j}=2$ if and only if $x_{j}$
appears in $x_{I}$. We say that $\mu(x_{I})$ is a reverse lattice word if
every terminal segment of $\mu(x_{I})$ contains at least as many $1$s as $2$s.
###### Theorem 3.8.
Let $n\geq 2$ and $0\leq k\leq n/2$ be integers, and let
$x_{I}=x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}}$ be a squarefree monomial of degree
$k$ in the indeterminates $x_{1},\ldots,x_{n}$. If the label $\mu(x_{I})$ is a
reverse lattice word, then $x_{I}$ is ${\mathcal{B}}_{n,k}$-positive, with
coefficients that are nonnegative integer multiples of $1/2^{k}$.
###### Proof.
Suppose that $x_{I}$ satisfies the hypotheses in the statement. It is enough
to prove that $2^{k}x_{I}$ is ${\mathcal{B}}_{n,k}$-positive with integer
coefficients.
The hypothesis that $\mu(x_{I})$ is a reverse lattice word means that there is
an injective function
$f:\\{i_{1},i_{2},\ldots,i_{k}\\}\rightarrow\\{1,2,\ldots,n\\}$ with the
properties that for all $1\leq r\leq k$, both (a) $f(i_{r})>i_{r}$ and (b)
$\mu(x_{I})$ has a $1$ at position $f(i_{r})$. By making the substitutions
$2x_{i_{r}}\rightarrow(x_{i_{r}}-x_{f(i_{r})})+(x_{i_{r}}+x_{f(i_{r})}),$
we can express $2^{k}x_{I}=(2x_{i_{1}})(2x_{i_{2}})\cdots(2x_{i_{k}})$ as a
sum of $2^{k}$ positive $k$-roots in normal form. The result now follows from
Theorem 2.10 (iii). ∎
###### Remark 3.9.
The number of reverse lattice words with $k$ occurrences of $2$ is
$\binom{n}{k}-\binom{n}{k-1}$. It therefore follows that all but at most
$\binom{n}{k-1}$ elements of the monomial basis are
${\mathcal{B}}_{n,k}$-positive. If $k$ is small compared to $n$, then the
hypotheses of Theorem 3.8 will usually be satisfied, but if $k$ is close to
$n/2$, the hypotheses will rarely be satisfied. The monomial $x_{1}x_{2}\cdots
x_{k}$, whose label is $2^{k}1^{n-k}$, will always satisfy the hypotheses.
It would be interesting to know whether the sufficient condition in the
theorem is also necessary. This is the case when $k=1$, where $x_{n}$ is the
only monomial that is not ${\mathcal{B}}_{n,1}$-positive.
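The reverse lattice word condition of Definition 3.7 and the count in Remark 3.9 are easy to check by enumeration. A small sketch (function names are ours) that tests the condition and verifies the count $\binom{n}{k}-\binom{n}{k-1}$ for a small case:

```python
from math import comb
from itertools import combinations

def is_reverse_lattice(label):
    """Definition 3.7: every terminal segment of the {1,2}-word has at least
    as many 1s as 2s."""
    ones = twos = 0
    for ch in reversed(label):
        if ch == '1':
            ones += 1
        else:
            twos += 1
        if twos > ones:
            return False
    return True

def count_reverse_lattice(n, k):
    """Count labels of squarefree degree-k monomials in n variables that
    are reverse lattice words."""
    total = 0
    for pos in combinations(range(n), k):
        label = ''.join('2' if i in pos else '1' for i in range(n))
        total += is_reverse_lattice(label)
    return total

# Remark 3.9: the count is C(n, k) - C(n, k-1)
print(count_reverse_lattice(6, 3), comb(6, 3) - comb(6, 2))   # 5 5
```

For instance, the label $2^{k}1^{n-k}$ of $x_{1}x_{2}\cdots x_{k}$ always passes, while the label $1^{n-1}2$ of $x_{n}$ always fails, consistent with the $k=1$ case discussed above.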
## 4\. Concluding remarks
### 4.1. Other Gelfand pairs
A natural question is whether the results of this paper have analogues for
other Gelfand pairs. This happens, for example, in the case of the Gelfand
pair $(S_{2n},S_{n}\wr({\mathbb{Z}}/2{\mathbb{Z}}))$, which corresponds to the
action of $S_{2n}$ on the size $n$ subsets of $\\{1,2,\ldots,2n\\}$, where
each subset is identified with its complement. In this case, the coset space
of the Gelfand pair has a canonical basis induced by the elements of
${\mathcal{B}}_{2n,n}$ that have an even number of antisymmetric factors.
An example of a Gelfand pair that has a simpler treatment than the one in this
paper is $(({\mathbb{Z}}/2{\mathbb{Z}})\wr S_{n},S_{n})$. In this case, the
$2^{n}$ cosets correspond to the weights of the spin representation of a
simple Lie algebra of type $B_{n}$ [8, Proposition 6.4.5]. There is a
well-known action of the Weyl group $W(B_{n})\cong({\mathbb{Z}}/2{\mathbb{Z}})\wr
S_{n}$ on $2n$ symbols
$\\{1,\overline{1},2,\overline{2},\ldots,n,\overline{n}\\}$ [8, Example
1.4.5]. This induces an action by signed permutations on the span of the
$2^{n}$ linearly independent polynomials in the $2n$ commuting indeterminates
$x_{1},x_{\overline{1}},\ldots,x_{n},x_{\overline{n}}$ of the form
$(x_{1}\pm x_{\overline{1}})(x_{2}\pm x_{\overline{2}})\cdots(x_{n}\pm
x_{\overline{n}}),$
where the signs are chosen independently. This is a basis for the permutation
module on the cosets that is compatible with the direct sum decomposition into
$W(B_{n})$-irreducibles, and the corresponding spherical functions are given
by taking the average of each $S_{n}$-orbit of basis elements.
The results of this paper can also be thought of in terms of averaging
operators. It follows from Theorem 3.5 that when the spherical functions of
$(S_{n},S_{k}\times S_{n-k})$ are written as linear combinations of canonical
basis elements, the denominators of the coefficients divide the order of $K$,
namely $k!(n-k)!$. Using this, one can replace the expression in Theorem 3.5
(i) by an averaging operator over $K$ acting on a sum with far fewer terms.
This suggests that there may be continuous versions of these results in which
the averaging operator is replaced by a suitable integral.
### 4.2. Sign-coherence
Because the set ${\mathcal{C}}_{n,k}$ of $k$-roots is permuted by the action
of any permutation $w$, it follows from Theorem 2.10 (iii) that the matrix
$\rho(w)$ representing $w$ with respect to ${\mathcal{B}}_{n,k}$ is an integer
valued column sign-coherent matrix. The property of column sign-coherence
comes from the theory of cluster algebras ([1, Definition 2.2 (i)], [5,
Definition 6.12], [9, §5]), and means that any two nonzero entries in the same
column of $\rho(w)$ have the same sign. Each simple $S_{n}$-module
$V_{n,k,t}/V_{n,k,t-1}$ inherits a basis from ${\mathcal{B}}_{n,k}$ that also
has the sign-coherence property. This sign-coherence property is remarkable
because it fails easily for irreducible $S_{n}$-modules corresponding to
partitions with more than two rows; for example, the irreducible module for
$S_{4}$ with character $\chi^{(2,1,1)}$ contains a counterexample. The
monomial basis for $V_{n,k}$ and the basis mentioned in 4.1 both have the
sign-coherence property, but in the trivial sense that the matrices
representing group elements have only one nonzero entry per column.
### 4.3. Differential operators
Some of the results of this paper say something about the differential
operators $d:V_{n,k}\rightarrow V_{n,k-1}$ given by
$d=\sum_{i=1}^{n}\partial/\partial x_{i}$. It follows from the definitions
that $d$ sends positive $k$-roots to linear combinations of positive $k$-roots
with positive even integer coefficients. Theorem 2.10 (iii) then implies that
the entries of the matrix of $d$ relative to ${\mathcal{B}}_{n,k}$ and
${\mathcal{B}}_{n,k-1}$ are positive even integers. The submodules $V_{n,k,t}$
of Definition 3.1 can be simply characterized as the kernels of the composite
operators $d^{t+1}$, as in [2, Theorem 6.1.6 (v)].
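The claim that $d$ sends a positive $k$-root to a combination of positive $(k-1)$-roots with positive even coefficients can be checked directly on expanded multilinear polynomials: $d$ annihilates each antisymmetric factor and doubles each symmetric one. A small pure-Python sketch (the dict representation and helper names are ours):

```python
def expand(factors):
    """Expand a product of linear factors (i, j, s), meaning (x_i + s*x_j),
    into a dict {frozenset(variable indices): integer coefficient}."""
    poly = {frozenset(): 1}
    for i, j, s in factors:
        nxt = {}
        for mono, c in poly.items():
            for idx, t in ((i, 1), (j, s)):
                key = mono | {idx}
                nxt[key] = nxt.get(key, 0) + c * t
        poly = nxt
    return poly

def d(poly):
    """Apply d = sum_i d/dx_i to a squarefree multilinear polynomial."""
    out = {}
    for mono, c in poly.items():
        for idx in mono:
            key = mono - {idx}
            out[key] = out.get(key, 0) + c
    return {m: c for m, c in out.items() if c}

alpha = expand([(1, 2, -1), (3, 4, +1)])   # the positive 2-root (x1 - x2)(x3 + x4)
print(d(alpha))   # 2*(x1 - x2): the even coefficient comes from the symmetric factor
```

Here $d\big((x_{1}-x_{2})(x_{3}+x_{4})\big)=2(x_{1}-x_{2})$: the antisymmetric factor contributes nothing and the symmetric factor contributes $2$.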
### 4.4. Categorification
The appearance of ${\mathcal{B}}_{n,k}$-positivity in various contexts in this
paper raises the question of whether the positive integers and rational
numbers that arise have combinatorial interpretations. A related question is
whether $k$-roots can be categorified, and the connection with Kazhdan–Lusztig
bases mentioned in Remark 2.13 is an additional hint that this may be
possible. It seems likely that the reduction rules in Proposition 1.7 would
play an important role in any such categorification.
## Acknowledgements
I am grateful to Nathan Lindzey and Nat Thiem for some helpful conversations,
and to Tianyuan Xu for making many helpful comments and suggestions on an
earlier version of this paper. I also thank the referees for their corrections
and feedback.
## Data availability statement
Data sharing not applicable to this article as no datasets were generated or
analysed during the current study.
## Conflict of interest statement
On behalf of all authors, the corresponding author states that there is no
conflict of interest.
## References
* [1] Peigen Cao and Fang Li. Uniform column sign-coherence and the existence of maximal green sequences. Journal of Algebraic Combinatorics, 50(4):403–417, 2019.
* [2] Tullio Ceccherini-Silberstein, Fabio Scarabotti, and Filippo Tolli. Harmonic analysis on finite groups, volume 108 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2008.
* [3] Persi Diaconis and Mehrdad Shahshahani. Time to reach stationarity in the Bernoulli–Laplace diffusion model. SIAM Journal on Mathematical Analysis, 18(1):208–218, 1987.
* [4] C.K. Fan. Structure of a Hecke algebra quotient. Journal of the American Mathematical Society, 10(1):139–167, 1997.
* [5] Sergey Fomin and Andrei Zelevinsky. Cluster algebras IV: coefficients. Compositio Mathematica, 143(1):112–164, 2007.
* [6] William Fulton and Joe Harris. Representation theory: a first course, volume 129 of Graduate Texts in Mathematics. Springer Science & Business Media, 2013.
* [7] R.M. Green. Generalized Temperley–Lieb algebras and decorated tangles. Journal of Knot Theory and its Ramifications, 7(02):155–171, 1998.
* [8] R.M. Green. Combinatorics of minuscule representations, volume 199 of Cambridge Tracts in Mathematics. Cambridge University Press, 2013.
* [9] R.M. Green and Tianyuan Xu. 2-roots for simply laced Weyl groups. To appear in Transformation Groups; arXiv:2204.09765.
* [10] James E. Humphreys. Reflection groups and Coxeter groups, volume 29 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1990.
* [11] David Kazhdan and George Lusztig. Representations of Coxeter groups and Hecke algebras. Inventiones mathematicae, 53(2):165–184, 1979.
* [12] Neil J. A. Sloane and The OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences, 2022.
# Optimal Beam Training for mmWave Massive MIMO using 802.11ay
Lyutianyang Zhang and Sumit Roy
(April 2019)
###### Abstract
Beam training in IEEE 802.11ad is a technique that accelerates the selection
of the analog weighting vector (AWV) under the constraint of an existing AWV
codebook. However, 5G millimeter-wave (mmWave) multiple-input multiple-output
(MIMO) systems challenge this technique because of the much larger number of
antennas. As a result, the existing 802.11ad codebook is unlikely to contain
even a near-optimal AWV, and the data rate degrades severely. To address this,
this paper proposes a new beam training protocol, combined with
state-of-the-art compressed-sensing channel estimation, to find the AWV that
maximizes the data rate. Simulations show that the data rate achieved by the
802.11ad AWV is worse than that of the proposed protocol.
## I Introduction
The growing demand for bandwidth is driving the adoption of millimeter-wave
(mmWave) systems in cellular last-mile access technology. As part of the 5G
revolution, mmWave is expected to support gigabit rates at millisecond
latency. Such high performance will be enabled by beamforming base stations
equipped with a large number of mmWave antennas.
Beamforming is crucial for overcoming the significant increase in over-the-air
path loss at mmWave bands. As is well known, increased antenna gain is
required at both ends of a link, implying that both Tx and Rx must implement
beamforming to meet the link budget for the desired data rates or modulation
and coding scheme (MCS) employed. In turn, this introduces a different problem
- that of beam alignment. When the antenna radiation pattern is a narrow spot
beam, the increased antenna gain is confined to a small angular region. Thus,
the benefits of beamforming - net beamforming gain on both the Tx and Rx sides
- are available only if the two beams are pointed in an optimal manner. Beam
training refers to the process of finding the best beam alignment from an
initial state of little or no information about the channel. In contrast, beam
tracking refers to the process of maintaining beam alignment while the devices
are moving during communication.
The need for beam training and beam tracking comes at a price - significant
resource overhead must be added to real cellular systems to achieve the needed
beamforming gains. Because last-mile mmWave deployments will be implemented in
small cells, user mobility implies frequent entry and departure, and
consequently a need for frequent initial beam training. In turn, this will
dominate the overhead, complexity, and latency constraints that must be
budgeted for. For massive MIMO mmWave systems, beam training requires accurate
channel state information (CSI), the acquisition of which has complexity that
grows polynomially in key system parameters.
In summary, the great challenge in mmWave systems design is to find the sweet
spot between achieving beam pointing accuracy that necessarily involves
requisite CSI complexity and feedback latency, and what is achievable within
the 11ad/11ay beam-training architecture that must necessarily manage
complexity and latency budgets. Our approach is to start from the latter as a
point of departure and seek an efficient beam training approach that achieves
enhanced beam training accuracy with limited complexity/latency overhead; this
is in contrast to most of the academic mmWave MIMO signal processing
literature that provides optimal CSI solutions but which are largely
infeasible for integration within the 11ad/11ay MAC.
### I-A Literature Review
mmWave communication is a promising technology for next-generation wireless
communication owing to its abundant frequency spectrum, which promises a much
higher capacity than existing wireless local area networks (WLANs) and current
cellular mobile communication. Indeed, mmWave communication has received
increasing attention as an important candidate technology in both
next-generation WLANs [1, 2, 3] and mobile cellular communication [4, 5, 6]. A
fundamental challenge for mmWave communication is the extremely high path
loss, owing to the very high carrier frequency, on the order of 30-60 GHz. To
bridge this significant link-budget gap, joint Tx/Rx beamforming is usually
required to provide large antenna array gains, which typically requires large
Tx/Rx antenna arrays (e.g. an array size of 36 [4]). Thanks to the small
wavelength at mmWave frequencies, large antenna arrays can be packed into a
small area. Hence, in this paper we address the beamforming problem in the
initial phase of 802.11ay for a hybrid MIMO architecture with sub-connected
antenna arrays, in the absence of any CSI.
In [7], a hierarchical codebook design is introduced to decrease the total
number of steps needed to reach the optimal beam in the codebook for a
single-RF-chain MIMO architecture. However, this codebook design requires
multiple feedback rounds to close the link. On the contrary, we propose a
one-round beam training method for the hybrid sub-connected MIMO architecture
with multiple RF chains at both the transmitter and the receiver. In [8], the
optimal precoding matrix is expressed in closed form for the hybrid
sub-connected architecture with multiple RF chains. However, full CSI is
assumed, and the optimal precoding matrix is the one that maximizes the link
capacity between the transmitted and received RF signals. Although [9]
considers the precoder and combiner jointly when optimizing the precoding
matrix, full CSI is again assumed, and the system architecture is
fully-connected rather than sub-connected. We propose, on the other hand, a
method that obtains an optimal beamforming solution without CSI for the
state-of-the-art sub-connected MIMO architecture. The work in [10] studied
hybrid beamforming in sub-connected massive MIMO systems and proved that the
sub-connected architecture can achieve exactly the same performance as the
fully-connected architecture; however, full CSI is still assumed.
In this paper, we consider the problem of optimizing initial beam training for
a hybrid sub-connected MIMO RF transceiver. Beginning with no initial CSI at
the receiver, we explore what is feasible with one round of algorithmic
optimization and feedback to the transmitter. Note that we use beam pattern
vector (BPV) and analog weighting vector (AWV) interchangeably throughout this
paper.
_Notation_ : Lower-case and upper-case boldface letters denote vectors and
matrices respectively; $(\bullet)^{T}$, $(\bullet)^{H}$, $(\bullet)^{-1}$, and
$|\bullet|$ denote the transpose, conjugate transpose, inversion, and
determinant of a matrix, respectively; $||\bullet||_{1}$ and $||\bullet||_{2}$
denote the $l_{1}$ and $l_{2}$ norm of a vector, respectively.
$\mathbf{I}_{N}$ denotes a $N\times N$ identity matrix.
## II System Model
### II-A Formulation for beam training in 802.11ay
IEEE 802.11ay aims to perform beamforming with little or no CSI. We propose
time-efficient beamforming algorithms that find the optimal AWV in one shot,
i.e., with only one beamforming feedback from the receiver to the transmitter
after all AWVs have been tried. We now introduce the system model for the
massive MIMO hybrid architecture with sub-connected antenna arrays. In the
sub-connected architecture, $N$ data streams in the baseband are precoded by
the digital precoder $\mathbf{D}$. When complexity is a concern, $\mathbf{D}$
is designed to be a diagonal matrix $\text{diag}[d_{1},d_{2},\dots,d_{N}]$,
where $d_{n}\in\mathbb{R}$ for $n=1,\dots,N$; the role of $\mathbf{D}$ is then
essentially to perform power allocation. After passing through the
corresponding RF chain, the digital-domain signal from each RF chain is
delivered to only $M$ phase shifters (PSs) to perform the analog precoding,
which can be denoted by the analog weighting vector
$\mathbf{a}_{n}\in\mathcal{C}^{M\times 1}$, whose elements have the same
amplitude $\frac{1}{\sqrt{M}}$ but different phases. After the analog
precoding, each data stream is finally transmitted by a sub-antenna array of
only $M$ antennas associated with the corresponding RF chain. The receiver
also has an analog combiner and a digital combiner, and we denote
$\mathbf{W}=\tilde{\mathbf{D}}\tilde{\mathbf{A}}$, where
$\tilde{\mathbf{D}}\in\mathcal{C}^{K\times K}$ and
$\tilde{\mathbf{A}}\in\mathcal{C}^{K\times N_{r}}$ denote the digital and
analog combiners, respectively. The received signal vector
$\mathbf{y}=[y_{1},\dots,y_{K}]^{T}$ at the user in a narrowband system can
then be written as
$\displaystyle\mathbf{y}=\sqrt{\rho}\mathbf{W}\mathbf{H}\mathbf{A}\mathbf{D}\mathbf{s}+\mathbf{W}\mathbf{n},$
(1)
where $\rho$ is the average received power.
$\mathbf{H}\in\mathcal{C}^{N_{r}\times N_{t}}$ denotes the channel matrix, with
$N_{t}=M\times N$. $\mathbf{A}$ is the $NM\times N$ analog precoding matrix
comprising the $N$ analog weighting vectors $\\{{\mathbf{a}}_{n}\\}^{N}_{n=1}$ as
$\mathbf{A}=\begin{bmatrix}\mathbf{a}_{1}&0&\dots&0\\\ 0&\mathbf{a}_{2}&&0\\\
\vdots&&\ddots&\dots\\\ 0&0&\dots&\mathbf{a}_{N}\end{bmatrix}_{NM\times N},$
(2)
where $\mathbf{s}=[s_{1},s_{2},\dots,s_{N}]^{T}$ represents the transmitted
signal vector in the baseband. In this paper, we assume the widely used
Gaussian signaling with normalized signal power
$\mathbf{E}[\mathbf{s}\mathbf{s}^{\dagger}]=\frac{1}{N}\mathbf{I}_{N}$. $\mathbf{P}=\mathbf{A}\mathbf{D}$
denotes the hybrid precoding matrix.
$\mathbf{n}=[n_{1},n_{2},\dots,n_{N_{r}}]^{T}$ is an AWGN vector whose
entries are i.i.d. $\mathcal{CN}(0,\sigma^{2})$. Finally, in the
transmitter beam training phase, the receiver adopts an omni-directional
receiving pattern, which corresponds to the following analog combining matrix
$\displaystyle\bar{\mathbf{A}}=\begin{bmatrix}\bar{\mathbf{a}}_{1}&0&\dots&0\\\
0&\bar{\mathbf{a}}_{2}&&0\\\ \vdots&&\ddots&\dots\\\
0&0&\dots&\bar{\mathbf{a}}_{K}\end{bmatrix}_{K\times N_{r}},$ (3)
where the number of antennas in each sub-antenna array is $\frac{N_{r}}{K}$,
and $\bar{\mathbf{a}}_{k}\in\mathcal{C}^{1\times\frac{N_{r}}{K}}$. The digital
combiner matrix is
$\bar{\mathbf{D}}=\text{diag}[\bar{d}_{1},\bar{d}_{2},\dots,\bar{d}_{K}]$.
Assume now that we are in the transmitter beam training phase, in which the
receiver uses a fixed omni-directional pattern to receive $N$ trials of
different AWV patterns of $\mathbf{A}$ with the identity digital precoding matrix
$\mathbf{D}=\mathbf{I}_{N}$.
We now give our orthogonal beam training codebook design and beam training
solution for the transmitter.
Result: $\mathbf{A}^{est}$
Design an orthogonal codebook for $\mathbf{A}$ in which
$\mathbf{a}_{i}^{\dagger}\mathbf{a}_{j}=1$ if $i=j$ and $0$ otherwise. Then perform $N$ trials of transmitting
AWVs by cyclic rotation of the block-diagonal matrix $\mathbf{A}$. For example,
after one cyclic rotation (we define the rotation number as $k=1$, with $k=0$ when
there is no rotation), the analog precoding matrix is
$\begin{bmatrix}\mathbf{a}_{N}&0&\dots&0\\\ 0&\mathbf{a}_{1}&&0\\\
\vdots&&\ddots&\dots\\\ 0&0&\dots&\mathbf{a}_{N-1}\end{bmatrix}_{NM\times N},$
(4) and so on. We denote
$\mathbf{Y}=[\mathbf{y}_{1},\mathbf{y}_{2},\dots,\mathbf{y}_{N}]_{K\times N}$,
in which $\mathbf{y}_{n}$ is the received signal at the $n^{th}$ trial.
$i=0$;
$\mathbf{a}^{est}_{k}=\text{zeros}(M,1)$;
while _$i <N$_ do
if _$k-i+1 >0$_ then
$\mathbf{a}^{est}_{k}=\mathbf{a}^{est}_{k}+\frac{\mathbf{y}_{i+1}^{\dagger}\mathbf{y}_{i+1}}{\sum_{n=1}^{N}||\mathbf{y}_{n}||^{2}_{2}}\mathbf{a}_{k-i+1}$;
else
$\mathbf{a}^{est}_{k}=\mathbf{a}^{est}_{k}+\frac{\mathbf{y}_{i+1}^{\dagger}\mathbf{y}_{i+1}}{\sum_{n=1}^{N}||\mathbf{y}_{n}||^{2}_{2}}\mathbf{a}_{k-i+1+N}$;
$i=i+1$;
Construct $\mathbf{A}^{est}$ as in Eq. (2).
Algorithm 1 Orthogonal beam training algorithm for the sub-connected MIMO
architecture in IEEE 802.11ay
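As an illustration, the estimation step of Algorithm 1 can be sketched in a few lines of Python. This is a hypothetical implementation, not the authors' code: it assumes the received signals of the $N$ cyclic-rotation trials are stacked as columns of `Y`, and it combines the orthonormal codewords with the energy weights of the while-loop above (0-based indexing replaces the $k-i+1$ bookkeeping).

```python
import numpy as np

def estimate_awv(Y, codebook, k):
    """Energy-weighted AWV estimate for sub-array k (0-based), as in the
    while-loop of Algorithm 1.

    Y        : (K, N) array, column i is the received signal of trial i
    codebook : (M, N) array of orthonormal codewords a_0, ..., a_{N-1}
    """
    M, N = codebook.shape
    energies = np.sum(np.abs(Y) ** 2, axis=0)      # ||y_i||_2^2 per trial
    weights = energies / energies.sum()            # nonnegative, sum to 1
    a_est = np.zeros(M, dtype=complex)
    for i in range(N):
        # after i cyclic rotations, sub-array k transmitted codeword (k - i) mod N
        a_est += weights[i] * codebook[:, (k - i) % N]
    return a_est
```

Because the codewords have unit norm and the weights sum to one, the estimate is a convex combination of codewords with $\|\mathbf{a}^{est}_{k}\|_{2}\leq 1$.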
## III Simulation
### III-A Channel model
It is known that the mmWave channel $\mathbf{H}$ is unlikely to exhibit the
rich-scattering behavior assumed at low frequencies, due to the limited number of
scatterers in the mmWave propagation environment [11]. In this paper, we adopt
the geometric Saleh-Valenzuela channel model to capture the low-rank and
spatially correlated characteristics of mmWave communications [9] as
$\mathbf{H}=\gamma\sum_{l=1}^{L}\alpha_{l}\Lambda_{r}(\phi^{r}_{l},\theta_{l}^{r})\Lambda_{t}(\phi^{t}_{l},\theta_{l}^{t})\mathbf{f}_{r}(\phi_{l}^{r},\theta_{l}^{r})\mathbf{f}_{t}^{H}(\phi_{l}^{t},\theta_{l}^{t}),$
(5)
where $\gamma=\sqrt{\frac{N_{r}N_{t}}{L}}$ is a normalization factor and $L$ is
the number of effective channel paths corresponding to the limited number of
scatterers; we usually have $L\leq N$ for mmWave communication systems.
$\alpha_{l}\in\mathcal{C}$ is the gain of the $l^{th}$ path.
$\phi_{l}^{t}$ ($\theta_{l}^{t}$) and $\phi_{l}^{r}$ ($\theta_{l}^{r}$) are the azimuth
(elevation) angles of departure and arrival (AoDs/AoAs), respectively.
$\Lambda_{t}(\phi_{l}^{t},\theta_{l}^{t})$ and
$\Lambda_{r}(\phi_{l}^{r},\theta_{l}^{r})$ denote the transmit and receive
antenna array gains at a specific AoD and AoA, respectively; for simplicity but
without loss of generality, they can be set to one within the range
of AoDs/AoAs [12]. Finally, $\mathbf{f}_{t}(\phi_{l}^{t},\theta_{l}^{t})$ and
$\mathbf{f}_{r}(\phi_{l}^{r},\theta_{l}^{r})$ are the antenna array response
vectors, which depend on the antenna array structures at the BS and the user,
respectively. We use a uniform linear array (ULA) with $U$ elements in this
paper without loss of generality; the array response vector (ARV) can be
written as
$\mathbf{f}_{ULA}(\phi)=\frac{1}{\sqrt{U}}[1,e^{j\frac{2\pi}{\lambda}d\sin(\phi)},\dots,e^{j(U-1)\frac{2\pi}{\lambda}d\sin(\phi)}],$
(6)
where $\lambda$ denotes the wavelength of the signal, and $d$ is the antenna
spacing.
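A minimal numerical sketch of Eqs. (5)-(6), assuming unit antenna gains $\Lambda_{t}=\Lambda_{r}=1$, complex Gaussian path gains, ULAs at both ends, and the angle distributions of Section III-B; the function names and parameter choices are illustrative, not from the paper:

```python
import numpy as np

def ula_response(phi, U, d_over_lambda=0.5):
    """Array response vector of Eq. (6) for a U-element ULA with
    spacing d = d_over_lambda * lambda."""
    u = np.arange(U)
    return np.exp(1j * 2 * np.pi * d_over_lambda * u * np.sin(phi)) / np.sqrt(U)

def sv_channel(Nt, Nr, L, rng):
    """Geometric Saleh-Valenzuela channel of Eq. (5) with unit antenna
    gains and L planar-wave paths."""
    H = np.zeros((Nr, Nt), dtype=complex)
    for _ in range(L):
        alpha_l = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        phi_t = rng.uniform(-np.pi / 6, np.pi / 6)      # AoD
        phi_r = rng.uniform(-np.pi, np.pi)              # AoA
        H += alpha_l * np.outer(ula_response(phi_r, Nr),
                                ula_response(phi_t, Nt).conj())
    return np.sqrt(Nt * Nr / L) * H                     # normalization gamma
```

Each term of the sum is a rank-one outer product, so the generated channel has rank at most $L$, reflecting the sparse-scattering assumption.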
### III-B Performance analysis of COM and 802.11ay
First, we construct a codebook according to the orthogonality requirement, i.e.,
$\mathbf{A}^{\dagger}\mathbf{A}=I$, which is equivalent to
$\mathbf{a}_{i}^{\dagger}\mathbf{a}_{j}=1$ if $i=j$ and $0$
otherwise. The channel matrix is assumed to be unknown throughout the entire beamforming
phase, with $E[||H||^{2}_{F}]=N_{t}N_{r}$. The signal power is
$E[\mathbf{s}\mathbf{s}^{\dagger}]=\frac{1}{N}\mathbf{I}_{N}$, and the entries of the AWGN vector
are i.i.d. $\mathcal{CN}(0,\sigma^{2})$. We set $f=28\,$GHz and
$\lambda=\frac{c}{f}$, where $c=3\times 10^{8}\,$m/s, and $d=\lambda/2$. AoDs are assumed
to follow the uniform distribution within $[-\pi/6,\pi/6]$, and AoAs follow the uniform
distribution within $[-\pi,\pi]$; $L=3$. In the beginning phase of
beamforming, the receiver has no CSI, so we assign equal
power to each RF chain, i.e., $\tilde{\mathbf{D}}=I$ and $\tilde{\mathbf{A}}$
is a constant matrix. We use $N=8$, $M=8$, $N_{t}=64$, $N_{r}=16$, and $K=4$. The SNR is
$\frac{\rho}{\sigma^{2}}$.
The performance of COM is measured in terms of the effective channel capacity
between the transmitter’s data stream and the receiver’s data stream, which
can be expressed as follows [13],
$C=\log\left(\left|I_{K}+\frac{\rho}{N}\mathbf{R}_{n}^{-1}\mathbf{W}\mathbf{H}\mathbf{P}\mathbf{P}^{H}\mathbf{H}^{H}\mathbf{W}^{H}\right|\right),$
(7)
where $\mathbf{R}_{n}=\sigma_{n}^{2}\mathbf{W}\mathbf{W}^{H}$.
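The capacity metric of Eq. (7) can be evaluated numerically as in the sketch below (illustrative code, not the authors'). It uses `slogdet` for numerical stability and $\log_{2}$ so the result is in bit/s/Hz; the function name and argument layout are assumptions.

```python
import numpy as np

def effective_capacity(H, W, P, rho, sigma2, N):
    """Effective channel capacity of Eq. (7), with noise covariance
    R_n = sigma^2 * W W^H after the receive combiner W."""
    K = W.shape[0]
    Rn = sigma2 * (W @ W.conj().T)
    G = W @ H @ P                        # effective channel seen by the streams
    M = np.eye(K) + (rho / N) * np.linalg.solve(Rn, G @ G.conj().T)
    _, logdet = np.linalg.slogdet(M)     # log|det M|, stable for large K
    return logdet / np.log(2)            # convert to bit/s/Hz
```

For a zero channel the capacity is zero, and it grows with the SNR $\rho/\sigma^{2}$, matching the trend reported in the comparison above.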
Figure 3: IEEE 802.11 ay beam training protocol for receiver [14]
IEEE 802.11ay beam training and its training (TRN) field format are shown in
Fig. 3; the standard introduces a beam refinement protocol (BRP) that can improve the
antenna configuration for transmission. The transmitter uses different AWVs
in the transmission of the TRN field while the receiver uses the same AWV in
its reception. Three parameters define the format and length of a TRN-Unit
used for transmit beamforming training (also called EDMG BRP-TX packets):
$T_{P}$, $T_{M}$, and $T_{N}$. In a
TRN-Unit, the first $T_{P}$ TRN subfields are transmitted with the same AWV as
the data field; the receiver may therefore use these TRN subfields to maintain
synchronization, which we do not consider in this paper. In the
transmission of the remaining $T_{M}$ TRN subfields of a TRN-Unit, the
transmitter changes the AWV at the beginning of each TRN subfield. To
improve the robustness of the beamforming training process, more than one
consecutive TRN subfield among the last $T_{M}$ TRN subfields of a TRN-Unit
may be transmitted with the same AWV; the number of consecutive TRN subfields
transmitted with the same AWV is $T_{N}$. In this paper, we do not consider such
repeated transmission of the same AWV. In standard IEEE 802.11ay,
the received signals of all transmitter AWVs are compared to find the one with the
highest energy, and the corresponding transmitter AWV
is fed back to the transmitter. In this paper, however, Algorithm 1 is applied after
the reception of all signals, and its performance is shown in Fig. 4. The
comparison between COM and standard 802.11ay shows a gain of around 4
bit/s/Hz, which increases with the SNR. Moreover, the time
complexity of this algorithm is only $O(n)$, and it does not require any complex
matrix decomposition such as the SVD used in most state-of-the-art precoding
methods.
Figure 4: EDMG BRP-TX packets
## IV Conclusion
The traditional 802.11ad and 802.11ay beam training methods simply choose the
received signal with the highest energy and find its corresponding
transmit AWV. State-of-the-art beamforming methods for mmWave
MIMO systems, on the other hand, are usually derived under the assumption of full CSI, which brings
high latency and is impractical, especially in the beginning phase of beamforming
when the CSI is completely unknown. This paper instead proposes a physical-layer beamforming
method that aligns with the MAC-layer protocol IEEE 802.11ay.
The energy of the received signal, used as a coarse channel estimate, yields
the optimal transmitter AWV with an orthogonal codebook design
and a time complexity of $\mathcal{O}(n^{2})$. Simulations show that this method
achieves a much higher capacity gain than the naive 802.11ad beam
training method.
## References
* [1] E. Perahia, C. Cordeiro, M. Park, and L. L. Yang, “IEEE 802.11ad: Defining the next generation multi-Gbps Wi-Fi,” in _2010 7th IEEE Consumer Communications and Networking Conference_. IEEE, 2010, pp. 1–5.
* [2] Z. Xiao, “Suboptimal spatial diversity scheme for 60 GHz millimeter-wave WLAN,” _IEEE Communications Letters_ , vol. 17, no. 9, pp. 1790–1793, 2013.
* [3] P. Xia, H. Niu, J. Oh, and C. Ngo, “Practical antenna training for millimeter wave MIMO communication,” in _2008 IEEE 68th Vehicular Technology Conference_. IEEE, 2008, pp. 1–5.
* [4] P. Xia, S.-K. Yong, J. Oh, and C. Ngo, “Multi-stage iterative antenna training for millimeter wave communications,” in _IEEE GLOBECOM 2008 - 2008 IEEE Global Telecommunications Conference_. IEEE, 2008, pp. 1–6.
* [5] F. Khan and Z. Pi, “mmWave mobile broadband (MMB): Unleashing the 3–300 GHz spectrum,” in _34th IEEE Sarnoff Symposium_. IEEE, 2011, pp. 1–6.
* [6] L. Zhang, S. Roy, and L. Cao, “Architecture-algorithmic trade-offs in multi-path channel estimation for mmWave systems,” _arXiv preprint arXiv:2209.02944_ , 2022.
* [7] Z. Xiao, T. He, P. Xia, and X.-G. Xia, “Hierarchical codebook design for beamforming training in millimeter-wave communication,” _IEEE Transactions on Wireless Communications_ , vol. 15, no. 5, pp. 3380–3392, 2016.
* [8] X. Gao, L. Dai, S. Han, I. Chih-Lin, and R. W. Heath, “Energy-efficient hybrid analog and digital precoding for mmWave MIMO systems with large antenna arrays,” _IEEE Journal on Selected Areas in Communications_ , vol. 34, no. 4, pp. 998–1009, 2016.
* [9] O. El Ayach, S. Rajagopal, S. Abu-Surra, Z. Pi, and R. W. Heath, “Spatially sparse precoding in millimeter wave MIMO systems,” _IEEE Transactions on Wireless Communications_ , vol. 13, no. 3, pp. 1499–1513, 2014.
* [10] A. F. Molisch, V. V. Ratnam, S. Han, Z. Li, S. L. H. Nguyen, L. Li, and K. Haneda, “Hybrid beamforming for massive MIMO: A survey,” _IEEE Communications Magazine_ , vol. 55, no. 9, pp. 134–141, 2017.
* [11] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” _IEEE Communications Magazine_ , vol. 49, no. 6, pp. 101–107, 2011.
* [12] A. Alkhateeb, O. El Ayach, G. Leus, and R. W. Heath, “Hybrid precoding for millimeter wave cellular systems with partial channel knowledge,” in _2013 Information Theory and Applications Workshop (ITA)_. IEEE, 2013, pp. 1–5.
* [13] A. Goldsmith, S. A. Jafar, N. Jindal, and S. Vishwanath, “Capacity limits of MIMO channels,” _IEEE Journal on Selected Areas in Communications_ , vol. 21, no. 5, pp. 684–702, 2003.
* [14] Y. Ghasempour, C. R. da Silva, C. Cordeiro, and E. W. Knightly, “IEEE 802.11ay: Next-generation 60 GHz communication for 100 Gb/s Wi-Fi,” _IEEE Communications Magazine_ , vol. 55, no. 12, pp. 186–192, 2017.
# Temperature on rods with Robin boundary conditions
Dimitrios Betsakos and Alexander Solynin
###### Abstract.
We consider solutions $u_{f}$ to the one-dimensional Robin problem with the
heat source $f\in L^{1}[-\pi,\pi]$ and Robin parameter $\alpha>0$. For given
$m$, $M$, and $s$, $0\leq m<s<M$, we identify the heat sources $f_{0}$, such
that $u_{f_{0}}$ maximizes the temperature gap
$\max_{[-\pi,\pi]}u_{f}-\min_{[-\pi,\pi]}u_{f}$ over all heat sources $f$ such
that $m\leq f\leq M$ and $\|f\|_{L^{1}}=2\pi s$. In particular, this answers a
question raised by J. J. Langford and P. McDonald in [5]. We also identify
heat sources, which maximize/minimize $u_{f}$ at a given point
$x_{0}\in[-\pi,\pi]$ over the same class of heat sources as above and discuss
a few related questions.
###### Key words and phrases:
Heating problem, Poisson equation, Robin boundary conditions, comparison
theorem
###### 2010 Mathematics Subject Classification:
34B08, 34C10
## 1\. Heating with Robin boundary conditions.
Recently, J. J. Langford and P. McDonald [5] studied the one-dimensional
Poisson equation with Robin boundary conditions. They considered the following
physical setup: Suppose that a metal rod of length $2\pi$ is located along the
interval $[-\pi,\pi]$. Suppose that heat is generated uniformly on half of the
rod; call this set $E$. On the remaining half, heat is
neither generated nor absorbed. The ends of the rod interact with the cooler
environment so that there is a heat flux from the rod which is proportional to
the temperature at each end (Newton’s law of cooling).
Let $u$ be the steady-state temperature function; it satisfies the Poisson
equation
$-u^{\prime\prime}(x)=\chi_{E}(x),\;\;\;x\in[-\pi,\pi]$ (1.1)
with Robin boundary conditions
$-u^{\prime}(-\pi)+\alpha u(-\pi)=u^{\prime}(\pi)+\alpha u(\pi)=0,$ (1.2)
where $\alpha>0$ and $\chi_{E}$ stands for the characteristic function of the
set $E$.
Langford and McDonald [5] studied the problem of where one should locate the
heat sources to maximize the hottest steady-state temperature. In other words,
for which set $E$ is $\max_{[-\pi,\pi]}u$ maximal, where $u$ solves the
boundary value problem (1.1)-(1.2)? They showed that this quantity
is maximal when $E$ is an interval located symmetrically in the middle of the
rod; namely $E=[-\pi/2,\pi/2]$. The authors of [5] actually studied a more
general problem and obtained much stronger comparison results. They observed,
however, that the symmetric interval is not extremal for another problem. They
considered the temperature gap over $[-\pi,\pi]$, i.e. the quantity
${\rm osc}(u):=\max_{[-\pi,\pi]}u-\min_{[-\pi,\pi]}u$
and showed that ${\rm osc}(u)$ is not maximized for $E=[-\pi/2,\pi/2]$. So
they raised the following question:
###### Problem 1.
Where should we place the heat sources to maximize the temperature gap?
They suggested that the extremal set $E$ is again an interval which, however,
is not symmetrically located on $[-\pi,\pi]$. In the present note, we will
study this conjecture. As in [5], we will consider a more general setting.
The Robin problem for $f\in L^{1}[-\pi,\pi]$ and $\alpha>0$ is to find $u\in
C^{1}[-\pi,\pi]$ such that
1. 1.
$u^{\prime}$ is absolutely continuous on $[-\pi,\pi]$,
2. 2.
$-u^{\prime\prime}=f$ a.e. on $(-\pi,\pi)$,
3. 3.
$-u^{\prime}(-\pi)+\alpha u(-\pi)=u^{\prime}(\pi)+\alpha u(\pi)=0$.
It was shown in Proposition 2.1 in [5] that the Robin problem has a unique
solution given by the equation
$u_{f}(x)=\int_{-\pi}^{\pi}G(x,y)f(y)\,dy.$ (1.3)
Here, $G(x,y)$ stands for the Green’s function for Robin problem, which is
$G(x,y)=-\frac{1}{2}c_{\alpha}xy-\frac{1}{2}|x-y|+\frac{1}{2c_{\alpha}},\quad
x,y\in[-\pi,\pi],$ (1.4)
where
$c_{\alpha}=\frac{\alpha}{1+\alpha\pi}.$ (1.5)
To recall a few basic facts about solutions of the Robin problem, we note
first that, as simple calculus shows, $G(x,y)>0$ for all $x,y\in[-\pi,\pi]$.
Another simple but important conclusion from (1.3) is that the solution to the
Robin problem depends linearly on the heat source; i.e. if $m_{1}$,
$m_{2}$ are constants and $f_{1},f_{2}\in L^{1}$ (here and below $L^{1}$
stands for $L^{1}[-\pi,\pi]$), then
$u_{f}=m_{1}u_{f_{1}}+m_{2}u_{f_{2}},\quad{\mbox{where
$f=m_{1}f_{1}+m_{2}f_{2}$.}}$ (1.6)
Furthermore, if $f\geq 0$, then it is immediate from property 2 above that
$u_{f}$ is concave on $[-\pi,\pi]$ and therefore it takes its minimal value at
one of the end points $x=\pm\pi$ and it takes its maximal value either at a
single point or on some closed subinterval of $[-\pi,\pi]$. If $u_{f}$ takes
its minimal value at $-\pi$, it follows from property 3 that $u_{f}(x)\geq
u_{f}(-\pi)=\alpha^{-1}u^{\prime}_{f}(-\pi)>0$ and therefore, in this case,
$u_{f}(x)>0$ for $x\in[-\pi,\pi]$ unless $f\equiv 0$. The same conclusion
follows if $u_{f}$ takes its minimal value at $\pi$.
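These facts are easy to check numerically. The sketch below (illustrative code, not from the paper) evaluates $u_{f}$ by quadrature of the representation (1.3) with the Green's function (1.4); for $f\equiv 1$ the result should reproduce $\eta_{\alpha}(x)=-x^{2}/2+\pi/\alpha+\pi^{2}/2$ of (2.5).

```python
import numpy as np

def robin_solution(f, alpha, n=2001):
    """Quadrature sketch of (1.3): u_f(x) = int G(x,y) f(y) dy on [-pi, pi],
    with the Robin Green's function (1.4) and c_alpha from (1.5)."""
    c = alpha / (1 + alpha * np.pi)
    x = np.linspace(-np.pi, np.pi, n)
    h = x[1] - x[0]
    w = np.full(n, h)
    w[0] *= 0.5
    w[-1] *= 0.5                         # trapezoid weights
    G = (-0.5 * c * np.outer(x, x)
         - 0.5 * np.abs(x[:, None] - x[None, :]) + 0.5 / c)
    return x, G @ (f(x) * w)

# Heating the whole rod with unit density reproduces eta_alpha of (2.5):
alpha = 0.5
x, u = robin_solution(lambda y: np.ones_like(y), alpha)
eta = -0.5 * x**2 + np.pi / alpha + np.pi**2 / 2
```

Since the kernel is piecewise linear in $y$ between grid nodes, the trapezoid rule here is essentially exact, and one can also observe the positivity and concavity of $u_{f}$ discussed above.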
To put the question in Problem 1 in a more general setting, we consider, for
given $m$, $M$, and $s$ such that $0\leq m<s<M$, the class
$\mathcal{F}=\mathcal{F}(m,M,s)$ of heat sources $f\in L^{1}$ such that $m\leq
f(x)\leq M$ for $x\in[-\pi,\pi]$ and $\|f\|_{L^{1}}=2\pi s$. The parameters
$m$, $M$, and $s$ can be interpreted as the ground heat, the top heat, and the
average heat over the rod.
Our main goal in this note is to prove the following theorem, which solves the
maximal temperature gap problem for the class $\mathcal{F}(m,M,s)$ and
therefore, as a special case, it provides a solution to Problem 1. In this
theorem and below, $I(a,l)\subset\mathbb{R}$ denotes the closed interval of
length $2l>0$ centered at $a$.
###### Theorem 1.
Let $u_{f}$ solve the Robin problem for $f\in\mathcal{F}(m,M,s)$ and
$\alpha>0$. Then
${\rm osc}(u_{f})\leq(M-m)\Theta_{\alpha}(l,\delta),$ (1.7)
where $l=\pi(s-m)/(M-m)$, $\delta=m/(M-m)$ and $\Theta_{\alpha}(l,\delta)$ is
defined by equations (4.11), (4.12) and (4.13) in Section 4.
Equality holds in (1.7) if and only if $f(x)=f_{0}(x)$ or $f(x)=f_{0}(-x)$
a.e. on $[-\pi,\pi]$, where $f_{0}=m+(M-m)\,\chi_{I(a_{g},l)}$ with
$l=\pi(s-m)/(M-m)$ and $a_{g}$ defined by equation (4.10) in Section 4.
For $m=1$, $M=3$, and $s=\frac{7}{5}$, the graph of the maximal temperature
gap $(M-m)\Theta_{\alpha}(l,\delta)$ considered as a function of $\alpha$ is
shown in Figure 1(a). Figure 1(b) displays the graph of the extremal function
$u_{f_{0}}(x)$, where $f_{0}=m+(M-m)\,\chi_{I(a_{g},l)}$ with $m=1$, $M=3$,
$s=\frac{7}{5}$ and $\alpha=1/2$.
Fig. 1. (a) The maximal temperature gap $(M-m)\Theta_{\alpha}(l,\delta)$ for
$m=1$, $M=3$, $s=\frac{7}{5}$ and $0<\alpha<2$; (b) Extremal function
$u_{f_{0}}(x)$ for $m=1$, $M=3$, $s=\frac{7}{5}$ and $\alpha=1/2$.
Returning to the context of Problem 1, let us suppose that $E$ is a measurable
subset of $[-\pi,\pi]$ of length (one-dimensional Lebesgue measure) equal to
$\pi$. We apply Theorem 1 with $f=\chi_{E}$, $m=0$, $M=1$, $\delta=0$, and
$l=\pi/2$. The parameter $\alpha_{0}$ defined by equation (4.8) in Section 4
takes the value $\alpha_{0}=\frac{2}{\sqrt{3}\pi}$. Let
$E^{*}=I(a_{g},\pi/2)=[a_{g}-\pi/2,a_{g}+\pi/2]\;\;\;\;\hbox{and}\;\;\;\;f^{*}=\chi_{E^{*}},$
where (see formula (4.10))
$a_{g}=\begin{cases}\pi/2,&{\mbox{if $0<\alpha\leq\frac{2}{\sqrt{3}\pi}$,}}\\\
\frac{\pi/2}{(1+\alpha\pi)(\pi
c_{\alpha}/2-\pi^{2}c_{\alpha}^{2}/4)},&{\mbox{if $\alpha\geq\frac{2}{\sqrt{3}\pi}$.}}\end{cases}$
It follows (cf. [5, Proposition 3.3]) that when
$0<\alpha\leq\frac{2}{\sqrt{3}\pi}$, we have $E^{*}=[0,\pi]$; when
$\alpha\geq\frac{2}{\sqrt{3}\pi}$, the location of the interval $E^{*}$
depends on $\alpha$ and as $\alpha$ increases, $E^{*}$ moves from the right
end to the center. By Theorem 1, we have ${\rm osc}(u_{f})\leq{\rm
osc}(u_{f^{*}})$. The solution to Problem 1 is: The temperature gap is
maximized uniquely when the heat sources are placed on $E^{*}$ or on $-E^{*}$.
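This conclusion can be checked numerically. The sketch below (illustrative code, not from the paper) computes ${\rm osc}(u_{f})$ for $f=\chi_{I(a,\pi/2)}$ by quadrature of (1.3)-(1.4) and confirms, for a value $\alpha<\frac{2}{\sqrt{3}\pi}$, that the one-sided placement $E=[0,\pi]$ gives a larger temperature gap than the symmetric placement $E=[-\pi/2,\pi/2]$.

```python
import numpy as np

def osc_interval_source(a, l, alpha, n=2001):
    """osc(u_f) for f = chi_{I(a,l)}, via quadrature of (1.3)-(1.4)."""
    c = alpha / (1 + alpha * np.pi)
    x = np.linspace(-np.pi, np.pi, n)
    h = x[1] - x[0]
    w = np.full(n, h)
    w[0] *= 0.5
    w[-1] *= 0.5                                   # trapezoid weights
    f = ((x >= a - l) & (x <= a + l)).astype(float)
    G = (-0.5 * c * np.outer(x, x)
         - 0.5 * np.abs(x[:, None] - x[None, :]) + 0.5 / c)
    u = G @ (f * w)
    return u.max() - u.min()

alpha, l = 0.2, np.pi / 2          # alpha below the threshold 2/(sqrt(3)*pi)
gap_right = osc_interval_source(np.pi / 2, l, alpha)   # E = [0, pi]
gap_center = osc_interval_source(0.0, l, alpha)        # E = [-pi/2, pi/2]
```

For the symmetric placement one can check from (2.4) that the gap equals $3\pi^{2}/8$ regardless of $\alpha$, while the one-sided placement does better here.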
Another interesting problem on the distribution of heat on a rod is to
identify heat sources $f\in\mathcal{F}(m,M,s)$, which generate the maximal
possible temperature and the minimal possible temperature at a fixed location
$x_{0}\in[-\pi,\pi]$ of the rod. Notice that if $f^{-}(x)=f(-x)$, then
$u_{f^{-}}(x)=u_{f}(-x)$. Thus, working with this problem, we may assume that
$x_{0}\in[0,\pi]$. Its solution is given by the following theorem.
###### Theorem 2.
Let $u_{f}$ solve the Robin problem for $f\in\mathcal{F}(m,M,s)$ and
$\alpha>0$ and let $x_{0}\in[0,\pi]$ be fixed. Then
$\eta_{\alpha}(x_{0})M-(M-m)\nu_{\alpha}(x_{0},l^{-})\leq
u_{f}(x_{0})\leq\eta_{\alpha}(x_{0})m+(M-m)\nu_{\alpha}(x_{0},l),$ (1.8)
where $l=\pi(s-m)/(M-m)$, $l^{-}=\pi-l$, and the functions $\eta_{\alpha}(x)$
and $\nu_{\alpha}(x,l)$ are defined in Section 2 by equations (2.5) and (2.7),
respectively.
Equality holds in the right inequality in (1.8) if and only if $f=f_{0}^{+}$
a.e. on $[-\pi,\pi]$, where $f_{0}^{+}=m+(M-m)\,\chi_{I(a_{m},l)}$ with $l$
defined above and $a_{m}=x_{0}(1-lc_{\alpha})$ if $x_{0}(1-lc_{\alpha})<\pi-l$
and $a_{m}=\pi-l$ otherwise.
Equality holds in the left inequality in (1.8) if and only if $f=f_{0}^{-}$
a.e. on $[-\pi,\pi]$, where $f_{0}^{-}=M-(M-m)\,\chi_{I(a_{m}^{-},l^{-})}$
with $l^{-}=\pi-l$ and $a_{m}^{-}=x_{0}(1-l^{-}c_{\alpha})$ if
$x_{0}(1-l^{-}c_{\alpha})<\pi-l^{-}$ and $a_{m}^{-}=\pi-l^{-}$ otherwise.
It would also be useful to know how much warmer a fixed spot $x_{0}$ could be
compared to the edges of the rod. The answer to this question is the
following.
###### Theorem 3.
Let $u_{f}$ solve the Robin problem for $f\in\mathcal{F}(m,M,s)$ and
$\alpha>0$ and let $x_{0}\in[-\pi,\pi]$ be fixed. Then
$u_{f}(x_{0})-u_{f}(-\pi)\leq
m(\eta_{\alpha}(x_{0})-\eta_{\alpha}(-\pi))+(M-m)\tau_{\alpha}(x_{0},l),$
(1.9)
where $l=\pi(s-m)/(M-m)$, the function $\eta_{\alpha}(x)$ is defined by
equation (2.5) and the function $\tau_{\alpha}(x,l)$ is defined by equation
(3.5).
Equality holds in (1.9) if and only if $f=f_{e}$ a.e. on $[-\pi,\pi]$, where
$f_{e}=m+(M-m)\,\chi_{I(a_{e},l)}$ with $l$ defined above and
$a_{e}=a_{e}(x,l)$ defined in equation (3.4).
The main results of [5] stated in Theorems 1.3 and 1.5 concern the comparison
principles for heating problems with the Robin and Neumann boundary
conditions, respectively. In these problems, the extremal distribution of heat
is symmetric with respect to the center of the rod. With this symmetry, the
authors of [5] were able to use symmetrization methods due to G. Talenti [9]
and A. Baernstein II [2] to prove their theorems. We want to mention here that
S. Abramovich in her paper [1] published in 1975 already used the
symmetrization method due to G. Pólya and G. Szegö [7] to prove interesting
results on the eigenvalues of the differential system
$y^{\prime\prime}(x)+\lambda p(x)y(x)=0$, $y(\pm 1)=0$ for the function
$p(x)\geq 0$ defined on the string $(-1,1)$. More recently, the symmetrization
method similar to the one used in [5] in combination with the polarization
technique was used in [3] to study several problems on heat distribution in
the cylindrical pipes heated along various regions on the surface area.
We stress here that the extremal distributions of heat in our Theorems 1, 2
and 3 are not symmetric with respect to the center of the rod, in general.
Thus, the classical symmetrization technique cannot be applied in these
problems while certain versions of polarization technique used in [3] still
can be applied.
## 2\. Heating a fixed spot by a single interval.
Suppose that the Robin rod $[-\pi,\pi]$ is heated with unit density along the
interval $I=I(a,l)$ centered at the point $a\in(-\pi,\pi)$ with length $2l$
such that $-\pi\leq a-l<a+l\leq\pi$. Thus, we assume here that $f=\chi_{I}$.
Using the integral representation (1.3) for the Robin temperature with the
Green’s function given by (1.4), we evaluate
$u_{\chi_{I}}=u_{\chi_{I(a,l)}}(x,\alpha)$ as follows:
$\displaystyle u_{\chi_{I}}$ $\displaystyle=$
$\displaystyle\int_{I}[-(1/2)c_{\alpha}xy-(1/2)|x-y|+1/(2c_{\alpha})]\,dy$
(2.4) $\displaystyle{=}$
$\displaystyle\left\\{\begin{array}[]{lr}l[(1-ac_{\alpha})x+(c_{\alpha}^{-1}-a)],&-\pi\leq
x\leq a-l,\\\
-\frac{1}{2}x^{2}+a(1-lc_{\alpha})x+\frac{l}{c_{\alpha}}-\frac{a^{2}+l^{2}}{2},&a-l<x<a+l,\\\
l[-(1+ac_{\alpha})x+(c_{\alpha}^{-1}+a)],&a+l\leq x\leq\pi.\end{array}\right.$
In particular, if the whole rod $[-\pi,\pi]$ is heated with unit density, then
$a=0$, $l=\pi$, and $u_{\chi_{[-\pi,\pi]}}(x)=\eta_{\alpha}(x)$, where
$\eta_{\alpha}(x)=-\frac{1}{2}x^{2}+\frac{\pi}{\alpha}+\frac{\pi^{2}}{2},\quad-\pi\leq
x\leq\pi.$ (2.5)
Next, we fix $x_{0}\in[0,\pi]$, $l\in(0,\pi)$, $\alpha>0$ and treat
$u_{\chi_{I}}$ as a function $F(a)$ of the variable $a\in[-\pi+l,\pi-l]$.
We have to consider the following cases:
1. 1)
If $0\leq x_{0}\leq-\pi+2l$, then
$F(a)=-\frac{1}{2}a^{2}+x_{0}(1-lc_{\alpha})a-\frac{1}{2}x_{0}^{2}+\frac{l}{c_{\alpha}}-\frac{l^{2}}{2},\quad-\pi+l\leq
a\leq\pi-l,$
2. 2)
If $-\pi+2l<x_{0}<\pi-2l$, then
$F(a)=\left\\{\begin{array}[]{ll}l(1-x_{0}c_{\alpha})a+l(\frac{1}{c_{\alpha}}-x_{0}),&-\pi+l\leq
a\leq x_{0}-l,\\\
-\frac{1}{2}a^{2}+x_{0}(1-lc_{\alpha})a-\frac{1}{2}x_{0}^{2}+\frac{l}{c_{\alpha}}-\frac{l^{2}}{2},&x_{0}-l\leq
a\leq x_{0}+l,\\\
-l(1+x_{0}c_{\alpha})a+l(\frac{1}{c_{\alpha}}+x_{0}),&x_{0}+l\leq
a\leq\pi-l.\end{array}\right.$
3. 3)
If $\max\\{-\pi+2l,\pi-2l\\}\leq x_{0}\leq\pi$, then
$F(a)=\left\\{\begin{array}[]{ll}l(1-x_{0}c_{\alpha})a+l(\frac{1}{c_{\alpha}}-x_{0}),&-\pi+l\leq
a\leq x_{0}-l,\\\
-\frac{1}{2}a^{2}+x_{0}(1-lc_{\alpha})a-\frac{1}{2}x_{0}^{2}+\frac{l}{c_{\alpha}}-\frac{l^{2}}{2},&x_{0}-l\leq
a\leq\pi-l.\end{array}\right.$
A simple argument, left to the interested reader, shows that, in all three
cases, $F(a)$ is positive and concave on the interval $[-\pi+l,\pi-l]$ and
takes its minimal value $\mu_{\alpha}=\mu_{\alpha}(x_{0},l)$ at $a=-\pi+l$.
Evaluating $\mu_{\alpha}=F(-\pi+l)$, we find
$\mu_{\alpha}=\left\\{\begin{array}[]{l}l[((\pi-l)c_{\alpha}-1)x_{0}+\frac{1}{c_{\alpha}}+l-\pi],{\mbox{\
if $-\pi+2l<x_{0}\leq\pi$,}}\\\
-\frac{1}{2}x_{0}^{2}-(\pi-l)(1-lc_{\alpha})x_{0}-l^{2}+l(\frac{1}{c_{\alpha}}+\pi)-\frac{\pi^{2}}{2},{\
\mbox{otherwise.}}\end{array}\right.$
Next, we find the maximum $\nu_{\alpha}=\nu_{\alpha}(x_{0},l)=\max F(a)$ taken
over the interval $-\pi+l\leq a\leq\pi-l$ and identify the point
$a_{m}\in[-\pi+l,\pi-l]$, where this maximum is achieved. Let $q(a)$ denote
the quadratic function as in parts 1)–3). Then $q(a)$ takes its maximum at the
point $a_{0}=x_{0}(1-lc_{\alpha})$. Notice that $0\leq a_{0}\leq x_{0}$ with
equality sign in either of these inequalities if and only if $x_{0}=0$. If
$a_{0}+l\leq\pi$, then the function $F(a)$ is defined at $a_{0}$ and takes its
maximum at this point. Thus, if $a_{0}+l\leq\pi$, then
$a_{m}=x_{0}(1-lc_{\alpha})$. If $a_{0}+l>\pi$, then $F(a)$ takes its maximum
at $a_{m}=\pi-l$. Combining these cases, we have the following equation for
the central point $a_{m}=a_{m}(x_{0},l,\alpha)$ of the heating interval of
length $2l$, which generates the maximal temperature at the point $x_{0}$:
$a_{m}=\left\\{\begin{array}[]{ll}x_{0}(1-lc_{\alpha}),&{\mbox{if
$x_{0}<\frac{\pi-l}{1-lc_{\alpha}}$,}}\\\
\pi-l,&{\mbox{otherwise.}}\end{array}\right.$ (2.6)
With these notations, we can evaluate the maximum
$\nu_{\alpha}=\nu_{\alpha}(x_{0},l)$ as follows:
$\nu_{\alpha}=\left\\{\begin{array}[]{lr}\frac{l}{c_{\alpha}}\left(1-\frac{lc_{\alpha}}{2}\right)\,\left(1-x_{0}^{2}c_{\alpha}^{2}\right),&{\mbox{if
$x_{0}<\frac{\pi-l}{1-lc_{\alpha}}$,}}\\\ \frac{1}{2}(\pi-
x_{0})^{2}+l(1-c_{\alpha}x_{0})(2\pi-l+\frac{1}{\alpha}),&{\mbox{otherwise.}}\end{array}\right.$
(2.7)
The inequality $a_{0}+l\leq\pi$ is equivalent to the inequality
$\alpha\geq\frac{x_{0}+l-\pi}{(\pi-l)(\pi-x_{0})}.$
Let us define $\alpha_{m}\geq 0$ as
$\alpha_{m}=\left\\{\begin{array}[]{ll}\frac{x_{0}+l-\pi}{(\pi-l)(\pi-
x_{0})},&{\mbox{if $x_{0}<\pi-l$,}}\\\
0,&{\mbox{otherwise.}}\end{array}\right.$ (2.8)
Now, our arguments above show that if $x_{0}\in[0,\pi)$, $l\in(0,\pi)$ and
$\alpha>\alpha_{m}$, then the function $F(a)$ achieves its maximum
$\nu_{\alpha}(x_{0},l)$, given by the first line of (2.7), at the point
$a_{m}=x_{0}(1-lc_{\alpha})$, and if $0<\alpha\leq\alpha_{m}$, then $F(a)$
achieves its maximum $\nu_{\alpha}(x_{0},l)$, given by the second line of
(2.7), at the point $a_{m}=\pi-l$.
Combining our results and using the notation introduced above, we obtain the
following lemma.
###### Lemma 1.
1) Let $x_{0}\in[0,\pi]$, $l\in(0,\pi)$, and $\alpha>0$ be fixed and let $a$
vary from $-\pi+l$ to $\pi-l$.
If $\alpha>\alpha_{m}$, then the function
$F(a)=u_{\chi_{I(a,l)}}(x_{0},\alpha)$ increases from its minimal value
$\mu_{\alpha}(x_{0},l)$ to its maximal value $\nu_{\alpha}(x_{0},l)$ as $a$
varies from $-\pi+l$ to $a_{m}=x_{0}(1-lc_{\alpha})<\pi-l$, and $F(a)$
decreases as $a$ varies from $a_{m}$ to $\pi-l$.
If $0<\alpha\leq\alpha_{m}$, then the function
$F(a)=u_{\chi_{I(a,l)}}(x_{0},\alpha)$ increases from $\mu_{\alpha}(x_{0},l)$
to $\nu_{\alpha}(x_{0},l)$ as $a$ varies from $-\pi+l$ to $a_{m}=\pi-l$.
2) Furthermore, the point $a_{m}$, where $F(a)$ takes its maximum, stays at
$\pi-l$ for $0<\alpha\leq\alpha_{m}$ and $a_{m}$ decreases from $\pi-l$ to
$\frac{\pi-l}{\pi}x_{0}$, when $\alpha$ runs from $\alpha_{m}$ to $\infty$.
3) Moreover, if $x_{0}\in[0,\pi]$ and $\alpha>0$ are fixed and $a_{m}$ is
considered as a function $a_{m}(l)$ of $l$, then for $0<l_{1}<l_{2}<\pi$,
$x_{0}\in[a_{m}(l_{1})-l_{1},a_{m}(l_{1})+l_{1}]\subset[a_{m}(l_{2})-l_{2},a_{m}(l_{2})+l_{2}].$
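Lemma 1 is easy to verify numerically. The sketch below (illustrative code, not from the paper) evaluates $F(a)=u_{\chi_{I(a,l)}}(x_{0})$ by quadrature over $I(a,l)$ and locates its maximizer on a grid, which should agree with $a_{m}=x_{0}(1-lc_{\alpha})$ of (2.6) when $\alpha>\alpha_{m}$.

```python
import numpy as np

def F(a, x0, l, alpha, n=2001):
    """F(a) = u_{chi_{I(a,l)}}(x0): integrate G(x0, y) of (1.4) over I(a, l)."""
    c = alpha / (1 + alpha * np.pi)
    y = np.linspace(a - l, a + l, n)
    h = y[1] - y[0]
    g = -0.5 * c * x0 * y - 0.5 * np.abs(x0 - y) + 0.5 / c
    return h * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule

alpha, x0, l = 1.0, 1.0, 1.0
c = alpha / (1 + alpha * np.pi)
a_m = x0 * (1 - l * c)                 # predicted maximizer, Eq. (2.6)
a_grid = np.linspace(-np.pi + l, np.pi - l, 801)
vals = np.array([F(a, x0, l, alpha) for a in a_grid])
a_best = a_grid[np.argmax(vals)]
```

With these parameters $\alpha_{m}=0$, so the first case of (2.6) applies and the grid maximizer lands next to $a_{m}$, illustrating the increase-then-decrease behavior stated in part 1).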
## 3\. Temperature gap between a fixed spot and the edges of a rod for a
single interval.
As in the previous section, we assume that the Robin rod $[-\pi,\pi]$ is
heated with unit density along the interval $I=I(a,l)$. Let us fix
$x_{0}\in[-\pi,\pi]$, $l\in(0,\pi)$, $\alpha>0$, and consider the temperature
gap between the point $x_{0}$ and the left edge of the rod as a function of
$a\in[-\pi+l,\pi-l]$; i.e. we consider the function
$E(a)=u_{\chi_{I}}(x_{0})-u_{\chi_{I}}(-\pi).$ (3.1)
To find $E(a)$, we use equation (2.4). Depending on the values of $l$ and
$x_{0}$, we have to consider the following cases:
1. 1)
If $-\pi\leq x_{0}\leq\min\\{-\pi+2l,\pi-2l\\}$, then
$E(a)=\left\\{\begin{array}[]{ll}-\frac{1}{2}a^{2}+(x_{0}(1-lc_{\alpha})+l(1-\pi
c_{\alpha}))a-\\\ -\frac{1}{2}x_{0}^{2}+\pi l-\frac{l^{2}}{2},&{\mbox{if
$-\pi+l\leq a\leq x_{0}+l$,}}\\\
-lc_{\alpha}(\pi+x_{0})a+l(\pi+x_{0}),&{\mbox{if $x_{0}+l\leq
a\leq\pi-l$.}}\end{array}\right.$
2. 2)
If $-\pi+2l<x_{0}<\pi-2l$, then
$E(a)=\left\\{\begin{array}[]{ll}l(2-(\pi+x_{0})c_{\alpha})a+l(\pi-
x_{0}),&{\mbox{if $-\pi+l\leq a\leq x_{0}-l$,}}\\\
-\frac{1}{2}a^{2}+(x_{0}(1-lc_{\alpha})+l(1-\pi c_{\alpha}))a-\\\
-\frac{1}{2}x_{0}^{2}+\pi l-\frac{l^{2}}{2},&{\mbox{if $x_{0}-l\leq a\leq
x_{0}+l$,}}\\\ -lc_{\alpha}(\pi+x_{0})a+l(\pi+x_{0}),&{\mbox{if $x_{0}+l\leq
a\leq\pi-l$.}}\end{array}\right.$
3. 3)
If $\max\\{-\pi+2l,\pi-2l\\}\leq x_{0}\leq\pi$, then
$E(a)=\left\\{\begin{array}[]{ll}l(2-(\pi+x_{0})c_{\alpha})a+l(\pi-
x_{0}),&{\mbox{if $-\pi+l\leq a\leq x_{0}-l$,}}\\\
-\frac{1}{2}a^{2}+(x_{0}(1-lc_{\alpha})+l(1-\pi c_{\alpha}))a-\\\
-\frac{1}{2}x_{0}^{2}+\pi l-\frac{l^{2}}{2},&{\mbox{if $x_{0}-l\leq
a\leq\pi-l$.}}\end{array}\right.$
4. 4)
If $\pi-2l\leq x_{0}\leq-\pi+2l$, then for $-\pi+l\leq a\leq\pi-l$,
$E(a)=-\frac{1}{2}a^{2}+(x_{0}(1-lc_{\alpha})+l(1-\pi
c_{\alpha}))a-\frac{1}{2}x_{0}^{2}+\pi l-\frac{l^{2}}{2}.$
Next, we show that in each of the four cases above there is a unique point
$a_{e}\in[-\pi+l,\pi-l]$ at which the function $E(a)$ achieves its maximum; we
denote this maximal value by $\tau_{\alpha}=\tau_{\alpha}(x_{0},l)$.
One can easily check that in all cases, $E^{\prime}(-\pi+l)>0$ and, in the
cases 1) and 2), $E^{\prime}(\pi-l)<0$. Since $E(a)$ is a smooth, at most
quadratic function, we conclude that, in the cases 1) and 2), $E(a)$
achieves its maximum $\tau_{\alpha}$ at the point
$a_{e}=x_{0}+l-lc_{\alpha}(\pi+x_{0})$, $-\pi+l<a_{e}<x_{0}+l$.
In the cases 3) and 4), we find that
$E^{\prime}(\pi-l)=-\pi+2l+x_{0}-l(\pi+x_{0})c_{\alpha}.$ (3.2)
Since, in the cases 3) and 4), $-\pi+2l+x_{0}>0$, we conclude from (3.2) and
(1.5) that $E^{\prime}(\pi-l)<0$ if $\alpha>\alpha_{e}$ and
$E^{\prime}(\pi-l)\geq 0$ if $0<\alpha\leq\alpha_{e}$, where $\alpha_{e}$ is
defined as
$\alpha_{e}=\left\\{\begin{array}[]{ll}\frac{-\pi+2l+x_{0}}{(\pi-l)(\pi-
x_{0})}&{\mbox{if $x_{0}>\pi-2l$,}}\\\ 0&{\mbox{if
$x_{0}\leq\pi-2l$.}}\end{array}\right.$ (3.3)
In the first of these cases, $E(a)$ takes its maximum $\tau_{\alpha}$ at the
point $a_{e}=x_{0}+l-lc_{\alpha}(\pi+x_{0})$. In the second case, $E(a)$
achieves its maximum at the point $a_{e}=\pi-l$.
Combining these cases, we conclude that $E(a)$ achieves its maximum at the
point
$a_{e}=\left\\{\begin{array}[]{lr}\pi-l,&{\mbox{if $x_{0}>\pi-2l$ and
$0<\alpha\leq\alpha_{e}$,}}\\\
x_{0}+l-lc_{\alpha}(\pi+x_{0}),&{\mbox{otherwise.}}\end{array}\right.$ (3.4)
Calculating the values of $\tau_{\alpha}$ in the cases mentioned above, we
obtain:
$\tau_{\alpha}=\left\\{\begin{array}[]{l}-\frac{1}{2}x_{0}^{2}-\frac{1}{2}(\pi-l)^{2}+(x_{0}(1-lc_{\alpha})+l(1-\pi
c_{\alpha}))(\pi-l)+\pi l-\frac{l^{2}}{2},\\\ {\mbox{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ if $x_{0}>\pi-2l$ and $0<\alpha\leq\alpha_{e}$,}}\\\
\frac{1}{2}(x_{0}+l-lc_{\alpha}(\pi+x_{0}))^{2}-\frac{1}{2}x_{0}^{2}+\pi
l-\frac{1}{2}l^{2},{\ \ \ \ \ \mbox{otherwise.}\par}\end{array}\right.$ (3.5)
Combining our results, we obtain the following lemma.
###### Lemma 2.
Let $x_{0}\in[-\pi,\pi]$, $l\in(0,\pi)$, and $\alpha>0$ be fixed and let $a$
vary from $-\pi+l$ to $\pi-l$.
If $x_{0}$ and $l$ satisfy the inequalities in parts 1) or 2) given above in
this section or if $x_{0}$ and $l$ satisfy inequalities given in parts 3) or
4) and also $\alpha>\alpha_{e}$, where $\alpha_{e}$ is defined in (3.3), then
the function $E(a)=u_{\chi_{I}}(x_{0})-u_{\chi_{I}}(-\pi)$ increases from
$E(-\pi+l)$ to its maximal value $\tau_{\alpha}=\tau_{\alpha}(x_{0},l)$ given
by the second line in the equation (3.5) as $a$ varies from $-\pi+l$ to
$a_{e}$ defined by equation (3.4), and $E(a)$ decreases as $a$ varies from
$a_{e}$ to $\pi-l$.
If $x_{0}$ and $l$ satisfy the inequalities given in parts 3) or 4) and also
$0<\alpha\leq\alpha_{e}$, then the function $E(a)$ increases from $E(-\pi+l)$
to its maximal value $\tau_{\alpha}$ given by the first line in the equation
(3.5) as $a$ varies from $-\pi+l$ to $\pi-l$.
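The location of the maximizing center in Lemma 2 can be checked numerically. The sketch below is our own illustration, not part of the paper: it assumes the Robin problem is $-u''=f$ on $[-\pi,\pi]$ with $u'(-\pi)=\alpha u(-\pi)$ and $u'(\pi)=-\alpha u(\pi)$, and takes $c_{\alpha}=\alpha/(1+\alpha\pi)$, a form consistent with (3.3) and (4.2) (the actual definition of $c_{\alpha}$ appears earlier in the paper); the grid sizes and test values are arbitrary choices.

```python
import numpy as np

def robin_matrix(n, h, alpha):
    """FD matrix for -u'' with Robin ends u'(-pi)=alpha*u(-pi), u'(pi)=-alpha*u(pi)."""
    A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    A[0, 0] = A[-1, -1] = 2.0 * (1.0 + h * alpha)   # ghost-point elimination
    A[0, 1] = A[-1, -2] = -2.0
    return A / (h * h)

N = 400
x = np.linspace(-np.pi, np.pi, N + 1)
h = x[1] - x[0]
x0, l, alpha = 0.5, 0.8, 1.0                  # case 2): -pi+2l < x0 < pi-2l
Ainv = np.linalg.inv(robin_matrix(N + 1, h, alpha))
centers = np.linspace(-np.pi + l, np.pi - l, 201)
gaps = []
for a in centers:
    # fractional cell coverage of I(a,l) keeps the discretization second order
    f = np.clip((np.minimum(x + h/2, a + l) - np.maximum(x - h/2, a - l)) / h, 0.0, 1.0)
    u = Ainv @ f
    gaps.append(np.interp(x0, x, u) - u[0])   # E(a) = u(x0) - u(-pi)
gaps = np.array(gaps)
a_num = centers[int(np.argmax(gaps))]
c_alpha = alpha / (1.0 + alpha * np.pi)       # assumed closed form of c_alpha
a_e = x0 + l - l * c_alpha * (np.pi + x0)     # predicted maximizer, equation (3.4)
print(a_num, a_e)                             # both near 0.60
```

For these test values the grid search lands within a grid step of the predicted $a_{e}$, in agreement with the "otherwise" branch of (3.4).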
## 4\. Maximal temperature gap for a single interval.
The goal here is to evaluate the temperature gap ${\rm{osc}}(u_{J})$ for the
heat source $J=\delta+\chi_{I(a,l)}$, $l\in[0,\pi)$, $a\in[0,\pi-l]$,
$\delta\geq 0$. It follows from our calculations in Section 2 that
$u_{J}(x)=\delta\eta_{\alpha}(x)+u_{\chi_{I(a,l)}}(x),$ (4.1)
where $\eta_{\alpha}(x)$ is defined by (2.5) and $u_{\chi_{I(a,l)}}$ is
defined by (2.4).
An elementary calculation shows that, under our assumptions, $u_{J}(-\pi)\leq
u_{J}(\pi)$ with equality sign if and only if $a=0$. Since $u_{J}$ is concave
on $[-\pi,\pi]$ it follows that
$\min_{[-\pi,\pi]}u_{J}=u_{J}(-\pi)=\alpha^{-1}(\pi\delta+l(1-ac_{\alpha})).$
(4.2)
To find the maximum of $u_{J}$, we note that $u_{J}$ is a concave piece-wise
at most quadratic function, which achieves its maximum at the point
$x_{0}=\frac{a(1-lc_{\alpha})}{1+\delta}.$ (4.3)
Thus, $0\leq x_{0}\leq a$ with equality sign if and only if $a=0$. Evaluating
$u_{J}(x_{0})$ and simplifying, we find:
$\max_{[-\pi,\pi]}u_{J}=u_{J}(x_{0})=\frac{1}{2}\left(\frac{(1-lc_{\alpha})^{2}}{1+\delta}-1\right)\,a^{2}+\frac{\pi\delta}{\alpha}+\frac{\pi^{2}\delta}{2}+\frac{l}{c_{\alpha}}-\frac{l^{2}}{2}.$
(4.4)
Combining (4.2) and (4.4) and simplifying, we find that
${\rm{osc}}(u_{J})=H_{\alpha}$, where $H_{\alpha}=H_{\alpha}(a,l,\delta)$ is
given by
$H_{\alpha}=\frac{1}{2}\left(\frac{(1-lc_{\alpha})^{2}}{1+\delta}-1\right)\,a^{2}+\frac{l}{1+\alpha\pi}a+\pi
l+\frac{\pi^{2}\delta}{2}-\frac{l^{2}}{2}.$ (4.5)
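Formula (4.5) can be sanity-checked against a direct numerical solution of the Robin problem. The snippet below is our own illustration: it assumes the model is $-u''=f$ on $[-\pi,\pi]$ with Robin conditions $u'(-\pi)=\alpha u(-\pi)$, $u'(\pi)=-\alpha u(\pi)$, and takes $c_{\alpha}=\alpha/(1+\alpha\pi)$, a form consistent with (4.2) and (4.12) (the definition of $c_{\alpha}$ appears earlier in the paper); the parameter values are arbitrary.

```python
import numpy as np

alpha, l, delta, a = 1.0, 1.0, 0.5, 0.7
N = 800
x = np.linspace(-np.pi, np.pi, N + 1)
h = x[1] - x[0]

# FD matrix for -u'' with Robin ends, via ghost-point elimination at the boundary
A = (np.diag(np.full(N + 1, 2.0)) + np.diag(np.full(N, -1.0), 1)
     + np.diag(np.full(N, -1.0), -1))
A[0, 0] = A[-1, -1] = 2.0 * (1.0 + h * alpha)
A[0, 1] = A[-1, -2] = -2.0

# heat source J = delta + chi_{I(a,l)}, with fractional cell coverage
f = delta + np.clip((np.minimum(x + h/2, a + l) - np.maximum(x - h/2, a - l)) / h, 0.0, 1.0)
u = np.linalg.solve(A / (h * h), f)
osc_num = u.max() - u.min()

c = alpha / (1.0 + alpha * np.pi)             # assumed closed form of c_alpha
H = (0.5 * ((1 - l * c)**2 / (1 + delta) - 1.0) * a**2
     + l / (1.0 + alpha * np.pi) * a
     + np.pi * l + np.pi**2 * delta / 2.0 - l**2 / 2.0)   # formula (4.5)
print(osc_num, H)                             # both near 5.127
```

The numerical oscillation also attains its minimum at $x=-\pi$, as (4.2) predicts for $a>0$.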
Next we fix $l\in(0,\pi)$, $\alpha>0$, $\delta\geq 0$ and treat $H_{\alpha}$
as a function $H_{\alpha}(a)$ of $a\in[0,\pi-l]$. Since
$(1-lc_{\alpha})^{2}/(1+\delta)<1$, it follows from (4.5) that $H_{\alpha}(a)$
is concave and takes its maximum at the point
$a_{0}=\frac{(1+\delta)l}{(1+\alpha\pi)(\delta+2lc_{\alpha}-l^{2}c_{\alpha}^{2})}.$
(4.6)
Now, the question is whether or not the point $a_{0}$ given in (4.6) belongs
to the interval $[0,\pi-l]$. Notice that the function
$\delta+2lc_{\alpha}-l^{2}c_{\alpha}^{2}$ in the denominator of (4.6)
increases when $\alpha$ varies from $0$ to $\infty$. This implies that, if
$\delta$ and $l$ are fixed, then $a_{0}$, defined by (4.6) and considered as a
function $a_{0}=a_{0}(\alpha)$ of $\alpha$, decreases from
$(1+\delta)l/\delta$ to $0$ if $\delta>0$ or from $\infty$ to $0$ if
$\delta=0$, when $\alpha$ varies from $0$ to $\infty$. Therefore if
$(1+\delta)l\leq\delta(\pi-l)$, then $a_{0}\in[0,\pi-l]$ for all $\alpha>0$
and if $(1+\delta)l>\delta(\pi-l)$, then there is a unique $\alpha_{0}>0$ such
that $a_{0}\in[0,\pi-l]$ for $\alpha\geq\alpha_{0}$ and $a_{0}>\pi-l$ for
$0<\alpha<\alpha_{0}$. The value of $\alpha_{0}$ is the positive root of the
quadratic equation
$(\pi^{2}\delta+2\pi l-l^{2})\alpha^{2}+\frac{2(\pi\delta+l)(\pi-l)-\pi
l(1+\delta)}{\pi-l}\,\alpha+\delta-\frac{(1+\delta)l}{\pi-l}=0,$ (4.7)
which is
$\alpha_{0}=\frac{-2\pi^{2}\delta+3\pi\delta l-\pi
l+2l^{2}+l\sqrt{(1+\delta)(\pi^{2}\delta+9\pi^{2}-16\pi
l+8l^{2})}}{2(\pi^{3}\delta-\pi^{2}\delta l+2\pi^{2}l-3\pi l^{2}+l^{3})}.$
(4.8)
When $\delta=0$ and $l=\pi/2$, we obtain $\alpha_{0}=\frac{2}{\sqrt{3}\pi}$,
which is the transitional value of $\alpha$ found in Proposition 3.3 in [5].
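This special value, and the claim that (4.8) is the positive root of the quadratic (4.7), are easy to confirm numerically; the short script below is our own check of the algebra.

```python
import numpy as np

def alpha0(delta, l):
    """Positive root of (4.7), as given in closed form by (4.8)."""
    rad = l * np.sqrt((1 + delta) * (np.pi**2 * delta + 9 * np.pi**2
                                     - 16 * np.pi * l + 8 * l**2))
    num = -2 * np.pi**2 * delta + 3 * np.pi * delta * l - np.pi * l + 2 * l**2 + rad
    den = 2 * (np.pi**3 * delta - np.pi**2 * delta * l
               + 2 * np.pi**2 * l - 3 * np.pi * l**2 + l**3)
    return num / den

def quad47(alpha, delta, l):
    """Left-hand side of the quadratic equation (4.7)."""
    return ((np.pi**2 * delta + 2 * np.pi * l - l**2) * alpha**2
            + (2 * (np.pi * delta + l) * (np.pi - l) - np.pi * l * (1 + delta))
              / (np.pi - l) * alpha
            + delta - (1 + delta) * l / (np.pi - l))

print(alpha0(0.0, np.pi / 2))   # 2/(sqrt(3)*pi) ≈ 0.3676
```

Substituting `alpha0(delta, l)` back into `quad47` returns a residual at machine precision for admissible $(\delta,l)$.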
Thus, we can define the following parameters:
$\alpha_{g}=\left\\{\begin{array}[]{ll}\alpha_{0}&{\mbox{if
$(1+\delta)l>\delta(\pi-l)$,}}\\\ 0&{\mbox{if
$(1+\delta)l\leq\delta(\pi-l)$}}\end{array}\right.$ (4.9)
and
$a_{g}=\left\\{\begin{array}[]{ll}\pi-l&{\mbox{if
$(1+\delta)l>\delta(\pi-l)$,}}\\\ a_{0}&{\mbox{if
$(1+\delta)l\leq\delta(\pi-l)$.}}\end{array}\right.$ (4.10)
Figure 2, which illustrates possible situations, contains graphs of ${\rm
osc}(u_{J})=H_{\alpha}(a)$ considered as a function of the parameter $a$ for
some fixed values of $l$, $\delta$ and $\alpha$. These graphs show, in
particular, that when $\alpha$ varies from $0$ to $\infty$, the central point
of the segment providing the maximal oscillation for $u_{J}$ moves toward the
center of the rod from the right. For the special case $\delta=0$,
$l=\pi/2$, this behavior had already been observed in Proposition 3.3 in [5].
Fig 2. Graphs of ${\rm osc}(u_{J})=H_{\alpha}(a)$ as a function of $a$ for
different choices of $l$, $\delta$, and $\alpha$.
Summarizing the results of this section and evaluating the value of the
oscillation ${\rm{osc}}(u_{J})=H_{\alpha}(a,l,\delta)$ at the point $a=a_{g}$
defined in (4.10), we obtain the following.
###### Lemma 3.
Let $l\in(0,\pi)$, $a\in[0,\pi-l]$, $\alpha>0$, and $\delta\geq 0$ be fixed
and let $u_{J}$ be defined as in (4.1). Then
$\max_{a\in[0,\pi-l]}{\rm{osc}}(u_{J})=\Theta_{\alpha}(l,\delta),$ (4.11)
where
$\Theta_{\alpha}=\frac{1}{2}\frac{(1+\delta)l^{2}}{(\pi^{2}\delta+2\pi
l-l^{2})\alpha^{2}+2(\pi\delta+l)\alpha+\delta}+\pi
l+\frac{\pi^{2}\delta}{2}-\frac{l^{2}}{2},$ (4.12)
when $(1+\delta)l<\delta(\pi-l)$ and $\alpha>0$ or
$(1+\delta)l\geq\delta(\pi-l)$ and $\alpha>\alpha_{g}$, and
$\Theta_{\alpha}=\frac{1}{2}\frac{(l^{2}c_{\alpha}^{2}-2lc_{\alpha}-\delta)(\pi-l)^{2}}{1+\delta}+\frac{l(\pi-l)}{1+\alpha\pi}+\pi
l+\frac{\pi^{2}\delta}{2}-\frac{l^{2}}{2},$ (4.13)
when $(1+\delta)l\geq\delta(\pi-l)$ and $0<\alpha\leq\alpha_{g}$.
Furthermore, if $l\in(0,\pi)$, $\alpha>\alpha_{g}$, and $\delta\geq 0$ are
fixed, then ${\rm{osc}}(u_{J})$ considered as a function of $a$ increases when
$a$ varies from $0$ to $a_{0}$ defined by (4.6) and ${\rm{osc}}(u_{J})$
decreases when $a$ varies from $a_{0}$ to $\pi-l$. If
$0<\alpha\leq\alpha_{g}$, then ${\rm{osc}}(u_{J})$ increases when $a$ varies
from $0$ to $\pi-l$.
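The migration of the optimal center described after Fig. 2 can be reproduced numerically. The sketch below is our own illustration, assuming the Robin model $-u''=f$ on $[-\pi,\pi]$ with $u'(-\pi)=\alpha u(-\pi)$, $u'(\pi)=-\alpha u(\pi)$ (consistent with the boundary conditions in Problem 2); with $\delta=0$ and $l=\pi/2$ the transitional value is $\alpha_{0}=2/(\sqrt{3}\pi)\approx 0.368$, so a small $\alpha$ should place the best center at the endpoint $\pi-l$, while a large $\alpha$ should pull it toward the middle of the rod.

```python
import numpy as np

def robin_inv(n, h, alpha):
    # inverse of the FD matrix for -u'' with Robin ends (ghost-point scheme)
    A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    A[0, 0] = A[-1, -1] = 2.0 * (1.0 + h * alpha)
    A[0, 1] = A[-1, -2] = -2.0
    return np.linalg.inv(A / (h * h))

def best_center(alpha, l, N=400, M=121):
    # center a in [0, pi-l] maximizing osc(u) for the source chi_{I(a,l)}
    x = np.linspace(-np.pi, np.pi, N + 1)
    h = x[1] - x[0]
    Ainv = robin_inv(N + 1, h, alpha)
    centers = np.linspace(0.0, np.pi - l, M)
    oscs = []
    for a in centers:
        f = np.clip((np.minimum(x + h/2, a + l) - np.maximum(x - h/2, a - l)) / h,
                    0.0, 1.0)
        u = Ainv @ f
        oscs.append(u.max() - u.min())
    return centers[int(np.argmax(oscs))]

l = np.pi / 2
a_small = best_center(0.05, l)   # below the transition: maximizer at pi - l
a_large = best_center(2.0, l)    # above it: maximizer moves toward the center
print(a_small, a_large)          # about 1.57 and 0.32
```

The large-$\alpha$ value also matches $a_{0}$ from (4.6) when one substitutes the assumed $c_{\alpha}=\alpha/(1+\alpha\pi)$.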
## 5\. Approximation by step functions and continuity.
In this section, we collect auxiliary results on convergence of sequences of
solutions $u_{f_{n}}$ of the Robin problem, on approximation of $u_{f}$ by
sequences $u_{f_{n}}$ with piece-wise constant functions $f_{n}$ and on
continuity of $u_{f_{n}}$ as a function of the parameters of approximants.
These results will be used to prove our main theorems. We start with the
following convergence lemma.
###### Lemma 4.
If $f_{n}\in\mathcal{F}(m,M,s)$, $n=1,2,\dots$, and $f_{n}\to f$
($n\to\infty$) a.e. on $[-\pi,\pi]$, then $u_{f_{n}}\to u_{f}$ uniformly on
$[-\pi,\pi]$ and for some subsequence,
$\lim_{k\to\infty}{\rm osc}(u_{f_{n_{k}}})={\rm osc}(u_{f}).$ (5.1)
_Proof._ By the Dominated Convergence Theorem, $f_{n}$ converges to $f$ in
$L^{1}$. By (1.4), for every $x\in[-\pi,\pi]$,
$\displaystyle|u_{f_{n}}(x)-u_{f}(x)|$ $\displaystyle=$
$\displaystyle\left|\int_{-\pi}^{\pi}G(x,y)(f_{n}(y)-f(y))dy\right|$
$\displaystyle\leq$ $\displaystyle\max_{[-\pi,\pi]^{2}}G\;\;\|f_{n}-f\|_{1}.$
So, $u_{f_{n}}$ converges uniformly to $u_{f}$ on $[-\pi,\pi]$.
For each $n\in\mathbb{N}$, let $x_{n}$, $\tilde{x}_{n}$ be points of minimum
and maximum of $u_{f_{n}}$, respectively. Choose a subsequence such that
$x_{n_{k}}\to x_{o}$ and $\tilde{x}_{n_{k}}\to\tilde{x}_{o}$. By a standard
property of uniform convergence (see e.g. [4, Problems 3.1.18, 3.1.23]),
$u_{f_{n_{k}}}(x_{n_{k}})\to
u_{f}(x_{o}),\;\;\;\;\;\;u_{f_{n_{k}}}(\tilde{x}_{n_{k}})\to
u_{f}(\tilde{x}_{o}),\;\;\;\;\;\;(k\to\infty).$
Therefore, for every $x\in[-\pi,\pi]$,
$u_{f}(x)=\lim_{k}u_{f_{n_{k}}}(x)\geq\lim_{k}u_{f_{n_{k}}}(x_{n_{k}})=u_{f}(x_{o}).$
So $u_{f}(x_{o})=\min u_{f}$. Similarly $u_{f}(\tilde{x}_{o})=\max u_{f}$. It
follows that
${\rm
osc}(u_{f_{n_{k}}})=u_{f_{n_{k}}}(\tilde{x}_{n_{k}})-u_{f_{n_{k}}}(x_{n_{k}})\to
u_{f}(\tilde{x}_{o})-u_{f}(x_{o})={\rm osc}(u_{f}).$
The proof is complete. $\Box$
For $n,k\in\mathbb{N}$, $1\leq k\leq n$, let
$I_{n,k}=[-\pi+2\pi(k-1)/n,-\pi+2\pi k/n]$. Thus, the intervals
$I_{n,1},\ldots,I_{n,n}$ constitute a partition of the interval $[-\pi,\pi]$
into $n$ subintervals of equal length. We need the following approximation
result.
###### Lemma 5.
Let $f\in\mathcal{F}(m,M,s)$ and $\alpha>0$. Then for every $n\in\mathbb{N}$
there are constants $c_{n,k}$, $1\leq k\leq n$, with $m\leq c_{n,k}\leq M$,
such that $f_{n}=\sum_{k=1}^{n}c_{n,k}\chi_{I_{n,k}}$ satisfies the following:
1)
$m\leq f_{n}\leq f_{n+1}\leq M$, $f_{n}\to f$ a.e. on $[-\pi,\pi]$,
$\|f_{n}\|_{L^{1}}\to\|f\|_{L^{1}}=2\pi s$.
2)
$u_{f_{n}}(x)\to u_{f}(x)$ uniformly on $[-\pi,\pi]$.
_Proof._ Let $\varepsilon>0$. By a standard approximation result (see e.g. [8,
Theorem 3.14]), there exists a continuous function $g$ on $[-\pi,\pi]$ such
that $\|f-g\|_{L^{1}}<\varepsilon/2$. We can demand that $m\leq g\leq M$. For
$n\in\mathbb{N}$ and $k=1,2,\dots,n$, set
$c_{n,k}=\min\\{g(x):x\in
I_{n,k}\\}\;\;\;\;\;\hbox{and}\;\;\;\;\;f_{n}=\sum_{k=1}^{n}c_{n,k}\chi_{I_{n,k}}.$
Since $f_{n}\leq g$ and $g$ is Riemann integrable,
$\|f_{n}-g\|_{L^{1}}=\int_{[-\pi,\pi]}(g-f_{n})\to 0\;\;(n\to\infty).$
Hence $f_{n}\to g$ in $L^{1}$. So $\|f_{n}-g\|_{L^{1}}<\varepsilon/2$ for all
sufficiently large $n$. It follows that $f_{n}\to f$ in $L^{1}$. Moreover,
$(f_{n})$ is an increasing sequence of functions. Therefore, $f_{n}\to f$ a.e.
on $[-\pi,\pi]$. The remaining assertions follow easily from Lemma 4. $\Box$
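The construction in the proof of Lemma 5 can be illustrated numerically. In the sketch below (our own, with the hypothetical choice $g(x)=1+\frac{1}{2}\sin x$, so that $m=1/2$ and $M=3/2$), we build $f_{n}$ with $c_{n,k}=\min_{I_{n,k}}g$ along the dyadic subsequence $n=4,8,16,\ldots$, for which the partitions are nested and the monotonicity $f_{n}\leq f_{2n}$ is immediate; the $L^{1}$ error $\|g-f_{n}\|_{1}$ then shrinks as $n$ grows.

```python
import numpy as np

def g(x):
    return 1.0 + 0.5 * np.sin(x)              # continuous, 1/2 <= g <= 3/2

xs = np.linspace(-np.pi, np.pi, 20001)        # fine grid for minima and integrals

def step_approx(n):
    # f_n with c_{n,k} = min of g over I_{n,k}, evaluated on xs
    edges = np.linspace(-np.pi, np.pi, n + 1)
    k = np.clip(np.searchsorted(edges, xs, side='right') - 1, 0, n - 1)
    mins = np.array([g(np.linspace(edges[j], edges[j + 1], 200)).min()
                     for j in range(n)])
    return mins[k]

ns = [4, 8, 16, 32, 64]                       # dyadic refinement: nested partitions
approx = [step_approx(n) for n in ns]
errs = [(g(xs) - fn).mean() * 2 * np.pi for fn in approx]   # L1 error, f_n <= g
print([round(e, 3) for e in errs])            # errors decrease toward 0
```

The printed errors decrease monotonically, matching the convergence $\|f_{n}\|_{L^{1}}\to\|g\|_{L^{1}}$ used in the proof.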
Let $n\in\mathbb{N}$, $m,M,s\in\mathbb{R}$, $0\leq m\leq s\leq M$, and
$\alpha>0$ be fixed. Let $K_{n}=K_{n}(m,M,s)$ denote the compact set of points
$(\overline{t},\overline{c})=(t_{1},t_{2},\ldots,t_{n},c_{1},\ldots,c_{n})\in\mathbb{R}^{2n}$
such that $-\pi=t_{0}\leq t_{1}\leq\ldots\leq t_{n}=\pi$, $0\leq c_{k}\leq
M-m$ and
$\sum_{k=1}^{n}c_{k}(t_{k}-t_{k-1})=2\pi(s-m).$
For $(\overline{t},\overline{c})\in K_{n}$ and $x\in[-\pi,\pi]$, consider the
function $f_{\overline{t},\overline{c}}\in\mathcal{F}(m,M,s)$ defined as
$f_{\overline{t},\overline{c}}(x)=m+\sum_{k=1}^{n}c_{k}\chi_{I_{k}}(x),\;\;\hbox{where}\;\;I_{k}=[t_{k-1},t_{k}].$
Then the solution $u_{f_{\overline{t},\overline{c}}}(x)$ of the Robin problem
is a linear combination of elementary functions as in the equation (2.4). This
immediately implies the following continuity lemma.
###### Lemma 6.
Let $u_{f_{\overline{t},\overline{c}}}(x)$ be the solution to the Robin
problem considered as a function of $x\in[-\pi,\pi]$ and
$(\overline{t},\overline{c})\in K_{n}$. Then
$u_{f_{\overline{t},\overline{c}}}(x)$ is continuous on the compact set
$[-\pi,\pi]\times K_{n}$.
In particular, if $E_{1}$ is a compact subset of $[-\pi,\pi]$, $E_{2}$ is a
compact subset of $K_{n}$, then there are points in $E_{1}\times E_{2}$, where
$u_{f_{\overline{t},\overline{c}}}(x)$ achieves its minimum and maximum on
$E_{1}\times E_{2}$.
## 6\. Main proofs.
Now we are ready to present proofs of our main results. We start with the
proof of Theorem 2.
Proof of Theorem 2. Let $f\in\mathcal{F}(m,M,s)$. First, we prove the right
inequality in (1.8). It follows from Lemma 5 that, for every $\varepsilon>0$,
there exists a piece-wise constant function
$f_{n,\overline{c}^{\prime}}=m+\sum_{k=1}^{n}c_{n,k}^{\prime}\chi_{I_{n,k}}\in\mathcal{F}(m,M,s_{1})$,
where $\overline{c}^{\prime}=(c^{\prime}_{n,1},\ldots,c^{\prime}_{n,n})$, such
that $0\leq c_{n,k}^{\prime}\leq M-m$, $|s_{1}-s|<\varepsilon$ and, for the
given $x_{0}\in[-\pi,\pi]$,
$|u_{f_{n,\overline{c}^{\prime}}}(x_{0})-u_{f}(x_{0})|<\varepsilon.$ (6.1)
We note here that the integer $n$ in the definition of the function
$f_{n,\overline{c}^{\prime}}$ can be chosen as large as we need for our proof.
Indeed, for any integer $j\geq 1$, consider the partition of $[-\pi,\pi]$ into
the intervals $I_{nj,s}$, $1\leq s\leq nj$. Then consider
$\overline{c}^{\prime\prime}=(c_{nj,1}^{\prime\prime},\ldots,c_{nj,nj}^{\prime\prime})$
such that $c_{nj,s}^{\prime\prime}=c_{n,k}^{\prime}$ if $I_{nj,s}\subset
I_{n,k}$. With these notations, we have
$f_{nj,\overline{c}^{\prime\prime}}=f_{n,\overline{c}^{\prime}}$ a.e. on
$[-\pi,\pi]$.
Next, for $n$ sufficiently large, we consider
$f_{\overline{c}}(x_{0})=m+\sum_{k=1}^{n}c_{k}\chi_{I_{n,k}}(x_{0})$ and
$u_{f_{\overline{c}}}(x_{0})$ as functions of
$\overline{c}=(c_{1},\ldots,c_{n})$, assuming that $\overline{c}$ varies over
the compact set defined by the following conditions:
$0\leq c_{k}\leq M-m,\quad\frac{1}{n}\sum_{k=1}^{n}c_{k}=s_{1}-m.$ (6.2)
It follows from the continuity Lemma 6, that there is
$\overline{c}^{*}=(c_{1}^{*},\ldots,c_{n}^{*})$, satisfying conditions (6.2),
such that
$\max u_{f_{\overline{c}}}(x_{0})\leq u_{f_{\overline{c}^{*}}}(x_{0}),$ (6.3)
where the maximum is taken over all $\overline{c}$ satisfying conditions
(6.2).
Let $a^{*}=a_{m}(\frac{\pi}{n})$ denote the center of the interval $I$ of
length $\frac{2\pi}{n}$, which provides the maximum to the function
$u_{\chi_{I}}(x)$ at the point $x=x_{0}$. We recall that
$a^{*}=a_{m}(\frac{\pi}{n})$, given by equation (2.6), is such that $0\leq
a^{*}\leq x_{0}\leq a^{*}+\frac{\pi}{n}$.
Suppose that the centers of the intervals $I_{n,k}$ and $I_{n,k+1}$ lie in the
interval $[-\pi,a^{*}]$. Under these conditions, we claim that either
$c_{k}^{*}=0$ or $c_{k+1}^{*}=M-m$. Indeed, if $c_{k}^{*}>0$ and
$c_{k+1}^{*}<M-m$ then, for all sufficiently small $\delta>0$,
$\widetilde{f}=f_{\overline{c}^{*}}-\delta\chi_{I_{n,k}}+\delta\chi_{I_{n,k+1}}\in\mathcal{F}(m,M,s_{1})$.
Furthermore, it follows from Lemma 1 that
$u_{\chi_{I_{n,k}}}(x_{0})<u_{\chi_{I_{n,k+1}}}(x_{0})$. The latter together
with the additivity property (1.6) imply that
$u_{\widetilde{f}}(x_{0})>u_{f_{\overline{c}^{*}}}(x_{0})$, contradicting the
inequality (6.3). A similar argument shows that if the centers of the
intervals $I_{n,k}$ and $I_{n,k+1}$ lie in the interval $[a^{*},\pi]$ then
either $c_{k}^{*}=M-m$ or $c_{k+1}^{*}=0$.
Using these observations, we conclude that, if the integer $n$ is large
enough, then there are integers $k_{1},k_{2}$, $1\leq k_{1}<k_{2}\leq n$, such
that $c_{j}^{*}=0$ for $1\leq j<k_{1}$ and $k_{2}<j\leq n$, $c_{j}^{*}=M-m$
for $k_{1}<j<k_{2}$, and $0\leq c_{j}^{*}<M-m$, for $j=k_{1}$ and $j=k_{2}$.
Since $f_{n,\overline{c}^{\prime}}\in\mathcal{F}(m,M,s_{1})$, it follows from
(6.3) and the additivity property (1.6) that
$u_{f_{n,\overline{c}^{\prime}}}(x_{0})\leq
u_{f_{\overline{c}^{*}}}(x_{0})<u_{\widehat{f}}(x_{0}),$ (6.4)
where $\widehat{f}=m+(M-m)\chi_{\widehat{I}}$ and $\widehat{I}=[-\pi+2\pi
k_{1}/n,-\pi+2\pi k_{2}/n]$.
Let $l_{1}=\pi(s_{1}-m)/(M-m)$ and let $\widehat{l}$ denote the half length of
the interval $\widehat{I}$. Since $f^{*}\in\mathcal{F}(m,M,s_{1})$, it follows from our
construction of $\widehat{I}$ that
$l_{1}\leq\widehat{l}\leq l_{1}+\frac{2\pi}{n}.$ (6.5)
It follows from Lemma 1 and equation (6.5) that
$u_{\chi_{\widehat{I}}}(x_{0})\leq\nu_{\alpha}(x_{0},\widehat{l})\leq\nu_{\alpha}(x_{0},l_{1})+O(1/n),$
(6.6)
where $\nu_{\alpha}(x,l)$ is defined by (2.7). The latter inequality together
with equation (2.5) and the additivity property (1.6) implies the following:
$u_{\widehat{f}}(x_{0})\leq\eta_{\alpha}(x_{0})m+(M-m)\nu_{\alpha}(x_{0},l_{1})+O(1/n).$
(6.7)
Combining (6.1), (6.4) and (6.7), we conclude that
$u_{f}(x_{0})\leq
u_{f_{n,\overline{c}^{\prime}}}(x_{0})+\varepsilon\leq\eta_{\alpha}(x_{0})m+(M-m)\nu_{\alpha}(x_{0},l)+\delta(\varepsilon,n),$
(6.8)
where $\delta(\varepsilon,n)\to 0$ when $\varepsilon\to 0$ and $n\to\infty$.
Since $\varepsilon>0$ in equation (6.1) can be chosen arbitrarily small and
the integer $n$ can be chosen arbitrarily large, (6.8) implies the second
inequality in (1.8).
To prove uniqueness, we assume that $f$ does not coincide with
$f_{0}^{+}=m+(M-m)\chi_{I(a_{m},l)}$ on a set of positive measure. Since $f\in
L^{1}$, there are “density points” $x_{1}$ and $x_{2}$, such that
$f_{0}^{+}(x_{1})=M$ and $f(x)<M-\varepsilon$ on a subset $E_{1}$ of some
small interval $I_{1}$ centered at $x_{1}$, $f_{0}^{+}(x_{2})=m$ and
$f(x)>m+\varepsilon$ on a subset $E_{2}$ of some small interval $I_{2}$
centered at $x_{2}$. Let $E=E_{1}\cap(E_{2}+(x_{1}-x_{2}))$. Since $x_{1}$ and
$x_{2}$ are density points for $f\in L^{1}$, the one-dimensional Lebesgue
measure of $E$ is strictly positive. We may assume without loss of generality
that $E$ and $E-(x_{1}-x_{2})$ either both lie on the interval $[-\pi,a_{m}]$
or both lie on the interval $[a_{m},\pi]$. Under these assumptions, it follows
from the monotonicity properties of Lemma 1 that
$u_{\chi_{E}}(x_{0})>u_{\chi_{E+(x_{1}-x_{2})}}(x_{0})$. Therefore, replacing
$f$ by $\widetilde{f}=f-\varepsilon\chi_{E-(x_{1}-x_{2})}+\varepsilon\chi_{E}$
with $\varepsilon>0$ small enough, we obtain a function
$\widetilde{f}\in\mathcal{F}(m,M,s)$ such that
$u_{f}(x_{0})<u_{\widetilde{f}}(x_{0})$. Since
$\widetilde{f}\in\mathcal{F}(m,M,s)$, the right inequality in (1.8) holds true
for $u_{\widetilde{f}}$. Therefore, for the function $u_{f}$ the right
inequality in (1.8) holds with the sign of strict inequality.
To prove the left inequality in (1.8), we assume that $f\in\mathcal{F}(m,M,s)$
and consider the function $f^{-}=M+m-f$. We have $m\leq f^{-}\leq M$ and
$\|f^{-}\|_{L^{1}}=2\pi s^{-}$ with $s^{-}=M+m-s$, $m<s^{-}<M$. Thus,
$f^{-}\in\mathcal{F}(m,M,s^{-})$ and therefore, by our proof above,
$u_{f^{-}}(x_{0})\leq\eta_{\alpha}(x_{0})m+(M-m)\nu_{\alpha}(x_{0},l^{-}),$
(6.9)
where $l^{-}=\pi(s^{-}-m)/(M-m)=\pi-l$. Since $f+f^{-}=M+m$, we have
$u_{f}(x)+u_{f^{-}}(x)=\int_{-\pi}^{\pi}G(x,y)(f(y)+f^{-}(y))\,dy=(M+m)\eta_{\alpha}(x).$
(6.10)
Combining equations (6.9) and (6.10), we obtain the left inequality in (1.8).
Furthermore, (1.8) holds with the equality sign in the left inequality if and
only if (6.9) holds with the sign of equality. We proved above that the latter
holds if and only if $f^{-}=m+(M-m)\chi_{I(a_{m}^{-},l^{-})}$ a.e. on
$[-\pi,\pi]$, which implies that the sign of equality in the left inequality
in (1.8) occurs if and only if $f=f_{0}^{-}$ a.e. on $[-\pi,\pi]$. $\Box$
The proofs of Theorems 1 and 3 presented below have the same structure as the
proof of Theorem 2; essentially, they use the same arguments with minor
changes. Below, we sketch these proofs, emphasizing the changes.
Proof of Theorem 3. First, given $\varepsilon>0$, we approximate
$f\in\mathcal{F}(m,M,s)$ with
$f_{n,\overline{c}^{\prime}}\in\mathcal{F}(m,M,s_{1})$ such that
$|s_{1}-s|<\varepsilon$ and
$|(u_{f_{n,\overline{c}^{\prime}}}(x_{0})-u_{f_{n,\overline{c}^{\prime}}}(-\pi))-(u_{f}(x_{0})-u_{f}(-\pi))|<\varepsilon.$
(6.11)
Then, using the continuity Lemma 6, we find $f^{*}\in\mathcal{F}(m,M,s_{1})$
such that
$\max\\{u_{f_{\overline{c}}}(x_{0})-u_{f_{\overline{c}}}(-\pi)\\}\leq
u_{f_{\overline{c}^{*}}}(x_{0})-u_{f_{\overline{c}^{*}}}(-\pi),$ (6.12)
where the maximum is taken over all $\overline{c}$ satisfying conditions
(6.2).
Next, performing the same interval-exchange manipulations with the intervals
$I_{n,k}$ and $I_{n,k+1}$ as in the proof of Theorem 2 and using Lemma 2, we
obtain a function
$\widehat{f}=m+(M-m)\chi_{\widehat{I}}$, where $\widehat{I}=[-\pi+2\pi
k_{1}/n,-\pi+2\pi k_{2}/n]$ with appropriate $k_{1}$ and $k_{2}$, such that
the following holds:
$\displaystyle u_{f_{\overline{c}^{*}}}(x_{0})$ $\displaystyle-$
$\displaystyle
u_{f_{\overline{c}^{*}}}(-\pi)<u_{\widehat{f}}(x_{0})-u_{\widehat{f}}(-\pi)\leq$
$\displaystyle
m(\eta_{\alpha}(x_{0})-\eta_{\alpha}(-\pi))+(M-m)\tau_{\alpha}(x_{0},l_{1})+O(1/n).$
(6.13)
Since $\varepsilon>0$ in (6.11) can be taken arbitrarily small and the integer
$n$ in (6.13) can be taken arbitrarily large, combining (6.11), (6.12) and
(6.13), we obtain the inequality in (1.9).
The proof of the uniqueness statement of Theorem 3 is almost identical to
the uniqueness proof of Theorem 2. Namely, given $f\in\mathcal{F}(m,M,s)$, we
use our “two density points argument” to construct a function
$\widetilde{f}=f-\varepsilon\chi_{E-(x_{1}-x_{2})}+\varepsilon\chi_{E}\in\mathcal{F}(m,M,s)$,
with $\varepsilon>0$ small enough, such that
$u_{f}(x_{0})-u_{f}(-\pi)<u_{\widetilde{f}}(x_{0})-u_{\widetilde{f}}(-\pi)$.
Since $\widetilde{f}\in\mathcal{F}(m,M,s)$, the inequality in (1.9) holds true
for $u_{\widetilde{f}}$. Therefore, for the function $u_{f}$, (1.9) holds with
the sign of strict inequality. $\Box$
Proof of Theorem 1. It follows from Lemma 5 that, for given $f\in\mathcal{F}(m,M,s)$
and $\varepsilon>0$ arbitrarily small, there exists a piece-wise constant
function
$f_{n,\overline{c}^{\prime}}=m+\sum_{k=1}^{n}c_{n,k}^{\prime}\chi_{I_{n,k}}\in\mathcal{F}(m,M,s_{1})$,
where $\overline{c}^{\prime}=(c^{\prime}_{n,1},\ldots,c^{\prime}_{n,n})$, such
that $0\leq c_{n,k}^{\prime}\leq M-m$, $|s_{1}-s|<\varepsilon$ and
$|{\rm osc}(u_{f_{n,\overline{c}^{\prime}}})-{\rm osc}(u_{f})|<\varepsilon.$
(6.14)
Here, we assume once more that the integer $n$ is chosen as large as we need
for our proof.
Then, using the continuity Lemma 6, we find
$f_{\overline{c}^{*}}=m+\sum_{k=1}^{n}c_{n,k}^{*}\chi_{I_{n,k}}\in\mathcal{F}(m,M,s_{1})$
such that
$\max\\{{\rm osc}(u_{f_{\overline{c}}})\\}\leq{\rm
osc}(u_{f_{\overline{c}^{*}}}),$ (6.15)
where the maximum is taken over all $\overline{c}$ satisfying conditions
(6.2). Since $u_{f_{\overline{c}^{*}}}$ is concave on $[-\pi,\pi]$, we may
assume without loss of generality that ${\rm
osc}(u_{f_{\overline{c}^{*}}})=u_{f_{\overline{c}^{*}}}(x_{0})-u_{f_{\overline{c}^{*}}}(-\pi)$
for some $x_{0}\in(-\pi,\pi]$. If this is not the case, we replace
$f_{\overline{c}^{*}}(x)$ with $f_{\overline{c}^{*}}(-x)$.
Next, we consider the temperature gap
$E(a)=u_{\chi_{I(a,l_{n})}}(x_{0})-u_{\chi_{I(a,l_{n})}}(-\pi)$ for the
interval $I(a,l_{n})$ centered at $a$ with half length $l_{n}=\pi/n$. As we
have shown in Section 3, there is a unique point $a^{*}=a_{e}$ with $a_{e}$
given by equation (3.4), where $E(a)$ achieves its maximum $\tau_{\alpha}$
given by equation (3.5).
If the centers of the intervals $I_{n,k}$ and $I_{n,k+1}$ both lie in the
interval $[-\pi,a^{*}]$ or both lie in the interval $[a^{*},\pi]$ then arguing
as in the proof of Theorem 2 and using the monotonicity properties of Lemma 2,
we conclude that if $f_{\overline{c}^{*}}$ is maximal in the sense of equation
(6.15), then either $c_{k}^{*}=0$ or $c_{k+1}^{*}=M-m$.
The latter implies that there exists an interval $\widehat{I}=[-\pi+2\pi
k_{1}/n,-\pi+2\pi k_{2}/n]$ with half length $\widehat{l}=\pi(k_{2}-k_{1})/n$
such that $l_{1}<\widehat{l}<l_{1}+\pi/n$, where $l_{1}=\pi(s_{1}-m)/(M-m)$,
and such that for $\widehat{f}=m+(M-m)\chi_{\widehat{I}}$ we have
$u_{f_{\overline{c}^{*}}}(x_{0})-u_{f_{\overline{c}^{*}}}(-\pi)\leq
u_{\widehat{f}}(x_{0})-u_{\widehat{f}}(-\pi)\leq{\rm osc}(u_{\widehat{f}}).$
(6.16)
Now, it follows from equations (4.11), (4.12), (4.13) of Lemma 3 that
${\rm
osc}(u_{\widehat{f}})\leq(M-m)\Theta_{\alpha}(\widehat{l},\delta)=(M-m)\Theta_{\alpha}(l,\delta)+\beta(\varepsilon,n),$
(6.17)
where $\delta=m/(M-m)$ and $\beta(\varepsilon,n)\to 0$ when $\varepsilon\to 0$
and $n\to\infty$.
Finally, combining equations (6.14)–(6.17), we obtain the inequality in
(1.7).
To prove the uniqueness statement of Theorem 1, we use once more the “two
density points argument” as in the proofs of Theorems 2 and 3, to construct a
function
$\widetilde{f}=f-\varepsilon\chi_{E-(x_{1}-x_{2})}+\varepsilon\chi_{E}\in\mathcal{F}(m,M,s)$,
with $\varepsilon>0$ small enough, such that ${\rm osc}(u_{f})<{\rm
osc}(u_{\widetilde{f}})$. Since $\widetilde{f}\in\mathcal{F}(m,M,s)$, the
inequality in (1.7) holds true for $u_{\widetilde{f}}$. Therefore, for the
function $u_{f}$, (1.7) holds with the sign of strict inequality.
$\Box$
## 7\. Temperature gap in higher dimensions
One can consider a variety of higher dimensional analogs of Problem 1. Here we
present two such problems on the temperature gap in pipes $P_{L}$, which are
cylindrical domains
$P_{L}=\\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}:\,x_{1}^{2}+x_{2}^{2}<1,\,|x_{3}|<L\\}$,
$L>0$. Let $S_{L}$ denote the cylindrical boundary of $P_{L}$ and let
$D_{L}\ni(0,0,L)$ and $D_{-L}\ni(0,0,-L)$ denote the boundary disks of
$P_{L}$. By $\frac{\partial}{\partial{\rm n}}$ we denote the outward normal
derivative on $\partial P_{L}$ (for the boundary points where it is defined).
###### Problem 2.
Let $E$ be a compact subset of $P_{L}$ of given volume $V$, $0<V<2\pi L$, and
let $\alpha>0$. Suppose that $u_{E}$ is a bounded solution to the Poisson
equation
$-\Delta u=\chi_{E}\;\;\;\;\hbox{in}\;\;\;P_{L},$
with mixed Neumann-Robin boundary conditions
$\frac{\partial u}{\partial{\rm n}}=0\;\;\;\;\hbox{on}\;\;\;S_{L},$
$\frac{\partial u}{\partial{\rm n}}+\alpha u=0\;\;\;\;\hbox{on}\;\;\;D_{\pm
L}.$
Find $\max{\rm osc}(u_{E})$ over all open sets $E\subset P_{L}$ of volume $V$
and identify sets $E^{*}$ providing this maximum.
###### Problem 3.
Let $\Omega$ be a compact subset of $S_{L}$ of given area $A$, $0<A<4\pi L$
and let $\alpha>0$. Suppose that $v_{\Omega}$ is a bounded solution to the
Laplace equation
$\Delta v=0\;\;\;\;\hbox{in}\;\;\;P_{L},$
with mixed Dirichlet-Robin boundary conditions
$v=\chi_{\Omega}\;\;\;\;\hbox{on}\;\;\;S_{L},$ $\frac{\partial v}{\partial{\rm
n}}+\alpha v=0\;\;\;\;\hbox{on}\;\;\;D_{\pm L}.$
Find $\max{\rm osc}(v_{\Omega})$ over all open sets $\Omega\subset S_{L}$ of
area $A$ and identify sets $\Omega^{*}$ providing this maximum.
We assume that solutions of Problems 2 and 3 exist and are regular; for the
existence and regularity of elliptic problems with Robin boundary conditions,
we refer to [6] and references therein. Problems 2 and 3 look challenging.
Similar problems to find $\max u_{E}$ or $\max v_{\Omega}$ on $P_{L}$ instead
of ${\rm osc}(u_{E})$ in Problem 2 or ${\rm osc}(v_{\Omega})$ in Problem 3 can
be more tractable. We expect that the symmetric configurations will provide
the corresponding maxima and therefore the symmetrization methods due to
Talenti and Baernstein can be applied to solve these problems.
One can also consider analogs of Problems 2 and 3 in cylindrical domains in
$\mathbb{R}^{n}$ of any dimension $n\geq 2$.
Acknowledgments. We thank Dr. G. Sakellaris and Dr. K. Yamazaki for helpful
discussions.
## References
* [1] S. Abramovich, _Monotonicity of eigenvalues under symmetrization_. SIAM J. Appl. Math., 28 (1975), no. 2, 350–361.
* [2] A. Baernstein II, _Symmetrization in analysis._ With David Drasin and Richard S. Laugesen. With a foreword by Walter Hayman. New Mathematical Monographs, 36. Cambridge University Press, Cambridge, 2019.
* [3] D. Betsakos and A. Yu. Solynin, _Heating long pipes._ Anal. Math. Phys. 11 (2021), no. 1, Paper No. 40, 35 pp.
* [4] W.J. Kaczor and M.T. Nowak, _Problems in Mathematical Analysis. II. Continuity and differentiation_. Translated from the 1998 Polish original, revised and augmented by the authors. Student Mathematical Library, 12. American Mathematical Society, Providence, RI, 2001
* [5] J. J. Langford and P. McDonald, _Extremizing temperature functions of rods with Robin boundary conditions_. Ann. Fenn. Math. 47 (2022), no. 2, 759-775.
* [6] R. Nittka, _Regularity of solutions of linear second order elliptic and parabolic boundary value problems on Lipschitz domains_. J. Differential Equations 251 (2011), no. 4–5, 860–880.
* [7] G. Pólya and G. Szegö, _Isoperimetric Inequalities in Mathematical Physics_. Princeton University Press, Princeton, N.J., 1951.
* [8] W. Rudin, _Real and Complex Analysis_. Third edition. McGraw-Hill Book Co., New York, 1987.
* [9] G. Talenti, _The art of rearranging_. Milan J. Math., 84:1, 2016, 105–157.
An Empirical Study on Snapshot DAOs
Qin Wang^1, Guangsheng Yu^1, Yilin Sai^1, Caijun Sun^2,
Lam Duc Nguyen^1, Sherry Xu^1, Shiping Chen^1
$^1$CSIRO Data61, Australia
$^2$Zhejiang Lab, China
A Decentralized Autonomous Organization (DAO) is an organization governed by automatically executed rules, such as smart contracts, featuring a permissionless committee, transparent proposals, and fair contributions by stakeholders. As of May 2023, DAOs have accumulated a market capitalization of over $24.3B. However, no substantial studies have focused on this emerging field. To fill the gap, we start from the ground truth by empirically studying the breadth and depth of the DAO markets in mainstream public chain ecosystems. We dive into the most widely adopted DAO launchpad, Snapshot, which covers 95% of in-the-wild DAO projects, for data collection and analysis. By integrating extensively enrolled DAOs and corresponding data measurements, we explore statistical resources from Snapshot and analyze data from 581 DAO projects, encompassing 16,246 proposals over the course of 5 years. Our empirical research uncovers a multitude of previously unknown facts about DAOs, spanning their status, features, performance, threats, and avenues for improvement. We distill these findings into a series of key insights and takeaway messages, emphasizing their significance. Notably, our study is the first of its kind to comprehensively examine the DAO ecosystem.
§ INTRODUCTION
Decentralized autonomous organizations (DAOs) have emerged with the rapid development of cryptocurrency and blockchain. A DAO is an entity that is collaboratively managed by on-chain participants to deploy resources, release proposals, and make decisions. Using DAOs for governance decentralizes operations by making on-chain rules and activities transparent and traceable. Every stakeholder is eligible to propose, vote, and enact changes to DAO proposals. As of May 2023, a total of 12,824 organizations have been created. The funding invested in these DAOs (a.k.a. their treasury) has reached up to $24.3B, while membership has grown 531x, from 13K to 6.9M, over the last 6 years[Data source: <https://deepdao.io/organizations>. [May 2023]]. Among them, 4.5M participants are active voters or proposal makers. DAOs have accordingly become a force to be reckoned with in the Web3 space [1] and in new cryptocurrency markets.
The open problem. Although DAOs have gained traction in recent years, their structural development is still in a nascent stage. One of the primary challenges is that DAO projects are diverse, with varying objectives and functionalities. Some DAOs begin with a clear purpose, such as Uniswap and Bancor, and remain focused on serving a specific community or set of users. In contrast, other DAOs diverge from their initial objectives: a DAO may begin with a simple goal like collecting NFTs and then morph into a community to attract participants (e.g., PleasrDAO), a trading platform for NFTs (Opensea), or an incubator to invest in artists (BAYC). This variety of forms and outcomes can be confusing for newcomers and experts alike in the blockchain space. Creating a comprehensive, structured view of DAOs thus remains a significant challenge that requires further development and refinement.
Recent studies have attempted to examine DAOs (cf. Table <ref>). Several studies (Line 1) provide an overview of existing literature; however, due to publication delay, such academic works may lack current and persuasive examples of DAO projects, leading to outdated information (i.e., not in-time). Other works (Line 2) propose high-level frameworks to discuss DAO properties, but their metrics are abstracted a priori, before observing real events, making them impractical (i.e., not applicable). Several studies (Line 3) capture features from real DAO projects through empirical research, but their sample pools are often limited (i.e., not investigated at large scale). All of these efforts fall short of providing a comprehensive and up-to-date understanding of DAOs.
Table: This work vs. DAO studies

  Examples        Method             Target        In-time   Applicable   Scale
  [2][3][4][5]    Literature review  Publications  Not very  n/a          <30
  [6][7]          Framework          Properties    n/a       A priori     n/a
  [8][9][10]      Empirical study    Projects      In-time   Practical    <22
  This work       Empirical study    Launchpad     In-time   Practical    >500
Our attempts. To address the aforementioned shortcomings, we devised a unique approach for our study. After thorough research, we found that existing DAO launchpads and DataFeeds have amassed a wealth of information on numerous DAO projects, including both long-standing DAOs that have ceased operations and newly-launched ones that appeared within the last month. To avoid duplicating the work of others, we set aside DataFeeds and launchpads that have already presented analyzed data. Instead, we focus on a less-studied launchpad, namely Snapshot [11], which has compiled a significant number of DAOs but has yet to be analyzed extensively.
Snapshot emerged to overcome the high costs of on-chain operations caused by consensus complexity and frequent voter interactions. It accordingly introduced an off-chain voting tool that enables practitioners to efficiently access popular DAOs for voting, management, auditing, and research. Snapshot serves as a launchpad that captures over 95% of in-the-wild DAO projects (over 11,000 spaces) and offers open access for creating new DAOs compatible with mainstream blockchain platforms such as Ethereum [12], Avalanche [13], Binance Smart Chain (BSC) [14], Polygon [15], Solana [16], and more. The ample and reliable data collected by Snapshot motivates us to conduct the following in-depth and comprehensive research on DAOs.
Contributions. In this paper, we dive into the DAO projects created and managed on Snapshot. We develop the research by gradually approaching the basic DAO concept, its operating mechanism, and the relevant techniques, and by analyzing statistical data collected from Snapshot. Our work is the first study to systematically explore the features of DAOs, providing timely guidance for readers. Specifically, we detail our contributions here.
▷ A structural investigation of DAOs (Sec.<ref>). Aiming at a complete study of DAOs, we clear the fog surrounding this fuzzy term by presenting its underlying structures (e.g., components, supporting standards), core mechanisms, and outstanding instances. Based on extensive investigation, we decouple DAO constructions (e.g., decentralized identifiers, utility tokens, smart contracts, e-voting) and extract a series of metrics that reflect the features of DAO designs (details in Sec.<ref>). As tokenization plays an essential role in DAO governance (discussed in Sec.<ref>), we also survey the token standards essential to DAO incentives. Further, we provide a short list of tools (cf. Table <ref>) that support DAO operations.
▷ A comprehensive exploration of the Snapshot launchpad (Sec.<ref>). We study one mainstream DAO governance tool, Snapshot, by summarizing the features of the involved entities, its running mechanisms, typical operations, and voting strategies. As of Nov. 2022[A notable distinction between two dates: Nov. 2022 marks the conclusion of the experiments, while May 2023 represents the nearing completion of writing the paper.], Snapshot has registered 11K+ spaces (projects), covering 95% of in-the-wild DAOs. However, many of them are inactive, with very few members or proposals. We ignore such projects and focus on the influential ones: we collect the 581 most prevalent DAO projects, containing a total of 16,246 proposals over the past 5 years. In particular, we dive into each project and scrutinize its proposals' basic information, voting strategy, content, and voting results.
▷ A solid analysis of data collected from Snapshot (Sec.<ref>). Based on extensive investigation and exploration, we structure our experimental results around four aspects that separately interpret project scale, supporting infrastructure, dependent e-voting schemes, and operational tokens (cf. Table <ref>). We evaluate each aspect via multiple sub-aspects: DAO membership (e.g., number of participants), basic project information (project duration, language usage, storage condition, underlying platform), and the voting process (voting patterns, results, distribution, variances, token usage, meaningful contexts). With substantive evidence, we conclude seven take-home messages (Insights 202-208, in grey banners and at the tail of the last page) as high-level summaries for interested readers.
▷ A series of analyses and discussions toward building better DAOs (Sec.<ref>&<ref>). The DAO field lacks rigorous studies that deliver effective educational guidance. We therefore provide discussions based on our empirical findings, along three dimensions: (i) surrounding existing projects, we study the compatible tools used by DAOs (e.g., on-&off-chain voting, coordination tools) that can maximally extend their scope of applicability and usage; (ii) diving into DAO constructions, we point out several unavoidable drawbacks (e.g., centralization, high cost) that may hinder DAO progress and development, noting that chasing an optimal trade-off across distributed projects should be aligned with concrete requirements; (iii) excavating historical failures and the realities of today's DAOs (e.g., contract reliance), we provide several promising directions for future improvement that fit the four aspects identified in our results (e.g., multi-DAO collaboration, usage of subDAOs).
[Figure: Overview of this work]
Key Takeaways. The rapid growth and widespread adoption of DAOs have brought about significant changes in how organizations are structured and governed. On the positive side, we observe that DAO participation and usage are distributed, application and proposal topics are diversified, and execution and decision-making are automated, as confirmed by our empirical observations. On the flip side, DAO development faces inevitable challenges, including centralization, high costs, unsustainable tokenization mechanisms, and immature supporting technologies (refer to Sec.<ref> for more information). All such issues are vital and require attention. To create better DAOs, we need to examine these issues at every level of the DAO and strive for a healthy approach to distributed governance. This involves developing new tokenization mechanisms that incentivize long-term participation, finding fairer governance structures, and exploring alternative blockchain technologies that address security concerns.
§ APPROACHING DAO
In this section, we present a systematic overview of Ethereum DAOs. We achieve this by breaking down the integrated components, identifying key features, reviewing the leading DAOs, and discussing the underlying EIP standards and surrounding tools.
§.§ DAO Components
DAOs consist of various components to facilitate decentralized governance, decision-making, and management.
Smart Contract.
A smart contract is a piece of code that runs securely and simultaneously on blockchain nodes in a decentralized manner. Thought of as a black box, both its input and output are guaranteed to be synchronized upon reaching consensus, without assistance from trusted third parties. Smart contracts are well suited to autonomous organization: they self-execute once a defined condition is triggered by traceable and immutable transactions. This enables real-time auditing and verification (e.g., [17]), significantly enhancing machine-execution security [18]. In the context of DAOs, smart contracts are often deployed to create multi-sig wallets for secure asset reservation and to set voting strategies for fair governance.
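The multi-sig pattern mentioned above can be sketched independently of any particular chain. The following is a minimal Python model of the threshold-approval logic (class and names are hypothetical illustrations, not a real contract):

```python
class MultiSigWallet:
    """Toy model of a multi-sig treasury: a transaction executes only
    once a threshold of designated owners has approved it."""

    def __init__(self, owners, threshold):
        assert 0 < threshold <= len(owners)
        self.owners = set(owners)
        self.threshold = threshold
        self.approvals = {}  # tx_id -> set of approving owners

    def approve(self, tx_id, owner):
        if owner not in self.owners:
            raise PermissionError("not an owner")
        # a set makes repeated approvals by the same owner count once
        self.approvals.setdefault(tx_id, set()).add(owner)

    def can_execute(self, tx_id):
        return len(self.approvals.get(tx_id, set())) >= self.threshold

# 2-of-3 wallet: a payout needs approvals from two distinct owners
wallet = MultiSigWallet(["alice", "bob", "carol"], threshold=2)
wallet.approve("tx1", "alice")
assert not wallet.can_execute("tx1")
wallet.approve("tx1", "bob")
assert wallet.can_execute("tx1")
```

On-chain implementations (e.g., Gnosis Safe, mentioned later) add signature verification and replay protection on top of this basic threshold check.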
On-chain Identifier.
Traditional identifiers that rely on third parties are replaced with decentralized identifiers (DIDs) [19], which are not issued, managed, or controlled by any central entity. DIDs are instead managed by individuals, with blockchains atop a peer-to-peer (P2P) network as the preferred data storage platform. By using Public Key Infrastructure (PKI) technology to generate an asymmetric key pair stored on the blockchain, DIDs achieve globally unique, secure, and cryptographically verifiable authentication services. Typical implementations of DIDs include Ethereum addresses and the Ethereum Name Service (ENS) [20].
Off-chain Snapshot.
A snapshot is a technique for recording the in-time status of data at a specific block height. It is important for DAO governance, where all historical voting results are recorded as evidence, which is necessary for both on- and off-chain governance. Off-chain signatures are often used together with the weights of on-chain tokens during voting. To achieve smooth collaboration, a snapshot of on-chain balances and addresses is captured to determine voting rights, and community members then vote on DAO proposals under those weights. In this way, on-chain transaction fees are largely avoided. Notably, the Snapshot platform researched in this paper takes its name from exactly this technical term.
Stake/Governance Token.
Self-controlled, portable DIDs offer tamper-proof, cryptographically secure attestations for on-chain decentralized identity. By making each attestation's provenance and validity securely provable while easing the validation process, DIDs are suitable for implementing wallet services in which stakes and utility tokens can be securely stored. Stakes refer to tokens a holder deposits in the system: the more stake a holder provides, the greater their role in operating consensus procedures (e.g., Proof-of-Stake). In contrast, utility tokens are designed for a specific purpose, especially within a DApp or a game, offering users benefits such as access to products and services. Staked tokens and utility tokens are in most cases separate: the former ensure the normal operation of the system, while the latter are used for governance [21] and votes in the context of DAOs; communities therefore sometimes use the name governance token interchangeably. Besides, staked tokens can further be used to establish an on-chain reputation, which primarily assigns value to individuals who frequently participate in DAOs.
Reputation Mechanism. Reputation is a crucial element in maintaining trust and promoting collaboration within DAOs. It serves as a measure of a member's contributions, determining their level of influence. Members can earn a reputation by actively participating in governance decisions, providing liquidity to a protocol, or contributing to ongoing projects. The more a member contributes to a DAO, the higher their reputation will be. This reputation can be leveraged in various ways, such as determining voting power in governance decisions, allocating rewards from the organization's treasury, or granting access to certain resources and privileges. Typically, reputation is quantified through the number of governance tokens held by a member.
Secure e-Voting Scheme.
Although traditional e-voting systems have been growing, they remain susceptible to manipulation. One of the most critical problems is vulnerability to the Sybil attack [22], where malicious users create false identities to vote. In the DAO space, using DIDs and requiring on-chain attestation can improve the integrity of the e-voting process. Staked and utility tokens, bound to DIDs, are also commonly used in e-voting to represent voting influence. Based on our investigation, existing DAO voting schemes rely on relatively simple mechanisms such as basic voting, single-choice voting, and ranked-choice voting (cf. Fig.<ref>), rather than complicated cryptographic e-voting systems [23].
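Of the simple mechanisms named above, ranked-choice voting is the least obvious to tally. The following sketch implements one common realization, instant-runoff counting; this is an illustrative assumption, and the exact variant a given DAO platform uses may differ:

```python
from collections import Counter

def ranked_choice_winner(ballots):
    """Instant-runoff count: repeatedly eliminate the candidate with
    the fewest first-choice votes until someone holds a strict
    majority.  Each ballot is a preference-ordered list of candidates."""
    ballots = [list(b) for b in ballots if b]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        top, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return top
        loser = min(tally, key=tally.get)  # ties broken arbitrarily
        # drop the loser; each ballot falls through to its next choice
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
winner = ranked_choice_winner(ballots)  # "C" wins once B is eliminated
```

Note that token-weighted variants would multiply each ballot by the voter's token balance at the snapshot block rather than counting it once.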
§.§ DAO Features
We examine DAOs from four key perspectives: operational mechanism (underlying foundations/dependencies), functional features (processing phases), non-functional features (advanced properties), and market performance (real-world impact). We additionally investigate 30+ projects, summarized in Table <ref>.
Operational Mechanism.
① Network refers to the underlying blockchain platform on which the DAO operates. Since DAOs rely on self-executing contracts whose terms are written directly into code, the network plays a crucial role in determining the functionality and shape of these smart contracts. ② Protocol/Field describes the specific usage or application of the organization, ranging from finance and governance to art and social impact. ③ Governance token represents the voting power in DAO governance; stakeholders can thereby vote on proposals to make decisions and allocate resources.
Functional Features. Based on the DAO projects on Snapshot, we conclude that a typical DAO lifecycle comprises the phases create, propose, vote, and action. Specifically, ① Create involves setting up the initial configuration of the DAO, covering not only the DAO space and related information but also personal identifiers (e.g., ENS, DID). ② Propose focuses on drafting, editing, and releasing proposals; specific requirements apply to the proposer, such as holding enough stake. ③ Vote calls for feedback and preferences from community participants: stakeholders can vote on multiple options, mostly for or against, according to their interest, and different voting strategies adjust the power of a voter. ④ Action executes the decisions once an agreement is reached. Although this phase is critical, it cannot be effectively measured, so we omit it from our table.
Non-functional Features. ① Permissionless is a key factor in measuring decentralized governance due to its dynamic joining/leaving mechanisms; token holders participate in decentralized decision-making by voting on preferred proposals and influencing the organization's direction. ② Transparency/Immutability means that all transactions and decisions within a DAO are transparent and cannot be tampered with, fostering trust among members and stakeholders. ③ Anti-censorship refers to the ability to prevent stakeholders from censoring the flow of transactions (e.g., OFAC compliance). ④ Interoperability allows a DAO to interact and exchange data with other DAOs, enabling seamless integration and collaboration across ecosystems. ⑤ Token-based incentives align the interests of stakeholders and encourage active participation; members can earn tokens by contributing to the organization or stake them to support projects, resulting in a more engaged community.
Market Performance. Market performance can be evaluated via quantitative metrics along multiple dimensions. ① Treasury refers to a pool of funds collectively owned and controlled by the members of the organization. Similar to total value locked (TVL) in DeFi, the treasury is typically built up over time through member contributions or profits generated from the organization's activities. ② Holders represents participants who own governance tokens and are therefore eligible to vote on proposals that shape the organization's direction; it provides insight into the level of participation and engagement within the DAO. ③ Proposals are the documents that outline a suggested course of action for the organization; they can be put forward by any member of the DAO. ④ Votes counts the total number of votes (equiv. decisions) cast by stakeholders.
A Collection of Leading DAOs (upper Table <ref>). Based on the aforementioned metrics, we investigate a group of 30+ DAO projects that are currently active and operating in real-world environments. These selected projects are highly influential within their respective communities, as evidenced by their market performance. It is worth noting that most of the selected DAOs operate on the Ethereum blockchain (also supported by Fig.<ref>) and are classified as belonging to the DeFi track (by Fig.<ref>). Additionally, all DAOs are expected to have certain essential properties such as permissionless access and transparency, while some possess additional qualities.
§.§ Supporting Standards
Recall that the standards referred to in this paper are technical documents dedicated to on-chain programming. Using these standards establishes conventions during programming without reinventing the wheel, making it easier and more efficient for applications and contracts to interact with each other. Here, we list the standards relevant to DAO scenarios.
EIP-20/BEP-20. Common interfaces for fungible tokens (FT) [24]. Running e-voting normally requires stakes and utility tokens, which typically implement ERC-20 on Ethereum, BEP-20 on BSC, or similar standards on other blockchain platforms.
EIP-721/BEP-721/EIP-1155. Common interfaces for non-fungible tokens (NFT) [25] and multi-tokens [26]. Stakes and governance tokens can also take the form of NFTs [27], serving as voting power in e-voting or as a deposit for users to participate in DAO campaigns. BEP denotes the corresponding standards on BSC.
EIP-4824. Common Interfaces for DAOs [28]. This standard aims to establish conventions on matching on- and off-chain representations of membership and proposals for DAOs by using daoURI, an indicative reference inspired by ERC-721, which enhances DAO search, discoverability, legibility, and proposal simulation.
EIP-1202. Common interfaces for the voting process [29]. The standard implements a range of voting functions (e.g., $\mathsf{VoteCast}$, $\mathsf{castVote}$, $\mathsf{MultiVote}$) and informative functions (e.g., voting period, eligibility criteria, weight) to enable on-chain voting as well as to view vote results and set voting status.
ERC-779. The DAO Fork specification [30]. Unlike other hard forks that altered the Ethereum protocol, the DAO Fork was executed solely by altering the state of the DAO smart contract; transaction format, block structure, and protocol were unchanged. It was an "irregular state change" that transferred ether balances from the child DAO contracts into a specified account, the "WithdrawDAO" contract.
§.§ Surrounding Tools
Additionally, a wide variety of tools have been proposed to ease the process of joining, launching, and managing a DAO. We list several at the bottom of Table <ref>. Besides the launchpads that manage DAOs, a host of providers offer services and infrastructure [31] such as token services (e.g., MakerDAO for maintaining the DAI stablecoin), on-&off-chain voting tools (Tally, Snapshot), treasury oversight (TokenTerminal, Zapper), growth products, risk management (Gnosis), task collaboration (Mirror, Colony), community platforms (MolochDAO, Metagovernance), analytic tools (Dune, RootData), operational tools (Aragon, DAOstack), wallet services (Gnosis Safe), and legal services (LegalDAO).
§ DIVING INTO SNAPSHOT
Snapshot is an off-chain voting system designed for DAOs created on multiple blockchain platforms. It has been widely adopted by crypto startups and companies to survey their users. Each project can create proposals on which users vote using staked or governance tokens. Voting is essentially feeless, as it is executed off-chain, avoiding costly on-chain verification; users only need to connect their wallet to the launchpad and allow the signing action. Projects, their voting proposals, and the corresponding results are stored on the IPFS decentralized storage system [32]. Snapshot thereby becomes a convenient tool for DAO creators to gather feedback from communities and audiences. Here, we describe the actions of each party in detail.
* DAO creator. DAO creators are companies or projects that aim to use Snapshot. The creator needs to hold a valid ENS domain and register the project on the Snapshot launchpad by creating a profile with detailed information such as project name, description, website, symbol, service, network (equiv. blockchain platform), and contacts like Twitter, Github, and CoinGecko.
* Poll proposer. A proposer can create proposals for a specific project if they hold a sufficient amount of the relevant governance tokens. In many cases, poll proposers are members of the DAO-creating team, as they have enough staked tokens and motivation to improve the protocol.
* Users. Users can vote on each proposal based on their preferences. All participants are required to have valid accounts with staked tokens, such as an Ethereum address or a short name registered on ENS. Users can add a record to their accounts to make their votes viewable at the connected addresses.
Running Mechanism. Snapshot is rooted in the snapshot technique: recording the in-time token-holding status of all on-chain accounts and wallets at a specific block height. It acts like a camera, photographing the entire picture at one moment, so that a stakeholder can learn who holds the token, how many tokens they have, and so on. Owing to its transparency and traceability, the technique has been applied to many crypto events, such as airdrops for incentive distribution and compensation for users after hacks or attacks. The Snapshot project leverages this technology to solve problems in voting processes: it intercepts the historical data at a certain block height along with the associated holdings (e.g., accounts, tokens, NFTs) of a given token. Based on these data, voting weights can be reasonably assigned to individual community members according to different rules.
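The mapping from a balance snapshot to voting weights described above can be sketched as follows. The balance history, addresses, and function names are all hypothetical illustrations of the mechanism, not Snapshot's actual implementation:

```python
# Hypothetical balance history: address -> [(block_height, new_balance), ...]
BALANCE_HISTORY = {
    "0xaaa": [(100, 50), (180, 120)],
    "0xbbb": [(100, 500)],
    "0xccc": [(150, 10), (190, 0)],
}

def balance_at(address, snapshot_block):
    """Balance as of the snapshot: the latest recorded balance whose
    block height does not exceed the snapshot height."""
    balance = 0
    for height, value in BALANCE_HISTORY.get(address, []):
        if height <= snapshot_block:
            balance = value
    return balance

def voting_weights(addresses, snapshot_block):
    """Each address's weight equals its token balance at the snapshot;
    transfers made after that block cannot change an open vote."""
    return {a: balance_at(a, snapshot_block) for a in addresses}

weights = voting_weights(["0xaaa", "0xbbb", "0xccc"], snapshot_block=160)
# 0xaaa counts as 50: its later balance of 120 was recorded at block 180
```

Freezing weights at a past block is what makes the scheme resistant to last-minute token purchases aimed at swinging a vote.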
Typical Operations. Based on different roles, we capture three main types of operations. Notably, operations on Snapshot are aligned with the DAO deployed on other launchpads.
* Creating spaces. If a project aims to introduce decentralized governance, it can create a Space in Snapshot where users propose proposals and perform voting. As discussed, a distributed identifier is required before the application; this identifier connects to the unique project profile being created. The community (equiv. $\mathsf{Space}$) is created once the basic information is filled in. Importantly, the creator also sets the community's distribution strategy (a.k.a. $\mathsf{Strategy}$), a JavaScript function used to adjust the weight of each voter's impact.
* Proposing proposals. To propose a proposal in a community, a member must first meet the guidelines set forth by the community manager. For instance, an eligible user in the ENS community must hold more than 10k ENS to create proposals. Once these requirements are met, the proposer fills in the proposal content, options, and start/end dates, and sets the voting rules.
* Poll/Vote. The voting process is open only to community users holding governance tokens. Every project has its own governance tokens, which users can even trade (buy/sell/exchange) on secondary markets. Voting is designed in a clean and simple style: connect a wallet, select options, and sign. Users can view their options, voting power, and snapshot time for each vote submission. All these data are obtained from the snapshot.
Voting Strategy. As the most essential part of voting-power distribution, different strategies provide a series of methods for calculating voting power. A strategy in Snapshot is essentially a JavaScript function. Users can combine up to 8 strategies on a single proposal, with voting power being cumulative, and can also write customized strategies according to their requirements. At the time of writing, Snapshot offers 350+ voting strategies, of which erc20-balance-of is the most adopted. We list the mainstream strategies here.
* Delegated voting. The voting power is based on a delegation strategy. Only permitted stakeholders have valid impacts on the voting process.
* Weighted voting. The voting power can be calculated either by a single-weight strategy (one-coin-one-vote) or a quadratic strategy. The quadratic strategy dampens the outsized influence of rich stakeholders, narrowing the gap between individuals.
* Whitelist voting. The permitted stakeholders who are on the whitelist are allowed to vote. The whitelist may either get updated manually or by certain rules.
* NFT voting. Voting with NFTs requires strategies compatible with ERC-721 or ERC-1155.
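The cumulative combination of strategies described above can be sketched in a few lines. Snapshot's real strategies are JavaScript functions; the Python below is an illustrative model, and the strategy names mirror, but do not reproduce, Snapshot's erc20-balance-of and quadratic strategies:

```python
import math

def erc20_balance_of(voter, balances):
    # one-coin-one-vote: weight equals the raw token balance
    return balances.get(voter, 0)

def quadratic(voter, balances):
    # square-root weighting dampens the influence of large holders
    return math.sqrt(balances.get(voter, 0))

def voting_power(voter, balances, strategies):
    """Snapshot lets a space combine several strategies (up to 8 per
    proposal); the resulting voting power is cumulative across them."""
    assert len(strategies) <= 8
    return sum(s(voter, balances) for s in strategies)

balances = {"whale": 10_000, "minnow": 100}
power_whale = voting_power("whale", balances, [erc20_balance_of, quadratic])
power_minnow = voting_power("minnow", balances, [erc20_balance_of, quadratic])
# Under the quadratic component alone, a 100x balance gap becomes only 10x
```

This makes concrete why the quadratic strategy "narrows the gap": the balance ratio of 100:1 contributes only 10:1 through the square-root term.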
§ EXPERIMENTS AND RESULTS
This section provides our experiments and corresponding results.
Table: Result guidance

  Aspect          Index            Description
  Project scale   Fig.<ref>        Number of registered members in each considered DAO project.
                  Fig.<ref>        Number of votes of each proposal among all considered DAO projects.
                  Fig.<ref>        Up-to-date number of DAO projects kicked off every month.
                  Fig.<ref>        Duration of each proposal among all considered DAO projects.
                  Fig.<ref>        Language distribution among all considered DAO projects.
  Infrastructure  Fig.<ref>        Fraction of different blockchain networks used to run each considered DAO project.
                  Fig.<ref>        Fraction of different IPFS addresses used for data storage of each proposal.
  e-Voting        Fig.<ref>        Fraction of different voting mechanisms used for e-voting of each proposal.
                  Fig.<ref>&<ref>  Voting patterns (number of candidates and variances of results) among all considered DAO projects.
                  Fig.<ref>        Number of votes of each proposal among all considered DAO projects.
                  Fig.<ref>        Clustering among all considered DAO projects.
  Token usage     Fig.<ref>        Fraction of the usage of prevalent DAO tokens and other self-issued tokens in Snapshot.
                  Fig.<ref>        Fraction of the usage between different prevalent DAO tokens.
                  Fig.<ref>        Fraction of the usage between different self-issued DAO tokens.
Measurement Establishment. Our experiment consists of three steps. First, we develop a crawling script, deployed on an AWS EC2 cloud server ($\mathsf{m6i.32xlarge}$, 128 vCPUs and 512 GiB memory), to capture all data from the Snapshot platform. The script collects all information presented on the Snapshot main page and the subpages created by DAO creators, including numerical values (such as voting results and participation scales) and context-aware strings (such as language usage and topic classification); all data are compiled into a final CSV document. Second, we analyze the data using Python and generate the corresponding visualizations; during this analysis, we sort, clean, and classify the metadata to obtain meaningful results. Finally, we present our findings in this section and provide the derived insights.
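The sort/clean/classify step in the second stage can be sketched as below. The field names and sample rows are hypothetical stand-ins for the crawled CSV, not the paper's actual schema:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rows as they might appear after crawling, one per proposal
records = [
    {"space": "ens.eth",  "votes": 1200, "network": "Ethereum", "lang": "en"},
    {"space": "ens.eth",  "votes": 30,   "network": "Ethereum", "lang": "en"},
    {"space": "cake.eth", "votes": 800,  "network": "BSC",      "lang": "zh"},
]

def per_project_stats(rows):
    """Group proposal rows by DAO space and derive simple per-project
    metrics (proposal count, mean votes), mirroring the cleaning step."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["space"]].append(row["votes"])
    return {space: {"proposals": len(votes), "mean_votes": mean(votes)}
            for space, votes in grouped.items()}

stats = per_project_stats(records)
# ens.eth: 2 proposals averaging 615 votes; cake.eth: 1 proposal, 800 votes
```

The same grouping pattern, applied per network or per language instead of per space, yields the distribution figures discussed in the following subsections.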
Overall Statistics. Our study is based on the analysis of 16,246 proposals from the 581 most prevalent DAO projects collected from Snapshot. Our data crawling covers a period of more than two years, from the establishment of Snapshot in August 2020 until November 2022 (the time of writing). We collect a wide range of key data fields such as project title, number of members, proposal title, status, network, IPFS address, voting strategy, project start/end date, block snapshots, result name, result outcome, proposal content, number of votes, etc. To present our statistical results clearly, we categorize them into four perspectives: project scale, infrastructure, e-voting schemes, and token usage. To guide readers through the result analyses, we provide a high-level overview in Table <ref>.
Data Use Disclaimer. All the data we crawl from Snapshot are openly released and free to use under CC0 licenses. We strive to maintain the accuracy of all data crawled from Snapshot and declare that the data will not be used for any commercial purposes.
§.§ Project Scale
[Figure: Project scale — (a) number of members in different DAOs; (b) DAO launching dates]
This subsection describes the scale of the considered DAO projects in terms of participating members, vote distributions over proposals, launch dates, active proposal durations, and language distribution.
Fig.<ref> shows that the very top DAO projects have reached six orders of magnitude (millions) in membership, with the two most popular projects, PancakeSwap and Aave, each exceeding 7M members. Note that the "Others" bar shows the average membership of all DAO projects from the 16th onward. This suggests the member distribution follows the Pareto principle [33]: the majority of DAO participants concentrate in a small share of DAO communities. Diving into individual proposals, Fig.<ref> shows that proposals with over 100 votes and those with fewer than 10 votes rank first and second in frequency, respectively. This again reflects the Pareto principle: a huge number of votes aggregate on a small portion of proposals, while a significant number of proposals are marginalized by the community.
Fig.<ref> shows that the concept of DAOs has been accepted and realized by a broader public since Q3 2020 (aligning with [31]). From then on, Web3 supporters kept drawing traffic to the DAO community, and a peak arose from Nov 2021 to Jan 2022 in the number of projects being kicked off (alongside the boom of DeFi and NFTs). The average monthly number of new projects after the peak is also much higher than before it, marking a milestone in the development of DAO communities. According to Fig.<ref>, most proposals run for less than a week, which is reasonable and matches the duration of many real-world election campaigns. Fig.<ref> shows that English is the most popular proposal language, accounting for 75.1% of proposals among all considered DAO projects in our collection. Chinese comes second at 4.3%, followed by German (2.9%), Korean (1.8%), Italian (1.5%), and French (1.4%); the remaining languages are grouped as "Others" at 12.9%. Language usage also indirectly reflects the nationality distribution of participating members.
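The Pareto claim above is straightforward to check numerically: compute the share of all members held by the top 20% of projects. The membership counts below are hypothetical, shaped like the distribution the paper reports (a few multi-million-member DAOs and a long tail):

```python
def top_share(member_counts, top_fraction=0.2):
    """Fraction of all members concentrated in the top `top_fraction`
    of projects, a quick check of the 80-20 pattern."""
    counts = sorted(member_counts, reverse=True)
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / sum(counts)

# Hypothetical membership: a few huge DAOs plus a long tail of tiny ones
members = [7_000_000, 6_500_000, 50_000, 20_000] + [1_000] * 16
share = top_share(members)
assert share > 0.8  # top 20% of projects hold well over 80% of members
```

A perfectly uniform distribution would instead yield a share equal to `top_fraction`, which is the baseline against which the 80-20 skew is judged.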
Insight-202: The DAO community has developed tremendously across many countries and regions, and usage has remained high. However, the member distribution follows the Pareto principle (the 80-20 rule [34]), which warrants attention to avoid unintended centralization.
§.§ Infrastructure
This section describes the IPFS network and blockchain platform infrastructure used by the considered DAO projects.
Fig.<ref> shows that Ethereum Mainnet [12] is the most popular blockchain platform among the considered DAO projects in our collection, accounting for 65.4%. Binance Smart Chain Mainnet comes second with 14.3%, followed by Polygon Mainnet (8.9%), Fantom Opera (3.3%), Arbitrum One (1.4%), and Gnosis Chain (1.4%). All other platforms are categorized as “Others”, accounting for 5.2% of the projects. As for decentralized IPFS data storage, illustrated in Fig.<ref>, the “#bafkrei” prefix is used most often, accounting for 23.7% of all proposals. This implies that 76.3% of DAO proposals (those starting with “#Qm”) still use tools like nmkr.io or other minting platforms that rely on the outdated version of the content identifier (CIDv0 [35], Base58), which is more expensive and less effective for IPFS data storage. This suggests a lack of motivation among DAO developers to upgrade their infrastructure, which could degrade the capacity and efficiency of data storage for DAO communities if IPFS were eventually to support only CIDv1 [35].
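The CIDv0/CIDv1 split above can be detected directly from the identifier prefix: CIDv0 identifiers are Base58-encoded and always begin with “Qm”, while base32-encoded CIDv1 identifiers begin with “b” (commonly “baf…”). A minimal sketch of this check follows; the two CIDs below are hypothetical placeholders, not real IPFS hashes.

```python
def cid_version(cid: str) -> int:
    """Classify an IPFS content identifier by its multibase prefix."""
    if cid.startswith("Qm"):
        return 0  # legacy CIDv0 (Base58)
    if cid.startswith("b"):
        return 1  # CIDv1 in base32 multibase
    raise ValueError(f"unrecognized CID prefix: {cid!r}")

# Hypothetical example CIDs, for illustration only.
proposals = [
    "QmExampleExampleExampleExampleExampleExample",    # CIDv0-style
    "bafkreiexampleexampleexampleexampleexampleexam",  # CIDv1-style
]
share_v1 = sum(cid_version(c) == 1 for c in proposals) / len(proposals)
print(f"fraction of proposals on CIDv1: {share_v1:.1%}")  # prints "fraction of proposals on CIDv1: 50.0%"
```

Applied over the crawled proposal set, this prefix test is how the 23.7% / 76.3% split reported above can be reproduced.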
[Figure: Snapshot empirical results in multiple dimensions — (a) number of votes of each proposal; (b) duration of each proposal; (c) proposal voting variances; (d) project voting variances; (e) language distribution; (f) fraction of e-voting schemes; (g) fraction of blockchains; (h) fraction of IPFS storage]
[Figure: Voting patterns with the number of candidates]
Insight-203: Fortunately, the platforms used by existing DAO projects and proposals are diversified. On the other hand, insufficient motivation to upgrade the content identifier version used for IPFS data storage may degrade the capacity and efficiency of the community.
§.§ E-voting Scheme
This section describes the number of valid votes and the fraction of different e-voting schemes used in the considered DAO projects and proposals, as well as different voting patterns.
E-voting schemes. As shown in Fig.<ref>, the e-voting schemes fall into the following ranks. Single-choice voting is the dominant strategy, accounting for 83.0% of all reviewed strategies, followed by basic voting with a fraction of 7.2%. By contrast, weighted voting (5.2%), approval voting (1.7%), quadratic voting (1.6%), and ranked-choice voting (1.3%) are rarely adopted. The results demonstrate that single-choice voting is by far the most popular strategy, indicating that DAO users still prefer the simplest form of polling. Although an intuitive concern is that single-choice and basic voting may induce a Matthew effect [36] in the vote distribution, the results show that most proposal advisors ignore this drawback in practice.
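To illustrate how the rarely adopted quadratic scheme differs from plain token-weighted voting, the sketch below compares their vote-credit costs: under quadratic voting, casting n votes on one option costs n² credits, which dampens the influence of large token holders. The numbers are illustrative, not drawn from any specific DAO.

```python
def weighted_cost(votes: int) -> int:
    """Token-weighted voting: 1 credit buys 1 vote."""
    return votes

def quadratic_cost(votes: int) -> int:
    """Quadratic voting: casting n votes on one option costs n**2 credits."""
    return votes ** 2

# A holder with 100 credits buys 100 weighted votes but only 10 quadratic
# votes (since 10**2 == 100), flattening the influence of whales.
for n in (1, 5, 10):
    print(f"{n} votes -> weighted cost {weighted_cost(n)}, quadratic cost {quadratic_cost(n)}")
```

The quadratic rule thus directly counteracts the Matthew effect noted above, which may explain why large holders gravitate toward the simpler linear schemes.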
Voting Patterns. Fig.<ref> shows that the most popular voting pattern in our collection is binary voting, with over 10,000 proposals, followed by ternary and quaternary patterns. Figs.<ref> and <ref> examine the variances of voting results at the proposal and project level, respectively, to investigate how strongly a statement is agreed with or opposed by the community. Fig.<ref> shows that over 60% of the proposals end with a large variance of over 40, indicating that an e-voting round in current DAO communities is likely to end with a one-sided result. At the project level, however, the share of projects with a large average variance is much smaller, only 9.2%. Balanced results account for 38.5%, while 52.3% of the projects have an average variance between 10 and 20. This indicates that one-sided results rarely dominate in a project-wise context: each project may have several one-sided voting outcomes, but the majority end with a balanced result.
Here, we explain the cause of the variance differences between proposal- and project-level voting results. As observed in Fig.<ref>, most voting results follow binary patterns, whose variances are naturally very large. This significantly inflates proposal-level variances, since each proposal rests on a single voting pattern. In contrast, project-level variances are relatively balanced, because each project contains a series of proposals that can moderate the extreme values caused by binary results. In our view, one-sided results are not necessarily “bad”; rather, they indicate that DAO members tend to make instant decisions without significant debate. A balanced result shows that it is difficult for a DAO community to reach agreement among its participants; on the flip side, this reflects the claimed properties of decentralization and democracy. These conflicting interpretations show that defining a normal or healthy voting result is complicated in an unclear context.
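The inflation of proposal-level variance by binary outcomes can be seen numerically. Assuming, as one plausible reading of the metric above, that "variance" is the population variance of the per-option vote shares (in percent) — an assumption on our part, since the exact metric is not spelled out — a lopsided binary result dwarfs a balanced multi-option one:

```python
from statistics import pvariance

# Hypothetical vote shares (percent) for two proposal outcomes.
one_sided_binary = [90, 10]      # lopsided yes/no result
balanced_ternary = [34, 33, 33]  # near-uniform 3-option result

# Population variance of the shares: the binary result is orders of
# magnitude more dispersed than the balanced one.
print(pvariance(one_sided_binary))   # 1600.0
print(round(pvariance(balanced_ternary), 2))
```

Averaging such per-proposal variances within a project mixes a few extreme binary values with many moderate ones, which is consistent with the flatter project-level distribution reported above.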
Insight-204: The current e-voting patterns and results of many DAO projects reflect decentralization and democracy in DAO communities, but at the same time they make reaching agreement difficult, at least less effective than traditional e-voting. This is essentially the weakness of a flat organizational structure, and the trade-off between flat and hierarchical structures remains debatable.
Clustering the Voting Contexts.
The clustering among all considered DAO projects is investigated in Fig.<ref>. We apply K-means clustering [37] to analyze and categorize different DAO projects based on the textual features extracted from their titles. As the titles are strings, we first preprocess the dataset by transforming the textual data into numerical representations, such as term frequency-inverse document frequency (TF-IDF) vectors or word embeddings.
To effectively visualize and interpret the resulting clusters, we then utilize two widely-used dimensionality reduction techniques, namely Principal Component Analysis (PCA) [38] and t-Distributed Stochastic Neighbor Embedding (t-SNE) [39]. PCA is a linear technique that identifies directions of maximum variance, transforming the high-dimensional data into a one-dimensional representation (pca-one). This provides an intuitive visualization and interpretation of the resulting clusters, preserving as much of the original variance as possible.
Conversely, t-SNE is a nonlinear technique that preserves the local structure of the data, capturing complex patterns and relationships. We use t-SNE to generate a two-dimensional representation (tsne-2d-one) of the DAO project titles, enabling a detailed examination of clusters, substructures, and intricate relationships among the projects. By conducting both, we obtain a richer understanding of the underlying patterns and relationships among different DAO projects based on their titles.
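The pipeline described above can be sketched with scikit-learn on a few hypothetical proposal titles (the real input is the crawled title corpus); the titles, cluster count, and random seed below are illustrative assumptions rather than the study's actual configuration, and the t-SNE step is analogous via `sklearn.manifold.TSNE`.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Hypothetical proposal titles standing in for the crawled corpus.
titles = [
    "Upgrade the staking contract to v2",
    "Increase staking rewards for liquidity providers",
    "Fund the community marketing campaign",
    "Approve budget for the branding campaign",
]

# Titles -> TF-IDF vectors -> K-means labels -> 1-D PCA coordinate ("pca-one").
X = TfidfVectorizer(stop_words="english").fit_transform(titles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
pca_one = PCA(n_components=1).fit_transform(X.toarray())

for title, lab, coord in zip(titles, labels, pca_one[:, 0]):
    print(lab, f"{coord:+.2f}", title)
```

With 10 clusters and the full corpus, the per-cluster labels and summaries reported below are obtained by inspecting the dominant terms of each cluster.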
[Figure: Clustering among all considered DAO projects]
As a result, we cluster the projects into 10 labels (cf. Fig.<ref>) along with brief summaries as presented below:
Label 0 ${\color[rgb]{1,0.3,0.3} \blacksquare}$ <Protocol Upgrades and Implementations>.
This category focuses on proposals and discussions related to upgrades, implementations, and enhancements of decentralized protocols and platforms. Topics include network upgrades, smart contract implementations, and consensus mechanism improvements.
Label 1 ${\color[rgb]{1,0.7,0.4} \blacksquare}$ <Governance and Decision-making>.
This category primarily covers various proposals and discussions related to governance, management, and decision-making processes within DAOs and other decentralized organizations. Topics include voting systems, governance structure, and various aspects of administration.
Label 2 ${\color[rgb]{0.85,0.95,0.05} \blacksquare}$ <Tokenomics, Staking, and Rewards>.
This category is focused on tokenomics, staking, rewards, and incentives for decentralized platforms and protocols. Discussions and proposals revolve around token distribution, staking mechanisms, yield farming, liquidity provision, and other related financial aspects.
Label 3 ${\color[rgb]{0.6,0.95,0.2} \blacksquare}$ <Development and Technical Improvements>.
This category deals with discussions and proposals related to the development, improvement, and maintenance of decentralized platforms, protocols, and applications. Topics include technical improvements, bug fixes, new features, and other aspects of software development.
Label 4 ${\color[rgb]{0.4,1,0.5} \blacksquare}$ <Marketing, Branding, and Community Building>.
This category covers marketing, branding, and community-building efforts within the decentralized ecosystem. Topics include community engagement, social media presence, promotional campaigns, partnerships, and collaborations to increase visibility and adoption.
Label 5 ${\color[rgb]{0.55,1,0.98} \blacksquare}$ <Budgets, Funding, and Financial Management>.
This category focuses on various budgets, funding, and financial aspects related to decentralized organizations and projects. Discussions and proposals revolve around allocating resources, managing expenses, funding proposals, and other financial matters.
Label 6 ${\color[rgb]{0,0.4,0.8} \blacksquare}$ <Project-related Requests and Resources>.
This category encompasses project-related requests, including requests for resources, support, or collaboration from the community. Topics include project funding, hiring, development services, and other resources needed to move a project forward.
Label 7 ${\color[rgb]{0.4,0,0.8} \blacksquare}$ <Asset Management and Acquisitions>.
This category deals with asset management, acquisitions, and purchases within the decentralized ecosystem. Topics include buying and selling NFTs, real estate in virtual worlds, and other digital assets, as well as decisions regarding strategic investments or acquisitions.
Label 8 ${\color[rgb]{0.8,0,0.8} \blacksquare}$ <Contests, Competitions, and Events>.
This category is focused on contests, competitions, programs, and events within the decentralized ecosystem. Discussions and proposals revolve around voting on the outcomes of various competitions, participating in events or programs, and other community engagement activities.
Label 9 ${\color[rgb]{0.9,0,0.6} \blacksquare}$ <Activation and Continuation>.
This category covers the activation and continuation of individuals or projects within decentralized organizations. Topics include activating new members, continuing or ending ongoing initiatives, adjusting reward structures, and other decisions related to the management of human resources and projects.
Relations between Labels. Evidence of correspondence between the label descriptions and the clustering outcomes can be observed in Fig.<ref> through select examples.
Label 0 ${\color[rgb]{1,0.3,0.3} \blacksquare}$ adjacent to Label 3 ${\color[rgb]{0.6,0.95,0.2} \blacksquare}$ signifies the close relationship between executing protocol upgrades and specific development and technical improvements. Label 4 ${\color[rgb]{0.4,1,0.5} \blacksquare}$ encompasses marketing-related events and is closely linked with financial management represented by Label 5 ${\color[rgb]{0.55,1,0.98} \blacksquare}$ and asset management represented by Label 7 ${\color[rgb]{0.4,0,0.8} \blacksquare}$.
Label 8 ${\color[rgb]{0.8,0,0.8} \blacksquare}$, concentrating on contest events, stems from the initiatives and activation characterized by Label 9 ${\color[rgb]{0.9,0,0.6} \blacksquare}$. Simultaneously, both Label 8 ${\color[rgb]{0.8,0,0.8} \blacksquare}$ and Label 9 ${\color[rgb]{0.9,0,0.6} \blacksquare}$ intersect with Label 2 ${\color[rgb]{0.85,0.95,0.05} \blacksquare}$, highlighting that the activation and continuation of individuals or projects within DAOs necessitate extensive discussions about adopting appropriate token incentives.
Conversely, Label 1 ${\color[rgb]{1,0.7,0.4} \blacksquare}$ and Label 2 ${\color[rgb]{0.85,0.95,0.05} \blacksquare}$, positioned at the center, validate that the governance and tokenomics components form the core of DAOs, aligning with the introduction presented in Sec.<ref>.
Insight-205: DAOs currently exhibit a broad range of voting contexts, covering topics from budget allocations and project funding to community events and hiring decisions. This diversity showcases the potential for decentralized governance to empower communities and drive innovation across various domains. However, challenges such as voter apathy and the concentration of power among a few token holders highlight the need for more robust, inclusive, and accessible governance mechanisms that encourage broader participation and ensure a sustainable future for DAOs.
§.§ DAO Tokens Usage
This section describes the usage of different DAO tokens in the considered DAO projects and proposals.
Fig.<ref> reveals that 97.1% of the DAO projects use self-issued (i.e., customized) or minor tokens, while only 2.9% use mainstream tokens, including USDT (54.2%), ETH (24.6%), USDC (18.3%), and ENS (2.9%), as shown in Fig.<ref>. The results reveal a risk in the current usage of tokens in DAO spaces: the majority keep using self-issued or minor tokens, which are much less stable and have far fewer merits than the prevalent tokens. Unhealthy opportunistic behavior is readily apparent, which is adverse to smooth and efficient governance. Among the DAOs using self-issued tokens, the top three are STALK, HUWA, and PEOPLE, as shown in Fig.<ref>: STALK supports a fiat stablecoin protocol, PEOPLE aims to develop subDAOs, whereas HUWA is tailored specifically to internet memes. This also reflects the immaturity of the DAO community and the need for further improvement.
Another interesting observation is that most customized tokens (over 75% among “Others” in Fig.<ref>) are minted on top of the Ethereum ecosystem; that is, they are designed as ERC-20 tokens that rely closely on the development of Ethereum platforms. Similarly, the rest of the customized tokens are created on other mainstream public chains, such as BSC and Avalanche. This situation indicates a potential threat of implicit centralization caused by oligopolistic blockchain organizations that enjoy first-mover advantages.
Insight-206: Unhealthy opportunistic behaviors are still common in the current DAO community, in the sense that the majority of projects rely on self-issued tokens rather than the evidently more valuable and stable mainstream tokens such as USDT, ETH, etc.
[Figure: Distribution of the token usage — (a) fraction of all tokens (general); (b) fraction of prevalent tokens; (c) fraction of self-issued tokens]
§ DISCUSSIONS ON THREATS
This section highlights potential challenges. The analysis of threats is largely based on empirical evidence gathered through our study.
Centralization. Governance in DAOs relies prominently on the possession of stakes or utility tokens. Although this is originally intended to be core to decentralization in DAOs, highly active groups of participants are likely to accumulate major shares of tokens (as investigated in our results, Figs.<ref>&<ref>), thereby undermining decentralization through the concentration of e-voting power. Beyond that, it is disheartening to observe the growing centralization of various aspects within DAOs: language usage (Fig.<ref>), voting strategy (Fig.<ref>), platform adoption (Fig.<ref>), and even storage (Fig.<ref>) all seem to follow a similar path toward centralization. This trend has been discussed in depth in our Insight-202. Such a phenomenon raises questions about whether the promise of decentralized governance can truly be achieved in the long run.
To avoid centralization, DAOs could prioritize diversity, decentralize decision-making, avoid concentration of assets, embrace transparency, and foster community. A diverse group of participants from different backgrounds and areas of expertise can prevent power from being concentrated in the hands of a few. Decentralizing decision-making by allowing all members to participate in governance and voting, through mechanisms like quadratic voting and delegation, can prevent the decision-making process from being controlled by a small group. Avoiding the concentration of assets in a single wallet or exchange reduces the risk of a single point of failure. Embracing transparency by making all decisions and transactions publicly visible can prevent hidden centralization. Additionally, fostering a sense of community among members, though difficult, may help ensure that everyone feels invested in the success of the organization.
Disunity and Fairness.
DAO communities come across disagreements much more often than a traditional organization does (cf. Fig.<ref>, also noted in [8][10]). While this reflects the democratic nature of DAOs, it also highlights the potential for disagreements to divide the community. A disagreement can arise over a wide range of issues such as strategic direction, resource allocation, or operational procedures. If left unresolved, it can escalate and lead to the formation of factions within the community. These factions may then compete against each other for power, which can undermine the decentralized nature of the DAO.
It is important to have effective mechanisms in place to resolve disagreements in a fair and transparent manner. DAOs can consider implementing dispute resolution protocols or mediation processes to address disagreements before they divide the community. By addressing disagreements proactively and collaboratively, DAOs can maintain their democratic and decentralized nature while avoiding factionalism and preserving their collective decision-making power. Additionally, DAO governance should communicate any progress updates for the project source code or other initiatives in a fully transparent way via public channels, e.g., Discord and Slack.
Legality. It is evident that the majority of successful DAOs operate within the financial sector (cf. Figs.<ref>, <ref>, and <ref>; Insight-208), which poses significant risks on several fronts. These include potential attacks from malicious actors, as well as the threat of censorship by governmental entities (e.g., more than 51% of block proposers in Ethereum 2.0 are OFAC-compliant with the U.S. government, per our Insight-207). As a result, smaller organizations may face severe limitations on their ability to operate effectively, and in some cases these risks could even lead to their demise.
Properly embracing legal regulation can mitigate the above problems. Laws or regulations about blockchain governance need to be properly established by standardizing the structures, processes, development, and use of blockchain, and by making every component (e.g., a DAO) compliant with legal regulations and ethical responsibilities [21][40]. In particular, after The DAO hack, DAOs in several countries and regions began to seek legal management with better security and protection [41].
High Cost. Running a DAO on-chain can be expensive (Fig.<ref>), with costs varying based on factors such as the underlying blockchain platform, the complexity of smart contracts, and transaction volume. These costs are incurred through gas fees paid to the network, which are collected by miners or arbitrage bots and can become expensive in US dollars. Many DAOs create their own ERC-20 tokens (Fig.<ref>) to use as governance votes, which also incurs gas fees with each action taken. Even DAOs that use stablecoins (Fig.<ref>) for voting power still need to purchase or borrow the coins from exchanges, adding to the expenses. Additionally, fees for development, maintenance, auditing, security assessments, marketing, and community building are difficult to quantify and are excluded here.
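A back-of-the-envelope estimate illustrates how gas fees translate into a per-vote dollar cost. All three inputs below are hypothetical assumptions for illustration; actual values vary with contract complexity, network congestion, and market prices.

```python
# Assumed inputs (hypothetical, not measured values):
GAS_PER_VOTE_TX = 90_000   # gas consumed by one on-chain vote transaction
GAS_PRICE_GWEI = 40        # prevailing gas price, in gwei (1 gwei = 1e-9 ETH)
ETH_PRICE_USD = 1_800      # ETH/USD exchange rate

eth_cost = GAS_PER_VOTE_TX * GAS_PRICE_GWEI * 1e-9  # gwei -> ETH
usd_cost = eth_cost * ETH_PRICE_USD
print(f"{eth_cost:.4f} ETH ≈ ${usd_cost:.2f} per vote")  # 0.0036 ETH ≈ $6.48 per vote
```

Multiplied across thousands of voters and proposals, even single-digit dollar costs per vote make fully on-chain governance expensive, which motivates the off-chain alternatives discussed next.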
A reasonable way to reduce the costs of operating a DAO is to rely on off-chain or layer-two techniques that can execute most operations locally. Snapshot is an off-chain platform designed to manage DAOs and enable votes. Additionally, other off-chain tools can be found in Table <ref> to further reduce costs. By leveraging these techniques, DAO operators can minimize their reliance on costly on-chain transactions and reduce their overall expenses.
Nonsense Governance Activity. After analyzing the voting contexts (e.g., proposal titles and topics), we find that a non-negligible proportion of governance activities is nonsensical (consistent with a recent report [8]). Our analysis reveals that a considerable number of proposals (approximately 17.7% of all proposals; raw data of Fig.<ref>) are completely irrelevant to the project's development and consist merely of inappropriate or offensive content such as jokes and impolite questions. We believe that the current ease of proposal creation, which allows anyone to submit a proposal, has contributed to the prevalence of such nonsensical activities within the governance process.
Thus, the implementation of more stringent entry requirements for proposal creation is necessary, such as mandatory completion of a tutorial on governance principles or holding a minimum number of project tokens. By introducing such measures, we expect to see an improvement in the overall quality of proposals and a reduction in the number of frivolous or fraudulent proposals.
In addition, we recommend the establishment of a mechanism to flag and remove any proposals that violate the platform's terms of service or are deemed inappropriate by the community. This could be done through the appointment of community moderators or the development of automated systems to detect such proposals.
Contract Reliance. Most DAOs rely prominently on the authenticity and validity of the smart contracts that provide their trustless environments. Consequently, vulnerabilities in smart contract code and implicit design pitfalls pose potential threats to running DAOs. The most famous historical example is The DAO hack [42]: The DAO raised $150M+ (in ETH) to build a collective investment platform, but the project crashed shortly afterward due to a severe bug in its smart contract code. A considerable amount of assets was siphoned off, and a disruptive hard fork followed that has affected the entire Ethereum blockchain to this day [43]. Attacks such as flash loans [44] on DeFi protocols, which exploit the time interval of block confirmation, can also undermine the sustainability of DAO communities.
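The recursive-send (reentrancy) flaw behind The DAO hack can be modeled in a few lines. This is a deliberately simplified Python illustration, not Solidity: the vulnerable vault pays out before updating the caller's balance, so a malicious receive-callback can re-enter `withdraw()` and drain the whole treasury.

```python
class VulnerableVault:
    """Toy model of a contract with a reentrancy bug."""

    def __init__(self, balances):
        self.balances = dict(balances)
        self.treasury = sum(balances.values())

    def withdraw(self, who, callback):
        amount = self.balances[who]
        if amount and self.treasury >= amount:
            self.treasury -= amount  # bug: pays out first...
            callback()               # ...attacker's callback re-enters here...
            self.balances[who] = 0   # ...and the balance is zeroed too late

vault = VulnerableVault({"attacker": 10, "victim": 90})
stolen = 0

def reenter():
    """Attacker callback: keep re-entering while the treasury has funds."""
    global stolen
    stolen += 10
    if vault.treasury >= 10:
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(stolen, vault.treasury)  # prints "100 0": a 10-unit share drained 100
```

Checking the effects of external calls before updating state (or updating state first, the checks-effects-interactions pattern) is the standard remedy; its absence is precisely what the audits advocated below are meant to catch.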
To avoid a repeat of this fiasco and stabilize the monetization mechanisms for long-term growth, DAO communities should spare efforts to establish security protocols for auditing code and develop improved tools or supportive infrastructure. Also, they need a robust marketing and product design department and develop an effective product and content strategy associated with the concepts and principles of each DAO project. Meanwhile, well-organized and consistent communication plans are also crucial to attract attention from a broader public to establish loyalty in a decentralized context.
§ FURTHER ACTIONS
In this section, we continue the discussions of previous solutions and conduct a more detailed analysis of each category.
§.§ On Projects
Each project involves both competition and cooperation, and we will discuss them from these two perspectives.
DAO-2-DAO Collaboration.
The interaction and collaboration between different DAOs matter, spawning the design of decentralized negotiation protocols [45]. Decentralized governance in DAOs relies prominently on such protocols, since each DAO defines a unique set of parameters for forming consensus with its community, infrastructure, and use case, based on a general formalization of each component (e.g., the proposal format). This allows the evaluation of each proposal to be routinized and improves the efficiency of interactions such as joint ventures, token swaps, and distributed monetary policy. Moreover, a well-crafted formalization can move DAO-2-DAO collaboration toward an inter-organizational framework by enabling the proposals of different DAOs to be extended to serve a large variety of complicated contracts.
Learning from SubDAOs – Management and Competition.
DAO management has been evolving toward a tree structure reflecting a hierarchy in which some DAOs belong to others. As DAOs grow, new groups of members will operate independently of the group present at inception, bringing new divisions, teams, focuses, and ideas into the community. Rather than trying to house all that activity under one roof, SubDAOs are an emerging approach for different working groups to create their own foundation and ownership structure [46]. All SubDAOs tie value back to the originating entity. At the same time, competition among subDAOs within the same domain deserves attention: multiple DAO participants may compete for one goal set by their superior nodes, and balanced games among subDAOs should be further considered for such scenarios.
§.§ On Infrastructure
In addition to guaranteeing the secure operation of the core blockchain, a well-developed infrastructure and a range of useful applications are crucial for promoting the widespread adoption of DAOs.
DAO Stacks and Tools. As a generic term, the DAO space includes a variety of projects covering many components and fields. We can sketch a relatively clear picture by examining its “stack” (DAOstack [47]). The foundation consists of the basic backend software modules, such as the voting mechanisms (discussed above) for decentralized governance. On top of it sits a library layer used to build models for back ends (e.g., Arc.js [48]), along with a caching layer for collecting and structuring data (e.g., The Graph [49]). At the top, the application layer lets DAO users deploy or participate in DAOs (e.g., Aragon [50]). In addition, widely used coordination tools such as Twitter, Discord, and GitHub support and facilitate DAO operations from an external perspective.
Applications via DAO.
DAOs have been considered one of the biggest innovations in blockchain ecosystems [51]. Crowdfunding is one of the prime applications where DAOs play a vital role: for instance, ConstitutionDAO pulled together $47 million worth of ether in a week to try to buy a first-edition copy of the U.S. Constitution at a Sotheby's auction [52]. Besides, DAOs have been involved in democratizing the Metaverse ecosystem by contributing to decentralized infrastructure [53]. In addition, the DAO paradigm is applied by NFT-based investment projects to create and confirm shared ownership of assets. The emergence of a new generation of Dapps via DAOs in various sectors, e.g., supply chain, finance and accounting, IoT, healthcare, and transportation [54], has demonstrated the innovation and need for DAOs in current technology trends. DAOs have also been investigated as a promising approach for e-government systems, improving the efficiency and transparency of government operations [55].
§.§ On Voting Strategies
Two key questions regarding voting are: how to cast a vote and how the outcome of the vote impacts decisions.
Voting Routes. Voting can be conducted either on-chain or off-chain. An on-chain voting service, such as Tally [56], must introduce a time-lock mechanism to provide the polling period, typically implemented through smart contracts. Tally's voting involves two types of smart contracts: a token contract and a governor contract. Meanwhile, a multi-sig wallet (e.g., Gnosis Safe [57]) is necessary for managing the deployed assets. However, on-chain voting suffers from costly and delayed confirmation, which significantly decreases users' willingness to participate. In contrast, Snapshot is an off-chain voting tool that removes the expensive consumption of on-chain interactive operations. The number of DAO spaces created on the two platforms indicates that users are much more willing to participate and vote on a gas-free platform.
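The gas-free off-chain route can be sketched as follows: each voter signs a structured vote message, and the tally service verifies the signatures without any on-chain transaction. In this minimal illustration, HMAC-SHA256 stands in for the wallet's ECDSA signature, and the voter key, proposal ID, and vote structure are all hypothetical.

```python
import hashlib
import hmac
import json

def sign_vote(secret: bytes, vote: dict) -> str:
    """Sign a canonical JSON encoding of the vote (HMAC stands in for ECDSA)."""
    payload = json.dumps(vote, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_vote(secret: bytes, vote: dict, sig: str) -> bool:
    """Tally-side check: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_vote(secret, vote), sig)

voter_key = b"hypothetical-voter-secret"
vote = {"proposal": "QmExampleProposal", "choice": "yes", "weight": 12}
sig = sign_vote(voter_key, vote)

assert verify_vote(voter_key, vote, sig)                          # accepted
assert not verify_vote(voter_key, {**vote, "choice": "no"}, sig)  # tampering detected
```

Because signing and verification happen entirely off-chain, no gas is spent per vote; only the final outcome (if any) needs an on-chain transaction, which is the essence of Snapshot-style voting.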
Strategy Design. The design of voting strategies in DAOs plays a crucial role in ensuring effective decision-making and fostering user participation. These strategies should strike a balance between security, efficiency, and inclusiveness, accommodating various voting tools, applications, and regulatory requirements.
On-chain and off-chain voting methods can be combined to create hybrid strategies, leveraging the strengths of each approach. For instance, off-chain voting through tools like Snapshot can be employed for preliminary or less critical decisions, allowing for a more agile and gas-free voting process. On the other hand, on-chain voting, such as the Tally mechanism, can be reserved for more critical decisions, where the security and immutability provided by blockchain technology are essential.
Another dimension to consider in voting strategy design is the DAO-to-DAO voting mechanism, where one DAO can participate in the decision-making process of another DAO. This can promote cross-DAO collaboration and resource sharing, fostering synergies within the decentralized ecosystem. Voting in SubDAOs can also be utilized to facilitate the delegation of decision-making power to specialized groups, enabling efficiency in governance.
§.§ On Tokenization
Tokenization forms the foundation of the blockchain economy and incentive mechanisms. Achieving sustainability and a healthy Web3 ecosystem requires building on tokenization.
Healthy vs. Unhealthy Tokenization.
A healthy token distribution ensures fairness for the people involved in DAO projects: anyone purchasing the token competes on the same terms and is subject to the same token sale policies. Moreover, reasonable token usage by DAOs significantly impacts project-controlled liquidity [58]. As discussed in the previous section, the imbalance between self-issued and mainstream tokens raises the risk of market manipulation. The logic of equal token usage has significant implications. For example, small-market-cap projects launched at cheap initial prices often face potential abuse from whale and team purchases, whereas the larger the market capitalization managed by a DAO, the less likely it is that whales and insiders can corner the token on the market. Hence, a balance between mainstream and self-issued tokens not only reduces the possibility of market manipulation but also provides space and finances for the founding team to develop the DAO project.
Governance via Tokenization. Effective governance is crucial for aligning the interests of various stakeholders and ensuring the stability of the ecosystem. This is particularly important in DAOs where incentivizing responsible behavior can be challenging. Tokenization can be used to provide monetary incentives to all parties involved, including the project team, application providers, node operators, blockchain users, and even regulators. However, ensuring fairness in the distribution of incentives is equally important. A well-designed governance structure should incorporate game theory to encourage diverse stakeholder participation and representation. Additionally, transparent distribution of on-chain and off-chain incentives can help build trust and cooperation among stakeholders toward achieving common goals. At a higher level, maintaining a balance between mainstream tokens and self-issued tokens can reduce the risk of market manipulation. Building better governance through rational incentives and transparent mechanisms can lead to a Schelling point [59], where desirable behaviors are encouraged, and fairness is maintained.
§ RELATED WORK
This section covers three dimensions of DAO progress: the evolution of several major DAOs in the industry, formative research on DAOs, and related work on Web3 governance.
DAO Evolution. We recall several milestones in DAO history. The first DAO, known as The DAO [60], was established on Ethereum in 2016, marking the beginning of DAOs on blockchains. Unfortunately, the project was hacked, ultimately leading to a hard fork of the Ethereum blockchain [43][42]. After this setback, DAOs regained popularity with the emergence of MakerDAO [61] in 2018, which introduced an on-chain governance system for a collateral-backed stablecoin protocol (a.k.a. DAI). Then, in 2020, a surge of decentralized finance (DeFi) protocols [62], known as the DeFi summer, propelled DAOs to new heights. These protocols are built on top of various blockchain platforms, such as Ethereum, BSC, and Avalanche, and enable decentralized finance services such as DEXs (Uniswap, dYdX), lending (Compound, Aave), yield aggregation (Convex), and staking (Lido), among others. Since then, DAOs have embraced the concept of Web3 [1], and their development has become intertwined with the surrounding components that make up the decentralized web, including wallets, smart contracts, various blockchain platforms, and even regulations [63].
Formative DAO Studies. Liu et al. [2] provide an overview of early DAOs, explaining definitions and preliminaries and giving a simple taxonomy. Daian presents a series of technical analyses of the DAO attack [64][65] by diving into the source code, pointing out the recursive-send vulnerability in Ethereum that caused a monetary loss of about $150M. Fritsch et al. [9] investigate three DAO projects (Compound, Uniswap, and ENS) by empirically analyzing their voting power and discussing their governance. Later, Daian et al. [66] propose a potential attack called a Dark DAO, in which a group of members forms a decentralized cartel that can opaquely manipulate (e.g., buy) on-chain votes. Yu et al. [3] give a quick review of the existing DAO literature and deliver their responses based on a statistical review of the papers. Feichtinger et al. [8] conduct an empirical study on 21 DAOs to explore hidden problems, including high centralization, monetary costs, and pointless activities. Sharma et al. [10] examine the degree of decentralization and autonomy of 10 diverse DAOs. In addition, many researchers and organizations have focused on DAOs through timely reports [31][67] and online posts [63]. Instead of concentrating on single attacks or individual projects, we provide an in-depth analysis of the DAO projects resident on the Snapshot platform.
Governance in Web3. Governance is the cornerstone of DAOs, but research in this area is still nascent. DAOs are rooted in blockchains and heavily influenced by on-chain tokens, which means they are affected both by the underlying technology and by cryptocurrency regulation. For the former, Kiayias et al. [21] have explored the governance process and its properties in the blockchain context; using a first-principles approach, they derive seven fundamental properties and apply them to evaluate existing projects. Wang et al. [1] have partially incorporated this notion into Web3, emphasizing the importance of DAO governance in Web3. Liu et al. [68] have presented a systematic review of primary studies on governance with a qualitative and quantitative synthesis. Regarding the latter, regulation primarily comes from governments of different countries: even though blockchains are designed to be decentralized, they can still be regulated through increasingly centralized miners or validators. For example, Ethereum (after the Merge) PoS validators who rely on MEV-Boost services may be monitored and regulated by the government (OFAC-compliant blocks). Pure cryptocurrency companies can also be governed; a noteworthy example is the US sanctions against Tornado Cash [69].
§ CONCLUSION
In this paper, we empirically studied the ground truth of in-the-wild DAO projects collected from the largest off-chain voting platform, Snapshot. We conducted comprehensive data collection and analysis, and shed light on the design principles of prevalent DAOs. Our empirical results uncover a series of previously hidden facts, covering unfair distributions across participant regions and language usage, as well as potential threats of centralization and contract reliance. We also offered our views on how to build better DAOs from different aspects. This work, at least in our opinion, delineates a clear view of current DAOs.
[1]
Qin Wang, Rujia Li, et al.
Exploring Web3 from the view of blockchain.
arXiv preprint arXiv:2206.08821, 2022.
[2]
Lu Liu, Sicong Zhou, et al.
From technology to society: An overview of blockchain-based DAO.
IEEE Open Journal of the Computer Society, 2:204–215, 2021.
[3]
Guangsheng Yu, Qin Wang, Tingting Bi, Shiping Chen, and Sherry Xu.
Leveraging architectural approaches in Web3 applications–a DAO
perspective focused.
IEEE International Conference on Blockchain and Cryptocurrency
Workshop on Cryptocurrency Exchanges (CryptoEx@ICBC), 2023.
[4]
Youssef Faqir-Rhazoui et al.
A Comparative Analysis of the Platforms for Decentralized Autonomous
Organizations in the Ethereum Blockchain.
Journal of Internet Services and Applications, 12(1):1–20, 2021.
[5]
Shuai Wang, Wenwen Ding, Juanjuan Li, et al.
Decentralized Autonomous Organizations: Concept, Model, and Applications.
IEEE Transactions on Computational Social Systems,
6(5):870–878, 2019.
[6]
Evelyne Bischof, Alex Botezatu, Sergey Jakimov, et al.
Longevity foundation: Perspective on decentralized autonomous organization for special-purpose financing.
IEEE Access, 10:33048–33058, 2022.
[7]
Ibrahim Mehdi et al.
Data Centric DAO: When blockchain reigns over the Cloud.
In IEEE International IOT, Electronics and Mechatronics
Conference (IEMTRONICS), pages 1–7. IEEE, 2022.
[8]
Rainer Feichtinger, Robin Fritsch, et al.
The hidden shortcomings of DAOs: An empirical study of on-chain governance.
arXiv preprint arXiv:2302.12125, 2023.
[9]
Robin Fritsch, Marino Müller, et al.
Analyzing voting power in decentralized governance: Who controls DAOs?
arXiv preprint arXiv:2204.01176, 2022.
[10]
Tanusree Sharma, Yujin Kwon, Kornrapat Pongmala, Henry Wang, Andrew Miller,
Dawn Song, and Yang Wang.
Unpacking how decentralized autonomous organizations (DAOs) work in practice.
arXiv preprint arXiv:2304.09822, 2023.
[11]
Snapshot. Accessible at <https://snapshot.org/#/>, 2022.
[12]
Gavin Wood et al.
Ethereum: A secure decentralised generalised transaction ledger.
Ethereum Yellow Paper, 151(2014):1–32, 2014.
[13]
Avalanche network.
Accessible at <https://www.avax.network/>, 2022.
[14]
Binance smart chain.
Accessible at <https://www.bnbchain.org/en>, 2022.
[15]
Polygon network.
Accessible at <https://polygon.technology/>, 2022.
[16]
Solana network.
Accessible at <https://solana.com/>, 2022.
[17]
Rujia Li et al.
An offline delegatable cryptocurrency system.
In IEEE International Conference on Blockchain and
Cryptocurrency (ICBC), pages 1–9. IEEE, 2021.
[18]
Palina Tolmach, Yi Li, et al.
A survey of smart contract formal specification and verification.
ACM Computing Surveys (CSUR), 54(7):1–38, 2021.
[19]
Drummond Reed, Manu Sporny, Markus Sabadello, Dave Longley, and Christopher Allen.
Decentralized identifiers (DIDs) v1.0: Core architecture, data model, and representations.
Accessible at <https://www.w3.org/TR/did-core/>, 2021.
[20]
Ethereum name service.
Accessible at <https://ens.domains/>, 2022.
[21]
Aggelos Kiayias and Philip Lazos.
SoK: Blockchain governance.
Proceedings of the ACM Conference on Advances in Financial
Technologies (AFT), 2022.
[22]
John R. Douceur.
The sybil attack.
In Peter Druschel, Frans Kaashoek, and Antony Rowstron, editors, Peer-to-Peer Systems, pages 251–260, Berlin, Heidelberg, 2002. Springer
Berlin Heidelberg.
[23]
Véronique Cortier, David Galindo, Ralf Küsters, Johannes Müller,
and Tomasz Truderung.
SoK: Verifiability notions for e-voting protocols.
In IEEE Symposium on Security and Privacy (SP), pages 779–798.
IEEE, 2016.
[24]
Fabian Vogelsteller and Vitalik Buterin.
EIP-20: Token standard.
Ethereum Improvement Proposals, accessible at
<https://eips.ethereum.org/EIPS/eip-20>, 2015.
[25]
William Entriken, Dieter Shirley, Jacob Evans, and Nastassia Sachs.
EIP-721: Non-fungible token standard.
Ethereum Improvement Proposals, accessible at
<https://eips.ethereum.org/EIPS/eip-721>, 2018.
[26]
Radomski Witek, Cooke Andrew, Castonguay Philippe, Therien James, Binet Eric,
and Sandford Ronan.
EIP-1155: Multi token standard.
Ethereum Improvement Proposals, accessible at
<https://eips.ethereum.org/EIPS/eip-1155>, 2018.
[27]
Qin Wang, Rujia Li, et al.
Non-fungible token (NFT): Overview, evaluation, opportunities and challenges.
arXiv preprint arXiv:2105.07447, 2021.
[28]
Tan Joshua, Patka Isaac, Gershtein Ido, Eithcowich Eyal, et al.
Common interfaces for daos.
<https://eips.ethereum.org/EIPS/eip-4824>, 2022.
[29]
Zhou Zainan, Evan, and Xu Yin.
EIP-1202: Voting interface [draft].
Accessible at <https://eips.ethereum.org/EIPS/eip-1202>.
[30]
Detrio Casey.
EIP-779: Hardfork meta: DAO fork.
Accessible at <https://eips.ethereum.org/EIPS/eip-779>.
[31]
Aiden Slavin et al.
World economic forum: Decentralized autonomous organizations: Beyond
the hype.
[32]
Juan Benet.
IPFS-content addressed, versioned, P2P file system.
arXiv preprint arXiv:1407.3561, 2014.
[33]
The Pareto principle.
<https://www.wikiwand.com/en/Pareto_principle>, 2022.
[34]
Vilfredo Pareto.
Cours d'économie politique, volume 1.
Librairie Droz, 1964.
[35]
IPFS docs: Content addressing and CIDs.
Accessible at
[36]
Matthew effect.
<https://www.wikiwand.com/en/Matthew_effect>, 2022.
[37]
Stuart Lloyd.
Least squares quantization in PCM.
IEEE Transactions on Information Theory (TIT), 28(2):129–137, 1982.
[38]
Harold Hotelling.
Analysis of a complex of statistical variables into principal components.
Journal of Educational Psychology, 24(6):417, 1933.
[39]
Laurens Van der Maaten and Geoffrey Hinton.
Visualizing data using t-SNE.
Journal of Machine Learning Research, 9(11), 2008.
[40]
Yue Liu, Qinghua Lu, Guangsheng Yu, Hye-Young Paik, Harsha Perera, and Liming Zhu.
A pattern language for blockchain governance.
In Proceedings of the 27th European Conference on Pattern
Languages of Programs, pages 1–16, 2022.
[41]
Stéphane Blemus.
Law and blockchain: A legal perspective on current regulatory trends worldwide.
Revue Trimestrielle de Droit Financier (Corporate Finance and
Capital Markets Law Review) RTDF, (4-2017), 2017.
[42]
Muhammad Izhar Mehar, Charles Louis Shier, Alana Giambattista, Elgar Gong,
Gabrielle Fletcher, Ryan Sanayhie, Henry M Kim, and Marek Laskowski.
Understanding a revolutionary and flawed grand experiment in
blockchain: the DAO attack.
Journal of Cases on Information Technology (JCIT),
21(1):19–32, 2019.
[43]
Oleksii Konashevych.
Takeaways: 5 years after the DAO crisis and Ethereum hard fork.
Accessible at
[44]
Kaihua Qin, Liyi Zhou, Benjamin Livshits, and Arthur Gervais.
Attacking the DeFi ecosystem with flash loans for fun and profit.
In International Conference on Financial Cryptography and Data
Security (FC), pages 3–32. Springer, 2021.
[45]
Cem F Dagdelen.
D2D: Towards decentralized negotiation protocols.
[46]
How to subdao.
[47]
DaoStack: An operating system for collective intelligence.
Accessible at <https://medium.com/daostack>, 2022.
[48]
Arc.js: Daostack javascript client.
<https://daostack.github.io/arc.js/>, 2022.
[49]
The Graph: APIs for a decentralized future.
<https://thegraph.com/en/>, 2022.
[50]
Aragon platform.
Accessible at <https://aragon.org/>, 2022.
[51]
Yan Chen.
Blockchain tokens and the potential democratization of
entrepreneurship and innovation.
Business Horizons, 61(4):567–575, 2018.
[52]
ConstitutionDAO crypto investors lose bid to buy Constitution copy.
(Accessed on 11/26/2022).
[53]
Thippa Reddy Gadekallu et al.
Blockchain for the metaverse: A review.
arXiv preprint arXiv:2203.09738, 2022.
[54]
Youssef El Faqir, Javier Arroyo, and Samer Hassan.
An overview of decentralized autonomous organizations on the blockchain.
In Proceedings of the 16th international symposium on open
collaboration, pages 1–8, 2020.
[55]
Nour Diallo, Weidong Shi, Lei Xu, Zhimin Gao, Lin Chen, Yang Lu, Nolan Shah,
Larry Carranco, et al.
eGov-DAO: A better government using blockchain based decentralized
autonomous organization.
In International Conference on eDemocracy & eGovernment
(ICEDEG), pages 166–171. IEEE, 2018.
[56]
Accessible at <https://www.tally.xyz/>, 2022.
[57]
Gnosis Safe: Trusted platform to manage digital assets on Ethereum.
Accessible at <https://gnosis-safe.io/>, 2022.
[58]
Will Warren et al.
0x: An open protocol for decentralized exchange on the Ethereum blockchain.
<https://github.com/0xProject/whitepaper>, 2017.
[59]
Yue Liu, Qinghua Lu, Guangsheng Yu, Hye-Young Paik, and Liming Zhu.
Defining blockchain governance principles: A comprehensive framework.
Information Systems, 109:102090, 2022.
[60]
Samuel Falkon.
The story of the DAO — its history and consequences.
[61]
MakerDAO foundation proposal v2.
[62]
Sam M Werner, Daniel Perez, Lewis Gudgeon, Ariah Klages-Mundt, Dominik Harz,
and William J Knottenbelt.
SoK: Decentralized finance (DeFi).
International Conference on Financial Cryptography and Data
Security (FC), 2022.
[63]
DAOs and the complexities of Web3 governance.
Accessible at <https://blog.chain.link/daos/>, 2022.
[64]
Philip Daian.
Analysis of the DAO exploit.
Accessible at
[65]
Philip Daian.
Chasing the DAO attacker’s wake.
Accessible at
<https://pdaian.com/blog/chasing-the-dao-attackers-wake/>, 2016.
[66]
Philip Daian, Tyler Kell, Ian Miers, and Ari Juels.
On-chain vote buying and the rise of dark DAOs.
Accessible at
<https://hackingdistributed.com/2018/07/02/on-chain-vote-buying/>, 2018.
[67]
Samer Hassan and Primavera De Filippi.
Decentralized autonomous organization.
Internet Policy Review, 10(2), 2021.
[68]
Yue Liu, Qinghua Lu, Liming Zhu, Hye-Young Paik, et al.
A systematic literature review on blockchain governance.
Journal of Systems and Software, 2022.
[69]
U.S. Treasury sanctions notorious virtual currency mixer Tornado Cash.
<https://home.treasury.gov/news/press-releases/jy0916>, 2022.
Table: Mainstream DAOs & Tools & DataFeeds [May 2023]

DAO projects (DAO | Network | Protocol/Field | Token | Treasury (USD) | further operational metrics; the remaining column headers were not recoverable):

Uniswap | Ethereum | DeFi (DEX) | UNI | 2.7B | 363k | 124 | 203.8k
BitDAO | Ethereum | DeFi (DEX) | BIT | 2.7B | 18.5k | 23 | 4.8k
ENS | Ethereum | Name Service | ENS | 1.1B | 64.2k | 60 | 111.8k
Gnosis | Ethereum | DeFi (DEX) | GNO | 1B | 363k | 124 | 203.8k
dYdX | Ethereum | DeFi (Lending) | DYDX | 903.5M | 36.4k | 26 | 11.1k
Stargate.Fin | Ethereum | Service | STG | n/a | 374.8M | 26.8k | 47 | 2.2M
Lido | Ethereum | DeFi (Lending) | LDO | 352.6M | 33.2k | 128 | 42.3k
Polkadot | Substrate | Service | DOT | n/a | 280.4M | 1.3M | 363 | 2.17k
Frax.Fin | Ethereum | Stablecoin | FXS | 271.3M | 13.2k | 276 | 9.04k
Aragon | Ethereum | Service | ETH | 199.1M | 14.2k | 606 | 1.03k
Curve | Ethereum | Stablecoin | CRV | 148.8M | 76.8k | 221 | 2.10k
Fei | Ethereum | Stablecoin | TRIBE | 145.8M | 14.3k | 161 | 15.1k
Decentraland | Polygon | NFTs | MANA | n/a | 138.5M | 308.9k | 2k | 94.7k
Radicle | Ethereum | Service | RAD | 126.4M | 6.6k | 26 | 686
Aave | Polygon | DeFi (Lending) | AAVE | n/a | 124.9M | 155.8k | 268 | 527.1k
Compound | Ethereum | DeFi (Lending) | COMP | 121.5M | 208.6k | 169 | 13.4k
DXdao | Polygon | DeFi (DEX) | DXD | n/a | 117.1M | 1.4k | 915 | 2.54k
Ribbon | Ethereum | DeFi (Derivative) | RBN | 116.3M | 4.4k | 31 | 4.75k
Synthetix | Ethereum | DeFi (DEX) | SNX | 115.3M | 91.5k | 569 | 14.6k
MangoDAO | Solana | DeFi (DEX) | MNGO | n/a | 102.9M | 36k | 401 | 3.83k
Gitcoin | Ethereum | Social network | GTC | 92.2M | 33.7k | 144 | 70.4k
Phala | Substrate | Polka's testnet | PHA | n/a | 77.6M | 3.1k | 24 | 72
Vesta.Fin | Polygon | Stablecoin | VSTA | n/a | 67.4M | 256.5k | 8 | 34.7k
JPEG'd | Ethereum | DeFi (Lending) | JPEG | 66M | 5.3k | 59 | 2.51k
Euler.Fin | Ethereum | DeFi (Lending) | EUL | 63.5M | 2.6k | 55 | 8.27k
Merit Circle | Solana | NFTs | MC | n/a | 61M | 8.9k | 26 | 2.83k
SuperRare | Ethereum | NFTs | RARE | 54.1M | 8.7k | 17 | 1.23k
KeeperDAO | Ethereum | DeFi (MEV-extractor) | ROOK | 53.5M | 17k | 41 | 1.21k
MakerDAO | Ethereum | Stablecoin | MKR | 49.1M | 90.9k | n/a | n/a
UXDProtocol | Solana | Stablecoin | UXP | n/a | 49.6M | 11.7k | 819 | 3.27k
Yearn | Ethereum | DeFi (Lending) | YFI | 37.9M | 54.2k | 16 | 4.84k
Balancer | Ethereum | DeFi (DEX) | BAL | 36.1M | 45k | 378 | 82.1k
PleasrDAO | Ethereum | NFTs | USDC | 31.4M | 149 | 54 | 1.02k
Sushiswap | Ethereum | DeFi (DEX) | SUSHI | 28.7M | 109.1k | 290 | 49.2k
Pangolin | Polygon | DeFi (DEX) | PNG | n/a | 19.2M | 32.5k | 45 | 2k
1inch | Ethereum | DeFi (DEX) | 1INCH | 18.2M | 87.5k | 22 | 2.45k
Lucidao | Polygon | Service | USDT | n/a | 11.8M | 1.3k | 6 | 154
Kusama | Ethereum | Polka's testnet | KSM | 11.5M | 291.3k | 863 | 5.65k
Serum | Solana | DeFi (DEX) | SRM | n/a | 4.5M | 226.1k | 52 | 246
Bifrost | Substrate | DeFi (Lending) | BNC | n/a | 4M | 84.7k | 686 | 1.53k

Tools & Launchpads (tool | role):

Aragon | Management tools
DAOStack | Management tools
Colony | Management tools
Snapshot | Off-chain voting platform
Tally | On-chain voting platform
DeepDAO | Information/aggregator
DAOMasters | Launcher/Management
DAOlist | Information/aggregator
Mirror | Publishing/Writing
Gnosis Safe | Multisig wallets
IPFS | Storage infrastructure

Tool keywords: Guide, NFT, Arts, Treasury, Web3, DeFi, Game, Analytics, DID, Legal, Launcher, Media, Governance, Reputation, Infrastructure, Social, Dispute.

Related DataFeeds (feed | coverage | URL):

Dune | Data analytics | <https://dune.com/home>
GraphQL | Data analytics | <https://daostack.io>
Colony API | Data analytics | <https://colony.io>
DexTools | Trading pairs | <https://www.dextools.io>
DefiLlama | DeFi TVL aggregator | <https://defillama.com>
TokenTerminal | Projects, financial data | <https://tokenterminal.com>
RootData | Fundraising, investors | <https://www.rootdata.com>
CoinMarketCap | Projects, ranking | <https://coinmarketcap.com>
Zapper | DAOs, NFTs, DeFi | <https://zapper.xyz/daos>
DappRadar | DApps, NFTs, DeFi | <https://dappradar.com>
DexScreener | Trading pairs, price | <https://dexscreener.com>

Notes: Source data mainly refers to DeepDAO (<https://deepdao.io/organizations>) [May 2023]. Ethereum DAOs (post-Merge) are censored due to OFAC-compliant blocks (MEV Watch <https://www.mevwatch.info>). DeFi-related DAOs are incentive-compatible, as stakeholders are motivated to hold and use tokens to maximize their profits.
# Activity induced trapping in a saw-tooth ratchet potential
M Muhsin and M Sahoo
Department of Physics, University of Kerala, Kariavattom, Thiruvananthapuram-$695581$, India
###### Abstract
We consider an inertial active Ornstein-Uhlenbeck particle self-propelling in
a saw-tooth ratchet potential. Using the Langevin simulation and matrix
continued fraction method, the particle transport, steady state diffusion, and
coherence in transport are investigated throughout the ratchet. Spatial
asymmetry is found to be the key criterion for the possibility of directed
transport in the ratchet. Interestingly, the simulated particle trajectories
and the corresponding position and velocity distribution functions reveal that
the system passes through an activity-induced transition in the transport from
the running phase to the locked phase with the self-propulsion/activity time
of the dynamics. This is further corroborated by the mean square displacement
(MSD) calculation. The MSD is suppressed with increasing persistence of activity in the medium and finally approaches zero for very large values of the self-propulsion time, reflecting trapping of the particle by the ratchet when activity persists in the medium for long times. The non-monotonic behaviour of the particle current and Peclet number with self-propulsion time confirms that the particle transport and its coherence can be enhanced or reduced by fine-tuning the duration over which activity persists. Moreover, for an
intermediate range of self-propulsion times as well as an intermediate range of particle masses, even though the particle current shows a pronounced, unusual maximum with mass, there is no corresponding enhancement in the Peclet number; instead, the Peclet number decreases with mass, confirming a degradation of coherence in transport. Finally, the analytical calculations show that for a highly viscous medium, where the inertial influence is negligibly small, the particle current approaches the current of the overdamped regime.
## I INTRODUCTION
Noise is omnipresent and an indispensable part of nature, playing an important role in the dynamics of systems operating at microscopic length scales[1, 2]. A system that generates unidirectional net transport out of a noisy environment at the molecular level, by exploiting non-equilibrium conditions and spatial (or temporal) asymmetry, is referred to as a Brownian ratchet or Brownian motor[3, 4, 5]. In the recent literature, active ratchets are
realized through the use of active matter systems consisting of self propelled
units that can be biological or non biological in nature [6, 7, 8, 9, 10, 11,
12]. Active matter is a special class of condensed matter systems, in which
the individual constituents have the ability to self-propel on their own
by consuming energy from the environment. Such self-propelled particles are
known as active particles and they are inherently driven far away from
equilibrium, violating the well known fluctuation dissipation relation[13, 14,
15]. Examples of such active matter systems range from microscopic to macroscopic length scales and include unicellular organisms such as motile bacteria[16, 17], self-motile Janus particles[18, 19], micro- and nanorobots[20, 21], hexbugs[22], flocks of birds[23], schools of fish[24], etc. Different models have been proposed to study the dynamics of active matter at both the single-particle and collective levels, such as the active Brownian particle (ABP) model[25, 26, 27, 28], the active Ornstein-Uhlenbeck particle (AOUP) model[29, 30, 31], and the run-and-tumble particle (RTP) model[32, 33].
Active ratchets have been experimentally realized to generate unidirectional transport even in the absence of an external bias, unlike passive Brownian ratchets [6, 9, 10]. When self-propelled particles are placed in an asymmetric potential, they can on average travel toward the gentler side of the potential, giving rise to unidirectional transport with a non-zero net particle flux[12, 10, 9]. Recently, there has been immense interest in the study of active
ratchets and it is a growing field of research because of its enormous
applications in the fabrication of different types of nanorobots, artificial
swimmers, and other self-driven systems[13]. The rectification effect of
active matter in a periodic structure was first observed for run and tumble
bacteria moving through funnel-shaped barriers[8]. Subsequently, the
rectification effects in active matter are studied both theoretically and
numerically for different types of systems such as self-propelled particles on
asymmetric substrates, bacterial colony, dusty plasma[9, 10, 34, 35], etc. The
simulation results for the dynamics of an active Janus particle in an asymmetric channel confirm that the rectification can be orders of magnitude stronger than that of ordinary thermal ratchets[36]. Potosky et al. found that the
spatially modulated self-propelled velocity can also induce directed transport
[37]. Similarly, Angelani and co-workers observed active ratchet effects for
run and tumble particles in a piecewise ratchet potential[7]. Rectification of
twitching bacteria through a 2D narrow channel is studied numerically in Ref.
[38]. The rectification effect in asymmetric periodic structures is found to
be a general feature of active matter[39, 40, 41, 42, 37, 43, 44, 45, 46].
However, most active ratchet studies are based on the overdamped dynamics of self-propelled particles, where inertial effects are ignored. The overdamped approximation is not appropriate in many situations, such as granular matter in a dilute medium at high Reynolds number, self-propelling microdiodes, colloidal particles in air, Janus particles in a dusty plasma, and so on[47, 48, 49, 50]. Recently, the dynamics of inertial active Brownian particles in a sawtooth potential was shown to result in current reversal for an intermediate range of viscosity of the medium [51]. Similarly,
the rotational rectification is investigated in a ratchet gear powered by
active particles in Refs. [52, 53]. A common interesting phenomenon observed in
these inertial ratchet models is the reversal of particle current. Although
transport of inertial active particles is discussed in these models, the
transport coherence remains largely unexplored. Coherence in transport in a
stochastic environment is an important factor for determining the reliability
of transport.
In this work, we focus on the inertial active motion of a particle following
the Ornstein-Uhlenbeck process in a sawtooth ratchet potential. The dynamics of a
passive Brownian particle in a sinusoidal potential driven by an exponentially
correlated noise and Gaussian thermal noise is already discussed in Refs. [54,
55]. In these models, the dynamics is mapped to a thermal bath at a certain
temperature. However, in our model we consider the dynamics of an active
particle in contact with an athermal bath. We mainly analyze the particle
transport which is characterized by average current, diffusion, and the
coherence in transport. One of our interesting findings is that an inertial
active Ornstein-Uhlenbeck particle, while self-propelling in a sawtooth ratchet potential, eventually gets trapped by the ratchet under longer persistence of
self-propulsion in the environment. Both the particle transport and the
coherence in transport show nonmonotonic behaviour with the self-propulsion
time of the dynamics. Surprisingly, current reversal is not observed in our model, unlike in the previously discussed inertial active ratchets of Refs. [51, 52, 6].
## II MODEL AND METHOD
We consider the motion of an active Ornstein-Uhlenbeck particle (AOUP) of mass
$m$ through a ratchet potential. The dynamics of the particle is given by the
Langevin equation of motion[54, 28, 56, 57, 58] as
$m\ddot{x}=-\gamma\dot{x}-V^{\prime}(x)+\xi(t),$ (1)
with $x$ being the position co-ordinate and $v=\dot{x}$ as the velocity co-
ordinate of the particle. Here, $\gamma$ is the viscous coefficient of the
medium and $V(x)$ is the confining ratchet potential. $\xi(t)$ is the
exponentially correlated noise with strength $C$, which follows the Ornstein-
Uhlenbeck process[30] as
$t_{c}\dot{\xi}(t)=-\xi(t)+\sqrt{2C}\ \eta(t).$ (2)
Here, $\eta(t)$ is the delta correlated Gaussian white noise which satisfies
the properties $\langle\eta(t)\rangle=0$ and
$\langle\eta(t)\eta(s)\rangle=\delta(t-s)$. The angular bracket
$\langle\cdots\rangle$ denotes the ensemble average over noise. The
statistical properties of the Ornstein-Uhlenbeck (OU) noise $\xi(t)$ is given
by
$\langle\xi(t)\rangle=0,\quad\langle\xi(t)\xi(s)\rangle=\frac{C}{t_{c}}\exp\left(-\frac{|t-s|}{t_{c}}\right),$
(3)
with $t_{c}$ being the noise correlation time. It is the time up to which the
particle self-propels in the ratchet and hence activity persists in the medium
for a time interval of $t_{c}$. A finite $t_{c}$ notably quantifies the
presence of activity or correlation in the medium, that decays exponentially
with $t_{c}$. For a nonzero $t_{c}$ value, the system is inherently driven
away from equilibrium[59]. However, in the passive limit ($t_{c}\rightarrow 0$
limit) of our model, we consider the strength of noise $C$ to be $\gamma
k_{B}T$ (fluctuation-dissipation relation) in order for the system to approach
the typical thermal equilibrium limit of the dynamics at temperature $T$ [60,
61]. The potential $V(x)$ that appears in Eq. (1) has the form
$V(x)=\begin{cases}\dfrac{Q}{\lambda_{1}}x,&x\leq\lambda_{1}\\\\[10.00002pt]
\dfrac{Q}{\lambda_{2}}(\lambda-x),&\lambda_{1}<x\leq\lambda.\end{cases}$ (4)
Here, $Q$ is the potential height and $\lambda=\lambda_{1}+\lambda_{2}$ is the
periodicity of the ratchet potential (see Fig. 1). Eq. (4) represents a
sawtooth potential which is symmetric when $\lambda_{1}=\lambda_{2}$.
Therefore, we introduce an asymmetry parameter $\Delta$ such that
$\Delta=\lambda_{1}-\lambda_{2}$.
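The dynamics of Eqs. (1)-(2) with the potential of Eq. (4) can be integrated numerically, e.g. with an Euler-Maruyama scheme. The following is a minimal sketch, not the authors' production code; the parameter values mirror those used later in Fig. 2 ($m=1$, $\gamma=1$, $Q=0.5$, $\Delta=0.9$, hence $\lambda_{1}=0.95$ and $\lambda_{2}=0.05$ for $\lambda=1$), while the step size and step count are illustrative choices:

```python
import numpy as np

# Illustrative parameters matching Fig. 2: m = 1, gamma = 1, Q = 0.5, Delta = 0.9,
# so lambda_1 = 0.95 and lambda_2 = 0.05 with period lambda = 1.
def sawtooth_force(x, Q=0.5, lam1=0.95, lam2=0.05):
    """Force f(x) = -V'(x) for the piecewise-linear potential of Eq. (4)."""
    lam = lam1 + lam2
    xm = np.mod(x, lam)
    return np.where(xm <= lam1, -Q / lam1, Q / lam2)

def simulate_aoup(t_c=1.0, m=1.0, gamma=1.0, C=1.0,
                  dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of Eqs. (1)-(2); returns the trajectory x(t)."""
    rng = np.random.default_rng(seed)
    x, v, xi = 0.0, 0.0, 0.0
    traj = np.empty(n_steps)
    amp = np.sqrt(2.0 * C) / t_c * np.sqrt(dt)   # noise increment for xi
    for i in range(n_steps):
        x = x + v * dt
        v = v + (-gamma * v + sawtooth_force(x) + xi) / m * dt
        xi = xi + (-xi / t_c) * dt + amp * rng.standard_normal()
        traj[i] = x
    return traj
```

The stationary variance of the generated $\xi$ approaches $C/t_{c}$, consistent with Eq. (3).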
Figure 1: Schematic diagram of the ratchet potential [Eq. (4)]
In this work, we are mainly interested in the particle transport, associated
dispersive spread or diffusion, and coherence in transport of the particle.
The particle transport can be quantified by measuring an essential quantity,
known as particle current. As per the geometry of the ratchet potential, the
motion of the particle is along $x$-direction and hence the average particle
current in the stationary state can be defined as[3, 62],
$\langle
j\rangle=\lim_{t\rightarrow\infty}\left\langle\frac{x(t)-x(0)}{t}\right\rangle.$
(5)
Similarly, the diffusive spread or diffusion can be quantified by measuring
the diffusion coefficient $D$ about the mean position of the particle, which is
given by [63],
$D=\lim_{t\rightarrow\infty}\frac{\langle x^{2}\rangle-\langle
x\rangle^{2}}{2t}.$ (6)
Here, $\langle x^{2}\rangle-\langle x\rangle^{2}$ can be characterized as the
mean square displacement (MSD) of the particle. The transport of the particle
in such an asymmetric potential and stochastic environment depends on the
diffusive spread and the mean velocity of the particle. The effectiveness or
coherence in the transport can be quantified by measuring a dimensionless
parameter called Peclet number $Pe$, which is defined as
$Pe=\frac{\langle j\rangle\lambda}{D}.$ (7)
We have set the periodicity of the potential $\lambda$ as unity throughout
this paper.
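Given an ensemble of simulated trajectories, the observables of Eqs. (5)-(7) reduce to simple estimators over the realizations. A hedged sketch follows; the helper name and the drift-diffusion toy ensemble in the example are ours, not from the paper:

```python
import numpy as np

def transport_observables(x_t, x_0, t, lam=1.0):
    """Estimate <j> (Eq. 5), D (Eq. 6), and Pe (Eq. 7) from an ensemble of
    independent realizations: x_t are positions at a long time t, x_0 the
    corresponding initial positions."""
    disp = np.asarray(x_t, dtype=float) - np.asarray(x_0, dtype=float)
    j = disp.mean() / t                   # Eq. (5): <x(t) - x(0)> / t
    D = disp.var() / (2.0 * t)            # Eq. (6): (<x^2> - <x>^2) / (2t)
    Pe = j * lam / D                      # Eq. (7)
    return j, D, Pe
```

For a toy drift-diffusion ensemble with drift $j_{0}$ and diffusion $D_{0}$, the estimators recover $j\approx j_{0}$, $D\approx D_{0}$, and $Pe\approx j_{0}\lambda/D_{0}$.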
## III RESULTS AND DISCUSSION
The Fokker-Planck equation corresponding to the dynamics in Eq. (1) for the
probability density function $P(x,v,\xi;t)$ is given by
$\begin{split}\frac{\partial P}{\partial t}=-v\frac{\partial P}{\partial
x}&+\frac{\partial}{\partial v}\left(\frac{\gamma
v}{m}+\frac{V^{\prime}(x)}{m}-\frac{\xi(t)}{m}\right)P\\\
&+\frac{\partial}{\partial\xi}\left(\frac{\xi}{t_{c}}+\frac{C}{t_{c}^{2}}\frac{\partial}{\partial\xi}\right)P.\end{split}$
(8)
It is not possible to obtain the exact analytical solution of Eq. (8) even for
the steady state because of the non-linearity in the gradient of the potential
function $V(x)$. However, an approximate steady-state solution for $P(x,v,\xi;t)$ is possible with the help of various numerical approximation schemes. In
order to investigate the transport of the particle, one can solve the dynamics
either by directly simulating Eq. (1) or by employing the Matrix continued
fraction method (MCFM) to Eq. (8) for the approximate steady state solution of
$P(x,v,\xi)$.
Figure 2: (a-c) Velocity distribution $P(v)$ as a function of $v$ for
different values of $t_{c}$. (d-f) Position distribution $P(x)$ as a function
of $x$ for different values of $t_{c}$. The black solid line represents the
corresponding saw-tooth potential. (g-i) Particle trajectories are plotted for
different values of $t_{c}$. The common parameters taken are $m=1.0$,
$\Delta=0.9$, $Q=0.5$, and $\gamma=1.0$.
For the overdamped dynamics of the particle, the inertial term in Eq. (1) is
neglected and the corresponding probability density function $P(x,\xi;t)$
satisfies the Fokker-Planck equation [64, 55],
$\frac{\partial P}{\partial t}=\frac{\partial}{\partial
x}\left(\frac{V^{\prime}(x)}{\gamma}-\frac{\xi}{\gamma}\right)P+\frac{\partial}{\partial\xi}\left(\frac{\xi}{t_{c}}+\frac{C}{t_{c}^{2}}\frac{\partial}{\partial\xi}\right)P.$
(9)
In the stationary state or steady state limit, the probability density
function $P(x,\xi;t)$ satisfies
$\frac{\partial P}{\partial t}=0.$ (10)
In order to find an approximate solution of Eq. (10) for the stationary probability distribution function $P(x,\xi)$, it can be expanded in complete sets of functions in both variables, using a Fourier series in $x$ and Hermite functions in $\xi$. Since the potential is periodic in nature, $P(x,\xi)$ can take the form [55]
$P(x,\xi)=\phi_{0}(\xi)\sum_{p=0}^{\infty}\sum_{\mu=-\infty}^{\infty}c_{p}^{\mu}e^{2\pi
i\mu x/\lambda}\phi_{p}(\xi).$ (11)
Here, the prefactor $\phi_{0}(\xi)$ is introduced for the simplification of
mathematical calculations and $\phi_{p}(\xi)$ is the set of Hermite functions
given by
$\phi_{p}(\xi)=\frac{1}{\sqrt{\alpha
2^{p}p!\sqrt{\pi}}}e^{\frac{-\xi^{2}}{2\alpha^{2}}}H_{p}\left(\frac{\xi}{\alpha}\right),$
(12)
with $\alpha$ as the scaling parameter considered as
$\alpha=\sqrt{\frac{2D}{t_{c}}}$ and $H_{p}(x)$ is the Hermite polynomial.
Since the potential $V(x)$ is periodic in nature, the force exerted by the
potential, $f(x)=-V^{\prime}(x)$ can be expanded in terms of Fourier series as
$f(x)=\sum_{l=-\infty}^{\infty}f_{l}e^{2\pi ilx/\lambda}.$ (13)
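To make this expansion concrete, the coefficients $f_{l}$ can be computed numerically via the FFT. The sketch below is illustrative only: the paper's exact $V(x)$ is defined earlier in the text, and the piecewise-constant sawtooth force used here (with assumed roles for $\Delta$ and $Q$) is a stand-in.

```python
import numpy as np

# Illustrative sketch: Fourier coefficients f_l of a sawtooth ratchet force
# f(x) = -V'(x), computed via the FFT. The piecewise form of the force and
# the roles of Delta (asymmetry) and Q (barrier height) are assumptions.
lam, Delta, Q = 1.0, 0.9, 0.5

def force(x):
    # piecewise-constant force of a sawtooth: a long rising segment of
    # length a and a short falling segment of length lam - a
    a = 0.5 * (1.0 + Delta) * lam
    xm = np.mod(x, lam)
    return np.where(xm < a, -Q / a, Q / (lam - a))

N = 1000                         # chosen so the segment boundary falls on a grid point
x = np.arange(N) * lam / N
# f_l = (1/lambda) * int_0^lambda f(x) exp(-2*pi*i*l*x/lambda) dx
f_l = np.fft.fft(force(x)) / N   # f_l[0] is the mean force, zero for a periodic V(x)
```

Since $f(x)$ is real, the coefficients obey $f_{-l}=f_{l}^{*}$, which carries over to the symmetry of the expansion coefficients $c_{p}^{\mu}$.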
Substituting Eq. (11) in Eq. (10) and using Eq. (12) and Eq. (13), we obtain a
tridiagonal vector recurrence relation in terms of expansion coefficients
$c_{p}^{\mu}$ as
$Q_{p}^{-}c_{p-1}+Q_{p}c_{p}+Q_{p}^{+}c_{p+1}=0,$ (14)
with
$\displaystyle Q_{p}^{-}$ $\displaystyle=\sqrt{\frac{pC}{t_{c}}}B,$ (15)
$\displaystyle Q_{p}$ $\displaystyle=A-\frac{p}{t_{c}}I,$ (16)
$\displaystyle\text{and}\quad Q_{p}^{+}$
$\displaystyle=\sqrt{\frac{(p+1)C}{t_{c}}}B.$ (17)
Figure 3: 2D plot of $\langle j\rangle$ and $D$ as a function of $\Delta$ and
$t_{c}$ in (a) and (c), respectively. 2D plot of $\langle j\rangle$ and $D$ as
a function of $\Delta$ and $Q$ in (b) and (d), respectively. The common
parameters in (a) and (c) are ($m=1.0$, $Q=0.5$) and in (b) and (d) are
($m=1.0,\ t_{c}=1.0$).
Here, $c_{p}$ is a column vector with elements $c_{p}^{\mu}$, $\mu=0,\pm 1,\pm 2,\ldots$, i.e., $c_{p}=\left[\cdots\ \ c_{p}^{-1}\ \ c_{p}^{0}\ \ c_{p}^{1}\ \ \cdots\right]^{T}$. The elements of the matrices $A$ and $B$ are given by
$\displaystyle\left[A_{n,m}\right]$ $\displaystyle=\frac{2\pi
in}{\gamma\lambda}f_{n-m},$ (18) $\displaystyle\left[B_{n,m}\right]$
$\displaystyle=-\frac{2\pi im}{\gamma\lambda}\delta_{n,m},$ (19)
with $I$ being the identity matrix. The vector recurrence relation in Eq. (14)
can be solved numerically using the matrix continued fraction method as
described in Ref. [64]. For this purpose, we introduce the matrix $S_{p}$ such
that
$c_{p+1}=S_{p}c_{p}.$ (20)
Now substituting Eq. (20) in Eq. (14), we obtain
$Q_{p}^{-}c_{p-1}+\left(Q_{p}+Q_{p}^{+}S_{p}\right)c_{p}=0.$ (21)
Further solving Eq. (21), we obtain the matrix $S_{p}$ as the matrix continued
fraction:
$S_{p}=-\left(Q_{p+1}+Q_{p+1}^{+}S_{p+1}\right)^{-1}Q_{p+1}^{-}.$ (22)
For $p=0$, Eq. (21) takes the form
$\left(Q_{0}+Q_{0}^{+}S_{0}\right)c_{0}=0.$ (23)
Normalization of the steady state probability distribution $P(x,\xi)$
$\int\limits_{0}^{\lambda}dx\int\limits_{-\infty}^{\infty}d\xi\;P(x,\xi)=1,$
(24)
yields
$c_{0}^{0}=\frac{1}{\lambda}.$ (25)
Using this known component $c_{0}^{0}$ in Eqs. (23) and (20), one can compute all the components of $c_{p}$. In order to find the average particle current, the Fokker-Planck equation [Eq. (9)] can be written in the form of a continuity equation as
$\frac{\partial P(x,\xi;t)}{\partial
t}=-\frac{\partial\rho_{x}(x,\xi;t)}{\partial
x}-\frac{\partial\rho_{\xi}(x,\xi;t)}{\partial\xi},$ (26)
where $\rho_{x}(x,\xi;t)$ and $\rho_{\xi}(x,\xi;t)$ are the probability
currents in the $x$ and $\xi$ directions, respectively. Now, comparing Eq. (26) with Eq. (9), we have
$\rho_{x}(x,\xi;t)=\left(\frac{f(x)}{\gamma}-\frac{\xi}{\gamma}\right)P(x,\xi;t).$
(27)
Hence, the average stationary current in the $x$ direction over a period is
given by
$\displaystyle\langle j\rangle$
$\displaystyle=\frac{1}{\lambda}\int\limits_{0}^{\lambda}dx\int\limits_{-\infty}^{\infty}d\xi\;\rho_{x}^{(st)}(x,\xi)$
$\displaystyle=\frac{1}{\gamma}\left[-\sum_{\mu=-\infty}^{\infty}f_{\mu}c_{0}^{-\mu}+\frac{C}{t_{c}}c_{1}^{0}\right].$
(28)
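A compact numerical sketch of this procedure, iterating Eq. (22) downward from a truncation order, imposing Eqs. (23)-(25), and evaluating Eq. (28), may look as follows. The Fourier coefficients of the force, the truncation sizes, and all parameter values here are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters; exact values are assumptions
lam, gamma, t_c, C = 1.0, 1.0, 1.0, 0.5
M, P = 8, 20                     # truncation: mu in [-M, M], Hermite index p <= P

# Toy Fourier coefficients f_l of the force (Eq. (13)); Hermitian so f(x) is real
def f_coeff(l):
    table = {1: 0.1 - 0.05j, -1: 0.1 + 0.05j, 2: 0.02j, -2: -0.02j}
    return table.get(l, 0.0)

n = np.arange(-M, M + 1)
A = np.array([[2j * np.pi * ni / (gamma * lam) * f_coeff(ni - mi) for mi in n] for ni in n])
B = np.diag(-2j * np.pi * n / (gamma * lam))
I = np.eye(2 * M + 1)

def Qm(p): return np.sqrt(p * C / t_c) * B        # Q_p^-  (Eq. (15))
def Q(p):  return A - (p / t_c) * I               # Q_p    (Eq. (16))
def Qp(p): return np.sqrt((p + 1) * C / t_c) * B  # Q_p^+  (Eq. (17))

# Downward iteration of the matrix continued fraction, Eq. (22), with S_P = 0
S = np.zeros_like(A)
for p in range(P - 1, -1, -1):
    S = -np.linalg.solve(Q(p + 1) + Qp(p + 1) @ S, Qm(p + 1))

# Solve (Q_0 + Q_0^+ S_0) c_0 = 0 subject to c_0^0 = 1/lambda (Eqs. (23)-(25)):
# replace the mu = 0 row of the system with the normalization constraint
T = Q(0) + Qp(0) @ S
T[M, :] = 0.0
T[M, M] = 1.0
rhs = np.zeros(2 * M + 1, dtype=complex)
rhs[M] = 1.0 / lam
c0 = np.linalg.solve(T, rhs)
c1 = S @ c0                                       # c_1 = S_0 c_0, Eq. (20)

# Average current, Eq. (28): <j> = (1/gamma)[ -sum_mu f_mu c_0^{-mu} + (C/t_c) c_1^0 ]
j = (-sum(f_coeff(mu) * c0[M - mu] for mu in range(-M, M + 1)) + (C / t_c) * c1[M]) / gamma
```

In practice, the truncation orders $M$ and $P$ are increased until $\langle j\rangle$ converges.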
Figure 4: $\langle j\rangle$, $D$, and $Pe$ as a function of $t_{c}$ for
different values of $m$ are shown in (a), (b), and (c), respectively. The
results of simulation (OD sim) and MCFM calculation (OD MCFM) for overdamped
case are also plotted as a function of $t_{c}$. MSD as a function of $t$ is
shown in (d) for different values of $t_{c}$ and for $m=0.5$. Inset of (d)
shows the exponent $\beta$ as a function of $t$. The other common parameters
are $\Delta=0.9$ and $Q=0.5$.
Proceeding in the same way as in the overdamped case described above, one can solve Eq. (8) for the steady-state probability distribution $P(x,v,\xi)$ and find the average particle current. The approximate steady-state solution of Eq. (8) can take the form
$P(x,v,\xi)=\phi_{0}(\xi)\psi_{0}(v)\sum_{r=0}^{\infty}\sum_{p=0}^{\infty}\sum_{\mu=-\infty}^{\infty}c_{p,r}^{\mu}e^{2\pi
i\mu x/\lambda}\phi_{p}(\xi)\psi_{r}(v).$ (29)
Here, $\psi_{r}(v)$ is the Hermite function given by
$\psi_{r}(v)=\dfrac{1}{\sqrt{\beta
2^{r}r!\sqrt{\pi}}}e^{\frac{-v^{2}}{2\beta^{2}}}H_{r}\left(\frac{v}{\beta}\right),$
(30)
with $\beta$ being a scaling parameter. Following the same method discussed
earlier, we get the recursion relation in $c_{p,r}^{\mu}$
$\begin{split}A_{p,r}\ c_{p,r-2}+&B_{p,r}\ c_{p,r-1}+\Gamma_{p,r}\ c_{p,r}+E_{p,r}\ c_{p,r+1}\\ &+Z_{p,r}\ c_{p-1,r-1}+\Theta_{p,r}\ c_{p+1,r-1}=0.\end{split}$ (31)
Here, $A,\ B,\ \Gamma,\ E,\ Z$ and $\Theta$ are matrices whose elements are
given by
$\displaystyle[A_{\mu,\nu}]_{p,r}$
$\displaystyle=-\frac{\gamma}{m}\sqrt{(r-1)r}\ \delta_{\mu,\nu},$
$\displaystyle[B_{\mu,\nu}]_{p,r}$ $\displaystyle=\frac{\sqrt{2r}}{\beta
m}f_{\mu-\nu}-\frac{i\nu k\beta\sqrt{r}}{\sqrt{2}}\ \delta_{\mu,\nu},$
$\displaystyle[\Gamma_{\mu,\nu}]_{p,r}$ $\displaystyle=-\left(\frac{\gamma
r}{m}+\frac{p}{t_{c}}\right)\ \delta_{\mu,\nu},$
$\displaystyle[E_{\mu,\nu}]_{p,r}$ $\displaystyle=-\frac{i\nu
k\beta\sqrt{r+1}}{\sqrt{2}}\ \delta_{\mu,\nu},$
$\displaystyle[Z_{\mu,\nu}]_{p,r}$
$\displaystyle=\frac{\alpha\sqrt{rp}}{m\beta}\ \delta_{\mu,\nu},$
$\displaystyle\text{and}\quad[\Theta_{\mu,\nu}]_{p,r}$
$\displaystyle=\frac{\alpha}{m\beta}\sqrt{r(p+1)}\ \delta_{\mu,\nu},$
respectively. $c_{p,r}$ is a column matrix given as
$c_{p,r}=\left[\cdots\ \ c_{p,r}^{-1}\ \ c_{p,r}^{0}\ \ c_{p,r}^{1}\ \ \cdots\
\ \right]^{T}.$
Now, by solving Eq. (31) and using the column vector $c_{p,r}$, one can
compute the steady state probability distribution and the average current.
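For reference, the block matrices listed above can be assembled directly from their definitions. In this sketch, the Fourier coefficients of the force, the truncation size $M$, and the velocity-space scaling $\beta$ are illustrative assumptions.

```python
import numpy as np

# Construction of the coefficient matrices of the underdamped recursion,
# Eq. (31), exactly as listed above; k = 2*pi/lambda, alpha and beta are
# the Hermite scaling parameters. Truncation and parameters are illustrative.
m_, gamma, t_c, C, lam = 1.0, 1.0, 1.0, 0.5, 1.0
k = 2.0 * np.pi / lam
alpha = np.sqrt(2.0 * C / t_c)
beta = 1.0                                   # velocity-space scaling (free choice)
M = 4
nu = np.arange(-M, M + 1)
I = np.eye(2 * M + 1)

def f_coeff(l):                              # toy Fourier coefficients of the force
    table = {1: 0.1 - 0.05j, -1: 0.1 + 0.05j}
    return table.get(l, 0.0)

F = np.array([[f_coeff(mi - ni) for ni in nu] for mi in nu])  # [F]_{mu,nu} = f_{mu-nu}

def A_mat(p, r): return -(gamma / m_) * np.sqrt((r - 1) * r) * I
def B_mat(p, r): return np.sqrt(2 * r) / (beta * m_) * F - 1j * k * beta * np.sqrt(r / 2.0) * np.diag(nu)
def G_mat(p, r): return -((gamma * r) / m_ + p / t_c) * I     # Gamma_{p,r}
def E_mat(p, r): return -1j * k * beta * np.sqrt((r + 1) / 2.0) * np.diag(nu)
def Z_mat(p, r): return alpha * np.sqrt(r * p) / (m_ * beta) * I
def T_mat(p, r): return alpha * np.sqrt(r * (p + 1)) / (m_ * beta) * I  # Theta_{p,r}
```

The resulting banded system in the two Hermite indices $(p,r)$ is then solved by the same continued-fraction strategy as in the overdamped case.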
We have also simulated the dynamics [Eq. (1)] using Heun's method. The simulation was run for $10^{5}$ seconds and averaged over $10^{4}$ realizations. The simulation results for the trajectory and the position and velocity
distributions of the particle for different values of $t_{c}$ are shown in
Fig. 2. For very small values of $t_{c}$, the particle is distributed uniformly throughout the potential, as shown in Fig. 2(d). Thus, for very small $t_{c}$, the particle is barely influenced by the barriers of the ratchet potential and hence is distributed uniformly in space. As a result, the velocity distribution is almost Gaussian [see Fig. 2(a)]. This can also be confirmed from the simulated trajectory of the particle, which shows no signature of trapping by the potential. This behavior can be understood as follows. In the steady state, the magnitude of the noise correlation of the OU process [Eq. (3)] varies inversely with the correlation time $t_{c}$, such that $\left\langle\xi^{2}(t)\right\rangle=\frac{C}{t_{c}}$. Hence, for a very small value of $t_{c}$, even though the noise correlation persists only for a very short interval of time, its intensity is very high. As a result, the magnitude of the random kicks on the particle is very large. Consequently, the particle does not feel the presence of the potential barrier and moves freely in both the forward and backward directions of the ratchet potential, resulting in a uniform distribution of particles in the $t_{c}\rightarrow 0$ limit. In this limit, the system behaves as if it is in the running state.
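A minimal Heun-scheme sketch of these simulations is given below; the sawtooth force, the OU discretization, and all parameter values are assumptions made for illustration.

```python
import numpy as np

# Minimal Heun-scheme sketch of Eq. (1) with OU noise xi(t); the sawtooth
# force and all parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
m, gamma, t_c, C = 1.0, 1.0, 1.0, 0.5
lam, Delta, Q = 1.0, 0.9, 0.5
dt, nsteps = 1e-3, 100_000

def force(x):
    a = 0.5 * (1.0 + Delta) * lam          # assumed sawtooth geometry
    xm = np.mod(x, lam)
    return np.where(xm < a, -Q / a, Q / (lam - a))

def drift(x, v, xi):
    # deterministic part of (dx/dt, dv/dt, dxi/dt)
    return v, (-gamma * v + force(x) + xi) / m, -xi / t_c

x = v = 0.0
xi = rng.normal(0.0, np.sqrt(C / t_c))     # stationary OU variance <xi^2> = C/t_c
noise_amp = np.sqrt(2.0 * C) / t_c         # so that the xi-diffusion is C/t_c^2
traj = np.empty(nsteps)
for i in range(nsteps):
    dW = rng.normal(0.0, np.sqrt(dt))
    fx, fv, fxi = drift(x, v, xi)
    # predictor (Euler step; the noise increment enters xi only)
    xp, vp, xip = x + fx * dt, v + fv * dt, xi + fxi * dt + noise_amp * dW
    gx, gv, gxi = drift(xp, vp, xip)
    # corrector (trapezoidal average of drifts, same noise increment)
    x += 0.5 * (fx + gx) * dt
    v += 0.5 * (fv + gv) * dt
    xi += 0.5 * (fxi + gxi) * dt + noise_amp * dW
    traj[i] = x
j_avg = (traj[-1] - traj[0]) / (nsteps * dt)   # time-averaged current along one trajectory
```

Averaging `j_avg` over many independent realizations then gives the ensemble-averaged current $\langle j\rangle$.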
With further increase in $t_{c}$, the magnitude of the noise correlation
decreases and at the same time the duration of its persistence increases. The
particle starts getting more and more confined at the potential minima and
feels the influence of the barriers in both the forward and backward
directions of the ratchet potential. This is clearly reflected in the position distribution of the particle in Fig. 2(e), with the maximum probability of finding the particle at one of the potential minima. Due to the asymmetry of the potential, the particle on average makes more jumps
towards the forward direction as compared to the backward direction of the
potential. This can also be seen from the trajectory plotted in Fig. 2(h),
where there are sudden jumps and stable regions indicating the presence of
potential being felt by the particle. As a result, the velocity distribution
becomes non-Gaussian with exponential tails in both the directions [see Fig.
2(b)]. In this regime, the value of $\left\langle\xi^{2}(t)\right\rangle$ is
such that the particle becomes capable of overcoming the potential barrier in
the direction with gentler slope of the potential. Hence, on an average, a
non-zero particle current is expected. For very large $t_{c}$, the magnitude of the noise correlation [Eq. (3)] becomes very small while the correlation persists for a longer interval of time. Hence, the random kicks are very weak and not strong enough to make the particle escape from a potential minimum [see Figs. 2(f) and 2(i)]. This is why the velocity
distribution approaches a delta function centered at zero [Fig. 2(c)] for very
large $t_{c}$ value. In this limit, the system behaves as if it is trapped or
in the locked state. Thus, with increasing $t_{c}$, the system undergoes a transition from the running state to the locked state of particle transport.
Next, we have simulated the steady-state particle current $\langle j\rangle$
and diffusion coefficient $D$ for different values of $\Delta$, $t_{c}$, and
$Q$, which are shown in Fig. 3. The two dimensional (2D) plots of $\langle
j\rangle$ and $D$ as a function of $\Delta$ and $t_{c}$ are shown in Fig. 3(a)
and (c), respectively. Similarly, we have shown the 2D plots of $\langle
j\rangle$ and $D$ as a function of $\Delta$ and $Q$ in Fig. 3(b) and (d),
respectively. From these plots, it is observed that for a given spatial asymmetry of the potential, $\langle j\rangle$ shows a non-monotonic behavior with both $t_{c}$ and $Q$. Further, the maximum current is found to be sensitive to the spatial asymmetry of the potential and increases with the asymmetry parameter $\Delta$. In contrast, the diffusion coefficient shows a decreasing behavior with both $t_{c}$ and $Q$.
Figure 5: 2D plot of $\langle j\rangle$ and $D$ as a function of $t_{c}$ and
$m$ are shown in (a) and (b), respectively. The other common parameters are:
$\Delta=0.9\ \text{and}\ Q=0.5$. Figure 6: $\langle j\rangle$, $D$, and $Pe$ as a function of $m$ with $t_{c}=1.0$ for different values of $\Delta$ in (a), (c), and (e) and for $t_{c}=5$ in (b), (d), and (f), respectively. $Q=0.5$ is taken for all the cases.
Plots of $\langle j\rangle$, $D$, and $Pe$ as a function of $t_{c}$ are presented in Figs. 4(a), (b), and (c), respectively, for different values of $m$. For a given mass of the particle, $\langle j\rangle$ shows a non-monotonic behavior with $t_{c}$: it starts from zero, increases with $t_{c}$, attains a maximum for an intermediate range of $t_{c}$, and finally approaches zero again for larger $t_{c}$. With increase in
mass of the particle ($m$), the critical value of $t_{c}$ at which the current starts to flow shifts to the right, reflecting that for larger $m$, a larger $t_{c}$ is required to obtain a net current in the ratchet. At the
same time, the maximum current gets suppressed with $m$ and shifts towards
larger values of $t_{c}$. This implies that for a larger mass, the noise correlation in the dynamics has to persist for a longer interval of time to obtain the maximum current. However, $D$ shows a decaying behavior with $t_{c}$, as seen in Fig. 4(b). For very small values of $t_{c}$, $D$ takes its maximum value, which persists as long as there is no net current in the ratchet. Beyond the critical $t_{c}$ at which the current starts to flow, $D$ decays before approaching zero, as expected. It is observed that, at the $t_{c}$ for which the current is maximum, the diffusion shows a minimum-like feature. Further, the diffusion gets suppressed with the mass of the particle. As the effectiveness of the transport can be judged from the Peclet number, we present $Pe$ as a function of $t_{c}$ in Fig. 4(c). $Pe$ follows the same behavior
as that of $\langle j\rangle$, confirming a coherent or reliable transport in
the intermediate range of $t_{c}$.
In order to further understand the diffusive behavior of the transport, we have simulated the mean square displacement (MSD), $\langle x^{2}\rangle-\langle x\rangle^{2}$, and plotted it as a function of $t$ in Fig. 4(d) for different values of $t_{c}$. For a particular $t_{c}$, in the lower
time regime, MSD is found to be proportional to $t^{4}$, hence, the transport
is super-diffusive. On the other hand, in the long time regime, the transport
is diffusive in nature as the MSD is proportional to $t$. With increase in
$t_{c}$, the MSD gets suppressed and approaches zero for very large $t_{c}$
values, reflecting a kind of trapping of the particle for longer persistence
of noise correlation in the dynamics. To have a better understanding of the
dependence of MSD with time, we introduce a parameter $\beta$ such that
$\text{MSD}\propto t^{\beta}$. The variation of $\beta$ with time is shown in
the inset of Fig. 4(d). In the short-time regime, $\beta$ is found to be $4$, which confirms the super-diffusive transport of the particle at short timescales. In the long-time limit, $\beta$ is one, reflecting the usual steady-state diffusive behavior of the particle. On the other hand, $\langle x^{2}\rangle$ shows different features. In the short-time limit, it is ballistic, i.e., $\langle x^{2}\rangle\propto t^{2}$, irrespective of the persistence duration of the noise correlation in the dynamics. However, in the long-time limit or stationary state, $\langle x^{2}\rangle$ depends on the correlation time: it is diffusive (i.e., $\langle x^{2}\rangle\propto t$) in the lower $t_{c}$ limit, ballistic (i.e., $\langle x^{2}\rangle\propto t^{2}$) in the intermediate $t_{c}$ regime, and non-diffusive (i.e., independent of $t$) in the larger $t_{c}$ limit. The different behaviors of $\langle x^{2}\rangle$ and the MSD in the steady state are due to the non-zero value of $\langle x\rangle$, which is proportional to time.
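The exponent $\beta$ in $\text{MSD}\propto t^{\beta}$ can be extracted by a log-log fit over a time window. The sketch below drops the potential (a free inertial particle driven by OU noise, an assumption made for brevity), which is already enough to reproduce the short-time $\beta\approx 4$ and long-time $\beta\approx 1$ regimes quoted above.

```python
import numpy as np

# MSD and local-exponent sketch for a free inertial particle driven by OU
# noise (no ratchet potential); parameters are illustrative assumptions.
rng = np.random.default_rng(7)
m, gamma, t_c, C = 0.5, 1.0, 1.0, 0.5
dt, nsteps, nens = 1e-3, 50_000, 2_000

x = np.zeros(nens)
v = np.zeros(nens)                          # starting at rest gives MSD ~ t^4 at short t
xi = rng.normal(0.0, np.sqrt(C / t_c), nens)
amp = np.sqrt(2.0 * C) / t_c
msd = np.empty(nsteps)
for i in range(nsteps):
    x = x + v * dt
    v = v + (-gamma * v + xi) / m * dt
    xi = xi - xi / t_c * dt + amp * rng.normal(0.0, np.sqrt(dt), nens)
    msd[i] = np.var(x)                      # <x^2> - <x>^2 over the ensemble
t = dt * np.arange(1, nsteps + 1)

def local_exponent(i0, i1):
    # beta = d log(MSD) / d log(t), estimated by a log-log least-squares fit
    return np.polyfit(np.log(t[i0:i1]), np.log(msd[i0:i1]), 1)[0]

beta_short = local_exponent(5, 50)          # t << t_c, m/gamma: beta close to 4
beta_long = local_exponent(30_000, 50_000)  # t >> t_c: ordinary diffusion, beta close to 1
```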
Figure 7: The simulation results of $\langle j\rangle$ as a function of
$t_{c}$ both for overdamped (OD sim) and underdamped (UD sim) (with $m=0.5$ )
cases along with the MCFM calculation (OD MCFM) for overdamped case are
presented in (a), (b), (c), and (d) for different values of $\gamma$. The
other common parameters are $\Delta=0.9$ and $Q=0.5$.
In Figs. 5(a) and (b), we depict 2D plots of $\langle j\rangle$ and $D$, respectively, as functions of $t_{c}$ and $m$. It is observed that in the low-$t_{c}$ limit the current decreases monotonically with $m$, whereas in the intermediate regime of $t_{c}$ it shows a non-monotonic behavior with $m$. Similarly, the diffusion coefficient $D$ decreases with $m$ in the low-$t_{c}$ regime and increases with $m$ in the high-
$t_{c}$ regime. In order to understand this unusual behavior of the current with $m$ in the intermediate range of $t_{c}$, we have plotted $\langle j\rangle$, $D$, and $Pe$ versus $m$ in Fig. 6 for different values of $\Delta$, each for two different values of $t_{c}$. It is observed that the current increases
with $m$, reaching a maximum in the intermediate range of $m$; this maximum value increases with the asymmetry of the potential. With further increase in $t_{c}$, even though the magnitude of the current decreases, it shows a pronounced maximum as a function of $m$ [see Fig. 6(b)].
$D$ shows a minimum roughly around the same point where the current is maximum. It starts increasing as a function of $m$ from the point where the current starts decreasing and finally shows a maximum where the current approaches zero, as expected. Even though $\langle j\rangle$ shows a prominent maximum as a function of $m$, $Pe$ does not follow the behavior of $\langle j\rangle$; rather, it decreases with increasing $m$, reflecting the degradation of the coherence of the particle transport.
Finally, the simulation results for $\langle j\rangle$ as a function of $t_{c}$, for both the overdamped dynamics (excluding the inertial term) and the underdamped dynamics (with inertia), for different values of $\gamma$, along with the MCFM result, are shown in Fig. 7. It is observed that the overdamped current is always larger than the underdamped current, as expected. With increasing $\gamma$, the underdamped current approaches the overdamped current and matches it exactly for very large $\gamma$. Further, the simulation result for the average current in the overdamped limit is in good agreement with the analytical computation.
## IV SUMMARY
In summary, we have studied the inertial active dynamics of an Ornstein-Uhlenbeck particle in a sawtooth ratchet potential. In particular, we have investigated the transport properties as well as the coherence of its transport
with the help of both analytical approximation methods and computer
simulations. From the simulation results, it is inferred that the potential asymmetry is the key ingredient for obtaining a net current in the ratchet. One of our interesting findings is that the particle gets trapped in one of the minima of the ratchet potential for long durations of self-propulsion/activity in the medium, which is reflected in the simulated particle trajectories as well as in the probability distribution functions.
In the regime of short persistence of activity in the medium, the particle does not feel the influence of the potential barrier in either direction. As a result, it fluctuates randomly and becomes uniformly distributed throughout the ratchet. In this regime, the particle behaves like an inertial passive Brownian particle, as reflected in the Gaussian velocity distribution, the uniform position
distribution, and the zero average current. With further increase in the duration of self-propulsion or activity in the medium, the particle starts to feel the influence of the barrier in both directions of the ratchet. As a consequence, the free diffusion decreases and, due to the asymmetry of the
potential, a unidirectional net current starts to develop in the system. This
can be understood from the particle trajectories, where there are intermediate
abrupt jumps of the particle. Further, for very long duration of persistence
of activity in the medium, the particle gets locked or trapped in one of the
minima of the potential and fluctuates randomly around it. As a result, both the diffusion and the average particle current vanish.
The particle current is found to be non-monotonic in the noise correlation time, or self-propulsion time: it increases, attains a maximum, and then decreases as the correlation time increases. From this behavior, it is confirmed that the net transport can be controlled by fine-tuning the
persistence of activity in the medium. For an intermediate range of the persistence of activity, the particle current shows a maximum as a function of the mass of the particle, which is quite unusual, and the absolute value of this maximum is quite sensitive to the potential asymmetry. Interestingly, it is
observed that even though the particle current increases with mass in a certain regime of the parameter space, the Peclet number does not follow the behavior of the current and instead decreases with increasing mass of the particle, confirming
the degradation of the reliability of transport. Moreover, we do not observe any current reversal in our model of the kind discussed in Refs. [51, 52, 6]. We believe that
the results obtained in our model can be experimentally realized in active matter systems in the regime of high Reynolds number. Further, it would be interesting to extend this model to investigate collective behavior and to characterize the rectified motion in terms of stochastic energetics.
## V Acknowledgement
M.S. acknowledges financial support through a start-up grant from the UGC Faculty Recharge Programme, Govt. of India.
## References
* Einstein [1906] A. Einstein, Zur theorie der brownschen bewegung, Annalen der physik 324, 371 (1906).
* Lemons and Gythiel [1997] D. S. Lemons and A. Gythiel, Paul langevin’s 1908 paper “on the theory of brownian motion” [“sur la théorie du mouvement brownien,” c. r. acad. sci. (paris) 146, 530–533 (1908)], Am. J. Phys. 65, 1079 (1997).
* Reimann [2002] P. Reimann, Brownian motors: noisy transport far from equilibrium, Phys. Rep. 361, 57 (2002).
* Astumian [1997] R. D. Astumian, Thermodynamics and kinetics of a brownian motor, Science 276, 917 (1997).
* Magnasco [1993] M. O. Magnasco, Forced thermal ratchets, Phys. Rev. Lett. 71, 1477 (1993).
* Reichhardt and Reichhardt [2017] C. O. Reichhardt and C. Reichhardt, Ratchet effects in active matter systems, Annu. Rev. Condens. Matter Phys. 8, 51 (2017).
* Angelani _et al._ [2011] L. Angelani, A. Costanzo, and R. Di Leonardo, Active ratchets, Europhys. Lett. 96, 68002 (2011).
* Galajda _et al._ [2007] P. Galajda, J. Keymer, P. Chaikin, and R. Austin, A wall of funnels concentrates swimming bacteria, J. Bacteriol. 189, 8704 (2007).
* Kaiser _et al._ [2014] A. Kaiser, A. Peshkov, A. Sokolov, B. ten Hagen, H. Löwen, and I. S. Aranson, Transport powered by bacterial turbulence, Phys. Rev. Lett. 112, 158101 (2014).
* Koumakis _et al._ [2013] N. Koumakis, A. Lepore, C. Maggi, and R. Di Leonardo, Targeted delivery of colloids by swimming bacteria, Nat. Commun. 4, 2588 (2013).
* Bricard _et al._ [2013] A. Bricard, J.-B. Caussin, N. Desreumaux, O. Dauchot, and D. Bartolo, Emergence of macroscopic directed motion in populations of motile colloids, Nature 503, 95 (2013).
* Kümmel _et al._ [2013] F. Kümmel, B. ten Hagen, R. Wittkowski, I. Buttinoni, R. Eichhorn, G. Volpe, H. Löwen, and C. Bechinger, Circular motion of asymmetric self-propelling particles, Phys. Rev. Lett. 110, 198302 (2013).
* Bechinger _et al._ [2016] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. 88, 045006 (2016).
* Gompper _et al._ [2020] G. Gompper, R. G. Winkler, T. Speck, A. Solon, C. Nardini, F. Peruani, H. Löwen, R. Golestanian, U. B. Kaupp, L. Alvarez, T. Kiørboe, E. Lauga, W. C. K. Poon, A. DeSimone, S. Muiños-Landin, A. Fischer, N. A. Söker, F. Cichos, R. Kapral, P. Gaspard, M. Ripoll, F. Sagues, A. Doostmohammadi, J. M. Yeomans, I. S. Aranson, C. Bechinger, H. Stark, C. K. Hemelrijk, F. J. Nedelec, T. Sarkar, T. Aryaksama, M. Lacroix, G. Duclos, V. Yashunsky, P. Silberzan, M. Arroyo, and S. Kale, The 2020 motile active matter roadmap, J. Phys.: Condens. Matter 32, 193001 (2020).
* De Magistris and Marenduzzo [2015] G. De Magistris and D. Marenduzzo, An introduction to the physics of active matter, Physica A 418, 65 (2015).
* Berg and Brown [1972] H. C. Berg and D. A. Brown, Chemotaxis in Escherichia coli analysed by three-dimensional tracking, Nature 239, 500 (1972).
* Jones _et al._ [2021] C. Jones, M. Gomez, R. M. Muoio, A. Vidal, R. A. Mcknight, N. D. Brubaker, and W. W. Ahmed, Stochastic force dynamics of the model microswimmer Chlamydomonas reinhardtii: Active forces and energetics, Phys. Rev. E 103, 032403 (2021).
* Howse _et al._ [2007] J. R. Howse, R. A. Jones, A. J. Ryan, T. Gough, R. Vafabakhsh, and R. Golestanian, Self-Motile Colloidal Particles: From Directed Propulsion to Random Walk, Phys. Rev. Lett. 99, 048102 (2007).
* Mallory _et al._ [2018] S. A. Mallory, C. Valeriani, and A. Cacciuto, An active approach to colloidal self-assembly, Annu. Rev. Phys. Chem. 69, 59 (2018).
* Scholz _et al._ [2018] C. Scholz, M. Engel, and T. Pöschel, Rotating robots move collectively and self-organize, Nat. Commun. 9, 1 (2018).
* Palagi and Fischer [2018] S. Palagi and P. Fischer, Bioinspired microrobots, Nat. Rev. Mater. 3, 113 (2018).
* Dauchot and Démery [2019] O. Dauchot and V. Démery, Dynamics of a self-propelled particle in a harmonic trap, Phys. Rev. Lett. 122, 068002 (2019).
* Cavagna _et al._ [2015] A. Cavagna, L. Del Castello, I. Giardina, T. Grigera, A. Jelic, S. Melillo, T. Mora, L. Parisi, E. Silvestri, M. Viale, _et al._ , Flocking and Turning: a New Model for Self-Organized Collective Motion, J. Stat. Phys. 158, 601 (2015).
* Jhawar _et al._ [2020] J. Jhawar, R. G. Morris, U. Amith-Kumar, M. Danny Raj, T. Rogers, H. Rajendran, and V. Guttal, Noise-induced schooling of fish, Nat. Phys. 16, 488 (2020).
* ten Hagen _et al._ [2009] B. ten Hagen, S. van Teeffelen, and H. Lowen, Non-gaussian behaviour of a self-propelled particle on a substrate, Condens. Matter Phys. 12, 725 (2009).
* ten Hagen _et al._ [2011] B. ten Hagen, S. van Teeffelen, and H. Löwen, Brownian motion of a self-propelled particle, J. Phys.: Condens. Matter 23, 194119 (2011).
* Malakar _et al._ [2020] K. Malakar, A. Das, A. Kundu, K. V. Kumar, and A. Dhar, Steady state of an active brownian particle in a two-dimensional harmonic trap, Phys. Rev. E 101, 022610 (2020).
* Löwen [2020] H. Löwen, Inertial effects of self-propelled particles: From active brownian to active langevin motion, J. Chem. Phys. 152, 040901 (2020).
* Lehle and Peinke [2018] B. Lehle and J. Peinke, Analyzing a stochastic process driven by ornstein-uhlenbeck noise, Phys. Rev. E 97, 012113 (2018).
* Bonilla [2019] L. L. Bonilla, Active ornstein-uhlenbeck particles, Phys. Rev. E 100, 022601 (2019).
* Martin _et al._ [2021] D. Martin, J. O’Byrne, M. E. Cates, É. Fodor, C. Nardini, J. Tailleur, and F. van Wijland, Statistical mechanics of active ornstein-uhlenbeck particles, Phys. Rev. E 103, 032607 (2021).
* Cates [2012] M. E. Cates, Diffusive transport without detailed balance in motile bacteria: does microbiology need statistical physics?, Rep. Prog. Phys. 75, 042601 (2012).
* Cates and Tailleur [2013] M. E. Cates and J. Tailleur, When are active brownian particles and run-and-tumble particles equivalent? consequences for motility-induced phase separation, Europhys. Lett. 101, 20010 (2013).
* Koumakis _et al._ [2014] N. Koumakis, C. Maggi, and R. Di Leonardo, Directed transport of active particles over asymmetric energy barriers, Soft Matter 10, 5695 (2014).
* He _et al._ [2020] Y.-f. He, B.-q. Ai, C.-x. Dai, C. Song, R.-q. Wang, W.-t. Sun, F.-c. Liu, and Y. Feng, Experimental demonstration of a dusty plasma ratchet rectification and its reversal, Phys. Rev. Lett. 124, 075001 (2020).
* Ghosh _et al._ [2013] P. K. Ghosh, V. R. Misko, F. Marchesoni, and F. Nori, Self-propelled janus particles in a ratchet: Numerical simulations, Phys. Rev. Lett. 110, 268301 (2013).
* Pototsky _et al._ [2013] A. Pototsky, A. M. Hahn, and H. Stark, Rectification of self-propelled particles by symmetric barriers, Phys. Rev. E 87, 042124 (2013).
* Bisht and Marathe [2020] K. Bisht and R. Marathe, Rectification of twitching bacteria through narrow channels: A numerical simulations study, Phys. Rev. E 101, 042409 (2020).
* Potiguar _et al._ [2014] F. Q. Potiguar, G. A. Farias, and W. P. Ferreira, Self-propelled particle transport in regular arrays of rigid asymmetric obstacles, Phys. Rev. E 90, 012307 (2014).
* Wan _et al._ [2008] M. B. Wan, C. J. Olson Reichhardt, Z. Nussinov, and C. Reichhardt, Rectification of swimming bacteria and self-driven particle systems by arrays of asymmetric barriers, Phys. Rev. Lett. 101, 018102 (2008).
* Mijalkov and Volpe [2013] M. Mijalkov and G. Volpe, Sorting of chiral microswimmers, Soft Matter 9, 6376 (2013).
* Angelani _et al._ [2009] L. Angelani, R. Di Leonardo, and G. Ruocco, Self-starting micromotors in a bacterial bath, Phys. Rev. Lett. 102, 048104 (2009).
* McDermott _et al._ [2016] D. McDermott, C. J. Olson Reichhardt, and C. Reichhardt, Collective ratchet effects and reversals for active matter particles on quasi-one-dimensional asymmetric substrates, Soft Matter 12, 8606 (2016).
* Sándor _et al._ [2017] C. Sándor, A. Libál, C. Reichhardt, and C. J. O. Reichhardt, Collective transport for active matter run-and-tumble disk systems on a traveling-wave substrate, Phys. Rev. E 95, 012607 (2017).
* Lambert _et al._ [2010] G. Lambert, D. Liao, and R. H. Austin, Collective escape of chemotactic swimmers through microscopic ratchets, Phys. Rev. Lett. 104, 168102 (2010).
* Drocco _et al._ [2012] J. A. Drocco, C. J. Olson Reichhardt, and C. Reichhardt, Bidirectional sorting of flocking particles in the presence of asymmetric barriers, Phys. Rev. E 85, 056102 (2012).
* Nagai _et al._ [2015] K. H. Nagai, Y. Sumino, R. Montagne, I. S. Aranson, and H. Chaté, Collective motion of self-propelled particles with memory, Phys. Rev. Lett. 114, 168001 (2015).
* Sharma and Velev [2015] R. Sharma and O. D. Velev, Remote steering of self-propelling microcircuits by modulated electric field, Adv. Funct. Mater. 25, 5512 (2015).
* Ivlev _et al._ [2015] A. V. Ivlev, J. Bartnick, M. Heinen, C.-R. Du, V. Nosenko, and H. Löwen, Statistical mechanics where newton’s third law is broken, Phys. Rev. X 5, 011035 (2015).
* Jung _et al._ [1996] P. Jung, J. G. Kissner, and P. Hänggi, Regular and chaotic transport in asymmetric periodic potentials: Inertia ratchets, Phys. Rev. Lett. 76, 3436 (1996).
# MedalCare-XL: 16,900 healthy and pathological 12 lead ECGs obtained through
electrophysiological simulations
Karli Gillette (Gottfried Schatz Research Center: Division of Medical Physics and Biophysics, Medical University of Graz, Graz, Austria), Matthias A.F. Gsell (Gottfried Schatz Research Center, Medical University of Graz), Claudia Nagel (Institute of Biomedical Engineering, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany), Jule Bender (Institute of Biomedical Engineering, KIT), Benjamin Winkler (Physikalisch-Technische Bundesanstalt, National Metrology Institute, Berlin, Germany), Steven E. Williams (King’s College London, London, United Kingdom; University of Edinburgh, Edinburgh, United Kingdom), Markus Bär (Physikalisch-Technische Bundesanstalt, Berlin), Tobias Schäffter (Physikalisch-Technische Bundesanstalt, Berlin; King’s College London; Biomedical Engineering, Technische Universität Berlin, Einstein Centre Digital Future), Olaf Dössel (Institute of Biomedical Engineering, KIT), Gernot Plank (Gottfried Schatz Research Center, Medical University of Graz), Axel Loewe (Institute of Biomedical Engineering, KIT)

Corresponding authors: Axel Loewe (publications@ibt.kit.edu), Gernot Plank<EMAIL_ADDRESS>
###### Abstract
Mechanistic cardiac electrophysiology models allow for personalized
simulations of the electrical activity in the heart and the ensuing
electrocardiogram (ECG) on the body surface. As such, synthetic signals
possess known ground truth labels of the underlying disease and can be
employed for validation of machine learning ECG analysis tools in addition to
clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical
data or even replace them completely during training leading to improved
performance on real-world clinical test data.
We thus generated a novel synthetic database comprising a total of 16,900 12
lead ECGs based on electrophysiological simulations equally distributed into
healthy control and 7 pathology classes. The pathological case of myocardial
infarction had 6 sub-classes. A comparison of extracted features between the
virtual cohort and a publicly available clinical ECG database demonstrated
that the synthetic signals represent clinical ECGs for healthy and
pathological subpopulations with high fidelity. The ECG database is split into
training, validation, and test folds for development and objective assessment
of novel machine learning algorithms.
## Background & Summary
The 12 lead ECG is a standard non-invasive clinical tool for the diagnosis and
long-term monitoring of cardiovascular disease. To support cardiac disease
classification and interpretation of 12 lead ECGs in clinical practice,
algorithms based on machine learning are increasingly utilized. Training of
these algorithms requires large databases of 12 lead ECGs that have been
labeled according to desired disease classifications with high accuracy and
represent the target population. The most extensive publicly available
database for such purpose to date is PTB-XL [1].
Clinical 12 lead ECG databases like PTB-XL, however, have several limitations
reducing efficacy of machine learning algorithms [2]. As the databases are
typically attained from multiple medical centers, different filtering levels
may be applied to reduce noise. Labeling uncertainties may arise due to
differences in expertise or judgment between clinicians. Patient enrollment
can also lead to both gender bias [3] and uneven representation of certain
cardiac diseases [4]. Furthermore, such databases provide limited insight into
the underlying mechanisms of cardiovascular disease. Databases of synthetic
ECGs have the potential to either complement and enrich [5, 6], or in the long
run to even replace [7], clinical datasets to overcome such limitations.
Currently, no sizeable and open synthetic ECG databases are available due to
the high computational cost and limitations in modeling complete four-chamber
cardiac electrophysiology _in silico_ at scale. While four-chamber cohorts
exist for the modeling of cardiac electrophysiology [8], such cohorts are not
suited for the generation of large ECG databases due to a lack of controllable
electrophysiology or limited anatomical variation. Separate models of atrial
and ventricular electrophysiology that are individually more detailed and
steerable can be later joined together to capture the P wave and the QRST
complex within the 12 lead ECG, respectively, and overcome such limitations.
We thus aimed to assemble the first public database of labeled synthetic 12
lead ECGs by joining two independent multi-scale models of atrial and
ventricular electrophysiology used to compute P waves and QRST complexes,
respectively. This approach provides a complete chain of traceability from the
anatomical and electrophysiological input parameters of the model to the final
12 lead ECGs. Common diseases were modeled mechanistically in addition to
normal healthy control within the synthetic database. Within the ventricular-
torso model, the pathologies of myocardial infarction (MI) and complete bundle
branch block of both the left ventricle (LBBB) and the right ventricle (RBBB)
were modeled. The MI class comprised 6 sub-classes pertaining to the three
predominant arteries of right-anterior descending (RAD), left anterior
descending (LAD), and left circumflex (LCX) [9] each with two different
transmural extents. The diseases fibrotic atrial cardiomyopathy (FAM), complete
interatrial conduction block (IAB) and left atrial enlargement (LAE) were
modeled within the atria. Also, 1st degree AV block (AVB) was modeled as an
atrio-ventricular (AV) conduction-based disease. In this way, the chosen
pathologies cover a wide range of both atrial and ventricular diseases
representing conduction disturbances as well as structural remodeling for
which established modeling approaches published in previous work could be
resorted to. A total of 16,900 synthetic ECGs equally distributed into the 8
groups (healthy control and 7 cardiac pathologies) were made publicly
available in the MedalCare-XL database. This MedalCare-XL dataset is publicly
available under the Creative Commons Attribution 4.0 International license
[10]. Thus, we provide a large and balanced ECG dataset with precisely known
ground truth labels of the underlying pathology as derived from the
mechanistic multi-scale simulations.
Validation of the synthetic ECG database was performed using two approaches to
analyze to what extent the synthetic ECG database could represent clinical ECG
databases. First, we tested the MedalCare-XL data set of simulated ECGs by
comparing the statistical distribution of crucial ECG features extracted from
MedalCare-XL with the same features taken from the clinical PTB-XL data base
for normal healthy ECGs and for different pathology classes. The comparison
showed excellent qualitative agreement, while still exhibiting quantitative
differences that provide a starting point for future improvement of the
underlying models as well as of the quality of future simulation data bases.
Second, two clinical Turing tests were also conducted to evaluate the ability
of the generated synthetic ECG signals to represent clinical signals
undergoing ECG diagnostics by cardiologists. The first test required trained
cardiologists to determine the origin of both measured and simulated 12 lead
ECGs under normal healthy control. The second test additionally involved
pathology classification. Both tests were performed on a subset of 50
synthetic ECG signals extracted from the database and mixed with 50 clinical
signals taken from PTB-XL [1]. Altogether, the MedalCare-XL data base provides
the first example of a large-scale data set of physiologically realistic
simulated ECGs.
## Methods
We separate the genesis of the 12 lead ECG into P waves and the QRST complex,
modeled by two separate atrial and ventricle-torso models. Generation of the
anatomical model cohorts and the simulation of electrophysiology to mimic a
large patient population is described for both the atrial and ventricular
models. Having run single beat simulations for P waves and QRST complexes
separately in the two independent models, both signal parts had to be merged
in a post-processing step to obtain an ECG of a full heart cycle comprising
one P wave, one QRS complex and one T wave. Subsequently, the single heartbeat
was repeated with varying RR intervals to account for heart rate variability
(HRV) to obtain a time series signal of 10 $\mathrm{s}$ length. An overview of the pipeline for generating the synthetic 12 lead ECG database is shown in Figure 1.
### Anatomical Model Populations
#### Ventricles
A cohort of anatomically-specific ventricular-torso models was generated for
13 healthy subjects (8 M, 5 F) ranging from 30 to 65 years of age. All
subjects were part of a clinical study approved by ethical review board at the
Medical University of Graz (EKNr: 24–126 ex 11/12). Written and informed
consent for each subject was attained at the time of the study. Two separate
MRI scans of the full torso and whole heart were sequentially acquired using
standardized protocols at 3T (Magnetom Skyra, Siemens Healthcare, Erlangen,
Germany). The torso MRI (1.3 x 1.3 x 3.0 ${\mathrm{mm}}^{3}$) was acquired in
four overlapping stacks using a non-ECG gated 3D T1-weighted gradient-echo
sequence. The whole heart MRI (0.7 x 0.7 x 0.7 ${\mathrm{mm}}^{3}$) was
acquired using an ECG-gated, fat-saturated, T2-prepared, isotropic 3D
gradient-echo sequence. Respiratory navigators were employed to gate the MR-
acquisition under free-breathing to end-expiration. MRI-compatible electrodes
for recording the 12 lead ECG of each subject were left intact during image
acquisition. Intensity thresholding techniques implemented in _Seg3D_ [11]
were used to segment each torso MRI into heart, lungs, and general torso
tissue. Segmentation of the cardiac MRI was automatically performed using a
two-kernel convolutional neural network. The network was tailored for MRIs
from the original network implemented for computed tomography images [12].
Segmented structures included blood pools, ventricles, and general atrial
tissue. To automatically register the four-chamber heart segmentation into the
torso, an iterative closest point algorithm was utilized in _Seg3D_ [11,
13]. Anatomical meshes were generated automatically from the joint
segmentations using the Tarantula software meshing package [14]. Target
resolutions within the cardiac and torso surfaces of $1.2\text{\,}\mathrm{mm}$
and $4.0\text{\,}\mathrm{mm}$ were prescribed, respectively. All models within
the cohort were equipped with universal ventricular coordinates (UVCs) to
allow for automated manipulation of all geometric-based entities [15, 16]. The
entire framework for the generation of the ventricular-torso model cohort is
described in detail in Gillette et al.[15]. The ventricular-torso model cohort
comprising geometries $\Gamma_{V,i},i\in[1,13]$ is visualized in Figure 2.
#### Atria
An overview of the anatomical model cohort generated for the atrial
simulations is shown in Figure 3. A total of 125 anatomical models
$\Gamma_{A,h,i},i\in[1,80]$ and $\Gamma_{A,LAE,i},i\in[1,45]$ of the atrial
endocardium were derived from a bi-atrial statistical shape model [17, 18].
The endocardial surfaces were augmented with a homogeneous wall thickness of
$3\text{\,}\mathrm{mm}$, rule-based myocardial fiber orientation, tags for
anatomical structures and interatrial connections as described by Azzolin et
al.[19, 20]. Out of these 125 geometries, 80 models exhibited left and right
atrial volumes in physiological ranges reported for healthy subjects [21]. In
these geometries, 10 different fractions from 0 to 45 % of the atrial
myocardial tissue volume were additionally replaced by fibrotic patches as
described previously [22] to model atrial cardiomyopathy. The remaining 45
anatomical models were generated by constraining the coefficients of the
statistical shape model such that left atrial volumes were increased to value
ranges typically observed in left atrial enlargement patients [21].
Additionally, 25 torso geometries $\Gamma_{T,i},i\in[1,25]$ were obtained by
modifying the coefficients of the two leading eigenmodes in the human body
statistical shape model constructed by Pishchulin et al. [23]. In this way,
height, weight and gender differences were represented in the anatomical torso
model cohort. By applying random rotation angles
$\alpha_{x},\alpha_{y},\alpha_{z}$ and translation parameters
$t_{x},t_{y},t_{z}$ in ranges summarized in Table 4 to the atrial geometry,
heart location and orientation variability were additionally accounted for in
the virtual population.
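The random placement of the atria within the torso described above can be sketched as a rigid transform of the mesh node coordinates. The angle and translation ranges below are illustrative placeholders (the actual ranges are given in Table 4), and `atrial_nodes` stands in for real mesh coordinates.

```python
import numpy as np

def rigid_transform(nodes, alpha_x, alpha_y, alpha_z, t):
    """Rotate node coordinates (N x 3) about the x, y, z axes and translate.

    Angles are in radians; t is a length-3 translation vector.
    """
    cx, sx = np.cos(alpha_x), np.sin(alpha_x)
    cy, sy = np.cos(alpha_y), np.sin(alpha_y)
    cz, sz = np.cos(alpha_z), np.sin(alpha_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx                                  # combined rotation
    return nodes @ R.T + np.asarray(t)

rng = np.random.default_rng(0)
atrial_nodes = rng.standard_normal((100, 3))           # stand-in geometry
angles = rng.uniform(-np.pi / 12, np.pi / 12, size=3)  # illustrative ranges
translation = rng.uniform(-10.0, 10.0, size=3)         # mm, illustrative
moved = rigid_transform(atrial_nodes, *angles, translation)
```

Drawing fresh angles and translations per simulation run then yields the heart-location variability in the virtual population.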
### Simulation Protocol and Parameters
#### Ventricles
Under normal healthy control, activation of the ventricles was assumed to be
Durrer-based [24], where the His-Purkinje System was modeled assuming 5
fascicular sites of earliest breakthrough on a fast-conducting endocardium.
Three fascicular sites were placed in the left ventricle (LV) on the anterior
endocardium $\vec{x}_{lv,ant}$, posterior endocardium $\vec{x}_{lv,post}$, and
the septum $\vec{x}_{lv,sept}$. Activation of the right ventricle (RV) was
controlled using a site corresponding to the moderator band
$\vec{x}_{rv,mod}$. An additional site $\vec{x}_{rv,sept}$ was also placed on
the right-ventricular septum. All fascicular sites were defined in UVCs. The
RV moderator band was placed in the middle of the RV free wall. The transmural
depth of the remaining fascicular sites was assumed to be constant at $20\,\%$
of the ventricular free wall. The fascicles were assumed to be of disc-like
shape with a transmural thickness of $0.5\,\%$ of the ventricular wall, and a
radius controlled through additional parameter $\vec{r}$ that related to
endocardial extent. Activation was assumed to be simultaneous, apart from a
prescribed delay $\vec{t}_{mod}$ in the activation of the RV moderator band
site.
To represent the fast spread of conduction on the endocardial surface of the ventricles mediated by the His-Purkinje System, a fast-conducting endocardium was included that spanned from the middle $10\,\%$ to $90\,\%$ of the
ventricular mesh along the apico-basal direction. Details of the His-Purkinje
representation are available in Gillette et al. [15]. An isotropic conduction
velocity of $2.0\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ was prescribed
within the fast-conducting endocardium [25].
Myocardial fiber directions were applied using a rule-based method [26] that
assumed principal fiber directions rotate radially from $60.0^{\circ}$ on the
endocardium to the epicardium $-60.0^{\circ}$ [27]. Corresponding sheet fiber
directions of $-65.0^{\circ}$ and $25.0^{\circ}$ were applied, respectively
[27]. Conduction velocity along the principal direction of myocardial fibers
of $0.6\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ was applied with an off-
axis conduction velocity ratio of 4:2:1 [28]. Conductivity within the
myocardium was set according to Roberts et al. [29]. All remaining
conductivities within the volume conductor containing lungs, blood pools,
atria, and general torso tissue were set according to Keller et al. [30].
Ventricular myocyte electrophysiology was modeled using the Mitchell-Schaeffer
ionic model $\vec{i}_{sinus}$ [31]. A resting membrane voltage of
$-86.2\text{\,}\mathrm{mV}$ and a peak action potential voltage of
$40\text{\,}\mathrm{mV}$ was assumed. Gradients in action potential duration
(APD) within the myocardium, needed to establish physiological T waves, were
generated by utilizing a known relationship between the $\tau_{close}$
parameter and APDs. A linear combination of the UVCs weighted with given
weights $\vec{q}_{w}$ was first computed at each node of the mesh. The
weighted UVC gradients were mapped into a range between $APD_{min}$ and
$APD_{max}$ to generate an APD map within the entirety of the ventricles.
Values for the gradients and the APD are derived from the literature [32, 33,
34]. In total, variation in electrophysiology during normal healthy control
was controlled through 20 variable parameters summarized in the parameter
vector $\vec{\omega}_{qrs}$ for the QRS complex:
$\displaystyle\vec{\omega}_{qrs}=\{\ \vec{x}_{lv,ant},\ \vec{x}_{lv,post},\ \vec{x}_{lv,sept},\ \vec{x}_{rv,mod},\ \vec{x}_{rv,sept},\ \vec{t}_{mod}\ \}$ (1)
and $\vec{\omega}_{t}$ for the T wave:
$\displaystyle\vec{\omega}_{t}=\{\ \vec{i}_{sinus},\ APD_{min},\ APD_{max},\ \vec{q}_{w}\ \}.$ (2)
All geometric-based parameters could be mapped into the mesh using $k$D-trees
implemented in _meshtool_ [35]. Parameters relating to both the QRS complex
and T wave under normal healthy control were varied within physiological ranges to generate variation in the QRST complex, as reported in Table 1 and in the Figures & Tables section, respectively. Sampling across the ranges for each of the parameters was performed using Latin hypercube sampling.
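Latin hypercube sampling places exactly one draw in each of $n$ equal strata per dimension, with strata permuted independently across dimensions. A minimal sketch, with made-up parameter ranges rather than those from the paper's tables:

```python
import numpy as np

def latin_hypercube(n, ranges, rng):
    """Draw n Latin-hypercube samples over the given (low, high) ranges.

    Each dimension is split into n equal strata; exactly one sample falls
    into every stratum, with strata independently permuted per dimension.
    """
    d = len(ranges)
    u = np.empty((n, d))
    for j in range(d):
        # stratum index + uniform offset inside the stratum, in [0, 1)
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    lo = np.array([r[0] for r in ranges])
    hi = np.array([r[1] for r in ranges])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
ranges = [(200.0, 250.0),   # e.g. APD_min (ms) -- illustrative
          (260.0, 320.0),   # e.g. APD_max (ms) -- illustrative
          (0.0, 15.0)]      # e.g. moderator band delay (ms) -- illustrative
samples = latin_hypercube(100, ranges, rng)
```

Compared to independent uniform draws, this guarantees even coverage of every marginal range with the same number of simulation runs.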
The two pathologies of BBB and MI were then modeled in the ventricles
alongside normal healthy control. Pathologies of LBBB and RBBB were included
in the ventricular-torso model. To cause a complete branch block, all
fascicular root sites within either the LV or the RV were neglected to inhibit
activation. All other relevant electrophysiology parameters were allowed to
vary in the same ranges as reported for normal healthy control above.
An MI stemming from occlusion of one of the three primary arteries of RAD, LAD,
and LCX was inserted into the ventricles. For each of the arteries
$\nu\in\\{RAD,LAD,LCX\\}$, a core center $\vec{x}_{\nu,mi}$ was defined using
the apico-basal and rotational UVC coordinate values that were bounded
according to recommendations of affected regions on the clinical 17-segment
model determined by the American Heart Association (AHA) [9]. Namely, the LAD
was restricted to the anterior-anteroseptal region spanning the entire apico-
basal extent. Both the RAD and LCX extended less apically, and were confined
to the lateral wall and the inferior-inferioseptal regions, respectively. For
each artery, the infarct was either assumed to span the entirety of the
ventricular wall or transmural extent of $30\%$ from the endocardium, giving
rise to a transmural extent value $\rho_{n,mi}$ such that $n\in\\{0.3,1.0\\}$.
The outer $5\,\%$ of the infarct area was allocated to be border zone (BZ),
and the remaining area was defined as the infarct core. All scars were assumed
to be left-sided, thus presenting only in LV.
From each infarct center, an Eikonal activation map was computed within the
ventricular geometry assuming the same conduction velocity and off-axis ratios
as assigned in the general myocardium during normal healthy control. An
infarct geometry was taken by thresholding the activation map according to the
computed time that generated a radius of distance $d_{co}$. The infarct core
was assumed to be electrically inert, while the conduction velocity in the BZ
was set to $0.15\text{\,}\mathrm{m}\text{\,}{\mathrm{s}}^{-1}$ with an off-
axis ratio of 1.0 [36]. The conductivity within the BZ was set to the same
values reported for the healthy myocardium. Parameters for the Mitchell-
Schaeffer ionic model within the BZ $\vec{i}_{bz}$ were manually adjusted using the single-cell tool _bench_, producing characteristic action potential changes during MI [37].
In total, the MI class comprised 6 sub-classes. The parameters varied to
induce various degrees and positions of MI $\vec{\omega}_{\nu,mi}$ included:
$\displaystyle\vec{\omega}_{\nu,mi}=\{\ \vec{x}_{\nu,mi},\ \rho_{n,mi},\ d_{co}\ \}:\ \nu\in\{RAD,LAD,LCX\},\ n\in\{0.3,1.0\}$ (3)
Parameters were similarly varied using Latin hypercube sampling through ranges based
on clinical observation for characteristic occlusion sites and action
potential changes (Table 3).
Transmembrane voltages were simulated using the efficient reaction-Eikonal
method in the monodomain formulation without diffusion [38]. Electrical
potentials of each electrode on the torso surface were recovered from
transmembrane voltages using lead fields precomputed once for every model
[39]. A ventricular 12 lead ECG (QRST complex) was generated by simulating a
ventricular beat for $450\text{\,}\mathrm{ms}$. All simulations were run using
the _CARPentry_ cardiac solver [40] and the _openCARP_ simulation framework
[41, 42] on a desktop machine with 24 cores, parallelized into 3 threads.
#### Atria
Local activation times in the atria were obtained by solving the Eikonal
equation with the Fast Iterative Method [43] and the Fast Marching Method
[44]. Excitation was initiated at the sinoatrial node with an exit site
located at the junction of crista terminalis and the superior vena cava.
Locally heterogeneous conduction velocity $\mathrm{CV_{[Region]}}$ and
anisotropy ratios $\mathrm{AR_{[Region]}}$ for
$\mathrm{[Region]}\in\{\text{bulk tissue, interatrial connections, crista terminalis, pectinate muscles, inferior isthmus}\}$ were modeled as summarized in Table 4. The spatio-temporal
distributions of transmembrane voltages $\mathrm{TMV}(t,x)$ were subsequently
derived from the computed activation times by shifting pre-computed
Courtemanche et al. action potential templates $\mathrm{TMV}(t)$ in time.
Remodeling of cellular electrophysiology was applied in fibrotic regions as
described below. For all simulations except for those of fibrotic atrial
cardiomyopathy, the baseline parameters of the Courtemanche et al. model
remained unchanged in all atrial regions. The atria were placed inside a torso
geometry and were rotated ($\alpha_{x},\alpha_{y},\alpha_{z}$) and translated
($t_{x},t_{y},t_{z}$) around and along all three coordinate axes to account
for additional anatomical variability in the cohort. The forward problem of
electrocardiography was solved with the infinite volume conductor method (for
the normal healthy control cases and fibrotic atrial cardiomyopathy) or the
boundary element method (for interatrial conduction block and left atrial
enlargement). Single beat 12 lead ECGs of the P wave lasting
$150\text{\,}\mathrm{-}$$200\text{\,}\mathrm{ms}$ were subsequently extracted
at standard electrode positions. In total, variation during healthy sinus
rhythm simulations was controlled through the parameters summarized in the
following vector
$\omega_{P}=\{\vec{\mathrm{CV}}_{[Region]},\ \alpha_{x},\alpha_{y},\alpha_{z},\ t_{x},t_{y},t_{z},\ \vec{\lambda}_{T,i},\ \vec{\lambda}_{A,i}\}.$ (4)
For simulations of fibrotic atrial cardiomyopathy, nine different fractions
from 5 % to 45 % of the healthy atrial myocardial volume were replaced by
fibrotic tissue as described in detail by Nagel et al. [22] in the same 80
atrial anatomical models that were employed for the healthy control
simulations. In fibrotic patches, 50 % of the cells were modeled as passive
conduction barriers by removing the affected elements from the volumetric
meshes. In the remaining 50 % of the fibrotic cells, conduction velocity was
reduced by a factor of 0.2 and 0.5 compared to the healthy baseline values in
Table 4 in transversal and longitudinal fiber direction, respectively. In this
way, anisotropy ratios were increased by a factor of 2.5, which typically
facilitates functional reentry in patients with atrial fibrillation. To
account for paracrine cytokine remodeling effects in fibrotic regions, maximum
ionic conductances of the Courtemanche et al. cell model were rescaled
(0.6$\times g_{Na}$, 0.5$\times g_{K1}$, 0.5$\times g_{CaL}$).
For left atrial enlargement simulations, 45 additional atrial geometries were
derived from the bi-atrial statistical shape model. Constraints were applied
to the coefficients of the leading eigenmodes to generate anatomical atrial
models with systematically increasing left atrial volumes [6]. Different
rotation angle combinations and conduction velocity variations were applied
for the simulations as reported in Table 4.
Complete interatrial conduction block was modeled by inhibiting conduction
propagation through the elements in Bachmann’s bundle at the junction between
the left and the right atrium in the same 80 bi-atrial geometries that were
used for the control simulations. Different combinations of rotation angles
and spatial translations of the atria within the torso were applied for the
ECG calculations.
### Synthesization of Complete ECGs
Signal components were synthesized to a full ECG using a heart rate
variability (HRV) model to obtain 10 s recordings in accordance with the
standard clinical 12 lead ECG. As atrial and ventricular ECGs were carried out
using different forward calculation methods, the amplitudes of P waves and
QRST complexes needed to be scaled prior to concatenation to ensure that
signal amplitudes of single waveforms are consistent within one heartbeat.
Thus, maximum P wave and R peak amplitudes were extracted in lead II of all
clinical recordings from healthy subjects in PTB-XL [1] using ECGdeli [45].
Based on these values, a multi-variate normal distribution was set up
representing the relation between P wave and R peak amplitudes in clinical
ECGs. In this way, the simulated QRST complex could be scaled with a factor
sampled from this multi-variate probability distribution to match the
amplitude of the simulated P wave. A PQ interval complying with the simulated
P wave duration was selected like-wise by drawing from a multi-variate normal
distribution generated from clinical P wave duration and PQ interval values.
Finally, the P waves and the scaled QRST complexes were concatenated using a
sigmoid shaped segment of a length determined by the difference of PQ interval
and P wave duration. When synthesizing ECG segments for the 1st degree AV
block class, the PQ interval was sampled from the range $>$200 $\mathrm{ms}$.
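The amplitude-matching step, sampling an R-peak target from a bivariate normal fitted to clinical (P amplitude, R amplitude) pairs and conditioned on the simulated P wave amplitude, can be sketched as follows. The clinical pairs here are synthetic stand-ins for the values extracted from PTB-XL, and the conditional-Gaussian formulation is one straightforward way to realize the described sampling.

```python
import numpy as np

def fit_gaussian(samples):
    """Mean and covariance of clinical (P amplitude, R amplitude) pairs."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def sample_r_given_p(mean, cov, p_amp, rng):
    """Sample an R-peak amplitude from the conditional normal given P."""
    mu_p, mu_r = mean
    var_p, cov_pr, var_r = cov[0, 0], cov[0, 1], cov[1, 1]
    cond_mean = mu_r + cov_pr / var_p * (p_amp - mu_p)
    cond_var = var_r - cov_pr**2 / var_p
    return rng.normal(cond_mean, np.sqrt(cond_var))

rng = np.random.default_rng(7)
# Stand-in for amplitudes from PTB-XL lead II (mV); correlated pairs
p_clin = rng.normal(0.15, 0.03, 5000)
r_clin = 6.0 * p_clin + rng.normal(0.1, 0.1, 5000)
mean, cov = fit_gaussian(np.column_stack([p_clin, r_clin]))

sim_p, sim_r = 0.18, 2.5                   # simulated P and R amplitudes (mV)
target_r = sample_r_given_p(mean, cov, sim_p, rng)
scale = target_r / sim_r                   # factor applied to the QRST complex
```

The same conditional-sampling pattern applies to the PQ-interval and QT/RR draws mentioned in the text.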
To account for heart rate variability in the simulated 10 $\mathrm{s}$ ECGs,
we refrained from simply repeating the concatenated single heart beat multiple
times. Instead, the heart rate variability model developed by Kantelhardt et
al. [46] was used to generate a series of RR intervals for an average heart
rate within physiological ranges (50-90 bpm) determined from the QT interval
of the respective simulation run using the multi-variate normal distribution.
For each heart beat holding a different RR interval, the signal was shrunk or
stretched in the [QRSoff, Toff] interval, again by sampling values from a
multi-variate normal distribution derived from clinical QRS duration, QT- and
RR interval values. After adding a sigmoidal shaped TP segment to connect
subsequent heart beats in the defined RR interval, we obtained the final 10
$\mathrm{s}$ 12 lead ECG. The raw ECG signal was superimposed with realistic
ECG noise as reported by Petranas et al. [47]. The amplitudes of the noise
vectors were scaled based on a chosen signal to noise ratio between 15 and 20
dB.
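Scaling a noise record so that the superposition hits a chosen signal-to-noise ratio in dB can be sketched as follows; the clean signal and noise record are synthetic stand-ins for the simulated ECG and the realistic ECG noise vectors.

```python
import numpy as np

def add_noise(signal, noise, snr_db):
    """Scale a noise vector to the requested SNR (in dB), then add it."""
    p_sig = np.mean(signal**2)
    p_noise = np.mean(noise**2)
    target_p_noise = p_sig / 10 ** (snr_db / 10)      # SNR = 10*log10(Ps/Pn)
    return signal + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * 1.2 * np.arange(5000) / 500)  # stand-in 10 s @ 500 Hz
noise = rng.standard_normal(5000)                        # stand-in noise record
snr_db = rng.uniform(15, 20)                             # range used in the paper
noisy = add_noise(clean, noise, snr_db)
```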
## Data Records
The MedalCare-XL dataset is publicly available under the Creative Commons
Attribution 4.0 International license [10]. Approximately 1,300 ECGs of
10$\mathrm{s}$ length for each disease class are stored in csv format. Rows
1-12 contain the 12 leads of each ECG following the order I, II, III, aVR,
aVL, aVF, V1-V6. All signals are sampled at 500 $\mathrm{Hz}$, amplitudes are
in $\mathrm{mV}$. Each signal is available in three different versions:
’run_*_raw.csv’ contains the noise-free synthesized ECG, ’run_*_noise.csv’
contains the synthesized ECG with superimposed realistic ECG noise [47],
’run_*_filtered.csv’ contains the bandpass filtered version (Butterworth
filters of order 3, cut-off frequencies of $0.5\text{\,}\mathrm{Hz}$ (highpass) and $150\text{\,}\mathrm{Hz}$ (lowpass)) of the synthesized ECGs
with superimposed noise. To support meaningful machine learning approaches, the signals are split into suggested subsets for training, validation and testing according to the atrial and ventricular anatomical models that the single simulation runs were based on, such that each anatomical model is contained in only one of the subsets. Example ECGs of lead II for each disease are
shown in Figure 4 (A). In Figure 4 (B), exemplary ECGs for each MI pathology
class are shown corresponding to different occlusion sites and degrees of
transmurality.
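Reading one record and reproducing the described bandpass filter (order-3 Butterworth, 0.5-150 Hz at 500 Hz sampling) might look like the following. This assumes SciPy is available and uses an in-memory stand-in for a 'run_*_raw.csv' file; the file layout (12 rows of comma-separated samples in mV) follows the description above.

```python
import io
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 500  # Hz, sampling rate of the dataset

def bandpass(ecg, low=0.5, high=150.0, order=3, fs=FS):
    """Order-3 Butterworth bandpass (0.5-150 Hz), zero-phase, per lead."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, ecg, axis=-1)

# Stand-in for one 'run_*_raw.csv': 12 leads x 10 s at 500 Hz, with an offset
rng = np.random.default_rng(5)
fake_csv = io.StringIO(
    "\n".join(",".join(f"{v:.4f}" for v in row)
              for row in rng.standard_normal((12, 5000)) + 1.0)
)
ecg = np.loadtxt(fake_csv, delimiter=",")  # rows follow I, II, III, aVR, ..., V6
filtered = bandpass(ecg)
```

For the actual dataset, `fake_csv` would be replaced by the path to a downloaded 'run_*_raw.csv' file.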
## Technical Validation
We have employed two different approaches for the technical validation of the
MedalCare-XL dataset of simulated, synthetic 12 lead ECGs as described in the
following. For a validation of the complete dataset, the statistical
distribution of ECG features extracted separately for each class (healthy
control and specific pathologies) from the records in the MedalCare-XL
database were compared to the distributions of the corresponding features
extracted from the clinical PTB-XL that were recently summarized in the PTB-
XL-Feat dataset [48]. In addition, we performed several so-called clinical
Turing tests, where the ability of expert cardiologists to distinguish the
simulated ECGs from clinical ECGs was evaluated again with representative
samples from the MedalCare-XL and PTB-XL databases as described in detail
below.
### Feature Distribution
To validate the simulated data against the statistical properties of
clinically recorded ECGs, interval and amplitude features were extracted from
the synthetic dataset and from PTB-XL using ECGdeli [45] and compared to one
another. Figure 5 shows the probability density functions for 6 timing and 5
amplitude features extracted from lead II of all ECGs in the healthy clinical
and virtual cohort. Except for the T wave amplitudes, the feature values for
the synthetic signals lie within the clinical and physiological ranges.
However, the feature distributions from the clinical and the virtual data coincide only for the QRS duration. All other simulated timing and amplitude features cover only a subset of the clinically observed ranges.
In Figure 6, a comparison of feature distributions for healthy and
pathological ECGs in the virtual cohort (top panel) and the clinical cohort
(bottom panel) is visualized for timing or amplitude features that are
clinically considered for a diagnosis of the respective disease. It is
apparent that the change in feature values extracted from healthy and diseased
ECGs is consistent between the simulated and the clinical data even though
absolute feature ranges sometimes deviate.
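One simple way to quantify how well two such feature distributions agree is the overlap coefficient of their empirical densities (1 for identical distributions, 0 for disjoint ones). The sketch below uses synthetic stand-ins for the clinical and simulated QRS-duration samples; it is an illustrative summary statistic, not the comparison procedure used in the paper.

```python
import numpy as np

def density_overlap(a, b, bins=50):
    """Overlap coefficient of two empirical densities on a common grid."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    return np.sum(np.minimum(pa, pb)) * width

rng = np.random.default_rng(11)
qrs_clinical = rng.normal(95.0, 10.0, 4000)   # stand-in QRS durations (ms)
qrs_synthetic = rng.normal(97.0, 9.0, 4000)   # stand-in simulated durations
overlap = density_overlap(qrs_clinical, qrs_synthetic)
```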
### Clinical Turing Tests
We aimed to ensure that the synthetic ECG signals correspond to clinically
measured signals with respect to the ECG features that are characteristic of
healthy cases. If cardiologists are not able to distinguish between measured
and simulated ECG signals, this increases confidence in the _in-silico_
model as a surrogate for real clinical data; such a test can therefore be
considered a clinical Turing test. Cardiologists were asked to perform an
online Turing test and to provide feedback on both healthy and pathological
ECGs. A first clinical Turing test was conducted to determine the ability of
the synthetic 12 lead ECGs within the database to pass as real clinical
signals. In a second test, cardiologists were asked to determine the pathology
of the signals, as done routinely in ECG diagnostics. In all clinical Turing
tests, the PTB-XL [1] database served as the basis for the measured signals
and the simulated database described above was used for the synthetically
generated signals.
#### Development of Online Platform for Clinical Turing Test
In order to conduct the clinical Turing tests, an online solution provided by
the Know-Center (https://www.know-center.at), a research center for data
science and artificial intelligence located in Graz, was used. The Know-Center
extended its TimeFuse online signal data platform
(https://ecgviewer.timefuse.io/public/login/turing) to include a survey
feature and a plotter to visualize 12 lead ECG signals. The ECG plotter was
designed specifically to present 12 lead ECGs in the typical chart-paper
visualization seen by cardiologists in the clinic. Namely, horizontal lines on
the pink background correspond to $0.4\text{\,}\mathrm{s}$ and vertical lines
correspond to $0.1\text{\,}\mathrm{mV}$. The platform was also designed to
host multiple clinical Turing tests, so that tests of healthy signals and of
pathological signals could be organized and conducted separately.
#### Conducting Tests
In a first iteration, Turing tests were performed with normal healthy control
ECGs to assess the ability of the synthetic signals to pass as clinical
signals under normal healthy conditions. For this purpose, five groups with 20
signals each were created, resulting in a total of 100 signals. For the
measured ECGs, 50 signals were randomly selected from a subset of the PTB-XL
database that contained only signals annotated as 100% healthy. For the
generated ECGs, 50 signals under healthy sinus rhythm were randomly taken from
the synthetic database described above. After pre-processing and filtering the
100 signals, the five groups were uploaded to the online platform and assigned
to the survey participants. Within the test, expert cardiologists were
required to evaluate whether each of the 100 ECG test cases was measured or
generated. Clinicians were allowed to refrain from answering, but a lack of a
statement was counted as a false classification. All clinicians were also
asked to provide the reasoning behind their classification. A total of 6
clinicians performed the test.
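The scoring rule described above, in which an abstention counts as a misclassification, can be sketched as follows. The rater answers here are hypothetical illustration data, not results from the actual test.

```python
# Minimal sketch of the Turing-test scoring rule: each rater labels a signal
# "measured" or "generated", or abstains (None); abstentions count as wrong.

def turing_accuracy(truth, answers):
    """Fraction of answers matching the truth; None (abstention) counts as wrong."""
    assert len(truth) == len(answers)
    correct = sum(1 for t, a in zip(truth, answers) if a is not None and a == t)
    return correct / len(truth)

# Hypothetical example: one abstention and one wrong label out of four cases.
truth   = ["measured", "generated", "measured", "generated"]
answers = ["measured", "measured",  None,       "generated"]
print(turing_accuracy(truth, answers))  # 0.5
```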
A similar test was also performed with pathological conditions to test whether
the synthetic ECGs of the various modeled pathological cases would be
classified by expert clinicians with the same accuracy as real clinical
signals and could not be distinguished from clinically measured ECGs taken
from the PTB-XL database. The cases included myocardial infarction (MI), left
bundle branch block (LBBB), right bundle branch block (RBBB), first degree AV
block (1AVB), and left atrial overload/enlargement (LAO/LAE). Conditions of
fibrotic atrial cardiomyopathy (FAM) and complete interatrial conduction block
(IAB) were excluded because these diseases are not present within PTB-XL.
Examples of the diseases are provided in Figures 4 (A) and 4 (B).
Similar to the healthy Turing test, 50 generated ECG signals were taken from
the synthetic database such that each of the five pathological classes is
represented by 10 ECGs. The 50 measured ECGs were randomly selected from five
subsets of the PTB-XL database, 10 cases per subset, where each subset only
contained signals labeled as 100% pathological according to the 5 classes.
Clinicians were asked to make at least one annotation for each of the 100
pathological 12 lead ECG signals, choosing from the following list of 11
labels:
* 1AVB
* atrial fibrillation (AFIB)
* FAM
* IAB
* LAO
* LBBB
* MI
* normal healthy control (NORM)
* right atrial overload/enlargement (RAO/RAE)
* RBBB
* Wolff-Parkinson-White syndrome (WPW)
A total of two cardiologists responded.
#### Results
##### Normal Healthy Control Clinical Turing Test
The six clinicians correctly classified 464 of the 600 cases, corresponding to
an accuracy of $77.33\%$. Conversely, 136 signals ($22.67\%$) were classified
incorrectly, comprising 62 ($10.33\%$) synthetic and 74 ($12.33\%$) measured
ECGs, see Figure 7 (B). A detailed summary is given in Figure 7 (A),(C).
Primary ECG features leading to classification as simulated included
fractionation or improper R wave propagation in the QRS complex, a spiking or
biphasic T wave, and a lack of physiological noise in the signals.
##### Turing Test of Pathological ECGs
The two clinicians correctly classified the signals as either measured or
simulated in 166 of the 200 cases, corresponding to an overall accuracy of
$83\%$. Conversely, the type of 34 signals ($17\%$) was not correctly
identified, comprising 10 ($5\%$) synthetic and 24 ($12\%$) measured ECGs, see
Figure 7 (E). A detailed summary is given in Figure 7 (D),(F).
Regarding the correct classification of the pathological cases themselves,
only 101 of the 200 ($50.5\%$) cases, counting both simulated and clinical
signals, were diagnosed correctly by the two clinicians. Namely, 38 measured
ECGs were assigned the wrong pathology by the experts, resulting in an
accuracy of $62\%$. Conversely, simulated pathologies were correctly
classified at only $39\%$, with 61 signals being classified incorrectly. A
detailed summary is given in Figure 8 (A),(B).
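As a sanity check on the arithmetic, the per-source and overall diagnostic accuracies reported above can be reproduced from the raw counts (100 cases per signal source, as in the test):

```python
# Per-source diagnostic accuracy = correct diagnoses / cases per source.
# The counts below are the numbers quoted in the text.

counts = {
    # source:    (correct, total)
    "measured":  (62, 100),
    "simulated": (39, 100),
}

for source, (correct, total) in counts.items():
    print(f"{source}: {100 * correct / total:.0f}%")

overall = sum(c for c, _ in counts.values()) / sum(t for _, t in counts.values())
print(f"overall: {100 * overall:.1f}%")  # 50.5%
```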
The actual pathology and the diagnoses given by each clinician within the
pathological clinical Turing test are provided in Figure 8 (C). Some
pathologies were more commonly misdiagnosed by the clinicians, being mistaken
either for normal healthy control or for an alternative clinical pathology.
Differences in performance were also observed between the simulated and
clinical ECG sets, as highlighted by the confusion matrices constructed for
all pathological cases from the results for both measured and simulated
signals (Figure 8 (D)).
Within the clinical signals, the pathological cases of LAO, 1AVB, and MI were
commonly mistaken for a 12 lead ECG in normal sinus rhythm by both clinicians.
The largest differences in diagnostic outcome between the simulated and
clinical data sets were observed for LBBB and RBBB. Within the simulated ECGs,
LBBB and RBBB were commonly mistaken for MI.
### Limitations and Summary
The feature analysis showed that the synthetic signals exhibit interval and
amplitude features that are mostly in line with the feature ranges reported in
PTB-XL for the healthy and the pathological cohorts. However, they neither
cover the full range of feature values that occur in clinical practice nor
follow accurately coinciding distributions. This could be attributed to the
fact that the atrial model population was parameterized using ECG biomarker
ranges for P wave amplitudes and durations reported for extensive clinical
cohorts partially comprising $>$200,000 subjects [49, 50], which might lead to
slightly different feature distributions compared to those extractable from
PTB-XL. The QRST complexes were parameterized according to experimental or
clinical data obtained on smaller cohorts that may not be representative of
the entire population. Some parameters were also estimated because no direct
clinical or experimental data are available for these entities. One such
example is the heightened T wave amplitudes, which stem from repolarization
gradients in the ventricles that generate large cardiac sources. While the
occurrence of repolarization gradients is known [32, 33], their exact nature
is not well understood and thus hard to parameterize for a patient population.
Therefore, the synthetic signals are not fully representative of an entire
population, such as the one in PTB-XL.
The feature distributions in the synthetic cohort are, however, internally
consistent, i.e., unrealistic combinations of different features are unlikely
to occur. For example, the upper limit of RR intervals in the simulated
healthy cohort does not exceed 1000 $\mathrm{ms}$, while simultaneously the QT
interval also covers only the lower range of the clinical QT interval values
(compare Figure 5). This is because multi-variate normal distributions were
used during the synthesization procedure, ensuring that clinically reported
correlations between ECG biomarkers (such as P wave duration and PR interval,
or QT duration and RR interval) are taken into account. Furthermore, detailed
mechanistic electrophysiological models of the heart were employed, and
simulation parameters within reasonable ranges reported in the literature were
chosen, leading to realistic single beat P waves and QRST complexes in most
cases.
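The correlation-preserving sampling idea can be illustrated with a minimal sketch. The means, standard deviations, and correlation coefficient below are assumed values for illustration only, not the parameters used to build the database.

```python
import numpy as np

# Sketch: draw ECG interval biomarkers from a multivariate normal so that
# a clinically motivated correlation (here between RR and QT intervals) is
# preserved across sampled virtual subjects. All numbers are assumptions.

rng = np.random.default_rng(0)

mean = np.array([850.0, 380.0])  # [RR, QT] in ms (assumed)
std = np.array([90.0, 25.0])     # standard deviations in ms (assumed)
corr = 0.6                       # assumed RR-QT correlation

# Covariance matrix from standard deviations and the correlation structure.
cov = np.outer(std, std) * np.array([[1.0, corr], [corr, 1.0]])
rr, qt = rng.multivariate_normal(mean, cov, size=5000).T

# The sampled features reproduce the imposed correlation.
print(np.corrcoef(rr, qt)[0, 1])  # ~0.6
```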
It must be noted that PTB-XL lacks clinical data for fibrotic atrial
cardiomyopathy and for interatrial conduction block. Thus, fidelity assessment
of ECG features within these two classes by means of a comparison to clinical
data was not possible using the same clinical ECG resources. However, we
already showed in previous work that the simulated P waves reproduce
characteristic changes in key diagnostic ECG markers [22, 51]. These include a
prolongation of the P wave duration compared to the control simulations due to
delayed depolarization in fibrotic patches as well as a retrograde activation
of the left atrium through interatrial conduction pathways on the posterior
wall. Moreover, as shown in Figure 6, in interatrial conduction block
patients, the morphology and therefore the P wave amplitude is markedly
changed in lead aVL compared to the healthy cohort. In patients with fibrotic
atrial cardiomyopathy, the most pronounced decrease in P wave amplitude due to
scar tissue not contributing to the overall source distribution in the atria
occurs in the lateral leads (compare Figure 6).
The clinical Turing tests aimed to investigate whether the 12 lead ECG
signals exhibit morphological features in accordance with the clinical
diagnostic criteria routinely assessed by clinicians under both normal
healthy control and pathological conditions. Within the clinical Turing test
performed for normal healthy control, the accuracy in identifying whether a
signal was simulated or clinical was 77$\%$. Primary ECG features leading to
identification as a synthetic signal included fractionation and R wave
progression of the QRS complex; spiked T waves with high amplitudes or
biphasic T waves could also be observed. Real ECG signals also tended to
exhibit certain noise types not accounted for, including electrical
disturbances and large baseline wander, that must either be modeled within
the simulated data or removed during the clinical Turing test.
Within the clinical Turing test on pathological ECGs, the accuracy of type
classification increased to 83$\%$, indicating that type classification was
easier with synthetic pathological data. Misdiagnosis was common across both
signal types, as pathologies were diagnosed correctly by the two expert
cardiologists in only $51\%$ of cases. More clinicians should perform the
clinical Turing test on pathology classification to give a better indication
of the true accuracy of ECG diagnosis on both simulated and clinical signals.
Furthermore, the clinical Turing test should be conducted on a larger number
of signals beyond the 100 analyzed, ideally on the entire synthetic ECG
database.
Regardless, clinicians performed differently on clinical 12 lead ECG signals
compared to those taken from the synthetic ECG database. Clinical signals were
assigned the correct pathology with an accuracy of $62\%$; simulated signals,
on the other hand, were classified correctly for the underlying disease
pathology at only $39\%$. LAO experienced the highest level of misdiagnosis
within both clinical and simulated data, with similar performance for the two.
This could be attributed to the fact that LAO manifests only within the P
wave, where morphological deviations are harder to detect due to a
substantially lower amplitude than the QRS complex. Misdiagnosis was also high
for LBBB and RBBB within the simulated data set. Differences in outcome
between the clinical and synthetic signals may stem from the inability of the
synthetic ECG database to manifest the full complexity of the underlying
diseases. For example, remodeling within the ventricles under such conditions
may lead to slower conduction properties and alternative wave morphologies.
Furthermore, only complete LBBB or RBBB was modeled, whereas in clinical
practice there are varying degrees of conduction block. A lower diagnostic
accuracy for MI and 1AVB was seen for the clinical signals in comparison to
the simulated ECGs, which could also stem from a lack of complexity within the
simulated setup easing diagnosis.
To reduce the mismatch in performance between clinical and synthetic signals,
further parameter tuning is needed. Iterative clinical Turing tests would be
beneficial to update parameter ranges and mitigate the prevalence of
undesirable ECG features within the entire database. Refinement could also be
guided by sensitivity analyses that provide more information on the
relationship between model parameters and the morphological traits of
simulated signals as judged by clinicians. However, this requires a large
investment due to the variety of clinical pathological classes and the lack of
known electrophysiology in such conditions. Certain important ECG features may
also be detected by machine learning analysis [52] to provide insight into a
refined sub-classification of pathological cases beyond current routine
diagnoses.
Some results from the Turing test of pathological cases indicate that standard
protocols for ECG classification by clinicians are not sufficient. Machine
learning algorithms may offer a means to aid ECG diagnosis and improve the
reliability of clinical decisions; it is therefore important to provide
reference data to test such algorithms. An earlier benchmark study
demonstrated this with the large data set of clinical ECGs in PTB-XL [52], in
which deep learning algorithms were found to exhibit diagnosis success rates
in the range of 80 – 95 percent depending on the metric used. The clinical
PTB-XL data set was also instrumental in demonstrating the clear improvement
of algorithms based on self-supervised learning [53]. Nevertheless, clinical
databases strongly depend on the quality and the terminology used to label the
ECG data, and large sets of publicly available clinical data are still rare.
This is where validated simulated data sets can become an important tool in
the development and benchmarking of new algorithms for ECG classification.
Machine learning algorithms could then be trained and tested on real and
synthetic data in different combinations. Databases of simulated ECGs like the
MedalCare-XL set presented in this paper also provide an important link
between the growing knowledge developed in the cardiac modelling community and
the practical development of algorithms for data analysis.
## Usage Notes
When using the synthetic ECGs as an input data source for machine learning
applications, samples that were generated from the same anatomical model
should belong to only one of the training, validation, or testing sets. As the
main variation in the morphology of the P waves and QRST complexes stems
predominantly from anatomical differences in the model cohort [54], splitting
the data in this fashion helps to prevent overfitting to similar or almost
identical samples that were already seen during training [55].
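A minimal sketch of such a group-aware split follows, with made-up sample identifiers. The partition fractions and the function name are ours for illustration, not part of the MedalCare-XL tooling.

```python
import random

def split_by_anatomy(anatomy_ids, seed=0, frac_train=0.7, frac_val=0.15):
    """Assign each sample to 'train'/'val'/'test' by its anatomy, not per sample."""
    groups = sorted(set(anatomy_ids))
    random.Random(seed).shuffle(groups)
    n_train = int(frac_train * len(groups))
    n_val = int(frac_val * len(groups))
    part = {g: "train" for g in groups[:n_train]}
    part.update({g: "val" for g in groups[n_train:n_train + n_val]})
    part.update({g: "test" for g in groups[n_train + n_val:]})
    return [part[a] for a in anatomy_ids]

# All ECGs derived from anatomy "A01" land in exactly one partition:
anatomy = ["A01", "A01", "A02", "A01", "A03", "A02", "A01"]
splits = split_by_anatomy(anatomy)
assert len({s for s, a in zip(splits, anatomy) if a == "A01"}) == 1
```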
When using the simulated data to extend or replace small or imbalanced
clinical datasets, the user is advised to use the signals with superimposed
realistic ECG noise instead of the raw signal traces. In this way, the
simulated signals exhibit characteristics due to noise interference that are
also observable in clinical ECGs, so that possible domain gaps can be reduced,
eventually leading to an improved classification outcome on actual clinical
data.
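As an illustration of the difference between raw and noisy traces, the following sketch superimposes low-frequency baseline wander and broadband noise on a placeholder signal. The sampling rate, frequencies, and amplitudes are assumptions and do not reflect the noise model used for the published database.

```python
import numpy as np

# Sketch: add two assumed disturbance types to a raw simulated lead:
# slow sinusoidal baseline wander and white broadband measurement noise.

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s trace
clean = 0.8 * np.sin(2 * np.pi * 1.2 * t)  # placeholder for a simulated lead

rng = np.random.default_rng(42)
baseline_wander = 0.15 * np.sin(2 * np.pi * 0.3 * t + rng.uniform(0, 2 * np.pi))
broadband = 0.02 * rng.standard_normal(t.size)

noisy = clean + baseline_wander + broadband
print(noisy.shape)  # (5000,)
```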
## Code availability
The anatomical model cohort of the atria is publicly available under the
Creative Commons Licence CC-BY 4.0 [18]. Code for solving the Eikonal equation
and the forward problem of electrocardiography using the boundary element
method as used for the atrial simulations is openly available (Stenroos et al.
[56], Schuler et al. [57]). Python code for synthesizing single beat P waves
and QRST complexes to a 10$\text{\,}\mathrm{s}$ time series using multi-
variate normal distributions for amplitude scaling and interval selection is
publicly available [58].
The cohort of ventricular-torso models is not publicly available due to
constraints of the IRB approval and subject consent. The framework to simulate
electrophysiology of the ventricular-torso models is also not available for
public use in its entirety. However, all simulations can be carried out within
the publicly available openCARP simulation framework [41, 42], albeit less
efficiently and with higher compute costs.
## References
* [1] Wagner, P. _et al._ PTB-XL, a large publicly available electrocardiography dataset. _Scientific Data_ 7, 1–15 (2020).
* [2] Roberts, M. _et al._ Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. _Nature Machine Intelligence_ 3, 199–217, 10.1038/s42256-021-00307-0 (2021).
* [3] Puyol-Antón, E. _et al._ Fairness in cardiac mr image analysis: An investigation of bias due to data imbalance in deep learning based segmentation. In de Bruijne, M. _et al._ (eds.) _Medical Image Computing and Computer Assisted Intervention – MICCAI 2021_ , 413–423 (Springer International Publishing, Cham, 2021).
* [4] Pilia, N. _et al._ Quantification and classification of potassium and calcium disorders with the electrocardiogram: What do clinical studies, modeling, and reconstruction tell us? _APL Bioeng_ 4, 041501, 10.1063/5.0018504 (2020).
* [5] Luongo, G. _et al._ Hybrid machine learning to localize atrial flutter substrates using the surface 12-lead electrocardiogram. _EP Europace_ 10.1093/europace/euab322 (2022).
* [6] Nagel, C., Schaufelberger, M., Dössel, O. & Loewe, A. A bi-atrial statistical shape model as a basis to classify left atrial enlargement from simulated and clinical 12-lead ECGs. In Puyol Antón, E. _et al._ (eds.) _Statistical Atlases and Computational Models of the Heart. Multi-Disease, Multi-View, and Multi-Center Right Ventricular Segmentation in Cardiac MRI Challenge_ , 38–47, 10.1007/978-3-030-93722-5_5 (2022).
* [7] Luongo, G. _et al._ Machine learning enables noninvasive prediction of atrial fibrillation driver location and acute pulmonary vein ablation success using the 12-lead ECG. _Cardiovascular Digital Health Journal_ 2, 126–136, 10.1016/j.cvdhj.2021.03.002 (2021).
* [8] Strocchi, M. _et al._ A publicly available virtual cohort of four-chamber heart meshes for cardiac electro-mechanics simulations. _PloS one_ 15, e0235145 (2020).
* [9] American Heart Association Writing Group on Myocardial Segmentation and Registration for Cardiac Imaging _et al._ Standardized myocardial segmentation and nomenclature for tomographic imaging of the heart: statement for healthcare professionals from the cardiac imaging committee of the council on clinical cardiology of the american heart association. _Circulation_ 105, 539–542 (2002).
* [10] Gillette, K. _et al._ MedalCare-XL, 10.5281/zenodo.7293655 (2022).
* [11] CIBC (2016). Seg3D: Volumetric image segmentation and visualization. Scientific Computing and Imaging http://www.seg3d.org.
* [12] Payer, C., Štern, D., Bischof, H. & Urschler, M. Multi-label whole heart segmentation using cnns and anatomical label configurations. In _International Workshop on Statistical Atlases and Computational Models of the Heart_ , 190–198 (Springer, 2017).
* [13] Chetverikov, D., Svirko, D., Stepanov, D. & Krsek, P. The trimmed iterative closest point algorithm. In _Pattern Recognition, 2002. Proceedings. 16th International Conference on_ , vol. 3, 545–548 (IEEE, 2002).
* [14] Prassl, A. J. _et al._ Automatically generated, anatomically accurate meshes for cardiac electrophysiology problems. _IEEE Transactions on Biomedical Engineering_ 56, 1318–1330 (2009).
* [15] Gillette, K. _et al._ A framework for the generation of digital twins of cardiac electrophysiology from clinical 12-leads ECGs. _Medical Image Analysis_ 71, 102080 (2021).
* [16] Bayer, J. _et al._ Universal ventricular coordinates: A generic framework for describing position within the heart and transferring data. _Medical Image Analysis_ 45, 83–93 (2018).
* [17] Nagel, C., Schuler, S., Dössel, O. & Loewe, A. A bi-atrial statistical shape model for large-scale in silico studies of human atria: Model development and application to ECG simulations. _Medical Image Analysis_ 74, 102210, 10.1016/j.media.2021.102210 (2021).
* [18] Nagel, C., Schuler, S., Dössel, O. & Loewe, A. A bi-atrial statistical shape model and 100 volumetric anatomical models of the atria. _Zenodo_ 10.5281/zenodo.4309958 (2020).
* [19] Azzolin, L. _et al._ AugmentA: Patient-specific augmented atrial model generation tool. _medRxiv_ 10.1101/2022.02.13.22270835 (2022).
* [20] Zheng, T., Azzolin, L., Sánchez, J., Dössel, O. & Loewe, A. An automate pipeline for generating fiber orientation and region annotation in patient specific atrial models. _Current Directions in Biomedical Engineering_ 7, 136–139, 10.1515/cdbme-2021-2035 (2021).
* [21] Lang, R. M. _et al._ Recommendations for cardiac chamber quantification by echocardiography in adults: An update from the American Society of Echocardiography and the European Association of Cardiovascular Imaging. _Eur Heart J Cardiovasc Imaging_ 16, 233–70, 10.1093/ehjci/jev014 (2015).
* [22] Nagel, C. _et al._ Non-invasive and quantitative estimation of left atrial fibrosis based on P waves of the 12-lead ECG - a large-scale computational study covering anatomical variability. _J Clin Med_ 10, 10.3390/jcm10081797 (2021).
* [23] Pishchulin, L., Wuhrer, S., Helten, T., Theobalt, C. & Schiele, B. Building statistical shape spaces for 3D human modeling. _Pattern Recognition_ 67, 276–286, 10.1016/j.patcog.2017.02.018 (2017).
* [24] Durrer, D. _et al._ Total excitation of the isolated human heart. _Circulation_ 41, 899–912 (1970).
* [25] Kassebaum, D. G. & Van Dyke, A. R. Electrophysiological effects of isoproterenol on purkinje fibers of the heart. _Circulation Research_ 19, 940–946 (1966).
* [26] Bayer, J. D., Blake, R. C., Plank, G. & Trayanova, N. A. A novel rule-based algorithm for assigning myocardial fiber orientation to computational heart models. _Annals of Biomedical Engineering_ 40, 2243–2254 (2012).
* [27] Streeter Jr, D. D., Spotnitz, H. M., Patel, D. P., Ross Jr, J. & Sonnenblick, E. H. Fiber orientation in the canine left ventricle during diastole and systole. _Circulation Research_ 24, 339–347 (1969).
* [28] Taggart, P. _et al._ Inhomogeneous transmural conduction during early ischaemia in patients with coronary artery disease. _Journal of Molecular and Cellular Cardiology_ 32, 621–630 (2000).
* [29] Roberts, D. E. & Scher, A. M. Effect of tissue anisotropy on extracellular potential fields in canine myocardium in situ. _Circulation Research_ 50, 342–351 (1982).
* [30] Keller, D. U., Weber, F. M., Seemann, G. & Dössel, O. Ranking the influence of tissue conductivities on forward-calculated ECGs. _IEEE Transactions on Biomedical Engineering_ 57, 1568–1576 (2010).
* [31] Mitchell, C. C. & Schaeffer, D. G. A two-current model for the dynamics of cardiac membrane. _Bulletin of Mathematical Biology_ 65, 767–793 (2003).
* [32] Opthof, T. _et al._ Cardiac activation–repolarization patterns and ion channel expression mapping in intact isolated normal human hearts. _Heart Rhythm_ 14, 265–272 (2017).
* [33] Opthof, T. _et al._ Dispersion in ventricular repolarization in the human, canine and porcine heart. _Progress in Biophysics and Molecular Biology_ 120, 222–235 (2016).
* [34] Keller, D. U., Weiss, D. L., Dossel, O. & Seemann, G. Influence of $I_{Ks}$ heterogeneities on the genesis of the t-wave: A computational evaluation. _IEEE Transactions on Biomedical Engineering_ 59, 311–322 (2011).
* [35] Neic, A., Gsell, M. A. F., Karabelas, E., Prassl, A. J. & Plank, G. Automating image-based mesh generation and manipulation tasks in cardiac modeling workflows using Meshtool. _SoftwareX_ 11, 100454, 10.1016/j.softx.2020.100454 (2020).
* [36] Mendonca Costa, C., Plank, G., Rinaldi, C. A., Niederer, S. A. & Bishop, M. J. Modeling the electrophysiological properties of the infarct border zone. _Frontiers in Physiology_ 9, 356 (2018).
* [37] Loewe, A., Wülfers, E. M. & Seemann, G. Cardiac ischemia-insights from computational models. _Herzschrittmacher & Elektrophysiologie_ 29, 48–56, 10.1007/s00399-017-0539-6 (2018).
* [38] Neic, A. _et al._ Efficient computation of electrograms and ECGs in human whole heart simulations using a reaction-eikonal model. _Journal of Computational Physics_ 346, 191–211, 10.1016/j.jcp.2017.06.020 (2017).
* [39] Potse, M. Scalable and accurate ECG simulation for reaction-diffusion models of the human heart. _Frontiers in Physiology_ 9, 370 (2018).
* [40] Vigmond, E., Dos Santos, R. W., Prassl, A., Deo, M. & Plank, G. Solvers for the cardiac bidomain equations. _Progress in Biophysics and Molecular Biology_ 96, 3–18 (2008).
* [41] Plank, G. _et al._ The openCARP simulation environment for cardiac electrophysiology. _Computer Methods and Programs in Biomedicine_ 208, 106223, 10.1016/j.cmpb.2021.106223 (2021).
* [42] openCARP Consortium _et al._ openCARP v11.0. _RADAR4KIT_ 10.35097/703 (2022).
* [43] Fu, Z., Kirby, R. M. & Whitaker, R. T. A fast iterative method for solving the eikonal equation on tetrahedral domains. _SIAM J Sci Comput_ 35, c473–c494, 10.1137/120881956 (2013).
* [44] Loewe, A. _et al._ Patient-specific identification of atrial flutter vulnerability–a computational approach to reveal latent reentry pathways. _Frontiers in Physiology_ 9, 10.3389/fphys.2018.01910 (2019).
* [45] Pilia, N. _et al._ ECGdeli - An open source ECG delineation toolbox for MATLAB. _SoftwareX_ 13, 100639, 10.1016/j.softx.2020.100639 (2021).
* [46] Kantelhardt, J. W., Havlin, S. & Ivanov, P. C. Modeling transient correlations in heartbeat dynamics during sleep. _Europhysics Letters (EPL)_ 62, 147–153, 10.1209/epl/i2003-00332-7 (2003).
* [47] Petrenas, A. _et al._ Electrocardiogram modeling during paroxysmal atrial fibrillation: application to the detection of brief episodes. _Physiol Meas_ 38, 2058–2080, 10.1088/1361-6579/aa9153 (2017).
* [48] Strodthoff, N. _et al._ PTB-XL-Feat, a comprehensive electrocardiographic feature dataset. _in preparation_ .
* [49] Nielsen, J. B. _et al._ P-wave duration and the risk of atrial fibrillation: Results from the Copenhagen ECG study. _Heart Rhythm_ 12, 1887–1895, 10.1016/j.hrthm.2015.04.026 (2015).
* [50] Nagel, C., Pilia, N., Loewe, A. & Dössel, O. Quantification of interpatient 12-lead ECG variabilities within a healthy cohort. _Current Directions in Biomedical Engineering_ 6, 493–496, 10.1515/cdbme-2020-3127 (2020).
* [51] Bender, J. _et al._ A Large-scale Virtual Patient Cohort to Study ECG Features of Interatrial Conduction Block. _Current Directions in Biomedical Engineering_ 8, 97–100, 10.1515/cdbme-2022-1026 (2022).
* [52] Strodthoff, N., Wagner, P., Schaeffter, T. & Samek, W. Deep learning for ECG analysis: Benchmarks and insights from PTB-XL. _IEEE Journal of Biomedical and Health Informatics_ 25, 1519–1528 (2020).
* [53] Mehari, T. & Strodthoff, N. Self-supervised representation learning from 12-lead ECG data. _Computers in Biology and Medicine_ 141, 105114 (2022).
* [54] Dössel, O., Luongo, G., Nagel, C. & Loewe, A. Computer modeling of the heart for ECG interpretation—a review. _Hearts_ 2, 350–368, 10.3390/hearts2030028 (2021).
* [55] Luongo, G. _et al._ Automatic ECG-based discrimination of 20 atrial flutter mechanisms: Influence of atrial and torso geometries. In _Computing in Cardiology_ , vol. 47, 1–4, 10.22489/CinC.2020.066 (IEEE, 2020).
* [56] Stenroos, M., Mäntynen, V. & Nenonen, J. A Matlab library for solving quasi-static volume conduction problems using the boundary element method. _Computer Methods and Programs in Biomedicine_ 88, 256–263 (2007).
* [57] Schuler, S. & Loewe, A. FIM_Eikonal: v1.0. _Zenodo_ 10.5281/zenodo.7217554 (2022).
* [58] Nagel, C., Eichhorn, N. & Loewe, A. ECG-Synthesization: v1.0. _Zenodo_ 10.5281/zenodo.7293625 (2022).
* [59] Gillette, K. _et al._ Automated framework for the inclusion of a his–purkinje system in cardiac digital twins of ventricular electrophysiology. _Annals of Biomedical Engineering_ 49, 3143–3153 (2021).
* [60] Odille, F., Liu, S., van Dam, P. & Felblinger, J. Statistical variations of heart orientation in healthy adults. In _Computing in Cardiology Conference (CinC)_ , vol. 44, 10.22489/CinC.2017.225-058 (2017).
* [61] Loewe, A. _et al._ Left and right atrial contribution to the P-wave in realistic computational models. In van Assen, H., Bovendeerd, P. & Delhaas, T. (eds.) _Lecture Notes in Computer Science_ , vol. 9126 of _Functional Imaging and Modeling of the Heart_ , 439–447, 10.1007/978-3-319-20309-6 (2015).
## Acknowledgements
This work was supported by the EMPIR programme co-financed by the
participating states and from the European Union’s Horizon 2020 research and
innovation programme under grant MedalCare 18HLT07. The authors also
acknowledge the support of the British Heart Foundation Centre for Research
Excellence Award III (RE/18/5/34216). SEW is supported by the British Heart
Foundation (FS/20/26/34952).
We thank the cardiologists Dr. Anna-Sophie Eberl, Dr. Ewald Kolesnik, Dr.
Martin Manninger-Wünscher, Dr. Stefan Kurath-Koller, Dr. Susanne Prassl, and
Dr. Ursula Rohrer for their involvement in the clinical Turing tests and for
their feedback regarding the online platform and the ECG signal morphology. We
also thank Thomas Ebner and his colleagues from the Know-Center for the great
collaboration and the rapid implementation of our requirements in their online
platform TimeFuse.
## Author contributions statement
All authors were involved in the writing and revision of the manuscript.
K.G. built the ventricular-torso model cohort, parameterized and performed the
simulations of the QRST complexes under both sinus and disease, conducted
analysis on the clinical Turing tests, and organized the revision of the
manuscript.
M.G. managed the development of the testing platform for the clinical Turing
test and developed tools to evaluate the test results. He also assisted in all
aspects of simulations and model building.
C.N. built the atrial model cohort, parameterized and performed the P wave
simulations under both sinus and disease conditions, designed and implemented
the synthesization model, led the technical validation of simulated and
clinical ECG biomarkers.
J.B. performed and validated P wave simulations for interatrial conduction
block.
B.W. ran simulations on the ventricular-torso model cohort, and extracted
features from the clinical ECGs for technical validation.
S.W. provided clinical insight and feedback on the clinical Turing tests. He
also provided assistance on parameterization for both models.
M.B. was involved in funding acquisition, provided guidance on relevant data
processing and metrology aspects, reviewed and edited the final manuscript.
T.S. provided motivation behind the study and gave clinical insight and
guidance on relevant disease pathologies.
O.D. was involved in funding acquisition and provided supervision of the
atrial model cohort simulations.
G.P. was involved in funding acquisition and provided supervision of the
ventricular-torso model cohort simulations.
A.L. was involved in funding acquisition and provided supervision of the
atrial model cohort simulations.
## Competing Interests
The authors declare no competing interests.
## Figures & Tables
Figure 1: Pipeline for the generation and validation of the synthetic 12 lead
ECG database using individual multi-scale models of the atria and the
ventricles.
Electrophysiological Parameters of QRS Complex Simulations
---
Entity | Parameter | Value | Unit | Reference
Geometry | $\mathbf{\lambda}_{V,i},i\in[1,13]$ | [1, 13] | - | Gillette et al. 2021[15]
Fascicular Sites | $\vec{x}_{rv,mod}$ | {$\rho=0.2$, $\phi=$[0, 1.0], $z=$[0.1, 0.6], $r=$[0.2, 0.8], $t=$[0, 10] $\mathrm{ms}$} | - | Durrer et al. 1970[24], Gillette et al. 2021[15]
| $\vec{x}_{rv,sept}$ | {$\rho=0.8$, $\phi=$[-1.5, 1.5], $r=0.4$, $z=$[0.2, 0.4], $t=10$ $\mathrm{ms}$} | - |
| $\vec{x}_{lv,sept}$ | {$\rho=0.2$, $\phi=$[-1.5, 1.5], $r=$[0.05, 0.4], $z=$[0.3, 0.7], $t=10$ $\mathrm{ms}$} | - |
| $\vec{x}_{lv,ant}$ | {$\rho=0.2$, $\phi=$[1.0, $\pi$], $r=$[0.05, 0.4], $z=$[0.2, 0.8], $t=10$ $\mathrm{ms}$} | - |
| $\vec{x}_{lv,post}$ | {$\rho=0.2$, $\phi=$[-$\pi$, -1.0], $r=$[0.05, 0.4], $z=$[0.2, 0.7], $t=10$ $\mathrm{ms}$} | - |
Conduction Velocity | $cv_{endo}$ | 2.0 | $\mathrm{m\,s^{-1}}$ | Kassebaum et al. 1966[25]
| $cv_{endo,r}$ | 1.0 | - | Gillette et al. 2021[59]
| $cv_{myo}$ | 0.6 | $\mathrm{m\,s^{-1}}$ | Taggart et al. 2000[28]
| $cv_{myo,r}$ | 4:2:1 | - | Taggart et al. 2000[28]
Myocardial Fiber Orientations | $\alpha_{endo}$ | 60.0 | ∘ | Bayer et al. 2012[26], Streeter et al. 1969[27]
| $\alpha_{epi}$ | -60.0 | ∘ |
| $\beta_{endo}$ | -65.0 | ∘ |
| $\beta_{epi}$ | 25.0 | ∘ |
Heart Conductivity | $\sigma_{il}$ | 0.34 | $\mathrm{S\,m^{-1}}$ | Roberts et al. 1982[29]
| $\sigma_{in}$ | 0.06 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{it}$ | 0.06 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{el}$ | 0.12 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{en}$ | 0.08 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{et}$ | 0.08 | $\mathrm{S\,m^{-1}}$ |
Volume-Conductor Conductivities | $\sigma_{torso}$ | 0.22 | $\mathrm{S\,m^{-1}}$ | Keller et al. 2010[30]
| $\sigma_{atria}$ | 0.0537 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{lungs}$ | 0.0389 | $\mathrm{S\,m^{-1}}$ |
| $\sigma_{blood}$ | 0.7 | $\mathrm{S\,m^{-1}}$ |
Table 1: Model parameters for the electrophysiology of the ventricular
simulations generating the QRS complexes. Positioning, sizing, and timing of
the 5 sites of fascicular breakthrough representing the His-Purkinje system
within the ventricles provide variation in the QRS complex. Fixed parameters
were held constant at physiological values across all simulations as
indicated.
Electrophysiological Parameters of T Wave Simulations
---
Entity | Parameter | Value | Unit | Reference
Ionic Model | $\vec{i}_{sinus}$ | {$V_{gate}=0.13$, $V_{min}=-86.2$ $\mathrm{mV}$, $V_{max}=40.0$ $\mathrm{mV}$, $\tau_{in}=0.3$, $\tau_{out}=5.4$, $\tau_{open}=80.0$} | - | Mitchell & Schaeffer 2003[31]
Repolarization Gradients | $APD_{min}$ | [150, 175] | $\mathrm{ms}$ | Opthof et al. 2017[32], Opthof et al. 2016[33], Keller et al. 2011[34]
| $APD_{max}$ | [225, 250] | $\mathrm{ms}$ |
| $\vec{q}_{w}$ | {$\rho=$[-0.6, 0.0], $\nu=$[0.1, 0.15], $\phi=0$, $z=$[0.9, 1.0]} | - |
Table 2: Model parameters for the electrophysiology of the ventricular
simulations generating T waves. Base parameters of the action potential were
held constant, but variations in action potential duration are prescribed
using weighted gradients.
Electrophysiological Parameters of Myocardial Infarction
---
Entity | Parameter | Value | Unit | Reference
Sizing of Infarct | $d_{co}$ | [0, 1.0] | $\mathrm{S\,m^{-1}}$ | Keller et al. 2011[30]
Infarct Center | $\vec{x}_{LAD,mi}$ | {$\phi=$[0.0, 2.0], $z=$[0.1, 1.0]} | - | AHA et al. 2002[9]
| $\vec{x}_{RAD,mi}$ | {$\phi=$[-2.0, 0.0], $z=$[0.2, 1.0]} | - |
| $\vec{x}_{LCX,mi}$ | {$\phi=$[2.0, 3.14] $\cup$ [-3.14, -2.0], $z=$[0.2, 1.0]} | - |
Infarct Transmurality | $\rho_{\nu,mi}$ | $\{0.3, 1.0\}$ | - |
Conduction Velocity | $cv_{BZ}$ | 0.15 | $\mathrm{m\,s^{-1}}$ | Mendonca et al. 2018[36]
| $cv_{BZ,r}$ | 1.0 | - |
Mitchell-Schaeffer Ionic Model | $\vec{i}_{BZ}$ | {$V_{gate}=0.13$, $V_{min}=-73.1$ $\mathrm{mV}$, $V_{max}=12.5$ $\mathrm{mV}$, $\tau_{in}=0.45$, $\tau_{out}=3.6$, $\tau_{open}=44.0$} | - | Mitchell & Schaeffer 2003[31], Loewe et al. 2018[37]
Table 3: Additional parameters were included to define infarct zones within
the ventricular-torso model. Variations in the locations of the occlusion of
the 3 primary arteries (LCA, LCX, and RCA) are based on clinical observations.
Two different transmuralities are modeled. Fixed parameters comprise
conductivity, conduction velocity, and the cellular settings.
Figure 2: Cohort of ventricular-torso models derived from clinical MRIs.
Tissues include lungs, blood pools, atrial tissue, ventricles, and general
torso. Parameters dictating ventricular electrophysiology for a normal healthy
control were varied through physiological ranges. Disease conditions of BBB
and MI were then modeled by making adaptations to the model.
Figure 3: Anatomical model cohort for atrial simulations. 80 atrial geometries
with physiological left and right atrial volumes were derived from a bi-atrial
statistical shape model [17] and served as a basis for normal healthy control
simulations. 9 different volume fractions of these models were additionally
replaced by fibrosis for simulations of fibrotic atrial cardiomyopathy.
Interatrial conduction block signals were generated by blocking conduction in
Bachmann’s Bundle in the same 80 geometries. Furthermore, 45 geometries with
enlarged left atrial volumes were generated. As for the torso anatomy, 25
geometries were derived from a human body statistical shape model to account
for height, weight and gender differences in the virtual patient cohort.
Moreover, the rotation angle as well as the spatial position of the atria
inside the torso were varied in physiological ranges.
Electrophysiological Parameters of P wave simulations
---
Entity | Parameter | Value | Unit | Reference
Geometry | $\mathbf{\lambda}_{A,i},i\in[1,24]$ | [-3, 3] | - | Nagel et al. 2021[17]
| $\mathbf{\lambda}_{T,i},i\in[1,2]$ | [-2, 2] | - | Pishchulin et al. 2017[23]
Atrial Rotation | $\alpha_{x}$ | [-20, 20] | ∘ | Odille et al. 2017[60]
| $\alpha_{y}$ | [-20, 20] | ∘ |
| $\alpha_{z}$ | [-20, 20] | ∘ |
Atrial Translation | $t_{x}$ | [-10, 10] | mm | Odille et al. 2017[60]
| $t_{y}$ | [-10, 10] | mm |
| $t_{z}$ | [-10, 10] | mm |
Transversal Conduction Velocities | $\mathrm{CV_{bulk\ tissue}}$ | [0.57, 0.85] | $\mathrm{m\,s^{-1}}$ | Loewe et al. 2015[61]
| $\mathrm{CV_{interatrial\ connections}}$ | [0.46, 0.70] | $\mathrm{m\,s^{-1}}$ |
| $\mathrm{CV_{crista\ terminalis}}$ | [0.57, 0.85] | $\mathrm{m\,s^{-1}}$ |
| $\mathrm{CV_{pectinate\ muscles}}$ | [0.62, 0.92] | $\mathrm{m\,s^{-1}}$ |
| $\mathrm{CV_{inferior\ isthmus}}$ | [0.57, 0.85] | $\mathrm{m\,s^{-1}}$ |
Anisotropy Ratios | $\mathrm{AR_{bulk\ tissue}}$ | 1.94 | - | Loewe et al. 2015[61]
| $\mathrm{AR_{interatrial\ connections}}$ | 3 | - |
| $\mathrm{AR_{crista\ terminalis}}$ | 2.56 | - |
| $\mathrm{AR_{pectinate\ muscles}}$ | 3.24 | - |
| $\mathrm{AR_{inferior\ isthmus}}$ | 1 | - |
Torso Conductivity | $\sigma_{torso}$ | 0.22 | $\mathrm{S\,m^{-1}}$ | Keller et al. 2010[30]
Table 4: Model parameters for atrial simulations. Values were varied randomly
following a uniform distribution in the specified intervals. Fixed parameters
comprise anisotropy ratios and torso conductivity, which were defined as
indicated in the respective column.
Figure 4: (A) Exemplary 10 $\mathrm{s}$ ECGs (lead II) of each pathology class
and a normal healthy control in the virtual cohort. (B) Exemplary 10
$\mathrm{s}$ ECGs (lead II) of each MI pathology class for different occlusion
sites and degrees of transmurality.
Figure 5: Comparison of features in the healthy clinical and virtual cohort.
Probability density functions are shown for timing features (left column, from
top to bottom: P wave duration, QRS duration, T wave duration, PR interval,
QT interval, RR interval) and amplitude features (right column, from top to
bottom: P wave amplitude, Q / R / S peak amplitude, T wave amplitude). Blue
and red curves represent the distributions calculated based on the clinical
and the simulated data, respectively.
Figure 6: Comparison of features extracted from healthy (solid lines) and
pathological (dotted line) ECGs in the clinical (blue curves, bottom panel)
and virtual (red curve, top panel) cohorts. Probability density functions are
shown for selected timing or amplitude features that are clinically evaluated
for a diagnosis of the displayed disease (from left to right: RBBB, LBBB, MI,
1AVB, LAO, IAB and FAM).
Figure 7: (Type classification) Healthy cases: (A) Classification results for
each of the six expert clinicians for the five Turing tests and percentage of
correct assessments. In summary, 62 of 300 assessments of the synthetic ECGs
and 74 of 300 assessments of the measured ECGs could not be correctly
classified by the experts. (B) Type classification matrix across all 600
assessments. (C) Results of the clinical Turing tests performed by 6
clinicians. Each row corresponds to a clinical Turing test and each square
belongs to one of the 20 ECGs per test. Shown is the relative number of
clinicians who correctly classified the corresponding signal. Pathological
cases: (D) Type classification results for each of the two expert clinicians
for the five Turing tests and percentage of correct assessments. In summary,
10 of 100 assessments of the synthetic ECGs and 24 of 100 assessments of the
measured ECGs could not be correctly classified by the experts. (E) Type
classification matrix across all 100 assessments. (F) Results of the clinical
Turing tests performed by 2 clinicians. Each row corresponds to a clinical
Turing test and each square belongs to one of the 20 ECGs per test. Shown is
the relative number of clinicians who correctly classified the type of the
corresponding signal.
Figure 8: (Pathology classification) (A) Pathology
classification results for each of the two expert clinicians for the five
Turing tests and percentage of correct assessments. In summary, 61 of 100
assessments of the synthetic ECGs and 38 of 100 assessments of the measured
ECGs could not be correctly classified by the experts. (B) Pathology
classification matrix across all 100 assessments. (C) (Clinician-based). Shown
are the classifications for both clinicians of all ECG Signals. For each ECG
signal designated by a square, the top entries are the correct pathology and
the bottom entries are the pathology actually selected by the user. Each row
corresponds to a clinical Turing test and each square belongs to one of the 20
ECGs per test. (D) Confusion Matrices.
Asymptotic consistency of the WSINDy algorithm in the limit of continuum data
Daniel A. Messenger and David M. Bortz
Department of Applied Mathematics, University of Colorado, Boulder, CO 80309-0526, USA
(<EMAIL_ADDRESS>, dmbortz@colorado.edu)
In this work we study the asymptotic consistency of the weak-form sparse identification of nonlinear dynamics algorithm (WSINDy) in the identification of differential equations from noisy samples of solutions. We prove that the WSINDy estimator is unconditionally asymptotically consistent for a wide class of models which includes the Navier-Stokes equations and the Kuramoto-Sivashinsky equation. We thus provide a mathematically rigorous explanation for the observed robustness to noise of weak-form equation learning. Conversely, we also show that in general the WSINDy estimator is only conditionally asymptotically consistent, yielding discovery of spurious terms with probability one if the noise level is above some critical threshold and the nonlinearities exhibit sufficiently fast growth. We derive explicit bounds on the critical noise threshold in the case of Gaussian white noise and provide an explicit characterization of these spurious terms in the case of trigonometric and/or polynomial model nonlinearities. However, a silver lining to this negative result is that if the data is suitably denoised (a simple moving average filter is sufficient), then we recover unconditional asymptotic consistency on the class of models with locally-Lipschitz nonlinearities. Altogether, our results reveal several important aspects of weak-form equation learning which may be used to improve future algorithms. We demonstrate our results numerically using the Lorenz system, the cubic oscillator, a viscous Burgers growth model, and a Kuramoto-Sivashinsky-type higher-order PDE.
data-driven modeling, equation learning, weak formulation, asymptotic consistency
37M10, 62J99, 62-07, 65R99, 62-08
§ INTRODUCTION
§.§ Overview
A widespread challenge in the natural sciences is to create a mathematical model which makes accurate predictions, is mathematically analyzable, and amenable to parameter estimation from data. Typically, parameters exhibit a nonlinear relationship with the observed data, which explains the widespread use of nonlinear least-squares methods to fit parameters by minimizing the difference between the experimental data and numerical simulations. In many cases, parameter uncertainty is layered on top of uncertainty in the model itself, which has led to the field of model selection [1, 2], whereby criteria are devised to select an appropriate (e.g. parsimonious) model from a host of candidate models.
A subject of fervent study in recent years has been the use of data to learn the correct model equations for a given phenomenon, along with the model parameters, without resorting to the laborious nonlinear least-squares approach which often involves many expensive forward solves of candidate models. With equations in hand, one can conduct further experiments computationally which may not be immediately accessible in laboratory conditions. The weak-form sparse identification of nonlinear dynamics algorithm (WSINDy), introduced in [19, 18], is one such equation learning algorithm that has been observed to offer several advantages over other equation learning methods, including robustness to noise, high accuracy, low computational cost, and extensibility across many modeling paradigms [20, 21, 22].
WSINDy builds on the SINDy algorithm [6, 26], which popularized the sparse equation learning technique and demonstrated its viability in discovering ordinary differential equations, partial differential equations, normal forms, and reduced-order models, to name a few. For other important works on sparse equation learning see [40, 31, 28, 16] and the references therein.
The key difference between SINDy and WSINDy is that the latter discretizes an integral representation of the dynamics, avoiding computation of pointwise derivatives. This leads to implicit noise filtering effects and relaxed constraints on the smoothness of the underlying ground-truth data. In addition to WSINDy as introduced in [19, 18], these and other advantages of weak-form equation learning have now been reported in several other studies [29, 25, 23, 37, 36, 11, 24, 7, 32, 3], leading to a consensus that exploiting integration can provide significant improvements in the discovery of equations from data. Nevertheless, other promising methods for reducing the deleterious effects of noise do exist, see in particular works that incorporate automatic differentiation [16, 17], ensembling [10], and improved optimization methods [39, 14]. However, each of these techniques can easily be combined with a weak-form approach, as demonstrated in [10, 5].
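The implicit filtering effect of integration is easy to demonstrate numerically. The sketch below is not from the paper — the signal, noise level, grid, and test function are all illustrative choices — but it compares a pointwise finite-difference estimate of $u'$ from noisy data against the weak-form quantity $-\int\psi' u\,dt$, which equals $\int\psi u'\,dt$ after integration by parts and never differentiates the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of u(t) = sin(t); the true derivative is cos(t).
t = np.linspace(0, 2 * np.pi, 2001)
dt = t[1] - t[0]
sigma = 0.05
U = np.sin(t) + sigma * rng.standard_normal(t.size)

# Pointwise route: finite differences amplify the noise by ~ sigma / dt.
fd_err = np.max(np.abs(np.gradient(U, dt) - np.cos(t)))

# Weak-form route: test against psi(s) = (1 - s^2)^4 on a window of
# half-width m grid points and move the derivative onto psi.
m = 200
s = np.arange(-m, m + 1) / m
psi = (1 - s**2) ** 4
dpsi = -8 * s * (1 - s**2) ** 3 / (m * dt)   # d(psi)/dt on the real grid
k = t.size // 2                               # one query point, mid-domain
weak_lhs = -np.sum(dpsi * U[k - m : k + m + 1]) * dt        # -int psi' u
weak_rhs = np.sum(psi * np.cos(t[k - m : k + m + 1])) * dt  # int psi u'
weak_err = abs(weak_lhs - weak_rhs) / (np.sum(psi) * dt)

print(fd_err, weak_err)  # the weak-form error is orders of magnitude smaller
```

The noise enters the weak-form integral only through a weighted average over the $2m+1$ points of the window, shrinking its effect, while finite differencing multiplies it by $1/\Delta t$.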
The goal of this paper is to provide rigorous justification for the observed robustness to noise of WSINDy and other weak form methods. We accomplish this by determining conditions under which a WSINDy model is asymptotically consistent with the true underlying model in the limit of continuum data. We consider the performance of WSINDy on noisy samples $\Ubf^{(m)}$ of some solution $u$ of a differential equation on grids $(\Xbf^{(m)},\tbf^{(m)})$ of increasing resolution within an underlying spatiotemporal domain $\Omega\times(0,T)$. This is cast as a support recovery problem for the support $\supp{\wstar}$ of the true weight vector $\wbf^\star$, which is nonzero only on the subset of terms that exist in the true model. It should be noted that throughout we assume that the true model is contained in the library of models considered, however extension beyond this case is possible (see Section <ref> discussion point II).
As we will see, for each class of models examined, there exists a critical noise level $\sigma_c>0$ such that for noise levels $\sigma<\sigma_c$, we have $\what^{(m)}$ satisfying $\supp{\what^{(m)}} = \supp{\wstar}$ with probability rapidly approaching one as $m\to \infty$, where $\what^{(m)}$ is the WSINDy model at discretization level $m$, and $m$ is inversely proportional to the volume element of the spatiotemporal grid $(\Xbf^{(m)},\tbf^{(m)})$. Most importantly, we identify a large class of systems for which $\sigma_c=\infty$, leading to unconditional support recovery as $m\to \infty$. We also prove that suitably denoising the data leads to unconditional asymptotic consistency on the class of models with locally-Lipschitz nonlinearities; in particular, this is true if a simple moving average filter is used.
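A moving-average filter of the kind referred to above takes only a few lines. In this sketch (the signal, window width, and noise level are illustrative choices, not the paper's), the RMS error of the filtered samples drops well below the raw noise level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a smooth signal on a fine grid.
t = np.linspace(0, 1, 5001)
u_true = np.exp(-3 * t) * np.sin(8 * np.pi * t)
U = u_true + 0.2 * rng.standard_normal(t.size)

def moving_average(x, w):
    """Centered moving average of odd width w, reflecting at the boundaries."""
    pad = w // 2
    xp = np.pad(x, pad, mode="reflect")
    return np.convolve(xp, np.ones(w) / w, mode="valid")

U_f = moving_average(U, 101)

raw_err = np.sqrt(np.mean((U - u_true) ** 2))     # ~ the noise level 0.2
filt_err = np.sqrt(np.mean((U_f - u_true) ** 2))  # noise shrunk by ~ sqrt(101)
print(raw_err, filt_err)
```

Averaging over a window that is wide in grid points but narrow in physical extent suppresses the noise while introducing only a small smoothing bias.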
§.§ Related work
Despite the widespread use and development of sparse data-driven equation learning algorithms applied to dynamical systems since the onset of SINDy, very few studies have examined the performance of these methods from a mathematically rigorous perspective. The authors of [33] and [30] report sparse recovery guarantees for chaotic and structured ODE systems (respectively), however each result requires the availability of point-wise derivatives. Other results include convergence of the STLS algorithm used in SINDy [41] and recovery guarantees for sparse polynomial regression [13], however the former only asserts convergence to a local minimum of the $\ell_0$-regularized least-squares cost function, and the latter relies on sparsity of the measurement noise. The authors of [27] have recently proved that surrogate models learned using WSINDy converge to the true dynamics as the basis used to represent the dynamics grows in dimension, although this result concerns only continuum noise-free data.
In this work we undertake the problem of analyzing the performance of WSINDy applied to noisy discrete datasets by utilizing a suitable continuum limit, whereby many practical insights can be gleaned. One important related work is [12], in which the authors show that with local polynomial differentiation together with filtering kernels that obey certain asymptotics, the LASSO estimator is asymptotically consistent for PDE recovery problems from quadratic libraries of arbitrary spatial differentiation order. This approach appears to be extendable to a wide class of model libraries, but requires the so-called mutual incoherence condition on the design matrix, which is restrictive in settings with correlated columns (a common paradigm in sparse equation learning from data). In addition, the noise-free data in [12] is assumed to be classical, which we show in this work is not necessary if one employs the weak form. Nevertheless, we conjecture that results in the present article may be combined with those in [12] to yield asymptotically consistent algorithms for models with combined weak and classical derivative evaluation.
§.§ Summary of results
As alluded to above, in this work we focus on the performance of WSINDy in the continuum limit as the computational grid $(\Xbf^{(m)},\tbf^{(m)})$ on which the noisy data $\Ubf^{(m)}$ is sampled becomes dense in the domain of definition $\Omega\times (0,T)$ of the dynamical system. Motivated by future algorithmic developments, we aim to provide explicit results whenever possible. These include exponential rates of concentration which we use to quantify the probability of support recovery, and explicit bounds on the critical noise below which we recover the correct model unconditionally in the limit $m\to \infty$ (where $m$ is the number of points that the test function $\psi$ is supported on at the grid resolution $(\Delta x^{(m)},\Delta t^{(m)})$, see (<ref>)). The following is a qualitative summary of results with reference to specific theorems and lemmas (see Section <ref> for assumptions used in many of these results).
* We prove asymptotic results that explain the empirically observed robustness of weak-form equation learning methods (Theorem <ref> and supporting lemmas <ref> and <ref>, see also Theorem <ref> and Lemma <ref>).
* We prove that WSINDy is capable of identifying models from non-smooth data (Theorem <ref>), which was demonstrated empirically [18], and we quantify the effect of smoothness on convergence (theorems <ref> and <ref>, utilizing Lemma <ref>).
* We specify both the class of models and the bounds on the critical noise level below which the weak form is asymptotically consistent (lemmas <ref> and <ref>).
* We prove that suitably denoising the data (e.g. with a simple moving average filter) results in unconditional asymptotic consistency of WSINDy over the class of models with locally-Lipschitz nonlinearities (Theorem <ref>).
§.§ Outline
We first cover in Section <ref> the preliminaries necessary to analyze the limit of large data. These include an overview of the WSINDy algorithm for ordinary and partial differential equations (<ref>) which may be skipped for readers familiar with [18], a definition of the continuum problem and related notation (<ref>), an intuition-building discussion of the bias resulting from taking the continuum limit (<ref>), and lastly the assumptions used throughout and explanations thereof (<ref>). In Section <ref> we prove that the WSINDy discrete linear system concentrates at an exponential rate to the associated continuum linear system under the assumptions in Section <ref>. In Section <ref> we prove that results in Section <ref> imply conditional asymptotic consistency for raw data (<ref>) and unconditional consistency for a wide range of data filtering techniques (<ref>). Finally, Section <ref> contains numerical examples demonstrating the results from Section <ref> in practice. In particular, we show that with a simple moving-average filter we achieve stable and accurate recovery of systems for noise levels where WSINDy without filtering fails due to the existence of a critical noise. In the appendix we include a table of notations used (Appendix
<ref>), supporting lemmas (Appendices <ref>,<ref>,<ref>), additional information on numerical examples (Appendices <ref>,<ref>), and extension of several results in Section <ref> to more general settings (Appendix <ref>).
§ PRELIMINARIES
§.§ Overview of WSINDy
Let $\Ubf = u(\Xbf, \tbf) + \ep$ be a spatiotemporal dataset defined on the $d$-dimensional spatial grid $\Xbf \subset \Omega\subset \Rbb^d$ over timepoints $\tbf\subset [0, T]$, where $u:\Rbb^d\times\Rbb\to \Rbb^n$ is a weak solution to the PDE
\begin{equation}\label{gen_pde}
\partial^{\alpha^0}u(x,t) = \partial^{\alpha^1}g_1(u(x,t))+\partial^{\alpha^2}g_2(u(x,t))+\dots+\partial^{\alpha^S} g_S(u(x,t)), \qquad x\in \Omega,\ t\in (0,T).
\end{equation}
Here $\ep$ represents i.i.d. measurement noise associated with point-wise evaluations of $u$. The WSINDy algorithm uses the weak form of the dynamics (<ref>) to identify a PDE model for $\Ubf$ by discovering functional representations of the nonlinear differential operators[Throughout we use the multi-index notation $\alpha^s =~ (\alpha^s_1,\dots,\alpha^s_d,\alpha^s_{d+1}) \in \Nbb^{d+1}$ to denote partial differentiation with respect to $x = (x_1,\dots, x_d)$ and $t$:
\[\partial^{\alpha^s}u(x,t) = \frac{\partial^{\alpha^s_1+\cdots+\alpha^s_d+\alpha^s_{d+1}}}{\partial x_1^{\alpha^s_1}\dots \partial x_d^{\alpha^s_d}\partial t^{\alpha^s_{d+1}}}u(x,t).\]
We will avoid using subscript notation such as $u_x$ to denote partial derivatives, instead using $\partial^\alpha u$ or $\partial_x u$.]
$(\partial^{\alpha^s}g_s(\cdot))_{s\in [S]}$ in a computationally efficient sparse regression framework. We adopt a dictionary learning approach and use a basis of $J$ trial functions $\CalF:=(f_j)_{j\in [J]}$ and differential operators specified by the set of multi-indices $\pmb{\alpha}:=(\alpha^s)_{s\in [S]}$. We assume that $(g_s)_{s\in [S]} \subset \text{span}(\CalF)$, allowing for the representation of (<ref>):
\begin{equation}\label{diffform}
\partial^{\alpha^0} u = \sum_{s=1}^S\sum_{j=1}^J \wstar_{(s-1)J+j} \partial^{\alpha^s} f_j(u),
\end{equation}
with the assumption that the true $\wstar\in \Rbb^{SJ}$ is sparse.
To convert (<ref>) to a weak form, we convolve the equation against a sufficiently smooth test function $\psi(x,t)$, compactly-supported in $\Omega\times (0,T)$, arriving at a convolutional weak form of the equation
\begin{equation}\label{conv_form}
\Big(\partial^{\alpha^0}\psi\Big) * u (\xbf,t) = \sum_{s=1}^S\sum_{j=1}^J \wstar_{(s-1)J+j} \Big(\partial^{\alpha^s}\psi\Big) * f_j(u)(\xbf,t).
\end{equation}
Equation (<ref>) only holds if the support of $\psi$ centered at $(\xbf,t)$ lies inside $\Omega\times(0,T)$, or
\begin{equation}\label{IBPbcs}
\supp{\psi(\xbf-\cdot,t-\cdot)}\subset \Omega\times (0, T)
\end{equation}
which serves to eliminate any boundary terms that arise during integration by parts.
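As a concrete instance (a worked example chosen for illustration; the heat equation is not one of the models treated in this paper), take the 1D heat equation $\partial_t u = \nu\,\partial_{xx}u$, i.e. $S=1$ and $f_1(u)=u$. Because differentiation commutes with convolution, integrating by parts under the support condition above moves every derivative onto $\psi$:

```latex
\begin{aligned}
\big(\partial_t\psi\big) * u\,(x,t)
  &= \int_0^T\!\!\int_\Omega \partial_t\psi(x-y,\,t-s)\,u(y,s)\,dy\,ds \\
  &= \int_0^T\!\!\int_\Omega \psi(x-y,\,t-s)\,\partial_s u(y,s)\,dy\,ds
   = \psi * \big(\partial_t u\big)(x,t) \\
  &= \nu\,\psi * \big(\partial_{xx} u\big)(x,t)
   = \nu\,\big(\partial_{xx}\psi\big) * u\,(x,t),
\end{aligned}
```

so the convolutional weak form reads $(\partial_t\psi)*u = \nu\,(\partial_{xx}\psi)*u$: all boundary terms vanish by compact support, and no derivative of the (noisy) data is ever taken.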
To discretize (<ref>), we first select a finite set of query points $\CalQ := \{(\xbf_k,t_k)\}_{k\in[K]}$ satisfying (<ref>). We then define discrete convolution operators
\[\Psi^s := \partial^{\alpha^s}\psi(\Ybf,\mathfrak{t})(\Delta x)^d\Delta t, \qquad s = 0, \dots, S\]
where $(\Ybf,\mathfrak{t})$ denotes a centered reference grid at the same spatiotemporal resolution $(\Delta x, \Delta t)$ as $(\Xbf,\tbf)$, and the factor $(\Delta x)^d\Delta t$ is the uniform weight of the trapezoidal rule: the compact support of $\psi$ eliminates the boundary terms that would otherwise carry weight $1/2$. The WSINDy linear system $(\Gbf,\bbf)$ with $\Gbf\in \Rbb^{K\times \mathfrak{J}}$, $\bbf\in \Rbb^{K\times n}$ is then defined by
\begin{equation}\label{conv_disc}
\begin{dcases} \hspace{1.5cm}\bbf_k := \Psi^0 * \Ubf (\xbf_k,t_k),\\
\Gbf_{k,(s-1)J+j} := \Psi^s * f_j(\Ubf) (\xbf_k,t_k),\end{dcases}
\end{equation}
where $K$ is the number of convolution query points, $\mathfrak{J} =SJ$ is the size of the candidate model library, and $n$ is the dimension of the state variable $u$ being observed. The discrete $(d+1)$-dimensional convolution between $\Psi^s$ and $f_j(\Ubf)$ at a point $(\xbf_k,t_k) \in (\Xbf,\tbf)$ is defined by
\[\Psi^s*f_j\left(\Ubf\right)(\xbf_k,t_k) := \sum_{\ell_1=1}^{N_1}\cdots\sum_{\ell_{d+1}=1}^{N_{d+1}} \Psi^s_{k_1-\ell_1,\dots,k_{d+1}-\ell_{d+1}} f_j\left(\Ubf_{\ell_1,\dots,\ell_{d+1}}\right).\]
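The assembly of $(\Gbf,\bbf)$ thus reduces to a handful of discrete convolutions. The following sketch (assuming numpy; the ODE, test function, support half-width $m$, and monomial library are all illustrative choices) carries this out for the scalar case $d=0$ with the logistic equation $\dot u = u - u^2$, whose true weights over the library $\{u, u^2, u^3\}$ are $(1,-1,0)$:

```python
import numpy as np

# Exact logistic solution: u' = u - u^2, so w* = (1, -1, 0) over {u, u^2, u^3}.
t = np.linspace(0.0, 10.0, 4001)
dt = t[1] - t[0]
U = 1.0 / (1.0 + np.exp(-(t - 5.0)))

# Reference test function psi(s) = (1 - s^2)^4 on 2m+1 grid points.
m = 300
s = np.arange(-m, m + 1) / m
psi = (1.0 - s**2) ** 4
dpsi = -8.0 * s * (1.0 - s**2) ** 3 / (m * dt)  # d(psi)/dt on the real grid

# Psi^0 * U and Psi^s * f_j(U) as discrete convolutions; "valid" mode keeps
# exactly the query points whose psi-support lies inside (0, T).
b = np.convolve(U, dpsi, mode="valid") * dt
G = np.column_stack(
    [np.convolve(U**p, psi, mode="valid") * dt for p in (1, 2, 3)]
)

w_hat = np.linalg.lstsq(G, b, rcond=None)[0]
print(w_hat)  # close to (1, -1, 0)
```

With clean data the only error here is quadrature error, which is negligible for this smooth, compactly supported $\psi$; sparsification of the coefficient vector is then the job of MSTLS.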
To arrive at an approximate model we solve for a sparse solution $\what$ such that $\Gbf\what \approx \bbf$ using a modified sequential thresholding least squares algorithm (proposed in [18]) $\what = \text{MSTLS}(\Gbf,\bbf;\, \CalL,\pmb{\lambda})$ that combines the traditional sequential thresholding algorithm
\begin{equation}\label{STLS}
\text{STLS}(\Gbf,\bbf;\, \lambda)\qquad \begin{dcases}
\hspace{0.1cm} \wbf^{(\ell+1)} = H_\lambda\left(\argmin_{\supp{\wbf}\subset S^{(\ell)}}\nrm{\Gbf\wbf-\bbf}_2^2\right)\\
S^{(\ell+1)} =\supp{\wbf^{(\ell+1)}}
\end{dcases}
\end{equation}
with a line search for the sparsity threshold $\lambda$:
\begin{equation}\label{MSTLS2}
\text{MSTLS}(\Gbf,\bbf;\, \CalL,\pmb{\lambda})\qquad \begin{dcases}
\hspace{0.1cm} \widehat{\lambda} = \min\left\{\lambda\in \pmb{\lambda} \ :\ \CalL(\lambda) = \min_{\lambda\in \pmb{\lambda}} \CalL(\lambda)\right\}\\
\widehat{\wbf} =\text{STLS}(\Gbf,\bbf;\,\widehat{\lambda}).\end{dcases}
\end{equation}
Here $H_\lambda$ is the hard thresholding operator defined by
\begin{equation}\label{hardthresh}
H_\lambda(\wbf)_i=\begin{cases} \wbf_i, & |\wbf_i|\geq \lambda \\ 0, & \text{otherwise}\end{cases}
\end{equation}
and $\CalL$ is the auxiliary loss function, introduced in [18] and defined by
\begin{equation}\label{lossfcn}
\CalL(\lambda) = \frac{\nrm{\Gbf(\wbf^\lambda-\wbf^0)}_2}{\nrm{\Gbf\wbf^0}_2}+\frac{\nrm{\wbf^\lambda}_0}{\mathfrak{J}},
\end{equation}
which is defined on outputs $\wbf^\lambda = $ STLS$(\Gbf,\bbf;\,\lambda)$ of the sequential thresholding least squares algorithm with threshold $\lambda$. The collection of candidate sparsity thresholds $\pmb{\lambda}$ is chosen by the user (a finite set of equally log-spaced values of $\lambda$ is seen to be a successful choice for $\pmb{\lambda}$ in [18]).
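The thresholding recursion and the line search can be transcribed directly. The sketch below (assuming numpy; the random test problem, noise level, and $\pmb{\lambda}$ grid are illustrative choices) implements STLS with the hard threshold defined above and selects the threshold by minimizing the auxiliary loss:

```python
import numpy as np

def stls(G, b, lam, max_iter=25):
    """Sequential thresholding least squares: alternate a least-squares solve
    restricted to the active support with the hard threshold H_lam."""
    support = np.ones(G.shape[1], dtype=bool)     # S^(0): every term active
    for _ in range(max_iter):
        w = np.zeros(G.shape[1])
        if support.any():
            w[support] = np.linalg.lstsq(G[:, support], b, rcond=None)[0]
        w[np.abs(w) < lam] = 0.0                  # apply H_lam
        new_support = w != 0
        if np.array_equal(new_support, support):  # fixed point of the recursion
            break
        support = new_support
    return w

def mstls(G, b, lambdas):
    """Line search over candidate thresholds, minimizing
    L(lam) = ||G(w^lam - w^0)|| / ||G w^0|| + ||w^lam||_0 / J;
    ties resolve to the smallest lambda (lambdas assumed sorted ascending)."""
    J = G.shape[1]
    w0 = np.linalg.lstsq(G, b, rcond=None)[0]     # the lam = 0 solution
    candidates = [stls(G, b, lam) for lam in lambdas]
    losses = [
        np.linalg.norm(G @ (w - w0)) / np.linalg.norm(G @ w0)
        + np.count_nonzero(w) / J
        for w in candidates
    ]
    return candidates[int(np.argmin(losses))]

# Toy support-recovery problem with a sparse true weight vector.
rng = np.random.default_rng(3)
G = rng.standard_normal((400, 6))
w_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0, 0.0])
b = G @ w_true + 0.01 * rng.standard_normal(400)
w_hat = mstls(G, b, np.logspace(-3, 1, 40))
print(np.nonzero(w_hat)[0])  # recovers the true support {0, 2}
```

Note that this sketch uses the simplified hard threshold presented above; the original formulation in [18] additionally incorporates relative term magnitudes and a rescaling step.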
Altogether, the hyperparameters of the WSINDy algorithm are the reference test function $\psi$, the query points $\CalQ$, the model library composed of trial functions $\CalF$ and differential operator indices $\pmb{\alpha}$, and the sparsity thresholds $\pmb{\lambda}$. These are collected in Table <ref>.
In the original formulation of MSTLS in [18], the authors employed a different hard thresholding operator than (<ref>) to incorporate relative term magnitudes $\nrm{\Gbf_i\wbf_i}_2/\nrm{\bbf}$ and a rescaling step to improve conditioning. Here we have chosen to present results using (<ref>) in order to simplify the analysis and identify explicit conditions for convergence, however we conjecture that the original formulation of MSTLS is more computationally advantageous, especially for support recovery from multiscale data. We elaborate on the practical advantages and possible theoretical conclusions regarding the original formulation of MSTLS in Section <ref>.
Hyperparameter | Domain | Description
$\psi$ | $C^{|\pmb{\alpha}|}(\Rbb^{d+1})$ | test function
$\CalQ := \{(\xbf_k,t_k)\}_{k\in[K]}$ | $\Rbb^{K\times(d+1)}$ satisfying (<ref>) | convolution query points
$\CalF := (f_j)_{j\in[J]}$ | $C(\Rbb)$ | trial functions
$\pmb{\alpha} = (\alpha_s)_{s=0,\dots,S}$ | $\Nbb^{(S+1)\times(d+1)}$ | partial derivatives
$\pmb{\lambda}$ | $[0,\infty)$ | candidate sparsity thresholds
Hyperparameters for the WSINDy Algorithm. Note that $\psi\in C^{|\pmb{\alpha}|}$ and $\CalQ$ satisfying (<ref>) ensure that the convolutional weak form (<ref>) is well defined. Here $|\pmb{\alpha}| := \max_{\alpha^s\in \pmb{\alpha}}\nrm{\alpha^s}_\infty$ is the maximum derivative order in the library.
§.§ WSINDy in the continuum limit: definitions and notation
In this work we analyze the performance of WSINDy in the limit of continuum data, in the sense that the solution $u$ is sampled on computational grids at finer and finer scales (to be made precise below), all while keeping the spatiotemporal domain $\Omega\times (0,T)$ fixed and while fixing the hyperparameters in Table <ref>. As we will see in Section <ref>, this continuum limit leads to a linear system $(\overline{\Gbf},\overline{\bbf})$ (defined in (<ref>)), that is in general biased from the noise-free continuum linear system, denoted by $(\Gbf^\star,\bbf^\star)$, which has entries given analytically by either side of equation (<ref>). By analyzing this biased system, we are able to prove that the bias is controllable, in that we prove conditions under which WSINDy still recovers the true support $S^\star := \supp{\wstar}$ of the true model coefficients $\wstar$ in the limit (see Section <ref>). We let the least squares solution to $(\overline{\Gbf},\overline{\bbf})$ be denoted by $\overline{\wbf}^0$, which under mild assumptions on the measurement noise satisfies $\overline{\Gbf}\overline{\wbf}^0=\overline{\bbf}$.
Specifically, we consider a sequence of samples $\{\Ubf^{(m)}\}_{m=1}^\infty$ on a dense set of successively finer computational grids $\{(\Xbf^{(m)}, \tbf^{(m)})\}_{m=1}^\infty\subset \Omega\times(0, T)$, each of which is equally spaced with resolution $(\Delta x^{(m)}, \Delta t^{(m)})$. As in Section <ref>, we assume an i.i.d. additive noise model $\Ubf^{(m)} = u(\Xbf^{(m)}, \tbf^{(m)})+\ep$, detailed assumptions of which are specified in Section <ref>. With the hyperparameters in Table <ref> fixed, for each $m$ we let $(\Gbf^{(m)},\bbf^{(m)})$ denote the linear system associated with WSINDy at discretization level $m$. A notable (if unsurprising) result of this paper is that under the assumptions in Section <ref>, there exists a continuum linear system
\begin{equation}\label{contsys}
(\overline{\Gbf},\overline{\bbf}):=\lim_{m\to \infty}(\Gbf^{(m)},\bbf^{(m)})
\end{equation}
with convergence in probability and exhibiting an exponential concentration rate. We let $(\Gbf^\star,\bbf^\star)$ be the noise-free continuum linear system such that
\begin{equation}\label{nzfreesys}
\bbf^\star = \Gbf^\star\wstar
\end{equation}
(i.e. the entries of $(\Gbf^\star,\bbf^\star)$ are given on either side of equation (<ref>)). We will refer to solving (<ref>) for $\overline{\wbf}$ such that $\overline{\Gbf}\overline{\wbf}\approx \overline{\bbf}$ as the continuum problem and to solving (<ref>) for $\wstar$ as the noise-free problem. Throughout, weight vectors $\overline{\wbf}^\lambda$ refer to sequential thresholding least squares solutions to (<ref>) with threshold $\lambda$. In other words,
\begin{equation}\label{wlam}
\overline{\wbf}^\lambda = \text{STLS}(\overline{\Gbf},\overline{\bbf}; \lambda),
\end{equation}
and $\wbf^{(m),\lambda}$ is defined similarly with regard to $(\Gbf^{(m)},\bbf^{(m)})$. In particular, $\overline{\wbf}^0$ is the least-squares solution to the continuum problem and $\wbf^{(m),0}$ is the least-squares solution to the discrete system at level $m$.
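For concreteness, STLS can be sketched in a few lines. This is a minimal illustration of sequential thresholding least squares (the function name `stls` and the toy system are ours), not the reference implementation of [18]:

```python
import numpy as np

def stls(G, b, lam, max_iter=10):
    """Minimal sequential thresholding least squares (STLS) sketch:
    alternate hard thresholding at level lam with least-squares
    refits restricted to the surviving columns."""
    w = np.linalg.lstsq(G, b, rcond=None)[0]
    for _ in range(max_iter):
        small = np.abs(w) < lam
        w[small] = 0.0
        big = ~small
        if big.any():
            w[big] = np.linalg.lstsq(G[:, big], b, rcond=None)[0]
    return w

# Noise-free sanity check: recover a sparse weight vector exactly
rng = np.random.default_rng(0)
G = rng.standard_normal((60, 8))
w_true = np.zeros(8)
w_true[[1, 4]] = [1.5, -2.0]
w_hat = stls(G, G @ w_true, lam=0.5)
```

With noise-free data and a full-rank matrix, the initial least-squares fit already recovers `w_true`, and thresholding simply zeroes the numerically tiny entries.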
§.§ Controllable bias
In this section we informally discuss in what sense the bias in the continuum problem is controllable, such that WSINDy applied to $(\overline{\Gbf},\overline{\bbf})$ still recovers $\supp{\wstar}$. This is crucial to proving that WSINDy applied to the discrete systems $(\Gbf^{(m)},\bbf^{(m)})$ also recovers $\supp{\wstar}$ with probability rapidly approaching one as $m\to \infty$. To derive explicit results, we focus on the case of nonlinearities $\CalF$ containing only polynomial and trigonometric functions, although similar results may be available for more general libraries. Favoring a more intuition-building presentation, we introduce concepts in this section in the setting of ordinary differential equations, saving more general results to sections <ref> and <ref>.
In terms of the continuum linear system and noise-free continuum linear system, for a fixed candidate weight vector $\wbf$ it holds that
\[\lim_{m\to \infty}\left(\bbf^{(m)} - \Gbf^{(m)} \wbf\right) = \overline{\bbf}-\overline{\Gbf}\wbf=\Gbf^\star(\wstar-\wbf)+(\Gbf^\star-\overline{\Gbf})\wbf+(\overline{\bbf} - \bbf^\star).\]
Hence, we see that recovering $\wstar$ asymptotically ultimately depends on the proximity of $(\overline{\Gbf}, \overline{\bbf})$ to $(\Gbf^\star, \bbf^\star)$. A main result of the current manuscript is to show that although the continuum system is in general biased from the noise-free continuum system (i.e. $(\overline{\Gbf}, \overline{\bbf})\neq (\Gbf^\star, \bbf^\star)$), asymptotic recovery of supp$(\wstar)$ is still guaranteed for polynomial and trigonometric systems for noise levels falling below a critical noise threshold. Below we provide an explicit characterization of this threshold in the case of Gaussian noise, which depends on the relative magnitudes of the true coefficients $\wstar$ and the growth rate of nonlinearities present in the true model (see the bounds (<ref>)). Furthermore, we show in Section <ref> that if we modify the WSINDy algorithm (as presented in Section <ref>) to additionally include a filtering step, then we recover the true support supp$(\wstar)$ asymptotically for all problems satisfying the assumptions in Section <ref>.
We will now explicitly characterize the continuum linear system $(\overline{\Gbf},\overline{\bbf})$ and its implications. The focus of Sections <ref> and <ref> is to derive the following results in the PDE setting, together with exponential concentration bounds. However, for the sake of building intuition, consider data $\Ubf = u(\tbf)+\ep$ where $u:\Rbb\to\Rbb$ is a function of time satisfying some ODE $\frac{d}{dt}u = F(u(t))$, and as before $\ep$ represents i.i.d. measurement noise. Entries of the discrete linear systems $(\Gbf^{(m)},\bbf^{(m)})$ (defined in general by equation (<ref>)) then consist of discretized integrals of the form
\begin{equation}\label{Im}
T^{(m)} := \sum_{i=1}^{m} \varphi(t_i) f(\Ubf_i)\Delta t^{(m)},
\end{equation}
where $\varphi$ denotes an arbitrary derivative of the reference test function $\psi$ and $f\in \CalF$ is a given function (possibly nonlinear). Letting $\rho$ be the distribution of the measurement noise $\ep$, under mild assumptions on $f$ and $\rho$ it holds that[We define the cross-correlation $f\star\rho(u) = \int_{\Rbb}f(u+x)d\rho(x)$. This is equivalent to a convolution when $\rho$ is symmetric.]
\begin{equation}\label{EIm}
\lim_{m\to \infty}\Ebb[T^{(m)}] = \lim_{m\to \infty}\sum_{i=1}^{m} \varphi(t_i) f\star\rho(u(t_i))\Delta t^{(m)} = \int_{\supp{\varphi}} \varphi(t) f\star\rho(u(t))\,dt
\end{equation}
\[\lim_{m\to \infty}\Vbb[T^{(m)}] = \lim_{m\to \infty}\Delta t^{(m)}\left(\sum_{i=1}^{m} \left(\varphi(t_i)\right)^2\Big(f^2\star\rho(u(t_i))-(f\star\rho(u(t_i)))^2\Big) \Delta t^{(m)}\right) = 0.\]
In other words, all entries of the linear system $(\Gbf^{(m)},\bbf^{(m)})$ converge to deterministic quantities of the form (<ref>), where $f$ has been replaced by $f\,\star\,\rho$, representing a bias between the continuum and noise-free problems.
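The bias just described can be observed numerically: with $f(x)=x^2$ and Gaussian noise, $f\star\rho(u)=u^2+\sigma^2$, so the Monte Carlo average of $T^{(m)}$ matches the Riemann sum with $f$ replaced by $f\star\rho$ rather than the noise-free sum. A sketch, with $\varphi$, $u$, and the sample sizes chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, n_draws = 200, 0.5, 5000
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
u = np.sin(2 * np.pi * t)
phi = np.sin(np.pi * t) ** 2      # stand-in test function vanishing at the endpoints

# Monte Carlo average of T^(m) = sum_i phi(t_i) f(U_i) dt with f(x) = x^2
eps = rng.normal(0.0, sigma, size=(n_draws, m))
T = ((u + eps) ** 2 * phi).sum(axis=1) * dt
mc_mean = T.mean()

biased = np.sum(phi * (u ** 2 + sigma ** 2)) * dt   # Riemann sum with f replaced by f*rho
naive = np.sum(phi * u ** 2) * dt                   # noise-free Riemann sum
```

The average concentrates near `biased` and stays a fixed distance (here $\sigma^2\sum_i \varphi(t_i)\Delta t \approx 0.125$) from the noise-free value, illustrating that the bias does not vanish with more data.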
If the trial function library $\CalF$ is such that $\{f\star\rho\ |\ f\in \CalF\} \in \text{span}(\CalF)$, then a linear transformation exists between the continuum least-squares solution $\overline{\wbf}^0$ and true solution $\wstar$, despite the inherent bias. This turns out to be the case for polynomial and trigonometric $f$, in which case for moderate noise levels and reasonably well-behaved dynamics, recovery of the correct model in the limit of large $m$ follows from a single round of thresholding. This partially explains the observed robustness to noise in [18]. We now informally derive these linear transformations for polynomial and trigonometric libraries.
When $f$ is polynomial, the bias $f\star\rho - f$ is a polynomial of lower degree. Specifically, with $f(x) = x^k$, we have that
\[f\star\rho(x) = \sum_{j=0}^k{k \choose j}M_{k-j}(\rho)x^j = f(x) + \sum_{j=0}^{k-1}{k \choose j}M_{k-j}(\rho)x^j,\]
where $M_j(\rho)$ denotes the $j$th moment of the distribution $\rho$. For example if $\rho$ is a white Gaussian noise distribution, then[The double factorial for an integer $n$ is defined $n!!:=n(n-2)\cdots(1)$, with $(0)!!=(-1)!!=1$.]
\[M_j(\rho) = \begin{dcases} (j-1)!!\sigma^j, & j \text{ even} \\0, & \text{otherwise.}\end{dcases}\]
The monomial library $P^{(q)} = (1,x,\dots,x^q)$ thus transforms under cross-correlation with Gaussian $\rho$ as
\[P^{(q)}\star\rho = P^{(q)}\Abf^{(q)},\]
where $\Abf^{(q)}$ is defined
\begin{equation}\label{Agauss}
\Abf^{(q)}_{i,j} = \begin{dcases} {j\choose i}(j-i-1)!!\sigma^{j-i}, & j\geq i, \ (j-i)\ \text{even} \\ 0, &\text{otherwise.}\end{dcases}
\end{equation}
In words, $\Abf^{(q)}$ is upper triangular with $1$'s along the diagonal and $0$'s along odd superdiagonals. The least-squares solution to the continuum problem is then given by $\overline{\wbf}^0 = (\Abf^{(q)})^{-1}\wstar$, and using that (see Lemma <ref> in Appendix <ref>)
\[\left(\Abf^{(q)}\right)^{-1}_{i,j} = (-1)^{\frac{j-i}{2}} \Abf^{(q)}_{i,j},\]
the resulting coefficient error obeys the bound,
\[\nrm{\wstar-\overline{\wbf}^0}_\infty \leq C\sigma^2\nrm{\wbf^\star}_\infty.\]
The constant $C = C(p_{\max},\sigma)$ depends on the maximum degree monomial $p_{\max}$ in the true model, as well as the noise variance $\sigma^2$, and takes modest values for small $p_{\max}$:
\[C(1,\sigma) = 0,\quad C(2,\sigma) = 1,\quad C(3,\sigma) = 3,\quad C(4,\sigma) = 6+3\sigma^2,\quad C(5,\sigma) = 10+15\sigma^2.\]
Hence, if $p_{\max}$ is not too large and the noise variance $\sigma^2$ is moderate, then spurious terms in $\overline{\wbf}^0$ will be removed by a single round of thresholding.
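The structure of $\Abf^{(q)}$ in (<ref>) and the sign-flip formula for its inverse can be checked numerically; the following is a sketch with helper names of our choosing:

```python
import numpy as np
from math import comb

def double_factorial(n):
    # n!! with the conventions (-1)!! = 0!! = 1
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def gaussian_moment_matrix(q, sigma):
    """A^(q): A[i, j] = C(j, i) (j-i-1)!! sigma^(j-i) for j >= i with
    j - i even, encoding P^(q) * rho = P^(q) A^(q) for Gaussian rho."""
    A = np.zeros((q + 1, q + 1))
    for i in range(q + 1):
        for j in range(i, q + 1, 2):
            A[i, j] = comb(j, i) * double_factorial(j - i - 1) * sigma ** (j - i)
    return A

q, sigma = 6, 0.7
A = gaussian_moment_matrix(q, sigma)
# Claimed inverse: flip the sign of every other even superdiagonal
signs = np.array([[float((-1) ** ((j - i) // 2)) if j >= i and (j - i) % 2 == 0 else 0.0
                   for j in range(q + 1)] for i in range(q + 1)])
A_inv_claimed = signs * A
```

Multiplying `A` by `A_inv_claimed` reproduces the identity, confirming the sign-flip formula for $(\Abf^{(q)})^{-1}$ stated above.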
The situation is even nicer for trigonometric functions. With $f=e^{i\omega x}$, we have
\[f\star\rho(x) = \int_\Rbb e^{i\omega(x+y)}\rho(y)dy = e^{i\omega x} \widehat{\rho}(\omega) = \hat{\rho}(\omega) f(x)\]
where $\widehat{\rho}$ is the Fourier transform of $\rho$ up to scaling. A library of trigonometric terms $F^{\pmb{\omega}} = (e^{i\omega\cdot(\cdot)})_{\omega\in \pmb{\omega}}$ then satisfies
\[F^{\pmb{\omega}}\star\rho = F^{\pmb{\omega}}\Dbf^{\pmb{\omega}}\]
where $\Dbf^{\pmb{\omega}}$ is diagonal. In this case, under mild restrictions on $\widehat{\rho}$, solving the continuum least squares problem produces $\overline{\wbf}^0$ with supp$(\overline{\wbf}^0) =$ supp$(\wstar)$ and
\[\nrm{\wstar-\overline{\wbf}^0}_\infty \leq \max_{\omega\in \pmb{\omega}^\star}|1-\hat{\rho}(\omega)|\nrm{\wstar}_\infty\]
where $\pmb{\omega}^\star\subset \pmb{\omega}$ is the set of trigonometric frequencies present in the true model. In the case of Gaussian white noise we have
\[\widehat{\rho}(\omega) = \exp\left(-\frac{\sigma^2\omega^2}{2}\right),\]
so that for $\sigma\leq \frac{0.14}{\nrm{\pmb{\omega}^\star}_\infty}$, the vector $\overline{\wbf}^0$ will be 99% accurate.
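A quick check of this threshold (the frequency $\omega_{\max}$ below is an arbitrary illustrative choice): at $\sigma = 0.14/\nrm{\pmb{\omega}^\star}_\infty$ the worst-case relative coefficient error $1-\widehat{\rho}(\omega_{\max})$ sits just below $1\%$, and the closed form for $\widehat{\rho}$ agrees with direct quadrature of the Gaussian density.

```python
import numpy as np

omega_max = 5.0                 # hypothetical largest frequency in the true model
sigma = 0.14 / omega_max        # the stated noise threshold
rel_err = 1.0 - np.exp(-(sigma * omega_max) ** 2 / 2)

# Cross-check hat_rho(omega_max) by direct quadrature of the Gaussian density
y = np.linspace(-8 * sigma, 8 * sigma, 4001)
pdf = np.exp(-y ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
hat_rho_quad = np.sum(np.cos(omega_max * y) * pdf) * (y[1] - y[0])
```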
For some noise distributions $\rho$ it is possible to have $\supp{\overline{\wbf}^0} \subsetneq \supp{\wstar}$ on trigonometric functions if $\widehat{\rho}(\omega)=0$ at some $\omega\in \pmb{\omega}$. Consider the uniform distribution $\rho = (2a)^{-1}\ind{[-a,a]}$. Then $\widehat{\rho}(n\pi/a) = 0$ for $n\in \Zbb$, which leads to the restriction $\nrm{\pmb{\omega}}_\infty<\frac{\pi}{a}$ in order for $\Dbf^{\pmb{\omega}}$ to be invertible. In this case the maximum allowable frequency is inversely proportional to the standard deviation $\sigma = a/\sqrt{3}$, which is not unreasonable: perturbations of the solution $u$ that are comparable to the periods of trigonometric terms will render their frequencies unobservable.
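The vanishing of $\widehat{\rho}$ at the frequencies $n\pi/a$ can be confirmed by direct quadrature (the value of $a$ is arbitrary):

```python
import numpy as np

a = 0.8                                   # uniform noise on [-a, a]
y = np.linspace(-a, a, 20001)
dy = y[1] - y[0]
pdf = np.full_like(y, 1.0 / (2 * a))

def hat_rho(w):
    """Fourier transform of the uniform density at frequency w, by quadrature."""
    return float(np.sum(np.cos(w * y) * pdf) * dy)

first_zero = np.pi / a                    # frequencies n*pi/a are unobservable
```

At `first_zero` the transform vanishes (up to quadrature error), while away from the zeros it matches the closed form $\sin(\omega a)/(\omega a)$.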
An understandable concern with using a weak-form recovery method is the biased recovery outlined in this section. However, we emphasize that the weak form is still advantageous over many strong-form methods which rely on point-wise derivative estimates. For a finite difference derivative operator $D$, the mean-squared error in the noisy approximation of a one-dimensional derivative $u'(t)$ using data $\Ubf = u(\tbf) +\ep$ sampled at resolution $\Delta t$ satisfies
\[\Ebb_{\ep \sim \rho}[|u'(t)-D\Ubf(t)|^2]=\CalO\left(\frac{\sigma^2}{\Delta t^2}\right),\]
where $\sigma^2 = \Vbb[\rho]$. In this form, there is no hope of convergence as $\Delta t\to 0$ for any fixed $\sigma>0$ unless the data is suitably denoised as $\Delta t$ is brought to zero. As we will see in this work, weak-form recovery methods do not suffer from instabilities as $\Delta t\to 0$.
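This instability is easy to reproduce. The sketch below (with our toy signal $u(t)=\sin t$) estimates $u'(0)$ by central differences from noisy samples at two resolutions; refining the grid makes the error dramatically worse, in line with the $\CalO(\sigma^2/\Delta t^2)$ bound:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1
u = np.sin                                # toy signal; u'(0) = 1

def central_diff_mse(dt, n_trials=2000):
    """Empirical MSE of the central-difference estimate of u'(0) from
    noisy samples U(t) = u(t) + eps; the noise alone contributes a
    variance of sigma^2 / (2 dt^2)."""
    eps = rng.normal(0.0, sigma, size=(n_trials, 2))
    est = (u(dt) + eps[:, 0] - (u(-dt) + eps[:, 1])) / (2 * dt)
    return float(np.mean((est - 1.0) ** 2))

mse_coarse = central_diff_mse(0.1)        # noise variance ~ sigma^2/(2 dt^2) = 0.5
mse_fine = central_diff_mse(0.001)        # noise variance ~ 5000: refinement hurts
```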
§.§ Assumptions
Here we list the main assumptions used in the results below, along with a brief overview of their motivations. Specific results will specify which of the following assumptions are in effect.
§.§.§ Regularity of the true solution
In order to present results in as general a context as possible with respect to solution regularity, we specify the regularity of the true solution $u$ by defining the following function spaces. For open, bounded $D\subset \Rbb^{d+1}$, and for $p\in[1,\infty]$ and $k>0$, define
\begin{equation}\label{eq:Hspaces}
\CalH^{k,p}(D) := \left\{ f\in L^p(D)\ :\ \exists \text{ disjoint, open } (D_i)_{i=1}^{\ell}\ \text{ s.t. }\ \overline{D} = \bigcup_{i=1}^\ell\overline{D}_i\,,\ f\big\vert_{D_i}\in H^k(D_i), \ \partial D_i\in C^{0,1}\right\},
\end{equation}
where $H^k(D)$ is the space of functions on $D$ with weak derivatives up to order $k$ in $L^2(D)$. The spaces $\CalH^{k,p}(D)$ are similar to the broken Sobolev spaces used in the analysis of discontinuous Galerkin finite element methods (see e.g. [15]). We assume that $u\in \CalH^{k,\infty}(\Omega\times (0,T))$ for some $k > (d+1)/2$ is a weak solution to (<ref>) with coefficients $\wbf=\wstar$ and that any points of reduced regularity (e.g. discontinuities) are confined to the boundaries between subdomains $D_i$ of $D:=\Omega\times (0,T)$. The restriction $k > (d+1)/2$ ensures by the Sobolev embedding theorem that $u$ is bounded and piecewise Hölder continuous on each $D_i$. We show in Appendix <ref> that this is sufficient regularity for the trapezoidal rule to converge for integrals of smooth functions of $u$.
§.§.§ Sampling model
We assume that every pointwise evaluation[With $u\in \CalH^{k,\infty}(\Omega\times (0,T))$ such that $k>(d+1)/2$, we have that pointwise evaluations of $u$ are well-defined (apart from a set of measure zero, e.g. when considering solutions with shocks) by the Sobolev embedding theorem (see Appendix <ref> for more details).] of the solution $u$ (resulting in the samples $\{\Ubf^{(m)}\}_{m=1}^\infty$ for each grid $\{(\Xbf^{(m)},\tbf^{(m)})\}_{m=1}^\infty$) produces a measurement error $\ep \sim \rho$, where $\rho$ is symmetric and sub-Gaussian, that is,
\[\nrm{\rho}_{\text{SG}}:=\inf\{\lambda >0 \,:\, \Ebb_{\ep\sim \rho}\left[\exp(\ep^2/\lambda^2)\right]\leq 2\}<\infty.\]
In particular, this includes Gaussian white noise and bounded noise. In addition, we assume the noise at distinct spatiotemporal points is uncorrelated: $\Ebb[\ep(x,t)\ep(y,s)] = \sigma^2\ind{(x,t)=(y,s)}$ for two points $((x,t),(y,s))\in (\Omega\times[0,T])\times(\Omega\times[0,T])$ and fixed variance $\sigma^2$. We refer to $\sigma$ as the noise level and note that $\sigma\leq \sqrt{2}\nrm{\rho}_{\text{SG}}$ (see the textbook [34] for more details).
Furthermore, we assume that each grid $(\Xbf^{(m)},\tbf^{(m)})$ has equal resolution $(\Delta x^{(m)},\Delta t^{(m)})$, and throughout the discretization level $m$ is defined as the number of points that the reference test function $\psi$ is supported on at the resolution $(\Delta x^{(m)},\Delta t^{(m)})$, or
\begin{equation}\label{def_of_m}
m := \#\{\supp{\psi}\cap (\Xbf^{(m)},\tbf^{(m)})\}
\end{equation}
where $\#\{\cdot\}$ indicates the set cardinality.
§.§.§ Model library
We assume the collection of multi-indices $\pmb{\alpha} = (\alpha^0,\dots,\alpha^S)$ is known and the family of functions $\CalF = (f_j(u))_{j\in[J]}$ consists of $P^{(p_{\max})}$, the space of polynomials of degree at most $p_{\max}$ on $\Rbb^n$, as well as $F^{\pmb{\omega}} = \{\exp(i\omega^T u)\}_{\omega \in \pmb{\omega}}$, a finite collection of Fourier modes on $\Rbb^n$ (i.e. $\pmb{\omega}\subset \Rbb^n$).
§.§.§ Reference test function
We assume that $\psi\in C^{|\pmb{\alpha}|}(\Omega\times (0,T))$ with compact support in $\Omega\times (0,T)$. When $\psi$ is taken to be separable, $\psi(x) = \phi_1(x_1)\cdots\phi_d(x_d)\phi_{d+1}(t)$, it is assumed that each $\phi_i\in C^{|\pmb{\alpha}|}(\Rbb)$.
§.§.§ Conditioning of the noise-free system
In this work we ensure that the recovery problem is solvable by assuming that the noise-free continuum matrix $\Gbf^\star$ is full-rank. This implies several assumptions:
* The underlying PDE has a unique representation of the form (<ref>) with coefficients $\wstar$ over the library $(\CalF,\pmb{\alpha})$.
* The reference test function $\psi$, library $(\CalF,\pmb{\alpha})$, and solution $u$ are such that the set of vectors (with query points $\CalQ$ arranged into a vector)
\[\left(\partial^{\alpha^s}\psi*f_j(u)(\CalQ)\right)_{s=1,\dots,S,\ j=1,\dots,J}\]
are linearly independent. In particular, the number of convolution query points is not smaller than the library size: $K\geq \mathfrak{J}$, where $K$ is the number of query points, $S$ is the number of differential operators in $\pmb{\alpha}$, $J$ is the number of functions in $\CalF$, and $\mathfrak{J} = SJ$ is the total number of terms in the model library.
The most restrictive assumption above is the full rank assumption on $\Gbf^\star$ in <ref>. This implies that there does not exist $\tilde{\wbf}\neq 0$ such that $u$ satisfies an additional constraint of the form
\[0 = \sum_{s,j}\tilde{\wbf}_{s,j}\partial^{\alpha^s}f_j(u).\]
Such constraints do indeed arise in practice (e.g. the divergence-free constraint in the incompressible Navier-Stokes equations). While this cannot be directly checked in practice from noisy data sets, the condition number of the weak-form linear system provides an excellent guide for the conditioning of the underlying noise-free system, and is reflected in the choice of reference test function $\psi$, query points $\CalQ$, and library $(\CalF,\pmb{\alpha})$, all of which the user has control over.
Ultimately, the full-rank assumption is due to our use of the STLS algorithm for performing sparse regression, but this can be relaxed if other algorithms are chosen, and extensions of STLS to the rank-deficient case are possible. Due to the spurious terms that arise in the continuum limit, however, we conjecture that some form of thresholding-based sparse regression is advantageous (over e.g. greedy methods or $\ell_1$-regularization).
§ EXPONENTIAL CONCENTRATION OF THE WSINDY LINEAR SYSTEM
Using concentration results for sums of heavy-tailed random variables recently established in [35, 4], we prove that $\nrm{\Gbf^{(m)}-\overline{\Gbf}}_\infty\to 0$ with an exponential convergence rate. We will need the following lemmas to connect existing results with the specific form of the entries of the matrix $\Gbf^{(m)}$. Proofs can be found in Appendix <ref>.
Using the terminology of [4], the following lemma establishes that the right tails of summands within each entry of $\Gbf^{(m)}$ can be modelled using a common rate function.
Let $f:\Rbb\to \Rbb$ satisfy
\begin{equation}\label{eq:growthcondition}
|f(x)|\leq C_f\left(1+|x|^p\right),
\end{equation}
for some $p\geq 1$ and $C_f>0$, and let $\rho$ be a symmetric sub-Gaussian distribution. For bounded sequences of real numbers $(\alpha_i)_{i\in \Nbb}$ and $(u_i)_{i\in \Nbb}$, and for $\ep_i\sim \rho$ i.i.d. $i\in \Nbb$, define the random variables $Y_i = \alpha_i f(u_i +\ep_i)$. Then for any $\kappa\geq \nrm{\rho}_{\text{SG}}$ and $t>0$, it holds that the right tails of $Y_i$ are captured by a common rate function $I(t)$,
\[\Pbb\left(Y_i > t\right)\leq \exp (-I(t)),\]
\begin{equation}\label{eq:rate_fcn}
I(t):=\ind{(t^*,\infty)}(t)\left[\frac{1}{\kappa^2}\left(\left(\frac{t}{C_f\alpha^*}-1\right)^{1/p}-u^*\right)^2-\log(2)\right] = \frac{t^{2/p}}{\kappa^2(C_f\alpha^*)^{2/p}}I_0(t),
\end{equation}
for $\alpha^* = \sup_i|\alpha_i|$, $u^* = \sup_i|u_i|$, and $t^*:= C_f\alpha^*\left(1+\left(u^*+ \kappa\sqrt{\log(2)}\right)^p\right)$. Moreover, $I_0(t)$ is monotonically increasing from $0$ to $1$ over $t\in(t^*,\infty)$, and is defined in the proof in Appendix <ref>.
The next lemma allows one to uniformly bound a sequence of independent, non-identically distributed random variables of the form in Lemma <ref>.
Let $Y_i$ be defined under the same conditions as Lemma <ref> and choose $\beta\in(0,1)$. Then there exists $\overline{v}(\beta) < \infty$ such that the sum $S_m = \sum_{i=1}^{m} Y_i$ satisfies
\begin{equation}\label{eq:convergence_of_sums}
\Pbb\left(|S_m-\Ebb S_m|> mt\right) \leq \begin{dcases} 2 \exp\left(-\frac{\beta}{2} I(mt)\right) + 2m\exp\left(-I(mt)\right), & t \geq t_m(\beta)\\
2 \exp\left(-\frac{mt^2}{2\overline{v}(\beta)}\right) + 2m\exp\left(-\frac{mt_m(\beta)^2}{\overline{v}(\beta)}\right), & 0\leq t < t_m(\beta), \end{dcases}
\end{equation}
where $t_m(\beta):= \sup\{t\geq 0\ :\ t\leq \beta\overline{v}(\beta)\frac{I(mt)}{mt}\}$ satisfies $t_m(\beta)\to 0$ as $m\to\infty$.
We now present a main result concerning concentration of the WSINDy discrete linear system to its continuum variant. Throughout we use the notation $\nrm{\Gbf}_{\vec{p}}$ to denote the vector $p$ norm of matrix $\Gbf$ stretched into a column vector, and the notation $(\Gbf,\bbf)$ to mean the concatenation of matrix $\Gbf\in \Rbb^{K\times \mathfrak{J}}$ with vector $\bbf\in \Rbb^K$.
Suppose each function in library $\CalF$ satisfies the growth bound (<ref>) for some $p:=p_{\max}$ and Assumptions <ref>, <ref>, and <ref> hold. Then it holds that for every $t>\overline{t}(m)$, where $\overline{t}(m)\to 0$ as $m\to \infty$, we have the concentration rates
\begin{equation}\label{eq:convergence_of_G}
\Pbb\Big(\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}} > t \Big) \leq \begin{dcases} K\mathfrak{J}\exp\left(-\frac{c}{2} (mt)^{2/p_{\max}}\right) + K\mathfrak{J}m\exp\left(-c (mt)^{2/p_{\max}}\right), & t \geq t_m\\
K\mathfrak{J} \exp\left(-\frac{mt^2}{2\overline{v}}\right) + K\mathfrak{J}m\exp\left(-\frac{mt_m^2}{\overline{v}}\right), & 0\leq t < t_m, \end{dcases}
\end{equation}
where the rate factor $c$ depends on $\nrm{u}_\infty$, $|\Omega\times (0,T)|$, $\pmb{\alpha}$, $\psi$, $\CalF$, and $\nrm{\rho}_{\text{SG}}$, and $\overline{v} = \overline{v}(1/2)$ and $t_m=t_m(1/2)$ from Lemma <ref>.
For more details on the rate $c$ in Theorem <ref>, see the proof in Appendix <ref>. In addition, to make more straightforward use of these concentration results, we have the following.
Under the assumptions of Theorem <ref>, for every $t>0$ and sufficiently large $m$ it holds that
\begin{equation}\label{eq:convergence_of_G_2}
\Pbb\Big(\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}} > t \Big) \leq 2K\mathfrak{J}\exp\left(-\frac{c}{2}(mt)^{2/p_{\max}}\right).
\end{equation}
This comes from noting that for every $t$ and sufficiently large $m$ we have
\[\frac{c}{2}(mt)^{2/p_{\max}} < \min\left(c(mt)^{2/p_{\max}}-\log(m),\ \frac{mt^2}{2\overline{v}}-\log(m)\right).\]
The proof of Theorem <ref> reveals several ways that the rate of concentration can be increased. The most effective is to lower the growth-rate $p_{\max}$ and decrease $\nrm{\partial^{\alpha^s}\psi}_\infty$. This implies that in practice, concentration will be determined by the growth rate of nonlinearities and order of differential operators in the true model. In addition, if the true solution $u$ has increased smoothness, $\overline{t}(m)$ goes to zero much faster, leading to faster entry into the regime of exponential concentration. Lastly, $c$ is inversely proportional to $\nrm{\rho}_{\text{SG}}^2$, hence decreasing the variance of the noise directly increases the concentration rate.
§ ASYMPTOTIC CONSISTENCY
In this section we provide asymptotic consistency results in the form of support recovery of the true model coefficients $\supp{\wstar}$ in the limit as $m\to \infty$ (recall $m$ is the number of points that the reference test function $\psi$ is supported on at the resolution $(\Delta x^{(m)},\Delta t^{(m)})$).
In Section <ref> we prove that WSINDy with a hyperparameter-free version of the MSTLS algorithm recovers the true model support $S^\star = \supp{\wstar}$ with high probability from the raw (un-filtered) data as $m\to \infty$, provided $\sigma<\sigma_c$ for some critical noise level $\sigma_c$ and the assumptions in Section <ref> are met. This is done by first proving recovery results for the continuum problem, and then combining these with the matrix concentration results in Section <ref>. In order to demonstrate how explicit bounds on $\sigma_c$ may be derived, in Section <ref> we focus on proving conditions for subset support recovery, or $\supp{\widehat{\wbf}}\subset S^\star$, under the restricted setting of Gaussian white noise. A proof of full support recovery $\supp{\widehat{\wbf}} = S^\star$, requiring an additional assumption on $(\Gbf^\star,\bbf^\star)$ (see Remark <ref>), is presented in Appendix <ref> and may be extended to the case of arbitrary symmetric sub-Gaussian noise using Lemma <ref>. Furthermore, Lemma <ref> provides a general mechanism for deriving bounds on $\sigma_c$ specific to any symmetric sub-Gaussian noise distribution $\rho$. Explicit bounds on $\sigma_c$ may be highly informative for future algorithmic developments.
To summarize, we first prove in Lemma <ref> that for noise levels below some critical noise $\sigma_c'$ there exists a feasible $\widehat{\lambda}$ for which the STLS solution of the associated continuum problem recovers the true support $S^\star$. We use properties of the Gaussian moment matrix to present explicit bounds on $\sigma_c'$. We then show in Lemma <ref> that there exists $\sigma_c\leq \sigma_c'$ below which the one-shot MSTLS algorithm (defined below) applied to the continuum problem produces $\widehat{\wbf}$ satisfying $\supp{\widehat{\wbf}}\subset S^\star$. The utility of Lemma <ref> lies in the fact that the one-shot MSTLS algorithm involves no hyperparameters, hence avoids the task of selecting a feasible $\widehat{\lambda}$. We then combine these results with classical stability of the least-squares problem (Lemma <ref>) and the concentration results in Section <ref> to yield $\supp{\widehat{\wbf}^{(m)}}\subset S^\star$ with high probability for the MSTLS solution $\widehat{\wbf}^{(m)}$ on the linear system $(\Gbf^{(m)},\bbf^{(m)})$, provided $\sigma<\sigma_c$ and $m$ is large enough. This result is extended to yield $\supp{\widehat{\wbf}^{(m)}}= S^\star$ with high probability in Appendix <ref>, and for general symmetric sub-Gaussian noise with the aid of Lemma <ref>.
In Section <ref>, we consider the case of filtering the data before applying the WSINDy algorithm. Examining the case of simple moving average filters, which may easily be extended to a wider class of filters, we prove the exponential concentration of $(\Gbf^{(m)},\bbf^{(m)})$ to $(\Gbf^\star,\bbf^\star)$, in contrast to $(\overline{\Gbf},\overline{\bbf})$. This implies that for a wider class of libraries (only requiring $\CalF$ to be locally Lipschitz) and arbitrary symmetric sub-Gaussian noise, we get $\supp{\widehat{\wbf}^{(m)}}\subset S^\star$ with high probability using the hyperparameter-free MSTLS algorithm. As before, this is strengthened to $\supp{\widehat{\wbf}^{(m)}}= S^\star$ if an additional condition on $(\Gbf^\star,\bbf^\star)$ is satisfied.
§.§ Asymptotic consistency without filtering
Recall from Section <ref> that for trigonometric and polynomial libraries we have $\text{span}(\CalF\star\rho) \subset \text{span}(\CalF)$, that is, the cross-correlated library terms are linear combinations of the original library terms. More specifically, under assumptions <ref>-<ref> there exists an upper triangular, block diagonal matrix
\begin{equation}\label{Ablock}
\Abf = \text{blkdiag}(\underbrace{\Abf^{(p_{\max})},\Dbf^{\pmb{\omega}}}_{s=1},\dots,\underbrace{\Abf^{(p_{\max})},\Dbf^{\pmb{\omega}}}_{s=S})
\end{equation}
\[F^{\pmb{\omega}}\star\rho = F^{\pmb{\omega}}\Dbf^{\pmb{\omega}}, \qquad P^{(p_{\max})}\star\rho = P^{(p_{\max})}\Abf^{(p_{\max})}\]
such that $\overline{\Gbf} = \Gbf^\star \Abf$. For all symmetric noise distributions $\rho$ we have
\[\Dbf^{\pmb{\omega}} = \text{diag}(\hat{\rho}(\pmb{\omega})), \qquad \Abf^{(p_{\max})}_{ij} := \delta_{ij}+\Lbf_{ij}^{(p_{\max})}=\begin{dcases} {j\choose i} M_{j-i}(\rho)& 0\leq i\leq j \\ 0 & \text{otherwise,} \end{dcases}\]
provided the highest moment $M_{p_{\max}}(\rho)$ exists.
We can decompose $\Abf$ into diagonal and off-diagonal matrices as follows,
\begin{equation}\label{Ainv_DL}
\Abf = \Dbf + \Lbf
\end{equation}
where $\Dbf := \Ibf^{(S)}\otimes \text{blkdiag}(\Ibf^{(p_{\max}+1)},\Dbf^{\pmb{\omega}})$ is diagonal with diagonal entries less than or equal to 1 in magnitude (since $\rho$ is a probability distribution), and $\Lbf := \Ibf^{(S)}\otimes \text{blkdiag}(\Lbf^{(p_{\max})},\Ibf^{(|\pmb{\omega}|)})$ is zero along the diagonal. Furthermore, in the case that $\widehat{\rho}>0$, (e.g. when $\rho$ is Gaussian), we have
\begin{equation}\label{invA}
\Abf^{-1} = \Dbf^{-1} + \tilde{\Lbf}
\end{equation}
where $\tilde{\Lbf} = \Ibf^{(S)}\otimes \text{blkdiag}(\tilde{\Lbf}^{(p_{\max})},\Ibf^{(|\pmb{\omega}|)})$, and $\tilde{\Lbf}^{(p_{\max})}$ is equal to $\Lbf^{(p_{\max})}$ up to sign changes (see Lemma <ref> in Appendix <ref>).
In the following we assume $\rho$ is Gaussian, as it allows for explicit bounds on the critical noise level $\sigma_c$. The polynomial moment matrix $\Abf^{(p_{\max})}$ is then given by (<ref>), with inverse given in Lemma <ref>, and $\Dbf^{\pmb{\omega}}$ is positive definite. Similar, though less explicit, results exist for any other symmetric sub-Gaussian noise distribution (see Lemma <ref>); however, restrictions may be needed on $\pmb{\omega}$ to ensure the invertibility of $\Dbf$ (see Remark <ref>). First we need the following existing result on the STLS algorithm.
Let $\Abf$ be the corresponding moment matrix such that $\overline{\Gbf} = \Gbf^\star \Abf$ and let $\what =$ STLS$(\overline{\Gbf},\overline{\bbf},\widehat{\lambda})$ be the continuum STLS solution with sparsity threshold $\widehat{\lambda}$. A necessary and sufficient condition for one iteration of STLS to result in ${\normalfont\supp{\what} = \supp{\wstar}}$ is
\begin{equation}\label{necccond1}
\min_{j\in S^\star}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert>\widehat{\lambda}> \max_{j\in (S^\star)^c}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert.
\end{equation}
Moreover, (<ref>) is sufficient to ensure $\supp{\what} \subset \supp{\wstar}$ for any number of STLS iterations.
This is a special case of <cit.>, considering that $\Abf^{-1}\wstar = \overline{\Gbf}^\dagger\overline{\bbf}$.
In the next lemma we classify which models and noise levels lead to existence of $\widehat{\lambda}$ satisfying (<ref>) in the case of Gaussian noise.
Let $\rho$ be a mean-zero Gaussian noise distribution and $\Abf$ be the corresponding moment matrix such that $\overline{\Gbf} = \Gbf^\star \Abf$. Let $p$ be the maximum polynomial degree appearing in the true model and define $S^\star:=\supp{\wstar}$. Then we have the following cases:
* If $p \leq 2$ and the true coefficient of $u^2$ is zero, then there exists $\widehat{\lambda}$ satisfying (<ref>) for any finite noise level $\sigma>0$.
* If $p\geq 3$, or if $p=2$ and the true coefficient of $u^2$ is nonzero, then there exists a critical noise level $\sigma_c$ satisfying
\begin{equation}\label{explicitboundsonsigmac}
\left(\frac{1}{2{p \choose 2}e}\right)\frac{\min_{j\in S^\star}|\wstar_j|}{\max_{j \in S^\star}|\wbf^\star_j|}\leq \sigma_c^2\leq \frac{1}{{p \choose 2}},
\end{equation}
such that for all $\sigma < \sigma_c$, there exists $\widehat{\lambda}$ such that (<ref>) holds.
For $p\leq 2$, if the term $u^2$ does not itself appear in the true model (in other words, all terms $\partial^{\alpha^s}u^2$ in the true model satisfy $|\alpha^s|\geq 1$), then using that
\begin{equation}\label{u2}
\partial^{\alpha^s}\psi*(u^2\star \rho) = \partial^{\alpha^s}\psi*(\sigma^2 + u^2) = \partial^{\alpha^s}\psi*u^2,
\end{equation}
we see that no spurious terms are generated (recall from Section <ref> that trigonometric terms do not generate spurious terms in the continuum problem). In these cases any $\widehat{\lambda} < \min_{i\in S^\star}|\wstar_i|$ satisfies (<ref>), using that $|(\Abf^{-1}\wstar)_i| = |\Dbf^{-1}_{ii}\wstar_i|\geq |\wstar_i|$ for all $i\in\{1,\dots,\mathfrak{J}\}$, since $\nrm{\text{diag}(\Dbf)}_\infty\leq 1$.
Now assume that $p\geq 3$ or $p=2$ with $u^2$ contained in the true model. To derive the upper bound in (<ref>), consider the case where all entries of $\wstar_{S^\star}$ have equal magnitude. Then there exists $\widehat{\lambda}$ such that (<ref>) holds only if
\[\min_{j\in S^\star}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert{p \choose 2} \sigma^2 < \min_{j\in S^\star}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert,\]
since the coefficient of $\partial^{\alpha^s}\psi*u^p$ is nonzero for some $\alpha^s\in \pmb{\alpha}$, which generates the spurious term ${p \choose 2}\sigma^2\partial^{\alpha^s}\psi*u^{p-2}$. Equivalently, such a $\widehat{\lambda}$ exists only if $\sigma^2 < {p \choose 2}^{-1}$, which necessitates $\sigma_c^2\leq {p \choose 2}^{-1}$.
For the sufficient lower bound in (<ref>), using Lemma <ref> to bound $\|\tilde{\Lbf}\|_\infty$, we have that
\[\max_{j\in (S^\star)^c}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert \leq \|\tilde{\Lbf}\|_\infty\nrm{\wbf^\star}_\infty \leq \sigma^2{p \choose 2}\exp\left(\sigma^2{p \choose 2}\right)\nrm{\wbf^\star}_\infty\]
and
\[\min_{j\in S^\star}\left\vert\left(\Abf^{-1}\wstar\right)_j\right\vert \geq \min_{j \in S^\star}|\Dbf^{-1}\wbf^\star_j|-\|\tilde{\Lbf}\|_\infty\nrm{\wbf^\star}_\infty \geq \min_{j \in S^\star}|\wbf^\star_j| - \sigma^2{p \choose 2}\exp\left(\sigma^2{p \choose 2}\right)\nrm{\wbf^\star}_\infty.\]
A sufficient condition for existence of $\widehat{\lambda}$ satisfying (<ref>) is thus
\begin{equation}\label{suff_lower_bound}
2\sigma^2{p \choose 2}\exp\left(\sigma^2{p \choose 2}\right)< \frac{\min_{j\in S^\star}|\wstar_j|}{\max_{j \in S^\star}|\wbf^\star_j|}.
\end{equation}
Taking $\sigma \leq \sigma_c$ and using the upper bound $\sigma_c^2 \leq {p \choose 2}^{-1}$, we get that (<ref>) is implied by
\[\sigma^2 < \frac{1}{2{p \choose 2}e}\frac{\min_{j\in S^\star}|\wstar_j|}{\max_{j \in S^\star}|\wbf^\star_j|},\]
hence, since this condition is sufficient, we obtain the lower bound on $\sigma_c$ in (<ref>).
Lemma <ref> provides a rigorous explanation for the robustness to noise of WSINDy observed in [18], as several systems that were shown to yield robust recovery fall into case $(i)$, for which $\sigma_c=\infty$. These include inviscid Burgers, Korteweg-de Vries, Kuramoto-Sivashinsky, porous medium, Sine-Gordon, and Navier-Stokes. Moreover, all linear differential equations fall into case $(i)$. Furthermore, several types of nonlinear PDEs, including reaction-diffusion and nonlinear Schrödinger equations (also examined in [18]), fall into case $(ii)$, for which spurious terms will arise in the limit of large data if the noise level exceeds some finite $\sigma_c$.
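The case split can be checked directly from the Gaussian moment expansion $\Ebb[(u+\eta)^p] = \sum_{k}\binom{p}{2k}(2k-1)!!\,\sigma^{2k}u^{p-2k}$ for $\eta\sim N(0,\sigma^2)$. The following minimal sketch (helper names are ours, not from any WSINDy implementation) computes these smoothed-monomial coefficients exactly: a degree-one term is unchanged, $u^2$ only shifts by the constant $\sigma^2$ (annihilated by any derivative), while $u^3$ generates the spurious term $\binom{3}{2}\sigma^2 u$.

```python
from math import comb

def double_factorial(n):
    """(2k-1)!!, with (-1)!! = 1 by convention."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

def smoothed_monomial(p, sigma2):
    """Coefficients of E[(u + eta)^p] for eta ~ N(0, sigma2), returned as
    {degree of u: coefficient}.  Odd moments of eta vanish, and
    E[eta^(2k)] = (2k-1)!! * sigma2^k."""
    return {p - 2 * k: comb(p, 2 * k) * double_factorial(2 * k - 1) * sigma2 ** k
            for k in range(p // 2 + 1)}

# Degree-one terms are unchanged; u^3 picks up the spurious term 3*sigma^2*u.
print(smoothed_monomial(1, 0.25))  # {1: 1}
print(smoothed_monomial(3, 0.25))  # {3: 1, 1: 0.75}
```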
Condition (<ref>) also implies that the STLS algorithm with any number of iterations (at least one) yields a solution $\what$ with $\supp{\what} = \supp{\wstar}$, provided the noise level $\sigma^2$ is low enough. This follows from the stability of least-squares problems in Lemma <ref> of Appendix <ref>; however, we are unable to provide explicit bounds on the critical noise $\sigma_c$ in this case, hence we focus on the one-shot MSTLS algorithm in what follows.
In Lemma <ref> we identified a relation in the Gaussian noise case between the noise variance $\sigma^2$ and the existence of a sparsity parameter $\widehat{\lambda}$ such that one step of STLS produces the correct support. In the next two lemmas we identify conditions under which the MSTLS algorithm (<ref>) identifies a feasible $\widehat{\lambda}$ satisfying (<ref>), which leads to correct support in one step of STLS. While in practice we often run STLS until it terminates (in a maximum of $\mathfrak{J}$ iterations), for the sake of identifying conditions for convergence we focus on the simpler case of performing only one step of STLS for each candidate $\lambda\in\pmb{\lambda}$.
In the one-step STLS case there are only $\mathfrak{J}+1$ possible sparse solutions. To see this, order $\overline{\wbf}^0$ from least to greatest in absolute value:
\[|\overline{\wbf}^0_{(0)}|\leq |\overline{\wbf}^0_{(1)}|\leq |\overline{\wbf}^0_{(2)}|\leq \cdots\leq |\overline{\wbf}^0_{(\mathfrak{J})}|\]
where $\overline{\wbf}^0_{(0)} = 0$ is inserted. Then for all $\lambda \in (\overline{\wbf}^0_{(i)},\overline{\wbf}^0_{(i+1)})$, one step of STLS produces the same solution, hence there are at most $\mathfrak{J}$ distinct solutions attainable for $\lambda \in [0,\nrm{\overline{\wbf}^0}_\infty]$, and $\lambda> \nrm{\overline{\wbf}^0}_\infty$ leads to the zero vector. With this in mind, we need only examine the case
\begin{equation}\label{oneshotpmblambda}
\pmb{\lambda} = \left\{ \frac{|\overline{\wbf}^0_{(i)}|+|\overline{\wbf}^0_{(i+1)}|}{2}\ :\ i=0,\dots,\mathfrak{J}-1\right\},
\end{equation}
discarding duplicate values. In what follows, we let $\overline{\wbf}^\lambda = \text{STLS}^{(1)}(\overline{\Gbf},\overline{\bbf},\lambda)$ denote the STLS solution using a single round of thresholding, in other words,
\[\overline{\wbf}^\lambda = \argmin_{\supp{\wbf}\subset \supp{H_\lambda(\overline{\Gbf}^\dagger\overline{\bbf})}} \nrm{\overline{\Gbf}\wbf-\overline{\bbf}}_2^2\]
and $\widehat{\wbf} = \text{MSTLS}^{(1)}(\overline{\Gbf},\overline{\bbf})$ denote the MSTLS solution using one round of thresholding per inner STLS loop over the thresholds (<ref>).
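The one-shot construction above can be summarized in a short NumPy sketch (a minimal illustration, not a reference implementation; function names are ours): one thresholding round per candidate $\lambda$, with the candidate set built from midpoints of the sorted magnitudes of $\overline{\wbf}^0$, and the winner chosen by the MSTLS loss.

```python
import numpy as np

def stls_one_step(G, b, lam):
    """One round of sequential thresholding: threshold the least-squares
    solution at lam, then re-fit on the surviving support."""
    w0 = np.linalg.lstsq(G, b, rcond=None)[0]
    support = np.abs(w0) > lam
    w = np.zeros_like(w0)
    if support.any():
        w[support] = np.linalg.lstsq(G[:, support], b, rcond=None)[0]
    return w

def mstls_one_step(G, b):
    """Scan the candidate thresholds (midpoints of the sorted magnitudes
    of w0, with 0 inserted) and return the minimizer of the MSTLS loss
    L(lam) = ||G(w0 - w_lam)|| / ||G w0|| + ||w_lam||_0 / J."""
    J = G.shape[1]
    w0 = np.linalg.lstsq(G, b, rcond=None)[0]
    mags = np.sort(np.concatenate([[0.0], np.abs(w0)]))
    lams = np.unique(0.5 * (mags[:-1] + mags[1:]))  # discard duplicates
    Gw0 = G @ w0
    best_loss, best_w = np.inf, None
    for lam in lams:
        w = stls_one_step(G, b, lam)
        loss = (np.linalg.norm(Gw0 - G @ w) / np.linalg.norm(Gw0)
                + np.count_nonzero(w) / J)
        if loss < best_loss:
            best_loss, best_w = loss, w
    return best_w
```

On exact data with well-separated coefficient magnitudes, the loss minimizer typically lands on the true support, mirroring the continuum argument below.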
Let $\rho$ be a Gaussian mean-zero noise distribution and let $p$ be the maximum polynomial degree appearing in the true model. Let $\Gbf^\star_p$ be the restriction of $\Gbf^\star$ to the columns corresponding to polynomial terms of maximum degree $p$. Then there exists a critical noise level $\sigma_c$ such that for any $\sigma\leq \sigma_c$ the estimator $\what = \text{MSTLS}^{(1)}(\overline{\Gbf},\overline{\bbf})$ satisfies
\begin{equation}\label{subsetsupprec_lemm}
\supp{\what}\subset\supp{\wstar}
\end{equation}
where $\sigma_c$ satisfies
* If $p\leq 2$ and the coefficient of $u^2$ is zero in the true model, then $\sigma_c=\infty$.
* Otherwise, $\sigma_c$ satisfies the bounds
\begin{equation}\label{boundsMSTLS}
\frac{1}{e{p \choose 2}}\min\left\{\frac{\nrm{\Gbf^\star\wstar}_2}{\mathfrak{J}\nrm{\Gbf^\star_p}_2\nrm{\wstar}_2}, \frac{\min_{j\in S^\star}|\wstar_j|}{2\max_{j\in S^\star}|\wstar_j|}\right\}\leq \sigma_c^2\leq {p \choose 2}^{-1}.
\end{equation}
The MSTLS loss is defined
\[\CalL(\lambda) = \frac{\nrm{\overline{\Gbf}(\overline{\wbf}^0 -\overline{\wbf}^\lambda)}_2}{\nrm{\overline{\Gbf} \overline{\wbf}^0}_2}+\frac{\nrm{\overline{\wbf}^\lambda}_0}{\mathfrak{J}}\]
where $\overline{\wbf}^\lambda=\text{STLS}^{(1)}(\overline{\Gbf},\overline{\bbf},\lambda)$. We first assume that there exists $\widehat{\lambda}$ satisfying (<ref>), which holds whenever $\sigma<\sigma_c$ with $\sigma_c$ satisfying (<ref>). Then $\widehat{\lambda}$ may be taken from $\pmb{\lambda}$ given by (<ref>). Moreover, for $\lambda \in \pmb{\lambda}$ satisfying $\lambda>\min_{j\in S^\star}|(\Abf^{-1}\wstar)_j|$, it holds that $\supp{\overline{\wbf}^\lambda}\subset\supp{\wstar}$, so it suffices to prove that $\CalL(\tilde{\lambda})>\CalL(\widehat{\lambda})$ for all $\tilde{\lambda} \leq \max_{j\in (S^\star)^c}|(\Abf^{-1}\wstar)_j|$.
Indeed, choose $\tilde{\lambda}\leq \max_{j\in (S^\star)^c}|(\Abf^{-1}\wstar)_j|$ whereby
\[S^\star= \supp{\overline{\wbf}^{\widehat{\lambda}}} \subset \supp{\overline{\wbf}^{\tilde{\lambda}}} \subset S^0\]
where $S^\star := \supp{\wstar}$ and $S^0:= \supp{\overline{\wbf}^0}$. Then
\[\CalL(\widehat{\lambda}) - \CalL(\tilde{\lambda}) \leq \frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2-\nrm{\overline{\Gbf}\left(\overline{\wbf}^0- \overline{\wbf}^{\tilde{\lambda}}\right)}_2}{\nrm{\overline{\bbf}}_2} - \frac{1}{\mathfrak{J}} \leq \frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2}{\nrm{\overline{\bbf}}_2} - \frac{1}{\mathfrak{J}}.\]
Now, if we are in case $(i)$, it holds that $\overline{\wbf}^0=\overline{\wbf}^{\widehat{\lambda}}$ since no spurious terms are generated. For case $(ii)$, since $\overline{\Gbf}\overline{\wbf}^0 = \Gbf^\star\wstar$, we have
\[\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2= \nrm{\Gbf^\star\wstar -\overline{\Gbf}\overline{\wbf}^{\widehat{\lambda}}}_2 = \nrm{\Gbf^\star\wstar - \Gbf^\star\Abf\overline{\wbf}^{\widehat{\lambda}}}_2= \nrm{\Gbf^\star\wstar - \Gbf^\star(\Dbf +\Lbf)\overline{\wbf}^{\widehat{\lambda}}}_2.\]
Recalling that $\overline{\wbf}^{\widehat{\lambda}}_{S^\star}= \overline{\Gbf}_{S^\star}^\dagger\overline{\bbf}$, and using that $\overline{\bbf}=\bbf^\star=\Gbf^\star\wstar$, we see that replacing $\overline{\wbf}^{\widehat{\lambda}}$ in the previous line with any other vector supported on $S^\star$ will increase the norm. Hence, we may define a new vector $\wbf'$ block-wise (see (<ref>)) by
\[\wbf_B' = (\Dbf^{\pmb{\omega}})^{-1}\wstar_B\]
for blocks $B$ corresponding to trigonometric terms, and
\[\wbf_B' = \wstar_B\]
for blocks $B$ corresponding to polynomial terms, which leads to
\[\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2 \leq \nrm{\Gbf^\star\wstar -\Gbf^\star(\Dbf +\Lbf)\wbf'}_2 = \nrm{\Gbf^\star_{p}\Lbf^{(p)}\wstar_p}_2\leq \nrm{\Gbf^\star_p}_2\nrm{\wstar}_2\nrm{\Lbf^{(p)}}_2\]
where we use the subscript $p$ to denote columns pertaining to polynomial terms of degree at most $p$. Again using the bounds on $\nrm{\Lbf^{(p)}}$ in the appendix, we then get
\[\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2 \leq \sigma^2 e{ p\choose 2}\nrm{\Gbf^\star_p}_2\nrm{\wstar}_2\]
where we also employed the necessary condition $\sigma^2<{p\choose 2}^{-1}$ from the previous lemma.
Hence, for all
\[\sigma^2<\frac{\nrm{\Gbf^\star\wstar}_2}{\nrm{\Gbf^\star_p}_2\nrm{\wstar}_2} \frac{1}{\mathfrak{J}e{p \choose 2}}\]
it holds that $\CalL(\widehat{\lambda}) < \CalL(\tilde{\lambda})$. We conclude by combining the bounds on $\sigma_c$ given in the previous lemma.
We now list some classical estimates that will be used to show that support recovery occurs with high probability as the resolution of the data increases. The bound (<ref>) is adapted from classical stability of full-rank linear least squares problems found in <cit.>, together with elementary norm equivalences.
Let $\Abf \in \Rbb^{m\times n}$ have rank $n$ and $\ybf = \Abf\xbf$ for $\xbf\in \Rbb^n$, $\xbf\neq \mathbf{0}$. If a perturbed system $(\Abf',\ybf')$ satisfies
\[\nrm{\Abf-\Abf'}_{\overrightarrow{\infty}}<\varepsilon, \qquad \nrm{\ybf-\ybf'}_\infty < \varepsilon\]
where $\Abf'$ has rank $n$ and[Note that the constant $C(\Abf,\xbf)$ in general takes the form
\[C(\Abf,\xbf) = \frac{\sqrt{m}+\sqrt{mn}\nrm{\xbf}_2}{\sigma_n(\Abf)(1-\alpha)}\]
valid for any $\alpha<1$ provided $\varepsilon\leq \alpha \frac{\sigma_n(\Abf)}{\sqrt{mn}}$. For convenience we chose $\alpha = 1/\sqrt{2}$ and the requirement $\varepsilon\leq \frac{\sigma_n(\Abf)}{\sqrt{2mn}}$.]
$\varepsilon\leq \frac{\sigma_n(\Abf)}{\sqrt{2mn}}$, then the solution $\xbf' = (\Abf')^\dagger \ybf' = ((\Abf')^T\Abf')^{-1}(\Abf')^T\ybf'$ to the perturbed least squares problem satisfies
\begin{equation}\label{xnormbnd}
\nrm{\xbf-\xbf'}_\infty\leq \nrm{\xbf-\xbf'}_2 \leq \left(\frac{\sqrt{2m}+\sqrt{2mn}\nrm{\xbf}_2}{\sigma_n(\Abf)(\sqrt{2}-1)}\right)\varepsilon =: C(\Abf,\xbf)\varepsilon.
\end{equation}
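As a quick numerical sanity check (not part of the proof), the bound (<ref>) can be verified on a random full-rank system, keeping the perturbation within the lemma's admissible range $\varepsilon\leq \sigma_n(\Abf)/\sqrt{2mn}$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 5
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x

sig_n = np.linalg.svd(A, compute_uv=False)[-1]      # smallest singular value
eps = min(1e-6, sig_n / np.sqrt(2 * m * n))         # within the lemma's range
A_pert = A + eps * rng.uniform(-1, 1, size=(m, n))  # entrywise perturbation <= eps
y_pert = y + eps * rng.uniform(-1, 1, size=m)

x_pert = np.linalg.lstsq(A_pert, y_pert, rcond=None)[0]
# The constant C(A, x) from the lemma with alpha = 1/sqrt(2):
C = (np.sqrt(2 * m) + np.sqrt(2 * m * n) * np.linalg.norm(x)) / (sig_n * (np.sqrt(2) - 1))
print(np.linalg.norm(x - x_pert), "<=", C * eps)
```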
We now introduce the main theorem of this section, showing subset support recovery with high probability so long as $\sigma<\sigma_c$ for some critical noise. Results are presented for Gaussian noise distributions; however, similar results hold for more general noise distributions with suitable modifications to the proof using Lemma <ref>.
Let Assumptions <ref>-<ref> hold with $\rho$ a Gaussian mean-zero noise distribution. There exists a critical noise $\sigma_c>0$ and a stability tolerance $\tau$, both independent of $m$, such that for all $\sigma<\sigma_c$ and $t< \tau$, and for sufficiently large $m$, it holds that
\begin{equation}\label{subsetsupprec_thm}
\supp{\widehat{\wbf}^{(m)}}\subset\supp{\wstar}
\end{equation}
with probability exceeding $1-2K\mathfrak{J}\exp\left(-\frac{c}{2}\left(mt\right)^{2/p_{\max}}\right)$, where $c$ is from Theorem <ref> and $\widehat{\wbf}^{(m)}$ is the one-shot MSTLS solution, $\widehat{\wbf}^{(m)} = \text{MSTLS}^{(1)}(\Gbf^{(m)},\bbf^{(m)})$. Moreover, if $\supp{\widehat{\wbf}^{(m)}}=\supp{\wstar}$, then it holds that
\begin{equation}
\nrm{\widehat{\wbf}^{(m)}-\wstar}_\infty < C'(t+\sigma^2)
\end{equation}
with the same probability, where $C'$ depends only on $(\Gbf^\star_{S^\star},\bbf^\star)$.
Let $\overline{\wbf}^0 =\Abf^{-1}\wstar = \overline{\Gbf}^\dagger \overline{\bbf}$ and define $S = S^0\setminus S^\star$ where $S^0 = \supp{\overline{\wbf}^0}$. By Lemma <ref> there exists $\sigma_c'$ such that $\sigma<\sigma_c'$ ensures
\begin{equation}\label{delta1}
\delta_1 := \min_{j\in S^\star}\left\vert \overline{\wbf}^0_j\right\vert-\max_{j\in (S^\star)^c}\left\vert\overline{\wbf}^0_j\right\vert > 0.
\end{equation}
By Lemma <ref> this guarantees existence of $\widehat{\lambda}$ such that $\supp{\overline{\wbf}^{\widehat{\lambda}}} = S^\star$, where $\overline{\wbf}^{\widehat{\lambda}} = \text{STLS}^{(1)}(\overline{\Gbf},\overline{\bbf},\widehat{\lambda})$. Lemma <ref> then ensures that there exists $\sigma_c\leq \sigma_c'$ such that for $\sigma<\sigma_c$,
\begin{equation}\label{delta2}
\delta_2 := \frac{\nrm{\overline{\bbf}}_2}{\mathfrak{J}}-\nrm{\overline{\Gbf}(\overline{\wbf}^0-\overline{\wbf}^{\widehat{\lambda}})}_2 > 0,
\end{equation}
which guarantees that $\widehat{\wbf} = \text{MSTLS}^{(1)}(\overline{\Gbf},\overline{\bbf})$ satisfies $\supp{\widehat{\wbf}}\subset S^\star$. Next, using Theorem <ref> and Corollary <ref>, we have that for all $t>0$ and sufficiently large $m$, it holds that
\[
\Pbb\left(\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}} > t \right) \leq 2K\mathfrak{J}\exp\left(-\frac{c}{2}(mt)^{2/p_{\max}}\right).
\]
All that remains is to show that for sufficiently small $t$, $\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}}<t$ leads to $\delta_1^{(m)},\delta_2^{(m)}>0$ where $\delta_1^{(m)},\delta_2^{(m)}$ are defined analogously to $\delta_1,\delta_2$ using $(\Gbf^{(m)},\bbf^{(m)})$.
Indeed, assume that
\begin{equation}\label{boundwitht}
\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}}<t.
\end{equation}
If $t<\frac{\sigma_{\mathfrak{J}}(\overline{\Gbf})}{\sqrt{2K\mathfrak{J}}}$, where $\sigma_{\mathfrak{J}}(\overline{\Gbf})$ is the smallest singular value of $\overline{\Gbf}$, then by Lemma <ref>, we have that
\[\nrm{\overline{\wbf}^0-\wbf^{(m),0}}_\infty<Ct,\]
where $C=C(\overline{\Gbf},\overline{\wbf}^0)$ is defined in (<ref>). This implies that
\[\delta_1^{(m)} := \min_{j\in S^\star}\left\vert \wbf^{(m),0}_j\right\vert-\max_{j\in (S^\star)^c}\left\vert \wbf^{(m),0}_j\right\vert \geq \delta_1 - 2Ct.\]
Hence, if $t < \min\left(\frac{\sigma_{\mathfrak{J}}(\overline{\Gbf})}{\sqrt{2K\mathfrak{J}}}, \frac{\delta_1}{2C}\right)$ and $m$ is sufficiently large, the probability that there exists $\widehat{\lambda}\in \pmb{\lambda}$ such that $\supp{\text{STLS}^{(1)}(\Gbf^{(m)},\bbf^{(m)},\widehat{\lambda})} = S^\star$ exceeds $1-2K\mathfrak{J}\exp\left(-\frac{c}{2}\left(mt\right)^{2/p_{\max}}\right)$.
Next, we have
\begin{align*}
\delta_2^{(m)} &:= \frac{\nrm{\bbf^{(m)}}_2}{\mathfrak{J}}-\nrm{\Gbf^{(m)}(\wbf^{(m),0}-\wbf^{(m),\widehat{\lambda}})}_2 \\
&= \delta_2 + \frac{\nrm{\bbf^{(m)}}_2-\nrm{\overline{\bbf}}_2}{\mathfrak{J}}-\nrm{\Gbf^{(m)}(\wbf^{(m),0}-\wbf^{(m),\widehat{\lambda}})}_2 +\nrm{\overline{\Gbf}(\overline{\wbf}^0-\overline{\wbf}^{\widehat{\lambda}})}_2\\
&\geq \delta_2 - \frac{\nrm{\bbf^{(m)}-\overline{\bbf}}_2}{\mathfrak{J}} - \nrm{\Gbf^{(m)}-\overline{\Gbf}}_2\left(\nrm{\wbf^{(m),0}}_2+\nrm{\wbf^{(m),\widehat{\lambda}}}_2\right)\\
&\geq \delta_2 - \left(\frac{\sqrt{K}}{\mathfrak{J}} +\sqrt{K\mathfrak{J}}\left(\nrm{\overline{\wbf}^0}_2+\nrm{\overline{\wbf}^{\widehat{\lambda}}}_2+\delta_1\right)+2C\nrm{\overline{\Gbf}}_2\right)t\\
&=:\delta_2 - C't
\end{align*}
where we used the upper-bound $t<\frac{\delta_1}{2C}$ and the fact that
\begin{equation}\label{subsetLSbound}
\nrm{\overline{\wbf}^\lambda-\wbf^{(m),\tilde{\lambda}}}_\infty<Ct
\end{equation}
for any $\lambda,\tilde{\lambda}$ that result in $\supp{\overline{\wbf}^\lambda} = \supp{\wbf^{(m),\tilde{\lambda}}}$, since (<ref>)
implies that $\nrm{\Gbf^{(m)}_{S'}-\overline{\Gbf}_{S'}}_{\overrightarrow{\infty}}<t$ on any subset $S'\subset\{1,\dots,\mathfrak{J}\}$.
Hence, for sufficiently large $m$ and any
\[ t < \min\left\{\frac{\sigma_{\mathfrak{J}}(\overline{\Gbf})}{\sqrt{2K\mathfrak{J}}},\ \frac{\delta_1}{2C},\ \frac{\delta_2}{C'}\right\}:=\tau,\]
we get that $\supp{\text{MSTLS}^{(1)}(\Gbf^{(m)},\bbf^{(m)})} \subset S^\star$ with probability exceeding $1-2K\mathfrak{J}\exp\left(-\frac{c}{2}\left(mt\right)^{2/p_{\max}}\right)$.
To prove the coefficient accuracy, if $\supp{\what^{(m)}} = S^\star$ (i.e. $\what^{(m)}=\wbf^{(m),\widehat{\lambda}}$), (<ref>) implies that
\[\nrm{\what^{(m)} - (\overline{\Gbf}_{S^\star})^\dagger\overline{\bbf}}_\infty <Ct\]
where $C=C(\overline{\Gbf}_{S^\star}, (\overline{\Gbf}_{S^\star})^\dagger\overline{\bbf})$ is again defined in (<ref>). Furthermore, properties of $\Abf$ imply that, for some $C''$ depending only on the maximum polynomial degree $p$ and maximum frequency $\omega_{\max}$ appearing in the true model, we have $\nrm{\overline{\Gbf}_{S^\star}-\Gbf_{S^\star}^\star}_\infty<C''\sigma^2$, which together with Lemma <ref> implies that
\[\nrm{\left(\overline{\Gbf}_{S^\star}^\dagger - (\Gbf_{S^\star}^\star)^\dagger\right)\bbf^\star}_\infty<\tilde{C}\sigma^2.\]
Altogether, if $\supp{\what^{(m)}} = S^\star$, then with probability exceeding $1-2K\mathfrak{J}\exp\left(-\frac{c}{2}\left(mt\right)^{2/p_{\max}}\right)$ it holds that
\[\nrm{\what^{(m)} - \wstar}_\infty\leq \nrm{\what^{(m)} - (\overline{\Gbf}_{S^\star})^\dagger\overline{\bbf}}_\infty+\nrm{(\overline{\Gbf}_{S^\star})^\dagger\overline{\bbf} - \wstar}_\infty\leq Ct + \tilde{C}\sigma^2 \leq C'(t+\sigma^2).\]
As alluded to at the beginning of this section, if we make an additional assumption on the noise-free continuous data, we can strengthen Lemma <ref> to ensure $\supp{\widehat{\wbf}}=\supp{\wstar}$ for all sufficiently small $\sigma$, which subsequently strengthens Theorem <ref> to ensure $\supp{\widehat{\wbf}^{(m)}}=\supp{\wstar}$ with high probability for all sufficiently small $\sigma$ and sufficiently large $m$. This condition is the following:
\begin{equation}\label{condforsupprec}
\mu^\star:= \min_{S\subsetneq S^\star}\frac{\nrm{\Pbf^\perp_{\Gbf^\star_{S^\star\setminus S}} \bbf^\star}}{\nrm{\bbf^\star}}-\frac{|S|+1}{\mathfrak{J}}>0.
\end{equation}
Using that $\Pbf^\perp_{\Gbf^\star_{S^\star\setminus S}} \bbf^\star = \Pbf^\perp_{\Gbf^\star_{S^\star\setminus S}}\Gbf^\star_S\wstar_S$, in words, this says that the contribution of each subset of true terms $\Gbf_S^\star\wstar_S$ that is orthogonal to the subspace spanned by the remaining terms ($\text{span}(\Gbf^\star_{S^\star\setminus S})$) cannot be arbitrarily small. Specifically, this orthogonal contribution must be at least $(\frac{|S|+1}{\mathfrak{J}})\nrm{\bbf^\star}$. In Appendix <ref> we show that this is sufficient to guarantee that the MSTLS loss $\CalL$ satisfies $\CalL(\widehat{\lambda})<\CalL(\tilde{\lambda})$ for all $\supp{\overline{\wbf}^{\widehat{\lambda}}} = S^\star$ and $\supp{\overline{\wbf}^{\tilde{\lambda}}}=\tilde{S}\subsetneq S^\star$. However, condition (<ref>) is unsatisfactory in that it cannot be checked without knowledge of the true support $S^\star$. Nevertheless, with $|S|=1$, we can interpret (<ref>) as a modeling criterion: each true term in the model must provide a unique (orthogonal) contribution to the dynamics given in $\bbf^\star$ of at least $(2/\mathfrak{J})100\%$.
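Although the condition cannot be verified without $S^\star$, it is straightforward to evaluate on synthetic examples. The sketch below (a helper of our own; $S$ ranges over nonempty proper subsets of $S^\star$, since $S=\emptyset$ gives a trivially nonpositive term) computes the margin $\mu^\star$ by projecting $\bbf^\star$ onto the orthogonal complement of $\text{span}(\Gbf^\star_{S^\star\setminus S})$.

```python
import numpy as np
from itertools import combinations

def mu_star(G_star, w_star):
    """Margin of the support-recovery condition: for each nonempty proper
    subset S of the true support S*, compare the part of b* orthogonal to
    span(G*_{S* \ S}) against (|S| + 1) / J * ||b*||."""
    J = G_star.shape[1]
    S_star = list(np.flatnonzero(w_star))
    b_star = G_star @ w_star
    mu = np.inf
    for size in range(1, len(S_star)):
        for S in combinations(S_star, size):
            rest = [j for j in S_star if j not in S]
            Q, _ = np.linalg.qr(G_star[:, rest])  # orthonormal basis of span(G*_{S*\S})
            resid = b_star - Q @ (Q.T @ b_star)   # orthogonal-complement projection
            mu = min(mu, np.linalg.norm(resid) / np.linalg.norm(b_star)
                     - (size + 1) / J)
    return mu
```

For an orthonormal library with $\wstar = (1,1,0,0,0)$ and $\mathfrak{J}=5$, this gives $\mu^\star = 1/\sqrt{2}-2/5>0$, so the condition holds.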
§.§ Asymptotic consistency with filtering
In this section we show that filtering the data prior to building the linear system $(\Gbf^{(m)},\bbf^{(m)})$ results in unconditional convergence of estimators to the true weights, in other words $\what^{(m)}\to \wstar$ in probability, provided the filtering window scales appropriately.
We define the filtered data $\tilde{\Ubf}^{(m)}$ with respect to a discrete convolutional filter $\pmb{\nu}^{(m)}$ as
\[\tilde{\Ubf}^{(m)} = \pmb{\nu}^{(m)}*\Ubf^{(m)},\]
where $\pmb{\nu}^{(m)}\in \Rbb^{n_1^\nu\times\cdots\times n_{d+1}^\nu}$ satisfies $\nrm{\pmb{\nu}^{(m)}}_{\overrightarrow{1}}=1$ and $\pmb{\nu}^{(m)} > 0$ (i.e., $\pmb{\nu}^{(m)}$ is a discrete probability distribution). For convenience we perform symmetric reflection of the data at the boundaries to maintain the same number of data points in $\tilde{\Ubf}^{(m)}$ and $\Ubf^{(m)}$. The filter $\pmb{\nu}^{(m)}$ is characterized by its filter width, the total number of gridpoints at level $m$:
\[m^{(\nu)}:= \prod_{i=1}^{d+1} n_i^\nu.\]
The class of filters $\pmb{\nu}^{(m)}$ resulting in convergence is large; however, for simplicity we restrict our attention to the simple moving average filter, in which case each entry of $\pmb{\nu}^{(m)}$ equals $1/m^{(\nu)}$. In particular, we show that the moving average filter provides convergence so long as $m^{(\nu)}\gtrsim m^{\alpha}$ as $m\to \infty$ for some $\alpha<1$, where as before $m$ is the number of points that the test function $\psi$ is supported on at the grid resolution $(\Delta x^{(m)},\Delta t^{(m)})$. Since the simple moving average filter reduces variance by a factor of $1/m^{(\nu)}=\nrm{\pmb{\nu}^{(m)}}_{\overrightarrow{2}}^2$, we conjecture that the results below hold for other filters satisfying $\nrm{\pmb{\nu}^{(m)}}_{\overrightarrow{2}}^2\leq m^{-\alpha}$.
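A minimal sketch of this filter (our own implementation choice; the cumulative-sum formulation is just an efficiency detail, while the symmetric boundary reflection matches the convention above):

```python
import numpy as np

def moving_average_filter(U, width):
    """Separable moving average with odd window `width` along every axis,
    using symmetric reflection at the boundaries so that the output has
    the same shape as U."""
    half = width // 2
    V = U.astype(float)
    for axis in range(V.ndim):
        pad = [(half, half) if a == axis else (0, 0) for a in range(V.ndim)]
        Vp = np.pad(V, pad, mode="symmetric")
        # cumulative-sum formulation of the 1-D moving average along `axis`
        cs = np.cumsum(Vp, axis=axis)
        cs = np.insert(cs, 0, 0.0, axis=axis)
        lead = np.take(cs, range(width, cs.shape[axis]), axis=axis)
        lag = np.take(cs, range(0, cs.shape[axis] - width), axis=axis)
        V = (lead - lag) / width
    return V
```

Filtering each of the $d+1$ axes in turn gives total width $m^{(\nu)}=\text{width}^{d+1}$, so scaling the per-axis window like $m^{\alpha/(d+1)}$ achieves $m^{(\nu)}\gtrsim m^\alpha$.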
We denote the WSINDy linear system built using the filtered data $\tilde{\Ubf}^{(m)}$ as
\[(\tilde{\Gbf}^{(m)}, \tilde{\bbf}^{(m)}).\]
Our approach is to show that $(\tilde{\Gbf}^{(m)}, \tilde{\bbf}^{(m)})\to (\Gbf^\star, \bbf^\star)$ in probability, after which the full-rank assumption on $\Gbf^\star$ implies that the least-squares solution $\tilde{\wbf}^0 = (\tilde{\Gbf}^{(m)})^\dagger \tilde{\bbf}^{(m)}$ converges to $\wstar$. The main hurdle in proving this is the correlation introduced by filtering the data, since entries of $(\tilde{\Gbf}^{(m)}, \tilde{\bbf}^{(m)})$ are now sums of correlated heavy-tailed random variables, which falls outside the analysis of [35, 4]. The proof can be found in Appendix <ref>.
Let Assumptions <ref>, <ref>, and <ref>
hold and let $\pmb{\nu}^{(m)}$ be a simple moving average filter with $m^{(\nu)}\gtrsim m^{\alpha}$ for some $\alpha\in(0,1)$. Further, let $\CalF$ be a library of locally Lipschitz functions such that for all $f\in \CalF$ and $x,y\in \Rbb$,
\[|f(x)-f(y)|\leq C_\CalF|x-y|\left(1+|x-y|^{p_{\max}-1}\right)\]
for some $C_\CalF\geq 0$ and $p_{\max}\geq 1$. Then for $t> \mathfrak{t}(m)$, where $\mathfrak{t}(m) = \CalO(m^{-\alpha \left(\frac{k}{d+1}-\frac{1}{2}\right)})$ and $k$ is the Sobolev regularity of $u\in \CalH^{k,\infty}(D)$, and sufficiently large $m$, we have
\[\Pbb \left(\|(\tilde{\Gbf}^{(m)},\tilde{\bbf}^{(m)}) - (\Gbf^\star,\bbf^\star)\|_\infty>t\right) \leq 3K\mathfrak{J}\exp\left(-\frac{c}{2^{1+p_{\max}}}[m(t-\mathfrak{t}(m))]^{2/{p_{\max}}}\right),\]
where $c$ is the same rate from Theorem <ref>.
A more careful analysis would replace the rate $c$ with a variable rate $c^*(m)$ (bounded below by the $c$ from Theorem <ref>) that increases as $\sim m^{\alpha}$, since filtering transforms the sub-Gaussian noise distribution at each $m$ into a new distribution $\rho^{(m)}$ with reduced variance $\sigma^2/m^\alpha$.
Beyond factors that serve to increase the rate $c$ in Theorem <ref>, convergence is largely dictated by how rapidly $\mathfrak{t}(m)$ decreases. This is determined by three deterministic effects: (1) the bias $|\tilde{\Ubf}^{(m),\star}-\Ubf^{(m),\star}|$ between the filtered and unfiltered noise-free data, (2) the distributional convergence of $\rho^{(m)}$ to $\delta_0$ as $m\to \infty$, and (3) the convergence of the trapezoidal rule on functions $\partial \psi^{\alpha^s}(\cdot) f(u(\cdot))$. In the case that $u$ is locally polynomial of degree $q+1$, higher-order filters can be used to reduce (1) to $\CalO(m^{-\frac{(q+1)(1-\alpha)}{d+1}})$. However, in most cases (2) still dominates, limiting $\mathfrak{t}(m)$ to $\CalO(m^{-\alpha/2})$. If $\CalF$ contains only analytic functions, then (2) acquires the rate $\CalO(m^{-\alpha})$. Higher-order filters are of no help in decreasing (2) since $|f*\rho^{(m)}(u)-f*\delta_0(u)|=\CalO(\nrm{\pmb{\nu}^{(m)}}_2^2)$, and $\nrm{\pmb{\nu}^{(m)}}_2^2$ is minimized by the simple moving average filter. For (3), if $u\in H^k$ with $k>(d+1)/2$ and the $f_j$ are smooth, then the convergence rate of the trapezoidal rule is $\CalO(m^{-k+(d+1)/2})$, which leads to rapidly decreasing $\mathfrak{t}(m)$ with higher $k$, and thus faster entry into the region of exponential concentration. Ultimately, while filtering the data enables concentration to the noise-free problem, it is clear that more work is needed to make filtering useful in practice due to these intermediate biases.
Lastly, Lemma <ref> and Theorem <ref> directly imply the following coefficient accuracy.
Under the same conditions as Theorem <ref>, it holds that
\[\nrm{\tilde{\wbf}^{(m),0}-\wstar}_\infty\leq Ct\]
with probability exceeding $1 - 3K\mathfrak{J}\exp\left(-\frac{c}{2^{1+p_{\max}}}[m(t-\mathfrak{t}(m))]^{2/{p_{\max}}}\right)$,
where $C=C(\Gbf^\star,\wstar)$ is defined in Lemma <ref> and $\tilde{\wbf}^{(m),0} = (\tilde{\Gbf}^{(m)})^\dagger\tilde{\bbf}^{(m)}$ is the filtered least-squares solution.
Since the least-squares solution in the previous corollary will in general not be sparse, and is therefore not advised on its own, we note that Theorem <ref> directly implies that $\supp{\what^{(m)}}\subset \supp{\wstar}$ with high probability as in Theorem <ref>, where now $\what^{(m)} = \text{MSTLS}^{(1)}(\tilde{\Gbf}^{(m)},\tilde{\bbf}^{(m)})$ is the one-shot MSTLS solution on the filtered linear system $(\tilde{\Gbf}^{(m)},\tilde{\bbf}^{(m)})$. Moreover, in the case of filtered data the result is not restricted to trigonometric and polynomial libraries, but holds for any locally Lipschitz library $\CalF$. Finally, condition (<ref>) implies full support recovery $\supp{\what^{(m)}} = \supp{\wstar}$ with high probability from filtered data, using similar arguments as in Appendix <ref>.
§ NUMERICAL EXPERIMENTS
We now test the theoretical results in the previous sections using four example problems: (1) the Lorenz system, (2) a hyper-diffusive Kuramoto-Sivashinsky (KS)-type equation, (3) a cubic oscillator, and (4) a nonlinear viscous Burgers-type model. Examples (1) and (2) are ordinary and partial differential equations, respectively, which do not exhibit a critical noise (equivalently, $\sigma_c=\infty$). For these examples we demonstrate that recovery of the correct model occurs with probability approaching 1 as $m\to \infty$ across the noise spectrum. In contrast, systems (3) and (4) do exhibit a finite critical noise $\sigma_c$, and in these cases we show that a simple moving average filter is sufficient to enable convergence for $\sigma\geq \sigma_c$.
For cases of unconditional convergence (examples (1) and (2)) we do not report results for recovery from filtered data, as these were consistently worse than their unfiltered counterparts. This is supported by Theorems <ref> and <ref>, as convergence for the filtered linear system is slower than for raw data.
§.§ Data generation and WSINDy settings
For each example we subsample a high-accuracy fine-grid simulation of the system in order to test performance at different test function support sizes $m$. We then add mean-zero Gaussian white noise with specified variance $\sigma^2$ to every datapoint. Where specified, we use a noise ratio $\sigma_{NR}$ to determine $\sigma$, and set
\[\sigma = \nrm{\Ubf^{(m),\star}}_{stdev'}\sigma_{NR}\]
where $\nrm{\Ubf^{(m),\star}}_{stdev'}$ indicates the standard deviation of the clean data $\Ubf^{(m),\star}$ stretched into a column vector.
Throughout we use the MSTLS algorithm outlined in equations (<ref>)-(<ref>) with sparsity thresholds $\log_{10}\pmb{\lambda} = \texttt{linspace}(-4,0,100)$. For simplicity, we fix the reference test function along each coordinate to be the $C^\infty_c(\Rbb)$ bump function
\[\phi(v) = \ind{(-1,1)}(v)\exp\left(\frac{9}{v^2-1}\right).\]
The model library and convolution query points vary by example as described below.
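For concreteness, the reference bump and its discretization might look as follows (the grid size is illustrative, and the full weak-form discretization also requires derivative weights $\partial^{\alpha}\psi$, omitted here):

```python
import numpy as np

def phi(v):
    """Reference C-infinity bump supported on (-1, 1)."""
    out = np.zeros_like(v, dtype=float)
    inside = np.abs(v) < 1
    out[inside] = np.exp(9.0 / (v[inside] ** 2 - 1.0))
    return out

# Sample on an illustrative grid of m interior points; a separable test
# function psi on a space-time grid is a product of 1-D profiles like this.
m = 101
v = np.linspace(-1.0, 1.0, m + 2)[1:-1]  # skip the endpoints, where phi = 0
weights = phi(v)
```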
§.§ Performance metrics
We are mostly concerned with verifying the different asymptotic consistency results above for raw and filtered data over a range of noise levels. For each noise level $\sigma$, we run the WSINDy algorithm on 200 different independent noise instantiations at each test function support size $m$, producing 200 learned models $(\what^{(m),(i)})_{i=1,\dots,200}$ which we average results over. We record the probability of support recovery
\begin{equation}
\Pbb\left(S^{(m)}=S^\star\right)\approx \frac{1}{200}\sum_{i=1}^{200} \mathbbm{1}\left( \supp{\what^{(m),(i)}}=\supp{\wstar} \right)
\end{equation}
and where relevant the probability of support inclusion
\begin{equation}
\Pbb\left(S^{(m)}\subset S^\star\right)\approx \frac{1}{200}\sum_{i=1}^{200} \mathbbm{1}\left( \supp{\what^{(m),(i)}}\subset\supp{\wstar} \right).
\end{equation}
We also report the maximum relative coefficient error over the true support set, defined as
\begin{equation}\label{eq:err_inf}
E_\infty(m) = \Ebb\left[\max_{j\in S^\star} \frac{|\what^{(m)}_j-\wstar_j|}{|\wstar_j|}\right]\approx \frac{1}{200}\sum_{i=1}^{200}\max_{j\in S^\star} \frac{|\what^{(m),(i)}_j-\wstar_j|}{|\wstar_j|} .
\end{equation}
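Given the 200 learned coefficient vectors at each $(m,\sigma)$ pair, these metrics reduce to a few lines (a small helper of our own, mirroring the three formulas above):

```python
import numpy as np

def support_metrics(w_hats, w_star):
    """Empirical P(S = S*), P(S subset of S*), and mean E_inf over trials."""
    S_star = set(np.flatnonzero(w_star))
    exact, subset, errs = [], [], []
    for w in w_hats:
        S = set(np.flatnonzero(w))
        exact.append(S == S_star)
        subset.append(S <= S_star)
        # max relative coefficient error over the true support
        errs.append(max(abs(w[j] - w_star[j]) / abs(w_star[j]) for j in S_star))
    return np.mean(exact), np.mean(subset), np.mean(errs)
```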
§.§ Unconditional consistency
Example datasets for the Lorenz system (left) at 20% noise and hyper-KS system (right) at $100\%$ noise.
§.§.§ Lorenz System
The true model equations for the Lorenz system are
\begin{equation}\label{eq:lorenz}
\begin{dcases}
\frac{du_1}{dt} = (-10)u_1+(10)u_2 \\
\frac{du_2}{dt} = (28)u_1+(-1)u_2+(-1)u_1u_3 \\
\frac{du_3}{dt} = (-8/3)u_3+(1)u_1u_2.
\end{dcases}
\end{equation}
Since each term is either linear or bilinear, the associated continuum linear system is unbiased, in other words $\overline{\Gbf}_{S^\star} = \Gbf^\star_{S^\star}$, hence the system does not exhibit a critical noise. We simulate (<ref>) for $t\in [0,10]$ using RK-45 with absolute tolerance $10^{-12}$ on $250,000$ equally-spaced points. This fine-grid solution is then subsampled by successive factors of two, leading to coarser data with approximate total numbers of points $\{2^{-k}250,000\ :\ k=0,\dots,9\}$. At each level of resolution we specify the test function width $m$ to be $2\%$ of the total timeseries, so that $|\supp{\psi}|/T=0.02$ for all $m$. We use a maximum of $K=1000$ equally-spaced convolution query points when constructing $\Gbf^{(m)}$. (When the number of timepoints $M$ is less than 1000 we use all possible query points, so that $K = M-m+1$.) We let $\CalF$ be the set of polynomials up to total degree $p_{\max}=6$ in the state variables $(u_1,u_2,u_3)$, leading to 84 library terms.
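The simulation and noise model for this experiment can be sketched as follows (the initial condition and the reduced point count here are our illustrative choices; the experiments above use $250{,}000$ points):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u):
    # Right-hand side of the Lorenz system, (sigma, rho, beta) = (10, 28, 8/3)
    x, y, z = u
    return [10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z]

# Fine-grid simulation on [0, 10] (point count reduced here for brevity)
t_eval = np.linspace(0.0, 10.0, 2501)
sol = solve_ivp(lorenz, (0.0, 10.0), [-8.0, 7.0, 27.0], method="RK45",
                t_eval=t_eval, rtol=1e-10, atol=1e-12)
U_fine = sol.y.T                   # shape (2501, 3)

# Subsample by a factor of two and add noise at ratio sigma_NR
U_coarse = U_fine[::2]
sigma_NR = 0.01
sigma = sigma_NR * np.std(U_fine)  # standard deviation of the flattened clean data
rng = np.random.default_rng(0)
U_noisy = U_coarse + sigma * rng.standard_normal(U_coarse.shape)
```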
We examine noise ratios $\sigma_{NR}$ from $0.001$ to $1$, which translate to $\sigma$ from $0.013$ to $13$, since $\nrm{\Ubf^{(m),\star}}_{stdev'}\approx 13$. Figure <ref> (left column) shows that across the noise spectrum we obtain asymptotic recovery of the correct system, with coefficient errors $E_\infty$ eventually entering a Monte-Carlo-type $\CalO(m^{-1/2})$ convergence regime.
§.§.§ Hyper-KS
In order to demonstrate the ability of WSINDy to recover high-order PDEs, we examine a hyper-diffusive, dispersive evolution equation that exhibits spatiotemporal chaos similar to the KS equation:
\begin{equation}\label{hyperKS}
\partial_t u = (1)\partial_{xxxx} u + (0.75)\partial_{xxxxxx} u + (-0.5)\partial_x(u^2) + (0.1)\partial_{xxx} u^2.
\end{equation}
Such models are of general interest for their potential to capture challenging dynamics such as earthquakes and flame propagation. The dynamics of (<ref>) are elaborated on in Appendix <ref>. Despite its complexity at face value, the model (<ref>) can be recovered using WSINDy as $m\to \infty$ with no restrictions on the noise level. This is because the system is composed of only linear and quadratic terms, with no quadratic growth term $u^2$, and hence falls into case $(i)$ of Lemmas <ref> and <ref>.
We simulate (<ref>) using ETDRK4 and Fourier-spectral collocation on a fine grid $(\Xbf^f,\tbf^f)\subset [0,32\pi]\times [0,82]$ containing $1024\times 1025$ points in space and time to mimic the continuum limit. We then examine a range of resolutions, with the coarsest grid having $64\times 65$ points, i.e. $32$ times coarser than $(\Xbf^f,\tbf^f)$. We examine $\sigma_{NR}$ in the range $0.001$ to $1$, where $\sigma_{NR}\approx \sigma$ here since $\nrm{\Ubf^{(m),\star}}_{stdev'}\approx 1$. The reference test function is set so that $|\supp{\psi}|/|\Omega\times(0,T)| = 1/25$ (see inset plot of Figure <ref> (right)). We fix the library at $\CalF = (u^q)_{q=0,\dots,8}$ and the differential operators $\partial^{\pmb{\alpha}} = (\partial_x^q)_{q=0,\dots,8}$, leading to a total of 73 library terms.
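The stated count of 73 library terms is consistent with the following accounting, which is our reading of the setup (labeled as an assumption): the $9\times 9 = 81$ pairs of nonlinearities $u^q$ and derivative orders $\partial_x^a$, minus the $8$ combinations $\partial_x^a u^0$ with $a\geq 1$, which are identically zero.

```python
# Hypothetical accounting for the 73 hyper-KS library terms:
# 9 nonlinearities (u^0, ..., u^8) crossed with 9 derivative orders
# (d_x^0, ..., d_x^8), excluding d_x^a(u^0) for a >= 1, which vanish.
terms = [(q, a) for q in range(9) for a in range(9)
         if not (q == 0 and a >= 1)]
assert len(terms) == 9 * 9 - 8 == 73
```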
Similar to the Lorenz system, we observe support recovery of (<ref>) in Figure <ref> (right column) with probability approaching one for all $\sigma_{NR}$ examined. Moreover, the error scales asymptotically like $\CalO(m^{-1/2})$, where we note that $m=m_xm_t$. It is interesting that even at lower resolutions we recover the correct system with fairly high probability: at $\sigma_{NR} = 0.01$, we recover the correct system with $>98\%$ probability from a grid of size $86\times 86$ (corresponding to $(m_x,m_t) = (15,15)$), with average coefficient error $E_\infty<0.026$. This is notable given the model's dependence on $\partial^4_xu$ and $\partial^6_xu$.
Recovery results for the Lorenz system (left) and hyper-KS equation (right). Both systems exhibit similar asymptotic consistency trends, where support recovery is achievable at any noise level if $m$ is taken large enough, and errors eventually decrease at a rate $\CalO(m^{-1/2})$ (note in the right column $m=m_xm_t$).
§.§ Conditional consistency and filtering
Left: numerical solution to the cubic oscillator (<ref>) together with noisy data at $\sigma=\sigma_c$ and corresponding filtered data. Right: numerical solution to the nonlinear viscous Burgers model (<ref>) with noise $\sigma = 10\sigma_c$.
The next two examples do exhibit a finite critical noise, and so we report the performance of WSINDy on the raw data as well as filtered data using a simple moving average filter. We design the filter width $m^{(\nu)}$ using the bounds (<ref>)
in Lemma <ref>. First we use the data to calculate an estimate of $\sigma$, denoted by $\sigma_{\text{est}}$, using the method in Appendix <ref>. We then specify a prior $\tau^\star=0.01$ for the ratio $\min_{i\in S^\star}|\wstar_i|/\nrm{\wstar}_\infty$. Assuming $\tau^\star$ is well-specified, a desirable quality of the filter given the bounds (<ref>) is to have
\[\frac{1}{\tau^\star}{p_{\max} \choose 2}\sigma_\nu^2 < 1\]
where $\sigma_\nu^2 = \frac{\sigma^2}{m^{(\nu)}}$ is the reduced variance of the filtered data, given an initial noise variance of $\sigma^2$. We can then solve for $m^{(\nu)}$ using our variance estimate and our prior $\tau^\star$:
\[m^{(\nu)} > {p_{\max} \choose 2}\frac{\sigma_\text{est}^2}{\tau^\star}.\]
In both examples we use polynomials up to total degree $p_{\max} = 6$, and we set $\tau^\star = 0.01$, which translates into a filter width of
\begin{equation}
m^{(\nu)}_1 > \left\lfloor (1500 \sigma_\text{est}^2)^{\frac{1}{d+1}} \right\rfloor
\end{equation}
points in each dimension. We use this together with a restriction on the filter width depending on the test function support size $m$, and set
\[m^{(\nu)}_1 = \left\lfloor \min\left(2(1500 \sigma_\text{est}^2)^{\frac{1}{d+1}}, m^{\frac{1}{d+1}}/2\right)\right\rfloor.\]
In this way the variance is reduced enough (given assumptions on $\tau^\star$) to cancel spurious terms that arise, but the filter only covers at most $1/2^{d+1}$ of $|\supp{\psi}|$ to limit the resulting bias. We make no claim that this is the optimal way to choose the filter, but as we'll see below, it is sufficient to ensure support recovery beyond the critical noise level.
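Under the choices above ($p_{\max}=6$ and $\tau^\star=0.01$, so that ${p_{\max}\choose 2}/\tau^\star = 1500$), the filter-width rule can be sketched as follows; the function and variable names are ours, not the paper's:

```python
from math import comb, floor

def filter_width(sigma_est, m, d, p_max=6, tau_star=0.01):
    """Per-dimension moving-average filter width m_1^(nu).

    Large enough that the filtered noise variance satisfies
    C(p_max,2) * sigma_est^2 / (m_1^(nu))^(d+1) < tau_star, yet small
    enough that the filter covers at most half of supp(psi) per dimension.
    """
    c = comb(p_max, 2) / tau_star              # = 1500 for p_max=6, tau*=0.01
    variance_rule = 2.0 * (c * sigma_est**2) ** (1.0 / (d + 1))
    support_cap = m ** (1.0 / (d + 1)) / 2.0
    return floor(min(variance_rule, support_cap))

# ODE case (d = 0): the variance rule is the binding constraint here
assert filter_width(0.1, 10_000, d=0) == 30
# (1+1)-dimensional PDE case (d = 1)
assert filter_width(0.1, 10_000, d=1) == 7
```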
Recovery results for the Cubic Oscillator (<ref>) (left) and nonlinear viscous Burgers equation (<ref>) (right). Solid lines correspond to raw (unfiltered) data and dot-dashed lines correspond to filtered data. For both systems WSINDy applied to the raw data only recovers the correct model for $\sigma<\sigma_c$, whereas support recovery is achievable at higher noise levels from the filtered data if $m$ is taken large enough. Moreover, solutions converge on the filtered data at a higher rate than $\CalO(m^{-1/2})$ (see dot-dashed lines in the bottom right plot). (Note that in the right column $m=m_xm_t$).
§.§.§ Cubic Oscillator
\begin{equation}\label{cubicosc}
\begin{dcases}
\frac{du_1}{dt} = (2)u_1^3+(-0.1)u_2^3 \\
\frac{du_2}{dt} = (-0.1)u_1^3+(-2)u_2^3
\end{dcases}
\end{equation}
The cubic oscillator has origins in quantum mechanics and represents a particle in an anharmonic potential well. The cubic terms each generate spurious linear terms in the limit, which persist in the learned equations for noise levels above the critical noise $\sigma_c = \sqrt{0.1/6}\approx 0.13$. A fine-grid solution is obtained on $10^6$ equally-spaced timepoints $\tbf \subset [0,25]$. The data is then subsampled to obtain coarser resolutions. As with the Lorenz system, we fix $|\supp{\psi}|=T/50$ and we use a maximum of $K=1000$ equally-spaced query points (meaning, as before, that $K=1000$ for total number of timepoints $M>1000$, and otherwise $K=M-m+1$). We examine noise levels $\sigma$ in the range $10^{-2}\sigma_c\approx 0.0013$ to $10\sigma_c \approx 1.3$. In terms of the noise ratio $\sigma_{NR}$, given that $\nrm{\Ubf^{(m),\star}}_{stdev'}\approx 0.6253$, this corresponds to $\sigma_{NR}$ in the range $0.0021$ to $2.064$.
In Figure <ref> (left) we report the asymptotic recovery trends for (<ref>). For noise levels $\sigma < \sigma_c$, we recover the correct support as $m\to\infty$ for both raw and filtered datasets, with WSINDy performing better overall on the raw data. For $\sigma\geq \sigma_c$, the raw data fails to lead to support recovery, but the method works successfully on the filtered data, as predicted. However, this particular problem exhibits slow convergence: we observe rapid subset support recovery (left middle panel of Figure <ref>) using filtered data, corresponding to dropping one or both of the smaller terms in (<ref>), but full support recovery requires impractical amounts of data under the chosen hyperparameters. Nevertheless, the asymptotic theoretical results are borne out.
§.§.§ Nonlinear Viscous Burgers
In our last example we examine the model
\begin{equation}\label{burgers}
\partial_t u = (0.01)\partial_{xx} u + (-0.5)\partial_x(u^2) + (-1)u^3+(2)u^2+(1)u^0,
\end{equation}
which allows for explicit computation of the critical noise level $\sigma_c$ similar to the cubic oscillator. Specifically, the $(-1)u^3$ term generates a spurious term $(-3\sigma^2) u^1$ in the continuum limit, and the term $(2)u^2$ generates a term $(2\sigma^2)u^0$, which corrupts the existing coefficient of $u^0$. With a diffusion coefficient of $0.01$, the true model will not be identified for noise levels above the critical noise $\sigma_c:=\sqrt{0.01/3}\approx 0.058$. We examine noise levels $\sigma$ in the range $10^{-2}\sigma_c\approx 0.0006$ to $10\sigma_c \approx 0.5774$. In terms of the noise ratio $\sigma_{NR}$, given that $\nrm{\Ubf^{(m),\star}}_{stdev'}\approx 1.16$, this corresponds to $\sigma_{NR}$ in the range $0.0005$ to $0.5$.
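These spurious terms stem from the noise shifting the expectations of nonlinear library terms: for $\ep\sim\CalN(0,\sigma^2)$, $\Ebb[(u+\ep)^3] = u^3 + 3\sigma^2 u$ and $\Ebb[(u+\ep)^2] = u^2 + \sigma^2$. A quick Monte Carlo illustration (numerical values chosen for this sketch only):

```python
import random

random.seed(0)
u, sigma, n = 0.7, 0.3, 200_000
noisy = [u + random.gauss(0.0, sigma) for _ in range(n)]

mean_cube = sum(x**3 for x in noisy) / n
mean_square = sum(x**2 for x in noisy) / n

# E[(u+eps)^3] = u^3 + 3*sigma^2*u : the source of the spurious u^1 term
assert abs(mean_cube - (u**3 + 3 * sigma**2 * u)) < 1e-2
# E[(u+eps)^2] = u^2 + sigma^2 : the source of the corrupted u^0 term
assert abs(mean_square - (u**2 + sigma**2)) < 1e-2
```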
We simulate (<ref>) on a fine grid $(\Xbf^f,\tbf^f)\subset [-1,1]\times [0,1.5]$ containing $(2048,1801)$ points in space and time to mimic the continuum limit. We examine a range of resolutions, with the coarsest grid having $(64,57)$ points, i.e. $32$ times coarser than $(\Xbf^f,\tbf^f)$. The data at the finest resolution with noise level $\sigma=10\sigma_c$ is depicted in Figure <ref> with the reference test function $\psi$ overlaid. We fix the reference test function such that $|\supp{\psi}|/|\Omega\times(0,T)| = 1/16$ (see Figure <ref>), and we fix the differential operators $\partial^{\pmb{\alpha}} = (\partial_x^q)_{q=0,\dots,6}$ and the nonlinearities $\CalF = (u^q)_{q=0,\dots,6}$.
Figure <ref> (right column) shows recovery results as $m\to \infty$ for various noise levels, comparing raw data (solid lines) with filtered data (dot-dashed lines). As expected, for $\sigma<\sigma_c$ we see rapid recovery of the true support using both raw and filtered data. For $\sigma = \sigma_c$ with raw data (purple solid line), we see an initial increase in the probability of recovery, followed by a slow decline, which is to be expected since the continuum problem is not expected to yield correct recovery in this case. All noise levels $\sigma>\sigma_c$ using the raw data yield zero probability of correct support recovery. On the other hand, filtering the data successfully enables support recovery at all higher noise levels.
At noise levels $\sigma<\sigma_c$ the coefficient error $E_\infty(m)$ decreases at approximately the Monte-Carlo rate $\CalO(m^{-1/2})$ (bottom right panel of Figure <ref>). For higher noise levels, $E_\infty$ decreases at an increased rate on filtered data, reflecting the observation in Remark <ref>.
§ DISCUSSION
In this work we have provided an in-depth analysis of the WSINDy algorithm in the limit of continuum data. We have presented results in as general a framework as possible, in particular showing convergence for non-smooth solutions of differential equations. This analysis includes several key results that can be used to improve equation-learning algorithms in general. In particular, we identify that WSINDy models may in general be only conditionally consistent with the ground truth, and that spurious terms may arise in the limit if the noise level is above a critical threshold. Included in this result is the identification of a large class of models for which WSINDy does recover the correct model in the limit at any noise level, which explains the previously reported robustness of weak-form methods.
We also examine a filtered WSINDy approach, and identify that filtering the data leads to unconditional convergence for a wide range of systems and noise distributions. We propose that optimal filtering be a major priority for future algorithms, as the theoretical advantages of filtering are undeniable, yet it is well known that filtering can lead to much worse results if disadvantageous filters are used (see e.g. [9]).
Regarding the other numerical results, we demonstrated that the theoretical findings are in fact exhibited in practice. In particular, filtering the data enables recovery of systems beyond the critical noise level. Crucially, for systems that do not exhibit a critical noise level, we observe unconditional convergence, no matter how high the orders of the spatial derivatives in the true model (as in the hyper-KS case).
We now offer several ideas for extensions and for combining our results with existing work.
* We have examined only models whose terms can be integrated by parts to put all derivatives onto the test function. For terms that do not admit this, it is possible to use local-polynomial differentiation as in [12] and then apply the weak form to obtain convergence. We conjecture that local polynomial filtering may also provide advantages over the simple moving average filter employed here for general data filtering, hence a hybrid weak-form / local-polynomial discretization appears advantageous both theoretically and in practice.
* In this work we have restricted our analyses to cases where the true model is contained in the model library. A recent work [27] has proven the convergence of WSINDy surrogate models using terms that are not restricted to the true support set. A fruitful future direction would be to merge those results with our current findings, extending the results of [27] to the case of discrete and noisy data.
* While results are presented here for sub-Gaussian noise distributions, extensions to heavier-tailed noise distributions are of course possible. For instance, the results above carry over immediately to sub-exponential noise albeit with slower concentration rates.
* We have analyzed the MSTLS algorithm from [18] due to its practical performance and speed. We also conjecture that thresholding-based sparse regression routines may offer an advantage in the case of highly correlated linear systems. However, alternative sparse optimization techniques as employed in [8] and [5] may offer additional advantages with regard to the biases introduced by nonlinear functions in $\CalF$. On the other hand, suitably rescaling the data and coordinates as introduced in [18] can increase the critical noise threshold $\sigma_c$ (e.g. by increasing the lower bound in equation (<ref>)). Altogether, we believe that our results may inform future developments in sparse regression algorithms for equation learning.
§ ACKNOWLEDGMENTS
This research was supported in part by the NSF Mathematical Biology MODULUS grant 2054085, in part by the NSF/NIH Joint DMS/NIGMS Mathematical Biology Initiative grant R01GM126559, and in part by the NSF Computing and Communications Foundations grant 1815983. This work also utilized resources from the University of Colorado Boulder Research Computing Group, which is supported by the National Science Foundation (awards ACI-1532235 and ACI-1532236), the University of Colorado Boulder, and Colorado State University. The authors would also like to thank Prof. Vanja Dukić (University of Colorado at Boulder, Department of Applied Mathematics) for insightful discussions about the statistical aspects of this work and in particular the results concerning the bias.
§ NOTATION
In Table <ref> we include relevant notations used throughout along with text references.
Symbol Domain Meaning First Mention
$\Omega$ $\Rbb^d$ spatial domain of true solution Sec. <ref>
$(0,T)$ $\Rbb$ temporal domain of true solution Sec. <ref>
$\CalH^{k,p}(D)$ — spaces considered for true solutions eq. (<ref>)
$u$ $\CalH^{k,\infty}(\Omega\times (0,T))$ true solution Sec. <ref>
$\CalF:=(f_j)_{j\in [J]}$ $C(\Rbb^n,\Rbb)$ trial functions Sec. <ref>
$F^{\pmb{\omega}} = \{\exp(i\omega^Tu)\}_{\omega \in \pmb{\omega}}$ $C(\Rbb^n,\Rbb)$ finite trigonometric basis Sec. <ref>
$P^{(q)}$ $C(\Rbb^n,\Rbb)$ polynomials of degree at most $q$ over $\Rbb^n$ Sec. <ref>
$\pmb{\alpha}:=(\alpha^s)_{s\in [S]}$ $\Nbb^{d+1}$ trial differential operators as multi-indices Sec. <ref>
$S$ $\Nbb$ number of differential operators in library Sec. <ref>
$J$ $\Nbb$ number of trial functions in library Sec. <ref>
$\mathfrak{J}=SJ$ $\Nbb$ total number of library terms Sec. <ref>
$\wstar$ $\Rbb^{\mathfrak{J}\times n}$ true model coefficients Sec. <ref>
$\psi$ $C^{\nrm{\pmb{\alpha}}_\infty}_0(\Omega\times (0,T))$ reference test function Sec. <ref>
$\CalQ := \{(\xbf_k,t_k)\}_{k\in[K]}$ $\Omega\times (0,T)$ Query points used to evaluate convolutions with $\psi$ Sec. <ref>
STLS — sequential thresholding least squares equation (<ref>)
$\CalL$ $BV(0,\infty)$ MSTLS loss function eq. (<ref>)
$(\Delta x^{(m)},\Delta t^{(m)})$ $\Rbb^2_+$ spatiotemporal resolution at discretization level $m$ Sec. <ref>
$(\Xbf^{(m)},\tbf^{(m)})$ $\Rbb^{n_1^{(m)}\times \cdots \times n_d^{(m)}}\times \Rbb^{n_{d+1}^{(m)}}$ spatiotemporal grid at discretization level $m$ Sec. <ref>
$m$ $\Nbb$ number of grid points in $\supp{\psi} \cap(\Xbf^{(m)},\tbf^{(m)})$ Sec. <ref>
$\what^{(m)}$ $\Rbb^{\mathfrak{J}\times n}$ learned weight vector at discretization level $m$ Sec. <ref>
$\rho$ $\CalP(\Rbb)$ noise distribution Sec. <ref>
$\Ubf^{(m)}$ $(\Rbb^{n_1^{(m)}\times \cdots \times n_{d+1}^{(m)}})^n$ noisy data evaluated at $(\Xbf^{(m)},\tbf^{(m)})$ Sec. <ref>
$\ep$ - i.i.d. noise with distribution $\rho$ Sec. <ref>
$(\Gbf^{(m)},\bbf^{(m)})$ $\Rbb^{K\times\mathfrak{J}}\times \Rbb^{K\times n}$ WSINDy linear system at discretization level $m$ Sec. <ref>
$(\overline{\Gbf},\overline{\bbf})$ $\Rbb^{K\times\mathfrak{J}}\times \Rbb^{K\times n}$ WSINDy linear system in the limit $m\to \infty$ Sec. <ref>
$(\Gbf^\star,\bbf^\star)$ $\Rbb^{K\times\mathfrak{J}}\times \Rbb^{K\times n}$ $(\overline{\Gbf},\overline{\bbf})$ in the case of noise-free data Sec. <ref>
$\wbf^{(m),\lambda}$ $\Rbb^{\mathfrak{J}\times n}$ output of STLS$(\Gbf^{(m)},\bbf^{(m)},\lambda)$ Sec. <ref>
$\what^{(m),\lambda}$ $\Rbb^{\mathfrak{J}\times n}$ output of MSTLS$(\Gbf^{(m)},\bbf^{(m)})$ Sec. <ref>
$\overline{\wbf}^\lambda$ $\Rbb^{\mathfrak{J}\times n}$ output of STLS$(\overline{\Gbf},\overline{\bbf},\lambda)$ Sec. <ref>
$\widehat{\overline{\wbf}}^\lambda$ $\Rbb^{\mathfrak{J}\times n}$ output of MSTLS$(\overline{\Gbf},\overline{\bbf})$ Sec. <ref>
Notations used throughout.
§ CONVERGENCE OF THE TRAPEZOIDAL RULE FOR $\CalH^{k,\infty}$
Let $U\subset \Rbb^{n}$ be an open subset and $k>n/2$. Define $r = \left\lceil k-\frac{n}{2}\right\rceil-1$ and $\alpha = k-\frac{n}{2}-r$. By the Sobolev embedding theorem, we have
\[H^k(U)\subset C^{r, \gamma}(U)\]
where $\gamma = \alpha$ if $\alpha\in (0,1)$, and if $\alpha=1$ the embedding holds for any $\gamma \in (0,1)$. The Hölder space $C^{r,\gamma}(U)$ is defined
\[C^{r,\gamma}(U) = \left\{f\in C(U)\ :\ L:=\max_{|\beta|\leq r}\sup_{x,y\in U}\frac{|\partial^\beta f(x)-\partial^{\beta}f(y)|}{|x-y|^\gamma}<\infty\right\}.\]
The following lemma verifies that the trapezoidal rule converges for functions $\partial^{\alpha^s}\psi(\cdot) f(u(\cdot))$ appearing in the WSINDy linear system, so long as $u \in \CalH^{k,\infty}(\Omega\times(0,T))$ and $\psi$, $f$ are smooth. For smoother functions the convergence rate is greater.
Let $g\in \CalH^{k,\infty}(D)$ have compact support where $D\subset \Rbb^{n}$ is open and bounded. Then the multidimensional trapezoidal rule approximation of $\int_D g(x)dx$ over a grid $\Xbf\subset \overline{D}$ with uniform mesh width $h$ takes the form
\[I_h(g) = h^n \sum_{x_i\in \Xbf} g(x_i).\]
Then we have the following cases:
* If $k > \frac{n}{2}$ and we also have $g\in H^k(D)$, then for $C$ independent of $h$ it holds that
\[\left\vert I_h(g) - \int_D g(x) dx \right\vert \leq Ch^{k-\frac{n}{2}}.\]
* If $\frac{n}{2} < k \leq \frac{n}{2}+1$, then for $C$ independent of $h$ it holds that
\[\left\vert I_h(g) - \int_D g(x) dx \right\vert \leq Ch^{k-\frac{n}{2}},\]
* If $k > \frac{n}{2}+1$, then for $C$ independent of $h$ it holds that
\[\left\vert I_h(g) - \int_D g(x) dx \right\vert \leq Ch.\]
Case $(i)$ follows from the compact support of $g$ using Fourier analysis. We now examine cases $(ii)$ and $(iii)$. By the definition of $\CalH^{k,\infty}(D)$, there exists a finite partition $D = D_1\cup\cdots \cup D_\ell$ where each subdomain $D_i$ has Lipschitz boundary. Define the interior boundary
\[\CalS_{\text{int}} = \left(\bigcup_{i=1}^\ell \partial D_i\right)\setminus \partial D\]
and the $\varepsilon$-tube of $\CalS_{\text{int}}$
\[\CalS_\varepsilon = \CalS_{\text{int}} +B_n(0,\varepsilon)\]
where $B_n(0,\varepsilon)$ is the ball of radius $\varepsilon$ in $\Rbb^n$ centered at $0$. Due to the regularity of the boundaries, the surface measure of $\CalS_{\text{int}}$ is finite, hence the volume of $\CalS_\varepsilon$ satisfies $\lim_{\varepsilon\to 0}|\CalS_\varepsilon| = 0$. Fix $h>0$ so that $D$ is partitioned into $M = |D|/h^n$ hypercubes and choose $\varepsilon < h/2$ so that $|\CalS_\varepsilon|\leq |\CalS_{\text{int}}|h$. Any given hypercube $K$ that intersects $\CalS_{\text{int}}$ contributes a worst-case error of
\[\left\vert\frac{h^n}{2^n}\sum_{j=1}^{2^n}g_j - \int_{K}g(x)dx \right\vert \leq 2\nrm{g}_\infty h^n,\]
where $g_j=g(x^{(j)})$ for vertices $(x^{(j)})_{j=1}^{2^n}$ of $K$. There are fewer than $M\frac{h|\CalS_{\text{int}}|}{|D|}$ such hypercubes.
For case $(ii)$, using the mean value theorem and the Hölder regularity, we have that integration over hypercubes $K$ with $K\cap \CalS_{\text{int}}=\emptyset$ satisfies
\[\left\vert\frac{h^n}{2^n}\sum_{j=1}^{2^n}g_j - \int_{K}g(x)dx \right\vert = \left\vert\frac{h^n}{2^n}\sum_{j=1}^{2^n}\left(g_j - g(\tilde{x})\right) \right\vert \leq \frac{Lh^n}{2^n}\sum_{j=1}^{2^n} |x^{(j)}-\tilde{x}|^\gamma \leq L\sqrt{n}^{k-\frac{n}{2}}h^{k+\frac{n}{2}}\]
where $g(\tilde{x}) = \frac{1}{|K|}\int_Kg(x)dx$. Here $L$ is the Hölder constant for $g$ in $C^{0,k-\frac{n}{2}}(D)$. This gives us the error
\[E \leq \left(M\frac{h|\CalS_{\text{int}}|}{|D|}\right) 2\nrm{g}_\infty h^n + M L\sqrt{n}^{k-\frac{n}{2}}h^{k+\frac{n}{2}}\]
\[\leq 2\nrm{g}_\infty|\CalS_\text{int}|h + L|D|\sqrt{n}^{k-\frac{n}{2}}h^{k-\frac{n}{2}} \leq C h^{k-\frac{n}{2}}.\]
For case $(iii)$, we have that $g\big\vert_{D_i}\in C^{1,\gamma}$ where $\gamma = \min(1,k-\frac{n}{2}-1)$ for $i=1,\dots,\ell$, hence we can exploit smoothness in the 1st derivative. In the 1D case ($n=1$), using that
\[g(x) = g(0) + \int_0^xg'(y)dy = g(h) -\int_x^hg'(y)dy\]
we have the following:
\begin{align*}
\left\vert\frac{h}{2}(g(0)+g(h))-\int_0^hg(x)dx\right\vert &=\frac{1}{2}\left\vert\int_0^h\left(\int_0^xg'(y)dy-\int_x^hg'(y)dy\right)dx\right\vert\\
\text{(switch order of integration)}\qquad &=\frac{1}{2}\left\vert\int_0^h\int_0^y(g'(x)-g'(y))dxdy\right\vert \\
&\leq \frac{L}{2}\int_0^h\int_0^y(y-x)^{\gamma}dxdy \\
&= \frac{L}{2(\gamma+1)(\gamma+2)}h^{2+\gamma}.
\end{align*}
We can extend this to the trapezoidal rule in $n$ dimensions as follows. Let $K$ be a hypercube satisfying $K\cap \CalS_{\text{int}}=\emptyset$ with vertices $(x^{(j)})_{j=1}^{2^n}$. Then
\[\left\vert\frac{h^n}{2^n}\sum_{j=1}^{2^n}g_j - \int_{K}g(x)dx \right\vert = \frac{1}{2^n}\left\vert \int_{K} \left(\sum_{j=1}^{2^n} \int_0^1 \frac{d}{dt}g(z_j(t))dt \right)dx\right\vert \]
where each $z_j$ is a connected curve satisfying $z_j(0)=x^{(j)}$ and $z_j(1)=x$, using that
\[g(x) = g(x^{(j)}) + \int_0^1 \frac{d}{dt}g(z_j(t))dt, \qquad j=1,\dots,2^n.\]
Since this holds for any absolutely continuous curves $z_j$, we can select curves to cancel analogously to the 1D case. Let each $z_j$ take the path from $x^{(j)}$ to $x$ along the coordinate axes. For example, let $x^{(1)}=(0,\dots,0)$, $x^{(2)}=(h,0,\dots,0)$ and consider the paths
\[z_1: x^{(1)}\to (x_1,0,\dots,0) \to (x_1,x_2,0,\dots,0) \to \cdots \to x.\]
\[z_2: x^{(2)}\to (x_1,0,\dots,0) \to (x_1,x_2,0,\dots,0) \to \cdots \to x.\]
Then combining the first edges of each path in the sum above leads to the term
\[I = \int_K\left(\int_0^{x_1} \partial_{x_1} g(y_1,0,\dots,0)dy_1 - \int_{x_1}^h\partial_{x_1}g(y_1,0,\dots,0)dy_1\right)dx \]
upon which we can use the same trick as before to swap the order of integration, leading to
\[I\leq \frac{L}{(\gamma+1)(\gamma+2)}h^{2+\gamma + (n-1)}.\]
Each of the $2^n$ paths $z_j$ has $n$ edges, each of which gets paired with another edge to yield a similar bound, leading to $n2^n / 2$ such error terms in total, or
\[\left\vert\frac{h^n}{2^n}\sum_{j=1}^{2^n}g_j - \int_{K}g(x)dx \right\vert \leq \frac{n 2^{n-1}}{2^n} \frac{L}{(\gamma+1)(\gamma+2)}h^{2+\gamma + (n-1)} = \frac{n}{2}\frac{L}{(\gamma+1)(\gamma+2)}h^{1+\gamma + n}.\]
This holds for all $K \cap \CalS_{int} = \emptyset$, and the overall error satisfies
\[E \leq \left(M\frac{h|\CalS_{\text{int}}|}{|D|}\right) 2\nrm{g}_\infty h^n + M \frac{n}{2}\frac{L}{(\gamma+1)(\gamma+2)}h^{1+\gamma + n} \]
\[\leq 2\nrm{g}_\infty|\CalS_\text{int}|h + \frac{n}{2}\frac{L|D|}{(\gamma+1)(\gamma+2)}h^{1+\gamma} \leq Ch.\]
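A minimal one-dimensional illustration of the lemma (the hat function and grid choices below are ours): for a compactly supported piecewise-linear $g$, only the cells containing derivative kinks contribute to the trapezoidal error, and the total error sits well within the $\CalO(h)$ guarantee of case $(iii)$.

```python
# Composite trapezoidal rule I_h(g) = h * sum_i g(x_i) in 1-D; endpoint
# weights are irrelevant here since g vanishes at the boundary.
def trap(g, a, b, n):
    h = (b - a) / n
    return h * sum(g(a + i * h) for i in range(n + 1))

g = lambda x: max(0.0, 0.2 - abs(x - 0.37))   # hat with kinks off the grid
exact = 0.2**2                                # triangle area: (1/2)*0.4*0.2

for k in (4, 5, 6, 7):
    h = 2.0**-k
    err = abs(trap(g, 0.0, 1.0, 2**k) - exact)
    # only the three kink cells contribute, each O(h^2) for piecewise-
    # linear g, so the error here even beats the generic C*h bound
    assert err <= h**2
```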
§ MOMENT MATRIX LEMMAS
Define the Gaussian moment matrix
\begin{equation}\label{gaussmoment}
\Abf^{(p)}_{i,j} = \begin{dcases} 1, & i=j\\ {j\choose i}(j-i-1)!!\sigma^{j-i}, & j>i, \ (j-i)\ \text{even} \\ 0, &\text{otherwise.}\end{dcases}
\end{equation}
Then the inverse of $\Abf^{(p)}$ is given by
\[(\Abf^{(p)})^{-1}_{i,j} := (-1)^{\frac{j-i}{2}}\Abf^{(p)}_{i,j}.\]
Define $\Bbf^{(p)}_{i,j} := (-1)^{\frac{j-i}{2}}\Abf^{(p)}_{i,j}$ and consider an entry of the product $\Cbf_{i,j} = \left(\Abf^{(p)}\Bbf^{(p)}\right)_{i,j}$. We first note that $\Cbf_{i,j} = 0$ if either $j<i$ or $j-i$ is odd, the former due to the fact that $\Cbf$ is upper triangular, the latter because both $\Abf^{(p)}$ and $\Bbf^{(p)}$ have a “checkerboard” sparsity pattern with zeros on odd superdiagonals, a pattern which is inherited by $\Cbf$. Next since both $\Abf^{(p)}$ and $\Bbf^{(p)}$ have 1's along the diagonal, $\Cbf$ will also have 1's along the diagonal. For the remaining entries, in other words $\Cbf_{i,j}$ with $j-i$ an even positive integer,
\begin{align}
\Cbf_{i,j} &= \sigma^{j-i}\sum_{\substack{k=i \\k-i \text{ even}}}^j {k \choose k-i}{j \choose j-k}(k-i-1)!!(j-k-1)!!(-1)^\frac{j-k}{2}\\
\label{a11} &=\sigma^{j-i}{j \choose j-i}(j-i)!\sum_{\substack{k=i \\k-i \text{ even}}}^j \frac{(-1)^\frac{j-k}{2}}{(k-i)!!(j-k)!!}\\
&=\sigma^{j-i}{j \choose j-i}(j-i)!\sum_{\ell=0}^{\frac{j-i}{2}} \frac{(-1)^\frac{j-i-2\ell}{2}}{(2\ell)!!(j-i-2\ell)!!}\\
&=\left(\frac{\sigma^{j-i}{j \choose j-i}(j-i)!}{2^\frac{j-i}{2}\left(\frac{j-i}{2}\right)!}(-1)^\frac{j-i}{2}\right)\left(\sum_{\ell=0}^{\frac{j-i}{2}} (-1)^\ell { \frac{j-i}{2} \choose \ell}\right) = 0
\end{align}
where we used the identities
\[{p \choose q}{p-q \choose j} = {p \choose q+j}{q+j \choose j}, \qquad p-q\geq j\geq 0\]
\[\sum_{\ell=0}^{p} (-1)^\ell { p \choose \ell}=0, \qquad p\geq 1.\]
This shows that $\Cbf$ is the identity, so that $\Bbf^{(p)} = (\Abf^{(p)})^{-1}$.
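The closed form can also be checked numerically; the following sketch (illustrative only, not part of the original text) builds $\Abf^{(p)}$ from (<ref>) and verifies that $\Bbf^{(p)}_{i,j}=(-1)^{\frac{j-i}{2}}\Abf^{(p)}_{i,j}$ inverts it.

```python
from math import comb

def dfact(n):                       # double factorial with (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

p, sigma = 8, 0.7
A = [[0.0] * (p + 1) for _ in range(p + 1)]
for i in range(p + 1):
    for j in range(i, p + 1):
        if i == j:
            A[i][j] = 1.0
        elif (j - i) % 2 == 0:
            A[i][j] = comb(j, i) * dfact(j - i - 1) * sigma ** (j - i)

# Claimed inverse: alternate the sign of the even superdiagonals
B = [[(-1) ** ((j - i) // 2) * A[i][j] for j in range(p + 1)]
     for i in range(p + 1)]

# A @ B should be the identity
C = [[sum(A[i][k] * B[k][j] for k in range(p + 1)) for j in range(p + 1)]
     for i in range(p + 1)]
assert all(abs(C[i][j] - (i == j)) < 1e-9
           for i in range(p + 1) for j in range(p + 1))
```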
Let $\Abf^{(p)} = \Ibf + \Lbf^{(p)}$ be the Gaussian moment matrix defined in (<ref>). Then it holds that
\[\max\{\nrm{\Abf^{(p)}}_\infty,\nrm{\Abf^{(p)}}_2\}\leq \nrm{\Abf^{(p)}}_1 \leq \exp\left(\sigma^2{p \choose 2}\right)\]
\[\max\{\nrm{\Lbf^{(p)}}_\infty,\nrm{\Lbf^{(p)}}_2\}\leq \nrm{\Lbf^{(p)}}_1 \leq \sigma^2{p \choose 2}\exp\left(\sigma^2{p \choose 2}\right).\]
Set $\alpha = \sigma^2{p \choose 2}$. First, note that the maximum entry along any superdiagonal of $\Abf^{(p)}$ occurs in the $p$th column, i.e.
\[\max_{j-i = k} \Abf^{(p)}_{ij} = \Abf^{(p)}_{p-k,p}:=y_k \]
where for $k$ even,
\[y_k = {p \choose p-k}(k-1)!!\sigma^{k} = \frac{{p \choose p-k}(k-1)!!}{{p \choose 2}^{\frac{k}{2}}}\alpha^{k/2}\]
and for $k$ odd, we have $y_k = 0$. This implies that the $p$th column achieves the maximum column sum, so that $\nrm{\Abf^{(p)}}_1 = \nrm{\ybf}_1$. This also implies that we can upper-bound any row sum by $\nrm{\ybf}_1$, hence $\nrm{\Abf^{(p)}}_\infty\leq \nrm{\Abf^{(p)}}_1$. Now let $\Tbf(\ybf)$ be the Toeplitz matrix formed by the vector $\ybf$:
\[\Tbf(\ybf) = \begin{bmatrix} y_0 & y_1 & \cdots & y_p \\ y_p & y_0 & \cdots & y_{p-1} \\ \vdots & \ddots & \ddots & \vdots \\ y_1 & y_2 & \cdots & y_0 \end{bmatrix}.\]
Since the entries of $\Abf^{(p)}$ are positive, we have
\[\nrm{\Abf^{(p)}}_2\leq \nrm{\Tbf(\ybf)}_2 = \nrm{\ybf}_1 = \nrm{\Abf^{(p)}}_1,\]
where $\nrm{\Tbf(\ybf)}_2 = \nrm{\ybf}_1$ follows from the fact that the product $\Tbf(\ybf)\xbf$ is the convolution of $\xbf$ with $\ybf_-$, the vector $\ybf$ in reverse order, so by Young's inequality for convolutions we have
\[\nrm{\Tbf(\ybf)}_2 =\max_{\nrm{\xbf}_2=1}\nrm{\xbf*\ybf_-}_2\leq \nrm{\ybf}_1,\]
and $\nrm{\Tbf(\ybf)\xbf}_2 = \nrm{\ybf}_1$ is achieved by the normalized vector $\xbf = (1,\dots,1)^T/\sqrt{p+1}$. Revisiting the entries of $\ybf$, we then have for $k$ even,
\[y_k \leq \frac{2^{\frac{k}{2}}}{(k)!!}\alpha^{k/2} = \frac{\alpha^{k/2}}{\left(\frac{k}{2}\right)!}\]
so that
\[\nrm{\ybf}_1 \leq \sum_{\substack{k=0 \\ k\text{ even}}}^p \frac{\alpha^{k/2}}{\left(\frac{k}{2}\right)!} = \sum_{\ell=0}^{\left\lfloor p/2\right\rfloor}\frac{\alpha^\ell}{\ell!}\leq \exp(\alpha).\]
The inequalities for $\Lbf^{(p)}$ follow similarly, noting that
\[\max\{\nrm{\Lbf^{(p)}}_\infty,\nrm{\Lbf^{(p)}}_2\}\leq \nrm{\Lbf^{(p)}}_1 = \sum_{k=2}^py_k \leq \alpha \sum_{\ell=0}^{\left\lfloor p/2\right\rfloor}\frac{\alpha^\ell}{\ell!}\frac{1}{\ell+1}\leq \alpha\exp(\alpha).\]
For general moment matrices we have the following.
Let $\Abf^{(p)}$ be the moment matrix of order $p$ for probability distribution $\rho$ such that the monomials $P^{(p)} = \{1,x,\dots,x^p\}$ transform under cross-correlation with $\rho$ according to $P^{(p)} \star \rho = P^{(p)}\Abf^{(p)}$. Then the inverse of $\Abf^{(p)}$ is given by $(\Abf^{(p)})^{-1}_{ij} = f(j-i)\Abf^{(p)}_{ij}$ where $f:\Nbb\cup\{0\}\to \Rbb$ obeys the recurrence
\begin{equation}\label{recrel}
f(q) = -\sum_{\ell=0}^{q-1}{q\choose \ell} \left(\frac{M_{q-\ell}M_\ell}{M_q}\right) f(\ell), \qquad f(0) = 1
\end{equation}
for moments $M_k := \int_\Rbb x^k d\rho(x)$. Moreover, if $\rho$ is symmetric and sub-Gaussian,
then with $\tilde{\Lbf}^{(p)} := (\Abf^{(p)})^{-1} - \Ibf^{(p)}$ it holds that
\[\|\tilde{\Lbf}^{(p)}\| =\CalO(\|\rho\|^2_\text{SG}).\]
$\Abf^{(p)}$ is defined by
\begin{equation}
\Abf^{(p)}_{ij} = \begin{dcases} M_{j-i}{j \choose i}, &j\geq i \\0, & \text{otherwise}.\end{dcases}
\end{equation}
Defining $(\Bbf^{(p)})_{ij} := f(j-i)\Abf^{(p)}_{ij}$, we see that solving for $f$ such that
\begin{align*}
\delta_{ij} &= (\Abf^{(p)}\Bbf^{(p)})_{ij} \\
&=\sum_{k=i}^j {k \choose i}{j \choose k}M_{k-i}M_{j-k}f(j-k)\\
&= {j \choose i}\sum_{\ell=0}^q{q\choose \ell}M_{q-\ell}M_\ell f(\ell)
\end{align*}
where $q := j-i$ and $\ell := j-k$, leads directly to (<ref>), noting that only the case $j\geq i$ needs to be considered.
Now consider $\rho$ to be sub-Gaussian. It then holds that
\[|\tilde{\Lbf}^{(p)}_{ij}| = \begin{dcases} |f(j-i)|M_{j-i}{j\choose i}&, j-i \text{ even}, j\geq i+2. \\ 0&,\text{otherwise}.\end{dcases}\]
If we now assume that $\|\rho\|_\text{SG}\leq B$ for any $B>0$, it holds through sub-Gaussianity that
\[M_q\leq \frac{q!!}{2^{{q}/{2}-1}}\|\rho\|_\text{SG}^q\leq C'\|\rho\|_\text{SG}^2, \qquad q\geq 2\]
where $C'$ depends only on $B$ and $q$. Using the recurrence (<ref>) and Jensen's inequality we can bound $f$ independently of $\rho$,
\[|f(q)|\leq \sum_{\ell=0}^{q-1}{q\choose \ell}|f(\ell)|,\]
hence we see that
\[|\tilde{\Lbf}^{(p)}_{ij}| \leq C\|\rho\|_\text{SG}^2\]
where $C$ depends only on $B$ and $p$. In this way we have that
\[\|\tilde{\Lbf}^{(p)}\| = \CalO(\|\rho\|_\text{SG}^2)\]
for any norm $\|\cdot\|$.
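As a concrete check (illustrative only), for Gaussian $\rho$ the recurrence (<ref>) must reproduce the closed-form inverse from the previous lemma, namely $f(q)=(-1)^{q/2}$ for even $q$; odd $q$ are skipped since the corresponding entries of $\Abf^{(p)}$ vanish for symmetric $\rho$.

```python
from math import comb

def dfact(n):                        # double factorial with (-1)!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

sigma = 1.3
M = lambda k: 0.0 if k % 2 else dfact(k - 1) * sigma**k  # Gaussian moments

# f(q) = -sum_{l<q} C(q,l) (M_{q-l} M_l / M_q) f(l), f(0) = 1,
# restricted to even q (the odd superdiagonals of A^(p) are zero).
f = {0: 1.0}
for q in range(2, 12, 2):
    f[q] = -sum(comb(q, l) * M(q - l) * M(l) / M(q) * f[l]
                for l in range(0, q, 2))

# Matches the Gaussian closed form f(q) = (-1)^(q/2)
assert all(abs(f[q] - (-1) ** (q // 2)) < 1e-9 for q in range(0, 12, 2))
```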
§ CONCENTRATION RESULTS
[Lemma 3.1]
Let $f:\Rbb\to \Rbb$ satisfy
\begin{equation}
|f(x)|\leq C_f\left(1+|x|^p\right),
\end{equation}
for some $p\geq 1$ and $C_f>0$. For bounded sequences of real numbers $(\alpha_i)_{i\in \Nbb}$ and $(u_i)_{i\in \Nbb}$, and for $\ep_i\sim \rho$ i.i.d. $i\in \Nbb$, define the random variables $Y_i = \alpha_i f(u_i +\ep_i)$. Then for any $\kappa\geq \nrm{\rho}_{\text{SG}}$ and $t>0$, it holds that the right tails of $Y_i$ are captured by a common rate function $I(t)$,
\[\Pbb\left(Y_i > t\right)\leq \exp (-I(t)),\]
\begin{equation}\label{eq:rate_fcn_app}
I(t):=\ind{(t^*,\infty)}(t)\left[\frac{1}{\kappa^2}\left(\left(\frac{t}{C_f\alpha^*}-1\right)^{1/p}-u^*\right)^2-\log(2)\right] = \frac{t^{2/p}}{\kappa^2(C_f\alpha^*)^{2/p}}I_0(t),
\end{equation}
for $\alpha^* = \sup_i|\alpha_i|$, $u^* = \sup_i|u_i|$, and $t^*:= C_f\alpha^*\left(1+\left(u^*+ \kappa\sqrt{\log(2)}\right)^p\right)$. Moreover, $I_0(t)$ is monotonically increasing from $0$ to $1$ over $t\in(t^*,\infty)$, and is defined in the proof.
Using (<ref>) and the symmetry of $\rho$, we get for each $i$ and every $t>t^*$,
\[\Pbb\left(Y_i >t\right)\leq \Pbb\left(C_f\alpha^*\left(1+|u_i+\ep|^p\right)> t \right) \leq 2\Pbb\left(\ep>\left(\frac{t}{C_f\alpha^*}-1\right)^{1/p}-u^*\right)\]
from which sub-Gaussianity implies
\[\Pbb\left(Y_i >t\right) \leq \exp(-I(t))\]
for $I(t)$ defined as above. This leads to
\[I_0(t) = \ind{(t^*,\infty)}(t)\left[\left(\left(1-\frac{C_f\alpha^*}{t}\right)^{1/p}-u^*\left(\frac{C_f\alpha^*}{t}\right)^{1/p}\right)^2 -\frac{\kappa^2(C_f\alpha^*)^{2/p}\log(2)}{t^{2/p}}\right]\]
where the quantity in brackets is positive over $t>t^*$ due to the definition of $t^*$, and approaches 1 as $t\to \infty$. It is monotonically increasing as it contains only sums and compositions of functions that preserve monotonicity over $t>t^*$ (namely, $t\to A-Bt^{-\theta}$ is monotonically increasing for all $A,B,\theta \geq 0$, and sums of monotonically increasing functions are monotonically increasing).
[Lemma 3.2]
Let $Y_i$ be defined under the same conditions as Lemma <ref> and choose $\beta\in(0,1)$. Then there exists $\overline{v}(\beta) < \infty$ such that the sum $S_m = \sum_{i=1}^{m} Y_i$ satisfies
\begin{equation}\label{eq:convergence_of_sums_app}
\Pbb\left(|S_m-\Ebb S_m|> mt\right) \leq \begin{dcases} 2 \exp\left(-\frac{\beta}{2} I(mt)\right) + 2m\exp\left(-I(mt)\right), & t \geq t_m(\beta)\\
2 \exp\left(-\frac{mt^2}{2\overline{v}(\beta)}\right) + 2m\exp\left(-\frac{mt_m(\beta)^2}{\overline{v}(\beta)}\right), & 0\leq t < t_m(\beta), \end{dcases}
\end{equation}
where $t_m (\beta):= \sup\{t\geq 0\ :\ t\leq \beta\overline{v}(\beta)\frac{I(mt)}{mt}\}$.
This follows from a simple modification of Theorem 1 in [4] to the case of independent but not identically distributed random variables. As in [4] we define
\[Y_i^L= Y_i\ind{Y_i\leq L}\]
and use that (<cit.>)
\[\Ebb\left[\exp\left(\lambda(Y_i^L-\Ebb[Y_i])\right)\right]\leq \frac{k_i(L,\lambda)}{2}\lambda^2\]
for all $\lambda,L>0$, where
\[k_i(L,\lambda) := \Ebb\left[\left(Y_i^L-\Ebb[Y_i]\right)^2\ind{Y_i^L\leq \Ebb[Y_i]} + \left(Y_i^L-\Ebb[Y_i]\right)^2\exp\left(\lambda\left(Y_i^L-\Ebb[Y_i]\right)\right)\ind{Y_i^L> \Ebb[Y_i]}\right].\]
We then note that $\sup_{i\in \Nbb} k_i(L,\lambda)=: \overline{k}(L,\lambda)$ is bounded, for instance by
\[\overline{k}(L,\lambda)\leq \sup_i\Vbb[Y_i]+(L+\mu^*)^2\exp(\lambda(L+\mu^*)),\]
where $\mu^*:=\sup_i|\Ebb[Y_i]|$. We then modify the proof of <cit.> as follows,
\begin{align*}
\Pbb\left(S_m-\Ebb[S_m]>mt\right)&\leq \Pbb\left(\sum_{i=1}^m Y_i^L-\Ebb[S_m] > mt\right)+\Pbb\left(\exists i\ \text{ s.t. }\ Y_i> L\right)\\
&\leq \exp(-\lambda m t)\prod_{i=1}^m\Ebb\left[\exp(\lambda(Y_i^L-\Ebb[Y_i]))\right] + m\Pbb\left(Y_i>L\right)\\
&\leq \exp\left(m\left(-\lambda t + \frac{\overline{k}(L,\lambda)}{2}\lambda^2\right)\right) + m\exp\left(-I(L)\right).
\end{align*}
Choosing $\lambda=\beta\frac{I(mt)}{mt}$, $L=mt$ for $t>t_m(\beta)$ and $\lambda = \frac{t}{\overline{v}(\beta)}$, $L=mt_m(\beta)$ for $t\leq t_m(\beta)$ gives (<ref>), where $\overline{v}(\beta) = \sup_{L>0}\overline{k}(L,\beta\frac{I(L)}{L})$. We also used that $1-\frac{\beta\overline{v}(\beta)I(mt)}{2mt^2}\in[\frac{1}{2},1]$ for $t\geq t_m(\beta)$.
Finally, the bound for $\overline{v}(\beta)$ can be obtained using <cit.>, which asserts that
\[k_i\left(L,\beta\frac{I(L)}{L}\right) \leq \sup_i\Vbb[Y_i]+\exp\left(-\beta \Ebb[Y_i]\frac{I(L)}{L}\right)\int_0^{L-\Ebb[Y_i]}\exp(-(1-\beta)I(t+\Ebb[Y_i]))(2t+\beta tI(t))dt.\]
Letting $C_1 = \sup_i\Vbb[Y_i]$, $C_2 = \sup_{i\in \Nbb,L>0}\exp\left(-\beta \Ebb[Y_i]\frac{I(L)}{L}\right)$, and introducing a parameter $\gamma>1$, we can then use the definition of $I(t)$ to get
\begin{align*}
k_i\left(L,\beta\frac{I(L)}{L}\right) &\leq C_1+C_2\int_0^\infty\exp(-(1-\beta)I(t-\mu^*))(2t+\beta tI(t))dt\\
&\leq C_1+C_2\int_0^{\gamma t^*+\mu^*}(2t+\beta tI(t))dt+C_2\int_{\gamma t^*+\mu^*}^\infty \exp(-(1-\beta)I(t-\mu^*))(2t+\beta tI(t))dt\\
&\leq C'+C_2\int_{\gamma t^*}^\infty \exp\left(-\frac{(1-\beta)I_0(\gamma t^*)}{\kappa^2(C_f\alpha^*)^{2/p}}s^{2/p}\right)r(s)ds \\
&\leq C'+C_2\int_0^\infty \exp\left(-A s^{2/p}\right)r(s)ds,
\end{align*}
where
\[r(s) = 2(s+\mu^*)+\frac{1}{\kappa^2(C_f\alpha^*)^{2/p}}(s+\mu^*)^{2/p+1} \leq r_0+r_1 s +r_2 s^{2/p+1}.\]
We see from this that $\overline{v}(\beta)=\sup_{i\in \Nbb,L>0} k_i(L,\beta\frac{I(L)}{L})$ is finite, since $\gamma>1 $ and $\beta\in(0,1)$ imply $A>0$, and $s\to \exp(-As^{2/p})$ has finite moments of all order over $s\in[0,\infty)$.
Suppose each function in library $\CalF$ satisfies the growth bound (<ref>) for some $p:=p_{\max}$. Then it holds that for every $t>\overline{t}(m)$, where $\overline{t}(m)\to 0$ as $m\to \infty$, we have the concentration rates
\begin{equation}\label{eq:convergence_of_G_app}
\Pbb\left(\nrm{(\Gbf^{(m)},\bbf^{(m)}) - (\overline{\Gbf},\overline{\bbf})}_{\overrightarrow{\infty}}> t \right) \leq \begin{dcases} K\mathfrak{J}\exp\left(-\frac{c}{2} (mt)^{2/p_{\max}}\right) + K\mathfrak{J}m\exp\left(-c (mt)^{2/p_{\max}}\right), & t \geq t_m\\
K\mathfrak{J} \exp\left(-\frac{mt^2}{2\overline{v}}\right) + K\mathfrak{J}m\exp\left(-\frac{mt_m^2}{\overline{v}}\right), & 0\leq t < t_m, \end{dcases}
\end{equation}
where the rate factor $c$ depends on $\nrm{u}_\infty$, $|\Omega\times (0,T)|$, $\pmb{\alpha}$, $\psi$, $\CalF$, and $\nrm{\rho}_{\text{SG}}^2$.
For convenience we have chosen $\beta=0.5$ from Lemma <ref>, so that $t_m =t_m(0.5)$, $\overline{v}=\overline{v}(0.5)$.
It suffices to consider only the concentration of $\Gbf^{(m)}$ to $\overline{\Gbf}$ since entries of $\bbf^{(m)}$ are of the same type[In addition, when $\bbf^{(m)}$ is linear in the data (e.g. when the left-hand side of (<ref>) is a linear differential operator), $\bbf^{(m)}$ concentrates at a Gaussian rate to $\overline{\bbf}$, and $\overline{\bbf}=\bbf^\star$]. First notice that
\[\nrm{\Gbf^{(m)}-\overline{\Gbf}}_{\overrightarrow{\infty}}\leq \underbrace{\nrm{\Gbf^{(m)}-\Ebb\Gbf^{(m)}}_{\overrightarrow{\infty}}}_{\text{deviation from mean}}+\underbrace{\nrm{\Ebb\Gbf^{(m)}-\overline{\Gbf}}_{\overrightarrow{\infty}}}_{\text{integration error}}\]
and similarly for $\bbf^{(m)}$. The entries of $\Gbf^{(m)}$ satisfy
\[\Gbf^{(m)}_{k,(s-1)J+j} = \sum_{\substack{(\xbf_\ell,t_\ell)\in \\ \supp{\psi(\xbf_k-\cdot,t_k-\cdot)}\cap (\Xbf^{(m)},\tbf^{(m)})}} \partial^{\alpha^s}\psi(\xbf_k-\xbf_\ell,t_k-t_\ell)f_j(\Ubf^{(m)}(\xbf_\ell,t_\ell))(\Delta x^{(m)})^d\Delta t^{(m)}\]
\[= \frac{1}{m}\sum_{\ell=1}^{m} \alpha_{s,k,\ell} f_j(u_\ell+\ep_\ell)\]
where
\[\alpha_{s,k,\ell} = |\supp{\psi}|\partial^{\alpha^s}\psi(\xbf_k-\xbf_\ell,t_k-t_\ell), \quad u_\ell = u(\xbf_\ell,t_\ell),\]
and $m$ is the number of points on which $\psi$ is supported at the resolution $(\Delta x^{(m)},\Delta t^{(m)})$. By the smoothness and compact support of $\psi$, together with the regularity $u\in \CalH^{k,\infty}(\Omega\times (0,T))$, we have that $\alpha^*:=\sup_{s,k,\ell} |\alpha_{s,k,\ell}|<\infty$ and $u^*:=\sup_\ell |u_\ell|<\infty$. Hence, each entry $\Gbf^{(m)}_{k,(s-1)J+j}$ concentrates to $\Ebb \Gbf^{(m)}_{k,(s-1)J+j}$ according to Lemma <ref>, and in particular its concentration can be modelled by a common rate function $I(t)$ given by (<ref>). To make use of the desired $\CalO(\exp(-(mt)^{2/p_{\max}}))$ concentration, we must take $mt>t^*$. Further, $t$ must be larger than the integration error, which is at most $\CalO(m^{-\min(1,k-(d+1)/2)})$, and hence depends on the smoothness of $u$ relative to the spatiotemporal dimension $d+1$. Denote the integration error by $e_\text{int}(m)$. Then for some $\gamma>1$, taking
\begin{equation}
\overline{t}(m) = \max\left(\frac{\gamma t^*}{m},e_\text{int}(m)\right)%\leq \overline{C}m^{-b}
\end{equation}
we arrive at the desired concentration rates for $t>\overline{t}(m)$, with an at-worst rate factor
\[c = \frac{I_0(\gamma t^*)}{4\nrm{\rho}_{\text{SG}}^2(C_f \alpha^*)^{2/p_{\max}}}.\]
Finally, each $\Gbf^{(m)}$ has $K\mathfrak{J}$ entries, hence a union bound provides the desired concentration result.
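To make the structure of the entries $\Gbf^{(m)}_{k,(s-1)J+j}$ concrete, the following sketch (a hypothetical 1D example, not from the paper) assembles one weak-form entry as a quadrature sum $\sum_\ell \partial\psi(x_\ell)f_j(u(x_\ell))\Delta x$, using the bump $\psi(x)=(1-x^2)^2$, $f_j(u)=u$, and $u(x)=x$, for which integration by parts gives the exact value $\int\psi' (x)\,x\,dx=-\int\psi = -16/15$:

```python
def psi(x):
    """Smooth test function compactly supported on [-1, 1]."""
    return (1 - x * x) ** 2 if abs(x) < 1 else 0.0

def dpsi(x):
    """Its derivative, playing the role of d^alpha psi."""
    return -4 * x * (1 - x * x) if abs(x) < 1 else 0.0

def gram_entry(f, u, m):
    """Discrete weak-form entry: sum_l dpsi(x_l) f(u(x_l)) dx (midpoint rule)."""
    dx = 2.0 / m
    xs = [-1.0 + (l + 0.5) * dx for l in range(m)]
    return sum(dpsi(x) * f(u(x)) * dx for x in xs)

# with f_j(u) = u and u(x) = x the exact integral is -16/15
entry = gram_entry(lambda v: v, lambda x: x, 4000)
```

The quadrature error here is the deterministic integration error $e_\text{int}(m)$ discussed above; noise in $u$ would enter through the arguments of $f_j$.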
Let the above assumption hold with $m^{(\nu)}\gtrsim m^{\alpha}$ for some $\alpha\in(0,1)$ and $\CalF$ locally Lipschitz with polynomial growth, i.e. $\forall f\in \CalF$ and $(x,y)\in \Rbb$,
\[|f(x)-f(y)|\leq C_\CalF|x-y|\left(1+|x-y|^{p_{\max}-1}\right)\]
for some $C_\CalF\geq 0$ and $p_{\max}\geq 1$, and recall that $u\in \CalH^{k,\infty}(D)$. Then for $t> \mathfrak{t}(m)$, where $\mathfrak{t}(m) = \CalO(m^{-\alpha \left(\frac{k}{d+1}-\frac{1}{2}\right)})$, and sufficiently large $m$, we have
\[\Pbb \left(\|(\tilde{\Gbf}^{(m)},\tilde{\bbf}^{(m)}) - (\Gbf^\star,\bbf^\star)\|_{\overrightarrow{\infty}}>t\right) \leq 3K\mathfrak{J}\exp\left(-\frac{c}{2^{1+2/p_{\max}}}[m(t-\mathfrak{t}(m))]^{2/{p_{\max}}}\right),\]
where $c$ is the same rate from Theorem <ref>.
As in Theorem <ref>, it suffices to consider concentration of $\tilde{\Gbf}^{(m)}$. Let $\pmb{\nu}^{(m)}$ be a simple moving average filter with $n^\nu_q$ points in dimension $q$ and filter width $m^{(\nu)} := \prod_{q=1}^{d+1}n^\nu_q$ satisfying $m^{(\nu)} = m^\alpha$ for some $\alpha \in (0,1)$. Denote the filtered data, used to build $\tilde{\Gbf}^{(m)}$, by $\tilde{\Ubf}^{(m)} = \pmb{\nu}^{(m)}* \Ubf^{(m)} = \tilde{\Ubf}^{(m),\star}+\tilde{\ep}^{(m)}$, where $\tilde{\Ubf}^{(m),\star} = \pmb{\nu}^{(m)}*\Ubf^{(m),\star}$ is the filtered clean data and the filtered noise $\tilde{\ep}^{(m)}$ is mean zero and correlated according to
\[\Ebb\tilde{\ep}^{(m)}_i\tilde{\ep}^{(m)}_j = \frac{\sigma^2}{m^\alpha}\Sigma(i,j), \qquad \Sigma(i,j) = \prod_{q=1}^{d+1}\max\left(1-\frac{|i_q-j_q|}{n^\nu_q},\ 0\right)\in[0,1],\]
where we've treated the indices $i$ and $j$ as vectors in $\Rbb^{d+1}$. We will couple $\tilde{\Ubf}^{(m)}$ to a filtered, uncorrelated dataset $\widehat{\Ubf}^{(m)}$ which we inject as an intermediary, defined as $\widehat{\Ubf}^{(m)} = \tilde{\Ubf}^{(m),\star}+\widehat{\ep}^{(m)}$ where $\widehat{\ep}$ and $\tilde{\ep}$ are all identically distributed according to $\rho^{(m)}$ with variance $\frac{\sigma^2}{m^\alpha}$, yet $\widehat{\ep}$ satisfy
\[\Ebb\widehat{\ep}^{(m)}_i\widehat{\ep}^{(m)}_j = \delta_{ij}\frac{\sigma^2}{m^\alpha}\]
\[\Ebb\widehat{\ep}^{(m)}_i\tilde{\ep}^{(m)}_j = 0 \quad \forall i,j\in \Rbb^{d+1}.\]
In other words, entries $\widehat{\ep}_i$ are independent, and are independent from all $\tilde{\ep}_j$.
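The triangular covariance of the filtered noise can be sanity-checked in one dimension by counting the noise samples shared by two shifted filter windows; this small sketch (illustrative, not from the paper) verifies $\Ebb\,\tilde{\ep}_i\tilde{\ep}_j = \frac{\sigma^2}{n}\max(1-|i-j|/n,\,0)$ for a width-$n$ moving average:

```python
def ma_cov(i, j, n, sigma2=1.0):
    """Exact covariance of a width-n moving average of i.i.d. noise:
    the two windows share max(n - |i-j|, 0) samples, each weighted 1/n."""
    overlap = max(n - abs(i - j), 0)
    return sigma2 * overlap / n ** 2

def ma_cov_formula(i, j, n, sigma2=1.0):
    """Closed form sigma^2/n * max(1 - |i-j|/n, 0) from the proof."""
    return sigma2 / n * max(1.0 - abs(i - j) / n, 0.0)
```

For example, with $n=5$ and lag $2$ both expressions give $3/25$, and the covariance vanishes once the lag exceeds the filter width.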
We will split the error into the following terms:
\[\|\tilde{\Gbf}^{(m)} - \Gbf^\star\| \leq \|\tilde{\Gbf}^{(m)} - \widehat{\Gbf}^{(m)}\| + \|\widehat{\Gbf}^{(m)} - \overline{\Gbf}^{(m)}\| +\|\overline{\Gbf}^{(m)} - \overline{\Gbf}^{(m),\star}\|+\|\overline{\Gbf}^{(m),\star} - \Gbf^{(m),\star}\|+\|\Gbf^{(m),\star} - \Gbf^\star\|\]
\[ =: I_1+I_2+I_3+I_4+I_5.\]
Using subscript $m$ to denote trapezoidal rule approximation of inner products on the grid $(\Xbf^{(m)}, \tbf^{(m)})$, the different intermediate matrices are defined by
* $\tilde{\Gbf}^{(m)}_{ksj} = \lan \partial^{\alpha^s}\psi_k, f_j(\tilde{\Ubf}^{(m),\star}+\tilde{\ep}^{(m)})\ran_m$
* $\widehat{\Gbf}^{(m)}_{ksj} = \lan \partial^{\alpha^s}\psi_k, f_j(\tilde{\Ubf}^{(m),\star}+\widehat{\ep}^{(m)})\ran_m$
* $\overline{\Gbf}^{(m)}_{ksj} = \lan \partial^{\alpha^s}\psi_k, \rho^{(m)}\star f_j(\tilde{\Ubf}^{(m),\star})\ran_m$
* $\overline{\Gbf}^{(m),\star}_{ksj} = \lan \partial^{\alpha^s}\psi_k, f_j(\tilde{\Ubf}^{(m),\star})\ran_m$
* $\Gbf^{(m),\star}_{ksj} = \lan \partial^{\alpha^s}\psi_k, f_j(\Ubf^{(m),\star})\ran_m$
* $\Gbf^\star_{ksj} = \lan \partial^{\alpha^s}\psi_k, f_j(u)\ran$.
$I_4$: First, we note that $I_4=\|\overline{\Gbf}^{(m),\star} - \Gbf^{(m),\star}\|$ is determined by how fast the locally averaged clean data $\tilde{\Ubf}^{(m),\star}$ converges to the clean data $\Ubf^{(m),\star}$. The locally Lipschitz assumption on the library and the smoothness of $u\in \CalH^{k,\infty}$ with $k>\frac{d+1}{2}$ implies that there exists $\tilde{C}$ depending on $C_\CalF$, $\psi$, and $\nrm{u}_\infty$ such that $m$ large enough implies
\[I_4 \leq \tilde{C} \|\tilde{\Ubf}^{(m),\star}-\Ubf^{(m),\star}\| \leq \tilde{C} m^{-\alpha \min\{\frac{1}{d+1},\frac{k}{d+1}-\frac{1}{2}\}}.\]
Further, this is a worst-case bound reached only when $u$ is both discontinuous and not continuously differentiable in the subregions of $D=\Omega\times(0,T)$ where $u$ is continuous. If $u\in C^{1,\gamma}(D)$ then the rate changes to $\CalO(m^{-\frac{\alpha}{d+1}(1+\gamma)})$.
$I_5$: Asymptotic behavior of $I_5=\|\Gbf^{(m),\star} - \Gbf^\star\|$ is similar to $I_4$ as it is determined by how quickly the trapezoidal rule converges on $\partial^{\alpha^s}\psi(\cdot) f_j(u(\cdot))$. The worst-case rate is given by
\[I_5\leq \tilde{C} m^{-\min\{\frac{1}{d+1},\frac{k}{d+1}-\frac{1}{2}\}},\]
which similarly improves with the smoothness of $u$.
$I_3$: For $I_3=\|\widehat{\Gbf}^{(m)} - \overline{\Gbf}^{(m)}\|$, we can consider the pointwise rate of convergence
\[\rho^{(m)}\star f_j(x)\to f_j(x),\]
which for $f_j$ locally Lipschitz and with polynomial growth, we get
\[|\rho^{(m)}\star f_j(x)-f_j(x)|\leq C_\CalF\left(\sqrt{M_2(\rho^{(m)})} + M_{p_{\max}}(\rho^{(m)})\right)\leq C_\CalF C_\sigma\frac{\sigma}{m^{\alpha/2}},\]
where we used Jensen's inequality and sub-Gaussianity of $\rho^{(m)}$ to relate higher moments to the variance, and $C_\sigma$ depends on $p_{\max}$ and $\rho$ but not $m$. It should be noted that this rate improves to $\CalO(m^{-\alpha})$ for polynomial and trigonometric libraries $\CalF$.
Hence for $m$ large enough, it suffices to consider
\[\Pbb\left(I_1+I_2 > t'\right) \leq \Pbb\left(I_1 > t'/2\right)+\Pbb\left(I_2 > t'/2\right) := E_1+E_2,\]
where $t' = t-\sum_{i=3}^5I_i$, and for $k>1+\frac{d+1}{2}$, $t' = t - \tilde{C}m^{-\frac{\alpha}{d+1}}$. For $E_2$ we are in the same situation as in Theorem <ref>, hence for $t'>0$ we can use Corollary <ref> to get the exponential concentration:
\[E_2 = \Pbb\left( \|\widehat{\Gbf}^{(m)} - \overline{\Gbf}^{(m)}\|_{\overrightarrow{\infty}} > t'/2\right) \leq 2K\mathfrak{J}\exp\left(-\frac{c}{2^{1+2/{p_{\max}}}}(mt')^{2/{p_{\max}}}\right).\]
This leaves $E_1$, which we now show also yields exponential concentration, but at a sub-Gaussian rate. Indeed,
\begin{align*}
\Pbb\left(|\tilde{\Gbf}^{(m)}_{ksj} - \widehat{\Gbf}_{ksj}^{(m)}|> t'/2\right) &= \Pbb\left(|\lan \partial^{\alpha^s}\psi_k, f_j(\tilde{\Ubf}^{(m),\star}+\tilde{\ep}^{(m)})-f_j(\tilde{\Ubf}^{(m),\star}+\widehat{\ep}^{(m)})\ran_m| > t'/2\right) \\
&\leq \Pbb\left(\nrm{\partial^{\alpha^s}\psi_k}_\infty\nrm{f_j(\tilde{\Ubf}^{(m),\star}+\tilde{\ep}^{(m)})-f_j(\tilde{\Ubf}^{(m),\star}+\widehat{\ep}^{(m)})}_{\overrightarrow{1}} > mt'/(2\gamma T|\Omega|)\right)\\
&\leq \sum_{i=1}^m \Pbb\left(|f_j(\tilde{\Ubf}^{(m),\star}_i+\tilde{\ep}^{(m)}_i)-f_j(\tilde{\Ubf}^{(m),\star}_i+\widehat{\ep}_i^{(m)})| > Ct'\right)
\end{align*}
where $C = (2\gamma T|\Omega| \nrm{\partial^{\alpha^s}\psi_k}_\infty)^{-1}$. (Here we replaced the volume element $(\Delta x)^d\Delta t$ with the equivalent expression $\gamma\frac{T|\Omega|}{m}$ using that $|\supp{\psi}| = \gamma T|\Omega|$, for some $\gamma<1$.) Now, define the sets
\[A_i = \{|f_j(\tilde{\Ubf}^{(m),\star}_i+\tilde{\ep}^{(m)}_i)-f_j(\tilde{\Ubf}^{(m),\star}_i+\widehat{\ep}^{(m)}_i)| > Ct'\},\]
\[B_i = \{|\tilde{\ep}^{(m)}_i - \widehat{\ep}_i^{(m)}|>1\}.\]
We get
\begin{align*}
\Pbb\left(|f_j(\tilde{\Ubf}^{(m),\star}_i+\tilde{\ep}^{(m)}_i)-f_j(\tilde{\Ubf}^{(m),\star}_i+\widehat{\ep}_i^{(m)})| > Ct'\right)&=\Pbb(A_i\cap B_i)+\Pbb(A_i\cap B_i^c)
\end{align*}
where, using sub-Gaussianity,
\[\Pbb(A_i\cap B_i) \leq \Pbb(B_i)\leq \tilde{C} e^{-\frac{m^\alpha}{2\sigma^2}}\]
and using the locally Lipschitz condition on $f_j$, together with the independence of $\tilde{\ep}^{(m)}_i$ and $\widehat{\ep}^{(m)}_i$,
\begin{align*}
\Pbb(A_i\cap B_i^c) &\leq \Pbb\left(\{C_\CalF(|\tilde{\ep}^{(m)}_i - \widehat{\ep}_i^{(m)}|+|\tilde{\ep}^{(m)}_i - \widehat{\ep}_i^{(m)}|^{p_{\max}})>Ct'\}\cap\{|\tilde{\ep}^{(m)}_i - \widehat{\ep}_i^{(m)}|<1\}\right)\\
&\leq \Pbb\left(|\tilde{\ep}^{(m)}_i - \widehat{\ep}_i^{(m)}|>\frac{C}{2C_\CalF}t'\right) \\
&\leq \tilde{C} e^{-\frac{m^\alpha}{2\sigma^2}\left(\frac{Ct'}{2C_\CalF}\right)^2}.
\end{align*}
Hence, summing over all $i$, we get
\[\Pbb\left(|\tilde{\Gbf}^{(m)}_{ksj} - \widehat{\Gbf}_{ksj}^{(m)}|> t'/2\right) \leq \tilde{C}me^{-c_1 m^\alpha (t')^2}\]
for some fixed $c_1>0$, which shows that the term $E_2$ asymptotically dominates $E_1$.
Overall this shows that $\tilde{\Gbf}^{(m)}$ converges to $\Gbf^\star$ elementwise, and since the number of elements is fixed, we can take a union bound to conclude that $\|\tilde{\Gbf}^{(m)}-\Gbf^\star\|_\infty\to 0$ in probability at the same rate. We can characterize this convergence as follows. (1) For each $m$ we accrue a deterministic error $\mathfrak{t}(m)$ due to the bias introduced when averaging the clean data, as well as the integration errors. The worst-case asymptotic order of this error is $\CalO(m^{-\alpha \min\{\frac{1}{d+1},\frac{k}{d+1}-\frac{1}{2}\}})$ and arises from $I_4$, the difference between $\overline{\Gbf}^{(m),\star}$ which uses the filtered clean data $\tilde{\Ubf}^{(m),\star}$ and $\Gbf^{(m),\star}$ which uses the clean data $\Ubf^{(m),\star}$ (in effect, $I_4$ is the combined bias and integration error). (2) For $t>\mathfrak{t}(m)$ and large enough $m$ we have
\[\Pbb \left(\|\tilde{\Gbf}^{(m)} - \Gbf^\star\|_\infty>t\right) \leq 3K\mathfrak{J}\exp\left(-\frac{c}{2^{1+2/{p_{\max}}}}[m(t-\mathfrak{t}(m))]^{2/{p_{\max}}}\right),\]
using the worst-case concentration rate arising from the error $I_2$.
§ FULL SUPPORT RECOVERY
As indicated in Remark <ref>, in this section we extend the results (<ref>) of Lemma <ref> and (<ref>) of Theorem <ref> to the full support equalities
\[\supp{\widehat{\wbf}} = \supp{\wstar} \qquad \& \qquad\supp{\widehat{\wbf}^{(m)}}=\supp{\wstar},\]
respectively, with the addition of a constraint on the linear system $(\Gbf^\star,\bbf^\star)$. First we need another stability lemma similar to Lemma <ref>.
Let $\Abf \in \Rbb^{m\times n}$ have rank $n$ and let $\tilde{\Abf} = \Abf+\Ebf$ be a perturbed system satisfying $\nrm{\Ebf}_{2}<\varepsilon$. For sufficiently small $\varepsilon$, there exist constants $C,C'>0$ depending only on $\Abf$ such that the following stability holds for the pseudoinverse $\Abf^\dagger := (\Abf^T\Abf)^{-1}\Abf^T$ and projection operator $\Pbf_\Abf := \Abf\Abf^\dagger$:
\begin{equation}
\nrm{\Abf^\dagger - \tilde{\Abf}^\dagger}_2<C\varepsilon \qquad \& \qquad \nrm{\Pbf_\Abf-\Pbf_{\tilde{\Abf}}}_2<C'\varepsilon.
\end{equation}
The pseudoinverse of $\tilde{\Abf}$ is given by
\[\tilde{\Abf}^\dagger = (\tilde{\Abf}^T\tilde{\Abf})^{-1}\tilde{\Abf}^T = (\Ibf+\Bbf)^{-1}(\Abf^\dagger+(\Abf^T\Abf)^{-1}\Ebf^T),\]
provided $(\Ibf+\Bbf)^{-1}$ exists, where
\[\Bbf = \Abf^\dagger\Ebf+(\Abf^T\Abf)^{-1}\Ebf^T(\Abf + \Ebf).\]
A sufficient condition for invertibility of $\Ibf+\Bbf$ is
\[\nrm{\Bbf}_2<1,\]
which is guaranteed for sufficiently small $\varepsilon$ since $\nrm{\Bbf}_2 = \CalO(\varepsilon)$. In this case using the Von Neumann series for $(\Ibf+\Bbf)^{-1}$, we get that
\[\tilde{\Abf}^\dagger = \Abf^\dagger + \tilde{\Ebf},\]
\[\tilde{\Ebf} = (\Abf^T\Abf)^{-1}\Ebf^T+\sum_{k=1}^\infty(-\Bbf)^k\left(\Abf^\dagger+(\Abf^T\Abf)^{-1}\Ebf^T\right).\]
Since $\Ebf=\CalO(\varepsilon)$ and $\Bbf = \CalO(\varepsilon)$, we have $\tilde{\Ebf} = \CalO(\varepsilon)$, hence
\[\nrm{\Abf^\dagger - \tilde{\Abf}^\dagger}_2 = \nrm{\tilde{\Ebf}}_2 = \CalO(\varepsilon).\]
The same stability readily applies to $\Pbf_{\tilde{\Abf}} = \Pbf_{\Abf} + \Abf\tilde{\Ebf}+\Ebf\tilde{\Abf}^\dagger$.
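The $\CalO(\varepsilon)$ stability of the pseudoinverse can also be observed numerically; the sketch below (an illustrative example with an arbitrary full-rank $3\times 2$ matrix and a fixed perturbation direction, not tied to the paper's data) checks that $\nrm{\Abf^\dagger-\tilde{\Abf}^\dagger}$ scales linearly in $\varepsilon$:

```python
def pinv_2col(A):
    """Pseudoinverse (A^T A)^{-1} A^T of an m x 2 matrix, via the 2x2 Gram inverse."""
    g00 = sum(r[0] * r[0] for r in A)
    g01 = sum(r[0] * r[1] for r in A)
    g11 = sum(r[1] * r[1] for r in A)
    det = g00 * g11 - g01 * g01
    inv = [[g11 / det, -g01 / det], [-g01 / det, g00 / det]]
    return [[inv[i][0] * r[0] + inv[i][1] * r[1] for r in A] for i in range(2)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # rank-2 example matrix

def deviation(eps):
    """Max-norm deviation of the pseudoinverse under a fixed O(eps) perturbation."""
    E = [[eps, -eps], [eps, eps], [-eps, eps]]
    At = pinv_2col(A)
    Bt = pinv_2col([[a + e for a, e in zip(ra, re)] for ra, re in zip(A, E)])
    return max(abs(x - y) for rx, ry in zip(At, Bt) for x, y in zip(rx, ry))

# the ratio deviation(eps)/eps stabilizes as eps -> 0, i.e. the error is O(eps)
r1 = deviation(1e-4) / 1e-4
r2 = deviation(1e-6) / 1e-6
```

The near-equality of `r1` and `r2` reflects the vanishing of the $\CalO(\varepsilon^2)$ terms in the Von Neumann series.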
We can now use this stability to extend Lemma <ref> to yield full support recovery on the continuum problem for small enough $\sigma^2$, provided condition (<ref>) is satisfied.
Let $\rho$ be a Gaussian mean-zero noise distribution and let $p$ be the maximum polynomial degree appearing in the true model. Assume that $(\Gbf^\star,\bbf^\star)$ satisfy
\begin{equation}\label{app:condforsupprec}
\mu^\star:= \min_{S\subsetneq S^\star}\frac{\nrm{\Pbf^\perp_{\Gbf^\star_{S^\star\setminus S}} \bbf^\star}}{\nrm{\bbf^\star}}-\frac{|S|+1}{\mathfrak{J}}>0.
\end{equation}
Then there exists a critical noise level $\sigma_c$ such that for any $\sigma\leq \sigma_c$ the estimator $\what = \text{MSTLS}^{(1)}(\overline{\Gbf},\overline{\bbf})$ satisfies
\begin{equation}\label{subsetsupprec_lemm_app}
\supp{\what}=\supp{\wstar}.
\end{equation}
In Lemma <ref> we already showed that
* For sufficiently small $\sigma$ there exists $\widehat{\lambda}\in \pmb{\lambda}$ such that $\supp{\overline{\wbf}^{\widehat{\lambda}}} = \supp{\wstar}$, where $\overline{\wbf}^{\widehat{\lambda}} = \text{STLS}^{(1)}(\overline{\Gbf},\overline{\bbf},\widehat{\lambda})$.
* For sufficiently small $\sigma$ we have
\[\frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2}{\nrm{\overline{\bbf}}_2} <\frac{1}{\mathfrak{J}}\]
which implies that for $\lambda\in \pmb{\lambda}$ with $\lambda<\widehat{\lambda}$, we have
\[\CalL(\widehat{\lambda})<\CalL(\lambda).\]
Assume that $\sigma$ is small enough so that (a) and (b) hold, and consider $\lambda\in \pmb{\lambda}$ with $\lambda>\widehat{\lambda}$. All we need to show is that for sufficiently small $\sigma$, we have $\CalL(\widehat{\lambda})<\CalL(\lambda)$ in this case as well. Indeed, by construction of $\pmb{\lambda}$, for $\lambda>\widehat{\lambda}$ we have
\[\supp{\overline{\wbf}^{\lambda}} \subsetneq S^\star.\]
Setting $S = S^\star \setminus \supp{\overline{\wbf}^{\lambda}}$, we see that
\[\CalL(\widehat{\lambda}) - \CalL(\lambda) = \frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\widehat{\lambda}}\right)}_2}{\nrm{\overline{\bbf}}_2} - \frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\lambda}\right)}_2}{\nrm{\overline{\bbf}}_2}+\frac{|S|}{\mathfrak{J}}.\]
Using (b), and rewriting using the projection operator, we have
\[\CalL(\widehat{\lambda}) - \CalL(\lambda) < - \frac{\nrm{\overline{\Gbf}\left(\overline{\wbf}^0 -\overline{\wbf}^{\lambda}\right)}_2}{\nrm{\overline{\bbf}}_2}+\frac{|S|+1}{\mathfrak{J}} = -\frac{\nrm{\Pbf^\perp_{\overline{\Gbf}_{S^\star \setminus S}}\overline{\bbf}}_2}{\nrm{\overline{\bbf}}_2}+\frac{|S|+1}{\mathfrak{J}}.\]
Now, for small enough $\sigma$, using stability of the projection operator, we have
\[\max_{S\subsetneq S^\star}\nrm{\Pbf^\perp_{\overline{\Gbf}_{S^\star \setminus S}} - \Pbf^\perp_{\Gbf^\star_{S^\star \setminus S}} }_2 < C\sigma^2,\]
since properties of $\Dbf$ and $\Lbf$ imply
\[\overline{\Gbf}_{S^\star \setminus S} -\Gbf^\star_{S^\star \setminus S} = \Gbf^\star (\Abf_{S^\star \setminus S} -\Ibf_{S^\star \setminus S}) = \Gbf^\star(\Dbf-\Ibf)_{S^\star \setminus S}+ \Gbf^\star \Lbf_{S^\star \setminus S} = \CalO(\sigma^2).\]
Hence, using the reverse triangle inequality we get
\begin{align*}
\CalL(\widehat{\lambda}) - \CalL(\lambda) &< -\frac{\nrm{\Pbf^\perp_{\Gbf^\star_{S^\star \setminus S}}\overline{\bbf}}_2}{\nrm{\overline{\bbf}}_2} + \frac{\nrm{\left(\Pbf^\perp_{\overline{\Gbf}_{S^\star \setminus S}}- \Pbf^\perp_{\Gbf^\star_{S^\star \setminus S}}\right)\overline{\bbf}}_2}{\nrm{\overline{\bbf}}_2} +\frac{|S|+1}{\mathfrak{J}} \\
&< -\mu^\star + C\sigma^2
\end{align*}
which is negative for all $\sigma^2 <\mu^\star/C$. Thus, on systems satisfying (<ref>), full support recovery holds whenever $\sigma^2<\mu^\star/C$, in addition to the other constraints derived on $\sigma_c$.
The previous Lemma, together with results shown in Section <ref>, directly imply the following, using similar techniques as Theorem <ref>.
Let Assumptions <ref>-<ref> hold with $\rho$ a mean-zero Gaussian distribution. In addition assume that the condition (<ref>) holds. Then there exists a critical noise $\sigma_c>0$ and a stability tolerance $\tau$, both independent of $m$, such that for all $\sigma<\sigma_c$ and $t< \tau$, and for sufficiently large $m$, it holds that
\begin{equation}\label{supprec_thm}
\supp{\widehat{\wbf}^{(m)}}=\supp{\wstar} \qquad \text{and} \qquad \nrm{\widehat{\wbf}^{(m)}-\wstar}_\infty < C'(t+\sigma^2)
\end{equation}
with probability exceeding $1-2K\mathfrak{J}\exp\left(-\frac{c}{2}\left(mt\right)^{2/p_{\max}}\right)$, where $\widehat{\wbf}^{(m)} = \text{MSTLS}^{(1)}(\Gbf^{(m)},\bbf^{(m)})$, $c$ is from Theorem <ref>, and $C'$ depends only on $(\Gbf^\star_{S^\star},\bbf^\star)$.
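For intuition, the $\text{STLS}$ inner loop underlying the MSTLS estimators referenced above alternates a least-squares solve with hard thresholding at a level $\lambda$. A minimal sketch is below; it assumes orthonormal columns so that the restricted least-squares solve reduces to inner products (the actual algorithm performs a full solve on the active set and additionally scans $\lambda$ against the loss $\CalL$):

```python
def stls(G, b, lam, iters=10):
    """Sequential thresholded least squares sketch (orthonormal-column case):
    alternate projection onto active columns with hard thresholding at lam."""
    J = len(G[0])
    active = set(range(J))
    w = [0.0] * J
    for _ in range(iters):
        # least squares restricted to the active columns (here: dot products)
        w = [sum(G[i][j] * b[i] for i in range(len(b))) if j in active else 0.0
             for j in range(J)]
        active = {j for j in range(J) if abs(w[j]) >= lam}
    return w

# toy system: identity columns; the sub-threshold coefficient gets pruned
G = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
w = stls(G, [1.0, 0.05, 0.8], lam=0.1)
```

In this toy run the middle coefficient (0.05) falls below $\lambda=0.1$ and is removed, recovering the support $\{0,2\}$.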
§ HYPER-KS DYNAMICS
Consider the hyper-KS equation
\begin{equation}\label{hyperKSapp}
\partial_t u = (1)\partial_{xxxx} u + (0.75)\partial_{xxxxxx} u + (-0.5)\partial_x(u^2) + (0.1)\partial_{xxx} u^2.
\end{equation}
Motivation for equation <ref> comes from the fact that variants of this equation arise when performing equation learning on corrupted Kuramoto-Sivashinsky data using a disadvantageous test function, if derivatives up to order 6 are included in the library. (See for example Figure 16 of [32].) Two effects dominate the dynamics. At low wavenumbers $|k|\leq k^*$ below some threshold $k^*$ the system is unstable. Then at all wavenumbers sufficiently separated from a critical wavenumber $\overline{k}$, we have dispersive mixing of wavemodes. At the critical wavenumber $\overline{k}$, dispersive and transport effects cancel out, such that $\overline{k}$ is entirely subject to the growth and decay dynamics of the diffusive terms. This implies that if $|\overline{k}|< k^*$, the system can become unstable.
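The instability threshold can be read off the linear part of equation <ref>: a Fourier mode $e^{ikx}$ satisfies $\partial_t\hat u = (k^4 - 0.75\,k^6)\hat u$, since $(ik)^4=k^4$ and $(ik)^6=-k^6$, so growth occurs exactly for $0<|k|<k^*=\sqrt{4/3}$. A quick check:

```python
import math

def linear_growth_rate(k):
    """Real part of the linear symbol of the hyper-KS model:
    (ik)^4 + 0.75 (ik)^6 = k^4 - 0.75 k^6."""
    return k ** 4 - 0.75 * k ** 6

k_star = math.sqrt(4.0 / 3.0)   # marginal wavenumber: zero growth here
```

Modes just inside $k^*\approx 1.155$ grow while modes just outside decay, consistent with the low-wavenumber instability described above.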
§ VARIANCE ESTIMATION
We estimate the noise variance using the following procedure. Let $\fbf\in \Rbb^{2L+1}$ satisfy the discrete moment conditions
\[M_k(\fbf) = \sum_{j=-L}^L j^k\fbf_j = 0, \quad k=0,\dots,M_{\max},\]
together with the normalization
\[\nrm{\fbf}_2=1.\]
Now consider data $\Ubf = u(\tbf)+\ep$ such that $u$ is locally a polynomial of degree $M_{\max}$ over intervals $(t-L\Delta t, t+L\Delta t)$ and $\ep$ is mean zero i.i.d. noise with variance $\sigma^2$. Then the data $\Ubf$ will satisfy
\[\Ebb\left[(\fbf*\Ubf)^2_i\right] = \sigma^2, \quad i\in \Zbb,\]
and hence
\[\nrm{\fbf*\Ubf}_{rms}\approx \sigma\]
to very high accuracy. We use this approach and take $\fbf$ to be 6th-order finite difference weights normalized so that $\nrm{\fbf}_2=1$.
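A minimal sketch of this estimator follows, with illustrative choices not taken from the paper: a cubic signal, Gaussian noise, and the 6th-order difference stencil $[1,-6,15,-20,15,-6,1]$, which annihilates polynomials up to degree 5:

```python
import math
import random

# 6th-order difference weights annihilate polynomials up to degree 5;
# unit l2 normalization makes the filtered rms an estimate of sigma
w = [1, -6, 15, -20, 15, -6, 1]
nrm = math.sqrt(sum(c * c for c in w))
f = [c / nrm for c in w]

def estimate_sigma(U):
    """rms of the filtered data f * U, per the moment-condition argument."""
    vals = [sum(f[j] * U[i + j] for j in range(len(f)))
            for i in range(len(U) - len(f) + 1)]
    return math.sqrt(sum(v * v for v in vals) / len(vals))

# synthetic check: smooth cubic signal plus i.i.d. Gaussian noise, sigma = 0.1
random.seed(0)
m, sigma = 20000, 0.1
U = [(i / m) ** 3 + random.gauss(0.0, sigma) for i in range(m)]
sigma_hat = estimate_sigma(U)
```

Because the cubic part is annihilated exactly, `sigma_hat` recovers the noise level to within about a percent at this sample size.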
[1]
H. Akaike.
A new look at the statistical model identification.
IEEE Transactions on Automatic Control, 19(6):716–723,
December 1974.
[2]
Hirotugu Akaike.
On entropy maximization principle.
In P. R. Krishnaiah, editor, Applications of Statistics, pages
27–41. North Holland, Amsterdam, Netherlands, 1977.
[3]
E Paulo Alves and Frederico Fiuza.
Data-driven discovery of reduced plasma physics models from fully
kinetic simulations.
Physical Review Research, 4(3):033192, 2022.
[4]
Milad Bakhshizadeh, Arian Maleki, and Victor H de la Pena.
Sharp concentration results for heavy-tailed distributions.
arXiv preprint arXiv:2003.13819, 2020.
[5]
Dimitris Bertsimas and Wes Gurnee.
Learning sparse nonlinear dynamics via mixed-integer optimization.
arXiv preprint arXiv:2206.00176, 2022.
[6]
Steven L Brunton, Joshua L Proctor, and J Nathan Kutz.
Discovering governing equations from data by sparse identification of
nonlinear dynamical systems.
Proceedings of the national academy of sciences,
113(15):3932–3937, 2016.
[7]
Zhen Chen, Kailiang Wu, and Dongbin Xiu.
Methods to recover unknown processes in partial differential
equations using data.
Journal of Scientific Computing, 85(2):1–23, 2020.
[8]
Alexandre Cortiella, Kwang-Chun Park, and Alireza Doostan.
Sparse identification of nonlinear dynamical systems via reweighted
$\ell_1$-regularized least squares.
Computer Methods in Applied Mechanics and Engineering,
376:113620, 2021.
[9]
Alain de Cheveigné and Israel Nelken.
Filters: when, why, and how (not) to use them.
Neuron, 102(2):280–293, 2019.
[10]
Urban Fasel, J Nathan Kutz, Bingni W Brunton, and Steven L Brunton.
Ensemble-sindy: Robust sparse model discovery in the low-data,
high-noise limit, with active learning and control.
Proceedings of the Royal Society A, 478(2260):20210904, 2022.
[11]
Daniel R Gurevich, Patrick AK Reinbold, and Roman O Grigoriev.
Robust and optimal sparse regression for nonlinear pde models.
Chaos: An Interdisciplinary Journal of Nonlinear Science,
29(10):103113, 2019.
[12]
Yuchen He, Namjoon Suh, Xiaoming Huo, Sung Ha Kang, and Yajun Mei.
Asymptotic theory of $\ell_1$-regularized PDE identification from a single
noisy trajectory.
SIAM/ASA Journal on Uncertainty Quantification,
10(3):1012–1036, September 2022.
[13]
Lam Si Tung Ho, Hayden Schaeffer, Giang Tran, and Rachel Ward.
Recovery guarantees for polynomial coefficients from weakly dependent
data with outliers.
Journal of Approximation Theory, 259:105472, 2020.
[14]
Jeffrey M Hokanson, Gianluca Iaccarino, and Alireza Doostan.
Simultaneous identification and denoising of dynamical systems.
arXiv preprint arXiv:2203.13837, 2022.
[15]
Paul Houston, Christoph Schwab, and Endre Süli.
Discontinuous hp-finite element methods for
advection-diffusion-reaction problems.
SIAM Journal on Numerical Analysis, 39(6):2133–2163, 2002.
[16]
Kadierdan Kaheman, Steven L Brunton, and J Nathan Kutz.
Automatic differentiation to simultaneously identify nonlinear
dynamics and extract noise probability distributions from data.
Machine Learning: Science and Technology, 3(1):015031, 2022.
[17]
John H. Lagergren, John T. Nardini, G. Michael Lavigne, Erica M. Rutter, and
Kevin B. Flores.
Learning partial differential equations for biological transport
models from noisy spatio-temporal data.
Proc. R. Soc. A., 476(2234):20190800, February 2020.
[18]
Daniel A Messenger and David M Bortz.
Weak sindy for partial differential equations.
J. Comput. Phys., 443:110525, October 2021.
[19]
Daniel A Messenger and David M Bortz.
Weak sindy: Galerkin-based data-driven model selection.
Multiscale Model. Simul., 19(3):1474–1497, 2021.
[20]
Daniel A Messenger and David M Bortz.
Learning mean-field equations from particle data using wsindy.
Physica D: Nonlinear Phenomena, 439:133406, November 2022.
[21]
Daniel A Messenger, Emiliano Dall'Anese, and David Bortz.
Online weak-form sparse identification of partial differential
equations.
In Mathematical and Scientific Machine Learning, pages
241–256. PMLR, 2022.
[22]
Daniel A Messenger, Graycen E Wheeler, Xuedong Liu, and David M Bortz.
Learning anisotropic interaction rules from individual trajectories
in a heterogeneous cellular population.
J. R. Soc. Interface, 19, October 2022.
[23]
Yannis Pantazis and Ioannis Tsamardinos.
A unified approach for sparse dynamical system inference from
temporal measurements.
Bioinformatics, 35(18):3387–3396, 2019.
[24]
Patrick AK Reinbold, Daniel R Gurevich, and Roman O Grigoriev.
Using noisy or incomplete data to discover models of spatiotemporal
dynamics.
Physical Review E, 101(1):010203, 2020.
[25]
Joel A Rosenfeld, Benjamin Russo, Rushikesh Kamalapurkar, and Taylor T Johnson.
The occupation kernel method for nonlinear system identification.
arXiv preprint arXiv:1909.11792, 2019.
[26]
Samuel H Rudy, Steven L Brunton, Joshua L Proctor, and J Nathan Kutz.
Data-driven discovery of partial differential equations.
Science Advances, 3(4):e1602614, 2017.
[27]
Benjamin Russo and M Paul Laiu.
Convergence of weak-sindy surrogate models.
arXiv preprint arXiv:2209.15573, 2022.
[28]
Hayden Schaeffer.
Learning partial differential equations via data discovery and sparse
optimization.
Proceedings of the Royal Society A: Mathematical, Physical and
Engineering Sciences, 473(2197):20160446, 2017.
[29]
Hayden Schaeffer and Scott G McCalla.
Sparse model selection via integral terms.
Physical Review E, 96(2):023302, 2017.
[30]
Hayden Schaeffer, Giang Tran, Rachel Ward, and Linan Zhang.
Extracting structured dynamical systems using sparse optimization
with very few samples.
Multiscale Modeling & Simulation, 18(4):1435–1461, 2020.
[31]
Michael Schmidt and Hod Lipson.
Distilling free-form natural laws from experimental data.
science, 324(5923):81–85, 2009.
[32]
Mengyi Tang, Wenjing Liao, Rachel Kuske, and Sung Ha Kang.
Weakident: Weak formulation for identifying differential equations
using narrow-fit and trimming.
arXiv preprint arXiv:2211.03134, 2022.
[33]
Giang Tran and Rachel Ward.
Exact recovery of chaotic systems from highly corrupted data.
Multiscale Modeling & Simulation, 15(3):1108–1129, 2017.
[34]
Roman Vershynin.
High-dimensional probability: An introduction with applications
in data science, volume 47.
Cambridge university press, 2018.
[35]
Mariia Vladimirova, Stéphane Girard, Hien Nguyen, and Julyan Arbel.
Sub-weibull distributions: Generalizing sub-gaussian and
sub-exponential properties to heavier tailed distributions.
Stat, 9(1):e318, 2020.
[36]
Z Wang, X Huan, and K Garikipati.
Variational system identification of the partial differential
equations governing microstructure evolution in materials: Inference over
sparse and spatially unrelated data.
Computer Methods in Applied Mechanics and Engineering,
377:113706, 2021.
[37]
Zhenlin Wang, Xun Huan, and Krishna Garikipati.
Variational system identification of the partial differential
equations governing the physics of pattern-formation: inference under varying
fidelity and noise.
Computer Methods in Applied Mechanics and Engineering,
356:44–74, 2019.
[38]
Per-Åke Wedin.
Perturbation theory for pseudo-inverses.
BIT Numerical Mathematics, 13(2):217–232, 1973.
[39]
Jacqueline Wentz and Alireza Doostan.
Derivative-based sindy (dsindy): Addressing the challenge of
discovering governing equations from noisy data.
arXiv preprint arXiv:2211.05918, 2022.
[40]
Daolin Xu and Omid Khanmohamadi.
Spatiotemporal system reconstruction using fourier spectral operators
and structure selection techniques.
Chaos: An Interdisciplinary Journal of Nonlinear Science,
18(4):043122, 2008.
[41]
Linan Zhang and Hayden Schaeffer.
On the convergence of the SINDy algorithm.
Multiscale Modeling & Simulation, 17(3):948–972, 2019.
|
This paper pushes further the intrinsic capabilities of the global-local approach introduced initially in [7].
We develop a distributed computing approach using MPI (Message Passing Interface) both for the global and local problems.
Regarding local problems, a specific scheduling strategy is introduced.
Then, to correctly measure the convergence of the iterative process, we introduce a reference solution that
revisits the product of classical and enriched functions.
As a consequence, we are able to propose a purely matrix-based implementation of the global-local problem.
The distributed approach is then compared to other parallel solvers,
either direct or iterative with domain decomposition.
The comparison addresses scalability as well as elapsed time.
Numerical examples deal with linear elastic problems: a problem with a polynomial exact solution, a complex micro-structure, and, finally, a pull-out test (with different crack extents).
§ INTRODUCTION
Computational methods are addressing increasingly complex problems.
This paper is concerned with elasticity problems having roughly two scales: a global scale at the structural level and a local scale representing, for example, a micro-structure, a crack or a reinforcement.
Note that the local scale covers the whole domain in the case of a micro-structure, but only narrow zones for cracks and reinforcements.
A direct finite element approach for these problems is to create a single mesh and solve.
This approach does not take advantage of the two-scale nature,
which leads to difficulties: the matrix can be huge and badly conditioned.
Iterative domain decomposition resolution is one way to face huge matrix size (see [30]) but does not solve the conditioning issue unless a specific preconditioner is used.
A direct solver can cope with poor conditioning, but matrix size is still an issue.
In order to take into account the two-scale nature of a problem, the first global-to-local methods (see for example [23]) use the raw global-scale field as a boundary condition for local problems (refined encapsulated meshes), which provide the desired accuracy to the simulation.
But since the raw global field does not take into account the local behavior, it does not provide a good boundary condition to the local problems.
The method, first introduced in [7], adds the idea of local to global interaction via enrichment functions.
More precisely, local problems, which can overlap, yield solutions used to build enrichment functions.
These functions are used through the partition of unity method to compute at global scale an improved solution.
The overlap of the local problems and their combined use in the enriched global problem partially address the boundary-condition error of the raw global field.
In [6] this problem of boundary condition has been addressed by increasing the size of the local problems (the boundary condition at the local scale is pushed far enough) or by increasing the polynomial interpolation order of the global scale problem.
But it is in [24] that a more practical and robust solution to this boundary problem was proposed, by introducing iterations between scales.
From the raw results, boundary conditions are imposed on the local problems, whose solutions are used as enrichment functions for a new enriched global problem, whose solution is again used as boundary conditions for local problems and finally their solutions reused for a last new enriched global problem.
Later, in [26], the idea arose that in evolutionary phenomena such as crack propagation, this scale loop can be intertwined with a propagation loop.
But it was only in [14] that the looping strategy was further investigated for a larger number of scale iterations as a remedy for the boundary problem, in a context of a crack propagation step with a single local problem.
This work shows that in a few iterations, the method provides a significant error reduction at both scales.
But, as in many other publications, the way this error is obtained is problematic because it is a relative error based on a solution known beforehand for this problem (an analytical or fine numerical reference).
Another approach to deal with this boundary problem has been studied in more detail in [20] by examining how boundary conditions are imposed at the local scale: Dirichlet, Neumann and Cauchy boundary conditions have been compared, the last one providing a better convergence rate in the treated case (fracture mechanics problems). But in [27] it turned out that the Dirichlet boundary condition was the best in the given context.
The question of how the local problem solution is used to construct an enrichment function is another topic widely studied in the literature.
In [4] the SGFEM technique proposes to remove from the fine scale field (obtained here with more conventional analytic functions) its piecewise linear interpolant on the coarse[In this article, the terms "fine" or "micro" are sometimes used instead of the original "local" designation in the method.
And "coarse" or "macro" are sometimes used instead of "global".]-scale field.
This operation ensures a condition number for the enriched system equivalent to that of the non-enriched system.
In [13, 12] the SGFEM technique, mentioned to potentially boost the method, was adapted to fracture mechanics with an analytical enrichment function improving both matrix conditioning and result quality.
The method has also been successfully evaluated in a nonlinear context [17, 29], in other mechanical domains [28, 11] and even integrated in a commercial software [22].
From its inception until today, it is the method's intrinsic parallel capabilities that have been put forward by its creators.
In [18], since the local problems (one per patch) are independent of each other, they can be solved (with a direct solver) in parallel (in a multithreaded paradigm, each thread computes a set of patches).
In [27], the authors successfully reused this parallel implementation and addressed, among many aspects, the issue of memory consumption related to local problems.
Especially when the scale loop is used to cope with the boundary condition issue, storing the factorization of local problems reduces the cost of iterations, at the price of a significant increase in the memory footprint.
In [21] the local problems themselves have been treated with a parallel solver keeping the idea of computing each of them in parallel.
This dual level of parallelism was also studied at the same time in [31] (in a message passing context).
In [18, 27, 21] the local problems called "sub-local" domains were constructed from one or a few non-overlapping master local domains, in order to apply the same discretization for all these "sub-local" domains.
This artifact greatly simplifies the integration process of the enriched macro-scale problem since, in this case, all its enrichment functions share the same discretization, thus the same integration points.
In this paper, the proposed method retains the idea of a loop between scales to deal with boundary condition problems, but adds the calculation of a new error criterion to terminate this loop at a given accuracy.
This criterion is based on the notion of reference solution which introduces a new way to formulate the enrichment function and to create the global problem.
Otherwise, to avoid any limitation imposed by computer memory, the proposed method uses the parallel message passing interface (MPI) paradigm to access distributed memory resources.
All the meshes are distributed over the processors, which induces an original scheduling algorithm, described in section <ref>, to optimize the load balancing of the processors at the fine scale.
The distributed nature of the mesh at the global scale also leads to a parallel resolution at this scale.
Compared to the "sub-local" domain approach, the proposed method uses the more systematic approach of a local problem per patch while keeping the notion of a single discretization at the fine scale in order to simplify the parallel implementation, the definition of the enrichment functions and the construction of the problem at the global scale.
And finally, the chosen boundary conditions imposed on these local problems are of Dirichlet type.
The paper is organized as follows.
The next section recalls the classical global-local formulation.
Then in section <ref>, we introduce a reference solution and reformulate the two-scale approach using a pure matrix format.
Section <ref> studies the issue of scale difference between the global and local level and its impact on performance.
Section <ref> deals with the parallel global-local approach in which both the global
and local problems are distributed.
Section <ref> deals with the three numerical experiments, and the paper ends with conclusions and possible future work.
§ INGREDIENTS
§.§ Continuous mechanical problem
In this work, we will only consider mechanical elasticity but other fields of application are possible.
In the domain of interest $\Omega$, a region of $\mathbb{R}^3$, the strong form of the equilibrium equation of the mechanical problem is defined as follows:
\begin{equation}
\nabla \cdot \tens{\sigma} +\vm{f}= 0
\label{equilibrium}
\end{equation}
with $\vm{f}$ being a prescribed volume loading and the stress tensor is $\tens{\sigma}=\tens{C}:\tens{\epsilon}$, $\tens{C}$ being Hooke's tensor and $\tens{\epsilon}$ being the strain tensor.
The Neumann and Dirichlet conditions applied on the $\partial\Omega$ boundary of the $\Omega$ domain are:
\begin{equation}
\tens{\sigma}\cdot\vm{n} = \vm{t}~ \text{on}~\partial\Omega^N, \vm{u}=\overline{\vm{u}} ~\text{on} ~\partial\Omega^D, \partial\Omega^N\cap \partial\Omega^D=\emptyset
\label{BC}
\end{equation}
where $\vm{t}$ is the prescribed tensile load on the $\partial\Omega^N$ part of $\partial\Omega$, $\vm{n}$ the outgoing normal vector of $\Omega$, $\overline{\vm{u}}$ the prescribed displacements on the $\partial\Omega^D$ part of $\partial\Omega$ and $\vm{u}$ the displacement field.
Let $\mathcal{M}^{\mathsf{C}}$ be the continuous space of the problem defined on $\Omega$ and compatible with the Dirichlet boundary conditions ($\mathcal{M}_0^{\mathsf{C}}$ being $\mathcal{M}^{\mathsf{C}}$ with null Dirichlet boundary conditions).
The solution $\vm{u}^C \in \mathcal{M}^{\mathsf{C}}$ of the continuous problem defined by (<ref>) and (<ref>) in their weak form, is given by solving:
\begin{equation}
\forall \vm{v}^* \in \mathcal{M}_0^{\mathsf{C}},~~ A\left( \vm{u}^C,\vm{v}^* \right)_{\Omega} = B\left(\vm{v}^*\right)_{\Omega},~~\vm{u}^C=\overline{\vm{u}} ~\text{on} ~\partial\Omega^D
\label{equilib}
\end{equation}
where the bi-linear and linear forms are:
\begin{equation}
\begin{array}{l}
A\left( \vm{u},\vm{v} \right)_{\Omega} =\stretchint{5ex}_{\Omega}
\tens{\epsilon} \left( \vm{u} \right):\tens{C}:\tens{\epsilon}\left( \vm{v}\right)\mathrm{d}V \\
B\left( \vm{v} \right)_{\Omega} =\stretchint{5ex}_{\partial\Omega^N}
\vm{t}\cdot \vm{v} \mathrm{d}S-\stretchint{5ex}_{\Omega}
\vm{f}\cdot \vm{v} \mathrm{d}V
\end{array}
\text{with}~ \vm{u}\in \mathcal{M}^{\mathsf{C}} ~\text{and} ~\vm{v}\in \mathcal{M}_0^{\mathsf{C}}
\label{bilinform}
\end{equation}
and the strain is given by the kinematic equation
$\tens{\epsilon}\left( \vm{u} \right)=\frac{1}{2}\left(\nabla\vm{u} +\left(\nabla \vm{u}\right)^t\right)$.
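To make the weak form concrete, here is a minimal 1D analogue in Python (our own sketch, not part of the method): assembling $K_{ij}=A(\varphi_j,\varphi_i)$ and the load vector for $-E\,u''=f$ with linear elements, using the standard sign convention for the load.

```python
import numpy as np

def assemble_1d(n_el, length=1.0, E=1.0, f=1.0):
    """Assemble stiffness K and load F for -E u'' = f on [0, length] with
    linear elements: a 1D analogue of the bilinear form A and linear form B."""
    h = length / n_el
    n = n_el + 1
    K = np.zeros((n, n))
    F = np.zeros(n)
    ke = E / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = f * h / 2.0 * np.ones(2)                      # exact load for constant f
    for e in range(n_el):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        F[idx] += fe
    return K, F

# homogeneous Dirichlet conditions u(0) = u(1) = 0: solve on interior dofs only
K, F = assemble_1d(8)
u = np.zeros(9)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
```

For this toy problem the exact solution is $u(x)=x(1-x)/2$, and the linear-element nodal values are exact.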
§.§ Global problem
To solve (<ref>), the approach discretizes the continuous problem in two steps.
It starts by discretizing $\Omega$ with a mesh composed of macro (or global or coarse) elements capable of representing the general behavior of the problem.
On this global mesh, where $I^g$ denotes its set of nodes (the superscript g stands for "global scale"), we consider the standard first-order finite element approximation functions $N^i:\vm{x}\in \mathbb{R}^3\rightarrow N^i(\vm{x}) \in \left[0,1\right]\subset\mathbb{R}$, $i\in I^g$, associated with a coarse-scale node $\vm{x}^i$, with a support given by the union of all finite elements sharing the node $\vm{x}^i$ (this group of macro-elements is called "patch" hereafter).
These $N^i$ are used to linearly interpolate the degrees of freedom (dof), also called values in this work, which represent the classical values of the discretized displacement field at the mesh nodes.
Some nodes of this global mesh are then enriched when their support covers certain local behaviors that must be finely described.
Let $I_{e}^g\subseteq I^g$ be the set of enriched global nodes ($card\left( I_{e}^g\right)\leqslant card\left( I^g\right)$ and the subscript e stands for "enriched").
For each node $p\in I_{e}^g$ an enrichment function $\vm{F}^p(\vm{x})$ (described in the next section) enriches the classical basis ($N^i$) of the discretization space.
The kinematic equation used to describe the discrete global field $\vm{U}$ at a point $\vm{x}$ is then:
\begin{equation}
\vm{U}(\vm{x})=\displaystyle\sum_{i\in I^g} N^i(\vm{x})~ \vm{U}^{i} + \displaystyle\sum_{p\in I_{e}^g} N^p(\vm{x})~\vm{E}^{p}*\vm{F}^p(\vm{x})
\label{kinematic}
\end{equation}
* $\vm{U}^{i}$ is the vector of classical values for the node $\vm{x}^i$
* $\vm{E}^{i}$ is the vector of enriched values for the node $\vm{x}^i$
* $*$ operator is the component by component multiplication of two vectors
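The enriched interpolation above can be sketched in a few lines of Python (1D hat functions standing in for the $N^i$, vector-valued dofs, and a hypothetical enrichment function; purely illustrative, all names are ours):

```python
import numpy as np

def enriched_field(x, nodes, U, E, F_funcs, enriched):
    """Evaluate U(x) = sum_i N^i(x) U^i + sum_p N^p(x) E^p * F^p(x)
    with 1D hat functions on a uniform node set; `*` is the
    component-by-component product of two vectors."""
    h = nodes[1] - nodes[0]
    def N(i):  # hat function of node i evaluated at x
        return max(0.0, 1.0 - abs(x - nodes[i]) / h)
    val = sum(N(i) * U[i] for i in range(len(nodes)))
    val = val + sum(N(p) * E[p] * F_funcs[p](x) for p in enriched)
    return val

# tiny example: 5 nodes on [0,1], vector-valued dofs, one enriched node
nodes = np.linspace(0.0, 1.0, 5)
U = [np.array([float(i), -float(i)]) for i in range(5)]
E = {2: np.array([0.5, 2.0])}
F = {2: lambda x: np.array([x, x])}   # hypothetical enrichment function
u_mid = enriched_field(0.5, nodes, U, E, F, enriched=[2])
```

At a node, only the local hat function is nonzero, so the classical part reduces to that node's dof plus its enrichment contribution.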
Thus (<ref>) becomes the problem of finding $\vm{U}^G\in \mathcal{M}^{\mathsf{G}}\subset \mathcal{M}^{\mathsf{C}}$ such that:
\begin{equation}
\forall \vm{v}^* \in \mathcal{M}_0^{\mathsf{G}},~~ A\left( \vm{U}^G,\vm{v}^* \right)_{\Omega} = B\left(\vm{v}^*\right)_{\Omega},~~\vm{U}^G=\overline{\vm{u}} ~\text{on} ~\partial\Omega^D
\label{equilibg}
\end{equation}
with $
\mathcal{M}^{\mathsf{G}}=\left\{\vm{U}(\vm{x}):\vm{U}(\vm{x})=\displaystyle\sum_{i\in I^g} N^i(\vm{x})~ \vm{U}^{i} + \displaystyle\sum_{p\in I_{e}^g} N^p(\vm{x})~\vm{E}^{p}*\vm{F}^p(\vm{x}), \vm{U}(\vm{x})=\overline{\vm{u}}(\vm{x}) ~\text{for}~ \vm{x}\in \partial\Omega^D \right\}$.
The integration of (<ref>) using appropriate Gauss quadrature points (see [18] for example) leads to the resolution of a linear system computed with a direct solver.
This system is constructed by eliminating the rows of the elementary matrices related to the Dirichlet values and moving the coupling term (the columns related to the Dirichlet values) to the right-hand side.
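The elimination just described (dropping Dirichlet rows and moving the coupling columns to the right-hand side) can be sketched as follows; the function name and the toy system are ours:

```python
import numpy as np

def eliminate_dirichlet(K, F, fixed, values):
    """Reduce K u = F by keeping free rows/columns and moving the coupling
    columns times the prescribed values to the right-hand side."""
    n = K.shape[0]
    free = np.setdiff1d(np.arange(n), fixed)
    K_ff = K[np.ix_(free, free)]
    F_f = F[free] - K[np.ix_(free, fixed)] @ values
    return K_ff, F_f, free

# usage: solve the reduced system, then scatter back into the full vector
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
F = np.zeros(3)
fixed = np.array([0, 2])
vals = np.array([0.0, 1.0])
K_ff, F_f, free = eliminate_dirichlet(K, F, fixed, vals)
u = np.zeros(3)
u[fixed] = vals
u[free] = np.linalg.solve(K_ff, F_f)
```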
§.§ Local problems and enrichment functions
For a node $p\in I_{e}^g$, the enrichment function $\vm{F}^p(\vm{x})$ is constructed numerically using the solution of a micro (or local or fine) problem.
This local problem follows the same equations as (<ref>) and (<ref>), and is discretized by a micro-mesh corresponding to the refinement (in a nested manner) of the macro elements of the $p$ patch.
This is the second discretization step of the method.
Considering $\omega_p$ the region corresponding to the patch $p$, the solution of the local problem $p$ is obtained by finding $\vm{u}^{Q^p}\in \mathcal{M}^{{\mathsf{Q}}^p}\subset \mathcal{M}^{\mathsf{C}}$ such that:
\begin{equation}
\forall \vm{v}^* \in \mathcal{M}_0^{{\mathsf{Q}}^p},~~ A\left( \vm{u}^{Q^p},\vm{v}^* \right)_{\omega_p} = B\left(\vm{v}^*\right)_{\omega_p},
%~~\vm{u}^{Q^p}=\overline{\vm{u}} ~\text{on} ~\partial\omega_p\cap \partial\Omega^D,
~~ \vm{u}^{Q^p}=\vm{U}^G~ \text{on}~\partial\omega_p%\setminus \partial\Omega^D
\label{equilibl}
\end{equation}
* $\vm{U}^G$ is the solution of (<ref>)
* $\mathcal{M}^{{\mathsf{Q}}^p}=\left\{\vm{u}(\vm{x}):\vm{u}(\vm{x}) =\displaystyle\sum_{m\in I_p^l} n^m(\vm{x})~\vm{u}^{m^p},
%~ \vm{u}(\vm{x})=\overline{\vm{u}}(\vm{x}) ~\text{when}~ \vm{x}\in \partial\omega_p\cap \partial\Omega^D ,
~ \vm{u}(\vm{x})=\vm{U}^G(\vm{x})~ \text{for}~\vm{x}\in \partial\omega_p%\setminus \partial\Omega^D
\right\}$
* $I_p^l$ is the set of nodes associated with the fine mesh discretization of $\omega_p$ (superscript l stands for "local scale")
* $n^m:\vm{x}\in \mathbb{R}^3\rightarrow n^m(\vm{x}) \in \left[0,1\right]\subset\mathbb{R}$ is the first order finite element shape function associated to the fine scale node $\vm{x}^m$
* $\vm{u}^{m^p}$ is the vector of classical dofs for the node $\vm{x}^m$ corresponding to the fine scale problem $p$
* $A\left( \vm{u},\vm{v} \right)_{\omega_p} =\stretchint{5ex}_{\omega_p}
\tens{\epsilon} \left( \vm{u} \right):\tens{C}:\tens{\epsilon}\left( \vm{v}\right)\mathrm{d}V$ with $\vm{u}\in \mathcal{M}^{{\mathsf{Q}}^p}$ and $\vm{v}\in \mathcal{M}_0^{{\mathsf{Q}}^p}$
* $B\left( \vm{v} \right)_{\omega_p} =\stretchint{5ex}_{\partial\omega_p\cap\partial\Omega^N}
\vm{t}\cdot \vm{v} \mathrm{d}S-\stretchint{5ex}_{\omega_p}
\vm{f}\cdot \vm{v} \mathrm{d}V$ with $\vm{v}\in \mathcal{M}_0^{{\mathsf{Q}}^p}$
Regarding the boundary condition of the fine-scale problems, different approaches have been tested (in particular in [20]), all of them using the global-scale solution for Dirichlet, Neumann or a mix of both.
The idea is to impose the global behavior of the problem on the boundary of the fine-scale problems so that the local behaviors that must be finely described can be revealed by the solutions of the fine-scale problems.
In this work, the global-scale solution is only imposed by the Dirichlet boundary condition as presented in (<ref>).
Note that the boundary conditions at the global scale are inherited at the fine scale. In particular the coarse-scale Dirichlet boundary condition are imposed via $\vm{U}^G$ (since $\vm{U}^G$ is the solution of (<ref>), $\vm{U}^G=\overline{\vm{u}} ~\text{on} ~\partial\Omega^D$ and $\vm{u}^{Q^p}=\vm{U}^G=\overline{\vm{u}} ~\text{on} ~\partial\omega_p\cap \partial\Omega^D$).
The integration of (<ref>) for all $p$, using the standard Gauss quadrature points, leads to the construction of $card\left( I_{e}^g\right)$ linear systems (reduced at assembly by elimination of the Dirichlet values).
Their resolution can be done in different ways, but, as already indicated in the introduction, in [18] they are solved in parallel using threads.
The distribution of patches to threads uses dynamic scheduling: each thread takes a local problem from a list and processes it (with a direct solver) until the list is empty.
Kim et al. proved that local problems should be sorted in this list by decreasing cost (calculated a priori with some metric) in order to maintain a good load balance.
They also showed that the most expensive local problem limits the number of processes that can be used for a given problem.
Beyond this limit, parallel scaling efficiency drops.
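The scheduling idea recalled above (a cost-sorted list consumed by the least-loaded workers, i.e. longest-processing-time-first) can be sketched as follows; this is an illustrative model, not the implementation of [18]:

```python
import heapq

def schedule_patches(costs, n_workers):
    """Assign local problems to workers: sort patches by decreasing cost and
    always give the next one to the least-loaded worker. Returns the
    per-worker patch lists and the makespan (load of the busiest worker)."""
    heap = [(0.0, w, []) for w in range(n_workers)]
    heapq.heapify(heap)
    for patch, cost in sorted(enumerate(costs), key=lambda pc: -pc[1]):
        load, w, patches = heapq.heappop(heap)  # least-loaded worker
        patches.append(patch)
        heapq.heappush(heap, (load + cost, w, patches))
    by_worker = sorted(heap, key=lambda t: t[1])
    return [p for _, _, p in by_worker], max(l for l, _, _ in heap)

assignments, makespan = schedule_patches([5, 3, 3, 2, 2, 1], n_workers=2)
```

Note that the makespan can never drop below the cost of the most expensive patch, which is exactly why that patch limits the usable number of processes.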
Section <ref> will explain how we go from a multi-thread patch resolution to a full MPI implementation.
When $\vm{u}^{Q^p}$ are available, the enrichment functions $\vm{F}^p$ can be constructed from them.
A first crude approach is to consider that:
\begin{equation}
\forall p\in I_{e}^g,~\forall \vm{x}\in \omega_p ~\vm{F}^p(\vm{x})=\vm{u}^{Q^p}(\vm{x})~ \text{and}~ \forall \vm{x}\notin \omega_p~\vm{F}^p(\vm{x})=\vm{0}
\label{crudenrich}
\end{equation}
This solution can work if the terms $N^p(\vm{x})~\vm{F}^p(\vm{x})$ are not too close to $N^p(\vm{x})$.
Otherwise, the resulting system is badly conditioned because the discretization basis is almost redundant.
But in the literature, many authors use the SGFEM technique [4] which removes from $\vm{u}^{Q^p}$ its piecewise linear interpolant over the coarse-scale field:
\begin{equation}
\forall p\in I_{e}^g,~ \forall \vm{x}\in \omega_p ~\vm{F}^p(\vm{x})=\vm{u}^{Q^p}(\vm{x})-\displaystyle\sum_{j\in I_{p}^{g}} N^j(\vm{x})\vm{u}^{Q^p}(\vm{x}^j)~ \text{and}~ \forall \vm{x}\notin \omega_p~\vm{F}^p(\vm{x})=\vm{0}
\label{sgfem}
\end{equation}
where $I_{p}^g$ is the set of macro-scale nodes associated with patch $p$.
Subtracting this interpolant first removes from the enrichment function the part that can already be represented by the coarse-scale discretization, and thus improves the conditioning of the coarse-scale matrix.
Second, it forces the enrichment function to be zero at coarse-scale nodes location, which helps solve the blending [Blending appears in a coarse level element when all its classical dofs are not enriched.] element problem and also greatly simplifies[With a non-zero enrichment function, imposing a Dirichlet condition on an enriched node adds the complexity of setting up an equation linking the classical and enriched dofs.] the application of the global Dirichlet boundary conditions, if any, on those enriched nodes.
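The SGFEM modification above can be sketched in a few lines; the 1D patch, the hat functions and the local solution below are hypothetical:

```python
def sgfem_enrichment(u_local, N_coarse, coarse_nodes):
    """SGFEM modification: F(x) = u(x) - sum_j N^j(x) u(x^j), i.e. subtract
    from the local solution its piecewise-linear coarse interpolant, so that
    the enrichment vanishes at the coarse nodes (1D illustrative sketch)."""
    nodal = [u_local(xj) for xj in coarse_nodes]
    def F(x):
        return u_local(x) - sum(N(x) * uj for N, uj in zip(N_coarse, nodal))
    return F

# one-element patch [0, 1] with hat functions N0 = 1 - x, N1 = x,
# and a hypothetical local solution u(x) = x^2
F = sgfem_enrichment(lambda x: x * x,
                     [lambda x: 1.0 - x, lambda x: x],
                     [0.0, 1.0])
```

By construction F is zero at both coarse nodes, which is the property exploited for blending elements and Dirichlet conditions.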
In this work, a rather different approach is proposed, as presented in the section <ref>.
§.§ The scale loop
The main idea of the solver is to iteratively solve the two types of problems:
the global problem (<ref>) and
the set of micro problems (<ref>) over patches solved independently.
The global and fine problems are linked together.
The fine problems over the patches deliver the enrichment functions which are used in the global problem.
The global problem delivers the boundary conditions for the patches.
The initial step of the loop is a key element of the method.
In general, a global-scale solution is used as the starting point of the loop (it provides a boundary condition for fine scale problems).
This solution can be the result of a global-scale problem without enrichment or the result of a previous computation.
In particular, when evolutionary phenomena are involved, the solver can effectively take advantage of using the last solution found (from the previous evolutionary step) as the starting solution of the loop used for the current evolution step.
Some test cases in the section <ref> illustrate this point.
In any case, most of the setup of the fine-scale problems is handled during this initialization phase.
The integration of (<ref>) yields a set of linear systems whose matrices are constant and whose right-hand side vectors are partially constant during iterations.
Only the Dirichlet coupling term of the right-hand side changes during the loop, as $\vm{U}^G$ varies.
Thus, the system matrices and the constant part of the right-hand side vectors are created only once during the initialization.
And by choosing a direct solver to solve systems, we can also compute the matrix factorization only once.
If preserved (which has an impact on memory, as shown in [27]), these factorizations can be reused for backward/forward resolution of fine-scale problems in the remaining steps of the loop.
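The factor-once, reuse-often strategy can be sketched with a dense Cholesky factorization standing in for the sparse direct solver (illustrative only; the matrix and right-hand sides are ours):

```python
import numpy as np

# Factor each (constant) local matrix once at initialization; per scale
# iteration only the Dirichlet part of the right-hand side changes, so each
# iteration costs only a forward/backward substitution (at the price of
# storing the factor, as discussed above).
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
L = np.linalg.cholesky(A)          # done once at initialization

def forward_backward(L, b):
    """Solve L L^T u = b by substitution, reusing the stored factor."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # forward: L y = b
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    u = np.zeros(n)
    for i in reversed(range(n)):             # backward: L^T u = y
        u[i] = (y[i] - L[i + 1:, i] @ u[i + 1:]) / L[i, i]
    return u

b = np.array([1.0, 2.0, 3.0])
u = forward_backward(L, b)
```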
§ THE PROPOSED METHOD: MATRIX-BASED RESOLUTION WITH A REFERENCE TARGET
§.§ Overview
Based on the ingredients of section <ref>, figure <ref> illustrates in 2D (the same principle applies in 3D) the general concept of the proposed method on a fictitious problem over a region $\Omega$.
[Figure: overview of the method on a fictitious 2D problem. Panels: (a) global-scale problem with its boundary conditions; (b) the 25 enriched nodes with 4 of their supports colored; (c) support union; (d) fine-scale discretization; (e) reference-field discretization; (f) fine-scale problems (2 out of 25), with Dirichlet boundary conditions $\vm{u}_d^p$. Legend: global-scale enriched nodes ($\in I_{e}^g$) with fully or partially refined patches; global-scale and reference meshes; linear relations and hanging nodes; the magenta area is a visual effect showing the location of the local behavior to be captured. For the sake of visibility, the fine-scale mesh is not refined as much as it should be.]
Figure <ref> shows the discretization of the global scale problem with its boundary conditions.
The magenta area is a simple visual effect showing the position of the local behavior to be captured: a zone where the mesh needs to be refined to appropriately simulate a crack, a specific material inclusion, a damaged material, a rivet, ...
All macro nodes for which a macro element of their support has been refined are enriched (figure <ref> where red and green dots form the $I_e^g$ set introduced in the section <ref>).
Their enrichment functions are obtained from the solutions of the fine-scale problems associated with their patches (figure <ref> where global and local boundary condition are imposed, $\vm{u}_d^p$ being introduced in the section <ref> and defined precisely in <ref>).
The union of all the elements of the enriched patches (yellow + orange in figure <ref>) is called the support union, noted SP thereafter.
Conversely, the set of macro-elements that are not part of the SP (shaded in figure <ref>) will be noted NSP in what follows.
One of the goals of the SP is to impose a single fine-mesh discretization for all fine-scale problems.
More precisely, the idea is to impose a fine discretization for all patches covering the local behavior to be captured.
Thus their union (in orange on the figure <ref>) is the starting mesh of the adaptation strategy described in <ref> (see this appendix for details on how the elements are cut/split/refined) which guarantees that all the elements of this area are at least split once and that the local behavior is well refined (figure <ref>).
Using only orange elements prevents the refinement of any other element, in order to respect the "one hanging node per edge" rule (avoiding the enrichment of more nodes).
The yellow part of the SP is then the set of macro elements not refined and therefore used without modification at the fine-scale level.
Thus, in the SP, at the boundary between two macro-elements, the micro-meshes are either compatible or leave hanging nodes (visible in figure <ref> and presented in section <ref>).
This unified discretization imposed in the SP makes it possible to process all quantities related to the fine-scale grid (integration at Gauss points, level sets, damage, mapping, ...) only once.
In this work, the union of the SP refined mesh and the NSP macro-elements (which naturally connect to the yellow fine scale grid elements) is called the reference mesh (figure <ref>).
It represents a new vision of what the method intends to discretize alternately at the local and global levels.
The section <ref> will detail the problem associated with this mesh.
A rich naming convention is implemented in <ref> to clarify the different discretizations and the status of the values involved in the different problems solved by the solver.
Thus, all subscript letters appearing on matrices or vectors in the following algorithms and discussions come from table <ref>.
The general scheme of this new version is given by the algorithm <ref>.
Create $\vm{U}_g^0$ macro-scale dofs
$\left(\vm{A}_{gg}^{ini},\vm{B}_{g}^{ini},\vm{A}_{FF},\vm{B}_F,\left\|\vm{B}_r\right\|,\vm{A}_{qq}^{patches},\vm{BI}_q^{patches},\vm{D}_{qd}^{patches} \right) \gets$ INIT
Retrieve macro-scale classical dofs to initialize $\vm{U}_g$ (enriched dofs set to zero)
$\vm{u}_F \gets$ UPDATE_MICRO_DOFS($\vm{U}_{g}$)
repeat
    $\vm{u}_q^{patches} \gets$ MICRO-SCALE_RESOLUTION($\vm{u}_F$, $\vm{A}_{qq}^{patches}$, $\vm{BI}_q^{patches}$, $\vm{D}_{qd}^{patches}$)
    $\left( \vm{A}_{gg},\vm{B}_{g}\right) \gets$ UPDATE_MACRO_PRB($\vm{A}_{gg}^{ini}$, $\vm{B}_{g}^{ini}$, $\vm{u}_q^{patches}$)
    $\vm{U}_{g} \gets \vm{A}_{gg}^{-1}\cdot \vm{B}_{g}$ $\dagger$
    $\vm{u}_F \gets$ UPDATE_MICRO_DOFS($\vm{U}_{g}$)
    $resi \gets$ COMPUTE_RESIDUAL($\vm{u}_F$, $\vm{A}_{FF}$, $\vm{B}_F$, $\left\|\vm{B}_r\right\|$)
until $resi < \epsilon$
Algorithm: general procedure. Procedures INIT, UPDATE_MACRO_PRB, UPDATE_MICRO_DOFS, MICRO-SCALE_RESOLUTION and COMPUTE_RESIDUAL are respectively depicted in algorithms <ref>, <ref>, <ref>, <ref> and <ref> of <ref>.
The $\epsilon$ value is the user-defined target accuracy that the criterion given in equation (<ref>) must meet to end the scale loop. See <ref> for the notation conventions: subscript letters, superscript $patches$, $\triangleright$, ...
After an initialization step,
the alternate resolution between scales starts with the MICRO-SCALE_RESOLUTION procedure (algorithm <ref>) which computes the solutions of fine-scale problems.
These solutions are used by the UPDATE_MACRO_PRB procedure (algorithm <ref>) to update the linear system at the global-scale ($\vm{A}_{gg}$ and $\vm{B}_{g}$).
Then, a conventional solver ($\dagger$ in the algorithm <ref> described in section <ref>) solves the problem at the global-scale.
The global scale solution ($\vm{U}_{g}$) is then transferred to the fine scale by the UPDATE_MICRO_DOFS procedure (algorithm <ref>).
Compared to the classical version, the scale loop is now controlled by the computation of a $resi$ criterion (COMPUTE_RESIDUAL procedure described by algorithm <ref>), which stops the iterations when it falls below a user-given threshold ($\epsilon$).
This convergence check is an important contribution because it makes it possible to evaluate the solver and to easily compare different versions of it.
This criterion is introduced in section <ref> and its computation is given in section <ref>.
The following subsections present algorithm <ref> and the associated procedures in more detail, without focusing too much on the parallelism, which will be discussed in section <ref> (the symbol $\triangleright$ on the right of an algorithm line merely indicates that some communication between processes occurs at this step).
§.§ The reference field and its associated criterion
The reference field $\vm{u}^{R}$ is the solution of the problem defined by (<ref>) and (<ref>), and discretized by the reference mesh (figure <ref>).
Solving this reference problem is to find $\vm{u}^{R}\in \mathcal{M}^{\mathsf{R}}\subset \mathcal{M}^{\mathsf{C}}$ such that:
\begin{equation}
\forall \vm{v}^* \in \mathcal{M}_0^{\mathsf{R}},~~ A\left( \vm{u}^{R},\vm{v}^* \right)_{\Omega} = B\left(\vm{v}^*\right)_{\Omega},
~~\vm{u}^{R}=\overline{\vm{u}} ~\text{on} ~\partial\Omega^D,
\label{equilibR}
\end{equation}
* $\mathcal{M}^{\mathsf{R}}=\left\{\vm{u}(\vm{x}):\vm{u}(\vm{x}) =\displaystyle\sum_{k\in I^l} \mathtt{N}^k(\vm{x})~\vm{u}^{k},
~ \vm{u}(\vm{x})=\overline{\vm{u}}(\vm{x})~ \text{for}~\vm{x}\in \partial\Omega^D
\right\}$
* $I^l$ is the set of nodes associated with the reference mesh, discretization of $\Omega$
* $\mathtt{N}^k$ is either $n^k$ or $N^k$ corresponding respectively to the shape functions defined in <ref> or in <ref> depending on whether $\vm{x}$ is in a micro or macro element.
* $\vm{u}^{k}$ is the vector of classical dofs for the node $\vm{x}^k$
This solution $\vm{u}^R$ represents the continuous field solution $\vm{u}^C$ only polluted by the discretization error.
The standard Gauss quadrature integration of (<ref>) gives the following linear system without Dirichlet elimination:
\begin{equation}
\vm{A}_{RR}\cdot \vm{u}_R^R=\vm{B}_R
~\text{with $\vm{u}_R^R$ the vector of $card(R)$ values defining $\vm{u}^R$}
\label{R_sys}
\end{equation}
As already mentioned in the section <ref>, in the reference mesh there are hanging nodes between the refined and unrefined areas (visible in figure <ref>) that need to be fixed to avoid discontinuity of the $\vm{u}^{R}$ field.
A simple approach is to eliminate them from the system (<ref>) by fixing their values with the following linear relationship:
\begin{equation}
\text{For a hanging node}~\vm{x}^j,~ \vm{u}^{j} =\displaystyle\sum_{i\in I_j}N^i(\vm{x}^j)~ \vm{u}^{i}
\label{hanging_eq}
\end{equation}
* $\vm{u}^{j}$ is the vector of fixed values (L-set) for the node $\vm{x}^j$
* $\vm{u}^{i}$ is the vector of free values (f-set) for the node $\vm{x}^i$
* $I_j$ is the set of nodes corresponding to the vertices of the edge or face on which hanging node $\vm{x}^j$ is located. Thus $card\left( I_j \right) \in \left\lbrace 2,3\right\rbrace $.
* $N^i(\vm{x})$ is the standard first-order finite element approximation function associated with the node $\vm{x}^i$, related to the coarse element support that holds the edge or face where the hanging node is present.
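In algebraic terms, eliminating relation (<ref>) amounts to condensing the hanging dofs through a prolongation operator. The following dense NumPy sketch illustrates the mechanism on a tiny system (the function name and values are ours, purely illustrative; the actual implementation works on sparse blocks):

```python
import numpy as np

def eliminate_hanging_nodes(A, b, hanging):
    """Condense hanging-node dofs out of the system A u = b.

    `hanging` maps a hanging-node index j to a dict {i: N_i(x_j)} of
    master-node weights (as in the linear relation above); all other
    dofs are free.  Returns the reduced system and the prolongation T
    such that u = T @ u_r.  (Illustrative sketch only.)
    """
    n = A.shape[0]
    free = [k for k in range(n) if k not in hanging]
    col = {k: c for c, k in enumerate(free)}
    T = np.zeros((n, len(free)))
    for k in free:
        T[k, col[k]] = 1.0                 # free dofs are kept as-is
    for j, masters in hanging.items():
        for i, w in masters.items():
            T[j, col[i]] = w               # u_j = sum_i N_i(x_j) u_i
    return T.T @ A @ T, T.T @ b, T

# Example: dof 1 hangs at the midpoint of the edge (0, 2): u_1 = (u_0+u_2)/2
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
A_rr, b_r, T = eliminate_hanging_nodes(A, b, {1: {0: 0.5, 2: 0.5}})
u_r = np.linalg.solve(A_rr, b_r)
u = T @ u_r                                # full field, continuous on the edge
```

The reduced matrix inherits the symmetry of $\vm{A}$, and the reconstructed field satisfies the hanging-node constraint by construction.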
Eliminating the linear relations (<ref>) and the Dirichlet boundary condition of (<ref>) leads to the system:
\begin{equation}
\vm{A}_{rr}\cdot \vm{u}_r^R=\vm{B}_r
\label{r_sys}
\end{equation}
The solution given by a direct solver of this system is $\vm{u}_r^R$ the vector of $card(r)$ dofs defining $\vm{u}^R$.
But in this work, this system is never solved directly.
Instead, the proposed solver, at each scale iteration, computes a solution $\vm{u}^{ts}$ ($\vm{u}^{ts} \in \mathcal{M}^{\mathsf{R}}$) that quickly converges to the reference solution $\vm{u}^R$.
In this work, a $resi$ criterion is proposed to control this convergence.
The $resi$ criterion is taken as the relative residual error of system (<ref>):
\begin{equation}
resi=\frac{\left \| \vm{A}_{rr}\cdot \vm{u}_{r}^{ts}-\vm{B}_{r} \right \|}{\left \|\vm{B}_{r} \right \|}
\label{residual_error}
\end{equation}
where $\left \| . \right \|$ is the L2-norm and $\vm{u}_r^{ts}$ is the vector of $card(r)$ dofs corresponding to the $\vm{u}^{ts}$ solution obtained by the solver.
For simplicity, in what follows, $\vm{u}_r$ will denote the vector $\vm{u}_r^{ts}$ of the field $\vm{u}^{ts}$, the superscript $ts$ being made implicit.
The question of computing (<ref>) is addressed in the following section.
To relate this criterion to an energy error, an energy norm is defined as follows using (<ref>):
\begin{equation}
\left\| \vm{u} \right\|_{E_{\Omega}} =\sqrt{A\left( \vm{u},\vm{u} \right)_{\Omega}}=\sqrt{\stretchint{5ex}_{\Omega}
\tens{\epsilon} \left( \vm{u} \right):\tens{C}:\tens{\epsilon}\left( \vm{u}\right)\mathrm{d}\Omega}
\label{energynorm}
\end{equation}
The error introduced by the solver (mainly through the boundary conditions imposed on the fine-scale problems) in the indirect resolution of system (<ref>), compared to its direct resolution, can thus be expressed as $\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}$.
It is related to $resi$ through the vector $\vm{A}_{rr}\cdot \vm{u}_{r}-\vm{B}_{r}$:
\begin{equation}
\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}=\sqrt{A\left( \vm{u}^{ts}-\vm{u}^R, \vm{u}^{ts}-\vm{u}^R\right)}=\sqrt{\left( \vm{A}_{rr}\cdot\left( \vm{u}_{r}-\vm{u}_r^R\right)\right) \cdot \left( \vm{u}_{r}-\vm{u}_r^R\right)}=\sqrt{\left(\vm{A}_{rr}\cdot\vm{u}_{r}-\vm{B}_{r}\right)\cdot \left( \vm{u}_{r}-\vm{u}_r^R\right)}
\label{lien_resi_energ}
\end{equation}
So when the vector $\vm{A}_{rr}\cdot \vm{u}_{r}-\vm{B}_{r}$ tends to the zero vector, both $resi$ and $\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}$ tend to zero.
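The link between $resi$ and the energy error can be checked numerically on any symmetric positive-definite system. In the following NumPy sketch, a random SPD matrix stands in for $\vm{A}_{rr}$ (sizes and values are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A_rr = M @ M.T + 6 * np.eye(6)                 # SPD stand-in for the reduced stiffness
B_r = rng.standard_normal(6)
u_r_ref = np.linalg.solve(A_rr, B_r)           # direct solution, playing the role of u_r^R
u_r = u_r_ref + 1e-3 * rng.standard_normal(6)  # some iterate, playing the role of u_r^ts

# Relative residual criterion:
resi = np.linalg.norm(A_rr @ u_r - B_r) / np.linalg.norm(B_r)

# Energy error written through the residual vector, as in the relation above:
e = u_r - u_r_ref
energy_err = np.sqrt((A_rr @ u_r - B_r) @ e)
```

The quantity `energy_err` coincides with $\sqrt{e^t \vm{A}_{rr} e}$, and both it and `resi` vanish together with the residual vector.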
It is also interesting to compare the solution to the continuous solution.
The associated energy error is $\left\| \vm{u}^{ts}-\vm{u}^C \right\|_{E_{\Omega}}$ and can be expressed as:
\begin{equation}
\begin{array}{r}
\left\| \vm{u}^{ts}-\vm{u}^C \right\|_{E_{\Omega}}^2 =\stretchint{5ex}_{\Omega}
\tens{\epsilon} \left( \vm{u}^{ts} -\vm{u}^R + \vm{u}^R-\vm{u}^C \right):\tens{C}:\tens{\epsilon}\left( \vm{u}^{ts}-\vm{u}^R + \vm{u}^R-\vm{u}^C\right)\mathrm{d}\Omega =\\
\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}^2+\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}^2+2A\left( \vm{u}^R-\vm{u}^C,\vm{u}^{ts}-\vm{u}^R \right)_{\Omega} \end{array}
\label{scale error}
\end{equation}
Subtracting (<ref>) from (<ref>) gives, since $\mathcal{M}^{\mathsf{R}}\subset \mathcal{M}^{\mathsf{C}}$:
\begin{equation}
\forall \vm{v}^* \in \mathcal{M}_0^{\mathsf{R}}, A\left( \vm{u}^R-\vm{u}^C,\vm{v}^* \right)_{\Omega} =B\left(\vm{v}^*\right)_{\Omega}-B\left(\vm{v}^*\right)_{\Omega}=0
\label{equilibA-R}
\end{equation}
Based on (<ref>), and since $\vm{u}^{ts}-\vm{u}^R \in \mathcal{M}_0^{\mathsf{R}}$, the extra term of (<ref>) vanishes:
\begin{equation}
A\left( \vm{u}^R-\vm{u}^C,\vm{u}^{ts}-\vm{u}^R \right)_{\Omega} =0
\label{Nullterm}
\end{equation}
This leads to the relation:
\begin{equation}
\left\| \vm{u}^{ts}-\vm{u}^C \right\|_{E_{\Omega}}^2 =\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}^2+\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}^2
\label{equality}
\end{equation}
The second term on the right-hand side of (<ref>) represents the error introduced by the reference mesh discretization with respect to the continuous solution.
The first one is the error introduced by the method.
Most publications on the subject compute $\vm{u}^R$ with another solver and verify a posteriori that $\left\| \vm{u}^{ts}-\vm{u}^C \right\|_{E_{\Omega}}$ tends to $\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}$ (i.e. that $\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}$ tends to zero) in a few iterations.
In this work $resi$ carries out this verification during the iterations.
A numerical comparison of these errors is given in section <ref>.
§.§ Specific global-scale system construction: the algebraic operator
The $\vm{u}_{r}$ (as mentioned above, the simplified notation of $\vm{u}_r^{ts}$) vector will act as a bridge between computational tasks.
It stores the classical nodal values of the $\vm{u}^{ts}$ field, which are used to compute $resi$ and to impose boundary conditions on the fine-scale problems.
The $\vm{u}_{r}$ dofs of this field can be related to global-scale field dofs via the kinematic equation (<ref>).
With this equation, for any node $\vm{x}^j$ of the reference mesh, it is possible to write, without any approximation, that the solution $\vm{u}^{ts}$ is equal to the global enriched discrete solution at this location:
\begin{equation}
\vm{u}^{ts}(\vm{x}^j)=\vm{U}^G(\vm{x}^j)=\displaystyle\sum_{i\in I^g} N^i(\vm{x}^j)~ \vm{U}^{i} + \displaystyle\sum_{p\in I_{e}^g} N^p(\vm{x}^j)~\vm{E}^{p}*\vm{F}^p(\vm{x}^j)
\label{relation}
\end{equation}
This can be rewritten in algebraic form as:
\begin{equation}
\vm{u}_M=\vm{T}_{MG}\cdot \vm{U}_G
\label{u_ru_g}
\end{equation}
where $\vm{T}_{MG}$ is a linear interpolation operator whose structure and construction are given in <ref>.
In this work, the process of eliminating linear relations (<ref>) is made transparent by choosing, from the R-set, to first eliminate the L-set before applying linear interpolation.
Thus, the operator in (<ref>) interpolates from the G-set to the M-set.
Considering the matrices of (<ref>) with the L-set eliminated, and using equation (<ref>) to apply a change of variables, the equality between the potential energy and the work of external forces can be written as follows:
\begin{equation}
\vm{U}_G^t\cdot \vm{T}_{MG}^t\cdot \vm{A}_{MM} \cdot \vm{T}_{MG}\cdot \vm{U}_G=\vm{U}_G^t\cdot \vm{T}_{MG}^t\cdot \vm{B}_{M}
\label{energy_g}
\end{equation}
This gives the following system (see <ref> for the detailed construction):
\begin{equation}
\vm{A}_{gg}\cdot \vm{U}_g=\vm{B}_g
\label{g_sys}
\end{equation}
From a practical point of view, the sub-blocks of $\vm{T}_{MG}$, $\vm{A}_{MM}$ and $\vm{B}_{M}$ are large sparse matrices, so they are never fully assembled in memory but stored in blocks using a per macro-element strategy.
This strategy, which gathers in memory all the matrices and vectors necessary for the calculation relative to a macro element, is favorable to the reduction of cache misses and to multi-threading computation.
Moreover, during the loop, only the terms related to $\vm{F}^p$ (associated with the $\vm{T}_{Fe}$ operator given in <ref>) have to be updated at each scale iteration. Thus some sub-blocks of $\vm{A}_{gg}$ and $\vm{B}_g$ are constant during the iterations and are computed and assembled only once, at initialization time.
The INIT procedure (algorithm <ref> detailed in <ref>) is in charge of this initialization.
Let $\omega^{e_{macro}}$ be the region of $\Omega$ covered by a macro element $e_{macro}$.
$\omega^{e_{macro}}$ is discretized by one or more micro-elements of the reference mesh.
With two loops on all macro elements, the INIT procedure integrates and assembles (<ref>) over $\omega^{e_{macro}}$ with the elimination of (<ref>) but keeping Dirichlet terms.
The resulting matrices and vectors $\vm{A}_{FF}^{e_{macro}}$ and $\vm{B}_{F}^{e_{macro}}$ are stored for all SP elements.
From this first matrix assembly of $\vm{A}_{FF}$ and $\vm{B}_{F}$ per block and the creation of the sub blocks of $\vm{T}_{MG}$, all the constant terms of $\vm{A}_{gg}$ and $\vm{B}_g$ can be computed algebraically and assembled in two dedicated memory areas, $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$.
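The one-time assembly of the constant part of the global system can be sketched as follows, with random dense stand-ins for the per-macro-element blocks (the real implementation stores sparse blocks per macro element; names and sizes are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_G = 5                                  # size of the g-set (toy value)

# Assumed per-macro-element storage: each block holds the local matrix
# A_FF^e, the local load vector B_F^e and the local rows of T_MG.
blocks = []
for _ in range(4):                       # 4 fictitious macro elements
    n_F = 6
    M = rng.standard_normal((n_F, n_F))
    A_FF = M @ M.T + n_F * np.eye(n_F)   # SPD local block
    B_F = rng.standard_normal(n_F)
    T = rng.random((n_F, n_G))           # local slice of T_MG
    blocks.append((A_FF, B_F, T))

def init_constant_part(blocks, n_G):
    """One-time assembly of the constant terms A_gg^ini, B_g^ini by a loop
    over macro elements, in the spirit of the INIT procedure (sketch)."""
    A_ini = np.zeros((n_G, n_G))
    B_ini = np.zeros(n_G)
    for A_FF, B_F, T in blocks:
        A_ini += T.T @ A_FF @ T          # local Galerkin-type contribution
        B_ini += T.T @ B_F
    return A_ini, B_ini

A_gg_ini, B_g_ini = init_constant_part(blocks, n_G)
```

Accumulating the triple products element by element gives the same result as the global product, while keeping only per-element data in memory at any time.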
§.§ Global-to-local transfer
In this work, the idea is to obtain the $\vm{u}^{ts}$ field not by solving (<ref>) directly, but by solving (<ref>) and using (<ref>) to go from $\vm{U}^G$ to $\vm{u}_F$.
This last point is achieved by the UPDATE_MICRO_DOFS procedure (algorithm <ref> detailed in <ref>) which uses the same per-macro-element storage strategy and stores the $\vm{u}^{ts}$ field in the $\vm{u}_F^{e_{macro}}$ vectors attached to SP macro-elements.
Note that this procedure does not loop over the NSP elements, and that no information about the h-set is stored in memory because, in the computational tasks related to $\vm{u}^{ts}$, this information can be ignored, as we will see later.
§.§ Local problem
In this work, as in section <ref>, the solution of the local problem for a patch $p$ is given by solving (<ref>), but using a different mechanism to create the linear system.
Consider that $\omega_p$ is equal to $\bigcup\limits_{e_{macro}\in J^p}\omega_{e_{macro}}$, with $J^p$ the set of macro-elements of the patch $p$.
As the local problems are parts of the reference problem sharing with it its definition and discretization, the integration of (<ref>) has already been partially done when integrating (<ref>) over $\omega^{e_{macro}}$.
Thus, creating the matrix and vector for a patch $p$ is just assembling the blocks $\vm{A}_{FF}^{e_{macro}}$ and $\vm{B}_F^{e_{macro}}$ with $e_{macro}\in J^p$.
As for the Dirichlet boundary conditions on $\partial\omega_p$, they are directly obtained from the $\vm{u}^{ts}$ field ($\vm{u}_F^{e_{macro}}$ dofs).
This is strictly equivalent to using $\vm{U}^G$, because $\vm{u}_F$, which comes from (<ref>), is computed with the operator $\vm{T}_{MG}$ constructed from (<ref>), the same kinematic equation from which the space $\mathcal{M}^{\mathsf{G}}$ is constructed.
The application of these boundary conditions gives the following set of local linear systems:
\begin{equation}
\forall p\in I_e^g~\vm{A}_{qq}^p\cdot \vm{u}_q^p=\vm{B}_q^p
\label{p_sys}
\end{equation}
Again, it is the INIT procedure, with a loop over the patches, that is responsible for creating $\vm{A}_{qq}^p$ which is constant during the iteration.
Similarly, the constant part of the vector $\vm{B}_q^p$, called $\vm{BI}_q^p$, and the constant coupling Dirichlet term $\vm{D}_{qd}^p$ are computed in this loop over the patches (see <ref> for more details).
Note that the use of Dirichlet boundary conditions for local problems simplifies the software implementation and offers good performance for solving (<ref>): smaller system size, simple constant algebraic operator ($\vm{D}_{qd}^p$) for imposing the prescribed displacements, and the matrix is not polluted by additional parameters (springs) to be set.
But the consequence of this choice on the proposed loop has not been studied and should be addressed in future work.
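The pattern of factorizing the constant patch matrices once and reusing them at every scale iteration can be sketched with SciPy (the sign convention $\vm{B}_q^p = \vm{BI}_q^p - \vm{D}_{qd}^p\cdot\vm{u}_d$ for the Dirichlet coupling is our assumption; sizes and values are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n_q, n_d = 8, 3                          # free and Dirichlet dofs (toy sizes)

M = sp.random(n_q, n_q, density=0.4, random_state=0)
A_qq = (M + M.T + n_q * sp.eye(n_q)).tocsc()   # constant patch matrix
D_qd = sp.random(n_q, n_d, density=0.5, random_state=1).tocsc()
BI_q = rng.standard_normal(n_q)                # constant part of the load

lu = spla.splu(A_qq)   # factorized once at INIT, reused at every iteration

def micro_solve(u_d):
    """Per-iteration patch solve: only the Dirichlet data u_d (taken from
    the current u^ts field) changes between iterations."""
    B_q = BI_q - D_qd @ u_d
    return lu.solve(B_q)

u_d = rng.standard_normal(n_d)
u_q = micro_solve(u_d)
```

Keeping the factorization amortizes the $O(n^3)$-type cost over all scale iterations, leaving only triangular solves in the loop.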
§.§ The loop
When the INIT procedure is completed, all local structures associated with macro-elements or patches are allocated and partially computed.
Before entering the loop, the algorithm <ref> initializes the field $\vm{u}^{ts}$ by calling UPDATE_MICRO_DOFS with the vector $\vm{U}_g$, solution of a specific macro-scale problem (in general the unenriched macro-scale problem but it can come from another computation).
The algorithm <ref> then enters the loop and first computes the solutions $\vm{u}_q^p$ of (<ref>) by calling the MICRO-SCALE_RESOLUTION procedure (algorithm <ref> detailed in <ref>).
This procedure creates vectors $\vm{B}_q^p$ from the constant terms and the coupling Dirichlet terms multiplied by the current subset of $\vm{u}_F$.
It then solves the systems with a direct solver ($\ddagger$ of algorithm <ref>), factorizing the matrices only once.
Once all the $\vm{u}_q^p$ are available, the global-to-local step is done and the local-to-global step can start by calling the UPDATE_MACRO_PRB procedure (algorithm <ref> detailed in <ref>).
This procedure computes the enrichment functions from the solutions $\vm{u}_q^p$ and, using $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$, finishes the algebraic computation of $\vm{A}_{gg}$ and $\vm{B}_{g}$.
The system (<ref>) is then solved ($\dagger$ of algorithm <ref>), providing the solution $\vm{U}_g$.
A call to UPDATE_MICRO_DOFS with this new $\vm{U}_g$ updates the $\vm{u}^{ts}$ field so that the $resi$ criterion (<ref>) can be computed for the current iteration.
This is done by the COMPUTE_RESIDUAL procedure described in the next section.
The loop continues as long as $resi$ remains greater than the given tolerance $\epsilon$.
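The structure of the loop can be summarized by the following Python skeleton, where the procedures of algorithm <ref> are passed as callables (the toy stand-ins below merely halve a fictitious residual at each call; they are ours, not the paper's implementation):

```python
def two_scale_loop(procedures, eps, max_it=100):
    """Skeleton of the scale-iteration loop; procedure names follow the
    paper, their bodies are problem-specific callables."""
    p = procedures
    state = p["INIT"]()
    U_g = state["U_g"]                        # e.g. unenriched macro solution
    p["UPDATE_MICRO_DOFS"](state, U_g)        # initialize the u^ts field
    for it in range(1, max_it + 1):
        p["MICRO_SCALE_RESOLUTION"](state)    # solve all patch systems
        p["UPDATE_MACRO_PRB"](state)          # rebuild A_gg, B_g
        U_g = p["SOLVE_GLOBAL"](state)        # dagger resolution
        p["UPDATE_MICRO_DOFS"](state, U_g)
        resi = p["COMPUTE_RESIDUAL"](state)
        if resi <= eps:
            break
    return resi, it

# Toy stand-in where the residual is halved at each iteration:
procs = {
    "INIT": lambda: {"U_g": 0.0, "r": 1.0},
    "UPDATE_MICRO_DOFS": lambda s, U: None,
    "MICRO_SCALE_RESOLUTION": lambda s: None,
    "UPDATE_MACRO_PRB": lambda s: None,
    "SOLVE_GLOBAL": lambda s: 0.0,
    "COMPUTE_RESIDUAL": lambda s: s.__setitem__("r", s["r"] / 2) or s["r"],
}
resi, n_it = two_scale_loop(procs, eps=1e-3)
```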
§.§ Residual criterion computation
The numerator of equation (<ref>) can be split as follows:
\begin{equation}
resi=\frac{\left \| \begin{array}{c}
\vm{A}_{hr}\cdot \vm{u}_{r}-\vm{B}_{h}\\
\vm{A}_{fr}\cdot \vm{u}_{r}-\vm{B}_{f}
\end{array} \right \|}{\left \|\vm{B}_{r} \right \|}
\label{residual_error_split}
\end{equation}
In this work, the h-set part of the residual vector is considered as zero.
This is justified by the fact that the discretization of the NSP elements at both scales is the same: the h-set part of the r-set is the same as the h-set part of the g-set.
Thus, since the $\dagger$ resolution, when performed by a direct solver, yields a residual error of the g-set system that is zero (to machine accuracy), the whole residual vector of that system is zero, and in particular its h-set rows.
The computation of $resi$ is therefore simplified to:
\begin{equation}
resi=\frac{\left \|
\vm{A}_{fr}\cdot \vm{u}_{r}-\vm{B}_{f}
\right \|}{\left \|\vm{B}_{r} \right \|}
\label{residual_error_simp}
\end{equation}
The organization of the data by macro-element both simplifies and complicates the computation of (<ref>) performed by the COMPUTE_RESIDUAL procedure (algorithm <ref> detailed in <ref>).
The simplicity comes from the computation by $e_{macro}$ of the vector $\vm{A}_{FF}^{e_{macro}}\cdot\vm{u}_{F}^{e_{macro}}-\vm{B}_{F}^{e_{macro}}$ with all the vectors and the matrix available as data attached to the $e_{macro}$ element.
The complexity comes from the accumulation of special local scalar products of these vectors to obtain the term $\left \| \vm{A}_{fr}\cdot \vm{u}_{r}-\vm{B}_{f} \right \|$.
The $\left \|\vm{B}_{r} \right \|$ term of equation (<ref>) is computed by the COMPUTE_B_NORM procedure described by algorithm <ref> in <ref>.
This algorithm is very similar to algorithm <ref> in that the scalar product of the vector $\vm{B}_r$ with itself is transformed into a sum of local dot products.
But as $\vm{B}_r$ remains unchanged during the loop, this procedure is called only once, at initialization, by the INIT procedure.
The $\left \|\vm{B}_{r} \right \|$ term is therefore supplied as an argument to the COMPUTE_RESIDUAL procedure, which computes the $resi$ numerator and simply divides by this argument to obtain $resi$.
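A possible sketch of this accumulation handles the dofs shared by several macro elements by dividing each local squared entry by its multiplicity; this weighting is our illustrative stand-in for the special local scalar product:

```python
import numpy as np

def accumulate_residual_norm(local_residuals, multiplicities):
    """Accumulate per-macro-element residual contributions into a global
    Euclidean norm.  A dof shared by m elements appears m times, so each
    local squared entry is divided by its multiplicity to count it once."""
    acc = 0.0
    for r_e, m_e in zip(local_residuals, multiplicities):
        acc += np.sum(r_e**2 / m_e)
    return np.sqrt(acc)

# Two elements sharing the middle dof of a global residual r = [1, 2, 3]:
r_a, m_a = np.array([1.0, 2.0]), np.array([1.0, 2.0])   # local dofs (0, 1)
r_b, m_b = np.array([2.0, 3.0]), np.array([2.0, 1.0])   # local dofs (1, 2)
norm = accumulate_residual_norm([r_a, r_b], [m_a, m_b])
```

With this weighting the accumulated value equals the norm of the assembled global residual vector.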
§.§ Enrichment function
In this work we choose to address the blending-element problem by using a shift to cancel the enrichment function at the coarse-scale enriched node location and by forcing the discretization transition to be embedded in patches, in contrast with the SGFEM [4] presented in (<ref>).
This last point is somewhat similar to the idea in [9] where enrichment was added to nodes initially not enriched in blending elements, with the use of a special ramp function to correct the original enrichment function.
Here, mixed-discretization patches are added, so enrichments are added (red node in figure <ref>) to initially unenriched blending-element nodes (unenriched because their support did not cover the behavior being captured).
And the treatment of hanging nodes replaces the effect of the ramp function.
In this way, the original blending elements (orange element connected to the red node in figure <ref>) regain their partition-of-unity property, and the new blending elements (yellow element in figure <ref>) are not perturbed, thanks to the use of a null enrichment function.
Some tests, not reported in this work, show that the proposed enrichment gives good convergence compared to that given by the SGFEM enrichment.
In this work, the enrichment function $\vm{F}^p(\vm{x})$ is:
\begin{equation}
\forall p\in I_{e}^g,~\forall \vm{x}\in \omega_p ~\vm{F}^p(\vm{x})=\vm{u}^{Q^p}(\vm{x})-\vm{u}^{Q^p}(\vm{x}^p)~ \text{and}~ \forall \vm{x}\notin \omega_p~\vm{F}^p(\vm{x})=\vm{0}
\label{shift_enrich}
\end{equation}
* $\vm{u}^{Q^p}(\vm{x})$ and $I_{e}^g$ are defined in section <ref>
* $\vm{x}^p$ is the location of the enriched node $p$
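The shifted enrichment of the equation above can be sketched directly in code; here $\vm{u}^{Q^p}$ is replaced by an arbitrary smooth function for illustration (names and values are ours):

```python
import numpy as np

def make_enrichment(u_Q, x_p, omega_p):
    """Shifted enrichment: F^p(x) = u^Q(x) - u^Q(x_p) inside the patch,
    0 outside.  `u_Q` is a stand-in for the local fine-scale solution;
    `omega_p(x)` tests patch membership."""
    shift = u_Q(x_p)
    def F(x):
        return u_Q(x) - shift if omega_p(x) else 0.0
    return F

# 1D example: patch omega_p = [0, 2], enriched node at x_p = 1
F = make_enrichment(u_Q=lambda x: np.sin(3 * x), x_p=1.0,
                    omega_p=lambda x: 0.0 <= x <= 2.0)
val = F(0.5)   # nonzero inside the patch, away from the enriched node
```

By construction the function vanishes both at the enriched node (through the shift) and outside the patch.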
Another specificity of this work is the requirement that $\vm{u}^{ts}$ converge to $\vm{u}^{R}$, which can only be achieved if the enrichment function is linearly interpolated on the reference mesh, following the construction and use of $\vm{T}_{MG}$.
Thus, the kinematic equation actually used at the global scale, corresponding to the application of the $\vm{T}_{MG}$ operator, is:
\begin{equation}
\vm{U}(\vm{x})=\displaystyle\sum_{i\in I^g} N^i(\vm{x})~ \vm{U}^{i} + \displaystyle\sum_{p\in I_{e}^g} \vm{E}^{p} \displaystyle\sum_{m\in {\bar{I}_p^l}} n^m(\vm{x})~ N^p(\vm{x}_m)~\vm{F}^p(\vm{x}_m)
\label{kinematicnew}
\end{equation}
where $\bar{I}_p^l={I}_p^l\setminus \left\lbrace \text{hanging nodes} \right\rbrace$ since the L-set has also been eliminated in the patch problems.
The construction of the second term of (<ref>) is first illustrated, in 1D, in figure <ref>, where artificial elevations show the different quantities on the 8 micro-edges of a patch $p$ (composed of 2 macro-edges not shown here).
(Figure <ref>: construction of the interpolated generalized approximation function $\displaystyle\sum_{m\in {\bar{I}_p^l}} n^m(\vm{x})~N^p(\vm{x}_m)~\vm{F}^p(\vm{x}_m)$ for a 1D patch $p$; for visualization purposes, the quantities are drawn as artificial elevations of the patch mesh.)
The same type of illustration, for the 2D example of figure <ref>, is shown in figure <ref>.
Again, artificial elevations show the various quantities outside the plane formed by a patch $p$.
(Figure <ref>: construction of the interpolated generalized approximation function $\displaystyle\sum_{m\in {\bar{I}_p^l}} n^m(\vm{x})~N^p(\vm{x}_m)~\vm{F}^p(\vm{x}_m)$ for a 2D patch $p$, based on the example of figure <ref>; the quantities, transformed into scalars, are drawn as artificial elevations outside the plane of the patch mesh, each construction step being shown from two viewpoints for better visualization.)
The solutions $\vm{u}^{Q^p}(\vm{x})$ (figures <ref> and <ref>) are completely arbitrary.
The enrichment function $\vm{F}^p(\vm{x})$ corresponding to equation (<ref>) is shown in figures <ref> and <ref>.
By construction, this function is null at the location of the enriched node.
When multiplied by the standard finite element approximation function to form the generalized finite element approximation function (figures <ref> and <ref>), all the boundary values of the patches become zero and second-order curves/surfaces are obtained.
The final linear interpolation is shown in figures <ref> and <ref>.
In the 2D example, where a mixture of refined and unrefined elements forms the patch, the interpolated generalized function is necessarily zero on the unrefined elements (yellow blending elements in figure <ref>).
This property implies that blending elements have unenriched nodes and enriched nodes with zero contribution from the enrichment function.
Thus, the enrichment contribution is zero on a blending element and the partition of unity is preserved, because only the classical terms matter.
For refined elements, $N^p(\vm{x}_m)~\vm{F}^p(\vm{x}_m)$ does not vanish inside the patch (provided $\vm{F}^p(\vm{x}_m)\neq\vm{0}$).
As for the hanging nodes, since they are eliminated from $\bar{I}_p^l$, their enrichment values are zero, which is consistent with the values in the unrefined elements.
Note that if the refined elements correspond only to a terminal connection specific to the adaptation tool (element added between figure <ref> and figure <ref> in <ref>), then, despite the refinement, the associated generalized finite element approximation function is zero on these elements.
To avoid this situation, which can lead to a singular problem, the refined SP elements are always divided at least once.
§ THEORETICAL SEQUENTIAL ALGEBRA PERFORMANCE
Since the method uses resolutions involving sparse matrices of different patterns and densities, and since the number of loop iterations is not known in advance, estimating the algebraic performance of the method in a general context is difficult.
It is even more complex when using parallelism.
Thus, to obtain the numerical complexity of the solver, we use a simple sequential numerical example for which we can count the number of flops [flop = floating-point operation: one addition, subtraction, division or multiplication of two floating-point numbers] in each part of the algorithm.
The number of flops used will then be expressed as a polynomial function of the problem dimensions keeping only the leading terms to obtain the order of magnitude ($\mathcal{O}(n)$ notation).
If we take a cube meshed with a single first-order hexahedral finite element in 3D, we can, using octree refinement, simply count the coarse-scale dofs, the patches and the patch dofs.
For the linear resolution steps of the method, only a direct sparse solver is considered.
The number of flops for the factorization, in terms of the dimension $n$ of the problem, is then approximately between $n^2$ and $n^3$ (dense matrix).
And for the backward and forward substitutions, the cost is between $2n$ and $2n^2$.
Depending on the strategy of the solver (multi-frontal, left-looking, ...), the reordering of the symbolic factorization (nested dissection, minimum degree, ...) and the density of the initial matrix, the number of flops varies greatly and depends on a variable number of parameters.
Thus, to estimate these quantities, a sparse ratio parameter $SR$ is introduced such that the dimension used to count the number of flops $n_f$ is:
\begin{equation}
n_f(n,SR)=\sqrt{SR\cdot n^2}
\end{equation}
with $SR\in ]0,1]$.
Then the numbers of flops for the factorization ($count_{fact}$), the backward/forward substitutions ($count_{bf}$) and the full resolution ($count_{resolv}$) are, using a dense estimation:
\begin{gather}
count_{resolv}(n,SR)=count_{fact}(n,SR)+count_{bf}(n,SR) \label{TAP:costr}\\
count_{fact}(n,SR)=\left(n_f(n,SR)\right)^3 \\
count_{bf}(n,SR)=2\left(n_f(n,SR)\right)^2
\end{gather}
This $SR$ parameter is completely arbitrary.
It is introduced to verify that the conclusions obtained in this section are not impacted by the lack of accurate estimation of the factorization and solving phase.
For the discretization, the root of the octree uses one element to represent the problem; it corresponds to level zero.
A level, noted $L$ in the following, describes the depth in the octree tree.
With $L=0$ being the starting level, to go from $L$ to $L+1$, the octree refinement adds a new node in the middle of all edges, faces and elements of level $L$.
Then, each element of the level $L$ is replaced by 8 hexahedra encapsulated in it and connected to the old and new nodes.
The number of classical dofs for a level $L$ is easily deduced from this refinement strategy and is:
\begin{equation}nb_{dof}(L) = 3\,(2^{L}+1)^{3}
\label{TAP:nbdofL}
\end{equation}
Thus, the coarse enriched problem defined at level $L_c$ will cost, for $N_l$ iterations with a sparse ratio parameter $SR_c$, using (<ref>) and (<ref>):
\begin{equation}
\label{TAP:costc}
cost_{coarse}(L_{c},N_l,SR_c)=N_l\times count_{resolv}(2\times nb_{dof}(L_{c}),SR_c)
\end{equation}
(Figure <ref>: floating-point estimations for different values of $L_c$, $L$, $SR_c$ (sparse ratio for the coarse problem), $SR_p$ (sparse ratio for the fine problems) and $N_l$ (number of iterations); one panel shows the ratio of floating-point estimations (full rank over proposed resolution) with $N_l=30$, $SR_c=SR_p=0.017$.)
For a given coarse level $L_c$, it is possible to obtain in the same way the cost of computing patches (factorization and solving) for $N_l$ iterations, a sparse ratio parameter $SR_p$ and a final refinement level $L$.
This is given by the function $cost_{patch}(L_c,L,N_l,SR_p)$ detailed in <ref>.
Using (<ref>) and (<ref>), the total cost of the method is:
\begin{equation}
cost_{ts}(L_{c},L,N_l,SR_c,SR_p)=cost_{coarse}(L_{c},N_l,SR_c)+cost_{patch}(L_c,L,N_l,SR_p)
\label{TAP:cost_ts}
\end{equation}
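The flop-count model of this section can be reproduced in a few lines. The formula used below for the patch cost (one patch per coarse node, each patch holding the dofs of the local refinement from $L_c$ to $L$) is our crude stand-in for the appendix formula, for illustration only:

```python
def nb_dof(L):
    """Classical dof count at octree level L: 3 (2^L + 1)^3."""
    return 3 * (2**L + 1) ** 3

def n_f(n, SR):
    """Effective dimension used to count flops: sqrt(SR * n^2)."""
    return (SR * n**2) ** 0.5

def count_resolv(n, SR):
    """Dense estimation: factorization n_f^3 plus back/forward 2 n_f^2."""
    return n_f(n, SR) ** 3 + 2 * n_f(n, SR) ** 2

def cost_coarse(L_c, N_l, SR_c):
    """Cost of the enriched coarse problem over N_l iterations."""
    return N_l * count_resolv(2 * nb_dof(L_c), SR_c)

def cost_patch(L_c, L, N_l, SR_p):
    # Crude stand-in (our assumption): one patch per coarse node, each
    # patch holding the dofs of the local refinement from L_c to L.
    return N_l * (2**L_c + 1) ** 3 * count_resolv(nb_dof(L - L_c), SR_p)

def cost_ts(L_c, L, N_l, SR_c, SR_p):
    return cost_coarse(L_c, N_l, SR_c) + cost_patch(L_c, L, N_l, SR_p)

# The iso-L curve exhibits an interior minimum in L_c:
L, N_l, SR = 8, 30, 0.017
costs = {L_c: cost_ts(L_c, L, N_l, SR, SR) for L_c in range(1, L)}
best = min(costs, key=costs.get)
```

Even with this simplified patch model, the antagonistic growth of the two contributions produces an interior optimum for $L_c$, as discussed below.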
Fixing $N_l$, $SR_c$ and $SR_p$, evaluating equation (<ref>) for a range of values of $L_c$ and $L$ gives the surface of figure <ref>.
We can see on this continuous representation of (<ref>) that the iso-$L$ curves have a minimum for a specific $L_c$ value.
This means that the method has an optimal coarse level for a fixed target level, the one providing the best performance (i.e. the lowest flop consumption).
This can be understood by looking at figure <ref> for a specific target $L=8$ and fixed $N_l$, $SR_c$, $SR_p$, where equation (<ref>) is split into the patch contribution (equation (<ref>)) and the coarse-level contribution (equation (<ref>)).
In this figure, it is clear that the antagonistic evolutions of the cost of the patch solves and of the coarse-scale solve lead to a minimum at the intersection of the curves.
In figure <ref>, varying the parameters $N_l$, $SR_c$, $SR_p$ does not change the fact that a minimum exists.
In this example where $L=8$, when the number of loop iterations increases, the location of the minimum remains the same.
When the sparsity of the matrices varies (up to the dense case), the minimum moves around its original position depending on whether the coarse problem or the patch problems become denser.
With denser patch matrices, it is naturally preferable to increase $L_c$ so as to obtain smaller patches.
And with a denser coarse matrix, it is better to decrease $L_c$ so as to obtain a smaller coarse problem to solve.
When both sparsities increase, the minimum remains the same but with a higher value.
Figure <ref> presents, as a ratio, an estimate of the performance of a full-rank solver (using equation (<ref>)) compared to the proposed solver.
A ratio greater than, equal to, or less than 1 indicates that the full-rank solver uses more, the same, or fewer flops than the proposed solver, respectively.
The surface naturally has a maximum on iso-$L$ curves at the same $L_c$ as the minimum of figure <ref> (on iso-$L$ curves the cost of the full-rank solver is constant).
In this example, the theoretical performance of the solver compared to the full-rank solver is much better for $L>2$.
For the iso-$L=2$ curve, the optimal ratio, reached at $L_c=1$, is only 1.76.
And for $L=1$, as the patch problem sizes are then almost equivalent to the $L_c$ level of discretization, no gain is obtained.
This illustrates the fact that the method, to be effective, must be used at a target level that allows starting from a coarse level inducing a scaling effect.
The conclusions drawn from the analysis of this octree refinement example were also observed with the different test cases studied below.
Parallelism, the nature of the unstructured mesh, and the cost of assembly also impact the choice of the optimal coarse mesh for a given fine-scale target discretization.
But the observation of the test case <ref> led to the following formula to define the scale jump for most cases with regular refinement:
\begin{equation}
\label{ts_jump_level_eq}
\end{equation}
(Figure <ref>: the fictitious 2D problem of figure <ref> distributed over processes. Panels: weight given by the micro embedded element count; global element dispatching on 4 and on 8 processes, partially based on this weight; fine-scale problems (2 out of 25) following the global element dispatching on 4 processes. The annotations distinguish global-scale nodes with distributed or non-distributed support, the global-scale and fine-scale meshes, process ids, inter-process interfaces, process boundaries, and the global/local communicator process ids.)
§ PARALLEL PARADIGM
This work does not use shared-memory multithreaded parallelism; only the distributed-memory message-passing paradigm is used [In this work we use the MPI-3.0 (or higher) message passing standard.].
This choice comes from the thread-safety constraint which would have generated too much implementation work for this first study (for example the in-house library used to code the method is not thread-safe).
As a result, multi-threading is never used in this study and, to be fair when comparing with other methods, their multi-threading capabilities were also disabled.
Thus, in this section, parallelism will always concern the message passing paradigm.
Parallelism is introduced at all scales in all frameworks, but with quite different granularities.
At the global level, the mesh is distributed over processes (see figures <ref> and <ref> for the example of figure <ref> distributed over 4 and 8 processes respectively), based on a macro-element partitioning, in order to allow a good load balancing.
This partitioning (not evaluated in depth in this paper), given by ParMetis ([16]) in this work, is supposed to balance the cost of creating linear systems, assembling them and solving them at both scales.
To do this, a cost per macro-element is obtained directly from the number of its encapsulated micro-elements (see figure <ref>) and its number of enriched nodes.
If the entire global-scale mesh is refined (as in the examples of sections <ref> and <ref>), all macro-elements have roughly the same number of encapsulated micro-elements and enriched nodes.
Thus, the cost per element can be considered constant.
Now, if only a specific region is refined (see the 2D example in figure <ref> or the example in section <ref>), the number of encapsulated micro-elements and enriched nodes provides weights for the ParMetis multi-objective partitioning optimization algorithm.
In the examples studied, this partitioning provides good scalability for the assembly task, as all processes receive roughly the same number of micro-elements to integrate and assemble (see in particular table <ref> in section <ref>).
It also allows the use of a fairly well balanced parallel resolution for the linear system related to the global-scale problem ($\dagger$ in the algorithm <ref>).
Section <ref> gives more details on the choice of solver at this level.
At the fine scale, patches follow the distribution of the global scale mesh (see figure <ref>) and are therefore naturally distributed in most processes.
This results, as in [18], in a high level of coarse-grained parallelism because many fine-scale problems are solved independently (without communication) at the same time by many processes.
However, some patches are split across computational units due to the global mesh distribution (see the left patch in figure <ref>).
In this work, the distributed patch problems are solved entirely in parallel without any overlapping mechanism (phantom variables, etc.).
The assembly of the problems ($\vm{A}_{qq}^{p}$ and $\vm{BI}_{q}^{p}$ of algorithm <ref> and $\vm{B}_{q}^{p}$ of algorithm <ref>) is thus done in parallel without communication.
Then, the matrix contributions of the linear system are provided to a parallel solver ($\ddagger$ of the algorithm <ref>) and treated with a mid-grain parallelism.
However, this introduces some constraints on the overall fine-scale parallel resolution: the distributed patches are no longer independent.
Typically, if two distributed patches share the same process, their resolutions cannot both run at the same time in that process, because the solver cannot handle two independent matrices simultaneously.
The computation of distributed patches must therefore be scheduled, as explained in <ref>.
These distributed patches have an ambivalent impact on performance.
On the one hand, they reduce the time to solve a rather large fine-scale problem.
On the other hand, if their number increases too much, it increases the amount of communication and can also affect the sequencing.
§.§ Global-scale problem parallel resolution
In this work, we first chose an asynchronous parallel MPI sparse direct solver (MUMPS [2, 3]) for the resolution of the linear system associated with the global problem.
It was a safe choice for this first implementation (it remains robust even if the matrix becomes badly conditioned), but by no means a requirement.
The symbolic step is performed only once at the beginning of the loop.
Factorization and solving step are repeated for each iteration with the updated $\vm{A}_{gg}$ and $\vm{B}_{g}$.
With many patches, since the fine-scale resolution has good scalability (see the results presented in section <ref>), the number of processes can grow well beyond the scalability limit of the parallel direct solver, which solves, at the coarse scale, a problem of rather smaller size.
To limit this effect, in all tested simulations, only a subset of the available processes (at most $nbp_{max}$ processes, $nbp_{max}$ being chosen arbitrarily to stay within the range of good performance of the direct solver) was used to solve this coarse-scale problem, gathering the $\vm{A}_{gg}$ contributions of the processes eliminated from the resolution into the $nbp_{max}$ retained processes.
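The rank bookkeeping behind this reduction can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names are mine, and the round-robin mapping of eliminated ranks to retained ranks is an assumption (the actual gathering scheme is not specified in the text).

```python
def coarse_solve_ranks(nprocs, nbp_max):
    # Ranks retained for the coarse-scale direct solve: at most nbp_max.
    return list(range(min(nprocs, nbp_max)))

def gather_target(rank, nprocs, nbp_max):
    # A rank eliminated from the coarse solve sends its A_gg / B_g
    # contributions to one retained rank (assumed round-robin mapping).
    keep = min(nprocs, nbp_max)
    return rank if rank < keep else rank % keep
```

With such a mapping, each retained process receives the contributions of roughly the same number of eliminated processes, so the gathering itself stays balanced.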
This version is referred to as below.
This reduction had a positive effect, but the overall speed-up (the ratio between the time used by an application in single-process mode and the time used in multi-process mode; ideally, it equals the number of processes used) of the resolution was still degraded by the coarse-scale efficiency.
To reduce this effect, since the loop iterates over a coarse-scale problem whose discretization does not change, it is possible to use a parallel preconditioned iterative solver (conjugate gradient) warm-started from the solution of the previous iteration.
This choice intrinsically reduces the coarse-scale resolution cost by exploiting the fact that successive solutions are not too far apart from each other, so that the number of iterations of the iterative solver should be small.
It also adds scalability, because the conjugate gradient scales well (up to the point where the scalar product becomes a bottleneck).
A natural choice for the preconditioner is a previous factorization performed by the direct solver.
The preconditioning step then only costs a forward/backward substitution, performed in parallel, following the distribution of the direct solver.
If not done too many times, this is cheap compared to a full factorization.
Note that this technique does not take into account the stable nature of the classical dofs of the coarse-scale problem, as is the case in [19].
Doing so would have added complexity regarding load balancing and resolution.
Nevertheless, the proposed approach offers a priori a slightly richer preconditioner than the block Jacobi proposed in that reference, because it incorporates all terms (including the classical $\times$ enriched coupling blocks).
The use of the iterative solver and the quality of the preconditioner are controlled using the following arbitrary rules:
* The iterative resolution is used instead of the direct resolution as soon as the criterion "$resi$" becomes smaller than $\epsilon\times 10000$ ($\epsilon$ being the precision to be reached by "$resi$" in algorithm <ref>) and at least 2 iterations have been performed.
* When the iterative resolution takes more than 13 iterations to converge, the preconditioner is recomputed: at the next iteration, the direct solver is used again to provide a new solution and a new factorization.
The iterative solver is then reused in the following iterations.
The second rule avoids spending too much time in the iterative solver when the preconditioner is not of high quality.
The first rule requires waiting for the solution to reach a minimum quality, so that the variation between iterations is small enough for the preconditioner to efficiently reduce the number of iterations of the iterative solver.
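The two rules can be condensed into a small decision function. A hypothetical sketch, assuming the interface below (the function name and arguments are mine; the thresholds $\epsilon\times 10000$ and 13 CG iterations are those stated above; the actual MUMPS factorization and preconditioned CG are not shown):

```python
def choose_coarse_solver(resi, outer_iter, last_cg_iters, eps):
    # 'direct'    = factorization + solve by the direct solver (this also
    #               refreshes the preconditioner of the iterative phase);
    # 'iterative' = conjugate gradient, warm-started from the previous
    #               solution, preconditioned by the last factorization.
    if last_cg_iters is not None and last_cg_iters > 13:
        return 'direct'      # rule 2: preconditioner too stale, refactor
    if resi < eps * 10000 and outer_iter >= 2:
        return 'iterative'   # rule 1: solution close enough to benefit
    return 'direct'
```

The loop would call this once per iteration, feeding back the CG iteration count of the previous iterative solve (or `None` after a direct solve).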
This version is hereafter referred to as .
Also note that in this version, for the direct resolution phase, low-rank resolution (see <ref>) could be used instead of full-rank resolution; this is left as future work.
At the coarse scale, another type of solver (a domain decomposition solver, presented in <ref>) was also tested during this study in order to gain scalability, as illustrated in section <ref>.
[Figure panels: weight per patch; initial graph with arbitrary node ids; sequences 1 to 8; remaining local patches. Legend: edges related to processes 0, 1, 2 and 3; first node selected for the sequence; edges followed to find the nodes impacted by the first choice; nodes blocked by the first choice; other node(s) selected for the sequence; edges followed to find the nodes impacted by the other choice(s); nodes blocked by the other choice(s); nodes with distributed support; nodes with non-distributed support.]
Patch sequencing for figure <ref> example. At each sequence, a set $\mathcal{S}$ is chosen so that as many distributed patches as possible are solved at the same time and during the same time period. A sequence is first built by the lowest process which chooses one of its distributed nodes (red node) in the graph. Then, all the other processes that are not already blocked by this first choice do the same (blue node) in ascending order of the process identification numbers. See <ref> for a detailed description and the table <ref> for each $\mathcal{S}$. After a sequence is constructed, its nodes and connected edges are removed from the graph to give a new sub-graph used to create the next sequence until no more distributed nodes are available.
§.§ Local-scale problem scheduling
The 2D example in figure <ref> will be used to illustrate the static scheduling algorithm.
As already mentioned, two distributed patches that share the same process cannot start solving their linear system at the same time in that process.
To avoid this situation, a simple idea is to force distributed patches that share the same process to be computed at different times.
To do this, a static scheduling (avoiding dynamic scheduling is normally more efficient) is created that requires all distributed patches over all processes to be computed in a specific order during the loop.
The general idea is to solve as many distributed patches as possible at once, thus maximizing the use of processes and reducing the number of sequences.
To achieve this goal, the first mandatory task is to identify the dependencies between the distributed patches so that they can be sequenced.
A distributed patch depends on another if it shares a process with it.
This dependency can be extended to local patches to create an undirected graph where the vertices are all patches and the edges represent a process connection (i.e. two patches are connected by an edge if they belong even partially to the same process).
Using the mesh distribution of figure <ref>, this graph concept is illustrated in figure <ref>: the vertices of the graph are patches (for quick visual recognition, each vertex is located at the enriched node of its patch and has been assigned an arbitrary identifier), and its edges represent process connections (they are colored in the same way as the macro-elements of figure <ref>, to clearly identify the process identifiers given in figure <ref>).
From this undirected graph, it is possible to build a set $\mathcal{S}$ of distributed vertices such that no edge connects any two vertices of $\mathcal{S}$.
These vertices can then be computed at the same time, since they do not share a common process.
Finding such a set $\mathcal{S}$ (not unique) corresponds in graph theory to the "independent set problem", and each set found will form one sequence of the scheduling.
It is interesting to maximize the size of $\mathcal{S}$ (which corresponds to the NP-hard "maximum independent set problem"): on the one hand, it maximizes a priori the number of processes used, and therefore the load balancing is good; on the other hand, the distributed vertices are consumed faster, and therefore the number of sequences can be minimized.
When the set $\mathcal{S}$ is assigned to a sequence (mainly by creating a colored MPI communicator associated with it), its vertices (and their connecting edges) can be removed from the graph.
The maximum independent set problem is then solved on the remaining subgraph to find the next sequence.
All sequences are found this way until all distributed patches are removed from the subgraph.
Then the remaining local patches can be processed in any order as they are independent.
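The graph construction and greedy sequencing just described can be sketched as follows. This is an illustrative, sub-optimal stand-in (function names are mine): the paper's heuristic lets processes pick nodes in ascending rank order, whereas this sketch simply scans patch ids in ascending order.

```python
def build_patch_graph(patch_procs):
    # patch_procs: patch id -> set of process ranks owning (part of) it.
    # Two patches are connected when they share at least one process.
    adj = {p: set() for p in patch_procs}
    ids = sorted(patch_procs)
    for a in ids:
        for b in ids:
            if a < b and patch_procs[a] & patch_procs[b]:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def schedule_distributed(patch_procs):
    # Greedy sequencing: repeatedly extract a maximal independent set of
    # the remaining distributed patches (those owned by >1 process).
    adj = build_patch_graph(patch_procs)
    remaining = {p for p, procs in patch_procs.items() if len(procs) > 1}
    sequences = []
    while remaining:
        blocked, seq = set(), []
        for p in sorted(remaining):
            if p not in blocked:
                seq.append(p)
                blocked |= adj[p] | {p}
        sequences.append(seq)
        remaining -= set(seq)
    return sequences
```

By construction, no two patches of the same sequence share a process, so all patches of a sequence can be solved simultaneously; the local (single-process) patches are left out and processed afterwards in any order.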
[Distribution of figure <ref>]
Total number of distributed patches: 11
Maximum number of distributed patches per process: 8
Minimum number of distributed patches per process: 4
Average number of distributed patches per process: 6

sequence | active patches | active distributed patches | % active distributed patches | active processes | % active processes | $\mathcal{S}$ | figure
1        | 2   | 1   | 50   | 4 | 100 | 18,3       | <ref>
2        | 3   | 1   | 33.3 | 4 | 100 | 12,2,20    | <ref>
3        | 2   | 1   | 50   | 4 | 100 | 10,21      | <ref>
4        | 3   | 1   | 33.3 | 4 | 100 | 13,1,23    | <ref>
5        | 2   | 2   | 100  | 4 | 100 | 17,6       | <ref>
6        | 2   | 2   | 100  | 4 | 100 | 7,19       | <ref>
7        | 3   | 1   | 33.3 | 4 | 100 | 8,11,22    | <ref>
8        | 2   | 2   | 100  | 4 | 100 | 15,5       | <ref>
average  | 2.4 | 1.4 | 62.5 | 4 | 100 |            |
[Distribution of figure <ref>]
Total number of distributed patches: 17
Maximum number of distributed patches per process: 6
Minimum number of distributed patches per process: 4
Average number of distributed patches per process: 5.1

sequence | active patches | active distributed patches | % active distributed patches | active processes | % active processes | $\mathcal{S}$
1        | 3   | 3   | 100  | 8   | 100  | 12,7,17
2        | 3   | 2   | 66.7 | 8   | 100  | 10,18,21
3        | 3   | 3   | 100  | 8   | 100  | 13,6,20
4        | 4   | 4   | 100  | 8   | 100  | 8,5,19,15
5        | 4   | 4   | 100  | 8   | 100  | 14,3,11,25
6        | 3   | 1   | 33.3 | 4   | 50   | 9,1,23
average  | 3.3 | 2.8 | 83.3 | 7.3 | 91.7 |
Distributed-patch sequencing with algorithm <ref> for the fictitious 2D problem of figure <ref> distributed over 4 and 8 processes (figure <ref>). The averages do not correspond to the computation of all patches, because they do not take into account the computation of the non-constrained local patches.
The condition of maximizing the size of $\mathcal{S}$ is not sufficient to obtain a balanced load.
The vertices of $\mathcal{S}$ must have the same "load" (in a sequence, if a vertex $v_i$ of $\mathcal{S}$ takes longer to compute than the others, most of the process will be idle while waiting for $v_i$ to finish).
Thus, with distributed patches having different computational consumption (called weights hereafter), the choice of $\mathcal{S}$ must be oriented by these weights in order to have approximately the same "load" for all the vertices of $\mathcal{S}$.
In this work, these weights are simply the number of micro-elements in a patch which reflects its number of dofs and thus its computational cost (see figure <ref> computed from the weights in figure <ref>).
The NP-hard nature of the maximum independent set problem, this weighting constraint, and the a priori distributed nature of the undirected graph lead to the choice of a basic sub-optimal heuristic to quickly find a static scheduling.
More precisely, the chosen algorithm, formalized in algorithms <ref>, <ref> and <ref> and detailed in <ref>, is illustrated in table <ref> and figure <ref> for the example of figure <ref>.
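Since the appendix algorithms are not reproduced here, the following hypothetical sketch only illustrates how the weights can steer the construction of one sequence: the first pick fixes a target load, and subsequent picks favour unblocked patches whose weight is closest to that target, so that no process idles too long waiting for a heavier patch. The function name and interface are mine.

```python
def one_weighted_sequence(remaining, weights, adj):
    # remaining: set of distributed patch ids still to schedule;
    # weights[p]: number of micro-elements of patch p (its cost proxy);
    # adj[p]: patches sharing a process with p (cannot run with it).
    first = max(remaining, key=lambda p: weights[p])  # sets target load
    target = weights[first]
    seq, blocked = [first], adj[first] | {first}
    for p in sorted(remaining - blocked,
                    key=lambda p: abs(weights[p] - target)):
        if p not in blocked:
            seq.append(p)
            blocked |= adj[p] | {p}
    return seq
```

Repeating this on the remaining subgraph yields the full schedule, as in the unweighted case.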
In this 2D example, 8 sequences are required, which corresponds to the maximum number of distributed patches per process: in the process with 8 distributed patches, there cannot be less than 8 sequences.
With the example of figure <ref>, distributed on 8 processes (see figure <ref>), the table <ref> shows that in this case 6 sequences must be computed.
But in this case, the last sequence is less efficient because processes 3, 4, 5 and 7 are not used.
This shows that the scalability of the distributed method will be influenced by the quality of the algorithm <ref>.
In section <ref>, and in particular in table <ref> of that section, we study the effect of this heuristic on a real example.
§ NUMERICAL SIMULATION
All simulations were run on the Liger cluster (see <ref> for details) in the exclusive-node condition (no other users at the node level, but shared resources at the network and I/O levels).
In many cases, the solver is compared to the full rank direct solver MUMPS (denoted by "fr" hereafter), to a low-rank direct solver based on MUMPS (see <ref>, denoted by "blr" hereafter) and to a domain decomposition solver (see <ref>, denoted by "dd" hereafter).
Hereinafter, the and versions are referred to as "ts" and "tsi" respectively.
In terms of notation, $E$ will denote the Young's modulus and $\nu$ the Poisson's ratio.
§.§ Analytic solution
This test is intended to validate the proposed work using a given cubic displacement field over a plate:
Figure: Analytical test: plate under volume and surface loading. A, B, and C are the corner points used to fix the rigid body modes.
\begin{equation}
\Vec{u}^C(x,y,z)= \frac{(\nu+1)(1-2\nu)F}{E}\left[ \begin{array}{l} (x^2(\frac{K}{2}-\frac{x}{3})+2\nu(y^2-z^2))\\
-4\nu xy\\
4\nu xz
\end{array}
\right]
\label{eqUC}
\end{equation}
where:
* $E=36.5$ GPa
* $\nu=0.2$
* $F$ is a scalar corresponding to a force
* $K$ is the plate length (figure <ref> )
The imposed loads are obtained from equation (<ref>) (by writing the equilibrium of the system), and the plate is made of a homogeneous isotropic elastic material.
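For reference, the prescribed displacement field of equation (<ref>) can be evaluated directly. A minimal sketch (the function name is mine; $F$ and $K$ are problem data, set to 1 here purely for illustration):

```python
def u_exact(x, y, z, E=36.5e9, nu=0.2, F=1.0, K=1.0):
    # Cubic displacement field of the analytic test (equation above).
    c = (nu + 1.0) * (1.0 - 2.0 * nu) * F / E
    ux = c * (x * x * (K / 2.0 - x / 3.0) + 2.0 * nu * (y * y - z * z))
    uy = c * (-4.0 * nu * x * y)
    uz = c * (4.0 * nu * x * z)
    return ux, uy, uz
```

Such a closed form makes it easy to check a computed solution nodewise, since the exact displacement is known everywhere in the plate.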
The plate is discretized with a coarse mesh (595 tetrahedrons, 188 vertices) which corresponds, hereafter, to the level 0 (L0).
Up to 4 other discretizations (L1,L2,L3,L4) derived from this level are used as the global-scale problem.
Each of them is built using the adaptation strategy described in <ref> (considering the whole mesh as an area of interest: no hanging nodes).
Each level corresponds to the previous one with all its elements divided once (L2 is L1 divided once).
This corresponds exactly to the refinement process used for scaling.
Thus, since all nodes are enriched, the 4 meshes are identical to any refined SP targeting the same level (the L2 mesh is the same as the mesh obtained by refining the L0 mesh twice).
We evaluate the performance of the proposed parallel solver and compare it to the solvers "fr", "blr" and "dd".
In all figures, "L$Y$ from L$X$" or "from L$X$" (when the fine-scale level "$Y$" is implicit) means that level "$X$" is used for the discretization of the global scale and level "$Y$" for the discretization of the fine scale ($Y>X$).
Since the full-rank direct solver provides a residual error close to machine accuracy, a relatively fair comparison requires adopting a value of $10^{-7}$ for $\epsilon$ in algorithms <ref>, <ref> and <ref>.
As noted in section <ref>, for comparison purposes, the multithreading capability of MUMPS and underlying BLAS is not used.
Thus, the number of cores (the abscissa in many figures) directly represents the number of MPI processes dispatched on one or more nodes of the cluster.
[Figure: elapsed time in s (log scale) versus number of cores (1 to 128); curves "fr", "blr" and "dd", two further unlabeled solver curves, and the "ideal" scaling reference.]
[Figure: elapsed time (log scale) versus number of cores (1 to 256); labeled curves "ts from L0" and "tsi from L0", plus additional unlabeled solver curves.]
[gp path] (2.962,4.561)–(2.962,4.381);
[gp node center] at (2.962,0.440) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.537,0.640)–(3.537,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.537,0.640)–(3.537,0.820);
[gp path] (3.537,4.561)–(3.537,4.381);
[gp node center] at (3.537,0.440) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.113,0.640)–(4.113,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.113,0.640)–(4.113,0.820);
[gp path] (4.113,4.561)–(4.113,4.381);
[gp node center] at (4.113,0.440) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.688,0.640)–(4.688,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.688,0.640)–(4.688,0.820);
[gp path] (4.688,4.561)–(4.688,4.381);
[gp node center] at (4.688,0.440) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.264,0.640)–(5.264,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.264,0.640)–(5.264,0.820);
[gp path] (5.264,4.561)–(5.264,4.381);
[gp node center] at (5.264,0.440) 256;
[gp path] (0.660,4.561)–(0.660,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
[gp node center] at (2.962,0.140) number of cores;
rgb color=0.000,1.000,1.000
[gp path] (0.660,4.546)–(1.235,4.047)–(1.811,3.742)–(2.386,3.196)–(2.962,2.851)
3pointgp mark 1(0.660,4.546)
3pointgp mark 1(1.235,4.047)
3pointgp mark 1(1.811,3.742)
3pointgp mark 1(2.386,3.196)
3pointgp mark 1(2.962,2.851)
3pointgp mark 1(3.537,2.459)
3pointgp mark 1(4.113,2.295)
3pointgp mark 1(4.688,2.209)
3pointgp mark 1(5.264,2.335)
rgb color=0.000,0.000,0.545
[gp path] (0.660,4.267)–(1.235,3.799)–(1.811,3.332)–(2.386,2.937)–(2.962,2.590)
3pointgp mark 2(0.660,4.267)
3pointgp mark 2(1.235,3.799)
3pointgp mark 2(1.811,3.332)
3pointgp mark 2(2.386,2.937)
3pointgp mark 2(2.962,2.590)
3pointgp mark 2(3.537,2.302)
3pointgp mark 2(4.113,2.172)
3pointgp mark 2(4.688,2.158)
3pointgp mark 2(5.264,2.313)
rgb color=0.753,0.251,0.000
[gp path] (0.660,4.546)–(1.235,4.194)–(1.811,3.842)–(2.386,3.712)–(2.962,3.348)
3pointgp mark 15(0.660,4.546)
3pointgp mark 15(1.235,4.194)
3pointgp mark 15(1.811,3.842)
3pointgp mark 15(2.386,3.712)
3pointgp mark 15(2.962,3.348)
3pointgp mark 15(3.537,2.816)
3pointgp mark 15(4.113,2.689)
3pointgp mark 15(4.688,2.164)
3pointgp mark 15(5.264,1.778)
rgb color=0.580,0.000,0.827
[gp path] (0.660,4.348)–(1.235,3.868)–(1.811,3.368)–(2.386,3.066)–(2.962,2.752)
3pointgp mark 4(0.660,4.348)
3pointgp mark 4(1.235,3.868)
3pointgp mark 4(1.811,3.368)
3pointgp mark 4(2.386,3.066)
3pointgp mark 4(2.962,2.752)
3pointgp mark 4(3.537,2.482)
3pointgp mark 4(4.113,2.366)
3pointgp mark 4(4.688,1.902)
rgb color=0.933,0.510,0.933
[gp path] (0.660,4.331)–(1.235,3.844)–(1.811,3.362)–(2.386,3.067)–(2.962,2.752)
3pointgp mark 5(0.660,4.331)
3pointgp mark 5(1.235,3.844)
3pointgp mark 5(1.811,3.362)
3pointgp mark 5(2.386,3.067)
3pointgp mark 5(2.962,2.752)
3pointgp mark 5(3.537,2.483)
3pointgp mark 5(4.113,2.356)
3pointgp mark 5(4.688,1.915)
color=gp lt color border
[gp node right] at (1.980,4.694) ts from L1;
rgb color=0.784,0.784,0.000
[gp path] (2.100,4.694)–(2.760,4.694);
[gp path] (0.660,4.135)–(1.235,3.628)–(1.811,3.090)–(2.386,2.724)–(2.962,2.274)
3pointgp mark 12(0.660,4.135)
3pointgp mark 12(1.235,3.628)
3pointgp mark 12(1.811,3.090)
3pointgp mark 12(2.386,2.724)
3pointgp mark 12(2.962,2.274)
3pointgp mark 12(3.537,1.904)
3pointgp mark 12(4.113,1.533)
3pointgp mark 12(4.688,1.207)
3pointgp mark 12(5.264,0.941)
3pointgp mark 12(2.430,4.694)
color=gp lt color border
[gp node right] at (4.200,4.694) tsi from L1;
rgb color=1.000,1.000,0.000
[gp path] (4.320,4.694)–(4.980,4.694);
[gp path] (0.660,4.056)–(1.235,3.541)–(1.811,3.056)–(2.386,2.624)–(2.962,2.227)
3pointgp mark 13(0.660,4.056)
3pointgp mark 13(1.235,3.541)
3pointgp mark 13(1.811,3.056)
3pointgp mark 13(2.386,2.624)
3pointgp mark 13(2.962,2.227)
3pointgp mark 13(3.537,1.866)
3pointgp mark 13(4.113,1.470)
3pointgp mark 13(4.688,1.150)
3pointgp mark 13(5.264,0.904)
3pointgp mark 13(4.650,4.694)
rgb color=0.000,0.000,0.000
[gp path] (0.660,4.056)–(0.701,4.019)–(0.741,3.981)–(0.782,3.944)–(0.823,3.906)
color=gp lt color border
[gp path] (0.660,4.561)–(0.660,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
gp plot 10.660cm0.640cm5.264cm4.561cm
every node/.append style=scale=0.65
(0.000,0.000) rectangle (5.625,4.987);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.980,0.640)–(1.070,0.640);
[gp path] (5.264,0.640)–(5.174,0.640);
[gp path] (0.980,0.726)–(1.070,0.726);
[gp path] (5.264,0.726)–(5.174,0.726);
[gp path] (0.980,0.803)–(1.070,0.803);
[gp path] (5.264,0.803)–(5.174,0.803);
[gp path] (0.980,0.871)–(1.160,0.871);
[gp path] (5.264,0.871)–(5.084,0.871);
[gp node right] at (0.860,0.871) $10$;
[gp path] (0.980,1.319)–(1.070,1.319);
[gp path] (5.264,1.319)–(5.174,1.319);
[gp path] (0.980,1.582)–(1.070,1.582);
[gp path] (5.264,1.582)–(5.174,1.582);
[gp path] (0.980,1.768)–(1.070,1.768);
[gp path] (5.264,1.768)–(5.174,1.768);
[gp path] (0.980,1.912)–(1.070,1.912);
[gp path] (5.264,1.912)–(5.174,1.912);
[gp path] (0.980,2.030)–(1.070,2.030);
[gp path] (5.264,2.030)–(5.174,2.030);
[gp path] (0.980,2.130)–(1.070,2.130);
[gp path] (5.264,2.130)–(5.174,2.130);
[gp path] (0.980,2.216)–(1.070,2.216);
[gp path] (5.264,2.216)–(5.174,2.216);
[gp path] (0.980,2.292)–(1.070,2.292);
[gp path] (5.264,2.292)–(5.174,2.292);
[gp path] (0.980,2.360)–(1.160,2.360);
[gp path] (5.264,2.360)–(5.084,2.360);
[gp node right] at (0.860,2.360) $100$;
[gp path] (0.980,2.809)–(1.070,2.809);
[gp path] (5.264,2.809)–(5.174,2.809);
[gp path] (0.980,3.071)–(1.070,3.071);
[gp path] (5.264,3.071)–(5.174,3.071);
[gp path] (0.980,3.257)–(1.070,3.257);
[gp path] (5.264,3.257)–(5.174,3.257);
[gp path] (0.980,3.402)–(1.070,3.402);
[gp path] (5.264,3.402)–(5.174,3.402);
[gp path] (0.980,3.520)–(1.070,3.520);
[gp path] (5.264,3.520)–(5.174,3.520);
[gp path] (0.980,3.619)–(1.070,3.619);
[gp path] (5.264,3.619)–(5.174,3.619);
[gp path] (0.980,3.706)–(1.070,3.706);
[gp path] (5.264,3.706)–(5.174,3.706);
[gp path] (0.980,3.782)–(1.070,3.782);
[gp path] (5.264,3.782)–(5.174,3.782);
[gp path] (0.980,3.850)–(1.160,3.850);
[gp path] (5.264,3.850)–(5.084,3.850);
[gp node right] at (0.860,3.850) $1000$;
[gp path] (0.980,4.299)–(1.070,4.299);
[gp path] (5.264,4.299)–(5.174,4.299);
[gp path] (0.980,4.561)–(1.070,4.561);
[gp path] (5.264,4.561)–(5.174,4.561);
[gp path] (0.980,0.640)–(0.980,0.820);
[gp path] (0.980,4.561)–(0.980,4.381);
[gp node center,font=8.0pt9.6pt] at (0.980,0.440) 2;
[gp path] (1.408,0.640)–(1.408,0.820);
[gp path] (1.408,4.561)–(1.408,4.381);
[gp node center,font=8.0pt9.6pt] at (1.408,0.440) 4;
[gp path] (1.837,0.640)–(1.837,0.820);
[gp path] (1.837,4.561)–(1.837,4.381);
[gp node center,font=8.0pt9.6pt] at (1.837,0.440) 8;
[gp path] (2.265,0.640)–(2.265,0.820);
[gp path] (2.265,4.561)–(2.265,4.381);
[gp node center,font=8.0pt9.6pt] at (2.265,0.440) 16;
[gp path] (2.694,0.640)–(2.694,0.820);
[gp path] (2.694,4.561)–(2.694,4.381);
[gp node center,font=8.0pt9.6pt] at (2.694,0.440) 32;
[gp path] (3.122,0.640)–(3.122,0.820);
[gp path] (3.122,4.561)–(3.122,4.381);
[gp node center,font=8.0pt9.6pt] at (3.122,0.440) 64;
[gp path] (3.550,0.640)–(3.550,0.820);
[gp path] (3.550,4.561)–(3.550,4.381);
[gp node center,font=8.0pt9.6pt] at (3.550,0.440) 128;
[gp path] (3.979,0.640)–(3.979,0.820);
[gp path] (3.979,4.561)–(3.979,4.381);
[gp node center,font=8.0pt9.6pt] at (3.979,0.440) 256;
[gp path] (4.407,0.640)–(4.407,0.820);
[gp path] (4.407,4.561)–(4.407,4.381);
[gp node center,font=8.0pt9.6pt] at (4.407,0.440) 512;
[gp path] (4.836,0.640)–(4.836,0.820);
[gp path] (4.836,4.561)–(4.836,4.381);
[gp node center,font=8.0pt9.6pt] at (4.836,0.440) 1024;
[gp path] (5.264,0.640)–(5.264,0.820);
[gp path] (5.264,4.561)–(5.264,4.381);
[gp node center,font=8.0pt9.6pt] at (5.264,0.440) 2048;
[gp path] (0.980,4.561)–(0.980,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
[gp node center,rotate=-270] at (0.206,2.600) elapsed time in s;
[gp node center] at (3.122,0.086) number of cores;
rgb color=0.000,1.000,1.000
[gp path] (2.265,4.433)–(2.694,4.096)–(3.122,3.822)–(3.550,3.518)–(3.979,3.303)
3pointgp mark 1(2.265,4.433)
3pointgp mark 1(2.694,4.096)
3pointgp mark 1(3.122,3.822)
3pointgp mark 1(3.550,3.518)
3pointgp mark 1(3.979,3.303)
3pointgp mark 1(4.407,3.157)
rgb color=0.000,0.000,0.545
[gp path] (1.408,4.202)–(1.837,3.849)–(2.265,3.502)–(2.694,3.219)–(3.122,3.081)
3pointgp mark 2(1.408,4.202)
3pointgp mark 2(1.837,3.849)
3pointgp mark 2(2.265,3.502)
3pointgp mark 2(2.694,3.219)
3pointgp mark 2(3.122,3.081)
3pointgp mark 2(3.550,2.961)
3pointgp mark 2(3.979,2.844)
3pointgp mark 2(4.407,2.806)
rgb color=0.753,0.251,0.000
[gp path] (2.694,3.885)–(3.122,3.480)–(3.550,2.927)–(3.979,2.615)–(4.407,2.275)
3pointgp mark 15(2.694,3.885)
3pointgp mark 15(3.122,3.480)
3pointgp mark 15(3.550,2.927)
3pointgp mark 15(3.979,2.615)
3pointgp mark 15(4.407,2.275)
3pointgp mark 15(4.836,1.948)
3pointgp mark 15(5.264,1.772)
rgb color=0.784,0.784,0.000
[gp path] (1.408,3.868)–(1.837,3.472)–(2.265,3.108)–(2.694,2.782)–(3.122,2.437)
3pointgp mark 12(1.408,3.868)
3pointgp mark 12(1.837,3.472)
3pointgp mark 12(2.265,3.108)
3pointgp mark 12(2.694,2.782)
3pointgp mark 12(3.122,2.437)
3pointgp mark 12(3.550,2.137)
3pointgp mark 12(3.979,1.891)
3pointgp mark 12(4.407,1.802)
rgb color=1.000,1.000,0.000
[gp path] (1.408,3.868)–(1.837,3.466)–(2.265,3.103)–(2.694,2.780)–(3.122,2.416)
3pointgp mark 13(1.408,3.868)
3pointgp mark 13(1.837,3.466)
3pointgp mark 13(2.265,3.103)
3pointgp mark 13(2.694,2.780)
3pointgp mark 13(3.122,2.416)
3pointgp mark 13(3.550,2.128)
3pointgp mark 13(3.979,1.875)
3pointgp mark 13(4.407,1.813)
color=gp lt color border
[gp node right] at (2.300,4.694) ts from L2;
rgb color=0.000,0.392,0.000
[gp path] (2.420,4.694)–(3.080,4.694);
[gp path] (0.980,4.096)–(1.408,3.645)–(1.837,3.181)–(2.265,2.759)–(2.694,2.450)
3pointgp mark 8(0.980,4.096)
3pointgp mark 8(1.408,3.645)
3pointgp mark 8(1.837,3.181)
3pointgp mark 8(2.265,2.759)
3pointgp mark 8(2.694,2.450)
3pointgp mark 8(3.122,2.076)
3pointgp mark 8(3.550,1.740)
3pointgp mark 8(3.979,1.426)
3pointgp mark 8(4.407,1.201)
3pointgp mark 8(4.836,1.046)
3pointgp mark 8(5.264,1.015)
3pointgp mark 8(2.750,4.694)
color=gp lt color border
[gp node right] at (4.520,4.694) tsi from L2;
rgb color=0.000,1.000,0.000
[gp path] (4.640,4.694)–(5.300,4.694);
[gp path] (0.980,4.046)–(1.408,3.558)–(1.837,3.108)–(2.265,2.707)–(2.694,2.372)
3pointgp mark 9(0.980,4.046)
3pointgp mark 9(1.408,3.558)
3pointgp mark 9(1.837,3.108)
3pointgp mark 9(2.265,2.707)
3pointgp mark 9(2.694,2.372)
3pointgp mark 9(3.122,1.969)
3pointgp mark 9(3.550,1.608)
3pointgp mark 9(3.979,1.271)
3pointgp mark 9(4.407,0.980)
3pointgp mark 9(4.836,0.738)
3pointgp mark 9(5.264,0.764)
3pointgp mark 9(4.970,4.694)
rgb color=0.545,0.000,0.000
[gp path] (1.408,4.070)–(1.837,3.657)–(2.265,3.250)–(2.694,2.851)–(3.122,2.569)
3pointgp mark 10(1.408,4.070)
3pointgp mark 10(1.837,3.657)
3pointgp mark 10(2.265,3.250)
3pointgp mark 10(2.694,2.851)
3pointgp mark 10(3.122,2.569)
3pointgp mark 10(3.550,2.315)
3pointgp mark 10(3.979,2.164)
3pointgp mark 10(4.407,2.109)
3pointgp mark 10(4.836,2.057)
3pointgp mark 10(5.264,2.169)
rgb color=1.000,0.000,0.000
[gp path] (1.408,3.784)–(1.837,3.357)–(2.265,2.955)–(2.694,2.535)–(3.122,2.178)
3pointgp mark 11(1.408,3.784)
3pointgp mark 11(1.837,3.357)
3pointgp mark 11(2.265,2.955)
3pointgp mark 11(2.694,2.535)
3pointgp mark 11(3.122,2.178)
3pointgp mark 11(3.550,1.899)
3pointgp mark 11(3.979,1.550)
3pointgp mark 11(4.407,1.327)
3pointgp mark 11(4.836,1.180)
3pointgp mark 11(5.264,1.161)
rgb color=0.000,0.000,0.000
[gp path] (0.980,4.046)–(1.015,4.010)–(1.049,3.974)–(1.084,3.937)–(1.118,3.901)
color=gp lt color border
[gp path] (0.980,4.561)–(0.980,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
gp plot 10.980cm0.640cm5.264cm4.561cm
every node/.append style=scale=0.65
(0.000,0.000) rectangle (5.625,4.987);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,0.640)–(0.870,0.640);
[gp path] (5.264,0.640)–(5.174,0.640);
[gp path] (0.780,0.985)–(0.870,0.985);
[gp path] (5.264,0.985)–(5.174,0.985);
[gp path] (0.780,1.230)–(0.870,1.230);
[gp path] (5.264,1.230)–(5.174,1.230);
[gp path] (0.780,1.420)–(0.870,1.420);
[gp path] (5.264,1.420)–(5.174,1.420);
[gp path] (0.780,1.575)–(0.870,1.575);
[gp path] (5.264,1.575)–(5.174,1.575);
[gp path] (0.780,1.707)–(0.870,1.707);
[gp path] (5.264,1.707)–(5.174,1.707);
[gp path] (0.780,1.820)–(0.870,1.820);
[gp path] (5.264,1.820)–(5.174,1.820);
[gp path] (0.780,1.921)–(0.870,1.921);
[gp path] (5.264,1.921)–(5.174,1.921);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.780,2.010)–(5.264,2.010);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,2.010)–(0.960,2.010);
[gp path] (5.264,2.010)–(5.084,2.010);
[gp node right] at (0.660,2.010) $100$;
[gp path] (0.780,2.601)–(0.870,2.601);
[gp path] (5.264,2.601)–(5.174,2.601);
[gp path] (0.780,2.946)–(0.870,2.946);
[gp path] (5.264,2.946)–(5.174,2.946);
[gp path] (0.780,3.191)–(0.870,3.191);
[gp path] (5.264,3.191)–(5.174,3.191);
[gp path] (0.780,3.381)–(0.870,3.381);
[gp path] (5.264,3.381)–(5.174,3.381);
[gp path] (0.780,3.536)–(0.870,3.536);
[gp path] (5.264,3.536)–(5.174,3.536);
[gp path] (0.780,3.667)–(0.870,3.667);
[gp path] (5.264,3.667)–(5.174,3.667);
[gp path] (0.780,3.781)–(0.870,3.781);
[gp path] (5.264,3.781)–(5.174,3.781);
[gp path] (0.780,3.881)–(0.870,3.881);
[gp path] (5.264,3.881)–(5.174,3.881);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.780,3.971)–(5.264,3.971);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,3.971)–(0.960,3.971);
[gp path] (5.264,3.971)–(5.084,3.971);
[gp node right] at (0.660,3.971) $1000$;
[gp path] (0.780,4.561)–(0.870,4.561);
[gp path] (5.264,4.561)–(5.174,4.561);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.780,0.640)–(0.780,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,0.640)–(0.780,0.820);
[gp path] (0.780,4.561)–(0.780,4.381);
[gp node center,font=8.0pt9.6pt] at (0.780,0.440) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.356,0.640)–(1.356,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.356,0.640)–(1.356,0.820);
[gp path] (1.356,4.561)–(1.356,4.381);
[gp node center,font=8.0pt9.6pt] at (1.356,0.440) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.932,0.640)–(1.932,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.932,0.640)–(1.932,0.820);
[gp path] (1.932,4.561)–(1.932,4.381);
[gp node center,font=8.0pt9.6pt] at (1.932,0.440) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.508,0.640)–(2.508,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.508,0.640)–(2.508,0.820);
[gp path] (2.508,4.561)–(2.508,4.381);
[gp node center,font=8.0pt9.6pt] at (2.508,0.440) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.084,0.640)–(3.084,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.084,0.640)–(3.084,0.820);
[gp path] (3.084,4.561)–(3.084,4.381);
[gp node center,font=8.0pt9.6pt] at (3.084,0.440) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.660,0.640)–(3.660,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.660,0.640)–(3.660,0.820);
[gp path] (3.660,4.561)–(3.660,4.381);
[gp node center,font=8.0pt9.6pt] at (3.660,0.440) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.236,0.640)–(4.236,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.236,0.640)–(4.236,0.820);
[gp path] (4.236,4.561)–(4.236,4.381);
[gp node center,font=8.0pt9.6pt] at (4.236,0.440) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.812,0.640)–(4.812,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.812,0.640)–(4.812,0.820);
[gp path] (4.812,4.561)–(4.812,4.381);
[gp node center,font=8.0pt9.6pt] at (4.812,0.440) 2048;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.264,0.640)–(5.264,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.264,0.640)–(5.264,0.820);
[gp path] (5.264,4.561)–(5.264,4.381);
[gp node center,font=8.0pt9.6pt] at (5.264,0.440) 3528;
[gp path] (0.780,4.561)–(0.780,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
[gp node center] at (3.022,0.086) number of cores;
rgb color=0.753,0.251,0.000
[gp path] (3.660,4.182)–(4.236,3.692)–(4.812,3.292)–(5.264,2.893);
3pointgp mark 15(3.660,4.182)
3pointgp mark 15(4.236,3.692)
3pointgp mark 15(4.812,3.292)
3pointgp mark 15(5.264,2.893)
color=gp lt color border
[gp node right] at (2.100,4.694) ts from L3;
rgb color=0.545,0.000,0.000
[gp path] (2.220,4.694)–(2.880,4.694);
[gp path] (0.780,4.288)–(1.356,3.709)–(1.932,3.232)–(2.508,2.766)–(3.084,2.387)
3pointgp mark 10(0.780,4.288)
3pointgp mark 10(1.356,3.709)
3pointgp mark 10(1.932,3.232)
3pointgp mark 10(2.508,2.766)
3pointgp mark 10(3.084,2.387)
3pointgp mark 10(3.660,2.084)
3pointgp mark 10(4.236,1.903)
3pointgp mark 10(4.812,1.936)
3pointgp mark 10(5.264,2.062)
3pointgp mark 10(2.550,4.694)
color=gp lt color border
[gp node right] at (4.320,4.694) tsi from L3;
rgb color=1.000,0.000,0.000
[gp path] (4.440,4.694)–(5.100,4.694);
[gp path] (0.780,4.204)–(1.356,3.614)–(1.932,3.091)–(2.508,2.591)–(3.084,2.092)
3pointgp mark 11(0.780,4.204)
3pointgp mark 11(1.356,3.614)
3pointgp mark 11(1.932,3.091)
3pointgp mark 11(2.508,2.591)
3pointgp mark 11(3.084,2.092)
3pointgp mark 11(3.660,1.673)
3pointgp mark 11(4.236,1.293)
3pointgp mark 11(4.812,0.973)
3pointgp mark 11(5.264,0.901)
3pointgp mark 11(4.770,4.694)
rgb color=0.000,1.000,0.000
[gp path] (3.660,2.343)–(4.236,1.999)–(4.812,1.651);
3pointgp mark 9(3.660,2.343)
3pointgp mark 9(4.236,1.999)
3pointgp mark 9(4.812,1.651)
rgb color=0.000,0.000,0.000
[gp path] (0.780,4.204)–(0.815,4.168)–(0.850,4.132)–(0.885,4.096)–(0.920,4.061)
color=gp lt color border
[gp path] (0.780,4.561)–(0.780,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
gp plot 10.780cm0.640cm5.264cm4.561cm
every node/.append style=scale=0.65
(0.000,0.000) rectangle (5.625,4.987);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,0.928)–(0.870,0.928);
[gp path] (5.264,0.928)–(5.174,0.928);
[gp path] (0.780,1.382)–(0.870,1.382);
[gp path] (5.264,1.382)–(5.174,1.382);
[gp path] (0.780,1.734)–(0.870,1.734);
[gp path] (5.264,1.734)–(5.174,1.734);
[gp path] (0.780,2.021)–(0.870,2.021);
[gp path] (5.264,2.021)–(5.174,2.021);
[gp path] (0.780,2.265)–(0.870,2.265);
[gp path] (5.264,2.265)–(5.174,2.265);
[gp path] (0.780,2.475)–(0.870,2.475);
[gp path] (5.264,2.475)–(5.174,2.475);
[gp path] (0.780,2.661)–(0.870,2.661);
[gp path] (5.264,2.661)–(5.174,2.661);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.780,2.827)–(5.264,2.827);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,2.827)–(0.960,2.827);
[gp path] (5.264,2.827)–(5.084,2.827);
[gp node right] at (0.660,2.827) $1000$;
[gp path] (0.780,3.921)–(0.870,3.921);
[gp path] (5.264,3.921)–(5.174,3.921);
[gp path] (0.780,4.561)–(0.870,4.561);
[gp path] (5.264,4.561)–(5.174,4.561);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.780,0.640)–(0.780,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.780,0.640)–(0.780,0.820);
[gp path] (0.780,4.561)–(0.780,4.381);
[gp node center] at (0.780,0.440) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.719,0.640)–(1.719,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.719,0.640)–(1.719,0.820);
[gp path] (1.719,4.561)–(1.719,4.381);
[gp node center] at (1.719,0.440) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.658,0.640)–(2.658,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.658,0.640)–(2.658,0.820);
[gp path] (2.658,4.561)–(2.658,4.381);
[gp node center] at (2.658,0.440) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.597,0.640)–(3.597,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.597,0.640)–(3.597,0.820);
[gp path] (3.597,4.561)–(3.597,4.381);
[gp node center] at (3.597,0.440) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.536,0.640)–(4.536,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.536,0.640)–(4.536,0.820);
[gp path] (4.536,4.561)–(4.536,4.381);
[gp node center] at (4.536,0.440) 2048;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.264,0.640)–(5.264,4.561);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.264,0.640)–(5.264,0.820);
[gp path] (5.264,4.561)–(5.264,4.381);
[gp node center] at (5.264,0.440) 3504;
[gp path] (0.780,4.561)–(0.780,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
[gp node center] at (3.022,0.140) number of cores;
[gp node right] at (2.100,4.694) ts from L4;
rgb color=0.000,0.000,0.545
[gp path] (2.220,4.694)–(2.880,4.694);
[gp path] (3.597,4.303)–(4.536,3.879);
3pointgp mark 6(3.597,4.303)
3pointgp mark 6(4.536,3.879)
3pointgp mark 6(2.550,4.694)
color=gp lt color border
[gp node right] at (4.320,4.694) tsi from L4;
rgb color=0.000,0.000,1.000
[gp path] (4.440,4.694)–(5.100,4.694);
[gp path] (0.780,3.891)–(1.719,2.956)–(2.658,2.271)–(3.597,1.755)–(4.536,1.353)
3pointgp mark 7(0.780,3.891)
3pointgp mark 7(1.719,2.956)
3pointgp mark 7(2.658,2.271)
3pointgp mark 7(3.597,1.755)
3pointgp mark 7(4.536,1.353)
3pointgp mark 7(5.264,1.013)
3pointgp mark 7(4.770,4.694)
rgb color=0.000,0.000,0.000
[gp path] (0.780,3.891)–(0.818,3.847)–(0.856,3.803)–(0.894,3.758)–(0.932,3.714)
color=gp lt color border
[gp path] (0.780,4.561)–(0.780,0.640)–(5.264,0.640)–(5.264,4.561)–cycle;
gp plot 10.780cm0.640cm5.264cm4.561cm
Cubic field: elapsed time in seconds versus the number of cores, on a log/log scale. The results for fine-scale refinements at levels L2, L3, L4, L5, L6 and L7 are presented in
*UOL_curve_times_L6 and *UOL_curve_times_L7 respectively.
The ideal curve shows the slope obtained when the elapsed time of a single process is divided perfectly by the number of processes used for the calculation (this corresponds to the ideal speed-up).
This curve is always shifted so that it starts at the best measured time for the smallest number of cores.
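The ideal reference curve described above can be sketched as follows; this is a minimal illustration of perfect strong scaling anchored at the best measured point, with purely illustrative names and data, not values from the experiments.

```python
def ideal_curve(cores, best_time_at_min_cores):
    """Perfect strong scaling: elapsed time is divided exactly by the
    number of processes, anchored at the best measured time for the
    smallest core count (as the shifted ideal curve in the figure)."""
    c0 = cores[0]
    return [best_time_at_min_cores * c0 / c for c in cores]

# Illustrative data: 400 s measured on 1 core.
cores = [1, 2, 4, 8, 16]
print(ideal_curve(cores, 400.0))  # [400.0, 200.0, 100.0, 50.0, 25.0]
```

On a log/log plot of time versus cores, this produces a straight line of slope -1; measured curves flatten away from it as communication and solver overheads dominate.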
Figure <ref> presents, for the different SP refinements, an analysis of the elapsed time of all solvers.
Points missing from these curves could not be computed on this cluster: insufficient memory, excessive computation time, too many processes for the solver, or integer overflow.
These limitations apply only to the current cluster installation and are in no way an absolute judgment of the software used, especially in a non-multi-threaded context.
As discussed in section <ref>, the solver has an ideal scale ratio.
Thus, for the refinements at levels L3, L4, L5 and L6, different coarse-scale levels were tested.
The observed optimal scale ratio is 3: the best elapsed-time curves are "tsi from L1", "tsi from L2" and "tsi from L3" for the refinements at levels L4 (<ref>), L5 (<ref>) and L6 (<ref>) respectively (1+3=4, 2+3=5 and 3+3=6), which is in good agreement with the analysis in section <ref> and gives equation (<ref>).
For L7, only this ratio was used, to save computing resources.
Regarding the number of dofs, figure <ref> shows that this ratio corresponds to a fixed number of dofs per patch, which appears visually as a horizontal line (circled, between 3435 and 4928).
[Figure: number of dofs (log scale, 1e+02 to 1e+09) versus discretization level (0 to 7).]
[gp node right] at (2.990,4.957) fr/blr;
rgb color=0.000,1.000,1.000
[gp path] (3.100,4.957)–(3.720,4.957);
[gp path] (1.010,0.973)–(1.915,1.368)–(2.819,1.793)–(3.724,2.235)–(4.628,2.687)
color=gp lt color border
[gp node right] at (2.990,4.732) ts global enriched;
rgb color=0.000,0.000,1.000
[gp path] (3.100,4.732)–(3.720,4.732);
[gp path] (1.010,1.128)–(1.915,1.522)–(2.819,1.947)–(3.724,2.389)–(4.628,2.840);
3pointgp mark 4(1.010,1.128)
3pointgp mark 4(1.915,1.522)
3pointgp mark 4(2.819,1.947)
3pointgp mark 4(3.724,2.389)
3pointgp mark 4(4.628,2.840)
3pointgp mark 4(3.410,4.732)
color=gp lt color border
[gp node right] at (2.990,4.507) per patch: from L0;
rgb color=0.580,0.000,0.827
[gp path] (3.100,4.507)–(3.720,4.507);
[gp path] (2.819,0.895)–(3.724,1.376)–(4.628,1.846)–(5.533,2.344);
3pointgp mark 5(2.819,0.895)
3pointgp mark 5(3.724,1.376)
3pointgp mark 5(4.628,1.846)
3pointgp mark 5(5.533,2.344)
3pointgp mark 5(3.410,4.507)
color=gp lt color border
[gp node right] at (2.990,4.282) per patch: from L1;
rgb color=0.784,0.784,0.000
[gp path] (3.100,4.282)–(3.720,4.282);
[gp path] (3.724,0.919)–(4.628,1.411)–(5.533,1.893);
3pointgp mark 6(3.724,0.919)
3pointgp mark 6(4.628,1.411)
3pointgp mark 6(5.533,1.893)
3pointgp mark 6(3.410,4.282)
color=gp lt color border
[gp node right] at (5.810,4.957) per patch: from L2;
rgb color=0.000,0.392,0.000
[gp path] (5.920,4.957)–(6.540,4.957);
[gp path] (5.533,1.437)–(6.437,1.923);
3pointgp mark 7(5.533,1.437)
3pointgp mark 7(6.437,1.923)
3pointgp mark 7(6.230,4.957)
color=gp lt color border
[gp node right] at (5.810,4.732) per patch: from L3;
rgb color=0.545,0.000,0.000
[gp path] (5.920,4.732)–(6.540,4.732);
[gp path] (5.533,0.948)–(6.437,1.449);
3pointgp mark 8(5.533,0.948)
3pointgp mark 8(6.437,1.449)
3pointgp mark 8(6.230,4.732)
color=gp lt color border
[gp node right] at (5.810,4.507) per patch: from L4;
rgb color=0.000,0.000,0.545
[gp path] (5.920,4.507)–(6.540,4.507);
3pointgp mark 9(7.342,1.456)
3pointgp mark 9(6.230,4.507)
color=gp lt color border
[gp node right] at (5.810,4.282) scale ratio of 3;
rgb color=0.000,0.000,1.000
[gp path] (6.286,4.282)–(6.285,4.284)–(6.285,4.287)–(6.285,4.290)–(6.284,4.293)
[gp path] (3.859,1.376)–(3.859,1.383)–(3.858,1.390)–(3.858,1.397)–(3.856,1.404)
[gp path] (4.763,1.411)–(4.763,1.418)–(4.762,1.425)–(4.762,1.432)–(4.760,1.439)
[gp path] (5.668,1.437)–(5.668,1.444)–(5.667,1.451)–(5.667,1.458)–(5.665,1.465)
[gp path] (6.572,1.449)–(6.572,1.456)–(6.571,1.463)–(6.571,1.470)–(6.569,1.477)
[gp path] (7.477,1.456)–(7.477,1.463)–(7.476,1.470)–(7.476,1.477)–(7.474,1.484)
color=gp lt color border
[gp path] (1.010,4.164)–(1.010,0.592)–(7.794,0.592)–(7.794,4.164)–cycle;
gp plot 11.010cm0.592cm7.794cm4.164cm
every node/.append style=scale=0.60
(0.000,0.000) rectangle (8.125,5.250);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,0.592)–(7.794,0.592);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.592)–(1.005,0.592);
[gp path] (7.794,0.592)–(7.614,0.592);
[gp node right] at (0.715,0.592) 1e+02;
[gp path] (0.825,0.807)–(0.915,0.807);
[gp path] (7.794,0.807)–(7.704,0.807);
[gp path] (0.825,0.933)–(0.915,0.933);
[gp path] (7.794,0.933)–(7.704,0.933);
[gp path] (0.825,1.022)–(0.915,1.022);
[gp path] (7.794,1.022)–(7.704,1.022);
[gp path] (0.825,1.091)–(0.915,1.091);
[gp path] (7.794,1.091)–(7.704,1.091);
[gp path] (0.825,1.148)–(0.915,1.148);
[gp path] (7.794,1.148)–(7.704,1.148);
[gp path] (0.825,1.196)–(0.915,1.196);
[gp path] (7.794,1.196)–(7.704,1.196);
[gp path] (0.825,1.237)–(0.915,1.237);
[gp path] (7.794,1.237)–(7.704,1.237);
[gp path] (0.825,1.274)–(0.915,1.274);
[gp path] (7.794,1.274)–(7.704,1.274);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.306)–(7.794,1.306);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.306)–(1.005,1.306);
[gp path] (7.794,1.306)–(7.614,1.306);
[gp node right] at (0.715,1.306) 1e+03;
[gp path] (0.825,1.521)–(0.915,1.521);
[gp path] (7.794,1.521)–(7.704,1.521);
[gp path] (0.825,1.647)–(0.915,1.647);
[gp path] (7.794,1.647)–(7.704,1.647);
[gp path] (0.825,1.737)–(0.915,1.737);
[gp path] (7.794,1.737)–(7.704,1.737);
[gp path] (0.825,1.806)–(0.915,1.806);
[gp path] (7.794,1.806)–(7.704,1.806);
[gp path] (0.825,1.862)–(0.915,1.862);
[gp path] (7.794,1.862)–(7.704,1.862);
[gp path] (0.825,1.910)–(0.915,1.910);
[gp path] (7.794,1.910)–(7.704,1.910);
[gp path] (0.825,1.952)–(0.915,1.952);
[gp path] (7.794,1.952)–(7.704,1.952);
[gp path] (0.825,1.988)–(0.915,1.988);
[gp path] (7.794,1.988)–(7.704,1.988);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.021)–(7.794,2.021);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.021)–(1.005,2.021);
[gp path] (7.794,2.021)–(7.614,2.021);
[gp node right] at (0.715,2.021) 1e+04;
[gp path] (0.825,2.236)–(0.915,2.236);
[gp path] (7.794,2.236)–(7.704,2.236);
[gp path] (0.825,2.362)–(0.915,2.362);
[gp path] (7.794,2.362)–(7.704,2.362);
[gp path] (0.825,2.451)–(0.915,2.451);
[gp path] (7.794,2.451)–(7.704,2.451);
[gp path] (0.825,2.520)–(0.915,2.520);
[gp path] (7.794,2.520)–(7.704,2.520);
[gp path] (0.825,2.577)–(0.915,2.577);
[gp path] (7.794,2.577)–(7.704,2.577);
[gp path] (0.825,2.625)–(0.915,2.625);
[gp path] (7.794,2.625)–(7.704,2.625);
[gp path] (0.825,2.666)–(0.915,2.666);
[gp path] (7.794,2.666)–(7.704,2.666);
[gp path] (0.825,2.703)–(0.915,2.703);
[gp path] (7.794,2.703)–(7.704,2.703);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.735)–(7.794,2.735);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.735)–(1.005,2.735);
[gp path] (7.794,2.735)–(7.614,2.735);
[gp node right] at (0.715,2.735) 1e+05;
[gp path] (0.825,2.950)–(0.915,2.950);
[gp path] (7.794,2.950)–(7.704,2.950);
[gp path] (0.825,3.076)–(0.915,3.076);
[gp path] (7.794,3.076)–(7.704,3.076);
[gp path] (0.825,3.165)–(0.915,3.165);
[gp path] (7.794,3.165)–(7.704,3.165);
[gp path] (0.825,3.235)–(0.915,3.235);
[gp path] (7.794,3.235)–(7.704,3.235);
[gp path] (0.825,3.291)–(0.915,3.291);
[gp path] (7.794,3.291)–(7.704,3.291);
[gp path] (0.825,3.339)–(0.915,3.339);
[gp path] (7.794,3.339)–(7.704,3.339);
[gp path] (0.825,3.380)–(0.915,3.380);
[gp path] (7.794,3.380)–(7.704,3.380);
[gp path] (0.825,3.417)–(0.915,3.417);
[gp path] (7.794,3.417)–(7.704,3.417);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,3.450)–(7.794,3.450);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,3.450)–(1.005,3.450);
[gp path] (7.794,3.450)–(7.614,3.450);
[gp node right] at (0.715,3.450) 1e+06;
[gp path] (0.825,3.665)–(0.915,3.665);
[gp path] (7.794,3.665)–(7.704,3.665);
[gp path] (0.825,3.790)–(0.915,3.790);
[gp path] (7.794,3.790)–(7.704,3.790);
[gp path] (0.825,3.880)–(0.915,3.880);
[gp path] (7.794,3.880)–(7.704,3.880);
[gp path] (0.825,3.949)–(0.915,3.949);
[gp path] (7.794,3.949)–(7.704,3.949);
[gp path] (0.825,4.006)–(0.915,4.006);
[gp path] (7.794,4.006)–(7.704,4.006);
[gp path] (0.825,4.053)–(0.915,4.053);
[gp path] (7.794,4.053)–(7.704,4.053);
[gp path] (0.825,4.095)–(0.915,4.095);
[gp path] (7.794,4.095)–(7.704,4.095);
[gp path] (0.825,4.131)–(0.915,4.131);
[gp path] (7.794,4.131)–(7.704,4.131);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,4.164)–(7.794,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,4.164)–(1.005,4.164);
[gp path] (7.794,4.164)–(7.614,4.164);
[gp node right] at (0.715,4.164) 1e+07;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,0.592)–(0.825,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.592)–(0.825,0.772);
[gp path] (0.825,4.164)–(0.825,3.984);
[gp node center] at (0.825,0.407) 1;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.459,0.592)–(1.459,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.459,0.592)–(1.459,0.772);
[gp path] (1.459,4.164)–(1.459,3.984);
[gp node center] at (1.459,0.407) 2;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.092,0.592)–(2.092,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.092,0.592)–(2.092,0.772);
[gp path] (2.092,4.164)–(2.092,3.984);
[gp node center] at (2.092,0.407) 4;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.726,0.592)–(2.726,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.726,0.592)–(2.726,0.772);
[gp path] (2.726,4.164)–(2.726,3.984);
[gp node center] at (2.726,0.407) 8;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.359,0.592)–(3.359,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.359,0.592)–(3.359,0.772);
[gp path] (3.359,4.164)–(3.359,3.984);
[gp node center] at (3.359,0.407) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.993,0.592)–(3.993,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.993,0.592)–(3.993,0.772);
[gp path] (3.993,4.164)–(3.993,3.984);
[gp node center] at (3.993,0.407) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.626,0.592)–(4.626,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.626,0.592)–(4.626,0.772);
[gp path] (4.626,4.164)–(4.626,3.984);
[gp node center] at (4.626,0.407) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.260,0.592)–(5.260,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.260,0.592)–(5.260,0.772);
[gp path] (5.260,4.164)–(5.260,3.984);
[gp node center] at (5.260,0.407) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.893,0.592)–(5.893,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.893,0.592)–(5.893,0.772);
[gp path] (5.893,4.164)–(5.893,3.984);
[gp node center] at (5.893,0.407) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (6.527,0.592)–(6.527,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (6.527,0.592)–(6.527,0.772);
[gp path] (6.527,4.164)–(6.527,3.984);
[gp node center] at (6.527,0.407) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (7.160,0.592)–(7.160,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (7.160,0.592)–(7.160,0.772);
[gp path] (7.160,4.164)–(7.160,3.984);
[gp node center] at (7.160,0.407) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (7.794,0.592)–(7.794,4.164);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (7.794,0.592)–(7.794,0.772);
[gp path] (7.794,4.164)–(7.794,3.984);
[gp node center] at (7.794,0.407) 2048;
[gp path] (0.825,4.164)–(0.825,0.592)–(7.794,0.592)–(7.794,4.164)–cycle;
[gp node center] at (4.309,0.130) number of cores;
[gp node right] at (2.255,4.957) Schur L2;
rgb color=0.000,0.933,0.933
[gp path] (2.365,4.957)–(2.985,4.957);
[gp path] (1.459,1.115)–(2.092,1.396)–(2.726,1.625)–(3.359,1.776)–(3.993,1.895)
3pointgp mark 2(1.459,1.115)
3pointgp mark 2(2.092,1.396)
3pointgp mark 2(2.726,1.625)
3pointgp mark 2(3.359,1.776)
3pointgp mark 2(3.993,1.895)
3pointgp mark 2(4.626,2.006)
3pointgp mark 2(5.260,2.080)
3pointgp mark 2(2.675,4.957)
color=gp lt color border
[gp node right] at (2.255,4.732) per domain L2;
rgb color=0.000,0.933,0.933
[gp path] (2.365,4.732)–(2.985,4.732);
[gp path] (0.825,2.274)–(1.459,2.066)–(2.092,1.862)–(2.726,1.667)–(3.359,1.477)
color=gp lt color border
[gp node right] at (2.255,4.507) Schur L3;
rgb color=0.784,0.784,0.000
[gp path] (2.365,4.507)–(2.985,4.507);
[gp path] (1.459,1.533)–(2.092,1.811)–(2.726,2.059)–(3.359,2.228)–(3.993,2.323)
3pointgp mark 2(1.459,1.533)
3pointgp mark 2(2.092,1.811)
3pointgp mark 2(2.726,2.059)
3pointgp mark 2(3.359,2.228)
3pointgp mark 2(3.993,2.323)
3pointgp mark 2(4.626,2.443)
3pointgp mark 2(5.260,2.531)
3pointgp mark 2(5.893,2.610)
3pointgp mark 2(2.675,4.507)
color=gp lt color border
[gp node right] at (2.255,4.282) per domain L3;
rgb color=0.784,0.784,0.000
[gp path] (2.365,4.282)–(2.985,4.282);
[gp path] (0.825,2.893)–(1.459,2.682)–(2.092,2.472)–(2.726,2.269)–(3.359,2.069)
color=gp lt color border
[gp node right] at (4.525,4.957) Schur L4;
rgb color=0.000,0.392,0.000
[gp path] (4.635,4.957)–(5.255,4.957);
[gp path] (1.459,1.953)–(2.092,2.235)–(2.726,2.488)–(3.359,2.652)–(3.993,2.762)
3pointgp mark 2(1.459,1.953)
3pointgp mark 2(2.092,2.235)
3pointgp mark 2(2.726,2.488)
3pointgp mark 2(3.359,2.652)
3pointgp mark 2(3.993,2.762)
3pointgp mark 2(4.626,2.887)
3pointgp mark 2(5.260,2.976)
3pointgp mark 2(5.893,3.066)
3pointgp mark 2(4.945,4.957)
color=gp lt color border
[gp node right] at (4.525,4.732) per domain L4;
rgb color=0.000,0.392,0.000
[gp path] (4.635,4.732)–(5.255,4.732);
[gp path] (0.825,3.525)–(1.459,3.311)–(2.092,3.099)–(2.726,2.890)–(3.359,2.683)
color=gp lt color border
[gp node right] at (4.525,4.507) Schur L5;
rgb color=0.545,0.000,0.000
[gp path] (4.635,4.507)–(5.255,4.507);
[gp path] (3.993,3.194)–(4.626,3.313)–(5.260,3.422)–(5.893,3.507)–(6.527,3.585)
3pointgp mark 2(3.993,3.194)
3pointgp mark 2(4.626,3.313)
3pointgp mark 2(5.260,3.422)
3pointgp mark 2(5.893,3.507)
3pointgp mark 2(6.527,3.585)
3pointgp mark 2(7.160,3.660)
3pointgp mark 2(7.794,3.735)
3pointgp mark 2(4.945,4.507)
color=gp lt color border
[gp node right] at (4.525,4.282) per domain L5;
rgb color=0.545,0.000,0.000
[gp path] (4.635,4.282)–(5.255,4.282);
[gp path] (3.993,3.101)–(4.626,2.893)–(5.260,2.686)–(5.893,2.480)–(6.527,2.276)
color=gp lt color border
[gp node right] at (6.795,4.957) Schur L6;
rgb color=0.000,0.000,0.545
[gp path] (6.905,4.957)–(7.525,4.957);
[gp path] (6.527,3.988)–(7.160,4.065)–(7.794,4.139);
3pointgp mark 2(6.527,3.988)
3pointgp mark 2(7.160,4.065)
3pointgp mark 2(7.794,4.139)
3pointgp mark 2(7.215,4.957)
color=gp lt color border
[gp node right] at (6.795,4.732) per domain L6;
rgb color=0.000,0.000,0.545
[gp path] (6.905,4.732)–(7.525,4.732);
[gp path] (6.527,2.892)–(7.160,2.683)–(7.794,2.476);
color=gp lt color border
[gp path] (0.825,4.164)–(0.825,0.592)–(7.794,0.592)–(7.794,4.164)–cycle;
gp plot 10.825cm0.592cm7.794cm4.164cm
Cubic field: degree of freedom numbers .
Compared with the number of "fr/blr" dofs, which increases with the discretization level, the solver can keep the size of the fine-scale problems within a specific range as long as the scale ratio is maintained.
Keeping the same ratio implies increasing the discretization level of the global-scale problem.
In figure <ref>, the "ts global enriched" curve is simply twice the "fr/blr" curve (all nodes are enriched).
This already highlights the issue of a global problem whose size is not so small.
This point is addressed by the version verified below.
Another impact can be seen in figure <ref>: increasing the discretization level of the global-scale problem increases the "total" number of patches involved in the computation.
This last point is naturally handled by the parallelism, as can be seen in the same figure.
The number of patches computed per process decreases as the number of cores increases.
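The per-process workload trend above follows directly from distributing a fixed patch count over the available processes. A minimal illustrative sketch (not from the paper's code; the total patch count and the `patches_per_core` helper are hypothetical):

```python
# Illustrative sketch: with a fixed total number of patches at a given
# refinement level, the per-process share shrinks as the core count grows,
# matching the downward "per core" curves in the figure.

def patches_per_core(total_patches: int, n_cores: int) -> int:
    """Upper bound on patches each process computes under a block distribution."""
    return -(-total_patches // n_cores)  # ceiling division

total = 4096  # hypothetical total patch count for one refinement level
for cores in (1, 16, 256, 2048):
    print(cores, patches_per_core(total, cores))
```

With perfect load balancing the per-core count is exactly `total / cores`; the ceiling accounts for the leftover patches when the division is not exact.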
[Figure: Cubic field — number of patches (log/log scale). "Number of patches" (1e+00 to 1e+06) against the number of cores (1 to 3504), with curves "total L0" through "total L4", "per core L0" through "per core L4", and "dist per core L0" through "dist per core L4".]
For the refinement of level L2 (figure <ref>), the solver does not provide better performance than the tested solvers (for "fr" and "blr", it is only with a high number of cores that the solver reaches the same level of performance as these solvers before their efficiency loss).
One of the reasons is certainly that the optimal ratio of 3 cannot be reached, so the cost of the overlapping patches is not compensated by an improvement in algorithmic complexity (the fine-scale problem sizes are too small).
When this ratio of 3 is reached (figure <ref>), the solver starts to provide performance similar to that of the other solutions.
From level L4 and above, the solver performs better than all the other solutions tested.
In figure <ref> (L5), corresponding to "fr/blr" $\sim1.e^{+7}$ dofs (figure <ref>), each of the other solvers shows its potential, as expected, in a different range of core counts.
The "fr" solver provides an exact result but has limited scalability and requires a large amount of memory (the computation was not possible with fewer than 16 cores distributed over 16 nodes: 16x128 GB $\approx$ 2 TB).
The "blr" solver reduces the memory footprint and elapsed time compared to the full-rank version, with a controlled error ($\epsilon=1.e^{-7}$).
The "dd" solver offers good scalability but fails for a small number of processes.
This is related to the choice of implementation, which uses only one domain per process.
Thus, for a small number of cores, the domains become very large and their resolution cannot fit in a single node of the cluster: figure <ref> shows that the average size of the domains increases to the "fr/blr" size as the number of cores decreases to 1 process.
A classical alternative, not tested in this work, is to consider that the domain size is independent of the process (i.e. a process can compute more than one domain whose size is controlled by an arbitrary threshold).
But the Schur complement is then potentially handled by fewer processes compared to the proposed "dd" implementation, and thus would have added additional complexity in the analyses.
However, it would have been easy to compare the case where the number of dofs for domains and patches is the same.
Nevertheless, with 2048 processes, the "dd" domains represent about the same number of dofs as the patches (figure <ref>) and the solver shows better performance in this condition.
This comparison should be addressed in more detail and is left as a future prospect.
As for the other solvers, with a small number of cores, the memory consumption becomes a problem (computation impossible with fewer than 2 processes for the "from L2" version), but clearly at a lower level.
This validates the choice made in the implementation regarding the per-macro-element storage (see section <ref>) and shows the low memory impact of the fine-scale problems.
Also in figure <ref>, the curves show that the proposed solver, even with a non-optimal scale ratio, consistently gives better performance from 2 to 2048 processes compared to the other solvers.
Note that another interesting reading of figure <ref> leads to the following conclusion: with a small number of processes, the solver gives the same level of performance (i.e. elapsed time) as the other solvers using a much larger number of processes.
With L6 (figure <ref>, 78 851 421 fine-scale dofs) and L7 (figure <ref>, 627 350 205 fine-scale dofs), the solver confirms its low resource requirements.
The memory consumption makes it possible to launch the calculations with "only" 16 and 128 processes, respectively.
For L7, no other solver succeeds with the chosen hardware and software configuration.
For L6, only "dd" starts to provide results, with 512 cores, that the solver delivers with only 16 processes in the same time frame.
More detailed information about the "L6 from L3" test is given in table <ref>.
Task: elapsed time in s and % of total resolution

nb of cores                                                  16            512            2048
                                                           s      %      s      %      s      %
resolution                                            1314.4    100   67.3    100   29.6    100
$resi$ computation                                      34.4    2.6    1.8    2.6    0.6    2.2
$\vm{u}_r$ update                                       15.9    1.2    0.6    0.9    0.2    0.6
$\vm{A}_{gg}$ (product, operator construction and
  assembly of equation <ref>)                          261.6   19.9    8.2   12.1    2.0    6.9
$\vm{B}_{g}$ (product and assembly of equation <ref>)    3.4    0.3    0.1    0.2    0.03   0.1
$\vm{A}_{FF}^{e_{macro}}$ (computation and assembly
  per macro-element)                                   153.3   11.7    4.9    7.3    1.2    4.1
$\vm{B}_{F}^{e_{macro}}$ (computation and assembly
  per macro-element)                                    36.1    2.7    1.1    1.7    0.3    0.9
System of equation <ref> construction (sum of above)   454.4   34.6   14.3   21.3    3.5   12

Detailed elapsed times of the global matrix construction, $resi$ computation and $\vm{u}_r$ update for some points of the "tsi" L6 from L3 curve (figure <ref>).
This table shows that the construction time of the global system (equation <ref>) represents less than 35% of the solver resolution with 16 processes.
It is only 12% with 2048 processes, which can be explained by the near-perfect scalability of this task (454.4/128$\approx$3.5) compared to the scalability of the solver as a whole (1314.4/128$\approx$10.3$<$29.6).
The different calculations ($\vm{A}_{gg}$, $\vm{B}_{g}$, $\vm{A}_{FF}^{e_{macro}}$ and $\vm{B}_{F}^{e_{macro}}$) show the same near-perfect scalability and contribute in the same ratios to the construction task.
The most expensive calculation is the construction of $\vm{A}_{gg}$ (19.9%, 12.1% and 6.9%), and in particular the product $\vm{A}_{FF}^{e_{macro}}\cdot \vm{T}_{Fe}^{e_{macro}}$, which represents $\sim$56% of the construction of $\vm{A}_{gg}$ (result not given in table <ref>).
It is a sparse-matrix by sparse-matrix product (not using the level 3 BLAS routines) performed on all SP elements at each iteration.
Perhaps an effort on its implementation would yield better performance, but it would in any case affect only a small part of the resolution.
The performance of the global-scale system construction is therefore no longer analyzed in this work (but is included in the results) because it is of second order in the calculations and scales perfectly.
Regarding the computation of $resi$, it consistently represents less than 3% of the resolution and scales relatively well, with few communications (34.4/128$\approx$0.3$<$0.6).
This validates the proposed algorithm <ref>.
The same conclusion applies to the update of $\vm{u}_r$ (algorithm <ref>), which costs less than 1.3% and scales well (15.9/128$\approx$0.12$\lessapprox$0.2).
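These scalability checks can be reproduced directly from the table; a minimal Python sketch (timings transcribed from the table above, the dictionary names are ours):

```python
# Perfect-scaling check for the "L6 from L3" timings (values from the table above).
# Going from 16 to 2048 processes is a 128x increase in resources, so a
# perfectly scalable task should see its elapsed time divided by 128.
elapsed_16 = {"resolution": 1314.4, "construction": 454.4,
              "resi": 34.4, "u_r_update": 15.9}
elapsed_2048 = {"resolution": 29.6, "construction": 3.5,
                "resi": 0.6, "u_r_update": 0.2}

factor = 2048 / 16  # 128x more processes

for task, t16 in elapsed_16.items():
    ideal = t16 / factor  # elapsed time expected under perfect scaling
    actual = elapsed_2048[task]
    print(f"{task}: ideal {ideal:.2f} s vs actual {actual:.2f} s")
```

The construction task lands almost exactly on its ideal time (3.55 s vs 3.5 s), while the resolution as a whole does not (10.3 s vs 29.6 s), which is the point made above.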
The curve "fr" in figure <ref> for L2, L3, L4 and L5 shows that, beyond a limit, the elapsed times increase when the number of cores used increases.
This justifies why, in section <ref>, the version takes at most $nbp_{max}$ processes to compute the problem at the global scale.
But this has an impact on scalability.
In figure <ref>, for two configurations ("L5 from L2" and "L6 from L3"), the elapsed times of the scale-loop resolutions are isolated and divided into a global-scale contribution (resolution $\dagger$ in algorithm <ref>) and a fine-scale contribution (MICRO-SCALE_RESOLUTION call of algorithm <ref>).
[Figure: elapsed time in s (log/log scale) of the loop resolutions versus the number of cores for "L5 from L2" ($nbp_{max}=24$) and "L6 from L3" ($nbp_{max}=128$); curves: "ts loop global-scale solv", "tsi loop global-scale solv", "ts loop fine-scale solv", "tsi loop fine-scale solv" and "ideal".]
Cubic field: elapsed time in seconds of the loop resolutions (ts and tsi versions) divided in two parts: the global-scale resolution and the local-scale resolutions. Log/log scale.
It shows that the elapsed time of the global-scale resolution corresponding to the version ("ts loop global-scale solv") decreases until $nbp_{max}$ is reached.
For a larger number of processes, the elapsed time remains stable and even increases.
This last point can be explained by the way the subset of $nbp_{max}$ processes is chosen.
With the current implementation, this subset follows a uniform distribution with respect to the mapping of all the processes.
This ensures that the problem solving spans many nodes, to keep the memory consumption balanced.
But with a high number of processes, this forces almost all the communications of the factorization and of the forward/backward solves to go through the network between nodes that may be relatively far apart (in the sense of the network topology).
Another parameter explaining this increase is the gathering of $\vm{A}_{gg}$ and $\vm{B}_g$, and the scattering of $\vm{U}_g$, from/to a large number of processes, which adds communication.
This poor performance for a number of processes higher than $nbp_{max}$ justifies the introduction of the version.
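The uniform selection of the $nbp_{max}$ subset described above can be sketched as follows; this is an illustrative helper (the function name and the rounding rule are ours, not the actual implementation):

```python
def choose_global_scale_ranks(nprocs: int, nbp_max: int) -> list[int]:
    """Pick at most nbp_max ranks spread uniformly over the nprocs processes.

    Spreading the subset over many nodes balances memory consumption, but
    it also forces the factorization and forward/backward communications
    through the inter-node network, as discussed in the text.
    """
    if nprocs <= nbp_max:
        return list(range(nprocs))
    # Evenly spaced ranks, first and last included.
    return [round(i * (nprocs - 1) / (nbp_max - 1)) for i in range(nbp_max)]

# Example: the 24 global-scale ranks among 2048 processes ("L5 from L2" case).
ranks = choose_global_scale_ranks(2048, 24)
```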
Its first effect is to reduce the global-scale consumption: the "tsi loop global-scale solv" curve is shifted down compared to "ts loop global-scale solv" (a factorization is replaced by forward/backward resolutions and sparse matrix-vector products at some iterations).
This is even more true in the case of figure <ref> where the global scale level is larger than in the case of figure <ref>.
The second effect of this version is to maintain a decreasing slope of the "tsi loop global-scale solv" curves when using more than $nbp_{max}$ processes.
The iterative solver adds scalability.
With more than 1024 processes, the elapsed time increases again for "tsi loop global-scale solv".
The non-perfect scalability and this increase are again related to the choice of the $nbp_{max}$ cores.
The preconditioner uses the factorization of the direct solver to perform forward/backward resolutions, but only on $nbp_{max}$ processes spread over many nodes.
This limits the scalability, because this part will not use more than $nbp_{max}$ processes and, as said above, the network will be more stressed.
Note that with MUMPS, we chose to have the solution and the right-hand side centralized on process 0 to simplify the exchange of these vectors between the $nbp_{max}$ processes and all processes.
Perhaps using the ability of MUMPS to spread the solution vector and the right-hand side vector across processes would improve the situation.
But it is not clear, because the vector gathering/scattering operation would still have to be performed, now directly between the $nbp_{max}$ processes and all processes, but no longer collectively.
In comparison, the elapsed time of the fine-scale resolution (curves "ts loop fine-scale solv" and "tsi loop fine-scale solv" in figure <ref>) decreases steadily, confirming a fairly good scalability of the fine-scale resolution.
Now, regarding the strong scaling efficiency [strong scaling efficiency is the ratio of the speed-up to the number of processes used.
It is multiplied by 100 to obtain a %; 100% implies a perfect use of the resources (cores)] of the solver, figure <ref> shows a decrease in performance with an increasing number of cores.
This deterioration is mainly related to the poor performance of the global-scale resolution, as mentioned above in the analysis of figure <ref>.
But it is not the only reason.
The strong scaling efficiency (not shown here) of "ts loop fine-scale solv" and "tsi loop fine-scale solv" in figure <ref> remains above 50% over a wide range, but decreases.
This decrease in fine-scale problem-solving performance can be understood by analyzing the evolution of the number of distributed patches per process.
Figure <ref> shows that the number of distributed patches per process approaches the total number of patches per process as the number of processes increases.
This implies that the amount of communication increases, as more patches need to communicate during their resolution.
This in itself has a significant impact on the overall efficiency.
But this also increases the number of sequences that algorithm <ref> must process.
And, because this algorithm uses a heuristic, perfect sequencing may be harder to achieve when almost all patches are distributed (the performance of the sequencing algorithm is discussed in section <ref>).
A direct comparison of the strong scaling efficiency with the other solvers is possible for small problems (L2, L3, L4), where the one-process data are available for all solvers.
In figures <ref> and <ref>, the efficiency drops below 50% when using more than 8 (L2) or 32 (L3 and L4) processes with the solver.
In comparison, this 50% drop appears earlier for "dd" (4 processes) and for "fr" and "blr" (4 processes for L2, 8 processes for L3 and 16 processes for L4).
For the other discretization levels (L5, L6 and L7), the one-process data are missing and the first available point is therefore arbitrarily considered as perfect (100%).
In these cases, only the slopes can be compared.
But by using a linear fit of the curves on their regularly decreasing part, an approximate slope can be extracted ("slope" in figures <ref> and <ref>).
This shows that the solver has lower slopes compared to "dd", "fr" and "blr".
In conclusion, in all cases, the solver presents slightly better strong scaling performance than the other solvers.
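The efficiency and slope computations used in this comparison can be sketched in a few lines; the baseline convention (first available point taken as 100%) follows the text above, while fitting the efficiency against $\log_2$ of the core count is our assumption about the figures' fit:

```python
import math

def strong_scaling_efficiency(cores, elapsed):
    """Efficiency (%) relative to the first available point.

    When a one-process run is missing (as for L5, L6 and L7), the first
    point is arbitrarily taken as 100%, as in the text.
    """
    c0, t0 = cores[0], elapsed[0]
    return [100.0 * (t0 / t) / (c / c0) for c, t in zip(cores, elapsed)]

def slope_per_doubling(cores, eff):
    """Least-squares slope of efficiency versus log2(cores): the efficiency
    lost per doubling of the core count (assumed axis convention for the
    fitted slope annotations in the figures)."""
    xs = [math.log2(c) for c in cores]
    n = len(xs)
    mx, my = sum(xs) / n, sum(eff) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, eff))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A perfectly scalable run keeps 100% efficiency, hence a zero slope.
eff = strong_scaling_efficiency([1, 2, 4, 8], [8.0, 4.0, 2.0, 1.0])
slope = slope_per_doubling([1, 2, 4, 8], eff)
```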
[Figure: strong scaling efficiency (%) versus the number of cores (1 to 2048) for "fr" and "blr" on L2, with the 50% reference line and fitted slope annotations (about -22 and -21).]
3pointgp mark 8(2.951,0.777)
3pointgp mark 8(3.383,0.677)
3pointgp mark 8(3.815,0.631)
3pointgp mark 8(1.870,4.375)
color=gp lt color border
[gp node right] at (2.950,4.825) fr L3;
rgb color=0.784,0.784,0.000
[gp path] (3.060,4.825)–(3.680,4.825);
[gp path] (0.790,3.924)–(1.222,3.491)–(1.654,3.183)–(2.087,2.533)–(2.519,2.134)
3pointgp mark 7(0.790,3.924)
3pointgp mark 7(1.222,3.491)
3pointgp mark 7(1.654,3.183)
3pointgp mark 7(2.087,2.533)
3pointgp mark 7(2.519,2.134)
3pointgp mark 7(2.951,1.391)
3pointgp mark 7(3.383,0.863)
3pointgp mark 7(3.815,0.683)
3pointgp mark 7(3.370,4.825)
color=gp lt color border
[gp node right] at (2.950,4.600) blr L3;
rgb color=0.784,0.784,0.000
[gp path] (3.060,4.600)–(3.680,4.600);
[gp path] (0.790,3.924)–(1.222,3.472)–(1.654,3.313)–(2.087,2.709)–(2.519,2.171)
3pointgp mark 8(0.790,3.924)
3pointgp mark 8(1.222,3.472)
3pointgp mark 8(1.654,3.313)
3pointgp mark 8(2.087,2.709)
3pointgp mark 8(2.519,2.171)
3pointgp mark 8(2.951,1.352)
3pointgp mark 8(3.383,0.861)
3pointgp mark 8(3.815,0.689)
3pointgp mark 8(3.370,4.600)
color=gp lt color border
[gp node right] at (2.950,4.375) fr L4;
rgb color=0.000,0.392,0.000
[gp path] (3.060,4.375)–(3.680,4.375);
[gp path] (0.790,3.924)–(1.222,3.790)–(1.654,2.971)–(2.087,3.016)–(2.519,2.493)
3pointgp mark 7(0.790,3.924)
3pointgp mark 7(1.222,3.790)
3pointgp mark 7(1.654,2.971)
3pointgp mark 7(2.087,3.016)
3pointgp mark 7(2.519,2.493)
3pointgp mark 7(2.951,2.179)
3pointgp mark 7(3.383,1.575)
3pointgp mark 7(3.815,1.141)
3pointgp mark 7(4.247,0.825)
3pointgp mark 7(3.370,4.375)
color=gp lt color border
[gp node right] at (4.450,4.825) blr L4;
rgb color=0.000,0.392,0.000
[gp path] (4.560,4.825)–(5.180,4.825);
[gp path] (0.790,3.924)–(1.222,3.663)–(1.654,3.416)–(2.087,2.955)–(2.519,2.449)
3pointgp mark 8(0.790,3.924)
3pointgp mark 8(1.222,3.663)
3pointgp mark 8(1.654,3.416)
3pointgp mark 8(2.087,2.955)
3pointgp mark 8(2.519,2.449)
3pointgp mark 8(2.951,1.944)
3pointgp mark 8(3.383,1.393)
3pointgp mark 8(3.815,1.000)
3pointgp mark 8(4.247,0.759)
3pointgp mark 8(4.870,4.825)
color=gp lt color border
[gp node right] at (4.450,4.600) fr L5;
rgb color=0.545,0.000,0.000
[gp path] (4.560,4.600)–(5.180,4.600);
[gp path] (2.519,3.924)–(2.951,3.397)–(3.383,2.734)–(3.815,2.305)–(4.247,1.786)
3pointgp mark 7(2.519,3.924)
3pointgp mark 7(2.951,3.397)
3pointgp mark 7(3.383,2.734)
3pointgp mark 7(3.815,2.305)
3pointgp mark 7(4.247,1.786)
3pointgp mark 7(4.680,1.341)
3pointgp mark 7(4.870,4.600)
color=gp lt color border
[gp node right] at (4.450,4.375) blr L5;
rgb color=0.545,0.000,0.000
[gp path] (4.560,4.375)–(5.180,4.375);
[gp path] (1.654,3.924)–(2.087,3.467)–(2.519,3.052)–(2.951,2.496)–(3.383,1.771)
3pointgp mark 8(1.654,3.924)
3pointgp mark 8(2.087,3.467)
3pointgp mark 8(2.519,3.052)
3pointgp mark 8(2.951,2.496)
3pointgp mark 8(3.383,1.771)
3pointgp mark 8(3.815,1.301)
3pointgp mark 8(4.247,1.017)
3pointgp mark 8(4.680,0.817)
3pointgp mark 8(4.870,4.375)
[gp path] (2.951,3.360)–(2.964,3.344)–(2.977,3.329)–(2.990,3.313)–(3.003,3.297)
[gp path] (2.519,2.944)–(2.532,2.930)–(2.545,2.915)–(2.558,2.901)–(2.571,2.887)
color=gp lt color border
[gp path] (0.790,4.257)–(0.790,0.592)–(5.544,0.592)–(5.544,4.257)–cycle;
gp plot 10.790cm0.592cm5.544cm4.257cm
[strong scaling dd/ts]
every node/.append style=scale=0.60
(0.000,0.000) rectangle (5.875,6.125);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,0.592)–(5.544,0.592);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,0.592)–(0.785,0.592);
[gp path] (5.544,0.592)–(5.364,0.592);
[gp node right] at (0.495,0.592) $0$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,1.258)–(5.544,1.258);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,1.258)–(0.785,1.258);
[gp path] (5.544,1.258)–(5.364,1.258);
[gp node right] at (0.495,1.258) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,1.924)–(5.544,1.924);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,1.924)–(0.785,1.924);
[gp path] (5.544,1.924)–(5.364,1.924);
[gp node right] at (0.495,1.924) $40$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,2.591)–(5.544,2.591);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,2.591)–(0.785,2.591);
[gp path] (5.544,2.591)–(5.364,2.591);
[gp node right] at (0.495,2.591) $60$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,3.257)–(5.544,3.257);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,3.257)–(0.785,3.257);
[gp path] (5.544,3.257)–(5.364,3.257);
[gp node right] at (0.495,3.257) $80$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,3.923)–(5.544,3.923);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,3.923)–(0.785,3.923);
[gp path] (5.544,3.923)–(5.364,3.923);
[gp node right] at (0.495,3.923) $100$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,4.589)–(5.544,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,4.589)–(0.785,4.589);
[gp path] (5.544,4.589)–(5.364,4.589);
[gp node right] at (0.495,4.589) $120$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,0.592)–(0.605,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,0.592)–(0.605,0.772);
[gp path] (0.605,4.589)–(0.605,4.409);
[gp node center,font=7.0pt8.4pt] at (0.605,0.407) 1;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.024,0.592)–(1.024,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.024,0.592)–(1.024,0.772);
[gp path] (1.024,4.589)–(1.024,4.409);
[gp node center,font=7.0pt8.4pt] at (1.024,0.407) 2;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.443,0.592)–(1.443,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.443,0.592)–(1.443,0.772);
[gp path] (1.443,4.589)–(1.443,4.409);
[gp node center,font=7.0pt8.4pt] at (1.443,0.407) 4;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.862,0.592)–(1.862,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.862,0.592)–(1.862,0.772);
[gp path] (1.862,4.589)–(1.862,4.409);
[gp node center,font=7.0pt8.4pt] at (1.862,0.407) 8;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.281,0.592)–(2.281,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.281,0.592)–(2.281,0.772);
[gp path] (2.281,4.589)–(2.281,4.409);
[gp node center,font=7.0pt8.4pt] at (2.281,0.407) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.701,0.592)–(2.701,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.701,0.592)–(2.701,0.772);
[gp path] (2.701,4.589)–(2.701,4.409);
[gp node center,font=7.0pt8.4pt] at (2.701,0.407) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.120,0.592)–(3.120,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.120,0.592)–(3.120,0.772);
[gp path] (3.120,4.589)–(3.120,4.409);
[gp node center,font=7.0pt8.4pt] at (3.120,0.407) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.539,0.592)–(3.539,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.539,0.592)–(3.539,0.772);
[gp path] (3.539,4.589)–(3.539,4.409);
[gp node center,font=7.0pt8.4pt] at (3.539,0.407) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.958,0.592)–(3.958,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.958,0.592)–(3.958,0.772);
[gp path] (3.958,4.589)–(3.958,4.409);
[gp node center,font=7.0pt8.4pt] at (3.958,0.407) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.377,0.592)–(4.377,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.377,0.592)–(4.377,0.772);
[gp path] (4.377,4.589)–(4.377,4.409);
[gp node center,font=7.0pt8.4pt] at (4.377,0.407) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.796,0.592)–(4.796,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.796,0.592)–(4.796,0.772);
[gp path] (4.796,4.589)–(4.796,4.409);
[gp node center,font=7.0pt8.4pt] at (4.796,0.407) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.215,0.592)–(5.215,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.215,0.592)–(5.215,0.772);
[gp path] (5.215,4.589)–(5.215,4.409);
[gp node center,font=7.0pt8.4pt] at (5.215,0.407) 2048;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.540,0.592)–(5.540,4.589);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.540,0.592)–(5.540,0.772);
[gp path] (5.540,4.589)–(5.540,4.409);
[gp node center,font=7.0pt8.4pt] at (5.540,0.407) 3504;
[gp path] (0.605,4.589)–(0.605,0.592)–(5.544,0.592)–(5.544,4.589)–cycle;
[gp node left,font=7.0pt8.4pt] at (3.081,2.592) slop -16;
[gp node left,font=7.0pt8.4pt] at (4.940,2.393) slop -23;
[gp node left,font=7.0pt8.4pt] at (3.255,3.070) slop -16;
[gp node left,font=7.0pt8.4pt] at (4.985,3.327) slop -21;
[gp node left,font=7.0pt8.4pt] at (4.892,1.356) slop -26;
[gp node center] at (3.074,0.068) number of cores;
[gp node right] at (2.145,5.832) 50%;
rgb color=0.000,0.000,0.000
[gp path] (2.255,5.832)–(2.875,5.832);
[gp path] (0.605,2.257)–(0.655,2.257)–(0.705,2.257)–(0.755,2.257)–(0.805,2.257)
color=gp lt color border
[gp node right] at (2.145,5.607) dd L2;
rgb color=0.000,0.933,0.933
[gp path] (2.255,5.607)–(2.875,5.607);
[gp path] (0.605,3.923)–(1.024,3.002)–(1.443,2.623)–(1.862,1.839)–(2.281,1.438)
3pointgp mark 11(0.605,3.923)
3pointgp mark 11(1.024,3.002)
3pointgp mark 11(1.443,2.623)
3pointgp mark 11(1.862,1.839)
3pointgp mark 11(2.281,1.438)
3pointgp mark 11(2.701,1.142)
3pointgp mark 11(3.120,0.931)
3pointgp mark 11(3.539,0.776)
3pointgp mark 11(2.565,5.607)
color=gp lt color border
[gp node right] at (2.145,5.382) tsi L2 from L0;
rgb color=0.000,0.933,0.933
[gp path] (2.255,5.382)–(2.875,5.382);
[gp path] (0.605,3.923)–(1.024,3.656)–(1.443,3.449)–(1.862,2.710)–(2.281,2.138)
3pointgp mark 4(0.605,3.923)
3pointgp mark 4(1.024,3.656)
3pointgp mark 4(1.443,3.449)
3pointgp mark 4(1.862,2.710)
3pointgp mark 4(2.281,2.138)
3pointgp mark 4(2.701,1.494)
3pointgp mark 4(3.120,0.997)
3pointgp mark 4(3.539,0.871)
3pointgp mark 4(2.565,5.382)
color=gp lt color border
[gp node right] at (2.145,5.157) dd L3;
rgb color=0.784,0.784,0.000
[gp path] (2.255,5.157)–(2.875,5.157);
[gp path] (0.605,3.944)–(1.024,2.805)–(1.443,2.273)–(1.862,1.408)–(2.281,1.201)
3pointgp mark 11(0.605,3.944)
3pointgp mark 11(1.024,2.805)
3pointgp mark 11(1.443,2.273)
3pointgp mark 11(1.862,1.408)
3pointgp mark 11(2.281,1.201)
3pointgp mark 11(2.701,1.172)
3pointgp mark 11(3.120,0.961)
3pointgp mark 11(3.539,0.884)
3pointgp mark 11(3.958,0.834)
3pointgp mark 11(2.565,5.157)
color=gp lt color border
[gp node right] at (2.145,4.932) tsi L3 from L1;
rgb color=0.784,0.784,0.000
[gp path] (2.255,4.932)–(2.875,4.932);
[gp path] (0.605,3.923)–(1.024,3.627)–(1.443,3.463)–(1.862,3.113)–(2.281,2.652)
3pointgp mark 4(0.605,3.923)
3pointgp mark 4(1.024,3.627)
3pointgp mark 4(1.443,3.463)
3pointgp mark 4(1.862,3.113)
3pointgp mark 4(2.281,2.652)
3pointgp mark 4(2.701,2.256)
3pointgp mark 4(3.120,1.873)
3pointgp mark 4(3.539,1.406)
3pointgp mark 4(3.958,1.065)
3pointgp mark 4(2.565,4.932)
color=gp lt color border
[gp node right] at (2.145,4.707) dd L4;
rgb color=0.000,0.392,0.000
[gp path] (2.255,4.707)–(2.875,4.707);
[gp path] (0.605,3.923)–(1.024,3.230)–(1.443,2.680)–(1.862,1.828)–(2.281,1.586)
3pointgp mark 11(0.605,3.923)
3pointgp mark 11(1.024,3.230)
3pointgp mark 11(1.443,2.680)
3pointgp mark 11(1.862,1.828)
3pointgp mark 11(2.281,1.586)
3pointgp mark 11(2.701,1.588)
3pointgp mark 11(3.120,1.179)
3pointgp mark 11(3.539,1.174)
3pointgp mark 11(3.958,1.074)
3pointgp mark 11(2.565,4.707)
color=gp lt color border
[gp node right] at (4.525,5.832) tsi L4 from L1;
rgb color=0.000,0.392,0.000
[gp path] (4.635,5.832)–(5.255,5.832);
[gp path] (0.605,3.923)–(1.024,3.856)–(1.443,3.666)–(1.862,3.291)–(2.281,2.856)
3pointgp mark 4(0.605,3.923)
3pointgp mark 4(1.024,3.856)
3pointgp mark 4(1.443,3.666)
3pointgp mark 4(1.862,3.291)
3pointgp mark 4(2.281,2.856)
3pointgp mark 4(2.701,2.405)
3pointgp mark 4(3.120,2.113)
3pointgp mark 4(3.539,1.747)
3pointgp mark 4(3.958,1.388)
3pointgp mark 4(4.945,5.832)
color=gp lt color border
[gp node right] at (4.525,5.607) dd L5;
rgb color=0.545,0.000,0.000
[gp path] (4.635,5.607)–(5.255,5.607);
[gp path] (2.701,3.923)–(3.120,3.710)–(3.539,4.254)–(3.958,3.560)–(4.377,3.099)
3pointgp mark 11(2.701,3.923)
3pointgp mark 11(3.120,3.710)
3pointgp mark 11(3.539,4.254)
3pointgp mark 11(3.958,3.560)
3pointgp mark 11(4.377,3.099)
3pointgp mark 11(4.796,2.670)
3pointgp mark 11(5.215,1.957)
3pointgp mark 11(4.945,5.607)
color=gp lt color border
[gp node right] at (4.525,5.382) tsi L5 from L2;
rgb color=0.545,0.000,0.000
[gp path] (4.635,5.382)–(5.255,5.382);
[gp path] (1.024,3.923)–(1.443,4.134)–(1.862,4.140)–(2.281,3.892)–(2.701,3.362)
3pointgp mark 4(1.024,3.923)
3pointgp mark 4(1.443,4.134)
3pointgp mark 4(1.862,4.140)
3pointgp mark 4(2.281,3.892)
3pointgp mark 4(2.701,3.362)
3pointgp mark 4(3.120,3.174)
3pointgp mark 4(3.539,2.845)
3pointgp mark 4(3.958,2.491)
3pointgp mark 4(4.377,2.079)
3pointgp mark 4(4.796,1.674)
3pointgp mark 4(5.215,1.112)
3pointgp mark 4(4.945,5.382)
color=gp lt color border
[gp node right] at (4.525,5.157) dd L6;
rgb color=0.000,0.000,0.545
[gp path] (4.635,5.157)–(5.255,5.157);
[gp path] (4.377,3.923)–(4.796,3.553)–(5.215,2.959)–(5.544,2.787);
3pointgp mark 11(4.377,3.923)
3pointgp mark 11(4.796,3.553)
3pointgp mark 11(5.215,2.959)
3pointgp mark 11(5.544,2.787)
3pointgp mark 11(4.945,5.157)
color=gp lt color border
[gp node right] at (4.525,4.932) tsi L6 from L3;
rgb color=0.000,0.000,0.545
[gp path] (4.635,4.932)–(5.255,4.932);
[gp path] (2.281,3.923)–(2.701,3.919)–(3.120,3.670)–(3.539,3.358)–(3.958,3.078)
3pointgp mark 4(2.281,3.923)
3pointgp mark 4(2.701,3.919)
3pointgp mark 4(3.120,3.670)
3pointgp mark 4(3.539,3.358)
3pointgp mark 4(3.958,3.078)
3pointgp mark 4(4.377,2.625)
3pointgp mark 4(4.796,2.180)
3pointgp mark 4(5.215,1.749)
3pointgp mark 4(5.544,1.322)
3pointgp mark 4(4.945,4.932)
color=gp lt color border
[gp node right] at (4.525,4.707) tsi L7 from L4;
rgb color=1.000,0.078,0.576
[gp path] (4.635,4.707)–(5.255,4.707);
[gp path] (3.539,3.923)–(3.958,3.605)–(4.377,2.917)–(4.796,2.204)–(5.215,1.632)
3pointgp mark 4(3.539,3.923)
3pointgp mark 4(3.958,3.605)
3pointgp mark 4(4.377,2.917)
3pointgp mark 4(4.796,2.204)
3pointgp mark 4(5.215,1.632)
3pointgp mark 4(5.540,1.346)
3pointgp mark 4(4.945,4.707)
rgb color=0.545,0.000,0.000
[gp path] (4.796,2.559)–(4.800,2.554)–(4.805,2.549)–(4.809,2.543)–(4.813,2.538)
[gp path] (3.539,2.767)–(3.543,2.763)–(3.547,2.759)–(3.551,2.755)–(3.556,2.751)
rgb color=0.000,0.000,0.545
[gp path] (4.796,3.478)–(4.800,3.473)–(4.805,3.468)–(4.809,3.463)–(4.813,3.459)
[gp path] (3.539,3.305)–(3.543,3.301)–(3.547,3.298)–(3.551,3.294)–(3.556,3.290)
rgb color=1.000,0.078,0.576
[gp path] (5.215,1.659)–(5.218,1.655)–(5.222,1.650)–(5.225,1.645)–(5.228,1.641)
color=gp lt color border
[gp path] (0.605,4.589)–(0.605,0.592)–(5.544,0.592)–(5.544,4.589)–cycle;
gp plot 10.605cm0.592cm5.544cm4.589cm
[weak scaling]
every node/.append style=scale=0.60
(0.000,0.000) rectangle (5.875,4.812);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,0.592)–(4.830,0.592);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,0.592)–(0.785,0.592);
[gp path] (4.830,0.592)–(4.650,0.592);
[gp node right] at (0.495,0.592) $0$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,0.918)–(4.830,0.918);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,0.918)–(0.785,0.918);
[gp path] (4.830,0.918)–(4.650,0.918);
[gp node right] at (0.495,0.918) $10$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,1.244)–(4.830,1.244);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,1.244)–(0.785,1.244);
[gp path] (4.830,1.244)–(4.650,1.244);
[gp node right] at (0.495,1.244) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,1.569)–(4.830,1.569);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,1.569)–(0.785,1.569);
[gp path] (4.830,1.569)–(4.650,1.569);
[gp node right] at (0.495,1.569) $30$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,1.895)–(4.830,1.895);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,1.895)–(0.785,1.895);
[gp path] (4.830,1.895)–(4.650,1.895);
[gp node right] at (0.495,1.895) $40$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,2.221)–(4.830,2.221);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,2.221)–(0.785,2.221);
[gp path] (4.830,2.221)–(4.650,2.221);
[gp node right] at (0.495,2.221) $50$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,2.547)–(4.830,2.547);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,2.547)–(0.785,2.547);
[gp path] (4.830,2.547)–(4.650,2.547);
[gp node right] at (0.495,2.547) $60$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,2.873)–(4.830,2.873);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,2.873)–(0.785,2.873);
[gp path] (4.830,2.873)–(4.650,2.873);
[gp node right] at (0.495,2.873) $70$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,3.199)–(4.830,3.199);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,3.199)–(0.785,3.199);
[gp path] (4.830,3.199)–(4.650,3.199);
[gp node right] at (0.495,3.199) $80$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,3.524)–(4.830,3.524);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,3.524)–(0.785,3.524);
[gp path] (4.830,3.524)–(4.650,3.524);
[gp node right] at (0.495,3.524) $90$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,3.850)–(4.830,3.850);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,3.850)–(0.785,3.850);
[gp path] (4.830,3.850)–(4.650,3.850);
[gp node right] at (0.495,3.850) $100$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,4.176)–(4.830,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,4.176)–(0.785,4.176);
[gp path] (4.830,4.176)–(4.650,4.176);
[gp node right] at (0.495,4.176) $110$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.605,0.592)–(0.605,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.605,0.592)–(0.605,0.772);
[gp path] (0.605,4.176)–(0.605,3.996);
[gp node center,font=6.0pt7.2pt] at (0.605,0.407) 22611 (1.0);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.638,0.592)–(1.638,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.638,0.592)–(1.638,0.772);
[gp path] (1.638,4.176)–(1.638,3.996);
[gp node center,font=6.0pt7.2pt] at (1.638,0.407) 166185 (7.3);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.693,0.592)–(2.693,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.693,0.592)–(2.693,0.772);
[gp path] (2.693,4.176)–(2.693,3.996);
[gp node center,font=6.0pt7.2pt] at (2.693,0.407) 1273173 (56.3);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.759,0.592)–(3.759,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.759,0.592)–(3.759,0.772);
[gp path] (3.759,4.176)–(3.759,3.996);
[gp node center,font=6.0pt7.2pt] at (3.759,0.407) 9965229 (440.7);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.830,0.592)–(4.830,4.176);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.830,0.592)–(4.830,0.772);
[gp path] (4.830,4.176)–(4.830,3.996);
[gp node center,font=6.0pt7.2pt] at (4.830,0.407) 78851421 (3487.3);
[gp path] (0.605,4.176)–(0.605,0.592)–(4.830,0.592)–(4.830,4.176)–cycle;
[gp node center] at (2.717,0.068) number of dofs (interpolated processes);
[gp node right] at (1.045,4.519) fr;
rgb color=0.000,1.000,1.000
[gp path] (1.155,4.519)–(1.775,4.519);
[gp path] (0.605,3.850)–(1.638,1.576)–(2.693,0.743)–(3.759,0.603);
3pointgp mark 1(0.605,3.850)
3pointgp mark 1(1.638,1.576)
3pointgp mark 1(2.693,0.743)
3pointgp mark 1(3.759,0.603)
3pointgp mark 1(1.465,4.519)
color=gp lt color border
[gp node right] at (1.045,4.294) blr;
rgb color=0.000,0.000,0.545
[gp path] (1.155,4.294)–(1.775,4.294);
[gp path] (0.605,3.850)–(1.638,1.567)–(2.693,0.786)–(3.759,0.614);
3pointgp mark 1(0.605,3.850)
3pointgp mark 1(1.638,1.567)
3pointgp mark 1(2.693,0.786)
3pointgp mark 1(3.759,0.614)
3pointgp mark 1(1.465,4.294)
color=gp lt color border
[gp node right] at (2.325,4.519) dd;
rgb color=0.753,0.251,0.000
[gp path] (2.435,4.519)–(3.055,4.519);
[gp path] (0.605,3.850)–(1.638,1.193)–(2.693,0.684)–(3.759,0.632);
3pointgp mark 2(0.605,3.850)
3pointgp mark 2(1.638,1.193)
3pointgp mark 2(2.693,0.684)
3pointgp mark 2(3.759,0.632)
3pointgp mark 2(2.745,4.519)
color=gp lt color border
[gp node right] at (2.325,4.294) tsi;
rgb color=0.580,0.000,0.827
[gp path] (2.435,4.294)–(3.055,4.294);
[gp path] (0.605,3.850)–(1.638,3.152)–(2.693,2.053)–(3.759,1.690)–(4.830,1.144);
3pointgp mark 4(0.605,3.850)
3pointgp mark 4(1.638,3.152)
3pointgp mark 4(2.693,2.053)
3pointgp mark 4(3.759,1.690)
3pointgp mark 4(4.830,1.144)
3pointgp mark 4(2.745,4.294)
color=gp lt color border
[gp node right] at (3.605,4.519) 50%;
rgb color=0.000,0.000,0.000
[gp path] (3.715,4.519)–(4.335,4.519);
[gp path] (0.605,2.221)–(0.648,2.221)–(0.690,2.221)–(0.733,2.221)–(0.776,2.221)
color=gp lt color border
[gp path] (0.605,4.176)–(0.605,0.592)–(4.830,0.592)–(4.830,4.176)–cycle;
gp plot 10.605cm0.592cm4.830cm4.176cm
Cubic field: scaling efficiency. log/lin scale..
With respect to the weak scaling efficiency[For a problem of size $N$, the weak scaling efficiency is the ratio of the time used by an application to solve a problem of size $N$ in single-process mode to the time used in multi-process mode (with $nbp$ processes) to solve a problem of size $nbp\times N$. It is multiplied by 100 to obtain a percentage; 100% implies perfect use of the resources (cores).], a full simulation campaign has not been conducted.
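For concreteness, this definition can be written as follows (a sketch using $T(N,p)$, our notation, not the paper's, for the elapsed time to solve a problem of size $N$ on $p$ processes):

\begin{equation*}
E_{\text{weak}}(nbp) = 100\times\frac{T(N,1)}{T(nbp\times N,\,nbp)}\ [\%],
\end{equation*}

so that $E_{\text{weak}}=100\%$ corresponds to a perfect use of the cores.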
Interpolation between the data in figures <ref> and <ref> is used to produce the curves in figure <ref>, where "tsi" uses a different coarse-scale problem size.
The efficiency remains above 50% up to $10^{6}$ dofs and drops to 17% for $7.8\times 10^{7}$ dofs, which in itself is positive (i.e., the efficiency does not drop below 5% immediately).
Compared to the other solvers, the "tsi" solver offers a better efficiency.
Note that for the "dd" solver, the elapsed time reference for a single process is that of "fr" because the implementation does not handle the single domain case.
Thus, the weak scaling efficiency reported for "dd" should certainly be shifted upwards.
We now analyze the quality of the solver using the following relative errors:
\begin{equation}
\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right) =\frac{\left\| \vm{u}^{ts}-\vm{u}^C \right\|_{E_{\Omega}}}{\left\| \vm{u}^C \right\|_{E_{\Omega}}} ~\text{and}~ \mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right) =\frac{\left\| \vm{u}^{ts}-\vm{u}^R \right\|_{E_{\Omega}}}{\left\| \vm{u}^R \right\|_{E_{\Omega}}}
\label{TSvsA}
\end{equation}
where $\vm{u}^C$ is the analytical solution given by (<ref>) (equivalent to the continuous solution of section <ref>) and $\vm{u}^R$ is the reference solution obtained by solving (<ref>) with an auxiliary computation ("fr" or "dd" with $\epsilon=10^{-13}$).
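These relative errors are straightforward to evaluate once the energy norm is available. The sketch below assumes the norm $\left\|\cdot\right\|_{E_{\Omega}}$ is induced by an SPD stiffness matrix $K$, i.e. $\left\|u\right\|_{E}=\sqrt{u^{\top}Ku}$; the matrix name and helper functions are illustrative, not from the paper:

```python
import numpy as np

def energy_norm(u, K):
    """Energy norm ||u||_E = sqrt(u^T K u) induced by an SPD matrix K."""
    return np.sqrt(u @ (K @ u))

def relative_error(u_ts, u_ref, K):
    """Relative error E(u_ts, u_ref) = ||u_ts - u_ref||_E / ||u_ref||_E."""
    return energy_norm(u_ts - u_ref, K) / energy_norm(u_ref, K)

# Toy example: 2x2 SPD matrix and a uniform 1% perturbation of the reference.
K = np.array([[2.0, 0.0], [0.0, 1.0]])
u_ref = np.array([1.0, 1.0])
u_ts = 1.01 * u_ref
print(relative_error(u_ts, u_ref, K))  # ≈ 0.01 for a uniform 1% perturbation
```

The same function yields $\mathcal{E}\left(\vm{u}^{ts},\vm{u}^C\right)$ or $\mathcal{E}\left(\vm{u}^{ts},\vm{u}^R\right)$ depending on which reference vector is passed.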
Figure <ref> shows $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$ in the left graph, $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ in the right graph, and, in the central graph, the residual error $\frac{\left \| \vm{A}_{rr}\cdot \vm{u}_{r}-\vm{B}_{r} \right \|}{\left \|\vm{B}_{r} \right \|}$ ($resi$) provided by algorithm <ref>.
[Figure (left graph): relative error $\mathcal{E}\left( u^{ts},u^C\right)$ versus number of TS iterations (0 to 30; error axis in log scale, between 1e-03 and 1e-02), curves "L2 from L0", "L3 from L0", "L3 from L1", "L4 from L0", "L4 from L1"; each curve stagnates after a few iterations.]
color=gp lt color border
[gp node right] at (2.250,5.030) L5 from L0;
rgb color=0.580,0.000,0.827
[gp path] (2.360,5.030)–(2.980,5.030);
[gp path] (1.010,3.397)–(1.153,2.536)–(1.296,2.084)–(1.438,1.953)–(1.581,1.925)
3pointgp mark 6(1.010,3.397)
3pointgp mark 6(1.153,2.536)
3pointgp mark 6(1.296,2.084)
3pointgp mark 6(1.438,1.953)
3pointgp mark 6(1.581,1.925)
3pointgp mark 6(1.724,1.919)
3pointgp mark 6(1.867,1.918)
3pointgp mark 6(2.010,1.918)
3pointgp mark 6(2.152,1.918)
3pointgp mark 6(2.295,1.918)
3pointgp mark 6(2.438,1.918)
3pointgp mark 6(2.581,1.918)
3pointgp mark 6(2.724,1.918)
3pointgp mark 6(2.866,1.918)
3pointgp mark 6(3.009,1.918)
3pointgp mark 6(3.152,1.918)
3pointgp mark 6(3.295,1.918)
3pointgp mark 6(3.438,1.918)
3pointgp mark 6(3.580,1.918)
3pointgp mark 6(3.723,1.918)
3pointgp mark 6(2.670,5.030)
color=gp lt color border
[gp node right] at (4.499,5.030) L5 from L1;
rgb color=0.784,0.784,0.000
[gp path] (4.609,5.030)–(5.229,5.030);
[gp path] (1.010,2.889)–(1.153,2.175)–(1.296,1.972)–(1.438,1.930)–(1.581,1.921)
3pointgp mark 6(1.010,2.889)
3pointgp mark 6(1.153,2.175)
3pointgp mark 6(1.296,1.972)
3pointgp mark 6(1.438,1.930)
3pointgp mark 6(1.581,1.921)
3pointgp mark 6(1.724,1.919)
3pointgp mark 6(1.867,1.918)
3pointgp mark 6(2.010,1.918)
3pointgp mark 6(2.152,1.918)
3pointgp mark 6(2.295,1.918)
3pointgp mark 6(2.438,1.918)
3pointgp mark 6(2.581,1.918)
3pointgp mark 6(2.724,1.918)
3pointgp mark 6(2.866,1.918)
3pointgp mark 6(3.009,1.918)
3pointgp mark 6(3.152,1.918)
3pointgp mark 6(3.295,1.918)
3pointgp mark 6(3.438,1.918)
3pointgp mark 6(3.580,1.918)
3pointgp mark 6(3.723,1.918)
3pointgp mark 6(3.866,1.918)
3pointgp mark 6(4.009,1.918)
3pointgp mark 6(4.152,1.918)
3pointgp mark 6(4.294,1.918)
3pointgp mark 6(4.437,1.918)
3pointgp mark 6(4.919,5.030)
rgb color=0.000,0.392,0.000
[gp path] (1.010,2.399)–(1.153,1.980)–(1.296,1.928)–(1.438,1.920)–(1.581,1.918)
3pointgp mark 6(1.010,2.399)
3pointgp mark 6(1.153,1.980)
3pointgp mark 6(1.296,1.928)
3pointgp mark 6(1.438,1.920)
3pointgp mark 6(1.581,1.918)
3pointgp mark 6(1.724,1.918)
3pointgp mark 6(1.867,1.918)
3pointgp mark 6(2.010,1.918)
3pointgp mark 6(2.152,1.918)
3pointgp mark 6(2.295,1.918)
3pointgp mark 6(2.438,1.918)
3pointgp mark 6(2.581,1.918)
3pointgp mark 6(2.724,1.918)
3pointgp mark 6(2.866,1.918)
3pointgp mark 6(3.009,1.918)
3pointgp mark 6(3.152,1.918)
3pointgp mark 6(3.295,1.918)
3pointgp mark 6(3.438,1.918)
3pointgp mark 6(3.580,1.918)
3pointgp mark 6(3.723,1.918)
3pointgp mark 6(3.866,1.918)
3pointgp mark 6(4.009,1.918)
3pointgp mark 6(4.152,1.918)
3pointgp mark 6(4.294,1.918)
3pointgp mark 6(4.437,1.918)
3pointgp mark 6(4.580,1.918)
3pointgp mark 6(4.723,1.918)
3pointgp mark 6(4.866,1.918)
rgb color=0.545,0.000,0.000
[gp path] (1.010,2.090)–(1.153,1.932)–(1.296,1.920)–(1.438,1.918)–(1.581,1.918)
3pointgp mark 8(1.010,2.090)
3pointgp mark 8(1.153,1.932)
3pointgp mark 8(1.296,1.920)
3pointgp mark 8(1.438,1.918)
3pointgp mark 8(1.581,1.918)
3pointgp mark 8(1.724,1.918)
3pointgp mark 8(1.867,1.918)
3pointgp mark 8(2.010,1.918)
3pointgp mark 8(2.152,1.918)
3pointgp mark 8(2.295,1.918)
3pointgp mark 8(2.438,1.918)
3pointgp mark 8(2.581,1.918)
3pointgp mark 8(2.724,1.918)
3pointgp mark 8(2.866,1.918)
3pointgp mark 8(3.009,1.918)
3pointgp mark 8(3.152,1.918)
3pointgp mark 8(3.295,1.918)
3pointgp mark 8(3.438,1.918)
3pointgp mark 8(3.580,1.918)
3pointgp mark 8(3.723,1.918)
3pointgp mark 8(3.866,1.918)
3pointgp mark 8(4.009,1.918)
3pointgp mark 8(4.152,1.918)
3pointgp mark 8(4.294,1.918)
3pointgp mark 8(4.437,1.918)
3pointgp mark 8(4.580,1.918)
3pointgp mark 8(4.723,1.918)
3pointgp mark 8(4.866,1.918)
3pointgp mark 8(5.008,1.918)
3pointgp mark 8(5.151,1.918)
rgb color=0.000,0.392,0.000
[gp path] (1.010,2.318)–(1.153,1.594)–(1.296,1.428)–(1.438,1.398)–(1.581,1.391)
3pointgp mark 13(1.010,2.318)
3pointgp mark 13(1.153,1.594)
3pointgp mark 13(1.296,1.428)
3pointgp mark 13(1.438,1.398)
3pointgp mark 13(1.581,1.391)
3pointgp mark 13(1.724,1.389)
3pointgp mark 13(1.867,1.389)
3pointgp mark 13(2.010,1.389)
3pointgp mark 13(2.152,1.389)
3pointgp mark 13(2.295,1.389)
3pointgp mark 13(2.438,1.389)
3pointgp mark 13(2.581,1.389)
3pointgp mark 13(2.724,1.389)
3pointgp mark 13(2.866,1.389)
3pointgp mark 13(3.009,1.389)
3pointgp mark 13(3.152,1.389)
3pointgp mark 13(3.295,1.389)
3pointgp mark 13(3.438,1.389)
3pointgp mark 13(3.580,1.389)
3pointgp mark 13(3.723,1.389)
3pointgp mark 13(3.866,1.389)
3pointgp mark 13(4.009,1.389)
3pointgp mark 13(4.152,1.389)
3pointgp mark 13(4.294,1.389)
3pointgp mark 13(4.437,1.389)
3pointgp mark 13(4.580,1.389)
3pointgp mark 13(4.723,1.389)
3pointgp mark 13(4.866,1.389)
rgb color=0.545,0.000,0.000
[gp path] (1.010,1.855)–(1.153,1.442)–(1.296,1.397)–(1.438,1.391)–(1.581,1.389)
3pointgp mark 13(1.010,1.855)
3pointgp mark 13(1.153,1.442)
3pointgp mark 13(1.296,1.397)
3pointgp mark 13(1.438,1.391)
3pointgp mark 13(1.581,1.389)
3pointgp mark 13(1.724,1.389)
3pointgp mark 13(1.867,1.389)
3pointgp mark 13(2.010,1.389)
3pointgp mark 13(2.152,1.388)
3pointgp mark 13(2.295,1.388)
3pointgp mark 13(2.438,1.388)
3pointgp mark 13(2.581,1.388)
3pointgp mark 13(2.724,1.388)
3pointgp mark 13(2.866,1.388)
3pointgp mark 13(3.009,1.388)
3pointgp mark 13(3.152,1.388)
3pointgp mark 13(3.295,1.388)
3pointgp mark 13(3.438,1.388)
3pointgp mark 13(3.580,1.388)
3pointgp mark 13(3.723,1.388)
3pointgp mark 13(3.866,1.388)
3pointgp mark 13(4.009,1.388)
3pointgp mark 13(4.152,1.388)
3pointgp mark 13(4.294,1.388)
3pointgp mark 13(4.437,1.388)
3pointgp mark 13(4.580,1.388)
3pointgp mark 13(4.723,1.388)
3pointgp mark 13(4.866,1.388)
3pointgp mark 13(5.008,1.388)
rgb color=0.000,0.000,0.545
[gp path] (1.010,1.317)–(1.153,0.909)–(1.296,0.868)–(1.438,0.862)–(1.581,0.861)
3pointgp mark 11(1.010,1.317)
3pointgp mark 11(1.153,0.909)
3pointgp mark 11(1.296,0.868)
3pointgp mark 11(1.438,0.862)
3pointgp mark 11(1.581,0.861)
3pointgp mark 11(1.724,0.860)
3pointgp mark 11(1.867,0.860)
3pointgp mark 11(2.010,0.860)
3pointgp mark 11(2.152,0.860)
3pointgp mark 11(2.295,0.860)
3pointgp mark 11(2.438,0.860)
3pointgp mark 11(2.581,0.860)
3pointgp mark 11(2.724,0.860)
3pointgp mark 11(2.866,0.860)
3pointgp mark 11(3.009,0.860)
3pointgp mark 11(3.152,0.860)
3pointgp mark 11(3.295,0.860)
3pointgp mark 11(3.438,0.860)
3pointgp mark 11(3.580,0.860)
3pointgp mark 11(3.723,0.860)
3pointgp mark 11(3.866,0.860)
3pointgp mark 11(4.009,0.860)
3pointgp mark 11(4.152,0.860)
3pointgp mark 11(4.294,0.860)
3pointgp mark 11(4.437,0.860)
3pointgp mark 11(4.580,0.860)
color=gp lt color border
[gp path] (1.010,4.189)–(1.010,0.592)–(5.294,0.592)–(5.294,4.189)–cycle;
gp plot 11.010cm0.592cm5.294cm4.189cm
every node/.append style=scale=0.60
(0.000,0.000) rectangle (5.625,4.375);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,0.592)–(1.100,0.592);
[gp path] (5.294,0.592)–(5.204,0.592);
[gp path] (1.010,0.628)–(1.100,0.628);
[gp path] (5.294,0.628)–(5.204,0.628);
[gp path] (1.010,0.658)–(1.100,0.658);
[gp path] (5.294,0.658)–(5.204,0.658);
[gp path] (1.010,0.684)–(1.100,0.684);
[gp path] (5.294,0.684)–(5.204,0.684);
[gp path] (1.010,0.707)–(1.100,0.707);
[gp path] (5.294,0.707)–(5.204,0.707);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,0.727)–(5.294,0.727);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,0.727)–(1.190,0.727);
[gp path] (5.294,0.727)–(5.114,0.727);
[gp node right] at (0.900,0.727) 1e-08;
[gp path] (1.010,0.863)–(1.100,0.863);
[gp path] (5.294,0.863)–(5.204,0.863);
[gp path] (1.010,0.942)–(1.100,0.942);
[gp path] (5.294,0.942)–(5.204,0.942);
[gp path] (1.010,0.998)–(1.100,0.998);
[gp path] (5.294,0.998)–(5.204,0.998);
[gp path] (1.010,1.042)–(1.100,1.042);
[gp path] (5.294,1.042)–(5.204,1.042);
[gp path] (1.010,1.077)–(1.100,1.077);
[gp path] (5.294,1.077)–(5.204,1.077);
[gp path] (1.010,1.107)–(1.100,1.107);
[gp path] (5.294,1.107)–(5.204,1.107);
[gp path] (1.010,1.133)–(1.100,1.133);
[gp path] (5.294,1.133)–(5.204,1.133);
[gp path] (1.010,1.156)–(1.100,1.156);
[gp path] (5.294,1.156)–(5.204,1.156);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,1.177)–(5.294,1.177);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,1.177)–(1.190,1.177);
[gp path] (5.294,1.177)–(5.114,1.177);
[gp node right] at (0.900,1.177) 1e-07;
[gp path] (1.010,1.312)–(1.100,1.312);
[gp path] (5.294,1.312)–(5.204,1.312);
[gp path] (1.010,1.392)–(1.100,1.392);
[gp path] (5.294,1.392)–(5.204,1.392);
[gp path] (1.010,1.448)–(1.100,1.448);
[gp path] (5.294,1.448)–(5.204,1.448);
[gp path] (1.010,1.491)–(1.100,1.491);
[gp path] (5.294,1.491)–(5.204,1.491);
[gp path] (1.010,1.527)–(1.100,1.527);
[gp path] (5.294,1.527)–(5.204,1.527);
[gp path] (1.010,1.557)–(1.100,1.557);
[gp path] (5.294,1.557)–(5.204,1.557);
[gp path] (1.010,1.583)–(1.100,1.583);
[gp path] (5.294,1.583)–(5.204,1.583);
[gp path] (1.010,1.606)–(1.100,1.606);
[gp path] (5.294,1.606)–(5.204,1.606);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,1.627)–(5.294,1.627);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,1.627)–(1.190,1.627);
[gp path] (5.294,1.627)–(5.114,1.627);
[gp node right] at (0.900,1.627) 1e-06;
[gp path] (1.010,1.762)–(1.100,1.762);
[gp path] (5.294,1.762)–(5.204,1.762);
[gp path] (1.010,1.841)–(1.100,1.841);
[gp path] (5.294,1.841)–(5.204,1.841);
[gp path] (1.010,1.897)–(1.100,1.897);
[gp path] (5.294,1.897)–(5.204,1.897);
[gp path] (1.010,1.941)–(1.100,1.941);
[gp path] (5.294,1.941)–(5.204,1.941);
[gp path] (1.010,1.976)–(1.100,1.976);
[gp path] (5.294,1.976)–(5.204,1.976);
[gp path] (1.010,2.007)–(1.100,2.007);
[gp path] (5.294,2.007)–(5.204,2.007);
[gp path] (1.010,2.033)–(1.100,2.033);
[gp path] (5.294,2.033)–(5.204,2.033);
[gp path] (1.010,2.056)–(1.100,2.056);
[gp path] (5.294,2.056)–(5.204,2.056);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,2.076)–(5.294,2.076);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,2.076)–(1.190,2.076);
[gp path] (5.294,2.076)–(5.114,2.076);
[gp node right] at (0.900,2.076) 1e-05;
[gp path] (1.010,2.212)–(1.100,2.212);
[gp path] (5.294,2.212)–(5.204,2.212);
[gp path] (1.010,2.291)–(1.100,2.291);
[gp path] (5.294,2.291)–(5.204,2.291);
[gp path] (1.010,2.347)–(1.100,2.347);
[gp path] (5.294,2.347)–(5.204,2.347);
[gp path] (1.010,2.391)–(1.100,2.391);
[gp path] (5.294,2.391)–(5.204,2.391);
[gp path] (1.010,2.426)–(1.100,2.426);
[gp path] (5.294,2.426)–(5.204,2.426);
[gp path] (1.010,2.456)–(1.100,2.456);
[gp path] (5.294,2.456)–(5.204,2.456);
[gp path] (1.010,2.482)–(1.100,2.482);
[gp path] (5.294,2.482)–(5.204,2.482);
[gp path] (1.010,2.505)–(1.100,2.505);
[gp path] (5.294,2.505)–(5.204,2.505);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,2.526)–(5.294,2.526);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,2.526)–(1.190,2.526);
[gp path] (5.294,2.526)–(5.114,2.526);
[gp node right] at (0.900,2.526) 1e-04;
[gp path] (1.010,2.661)–(1.100,2.661);
[gp path] (5.294,2.661)–(5.204,2.661);
[gp path] (1.010,2.740)–(1.100,2.740);
[gp path] (5.294,2.740)–(5.204,2.740);
[gp path] (1.010,2.797)–(1.100,2.797);
[gp path] (5.294,2.797)–(5.204,2.797);
[gp path] (1.010,2.840)–(1.100,2.840);
[gp path] (5.294,2.840)–(5.204,2.840);
[gp path] (1.010,2.876)–(1.100,2.876);
[gp path] (5.294,2.876)–(5.204,2.876);
[gp path] (1.010,2.906)–(1.100,2.906);
[gp path] (5.294,2.906)–(5.204,2.906);
[gp path] (1.010,2.932)–(1.100,2.932);
[gp path] (5.294,2.932)–(5.204,2.932);
[gp path] (1.010,2.955)–(1.100,2.955);
[gp path] (5.294,2.955)–(5.204,2.955);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,2.975)–(5.294,2.975);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,2.975)–(1.190,2.975);
[gp path] (5.294,2.975)–(5.114,2.975);
[gp node right] at (0.900,2.975) 1e-03;
[gp path] (1.010,3.111)–(1.100,3.111);
[gp path] (5.294,3.111)–(5.204,3.111);
[gp path] (1.010,3.190)–(1.100,3.190);
[gp path] (5.294,3.190)–(5.204,3.190);
[gp path] (1.010,3.246)–(1.100,3.246);
[gp path] (5.294,3.246)–(5.204,3.246);
[gp path] (1.010,3.290)–(1.100,3.290);
[gp path] (5.294,3.290)–(5.204,3.290);
[gp path] (1.010,3.325)–(1.100,3.325);
[gp path] (5.294,3.325)–(5.204,3.325);
[gp path] (1.010,3.355)–(1.100,3.355);
[gp path] (5.294,3.355)–(5.204,3.355);
[gp path] (1.010,3.382)–(1.100,3.382);
[gp path] (5.294,3.382)–(5.204,3.382);
[gp path] (1.010,3.405)–(1.100,3.405);
[gp path] (5.294,3.405)–(5.204,3.405);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,3.425)–(5.294,3.425);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,3.425)–(1.190,3.425);
[gp path] (5.294,3.425)–(5.114,3.425);
[gp node right] at (0.900,3.425) 1e-02;
[gp path] (1.010,3.560)–(1.100,3.560);
[gp path] (5.294,3.560)–(5.204,3.560);
[gp path] (1.010,3.640)–(1.100,3.640);
[gp path] (5.294,3.640)–(5.204,3.640);
[gp path] (1.010,3.696)–(1.100,3.696);
[gp path] (5.294,3.696)–(5.204,3.696);
[gp path] (1.010,3.739)–(1.100,3.739);
[gp path] (5.294,3.739)–(5.204,3.739);
[gp path] (1.010,3.775)–(1.100,3.775);
[gp path] (5.294,3.775)–(5.204,3.775);
[gp path] (1.010,3.805)–(1.100,3.805);
[gp path] (5.294,3.805)–(5.204,3.805);
[gp path] (1.010,3.831)–(1.100,3.831);
[gp path] (5.294,3.831)–(5.204,3.831);
[gp path] (1.010,3.854)–(1.100,3.854);
[gp path] (5.294,3.854)–(5.204,3.854);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,3.875)–(5.294,3.875);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,3.875)–(1.190,3.875);
[gp path] (5.294,3.875)–(5.114,3.875);
[gp node right] at (0.900,3.875) 1e-01;
[gp path] (1.010,4.010)–(1.100,4.010);
[gp path] (5.294,4.010)–(5.204,4.010);
[gp path] (1.010,4.089)–(1.100,4.089);
[gp path] (5.294,4.089)–(5.204,4.089);
[gp path] (1.010,4.145)–(1.100,4.145);
[gp path] (5.294,4.145)–(5.204,4.145);
[gp path] (1.010,4.189)–(1.100,4.189);
[gp path] (5.294,4.189)–(5.204,4.189);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,0.592)–(1.010,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,0.592)–(1.010,0.772);
[gp path] (1.010,4.189)–(1.010,4.009);
[gp node center] at (1.010,0.407) $0$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.724,0.592)–(1.724,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.724,0.592)–(1.724,0.772);
[gp path] (1.724,4.189)–(1.724,4.009);
[gp node center] at (1.724,0.407) $5$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.438,0.592)–(2.438,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.438,0.592)–(2.438,0.772);
[gp path] (2.438,4.189)–(2.438,4.009);
[gp node center] at (2.438,0.407) $10$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.152,0.592)–(3.152,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.152,0.592)–(3.152,0.772);
[gp path] (3.152,4.189)–(3.152,4.009);
[gp node center] at (3.152,0.407) $15$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.866,0.592)–(3.866,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.866,0.592)–(3.866,0.772);
[gp path] (3.866,4.189)–(3.866,4.009);
[gp node center] at (3.866,0.407) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.580,0.592)–(4.580,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.580,0.592)–(4.580,0.772);
[gp path] (4.580,4.189)–(4.580,4.009);
[gp node center] at (4.580,0.407) $25$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.294,0.592)–(5.294,4.189);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.294,0.592)–(5.294,0.772);
[gp path] (5.294,4.189)–(5.294,4.009);
[gp node center] at (5.294,0.407) $30$;
[gp path] (1.010,4.189)–(1.010,0.592)–(5.294,0.592)–(5.294,4.189)–cycle;
[gp node center,rotate=-270] at (0.175,2.390) $ \frac{\left \| \vm{A}_{rr} \cdot \vm{u}_{r}- \vm{B}_{r} \right \|}{\left \| \vm{B}_{r} \right \|} $;
[gp node center] at (3.152,0.130) number of TS iterations;
rgb color=0.580,0.000,0.827
[gp path] (1.010,3.880)–(1.153,3.600)–(1.296,3.381)–(1.438,3.202)–(1.581,3.043)
3pointgp mark 1(1.010,3.880)
3pointgp mark 1(1.153,3.600)
3pointgp mark 1(1.296,3.381)
3pointgp mark 1(1.438,3.202)
3pointgp mark 1(1.581,3.043)
3pointgp mark 1(1.724,2.891)
3pointgp mark 1(1.867,2.746)
3pointgp mark 1(2.010,2.604)
3pointgp mark 1(2.152,2.465)
3pointgp mark 1(2.295,2.328)
3pointgp mark 1(2.438,2.193)
3pointgp mark 1(2.581,2.060)
3pointgp mark 1(2.724,1.927)
3pointgp mark 1(2.866,1.796)
3pointgp mark 1(3.009,1.666)
3pointgp mark 1(3.152,1.537)
3pointgp mark 1(3.295,1.410)
3pointgp mark 1(3.438,1.283)
3pointgp mark 1(3.580,1.157)
[gp path] (1.010,3.906)–(1.153,3.626)–(1.296,3.405)–(1.438,3.225)–(1.581,3.064)
3pointgp mark 2(1.010,3.906)
3pointgp mark 2(1.153,3.626)
3pointgp mark 2(1.296,3.405)
3pointgp mark 2(1.438,3.225)
3pointgp mark 2(1.581,3.064)
3pointgp mark 2(1.724,2.913)
3pointgp mark 2(1.867,2.768)
3pointgp mark 2(2.010,2.627)
3pointgp mark 2(2.152,2.489)
3pointgp mark 2(2.295,2.354)
3pointgp mark 2(2.438,2.220)
3pointgp mark 2(2.581,2.088)
3pointgp mark 2(2.724,1.958)
3pointgp mark 2(2.866,1.829)
3pointgp mark 2(3.009,1.701)
3pointgp mark 2(3.152,1.574)
3pointgp mark 2(3.295,1.449)
3pointgp mark 2(3.438,1.324)
3pointgp mark 2(3.580,1.200)
3pointgp mark 2(3.723,1.077)
rgb color=0.784,0.784,0.000
[gp path] (1.010,3.826)–(1.153,3.543)–(1.296,3.331)–(1.438,3.167)–(1.581,3.023)
3pointgp mark 3(1.010,3.826)
3pointgp mark 3(1.153,3.543)
3pointgp mark 3(1.296,3.331)
3pointgp mark 3(1.438,3.167)
3pointgp mark 3(1.581,3.023)
3pointgp mark 3(1.724,2.893)
3pointgp mark 3(1.867,2.773)
3pointgp mark 3(2.010,2.662)
3pointgp mark 3(2.152,2.557)
3pointgp mark 3(2.295,2.458)
3pointgp mark 3(2.438,2.363)
3pointgp mark 3(2.581,2.270)
3pointgp mark 3(2.724,2.179)
3pointgp mark 3(2.866,2.090)
3pointgp mark 3(3.009,2.002)
3pointgp mark 3(3.152,1.914)
3pointgp mark 3(3.295,1.827)
3pointgp mark 3(3.438,1.740)
3pointgp mark 3(3.580,1.654)
3pointgp mark 3(3.723,1.567)
3pointgp mark 3(3.866,1.481)
3pointgp mark 3(4.009,1.395)
3pointgp mark 3(4.152,1.310)
3pointgp mark 3(4.294,1.224)
3pointgp mark 3(4.437,1.138)
rgb color=0.580,0.000,0.827
[gp path] (1.010,3.919)–(1.153,3.637)–(1.296,3.412)–(1.438,3.231)–(1.581,3.070)
3pointgp mark 4(1.010,3.919)
3pointgp mark 4(1.153,3.637)
3pointgp mark 4(1.296,3.412)
3pointgp mark 4(1.438,3.231)
3pointgp mark 4(1.581,3.070)
3pointgp mark 4(1.724,2.918)
3pointgp mark 4(1.867,2.772)
3pointgp mark 4(2.010,2.631)
3pointgp mark 4(2.152,2.493)
3pointgp mark 4(2.295,2.358)
3pointgp mark 4(2.438,2.225)
3pointgp mark 4(2.581,2.093)
3pointgp mark 4(2.724,1.963)
3pointgp mark 4(2.866,1.834)
3pointgp mark 4(3.009,1.707)
3pointgp mark 4(3.152,1.580)
3pointgp mark 4(3.295,1.455)
3pointgp mark 4(3.438,1.331)
3pointgp mark 4(3.580,1.207)
3pointgp mark 4(3.723,1.085)
rgb color=0.784,0.784,0.000
[gp path] (1.010,3.852)–(1.153,3.559)–(1.296,3.343)–(1.438,3.180)–(1.581,3.037)
3pointgp mark 5(1.010,3.852)
3pointgp mark 5(1.153,3.559)
3pointgp mark 5(1.296,3.343)
3pointgp mark 5(1.438,3.180)
3pointgp mark 5(1.581,3.037)
3pointgp mark 5(1.724,2.907)
3pointgp mark 5(1.867,2.786)
3pointgp mark 5(2.010,2.674)
3pointgp mark 5(2.152,2.568)
3pointgp mark 5(2.295,2.467)
3pointgp mark 5(2.438,2.371)
3pointgp mark 5(2.581,2.277)
3pointgp mark 5(2.724,2.186)
3pointgp mark 5(2.866,2.096)
3pointgp mark 5(3.009,2.008)
3pointgp mark 5(3.152,1.921)
3pointgp mark 5(3.295,1.834)
3pointgp mark 5(3.438,1.748)
3pointgp mark 5(3.580,1.662)
3pointgp mark 5(3.723,1.577)
3pointgp mark 5(3.866,1.492)
3pointgp mark 5(4.009,1.407)
3pointgp mark 5(4.152,1.322)
3pointgp mark 5(4.294,1.237)
3pointgp mark 5(4.437,1.152)
rgb color=0.580,0.000,0.827
[gp path] (1.010,3.926)–(1.153,3.642)–(1.296,3.415)–(1.438,3.233)–(1.581,3.071)
3pointgp mark 6(1.010,3.926)
3pointgp mark 6(1.153,3.642)
3pointgp mark 6(1.296,3.415)
3pointgp mark 6(1.438,3.233)
3pointgp mark 6(1.581,3.071)
3pointgp mark 6(1.724,2.919)
3pointgp mark 6(1.867,2.773)
3pointgp mark 6(2.010,2.632)
3pointgp mark 6(2.152,2.494)
3pointgp mark 6(2.295,2.358)
3pointgp mark 6(2.438,2.225)
3pointgp mark 6(2.581,2.094)
3pointgp mark 6(2.724,1.963)
3pointgp mark 6(2.866,1.835)
3pointgp mark 6(3.009,1.707)
3pointgp mark 6(3.152,1.581)
3pointgp mark 6(3.295,1.455)
3pointgp mark 6(3.438,1.331)
3pointgp mark 6(3.580,1.207)
3pointgp mark 6(3.723,1.085)
rgb color=0.784,0.784,0.000
[gp path] (1.010,3.864)–(1.153,3.565)–(1.296,3.346)–(1.438,3.182)–(1.581,3.040)
3pointgp mark 6(1.010,3.864)
3pointgp mark 6(1.153,3.565)
3pointgp mark 6(1.296,3.346)
3pointgp mark 6(1.438,3.182)
3pointgp mark 6(1.581,3.040)
3pointgp mark 6(1.724,2.909)
3pointgp mark 6(1.867,2.788)
3pointgp mark 6(2.010,2.675)
3pointgp mark 6(2.152,2.569)
3pointgp mark 6(2.295,2.468)
3pointgp mark 6(2.438,2.371)
3pointgp mark 6(2.581,2.277)
3pointgp mark 6(2.724,2.185)
3pointgp mark 6(2.866,2.095)
3pointgp mark 6(3.009,2.007)
3pointgp mark 6(3.152,1.919)
3pointgp mark 6(3.295,1.833)
3pointgp mark 6(3.438,1.746)
3pointgp mark 6(3.580,1.661)
3pointgp mark 6(3.723,1.575)
3pointgp mark 6(3.866,1.490)
3pointgp mark 6(4.009,1.405)
3pointgp mark 6(4.152,1.320)
3pointgp mark 6(4.294,1.236)
3pointgp mark 6(4.437,1.151)
color=gp lt color border
[gp node right] at (2.250,5.030) L5 from L2;
rgb color=0.000,0.392,0.000
[gp path] (2.360,5.030)–(2.980,5.030);
[gp path] (1.010,3.784)–(1.153,3.474)–(1.296,3.249)–(1.438,3.089)–(1.581,2.955)
3pointgp mark 6(1.010,3.784)
3pointgp mark 6(1.153,3.474)
3pointgp mark 6(1.296,3.249)
3pointgp mark 6(1.438,3.089)
3pointgp mark 6(1.581,2.955)
3pointgp mark 6(1.724,2.833)
3pointgp mark 6(1.867,2.719)
3pointgp mark 6(2.010,2.611)
3pointgp mark 6(2.152,2.507)
3pointgp mark 6(2.295,2.407)
3pointgp mark 6(2.438,2.312)
3pointgp mark 6(2.581,2.220)
3pointgp mark 6(2.724,2.132)
3pointgp mark 6(2.866,2.049)
3pointgp mark 6(3.009,1.969)
3pointgp mark 6(3.152,1.894)
3pointgp mark 6(3.295,1.822)
3pointgp mark 6(3.438,1.753)
3pointgp mark 6(3.580,1.688)
3pointgp mark 6(3.723,1.625)
3pointgp mark 6(3.866,1.564)
3pointgp mark 6(4.009,1.505)
3pointgp mark 6(4.152,1.447)
3pointgp mark 6(4.294,1.390)
3pointgp mark 6(4.437,1.335)
3pointgp mark 6(4.580,1.280)
3pointgp mark 6(4.723,1.225)
3pointgp mark 6(4.866,1.171)
3pointgp mark 6(2.670,5.030)
color=gp lt color border
[gp node right] at (4.499,5.030) L5 from L3;
rgb color=0.545,0.000,0.000
[gp path] (4.609,5.030)–(5.229,5.030);
[gp path] (1.010,3.693)–(1.153,3.387)–(1.296,3.170)–(1.438,3.019)–(1.581,2.896)
3pointgp mark 6(1.010,3.693)
3pointgp mark 6(1.153,3.387)
3pointgp mark 6(1.296,3.170)
3pointgp mark 6(1.438,3.019)
3pointgp mark 6(1.581,2.896)
3pointgp mark 6(1.724,2.785)
3pointgp mark 6(1.867,2.681)
3pointgp mark 6(2.010,2.581)
3pointgp mark 6(2.152,2.484)
3pointgp mark 6(2.295,2.389)
3pointgp mark 6(2.438,2.296)
3pointgp mark 6(2.581,2.206)
3pointgp mark 6(2.724,2.120)
3pointgp mark 6(2.866,2.036)
3pointgp mark 6(3.009,1.957)
3pointgp mark 6(3.152,1.881)
3pointgp mark 6(3.295,1.811)
3pointgp mark 6(3.438,1.745)
3pointgp mark 6(3.580,1.683)
3pointgp mark 6(3.723,1.625)
3pointgp mark 6(3.866,1.570)
3pointgp mark 6(4.009,1.519)
3pointgp mark 6(4.152,1.470)
3pointgp mark 6(4.294,1.424)
3pointgp mark 6(4.437,1.379)
3pointgp mark 6(4.580,1.335)
3pointgp mark 6(4.723,1.293)
3pointgp mark 6(4.866,1.252)
3pointgp mark 6(5.008,1.212)
3pointgp mark 6(5.151,1.172)
3pointgp mark 6(4.919,5.030)
color=gp lt color border
[gp node right] at (2.250,5.249) L6 from L2;
rgb color=0.000,0.392,0.000
[gp path] (2.360,5.249)–(2.980,5.249);
[gp path] (1.010,3.794)–(1.153,3.481)–(1.296,3.249)–(1.438,3.088)–(1.581,2.953)
3pointgp mark 13(1.010,3.794)
3pointgp mark 13(1.153,3.481)
3pointgp mark 13(1.296,3.249)
3pointgp mark 13(1.438,3.088)
3pointgp mark 13(1.581,2.953)
3pointgp mark 13(1.724,2.831)
3pointgp mark 13(1.867,2.716)
3pointgp mark 13(2.010,2.607)
3pointgp mark 13(2.152,2.503)
3pointgp mark 13(2.295,2.403)
3pointgp mark 13(2.438,2.308)
3pointgp mark 13(2.581,2.216)
3pointgp mark 13(2.724,2.128)
3pointgp mark 13(2.866,2.044)
3pointgp mark 13(3.009,1.964)
3pointgp mark 13(3.152,1.888)
3pointgp mark 13(3.295,1.815)
3pointgp mark 13(3.438,1.746)
3pointgp mark 13(3.580,1.680)
3pointgp mark 13(3.723,1.616)
3pointgp mark 13(3.866,1.554)
3pointgp mark 13(4.009,1.495)
3pointgp mark 13(4.152,1.436)
3pointgp mark 13(4.294,1.379)
3pointgp mark 13(4.437,1.323)
3pointgp mark 13(4.580,1.268)
3pointgp mark 13(4.723,1.213)
3pointgp mark 13(4.866,1.158)
3pointgp mark 13(2.670,5.249)
color=gp lt color border
[gp node right] at (4.499,5.249) L6 from L3;
rgb color=0.545,0.000,0.000
[gp path] (4.609,5.249)–(5.229,5.249);
[gp path] (1.010,3.715)–(1.153,3.399)–(1.296,3.169)–(1.438,3.012)–(1.581,2.884)
3pointgp mark 13(1.010,3.715)
3pointgp mark 13(1.153,3.399)
3pointgp mark 13(1.296,3.169)
3pointgp mark 13(1.438,3.012)
3pointgp mark 13(1.581,2.884)
3pointgp mark 13(1.724,2.769)
3pointgp mark 13(1.867,2.660)
3pointgp mark 13(2.010,2.555)
3pointgp mark 13(2.152,2.453)
3pointgp mark 13(2.295,2.354)
3pointgp mark 13(2.438,2.258)
3pointgp mark 13(2.581,2.165)
3pointgp mark 13(2.724,2.076)
3pointgp mark 13(2.866,1.990)
3pointgp mark 13(3.009,1.909)
3pointgp mark 13(3.152,1.833)
3pointgp mark 13(3.295,1.761)
3pointgp mark 13(3.438,1.694)
3pointgp mark 13(3.580,1.631)
3pointgp mark 13(3.723,1.572)
3pointgp mark 13(3.866,1.517)
3pointgp mark 13(4.009,1.465)
3pointgp mark 13(4.152,1.415)
3pointgp mark 13(4.294,1.368)
3pointgp mark 13(4.437,1.322)
3pointgp mark 13(4.580,1.278)
3pointgp mark 13(4.723,1.236)
3pointgp mark 13(4.866,1.194)
3pointgp mark 13(5.008,1.154)
3pointgp mark 13(4.919,5.249)
rgb color=0.000,0.000,0.545
[gp path] (1.010,3.646)–(1.153,3.326)–(1.296,3.093)–(1.438,2.939)–(1.581,2.816)
3pointgp mark 11(1.010,3.646)
3pointgp mark 11(1.153,3.326)
3pointgp mark 11(1.296,3.093)
3pointgp mark 11(1.438,2.939)
3pointgp mark 11(1.581,2.816)
3pointgp mark 11(1.724,2.705)
3pointgp mark 11(1.867,2.600)
3pointgp mark 11(2.010,2.497)
3pointgp mark 11(2.152,2.396)
3pointgp mark 11(2.295,2.297)
3pointgp mark 11(2.438,2.200)
3pointgp mark 11(2.581,2.105)
3pointgp mark 11(2.724,2.011)
3pointgp mark 11(2.866,1.921)
3pointgp mark 11(3.009,1.834)
3pointgp mark 11(3.152,1.750)
3pointgp mark 11(3.295,1.670)
3pointgp mark 11(3.438,1.595)
3pointgp mark 11(3.580,1.525)
3pointgp mark 11(3.723,1.459)
3pointgp mark 11(3.866,1.397)
3pointgp mark 11(4.009,1.339)
3pointgp mark 11(4.152,1.285)
3pointgp mark 11(4.294,1.233)
3pointgp mark 11(4.437,1.184)
3pointgp mark 11(4.580,1.136)
color=gp lt color border
[gp path] (1.010,4.189)–(1.010,0.592)–(5.294,0.592)–(5.294,4.189)–cycle;
gp plot 11.010cm0.592cm5.294cm4.189cm
every node/.append style=scale=0.60
(0.000,0.000) rectangle (5.625,4.375);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,0.592)–(1.100,0.592);
[gp path] (5.294,0.592)–(5.204,0.592);
[gp path] (1.010,0.628)–(1.100,0.628);
[gp path] (5.294,0.628)–(5.204,0.628);
[gp path] (1.010,0.658)–(1.100,0.658);
[gp path] (5.294,0.658)–(5.204,0.658);
[gp path] (1.010,0.684)–(1.100,0.684);
[gp path] (5.294,0.684)–(5.204,0.684);
[gp path] (1.010,0.707)–(1.100,0.707);
[gp path] (5.294,0.707)–(5.204,0.707);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,0.727)–(5.294,0.727);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,0.727)–(1.190,0.727);
[gp path] (5.294,0.727)–(5.114,0.727);
[gp node right] at (0.900,0.727) 1e-08;
[gp path] (1.010,0.863)–(1.100,0.863);
[gp path] (5.294,0.863)–(5.204,0.863);
[gp path] (1.010,0.942)–(1.100,0.942);
[gp path] (5.294,0.942)–(5.204,0.942);
[gp path] (1.010,0.998)–(1.100,0.998);
[gp path] (5.294,0.998)–(5.204,0.998);
[gp path] (1.010,1.042)–(1.100,1.042);
[gp path] (5.294,1.042)–(5.204,1.042);
[gp path] (1.010,1.077)–(1.100,1.077);
[gp path] (5.294,1.077)–(5.204,1.077);
[gp path] (1.010,1.107)–(1.100,1.107);
[gp path] (5.294,1.107)–(5.204,1.107);
[gp path] (1.010,1.133)–(1.100,1.133);
[gp path] (5.294,1.133)–(5.204,1.133);
[gp path] (1.010,1.156)–(1.100,1.156);
[gp path] (5.294,1.156)–(5.204,1.156);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,1.177)–(5.294,1.177);
color=gp lt color border
[Figure: evolution of $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ (logarithmic scale, $10^{-7}$ to $10^{-1}$) versus the number of TS iterations, with one curve per pair of coarse/fine discretization levels (e.g. "L7 from L4").]
Cubic field: errors.
These graphs show the evolution of the error with respect to the iterations.
The equation (<ref>) is verified numerically with these curves.
This explains why, as $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ tends towards zero in all cases, $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$ decreases to a constant value which corresponds to the discretization error $\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}/\left\| \vm{u}^C \right\|_{E_{\Omega}}$.
As the fine-scale discretization is refined, this constant naturally decreases.
But in all cases, only a few iterations make $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$ close to the discretization error.
This confirms the observation made in the literature that the error on the boundary conditions of the fine-scale problems decreases rapidly with respect to $\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}$.
The proposed solver behaves in the same way, regardless of the number of processes used (there is no side effect introduced by parallelism).
In these graphs, we can see that the $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ error, corresponding mainly to the error on the boundary condition of fine-scale problems, evolves in a similar way to the residual error.
For $\epsilon=1.e^{-7}$, the $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ error falls below this threshold a few iterations before the residual error does.
Thus, the residual criterion is conservative: when it is satisfied, the corresponding bound on $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ is satisfied as well.
We can also observe that the curves, whether for the residual or for $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ error, are almost the same for a given coarse-mesh discretization.
In particular L2, L3, L4 and L5 "from L0" are very close.
This is also the case for the curves "from L1" and "from L2".
For the "from L3", a small gap appears between L5 and L6.
After a few iterations of the scale loop, the curves become almost linear (in linear/logarithmic scale), with a softening of the slopes as the global-scale discretization is refined.
These observations, which still need to be analyzed theoretically, show that $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ and the residual are influenced by the level of discretization at the global scale.
The next test case will, however, provide a more refined interpretation of these observations.
This last part studies the choice of $\epsilon=1.e^{-7}$.
From the graph of $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$, after a few iterations there is no real interest in continuing the scale loop because the precision is clearly imposed by $\left\| \vm{u}^R-\vm{u}^C \right\|_{E_{\Omega}}$.
A relative error with respect to this quantity may be satisfactory in some calculation contexts.
To evaluate the impact of such a looser error condition, some tests are performed with an arbitrary residual error precision: $\epsilon=\frac{\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)}{10}$.
As mentioned above, the residual error is conservative, so if it is below $\epsilon$ then $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)$ is as well.
This leads to the following condition:
\begin{equation}
\mathcal{E}\left( \vm{u}^{ts},\vm{u}^R\right)\leqslant \frac{\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)}{10}
\label{inequal1}
\end{equation}
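As a minimal sketch of this looser stopping rule (the function name and arguments are illustrative, not taken from the paper), the scale loop is stopped once the conservative residual error drops below one tenth of the discretization error $\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)$:

```python
def should_stop(residual_error: float, discretization_error: float) -> bool:
    """Looser stopping criterion: stop the scale loop once the (conservative)
    residual error is below one tenth of the discretization error."""
    return residual_error <= discretization_error / 10.0

# Illustrative usage with a discretization error of 2e-3 (threshold 2e-4):
should_stop(1e-4, 2e-3)  # stop: 1e-4 <= 2e-4
should_stop(5e-4, 2e-3)  # continue: 5e-4 > 2e-4
```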
Squaring both sides and using equation (<ref>), the overall error under this condition satisfies:
\begin{equation}
\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)^2 \leqslant \left( 1 + \frac{ \left\| \vm{u}^R \right\|_{E_{\Omega}}^2}{100\times \left\| \vm{u}^C \right\|_{E_{\Omega}}^2} \right) \times \mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)^2
\label{inequal2}
\end{equation}
The relative error between $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$ and $\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)$ is then bounded by:
\begin{equation}
\frac{\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)-\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)}{\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)} \leqslant \sqrt{1+\frac{ \left\| \vm{u}^R \right\|_{E_{\Omega}}^2}{100\times \left\| \vm{u}^C \right\|_{E_{\Omega}}^2}}-1
\label{inequal3}
\end{equation}
Thus, numerically, the arbitrarily chosen $\epsilon$ leads to less than 0.5% relative error between $\mathcal{E}\left( \vm{u}^{ts},\vm{u}^C\right)$ and $\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)$, using $0.9998$ for $\left\| \vm{u}^R \right\|_{E_{\Omega}}^2/\left\| \vm{u}^C \right\|_{E_{\Omega}}^2$ (the average over the L3, L4 and L5 data).
The elapsed-time performance for $\epsilon=\frac{\mathcal{E}\left( \vm{u}^R,\vm{u}^C\right)}{10}$ is presented in figure <ref> as the ratio of the elapsed time of the current solver to the best elapsed time among all solutions and all $\epsilon$.
[Figure (Level 2): ratio (elapsed time)/(best elapsed time) versus number of cores (1 to 128), for the solvers "fr", "blr", "dd", and "ts from 0", at $\epsilon=1.e^{-7}$ and $\epsilon=3.e^{-3}$.]
[Figure (Level 5): ratio (elapsed time)/(best elapsed time) versus number of cores (2 to 2048), for the solvers "fr", "blr", "dd", and "ts from 2", at $\epsilon=1.e^{-7}$ and $\epsilon=4.e^{-4}$.]
3pointgp mark 4(2.481,2.659)
3pointgp mark 4(2.950,2.639)
3pointgp mark 4(3.418,2.395)
3pointgp mark 4(3.887,2.409)
3pointgp mark 4(4.356,2.331)
3pointgp mark 4(4.825,2.289)
3pointgp mark 4(5.294,2.068)
3pointgp mark 4(4.919,4.636)
color=gp lt color border
[gp node right] at (4.499,4.374) ts from 2, $\epsilon=4.e^{-4}$;
rgb color=0.000,1.000,0.000
[gp path] (4.609,4.374)–(5.229,4.374);
[gp path] (0.605,0.592)–(1.074,0.592)–(1.543,0.592)–(2.012,0.592)–(2.481,0.592)
3pointgp mark 6(0.605,0.592)
3pointgp mark 6(1.074,0.592)
3pointgp mark 6(1.543,0.592)
3pointgp mark 6(2.012,0.592)
3pointgp mark 6(2.481,0.592)
3pointgp mark 6(2.950,0.592)
3pointgp mark 6(3.418,0.592)
3pointgp mark 6(3.887,0.592)
3pointgp mark 6(4.356,0.592)
3pointgp mark 6(4.825,0.592)
3pointgp mark 6(5.294,0.592)
3pointgp mark 6(4.919,4.374)
color=gp lt color border
[gp path] (0.605,4.189)–(0.605,0.592)–(5.294,0.592)–(5.294,4.189)–cycle;
gp plot 10.605cm0.592cm5.294cm4.189cm
[Level 6]
[Figure: accuracy-impact ratio versus number of cores (32 to 2048, log scale); curves: ts from 3 and dd, each for $\epsilon=1.e^{-7}$ and $\epsilon=2.e^{-4}$; plot drawing commands omitted.]
Accuracy impact: ratio indicating how much slower the solver associated with a curve is compared to the best solver (based on elapsed time).
For the three fine-scale discretizations tested, L2, L5 and L6, in addition to $\epsilon=1.e^{-7}$, the tested values of $\epsilon$ are $3.e^{-3}$, $4.e^{-4}$ and $2.e^{-4}$ respectively.
For all discretizations, a larger $\epsilon$ reduces the computation time of all solutions that depend on it.
For L5 and L6 (figures <ref> and <ref>), the best solution is the solver with the larger $\epsilon$.
The other solvers, even if some gains are observed with a larger $\epsilon$, do not provide better performance than the solver.
For the "blr" solver, the gain in the factorization step due to the decreased accuracy is partially lost to a larger number of iterations in the conjugate gradient resolution.
For the "dd" solver, the gain due to the decreased accuracy lies in the conjugate gradient resolution of the global boundary problem.
With large domains, this task represents about 50% of the "dd" time consumed, so the gains are moderate.
As the number of cores increases, this task takes most of the time (about 97% for L5 with 2048 processes) and the gains increase slightly.
For the solver, the decreased accuracy reduces the number of iterations of the scale loop.
For L5 with 32, 128 and 2048 processes, this task represents about 46%, 51% and 66% of the time consumed respectively, which explains a moderate gain (less than 2).
For L2 (figure <ref>), the gain obtained with the solver is more consistent than with the other solvers, but is not sufficient for this solver to show the best performance.
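The "accuracy impact" ratio plotted in these figures is straightforward to compute from raw timings. A minimal sketch, assuming a made-up table of per-solver elapsed times (the labels and numbers are illustrative, not measurements from this study):

```python
def accuracy_impact(times):
    """For each core count, the ratio of every solver's elapsed time to the
    fastest solver at that count (ratio 1 marks the best solver).
    `times` maps a solver label to {core_count: elapsed_seconds}."""
    cores = sorted({c for per in times.values() for c in per})
    ratios = {}
    for c in cores:
        best = min(per[c] for per in times.values() if c in per)
        ratios[c] = {s: per[c] / best for s, per in times.items() if c in per}
    return ratios

# Illustrative, made-up timings in seconds; not data from the paper.
r = accuracy_impact({
    "ts":  {128: 110.0, 256: 100.0},
    "dd":  {128: 300.0, 256: 260.0},
    "blr": {128: 240.0, 256: 280.0},
})
```

Plotting these ratios against the core count reproduces the layout of the figures above: the best solver sits on the horizontal line at 1.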
§.§ Micro-structural test case
This example illustrates cases where matrix conditioning can be a problem for a conventional iterative solver, which then needs good preconditioners.
A cube (with opposite diagonal corners at coordinates (0,0,0) and (2,2,2); unit: m) is randomly divided by 64 planes of different orientations and origins (see <ref>).
In the cube, each sub-region delimited by the planes has a specific and uniform Young's modulus E.
These Young's moduli, which follow an arbitrary internal encoding, range from 36.5GPa to 3650GPa.
The encoding makes the distribution of materials relatively random, as illustrated in figure <ref>.
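The partition just described can be sketched as follows. The bit-pattern region id, the `region_id`/`young_modulus` helpers and the random seeds are illustrative assumptions; the paper's actual internal encoding is not specified:

```python
import random

def region_id(point, planes):
    """Bit pattern telling, for each cutting plane (origin o, normal n),
    on which side of the plane the point lies; the pattern identifies the
    sub-region of the cube (a sketch of the idea, not the paper's code)."""
    rid = 0
    for k, (o, n) in enumerate(planes):
        side = sum((p - oi) * ni for p, oi, ni in zip(point, o, n))
        if side >= 0.0:
            rid |= 1 << k
    return rid

def young_modulus(rid, e_min=36.5e9, e_max=3650.0e9):
    """Deterministic pseudo-random Young's modulus (Pa) for a region id,
    within the 36.5GPa-3650GPa range quoted above."""
    return e_min + (e_max - e_min) * random.Random(rid).random()

rng = random.Random(0)
planes = [([rng.uniform(0.0, 2.0) for _ in range(3)],   # origin in the cube
           [rng.uniform(-1.0, 1.0) for _ in range(3)])  # unnormalized normal
          for _ in range(64)]
E = young_modulus(region_id((1.0, 1.0, 1.0), planes))
```

Seeding the modulus generator with the region id keeps each sub-region's E uniform while making the spatial distribution of materials effectively random.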
[Fine scale]
[Fine scale: cut at x=0.5]
[Fine scale: discretization detail (cut at x=0.5)]
[Coarse scale]
[Coarse scale: cut at x=0.5]
Micro-structure discretization and Young's modulus values (in Pa).
In this test case, with the solver, all coarse nodes are enriched ($card\left( e\right) =card\left( c\right) $).
Thus, after choosing a fine-scale discretization arbitrarily considered to meet our micro-structure representation needs, we choose a coarse-scale discretization that follows equation (<ref>) (note that the coarse mesh is slightly adapted to the plane positions; this can be seen in figure <ref>, where the visible left corner has larger element sizes because no plane cuts this region).
The resulting fine-scale discretization has 60 278 925 dofs and corresponds to a problem of section <ref> lying between L5 and L6 (figure <ref>).
This cube is compressed in all directions by the same constant load (4MN) applied on all surfaces.
As in section <ref>, the loop starts with the displacement field of the macro problem without enrichment, and the stopping criterion for the iterations is $\epsilon=1.e^{-7}$.
The solver (version) is again compared with the "dd" solver.
After 93 iterations, this first simulation gives the fields presented in figure <ref>, alongside the "dd" solution.
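The iteration just described (start from the unenriched macro displacement field, loop until the residual falls below $\epsilon=1.e^{-7}$) has the following generic shape; the toy fixed-point `update` and `residual` below are stand-ins for the real two-scale coupling, which is far more involved:

```python
def scale_loop(update, u0, residual, eps=1.0e-7, max_it=200):
    """Skeleton of the scale loop: start from the macro (unenriched coarse)
    solution `u0` and iterate `update` until `residual` drops below `eps`.
    Returns the converged field and the iteration count."""
    u = u0
    for it in range(1, max_it + 1):
        u = update(u)
        if residual(u) < eps:
            return u, it
    raise RuntimeError("scale loop did not converge within max_it iterations")

# Toy contraction with fixed point 1.0, converging geometrically.
u, its = scale_loop(lambda u: 0.5 * (u + 1.0), 0.0, lambda u: abs(u - 1.0))
```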
[Domain decomposition]
[Two-scale, fine scale]
[Two-scale, coarse scale]
[Two-scale, fine scale]
Micro-structure solutions: displacement field with the "dd" solver; displacement fields at both scales and strain field at iteration 93 with the solver.
The solver can give the displacement field at both scales.
At the coarse scale, only the dofs in the C-set are used for post-processing, giving a crude visualization on the macro-elements.
At the fine scale, using the R-set dofs, all details are given and can be compared to the solution obtained with the "dd" solver.
Note that the creation of the post-processing files based on R-set dofs, obtained in parallel using MPI-IO functions, uses the per-macro-element organization of data described in section <ref> and can thus be further parallelized using multi-threading.
As with the displacement field, we can obtain the strains from the R-set solution, as shown in figure <ref>, again by looping over the macro-elements using $\vm{u}_F^{e_{macro}}$.
The elapsed times for this first computation are given in figure <ref>.
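The per-macro-element strain recovery can be illustrated with a generic small-strain computation, $\varepsilon = \mathrm{sym}(\nabla u)$ with $\nabla u = \sum_a u_a \otimes \nabla N_a$; the `element_strain` helper and its inputs are hypothetical stand-ins for the solver's actual shape-function data:

```python
def element_strain(shape_grads, nodal_disps):
    """Small-strain tensor of one element recovered from its displacement
    dofs: eps = sym(grad u), grad u = sum_a u_a (x) grad N_a.
    `shape_grads[a]` and `nodal_disps[a]` are length-3 sequences."""
    g = [[0.0] * 3 for _ in range(3)]                 # displacement gradient
    for gN, u in zip(shape_grads, nodal_disps):
        for i in range(3):
            for j in range(3):
                g[i][j] += u[i] * gN[j]
    # symmetric part of the gradient
    return [[0.5 * (g[i][j] + g[j][i]) for j in range(3)] for i in range(3)]

# Uniaxial stretch u = (0.01 x, 0, 0) sampled through a single contribution:
eps = element_strain([(1.0, 0.0, 0.0)], [(0.01, 0.0, 0.0)])
# Simple shear u = (0.02 y, 0, 0):
shear = element_strain([(0.0, 1.0, 0.0)], [(0.02, 0.0, 0.0)])
```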
[First computation]
[Figure: elapsed time in s versus number of cores (16 to 3072), log-log; curves: tsi, dd, ideal; plot drawing commands omitted.]
[Second computation]
[Figure: elapsed time in s versus number of cores (16 to 3072), log-log; curves: tsi (var 1%), dd (var 1%), ideal; plot drawing commands omitted.]
[Elapsed time ratio]
[Figure: ratio (elapsed time)/(best elapsed time) versus number of cores (16 to 3072); first and second computations of the tsi and dd solvers; plot drawing commands omitted.]
Micro-structural performance: elapsed time and performance ratio for the first and second computations.
It shows that, as in the equivalent test case of section <ref>, the solver provides better performance than the "dd" solver.
The strong-scaling efficiency curves (not presented here) show that the solver also has a smaller slope than the "dd" solver.
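Strong-scaling efficiency of the kind mentioned above is a simple ratio against the smallest run; a minimal sketch with made-up timings (not the measurements behind the curves):

```python
def strong_scaling_efficiency(times):
    """Strong-scaling efficiency relative to the smallest run:
    eff(n) = T(n0) * n0 / (T(n) * n), with n0 the smallest core count.
    `times` maps core count -> elapsed seconds; eff(n0) is 1 by definition."""
    n0 = min(times)
    t0 = times[n0]
    return {n: (t0 * n0) / (t * n) for n, t in times.items()}

# Illustrative, made-up timings in seconds.
eff = strong_scaling_efficiency({32: 1000.0, 64: 520.0, 128: 280.0})
```

A flatter decay of this efficiency with the core count is what "smaller slope" refers to above.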
Let us now consider a broader application framework and imagine that this problem is the subject of a micro-structure optimization study with respect to a displacement-field-based objective.
In this case, many computations from a given set of parameters must be performed to obtain sensitivities and/or new configurations.
To mimic such a case, the Young's modulus values are randomly perturbed by $\pm1\%$ and the computation starts from the first result obtained above.
For the solver, this implies using the R-set solution of the previous calculation, redoing the factorization of the fine-scale problems, and entering the scale loop.
For the "dd" solver, the domains are condensed again and the preconditioners recomputed; since the Schur complement relies on the same space, the iterative solver starts from the previous boundary problem solution.
Figure <ref> shows the performance for this second computation ("tsi (var 1%)" and "dd (var 1%)" respectively).
In figure <ref>, the relative performance of the two computations is presented as a ratio of the elapsed time of one computation to the best elapsed time among all computations and solvers.
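The relative-performance metric of this figure can be written as a small helper (a sketch: the function name and the dictionary layout are ours; the example timings are the 2048-core values of the $[36.5,3650]$ E-range reported later in the table):

```python
def performance_ratio(elapsed):
    """For each (solver, computation) pair, divide its elapsed time by the
    best (smallest) elapsed time over all computations and solvers."""
    best = min(elapsed.values())
    return {key: t / best for key, t in elapsed.items()}

# Example with the 2048-core timings (in s) of the [36.5, 3650] E-range.
ratios = performance_ratio({
    ("tsi", "first"): 198.6, ("tsi", "second"): 53.1,
    ("dd", "first"): 509.9, ("dd", "second"): 429.8,
})
# The best computation gets ratio 1.0; slower ones get ratios > 1.
```

The best computation thus serves as the common reference for all curves of the figure.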
The solver has a reduced elapsed time because it iterated only 37 times, compared to 93 times for the initial calculation.
This can be explained by the fact that the initial solution for this computation is already of better quality than what the unenriched coarse field can give ($resi=1.5e^{-2}$ compared to $resi=1.6$ for the first computation), even though the Young's moduli have changed.
Moreover, the local variation of E may only influence the magnitude, and not the shape, of the enrichment functions.
But starting from a better solution also reduces the number of iterations in which the full-rank solver is used at the coarse scale.
In the first computation, the condition $resi<\epsilon\times 10000$ (rule from section <ref>) is met after 25 iterations.
For the second computation, this condition is met after only 4 iterations.
This explains why the ratio of "ts" in figure <ref> grows with the number of processes: as already observed in the analysis of figure <ref>, when a high number of cores is reached, the reduced set of $nbp_{max}$ processes has a negative impact on the factorization performance.
Thus, this poor performance affects the second computation only 4 times, against 25 times for the first.
For the "dd" solver, restarting from the previous global boundary solution reduces the number of iterations in solving the global problem but not enough to significantly decrease elapsed time.
The second "dd" computation costs almost the same as the first one, which gives an advantage to the solver in this case.
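The benefit of warm-starting an iterative solve after a small perturbation of the operator can be illustrated with a minimal conjugate-gradient sketch (plain numpy, not the solver of the paper; the matrix, the tolerance and the ±1% diagonal perturbation are illustrative assumptions):

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, max_iter=20_000):
    """Plain conjugate gradient with an explicit initial guess; returns (x, iterations)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 300
M = rng.standard_normal((n, n))
A1 = M @ M.T + n * np.eye(n)        # SPD "stiffness-like" matrix
b = rng.standard_normal(n)

# First computation: cold start from zero (analogue of starting from the
# unenriched coarse field).
x1, it_cold = cg(A1, b, np.zeros(n))

# Second computation: the operator is perturbed by ~1% on its diagonal
# (mimicking the perturbed Young's moduli) and the solve is warm-started
# from the previous solution; fewer iterations are needed.
A2 = A1 + np.diag(0.01 * rng.uniform(-1.0, 1.0, n) * np.diag(A1))
x2, it_warm = cg(A2, b, x1)
```

The warm start pays off because the previous solution leaves only the small perturbation-induced residual to reduce, the same mechanism exploited when a history is kept between computations.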
To obtain an even worse-conditioned system, exactly the same scenario is replayed with a wider range of E values: from 36.5 GPa to 36500 GPa (a maximum ten times larger than the previous one).
Results with 2048 cores and both ranges are given in table <ref>.
[Figure: residual $\frac{\left \| \vm{A}_{rr} \cdot \vm{u}_{r}-\vm{B}_{r} \right \|}{\left \| \vm{B}_{r} \right \|}$ versus number of iterations, for the first and second computations with the $[36.5,3650]$ and $[36.5,36500]$ E-ranges.]
E (GPa)        run     | solver: nb of iterations           | solver:  | dd: nb of  | dd:
                       | all   $resi\geqslant10^{-3}$       | time (s) | iterations | time (s)
[36.5,3650]    first   |  93        25                      |  198.6   |   1868     |  509.9
               second  |  37         4                      |   53.1   |   1487     |  429.8
[36.5,36500]   first   | 265        96                      |  691.4   |   2191     |  621.4
               second  | 134         7                      |  128.8   |   1487     |  519.4

E (GPa)        run     | all   $resi\geqslant10^{-1}$       | time (s)
[36.5,36500]   first   | 265        27                      |  452.8
               second  | 134         0                      |  149.4

With 2048 cores, comparison of the first and second micro-structure calculations with different Young's modulus ranges. "nb of iterations" is the number of iterations of the loop of algorithms <ref> and <ref>; "time" is the elapsed time to solve the problem.
With the new E-range, the number of iterations for the first computation (the one starting from the coarse unenriched solution) increases from 93 to 265.
This increase is related to a worse starting point and worse conditioning, which degrade the residual convergence rate, as can be seen in figure <ref>.
In this figure, the slope (after a few iterations) of the first computation with the smaller E-range (thus with better matrix conditioning) is steeper than that of the first computation with the larger E-range (thus with worse matrix conditioning).
This is confirmed by the second calculation where the curves start with a smaller residual error but have the same slope as their counterpart in the first calculation.
This helps explain the observation made in section <ref>, where a finer discretization of the problem at the global scale gave slower residual convergence (see the slopes of the curves in figure <ref>).
In fact, we can now add that it is the conditioning of the global matrix (related to the level of discretization) that drives the convergence.
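The link between conditioning and residual convergence rate can be checked on a toy system (a sketch with plain conjugate gradient on diagonal SPD matrices; sizes and condition numbers are arbitrary choices of ours):

```python
import numpy as np

def cg_iters(A, b, tol=1e-8, max_iter=50_000):
    # Conjugate gradient from a zero start; returns the iteration count
    # at which the relative residual drops below tol.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * bnorm:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return max_iter

rng = np.random.default_rng(1)
n = 500
b = rng.standard_normal(n)
# Same size, same solver, different condition numbers.
A_well = np.diag(np.linspace(1.0, 1.0e2, n))   # kappa = 1e2
A_ill = np.diag(np.linspace(1.0, 1.0e4, n))    # kappa = 1e4
it_well = cg_iters(A_well, b)
it_ill = cg_iters(A_ill, b)
# The worse-conditioned system needs more iterations, i.e. it shows a
# shallower residual slope, independently of the problem size.
```

This mirrors the observation above: refining the global discretization (or widening the E-range) worsens the conditioning and flattens the residual curves.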
Regarding performance, like the solver, the "dd" solver needs more iterations to converge (the worse conditioning is not fully corrected by domain condensation).
For the first calculation, "dd" needs 2191 iterations with the new E-range against 1868 with the original one.
The elapsed time is moderately impacted, increasing from 509.9 s to 621.4 s.
The second calculation behaves in the same way, and starting from the first calculation brings a moderate gain (for both E-ranges).
For the solver, this increased number of iterations has a greater impact on the first calculation.
The elapsed time goes from 198.6 s to 691.4 s, which gives an advantage to "dd" in this case.
This increase in time is in fact closely related to the number of iterations where the global problem is computed by the full rank solver.
With this solver, the iterative resolution is activated when $resi<\epsilon \times 10000$ (rule of section <ref>), and with $\epsilon=1e^{-7}$ the condition is $resi<1e^{-3}$.
Table <ref> gives the number of iterations that do not meet this condition and therefore correspond to a full-rank resolution.
For the first calculation, 96 iterations are performed in full-rank mode with the new E-range, compared to 25 with the original one.
If we change the condition to $resi<0.1$, only 27 iterations are spent in full-rank mode and the elapsed time drops to 452.8 s, which is now better than "dd".
This shows that the setting can be improved in some cases.
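The switching rule between full-rank and iterative coarse resolutions can be sketched as a one-line predicate (the function name and the explicit `threshold` override are ours; the residual values are those quoted in this section):

```python
def coarse_solve_mode(resi, eps=1e-7, factor=1e4, threshold=None):
    """Sketch of the switching rule: the coarse problem is solved in
    full-rank mode while the residual is large, and by the cheaper
    iterative resolution once resi drops below the threshold
    (eps * factor by default, i.e. 1e-3 with eps = 1e-7)."""
    thr = eps * factor if threshold is None else threshold
    return "iterative" if resi < thr else "full-rank"

# First computation starts at resi = 1.6: full-rank mode.
mode_start = coarse_solve_mode(1.6)
# Second computation starts at resi = 1.5e-2: still full-rank with the
# default threshold, but already iterative with the relaxed resi < 0.1 rule.
mode_restart = coarse_solve_mode(1.5e-2)
mode_relaxed = coarse_solve_mode(1.5e-2, threshold=0.1)
```

Raising the threshold trades expensive full-rank solves for extra iterative ones, which is exactly the tuning discussed above.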
But even with this new setting, some computations not reported here, with an even larger E-range, show that the solver struggles compared to "dd" when the matrix conditioning degrades further.
In all cases, the second calculation with the solver, with both settings and both E-ranges, has a better elapsed time than the first calculation.
This confirms the benefit of using the solver when a history can be kept from one computation to the next.
§.§ Pull-out test case
This test case is inspired by the crack propagation simulation studied in [8, 25, 10, 5].
It represents the pull-out of a steel anchor embedded in an unreinforced concrete (E=26.4GPa, $\nu$=0.193) cylinder.
The chosen geometry and boundary conditions used in this test are given in figure <ref>.
[Figure: pull-out test case geometry, boundary conditions and general behavior (obtained by one simulation of this section). Boundary conditions: Dirichlet $U_x=U_z=0$, Dirichlet $U_y=0$, Neumann $1000.\overrightarrow{e_y}$. Key points (unit = mm): M(0,-450,-60), P(0,-470,-60), Q(0,-450,36), R(1000,-900,0), S(1000,0,0).]
The steel anchor, not shown in this figure, is located in the hollowing out in the center of the disk and is pulled in the $\overrightarrow{e_y}$ direction.
Its action is simply modeled as a force applied to the contact surface between the compression surface of the anchor head and the concrete.
Other interactions of the anchor with the concrete are simplified because contact, decohesion or other mechanical bond phenomena are beyond the scope of this study.
Thus, all remaining surfaces of the anchor head are considered to have no interaction with the concrete, which is left as a free surface in these areas.
The surface of the anchor body is considered to impose a sliding contact interface along the $\overrightarrow{e_y}$ axis.
The crack is represented by a damage field $d$.
The behavior of the material thus follows the linear elastic damage potential $\varphi(\tens{\epsilon},d)$ which is written, using the Hooke tensor $\tens{C}$, as follows:
\begin{equation}
\varphi(\tens{\epsilon},d)=\frac{1}{2}(1- d)\tens{\epsilon}:\tens{C}:\tens{\epsilon}
\end{equation}
When $d=1$, the material has lost all its rigidity and can be considered as a crack.
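The potential above yields the stress $\tens{\sigma}=\partial\varphi/\partial\tens{\epsilon}=(1-d)\,\tens{C}:\tens{\epsilon}$. A minimal Voigt-notation sketch (engineering shear strain convention; the function names are ours, the material constants are the concrete values of this test case):

```python
import numpy as np

def hooke_voigt(E, nu):
    # Isotropic Hooke tensor in 3-D Voigt notation (xx, yy, zz, yz, xz, xy),
    # engineering shear strain convention.
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[[0, 1, 2], [0, 1, 2]] += 2.0 * mu
    C[[3, 4, 5], [3, 4, 5]] = mu
    return C

def damaged_energy_and_stress(C, eps, d):
    # phi(eps, d) = 1/2 (1-d) eps:C:eps ; stress = (1-d) C:eps
    sigma = (1.0 - d) * C @ eps
    phi = 0.5 * eps @ sigma
    return phi, sigma

C = hooke_voigt(E=26.4e9, nu=0.193)   # concrete of the pull-out test
eps = np.array([1e-4, 0.0, 0.0, 0.0, 0.0, 2e-4])
phi0, sig0 = damaged_energy_and_stress(C, eps, d=0.0)   # undamaged
phi1, sig1 = damaged_energy_and_stress(C, eps, d=1.0)   # fully damaged
```

At $d=1$ both the stress and the stored energy vanish, which is why a fully damaged band behaves as a crack.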
In the pull-out test, the crack starts under the head of the anchor and develops at a specific angle in the form of a cone.
The further the steel anchor is pulled, the more the crack grows along this cone.
The classic scenario for simulating such evolutionary phenomena is to use mesh adaptation to follow the movement of the crack front so that the damaged area is always entirely within a finer mesh region.
By always refining a little ahead of the front, the number of mesh adaptations can be limited, keeping the same mesh for a few evolution iterations, called steps below.
To study such a scenario with the method, we consider three arbitrary evolution stages that correspond to three coarse mesh adaptations.
For each of them, four fictitious steps advance the crack front evenly, according to the cone angle, in the adapted fixed mesh.
The damage field is calculated by considering the conical envelope described in <ref> with the parameter $h$ in (<ref>) controlling the crack cone size.
The coarse mesh is adapted for $h=-400$ (50,874 c-set dofs), $h=-300$ (119,829 c-set dofs) and $h=-100$ (362,904 c-set dofs), considering that $h$ during the steps will fluctuate between $[-410,-400]$, $[-310,-300]$ and $[-110,-100]$ respectively.
The mesh sizes of the fine problems follow the equation (<ref>).
This corresponds approximately to 4, 16 and 58.6 million r-set dofs for the $h=-400$, $h=-300$ and $h=-100$ stages respectively.
The SP is built at each stage to cover only the damaged area during the 4 evolution iterations, as shown in figure <ref>.
[Figure: evolution of the imposed damage field (steps 0 to 3) for the $h=-400$ adaptation. Cross-section of the SP. Zoom of the crack front (with a rather unrealistic shape as mentioned in <ref>).]
So, in this test case, there are NSP elements and transition patches (the ones with hanging-node treatment).
The resolution is performed 4 times.
The first one starts from a zero displacement field (in a real simulation, it would be a displacement field obtained by projection of the result of the last step of the last stage).
The next three resolutions will start with the displacement field calculated in the previous iteration.
Here, since the discretization remains the same, the displacement fields at both scales are on a stable space.
Only some patches need to be re-factorized to account for the material degradation related to damage propagation.
The precision chosen, as in the other test cases, is $\epsilon=1e^{-7}$.
The displacement results of the solver, obtained at both scales for all stages, are shown in figure <ref>.
[Figure: displacement field at the global scale (coarse mesh) and the local scale (SP) for $h=-400$, $h=-300$ and $h=-100$. Cross-section of the SP discretization and the coarse mesh. Displacement multiplied by 10.]
The displacement fields are presented on the coarse mesh (using only C-set dofs) and on the SP fine discretization (F-set dofs).
Figures <ref>, <ref> and <ref> show, for each adaptation, the region covered by the SP.
At the crack, we can see distorted elements confirming the correct displacement jump introduced by the fully damaged material.
This jump is also evident in the clear separation of colors between the cone that is torn off and the rest of the disk that hardly moves.
This is even clearer in figures <ref>, <ref> and <ref> where the crack appears in context of the complete disk.
The pulled out cone becomes more and more visible as the damage grows.
These results, confirmed by simulations with other solvers, fully validate the proposed implementation (in particular the treatment of the hanging nodes (section <ref>) and the enrichment of mixed patches (figure <ref>)).
The solver (the version) is again compared to the "fr", "blr" and "dd" solvers.
The cumulative elapsed times of the different steps (per adaptation) give the performance curves shown in figure <ref>.
[Figure: cumulative elapsed time in s versus number of cores for the "fr", "blr" and "dd" solvers and the proposed solver, per adaptation.]
gp dt solid
[gp path] (1.477,0.691)–(1.477,0.871);
[gp path] (1.477,3.320)–(1.477,3.140);
[gp node center] at (1.477,0.475) 8;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.987,0.691)–(1.987,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.987,0.691)–(1.987,0.871);
[gp path] (1.987,3.320)–(1.987,3.140);
[gp node center] at (1.987,0.475) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.498,0.691)–(2.498,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.498,0.691)–(2.498,0.871);
[gp path] (2.498,3.320)–(2.498,3.140);
[gp node center] at (2.498,0.475) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.008,0.691)–(3.008,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.008,0.691)–(3.008,0.871);
[gp path] (3.008,3.320)–(3.008,3.140);
[gp node center] at (3.008,0.475) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.518,0.691)–(3.518,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.518,0.691)–(3.518,0.871);
[gp path] (3.518,3.320)–(3.518,3.140);
[gp node center] at (3.518,0.475) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.028,0.691)–(4.028,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.028,0.691)–(4.028,0.871);
[gp path] (4.028,3.320)–(4.028,3.140);
[gp node center] at (4.028,0.475) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.539,0.691)–(4.539,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.539,0.691)–(4.539,0.871);
[gp path] (4.539,3.320)–(4.539,3.140);
[gp node center] at (4.539,0.475) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.049,0.691)–(5.049,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.049,0.691)–(5.049,0.871);
[gp path] (5.049,3.320)–(5.049,3.140);
[gp node center] at (5.049,0.475) 1024;
[gp path] (0.967,3.320)–(0.967,0.691)–(5.049,0.691)–(5.049,3.320)–cycle;
[gp node center] at (3.008,0.151) number of cores;
rgb color=0.000,1.000,1.000
[gp path] (1.987,3.132)–(2.498,2.931)–(3.008,2.769)–(3.518,2.585)–(4.028,2.461);
3pointgp mark 1(1.987,3.132)
3pointgp mark 1(2.498,2.931)
3pointgp mark 1(3.008,2.769)
3pointgp mark 1(3.518,2.585)
3pointgp mark 1(4.028,2.461)
rgb color=0.000,0.000,0.545
[gp path] (1.987,3.092)–(2.498,2.856)–(3.008,2.704)–(3.518,2.542)–(4.028,2.425);
3pointgp mark 1(1.987,3.092)
3pointgp mark 1(2.498,2.856)
3pointgp mark 1(3.008,2.704)
3pointgp mark 1(3.518,2.542)
3pointgp mark 1(4.028,2.425)
rgb color=0.753,0.251,0.000
[gp path] (1.987,3.124)–(2.498,2.890)–(3.008,2.547)–(3.518,2.242)–(4.028,1.925)
3pointgp mark 2(1.987,3.124)
3pointgp mark 2(2.498,2.890)
3pointgp mark 2(3.008,2.547)
3pointgp mark 2(3.518,2.242)
3pointgp mark 2(4.028,1.925)
3pointgp mark 2(4.539,1.694)
3pointgp mark 2(5.049,1.489)
color=gp lt color border
[gp node right] at (1.612,3.469) tsi;
rgb color=0.580,0.000,0.827
[gp path] (1.741,3.469)–(2.437,3.469);
[gp path] (0.967,2.952)–(1.477,2.664)–(1.987,2.427)–(2.498,2.204)–(3.008,2.008)
3pointgp mark 4(0.967,2.952)
3pointgp mark 4(1.477,2.664)
3pointgp mark 4(1.987,2.427)
3pointgp mark 4(2.498,2.204)
3pointgp mark 4(3.008,2.008)
3pointgp mark 4(3.518,1.840)
3pointgp mark 4(4.028,1.719)
3pointgp mark 4(4.539,1.580)
3pointgp mark 4(5.049,1.493)
3pointgp mark 4(2.089,3.469)
color=gp lt color border
[gp node right] at (3.211,3.469) ideal;
rgb color=0.000,0.000,0.000
[gp path] (3.340,3.469)–(4.036,3.469);
[gp path] (0.967,2.952)–(1.013,2.928)–(1.060,2.904)–(1.106,2.880)–(1.153,2.856)
color=gp lt color border
[gp path] (0.967,3.320)–(0.967,0.691)–(5.049,0.691)–(5.049,3.320)–cycle;
gp plot 10.967cm0.691cm5.049cm3.320cm
every node/.append style=scale=0.70
(0.000,0.000) rectangle (5.438,3.763);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.967,0.691)–(5.049,0.691);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.967,0.691)–(1.147,0.691);
[gp path] (5.049,0.691)–(4.869,0.691);
[gp node right] at (0.838,0.691) $100$;
[gp path] (0.967,1.087)–(1.057,1.087);
[gp path] (5.049,1.087)–(4.959,1.087);
[gp path] (0.967,1.318)–(1.057,1.318);
[gp path] (5.049,1.318)–(4.959,1.318);
[gp path] (0.967,1.482)–(1.057,1.482);
[gp path] (5.049,1.482)–(4.959,1.482);
[gp path] (0.967,1.610)–(1.057,1.610);
[gp path] (5.049,1.610)–(4.959,1.610);
[gp path] (0.967,1.714)–(1.057,1.714);
[gp path] (5.049,1.714)–(4.959,1.714);
[gp path] (0.967,1.802)–(1.057,1.802);
[gp path] (5.049,1.802)–(4.959,1.802);
[gp path] (0.967,1.878)–(1.057,1.878);
[gp path] (5.049,1.878)–(4.959,1.878);
[gp path] (0.967,1.945)–(1.057,1.945);
[gp path] (5.049,1.945)–(4.959,1.945);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.967,2.006)–(5.049,2.006);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.967,2.006)–(1.147,2.006);
[gp path] (5.049,2.006)–(4.869,2.006);
[gp node right] at (0.838,2.006) $1000$;
[gp path] (0.967,2.401)–(1.057,2.401);
[gp path] (5.049,2.401)–(4.959,2.401);
[gp path] (0.967,2.633)–(1.057,2.633);
[gp path] (5.049,2.633)–(4.959,2.633);
[gp path] (0.967,2.797)–(1.057,2.797);
[gp path] (5.049,2.797)–(4.959,2.797);
[gp path] (0.967,2.924)–(1.057,2.924);
[gp path] (5.049,2.924)–(4.959,2.924);
[gp path] (0.967,3.028)–(1.057,3.028);
[gp path] (5.049,3.028)–(4.959,3.028);
[gp path] (0.967,3.116)–(1.057,3.116);
[gp path] (5.049,3.116)–(4.959,3.116);
[gp path] (0.967,3.193)–(1.057,3.193);
[gp path] (5.049,3.193)–(4.959,3.193);
[gp path] (0.967,3.260)–(1.057,3.260);
[gp path] (5.049,3.260)–(4.959,3.260);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.967,3.320)–(5.049,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.967,3.320)–(1.147,3.320);
[gp path] (5.049,3.320)–(4.869,3.320);
[gp node right] at (0.838,3.320) $10000$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.967,0.691)–(0.967,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.967,0.691)–(0.967,0.871);
[gp path] (0.967,3.320)–(0.967,3.140);
[gp node center] at (0.967,0.475) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.550,0.691)–(1.550,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.550,0.691)–(1.550,0.871);
[gp path] (1.550,3.320)–(1.550,3.140);
[gp node center] at (1.550,0.475) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.133,0.691)–(2.133,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.133,0.691)–(2.133,0.871);
[gp path] (2.133,3.320)–(2.133,3.140);
[gp node center] at (2.133,0.475) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.716,0.691)–(2.716,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.716,0.691)–(2.716,0.871);
[gp path] (2.716,3.320)–(2.716,3.140);
[gp node center] at (2.716,0.475) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.300,0.691)–(3.300,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.300,0.691)–(3.300,0.871);
[gp path] (3.300,3.320)–(3.300,3.140);
[gp node center] at (3.300,0.475) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.883,0.691)–(3.883,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.883,0.691)–(3.883,0.871);
[gp path] (3.883,3.320)–(3.883,3.140);
[gp node center] at (3.883,0.475) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.466,0.691)–(4.466,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.466,0.691)–(4.466,0.871);
[gp path] (4.466,3.320)–(4.466,3.140);
[gp node center] at (4.466,0.475) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.049,0.691)–(5.049,3.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.049,0.691)–(5.049,0.871);
[gp path] (5.049,3.320)–(5.049,3.140);
[gp node center] at (5.049,0.475) 2048;
[gp path] (0.967,3.320)–(0.967,0.691)–(5.049,0.691)–(5.049,3.320)–cycle;
[gp node center] at (3.008,0.151) number of cores;
rgb color=0.753,0.251,0.000
[gp path] (2.716,2.649)–(3.300,2.163)–(3.883,1.821)–(4.466,1.467)–(5.049,0.942);
3pointgp mark 2(2.716,2.649)
3pointgp mark 2(3.300,2.163)
3pointgp mark 2(3.883,1.821)
3pointgp mark 2(4.466,1.467)
3pointgp mark 2(5.049,0.942)
rgb color=0.580,0.000,0.827
[gp path] (0.967,3.113)–(1.550,2.713)–(2.133,2.365)–(2.716,2.124)–(3.300,1.893)
3pointgp mark 4(0.967,3.113)
3pointgp mark 4(1.550,2.713)
3pointgp mark 4(2.133,2.365)
3pointgp mark 4(2.716,2.124)
3pointgp mark 4(3.300,1.893)
3pointgp mark 4(3.883,1.744)
3pointgp mark 4(4.466,1.614)
3pointgp mark 4(5.049,1.530)
color=gp lt color border
[gp node right] at (1.483,3.469) tsdd;
rgb color=0.000,0.392,0.000
[gp path] (1.612,3.469)–(2.308,3.469);
[gp path] (3.300,2.021)–(3.883,1.652)–(4.466,1.340);
3pointgp mark 4(3.300,2.021)
3pointgp mark 4(3.883,1.652)
3pointgp mark 4(4.466,1.340)
3pointgp mark 4(1.960,3.469)
rgb color=0.000,0.000,0.000
[gp path] (0.967,3.113)–(1.020,3.077)–(1.073,3.041)–(1.126,3.005)–(1.179,2.969)
color=gp lt color border
[gp path] (0.967,3.320)–(0.967,0.691)–(5.049,0.691)–(5.049,3.320)–cycle;
gp plot 10.967cm0.691cm5.049cm3.320cm
[h=-400,loop only]
every node/.append style=scale=0.70
(0.000,0.000) rectangle (5.438,3.763);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.054,0.691)–(5.049,0.691);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.054,0.691)–(1.234,0.691);
[gp path] (5.049,0.691)–(4.869,0.691);
[gp node right] at (0.925,0.691) $10$;
[gp path] (1.054,1.006)–(1.144,1.006);
[gp path] (5.049,1.006)–(4.959,1.006);
[gp path] (1.054,1.189)–(1.144,1.189);
[gp path] (5.049,1.189)–(4.959,1.189);
[gp path] (1.054,1.320)–(1.144,1.320);
[gp path] (5.049,1.320)–(4.959,1.320);
[gp path] (1.054,1.421)–(1.144,1.421);
[gp path] (5.049,1.421)–(4.959,1.421);
[gp path] (1.054,1.504)–(1.144,1.504);
[gp path] (5.049,1.504)–(4.959,1.504);
[gp path] (1.054,1.574)–(1.144,1.574);
[gp path] (5.049,1.574)–(4.959,1.574);
[gp path] (1.054,1.635)–(1.144,1.635);
[gp path] (5.049,1.635)–(4.959,1.635);
[gp path] (1.054,1.688)–(1.144,1.688);
[gp path] (5.049,1.688)–(4.959,1.688);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.054,1.736)–(5.049,1.736);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.054,1.736)–(1.234,1.736);
[gp path] (5.049,1.736)–(4.869,1.736);
[gp node right] at (0.925,1.736) $100$;
[gp path] (1.054,2.050)–(1.144,2.050);
[gp path] (5.049,2.050)–(4.959,2.050);
[gp path] (1.054,2.234)–(1.144,2.234);
[gp path] (5.049,2.234)–(4.959,2.234);
[gp path] (1.054,2.365)–(1.144,2.365);
[gp path] (5.049,2.365)–(4.959,2.365);
[gp path] (1.054,2.466)–(1.144,2.466);
[gp path] (5.049,2.466)–(4.959,2.466);
[gp path] (1.054,2.549)–(1.144,2.549);
[gp path] (5.049,2.549)–(4.959,2.549);
[gp path] (1.054,2.619)–(1.144,2.619);
[gp path] (5.049,2.619)–(4.959,2.619);
[gp path] (1.054,2.679)–(1.144,2.679);
[gp path] (5.049,2.679)–(4.959,2.679);
[gp path] (1.054,2.733)–(1.144,2.733);
[gp path] (5.049,2.733)–(4.959,2.733);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.054,2.780)–(5.049,2.780);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.054,2.780)–(1.234,2.780);
[gp path] (5.049,2.780)–(4.869,2.780);
[gp node right] at (0.925,2.780) $1000$;
[gp path] (1.054,3.095)–(1.144,3.095);
[gp path] (5.049,3.095)–(4.959,3.095);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.054,0.691)–(1.054,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.054,0.691)–(1.054,0.871);
[gp path] (1.054,3.095)–(1.054,2.915);
[gp node center] at (1.054,0.475) 1;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.625,0.691)–(1.625,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.625,0.691)–(1.625,0.871);
[gp path] (1.625,3.095)–(1.625,2.915);
[gp node center] at (1.625,0.475) 2;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.195,0.691)–(2.195,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.195,0.691)–(2.195,0.871);
[gp path] (2.195,3.095)–(2.195,2.915);
[gp node center] at (2.195,0.475) 4;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.766,0.691)–(2.766,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.766,0.691)–(2.766,0.871);
[gp path] (2.766,3.095)–(2.766,2.915);
[gp node center] at (2.766,0.475) 8;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.336,0.691)–(3.336,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.336,0.691)–(3.336,0.871);
[gp path] (3.336,3.095)–(3.336,2.915);
[gp node center] at (3.336,0.475) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.907,0.691)–(3.907,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.907,0.691)–(3.907,0.871);
[gp path] (3.907,3.095)–(3.907,2.915);
[gp node center] at (3.907,0.475) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.478,0.691)–(4.478,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.478,0.691)–(4.478,0.871);
[gp path] (4.478,3.095)–(4.478,2.915);
[gp node center] at (4.478,0.475) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.048,0.691)–(5.048,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.048,0.691)–(5.048,0.871);
[gp path] (5.048,3.095)–(5.048,2.915);
[gp node center] at (5.048,0.475) 128;
[gp path] (1.054,3.095)–(1.054,0.691)–(5.049,0.691)–(5.049,3.095)–cycle;
[gp node center,rotate=-270] at (0.204,1.893) elapsed time in s;
[gp node center] at (3.051,0.151) number of cores;
[gp node right] at (4.408,3.469) tsi loop global-scale solv;
rgb color=0.933,0.510,0.933
[gp path] (4.537,3.469)–(5.233,3.469);
[gp path] (1.054,2.016)–(1.625,1.947)–(2.195,1.720)–(2.766,1.455)–(3.336,1.315)
3pointgp mark 5(1.054,2.016)
3pointgp mark 5(1.625,1.947)
3pointgp mark 5(2.195,1.720)
3pointgp mark 5(2.766,1.455)
3pointgp mark 5(3.336,1.315)
3pointgp mark 5(3.907,1.157)
3pointgp mark 5(4.478,1.050)
3pointgp mark 5(5.048,0.982)
3pointgp mark 5(4.885,3.469)
color=gp lt color border
[gp node right] at (4.408,3.244) tsi loop fine-scale solv;
rgb color=1.000,0.000,1.000
[gp path] (4.537,3.244)–(5.233,3.244);
[gp path] (1.054,2.986)–(1.625,2.594)–(2.195,2.276)–(2.766,2.016)–(3.336,1.812)
3pointgp mark 6(1.054,2.986)
3pointgp mark 6(1.625,2.594)
3pointgp mark 6(2.195,2.276)
3pointgp mark 6(2.766,2.016)
3pointgp mark 6(3.336,1.812)
3pointgp mark 6(3.907,1.600)
3pointgp mark 6(4.478,1.385)
3pointgp mark 6(5.048,1.158)
3pointgp mark 6(4.885,3.244)
rgb color=0.000,0.000,0.000
[gp path] (1.054,2.986)–(1.094,2.963)–(1.135,2.941)–(1.175,2.919)–(1.215,2.897)
color=gp lt color border
[gp path] (1.054,3.095)–(1.054,0.691)–(5.049,0.691)–(5.049,3.095)–cycle;
gp plot 11.054cm0.691cm5.049cm3.095cm
[h=-300,loop only]
every node/.append style=scale=0.70
(0.000,0.000) rectangle (5.500,3.325);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,0.691)–(5.112,0.691);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,0.691)–(1.018,0.691);
[gp path] (5.112,0.691)–(4.932,0.691);
[gp node right] at (0.709,0.691) $10$;
[gp path] (0.838,0.971)–(0.928,0.971);
[gp path] (5.112,0.971)–(5.022,0.971);
[gp path] (0.838,1.134)–(0.928,1.134);
[gp path] (5.112,1.134)–(5.022,1.134);
[gp path] (0.838,1.250)–(0.928,1.250);
[gp path] (5.112,1.250)–(5.022,1.250);
[gp path] (0.838,1.340)–(0.928,1.340);
[gp path] (5.112,1.340)–(5.022,1.340);
[gp path] (0.838,1.414)–(0.928,1.414);
[gp path] (5.112,1.414)–(5.022,1.414);
[gp path] (0.838,1.476)–(0.928,1.476);
[gp path] (5.112,1.476)–(5.022,1.476);
[gp path] (0.838,1.530)–(0.928,1.530);
[gp path] (5.112,1.530)–(5.022,1.530);
[gp path] (0.838,1.577)–(0.928,1.577);
[gp path] (5.112,1.577)–(5.022,1.577);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,1.620)–(5.112,1.620);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,1.620)–(1.018,1.620);
[gp path] (5.112,1.620)–(4.932,1.620);
[gp node right] at (0.709,1.620) $100$;
[gp path] (0.838,1.900)–(0.928,1.900);
[gp path] (5.112,1.900)–(5.022,1.900);
[gp path] (0.838,2.063)–(0.928,2.063);
[gp path] (5.112,2.063)–(5.022,2.063);
[gp path] (0.838,2.179)–(0.928,2.179);
[gp path] (5.112,2.179)–(5.022,2.179);
[gp path] (0.838,2.269)–(0.928,2.269);
[gp path] (5.112,2.269)–(5.022,2.269);
[gp path] (0.838,2.343)–(0.928,2.343);
[gp path] (5.112,2.343)–(5.022,2.343);
[gp path] (0.838,2.405)–(0.928,2.405);
[gp path] (5.112,2.405)–(5.022,2.405);
[gp path] (0.838,2.459)–(0.928,2.459);
[gp path] (5.112,2.459)–(5.022,2.459);
[gp path] (0.838,2.506)–(0.928,2.506);
[gp path] (5.112,2.506)–(5.022,2.506);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,2.549)–(5.112,2.549);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,2.549)–(1.018,2.549);
[gp path] (5.112,2.549)–(4.932,2.549);
[gp node right] at (0.709,2.549) $1000$;
[gp path] (0.838,2.828)–(0.928,2.828);
[gp path] (5.112,2.828)–(5.022,2.828);
[gp path] (0.838,2.992)–(0.928,2.992);
[gp path] (5.112,2.992)–(5.022,2.992);
[gp path] (0.838,3.108)–(0.928,3.108);
[gp path] (5.112,3.108)–(5.022,3.108);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,0.691)–(0.838,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,0.691)–(0.838,0.871);
[gp path] (0.838,3.108)–(0.838,2.928);
[gp node center] at (0.838,0.475) 4;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.372,0.691)–(1.372,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.372,0.691)–(1.372,0.871);
[gp path] (1.372,3.108)–(1.372,2.928);
[gp node center] at (1.372,0.475) 8;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.906,0.691)–(1.906,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.906,0.691)–(1.906,0.871);
[gp path] (1.906,3.108)–(1.906,2.928);
[gp node center] at (1.906,0.475) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.441,0.691)–(2.441,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.441,0.691)–(2.441,0.871);
[gp path] (2.441,3.108)–(2.441,2.928);
[gp node center] at (2.441,0.475) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.975,0.691)–(2.975,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.975,0.691)–(2.975,0.871);
[gp path] (2.975,3.108)–(2.975,2.928);
[gp node center] at (2.975,0.475) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.509,0.691)–(3.509,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.509,0.691)–(3.509,0.871);
[gp path] (3.509,3.108)–(3.509,2.928);
[gp node center] at (3.509,0.475) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.043,0.691)–(4.043,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.043,0.691)–(4.043,0.871);
[gp path] (4.043,3.108)–(4.043,2.928);
[gp node center] at (4.043,0.475) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.578,0.691)–(4.578,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.578,0.691)–(4.578,0.871);
[gp path] (4.578,3.108)–(4.578,2.928);
[gp node center] at (4.578,0.475) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.112,0.691)–(5.112,3.108);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.112,0.691)–(5.112,0.871);
[gp path] (5.112,3.108)–(5.112,2.928);
[gp node center] at (5.112,0.475) 1024;
[gp path] (0.838,3.108)–(0.838,0.691)–(5.112,0.691)–(5.112,3.108)–cycle;
[gp node center] at (2.975,0.151) number of cores;
rgb color=0.933,0.510,0.933
[gp path] (0.838,2.300)–(1.372,1.982)–(1.906,1.755)–(2.441,1.559)–(2.975,1.453)
3pointgp mark 5(0.838,2.300)
3pointgp mark 5(1.372,1.982)
3pointgp mark 5(1.906,1.755)
3pointgp mark 5(2.441,1.559)
3pointgp mark 5(2.975,1.453)
3pointgp mark 5(3.509,1.367)
3pointgp mark 5(4.043,1.400)
3pointgp mark 5(4.578,1.298)
3pointgp mark 5(5.112,1.279)
rgb color=1.000,0.000,1.000
[gp path] (0.838,2.718)–(1.372,2.400)–(1.906,2.168)–(2.441,1.950)–(2.975,1.745)
3pointgp mark 6(0.838,2.718)
3pointgp mark 6(1.372,2.400)
3pointgp mark 6(1.906,2.168)
3pointgp mark 6(2.441,1.950)
3pointgp mark 6(2.975,1.745)
3pointgp mark 6(3.509,1.560)
3pointgp mark 6(4.043,1.363)
3pointgp mark 6(4.578,1.186)
3pointgp mark 6(5.112,1.022)
rgb color=0.000,0.000,0.000
[gp path] (0.838,2.718)–(0.887,2.693)–(0.935,2.667)–(0.984,2.642)–(1.032,2.616)
color=gp lt color border
[gp path] (0.838,3.108)–(0.838,0.691)–(5.112,0.691)–(5.112,3.108)–cycle;
gp plot 10.838cm0.691cm5.112cm3.108cm
[h=-100,loop only]
every node/.append style=scale=0.70
(0.000,0.000) rectangle (5.438,3.763);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,0.691)–(0.928,0.691);
[gp path] (5.049,0.691)–(4.959,0.691);
[gp path] (0.838,0.841)–(0.928,0.841);
[gp path] (5.049,0.841)–(4.959,0.841);
[gp path] (0.838,0.958)–(0.928,0.958);
[gp path] (5.049,0.958)–(4.959,0.958);
[gp path] (0.838,1.053)–(0.928,1.053);
[gp path] (5.049,1.053)–(4.959,1.053);
[gp path] (0.838,1.133)–(0.928,1.133);
[gp path] (5.049,1.133)–(4.959,1.133);
[gp path] (0.838,1.203)–(0.928,1.203);
[gp path] (5.049,1.203)–(4.959,1.203);
[gp path] (0.838,1.264)–(0.928,1.264);
[gp path] (5.049,1.264)–(4.959,1.264);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,1.320)–(5.049,1.320);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,1.320)–(1.018,1.320);
[gp path] (5.049,1.320)–(4.869,1.320);
[gp node right] at (0.709,1.320) $100$;
[gp path] (0.838,1.681)–(0.928,1.681);
[gp path] (5.049,1.681)–(4.959,1.681);
[gp path] (0.838,1.893)–(0.928,1.893);
[gp path] (5.049,1.893)–(4.959,1.893);
[gp path] (0.838,2.043)–(0.928,2.043);
[gp path] (5.049,2.043)–(4.959,2.043);
[gp path] (0.838,2.160)–(0.928,2.160);
[gp path] (5.049,2.160)–(4.959,2.160);
[gp path] (0.838,2.255)–(0.928,2.255);
[gp path] (5.049,2.255)–(4.959,2.255);
[gp path] (0.838,2.335)–(0.928,2.335);
[gp path] (5.049,2.335)–(4.959,2.335);
[gp path] (0.838,2.405)–(0.928,2.405);
[gp path] (5.049,2.405)–(4.959,2.405);
[gp path] (0.838,2.466)–(0.928,2.466);
[gp path] (5.049,2.466)–(4.959,2.466);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,2.522)–(5.049,2.522);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,2.522)–(1.018,2.522);
[gp path] (5.049,2.522)–(4.869,2.522);
[gp node right] at (0.709,2.522) $1000$;
[gp path] (0.838,2.883)–(0.928,2.883);
[gp path] (5.049,2.883)–(4.959,2.883);
[gp path] (0.838,3.095)–(0.928,3.095);
[gp path] (5.049,3.095)–(4.959,3.095);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.838,0.691)–(0.838,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.838,0.691)–(0.838,0.871);
[gp path] (0.838,3.095)–(0.838,2.915);
[gp node center] at (0.838,0.475) 16;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.440,0.691)–(1.440,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.440,0.691)–(1.440,0.871);
[gp path] (1.440,3.095)–(1.440,2.915);
[gp node center] at (1.440,0.475) 32;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.041,0.691)–(2.041,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.041,0.691)–(2.041,0.871);
[gp path] (2.041,3.095)–(2.041,2.915);
[gp node center] at (2.041,0.475) 64;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.643,0.691)–(2.643,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.643,0.691)–(2.643,0.871);
[gp path] (2.643,3.095)–(2.643,2.915);
[gp node center] at (2.643,0.475) 128;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.244,0.691)–(3.244,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.244,0.691)–(3.244,0.871);
[gp path] (3.244,3.095)–(3.244,2.915);
[gp node center] at (3.244,0.475) 256;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.846,0.691)–(3.846,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.846,0.691)–(3.846,0.871);
[gp path] (3.846,3.095)–(3.846,2.915);
[gp node center] at (3.846,0.475) 512;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (4.447,0.691)–(4.447,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (4.447,0.691)–(4.447,0.871);
[gp path] (4.447,3.095)–(4.447,2.915);
[gp node center] at (4.447,0.475) 1024;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (5.049,0.691)–(5.049,3.095);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (5.049,0.691)–(5.049,0.871);
[gp path] (5.049,3.095)–(5.049,2.915);
[gp node center] at (5.049,0.475) 2048;
[gp path] (0.838,3.095)–(0.838,0.691)–(5.049,0.691)–(5.049,3.095)–cycle;
[gp node center] at (2.943,0.151) number of cores;
rgb color=0.933,0.510,0.933
[gp path] (0.838,2.795)–(1.440,2.373)–(2.041,2.118)–(2.643,2.067)–(3.244,2.006)
3pointgp mark 5(0.838,2.795)
3pointgp mark 5(1.440,2.373)
3pointgp mark 5(2.041,2.118)
3pointgp mark 5(2.643,2.067)
3pointgp mark 5(3.244,2.006)
3pointgp mark 5(3.846,1.980)
3pointgp mark 5(4.447,1.951)
3pointgp mark 5(5.049,1.942)
rgb color=1.000,0.000,1.000
[gp path] (0.838,3.020)–(1.440,2.685)–(2.041,2.369)–(2.643,2.115)–(3.244,1.824)
3pointgp mark 6(0.838,3.020)
3pointgp mark 6(1.440,2.685)
3pointgp mark 6(2.041,2.369)
3pointgp mark 6(2.643,2.115)
3pointgp mark 6(3.244,1.824)
3pointgp mark 6(3.846,1.599)
3pointgp mark 6(4.447,1.349)
3pointgp mark 6(5.049,1.097)
color=gp lt color border
[gp node right] at (4.321,3.469) tsdd loop coarse-scale solv;
rgb color=0.000,1.000,0.000
[gp path] (4.450,3.469)–(5.146,3.469);
[gp path] (3.244,2.225)–(3.846,1.871)–(4.447,1.565);
3pointgp mark 5(3.244,2.225)
3pointgp mark 5(3.846,1.871)
3pointgp mark 5(4.447,1.565)
3pointgp mark 5(4.798,3.469)
color=gp lt color border
[gp node right] at (4.321,3.244) tsdd loop fine-scale solv;
rgb color=0.565,0.933,0.565
[gp path] (4.450,3.244)–(5.146,3.244);
[gp path] (3.244,1.845)–(3.846,1.550)–(4.447,1.303);
3pointgp mark 6(3.244,1.845)
3pointgp mark 6(3.846,1.550)
3pointgp mark 6(4.447,1.303)
3pointgp mark 6(4.798,3.244)
rgb color=0.000,0.000,0.000
[gp path] (0.838,3.020)–(0.893,2.987)–(0.947,2.954)–(1.002,2.921)–(1.057,2.888)
color=gp lt color border
[gp path] (0.838,3.095)–(0.838,0.691)–(5.049,0.691)–(5.049,3.095)–cycle;
gp plot 10.838cm0.691cm5.049cm3.095cm
Accumulated elapsed time in seconds over the four steps, for each adaptation. Full computation in (*PO_curve_times_400,*PO_curve_times_300,*PO_curve_times_100) and loop only in (*PO_curve_times_400_l,*PO_curve_times_300_l,*PO_curve_times_100_l). Log/log scale.
For "fr" and "blr" the total is simply four times almost the same cost (mainly factoring and solving, the symbolic phase being done only once).
For "dd" and "tsi", by contrast, each execution after the first starts from the previous one and saves iterations in the looping part.
For "dd", only the first step requires iterating the conjugate gradient to solve the global boundary problem.
The subsequent steps enter the loop of algorithm <ref> once and stop.
They cost less than the first step: essentially only the condensation and the preconditioner are updated and contribute, and both depend only on the domain size.
For "tsi", the restart process reduces the number of scale iterations, as can be seen in figure <ref>, by starting from a solution with a smaller residual error.
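The warm-start effect described above can be illustrated with a minimal, self-contained sketch (this is not the paper's code; a plain-Python conjugate gradient on a 1D Laplacian stands in for the actual global boundary problem, and the step-to-step perturbation of the right-hand side is a hypothetical stand-in for the adaptation). When the problem barely changes between steps, the solver restarted from the previous solution stops almost immediately, while the first (cold) solve pays the full iteration cost:

```python
# Hedged sketch: warm-starting an iterative solve across adaptation steps.
# Conjugate gradient on a small symmetric positive-definite system.

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def cg(A, b, x0, tol=1e-8, max_iter=1000):
    """Conjugate gradient; returns (solution, iteration count)."""
    x = list(x0)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for k in range(max_iter):
        if rs ** 0.5 < tol:          # converged before doing iteration k
            return x, k
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x, max_iter

# 1D Laplacian (tridiagonal SPD matrix), a toy stand-in for the real operator.
n = 20
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]

# Four "steps": the right-hand side changes only slightly between them.
rhs = [[1.0] * n]
for _ in range(3):
    nb = list(rhs[-1])
    nb[0] += 1e-10                   # hypothetical tiny per-step change
    rhs.append(nb)

x = [0.0] * n                        # cold start only for the first step
iters = []
for b in rhs:
    x, k = cg(A, b, x)               # warm start: reuse previous solution
    iters.append(k)

# The first (cold) solve iterates; the warm-started ones stop almost at once.
print(iters)
```

The same mechanism is what makes the later steps of "dd" and "tsi" cheap: the restart inherits a solution whose residual is already small, so the looping part is entered and left almost immediately.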
[Figure: gnuplot/TikZ plot code omitted (plot truncated at chunk boundary); log-scale vertical axis spanning 1e-07 to 1e-02.]
[gp path] (1.010,2.689)–(1.100,2.689);
[gp path] (3.794,2.689)–(3.704,2.689);
[gp path] (1.010,2.736)–(1.100,2.736);
[gp path] (3.794,2.736)–(3.704,2.736);
[gp path] (1.010,2.773)–(1.100,2.773);
[gp path] (3.794,2.773)–(3.704,2.773);
[gp path] (1.010,2.803)–(1.100,2.803);
[gp path] (3.794,2.803)–(3.704,2.803);
[gp path] (1.010,2.828)–(1.100,2.828);
[gp path] (3.794,2.828)–(3.704,2.828);
[gp path] (1.010,2.850)–(1.100,2.850);
[gp path] (3.794,2.850)–(3.704,2.850);
[gp path] (1.010,2.869)–(1.100,2.869);
[gp path] (3.794,2.869)–(3.704,2.869);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,2.886)–(2.184,2.886);
[gp path] (3.684,2.886)–(3.794,2.886);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,2.886)–(1.190,2.886);
[gp path] (3.794,2.886)–(3.614,2.886);
[gp node right] at (0.900,2.886) 1e-01;
[gp path] (1.010,2.999)–(1.100,2.999);
[gp path] (3.794,2.999)–(3.704,2.999);
[gp path] (1.010,3.066)–(1.100,3.066);
[gp path] (3.794,3.066)–(3.704,3.066);
[gp path] (1.010,3.113)–(1.100,3.113);
[gp path] (3.794,3.113)–(3.704,3.113);
[gp path] (1.010,3.149)–(1.100,3.149);
[gp path] (3.794,3.149)–(3.704,3.149);
[gp path] (1.010,3.179)–(1.100,3.179);
[gp path] (3.794,3.179)–(3.704,3.179);
[gp path] (1.010,3.204)–(1.100,3.204);
[gp path] (3.794,3.204)–(3.704,3.204);
[gp path] (1.010,3.226)–(1.100,3.226);
[gp path] (3.794,3.226)–(3.704,3.226);
[gp path] (1.010,3.245)–(1.100,3.245);
[gp path] (3.794,3.245)–(3.704,3.245);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.010,3.262)–(2.184,3.262);
[gp path] (3.684,3.262)–(3.794,3.262);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.010,3.262)–(1.190,3.262);
[gp path] (3.794,3.262)–(3.614,3.262);
[gp node right] at (0.900,3.262) 1e+00;
[gp path] (1.010,3.376)–(1.100,3.376);
[gp path] (3.794,3.376)–(3.704,3.376);
[gp path] (1.010,3.442)–(1.100,3.442);
[gp path] (3.794,3.442)–(3.704,3.442);
[gp path] (1.010,3.489)–(1.100,3.489);
[gp path] (3.794,3.489)–(3.704,3.489);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.338,0.592)–(1.338,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.338,0.592)–(1.338,0.772);
[gp path] (1.338,3.489)–(1.338,3.309);
[gp node center] at (1.338,0.407) $5$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.747,0.592)–(1.747,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.747,0.592)–(1.747,0.772);
[gp path] (1.747,3.489)–(1.747,3.309);
[gp node center] at (1.747,0.407) $10$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.156,0.592)–(2.156,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.156,0.592)–(2.156,0.772);
[gp path] (2.156,3.489)–(2.156,3.309);
[gp node center] at (2.156,0.407) $15$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.566,0.592)–(2.566,2.409);
[gp path] (2.566,3.309)–(2.566,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.566,0.592)–(2.566,0.772);
[gp path] (2.566,3.489)–(2.566,3.309);
[gp node center] at (2.566,0.407) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.975,0.592)–(2.975,2.409);
[gp path] (2.975,3.309)–(2.975,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.975,0.592)–(2.975,0.772);
[gp path] (2.975,3.489)–(2.975,3.309);
[gp node center] at (2.975,0.407) $25$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.385,0.592)–(3.385,2.409);
[gp path] (3.385,3.309)–(3.385,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.385,0.592)–(3.385,0.772);
[gp path] (3.385,3.489)–(3.385,3.309);
[gp node center] at (3.385,0.407) $30$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.794,0.592)–(3.794,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.794,0.592)–(3.794,0.772);
[gp path] (3.794,3.489)–(3.794,3.309);
[gp node center] at (3.794,0.407) $35$;
[gp path] (1.010,3.489)–(1.010,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
[gp node center,rotate=-270] at (0.175,2.040) $ \frac{\left \| \vm{A}_{rr} \cdot \vm{u}_{r}- \vm{B}_{r} \right \|}{\left \| \vm{B}_{r} \right \|} $;
[gp node center] at (2.402,0.130) number of iterations;
[gp node right] at (2.844,3.196) step 0;
rgb color=0.753,0.251,0.000
[gp path] (2.954,3.196)–(3.574,3.196);
[gp path] (1.010,3.091)–(1.092,2.874)–(1.174,2.695)–(1.256,2.553)–(1.338,2.426)
3pointgp mark 8(1.010,3.091)
3pointgp mark 8(1.419,2.318)
3pointgp mark 8(1.829,1.907)
3pointgp mark 8(2.238,1.566)
3pointgp mark 8(2.648,1.247)
3pointgp mark 8(3.057,0.956)
3pointgp mark 8(3.466,0.705)
3pointgp mark 8(3.264,3.196)
color=gp lt color border
[gp node right] at (2.844,2.971) step 1;
rgb color=0.753,0.251,0.000
[gp path] (2.954,2.971)–(3.574,2.971);
[gp path] (1.010,2.818)–(1.092,2.631)–(1.174,2.465)–(1.256,2.324)–(1.338,2.200)
3pointgp mark 6(1.010,2.818)
3pointgp mark 6(1.501,1.971)
3pointgp mark 6(1.993,1.411)
3pointgp mark 6(2.484,0.991)
3pointgp mark 6(2.975,0.683)
3pointgp mark 6(3.264,2.971)
color=gp lt color border
[gp node right] at (2.844,2.746) step 2;
rgb color=0.753,0.251,0.000
[gp path] (2.954,2.746)–(3.574,2.746);
[gp path] (1.010,2.837)–(1.092,2.653)–(1.174,2.494)–(1.256,2.352)–(1.338,2.220)
3pointgp mark 5(1.010,2.837)
3pointgp mark 5(1.583,1.893)
3pointgp mark 5(2.156,1.261)
3pointgp mark 5(2.730,0.785)
3pointgp mark 5(3.264,2.746)
color=gp lt color border
[gp node right] at (2.844,2.521) step 3;
rgb color=0.753,0.251,0.000
[gp path] (2.954,2.521)–(3.574,2.521);
[gp path] (1.010,2.832)–(1.092,2.653)–(1.174,2.483)–(1.256,2.338)–(1.338,2.210)
3pointgp mark 3(1.010,2.832)
3pointgp mark 3(1.665,1.787)
3pointgp mark 3(2.320,1.161)
3pointgp mark 3(2.975,0.644)
3pointgp mark 3(3.264,2.521)
color=gp lt color border
[gp path] (1.010,3.489)–(1.010,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
gp plot 11.010cm0.592cm3.794cm3.489cm
every node/.append style=scale=0.60
(0.000,0.000) rectangle (4.125,3.675);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.592)–(0.915,0.592);
[gp path] (3.794,0.592)–(3.704,0.592);
[gp path] (0.825,0.611)–(0.915,0.611);
[gp path] (3.794,0.611)–(3.704,0.611);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,0.628)–(3.794,0.628);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.628)–(1.005,0.628);
[gp path] (3.794,0.628)–(3.614,0.628);
[gp node right] at (0.715,0.628) 1e-07;
[gp path] (0.825,0.742)–(0.915,0.742);
[gp path] (3.794,0.742)–(3.704,0.742);
[gp path] (0.825,0.808)–(0.915,0.808);
[gp path] (3.794,0.808)–(3.704,0.808);
[gp path] (0.825,0.855)–(0.915,0.855);
[gp path] (3.794,0.855)–(3.704,0.855);
[gp path] (0.825,0.891)–(0.915,0.891);
[gp path] (3.794,0.891)–(3.704,0.891);
[gp path] (0.825,0.921)–(0.915,0.921);
[gp path] (3.794,0.921)–(3.704,0.921);
[gp path] (0.825,0.946)–(0.915,0.946);
[gp path] (3.794,0.946)–(3.704,0.946);
[gp path] (0.825,0.968)–(0.915,0.968);
[gp path] (3.794,0.968)–(3.704,0.968);
[gp path] (0.825,0.988)–(0.915,0.988);
[gp path] (3.794,0.988)–(3.704,0.988);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.005)–(3.794,1.005);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.005)–(1.005,1.005);
[gp path] (3.794,1.005)–(3.614,1.005);
[gp node right] at (0.715,1.005) 1e-06;
[gp path] (0.825,1.118)–(0.915,1.118);
[gp path] (3.794,1.118)–(3.704,1.118);
[gp path] (0.825,1.184)–(0.915,1.184);
[gp path] (3.794,1.184)–(3.704,1.184);
[gp path] (0.825,1.231)–(0.915,1.231);
[gp path] (3.794,1.231)–(3.704,1.231);
[gp path] (0.825,1.268)–(0.915,1.268);
[gp path] (3.794,1.268)–(3.704,1.268);
[gp path] (0.825,1.298)–(0.915,1.298);
[gp path] (3.794,1.298)–(3.704,1.298);
[gp path] (0.825,1.323)–(0.915,1.323);
[gp path] (3.794,1.323)–(3.704,1.323);
[gp path] (0.825,1.345)–(0.915,1.345);
[gp path] (3.794,1.345)–(3.704,1.345);
[gp path] (0.825,1.364)–(0.915,1.364);
[gp path] (3.794,1.364)–(3.704,1.364);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.381)–(3.794,1.381);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.381)–(1.005,1.381);
[gp path] (3.794,1.381)–(3.614,1.381);
[gp node right] at (0.715,1.381) 1e-05;
[gp path] (0.825,1.494)–(0.915,1.494);
[gp path] (3.794,1.494)–(3.704,1.494);
[gp path] (0.825,1.561)–(0.915,1.561);
[gp path] (3.794,1.561)–(3.704,1.561);
[gp path] (0.825,1.608)–(0.915,1.608);
[gp path] (3.794,1.608)–(3.704,1.608);
[gp path] (0.825,1.644)–(0.915,1.644);
[gp path] (3.794,1.644)–(3.704,1.644);
[gp path] (0.825,1.674)–(0.915,1.674);
[gp path] (3.794,1.674)–(3.704,1.674);
[gp path] (0.825,1.699)–(0.915,1.699);
[gp path] (3.794,1.699)–(3.704,1.699);
[gp path] (0.825,1.721)–(0.915,1.721);
[gp path] (3.794,1.721)–(3.704,1.721);
[gp path] (0.825,1.740)–(0.915,1.740);
[gp path] (3.794,1.740)–(3.704,1.740);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.757)–(3.794,1.757);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.757)–(1.005,1.757);
[gp path] (3.794,1.757)–(3.614,1.757);
[gp node right] at (0.715,1.757) 1e-04;
[gp path] (0.825,1.871)–(0.915,1.871);
[gp path] (3.794,1.871)–(3.704,1.871);
[gp path] (0.825,1.937)–(0.915,1.937);
[gp path] (3.794,1.937)–(3.704,1.937);
[gp path] (0.825,1.984)–(0.915,1.984);
[gp path] (3.794,1.984)–(3.704,1.984);
[gp path] (0.825,2.020)–(0.915,2.020);
[gp path] (3.794,2.020)–(3.704,2.020);
[gp path] (0.825,2.050)–(0.915,2.050);
[gp path] (3.794,2.050)–(3.704,2.050);
[gp path] (0.825,2.075)–(0.915,2.075);
[gp path] (3.794,2.075)–(3.704,2.075);
[gp path] (0.825,2.097)–(0.915,2.097);
[gp path] (3.794,2.097)–(3.704,2.097);
[gp path] (0.825,2.116)–(0.915,2.116);
[gp path] (3.794,2.116)–(3.704,2.116);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.134)–(3.794,2.134);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.134)–(1.005,2.134);
[gp path] (3.794,2.134)–(3.614,2.134);
[gp node right] at (0.715,2.134) 1e-03;
[gp path] (0.825,2.247)–(0.915,2.247);
[gp path] (3.794,2.247)–(3.704,2.247);
[gp path] (0.825,2.313)–(0.915,2.313);
[gp path] (3.794,2.313)–(3.704,2.313);
[gp path] (0.825,2.360)–(0.915,2.360);
[gp path] (3.794,2.360)–(3.704,2.360);
[gp path] (0.825,2.397)–(0.915,2.397);
[gp path] (3.794,2.397)–(3.704,2.397);
[gp path] (0.825,2.426)–(0.915,2.426);
[gp path] (3.794,2.426)–(3.704,2.426);
[gp path] (0.825,2.452)–(0.915,2.452);
[gp path] (3.794,2.452)–(3.704,2.452);
[gp path] (0.825,2.473)–(0.915,2.473);
[gp path] (3.794,2.473)–(3.704,2.473);
[gp path] (0.825,2.493)–(0.915,2.493);
[gp path] (3.794,2.493)–(3.704,2.493);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.510)–(2.184,2.510);
[gp path] (3.684,2.510)–(3.794,2.510);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.510)–(1.005,2.510);
[gp path] (3.794,2.510)–(3.614,2.510);
[gp node right] at (0.715,2.510) 1e-02;
[gp path] (0.825,2.623)–(0.915,2.623);
[gp path] (3.794,2.623)–(3.704,2.623);
[gp path] (0.825,2.689)–(0.915,2.689);
[gp path] (3.794,2.689)–(3.704,2.689);
[gp path] (0.825,2.736)–(0.915,2.736);
[gp path] (3.794,2.736)–(3.704,2.736);
[gp path] (0.825,2.773)–(0.915,2.773);
[gp path] (3.794,2.773)–(3.704,2.773);
[gp path] (0.825,2.803)–(0.915,2.803);
[gp path] (3.794,2.803)–(3.704,2.803);
[gp path] (0.825,2.828)–(0.915,2.828);
[gp path] (3.794,2.828)–(3.704,2.828);
[gp path] (0.825,2.850)–(0.915,2.850);
[gp path] (3.794,2.850)–(3.704,2.850);
[gp path] (0.825,2.869)–(0.915,2.869);
[gp path] (3.794,2.869)–(3.704,2.869);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.886)–(2.184,2.886);
[gp path] (3.684,2.886)–(3.794,2.886);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.886)–(1.005,2.886);
[gp path] (3.794,2.886)–(3.614,2.886);
[gp node right] at (0.715,2.886) 1e-01;
[gp path] (0.825,2.999)–(0.915,2.999);
[gp path] (3.794,2.999)–(3.704,2.999);
[gp path] (0.825,3.066)–(0.915,3.066);
[gp path] (3.794,3.066)–(3.704,3.066);
[gp path] (0.825,3.113)–(0.915,3.113);
[gp path] (3.794,3.113)–(3.704,3.113);
[gp path] (0.825,3.149)–(0.915,3.149);
[gp path] (3.794,3.149)–(3.704,3.149);
[gp path] (0.825,3.179)–(0.915,3.179);
[gp path] (3.794,3.179)–(3.704,3.179);
[gp path] (0.825,3.204)–(0.915,3.204);
[gp path] (3.794,3.204)–(3.704,3.204);
[gp path] (0.825,3.226)–(0.915,3.226);
[gp path] (3.794,3.226)–(3.704,3.226);
[gp path] (0.825,3.245)–(0.915,3.245);
[gp path] (3.794,3.245)–(3.704,3.245);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,3.262)–(2.184,3.262);
[gp path] (3.684,3.262)–(3.794,3.262);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,3.262)–(1.005,3.262);
[gp path] (3.794,3.262)–(3.614,3.262);
[gp node right] at (0.715,3.262) 1e+00;
[gp path] (0.825,3.376)–(0.915,3.376);
[gp path] (3.794,3.376)–(3.704,3.376);
[gp path] (0.825,3.442)–(0.915,3.442);
[gp path] (3.794,3.442)–(3.704,3.442);
[gp path] (0.825,3.489)–(0.915,3.489);
[gp path] (3.794,3.489)–(3.704,3.489);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.095,0.592)–(1.095,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.095,0.592)–(1.095,0.772);
[gp path] (1.095,3.489)–(1.095,3.309);
[gp node center] at (1.095,0.407) $5$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.432,0.592)–(1.432,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.432,0.592)–(1.432,0.772);
[gp path] (1.432,3.489)–(1.432,3.309);
[gp node center] at (1.432,0.407) $10$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.770,0.592)–(1.770,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.770,0.592)–(1.770,0.772);
[gp path] (1.770,3.489)–(1.770,3.309);
[gp node center] at (1.770,0.407) $15$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.107,0.592)–(2.107,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.107,0.592)–(2.107,0.772);
[gp path] (2.107,3.489)–(2.107,3.309);
[gp node center] at (2.107,0.407) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.444,0.592)–(2.444,2.409);
[gp path] (2.444,3.309)–(2.444,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.444,0.592)–(2.444,0.772);
[gp path] (2.444,3.489)–(2.444,3.309);
[gp node center] at (2.444,0.407) $25$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.782,0.592)–(2.782,2.409);
[gp path] (2.782,3.309)–(2.782,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.782,0.592)–(2.782,0.772);
[gp path] (2.782,3.489)–(2.782,3.309);
[gp node center] at (2.782,0.407) $30$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.119,0.592)–(3.119,2.409);
[gp path] (3.119,3.309)–(3.119,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.119,0.592)–(3.119,0.772);
[gp path] (3.119,3.489)–(3.119,3.309);
[gp node center] at (3.119,0.407) $35$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.457,0.592)–(3.457,2.409);
[gp path] (3.457,3.309)–(3.457,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.457,0.592)–(3.457,0.772);
[gp path] (3.457,3.489)–(3.457,3.309);
[gp node center] at (3.457,0.407) $40$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.794,0.592)–(3.794,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.794,0.592)–(3.794,0.772);
[gp path] (3.794,3.489)–(3.794,3.309);
[gp node center] at (3.794,0.407) $45$;
[gp path] (0.825,3.489)–(0.825,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
[gp node center] at (2.309,0.130) number of iterations;
[gp node right] at (2.844,3.196) step 0;
rgb color=0.000,0.392,0.000
[gp path] (2.954,3.196)–(3.574,3.196);
[gp path] (0.825,3.179)–(0.892,2.963)–(0.960,2.788)–(1.027,2.660)–(1.095,2.540)
3pointgp mark 8(0.825,3.179)
3pointgp mark 8(1.162,2.424)
3pointgp mark 8(1.500,1.942)
3pointgp mark 8(1.837,1.576)
3pointgp mark 8(2.175,1.320)
3pointgp mark 8(2.512,1.134)
3pointgp mark 8(2.849,0.966)
3pointgp mark 8(3.187,0.802)
3pointgp mark 8(3.524,0.644)
3pointgp mark 8(3.264,3.196)
color=gp lt color border
[gp node right] at (2.844,2.971) step 1;
rgb color=0.000,0.392,0.000
[gp path] (2.954,2.971)–(3.574,2.971);
[gp path] (0.825,2.719)–(0.892,2.540)–(0.960,2.364)–(1.027,2.220)–(1.095,2.091)
3pointgp mark 6(0.825,2.719)
3pointgp mark 6(1.230,1.862)
3pointgp mark 6(1.635,1.277)
3pointgp mark 6(2.040,0.851)
3pointgp mark 6(3.264,2.971)
color=gp lt color border
[gp node right] at (2.844,2.746) step 2;
rgb color=0.000,0.392,0.000
[gp path] (2.954,2.746)–(3.574,2.746);
[gp path] (0.825,2.736)–(0.892,2.553)–(0.960,2.393)–(1.027,2.247)–(1.095,2.120)
3pointgp mark 5(0.825,2.736)
3pointgp mark 5(1.297,1.787)
3pointgp mark 5(1.770,1.141)
3pointgp mark 5(2.242,0.671)
3pointgp mark 5(3.264,2.746)
color=gp lt color border
[gp node right] at (2.844,2.521) step 3;
rgb color=0.000,0.392,0.000
[gp path] (2.954,2.521)–(3.574,2.521);
[gp path] (0.825,2.736)–(0.892,2.553)–(0.960,2.387)–(1.027,2.238)–(1.095,2.105)
3pointgp mark 3(0.825,2.736)
3pointgp mark 3(1.365,1.663)
3pointgp mark 3(1.905,0.951)
3pointgp mark 3(3.264,2.521)
color=gp lt color border
[gp path] (0.825,3.489)–(0.825,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
gp plot 10.825cm0.592cm3.794cm3.489cm
every node/.append style=scale=0.60
(0.000,0.000) rectangle (4.125,3.675);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.592)–(0.915,0.592);
[gp path] (3.794,0.592)–(3.704,0.592);
[gp path] (0.825,0.611)–(0.915,0.611);
[gp path] (3.794,0.611)–(3.704,0.611);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,0.628)–(3.794,0.628);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,0.628)–(1.005,0.628);
[gp path] (3.794,0.628)–(3.614,0.628);
[gp node right] at (0.715,0.628) 1e-07;
[gp path] (0.825,0.742)–(0.915,0.742);
[gp path] (3.794,0.742)–(3.704,0.742);
[gp path] (0.825,0.808)–(0.915,0.808);
[gp path] (3.794,0.808)–(3.704,0.808);
[gp path] (0.825,0.855)–(0.915,0.855);
[gp path] (3.794,0.855)–(3.704,0.855);
[gp path] (0.825,0.891)–(0.915,0.891);
[gp path] (3.794,0.891)–(3.704,0.891);
[gp path] (0.825,0.921)–(0.915,0.921);
[gp path] (3.794,0.921)–(3.704,0.921);
[gp path] (0.825,0.946)–(0.915,0.946);
[gp path] (3.794,0.946)–(3.704,0.946);
[gp path] (0.825,0.968)–(0.915,0.968);
[gp path] (3.794,0.968)–(3.704,0.968);
[gp path] (0.825,0.988)–(0.915,0.988);
[gp path] (3.794,0.988)–(3.704,0.988);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.005)–(3.794,1.005);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.005)–(1.005,1.005);
[gp path] (3.794,1.005)–(3.614,1.005);
[gp node right] at (0.715,1.005) 1e-06;
[gp path] (0.825,1.118)–(0.915,1.118);
[gp path] (3.794,1.118)–(3.704,1.118);
[gp path] (0.825,1.184)–(0.915,1.184);
[gp path] (3.794,1.184)–(3.704,1.184);
[gp path] (0.825,1.231)–(0.915,1.231);
[gp path] (3.794,1.231)–(3.704,1.231);
[gp path] (0.825,1.268)–(0.915,1.268);
[gp path] (3.794,1.268)–(3.704,1.268);
[gp path] (0.825,1.298)–(0.915,1.298);
[gp path] (3.794,1.298)–(3.704,1.298);
[gp path] (0.825,1.323)–(0.915,1.323);
[gp path] (3.794,1.323)–(3.704,1.323);
[gp path] (0.825,1.345)–(0.915,1.345);
[gp path] (3.794,1.345)–(3.704,1.345);
[gp path] (0.825,1.364)–(0.915,1.364);
[gp path] (3.794,1.364)–(3.704,1.364);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.381)–(3.794,1.381);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.381)–(1.005,1.381);
[gp path] (3.794,1.381)–(3.614,1.381);
[gp node right] at (0.715,1.381) 1e-05;
[gp path] (0.825,1.494)–(0.915,1.494);
[gp path] (3.794,1.494)–(3.704,1.494);
[gp path] (0.825,1.561)–(0.915,1.561);
[gp path] (3.794,1.561)–(3.704,1.561);
[gp path] (0.825,1.608)–(0.915,1.608);
[gp path] (3.794,1.608)–(3.704,1.608);
[gp path] (0.825,1.644)–(0.915,1.644);
[gp path] (3.794,1.644)–(3.704,1.644);
[gp path] (0.825,1.674)–(0.915,1.674);
[gp path] (3.794,1.674)–(3.704,1.674);
[gp path] (0.825,1.699)–(0.915,1.699);
[gp path] (3.794,1.699)–(3.704,1.699);
[gp path] (0.825,1.721)–(0.915,1.721);
[gp path] (3.794,1.721)–(3.704,1.721);
[gp path] (0.825,1.740)–(0.915,1.740);
[gp path] (3.794,1.740)–(3.704,1.740);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,1.757)–(3.794,1.757);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,1.757)–(1.005,1.757);
[gp path] (3.794,1.757)–(3.614,1.757);
[gp node right] at (0.715,1.757) 1e-04;
[gp path] (0.825,1.871)–(0.915,1.871);
[gp path] (3.794,1.871)–(3.704,1.871);
[gp path] (0.825,1.937)–(0.915,1.937);
[gp path] (3.794,1.937)–(3.704,1.937);
[gp path] (0.825,1.984)–(0.915,1.984);
[gp path] (3.794,1.984)–(3.704,1.984);
[gp path] (0.825,2.020)–(0.915,2.020);
[gp path] (3.794,2.020)–(3.704,2.020);
[gp path] (0.825,2.050)–(0.915,2.050);
[gp path] (3.794,2.050)–(3.704,2.050);
[gp path] (0.825,2.075)–(0.915,2.075);
[gp path] (3.794,2.075)–(3.704,2.075);
[gp path] (0.825,2.097)–(0.915,2.097);
[gp path] (3.794,2.097)–(3.704,2.097);
[gp path] (0.825,2.116)–(0.915,2.116);
[gp path] (3.794,2.116)–(3.704,2.116);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.134)–(3.794,2.134);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.134)–(1.005,2.134);
[gp path] (3.794,2.134)–(3.614,2.134);
[gp node right] at (0.715,2.134) 1e-03;
[gp path] (0.825,2.247)–(0.915,2.247);
[gp path] (3.794,2.247)–(3.704,2.247);
[gp path] (0.825,2.313)–(0.915,2.313);
[gp path] (3.794,2.313)–(3.704,2.313);
[gp path] (0.825,2.360)–(0.915,2.360);
[gp path] (3.794,2.360)–(3.704,2.360);
[gp path] (0.825,2.397)–(0.915,2.397);
[gp path] (3.794,2.397)–(3.704,2.397);
[gp path] (0.825,2.426)–(0.915,2.426);
[gp path] (3.794,2.426)–(3.704,2.426);
[gp path] (0.825,2.452)–(0.915,2.452);
[gp path] (3.794,2.452)–(3.704,2.452);
[gp path] (0.825,2.473)–(0.915,2.473);
[gp path] (3.794,2.473)–(3.704,2.473);
[gp path] (0.825,2.493)–(0.915,2.493);
[gp path] (3.794,2.493)–(3.704,2.493);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.510)–(2.184,2.510);
[gp path] (3.684,2.510)–(3.794,2.510);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.510)–(1.005,2.510);
[gp path] (3.794,2.510)–(3.614,2.510);
[gp node right] at (0.715,2.510) 1e-02;
[gp path] (0.825,2.623)–(0.915,2.623);
[gp path] (3.794,2.623)–(3.704,2.623);
[gp path] (0.825,2.689)–(0.915,2.689);
[gp path] (3.794,2.689)–(3.704,2.689);
[gp path] (0.825,2.736)–(0.915,2.736);
[gp path] (3.794,2.736)–(3.704,2.736);
[gp path] (0.825,2.773)–(0.915,2.773);
[gp path] (3.794,2.773)–(3.704,2.773);
[gp path] (0.825,2.803)–(0.915,2.803);
[gp path] (3.794,2.803)–(3.704,2.803);
[gp path] (0.825,2.828)–(0.915,2.828);
[gp path] (3.794,2.828)–(3.704,2.828);
[gp path] (0.825,2.850)–(0.915,2.850);
[gp path] (3.794,2.850)–(3.704,2.850);
[gp path] (0.825,2.869)–(0.915,2.869);
[gp path] (3.794,2.869)–(3.704,2.869);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,2.886)–(2.184,2.886);
[gp path] (3.684,2.886)–(3.794,2.886);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,2.886)–(1.005,2.886);
[gp path] (3.794,2.886)–(3.614,2.886);
[gp node right] at (0.715,2.886) 1e-01;
[gp path] (0.825,2.999)–(0.915,2.999);
[gp path] (3.794,2.999)–(3.704,2.999);
[gp path] (0.825,3.066)–(0.915,3.066);
[gp path] (3.794,3.066)–(3.704,3.066);
[gp path] (0.825,3.113)–(0.915,3.113);
[gp path] (3.794,3.113)–(3.704,3.113);
[gp path] (0.825,3.149)–(0.915,3.149);
[gp path] (3.794,3.149)–(3.704,3.149);
[gp path] (0.825,3.179)–(0.915,3.179);
[gp path] (3.794,3.179)–(3.704,3.179);
[gp path] (0.825,3.204)–(0.915,3.204);
[gp path] (3.794,3.204)–(3.704,3.204);
[gp path] (0.825,3.226)–(0.915,3.226);
[gp path] (3.794,3.226)–(3.704,3.226);
[gp path] (0.825,3.245)–(0.915,3.245);
[gp path] (3.794,3.245)–(3.704,3.245);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (0.825,3.262)–(2.184,3.262);
[gp path] (3.684,3.262)–(3.794,3.262);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (0.825,3.262)–(1.005,3.262);
[gp path] (3.794,3.262)–(3.614,3.262);
[gp node right] at (0.715,3.262) 1e+00;
[gp path] (0.825,3.376)–(0.915,3.376);
[gp path] (3.794,3.376)–(3.704,3.376);
[gp path] (0.825,3.442)–(0.915,3.442);
[gp path] (3.794,3.442)–(3.704,3.442);
[gp path] (0.825,3.489)–(0.915,3.489);
[gp path] (3.794,3.489)–(3.704,3.489);
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.095,0.592)–(1.095,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.095,0.592)–(1.095,0.772);
[gp path] (1.095,3.489)–(1.095,3.309);
[gp node center] at (1.095,0.407) $10$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.395,0.592)–(1.395,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.395,0.592)–(1.395,0.772);
[gp path] (1.395,3.489)–(1.395,3.309);
[gp node center] at (1.395,0.407) $20$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.695,0.592)–(1.695,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.695,0.592)–(1.695,0.772);
[gp path] (1.695,3.489)–(1.695,3.309);
[gp node center] at (1.695,0.407) $30$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (1.995,0.592)–(1.995,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (1.995,0.592)–(1.995,0.772);
[gp path] (1.995,3.489)–(1.995,3.309);
[gp node center] at (1.995,0.407) $40$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.295,0.592)–(2.295,2.409);
[gp path] (2.295,3.309)–(2.295,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.295,0.592)–(2.295,0.772);
[gp path] (2.295,3.489)–(2.295,3.309);
[gp node center] at (2.295,0.407) $50$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.594,0.592)–(2.594,2.409);
[gp path] (2.594,3.309)–(2.594,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.594,0.592)–(2.594,0.772);
[gp path] (2.594,3.489)–(2.594,3.309);
[gp node center] at (2.594,0.407) $60$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (2.894,0.592)–(2.894,2.409);
[gp path] (2.894,3.309)–(2.894,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (2.894,0.592)–(2.894,0.772);
[gp path] (2.894,3.489)–(2.894,3.309);
[gp node center] at (2.894,0.407) $70$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.194,0.592)–(3.194,2.409);
[gp path] (3.194,3.309)–(3.194,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.194,0.592)–(3.194,0.772);
[gp path] (3.194,3.489)–(3.194,3.309);
[gp node center] at (3.194,0.407) $80$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.494,0.592)–(3.494,2.409);
[gp path] (3.494,3.309)–(3.494,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.494,0.592)–(3.494,0.772);
[gp path] (3.494,3.489)–(3.494,3.309);
[gp node center] at (3.494,0.407) $90$;
color=gp lt color axes
gp lt axes
gp dt axes
[gp path] (3.794,0.592)–(3.794,3.489);
color=gp lt color border
gp lt border
gp dt solid
[gp path] (3.794,0.592)–(3.794,0.772);
[gp path] (3.794,3.489)–(3.794,3.309);
[gp node center] at (3.794,0.407) $100$;
[gp path] (0.825,3.489)–(0.825,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
[gp node center] at (2.309,0.130) number of iterations;
[gp node right] at (2.844,3.196) step 0;
rgb color=0.000,0.000,0.545
[gp path] (2.954,3.196)–(3.574,3.196);
[gp path] (0.825,3.240)–(0.855,3.042)–(0.885,2.858)–(0.915,2.719)–(0.945,2.615)
3pointgp mark 8(0.825,3.240)
3pointgp mark 8(0.975,2.525)
3pointgp mark 8(1.125,2.247)
3pointgp mark 8(1.275,2.055)
3pointgp mark 8(1.425,1.879)
3pointgp mark 8(1.575,1.715)
3pointgp mark 8(1.725,1.561)
3pointgp mark 8(1.875,1.436)
3pointgp mark 8(2.025,1.338)
3pointgp mark 8(2.175,1.268)
3pointgp mark 8(2.324,1.205)
3pointgp mark 8(2.474,1.141)
3pointgp mark 8(2.624,1.082)
3pointgp mark 8(2.774,1.020)
3pointgp mark 8(2.924,0.962)
3pointgp mark 8(3.074,0.901)
3pointgp mark 8(3.224,0.842)
3pointgp mark 8(3.374,0.785)
3pointgp mark 8(3.524,0.725)
3pointgp mark 8(3.674,0.658)
3pointgp mark 8(3.264,3.196)
color=gp lt color border
[gp node right] at (2.844,2.971) step 1;
rgb color=0.000,0.000,0.545
[gp path] (2.954,2.971)–(3.574,2.971);
[gp path] (0.825,2.732)–(0.855,2.553)–(0.885,2.383)–(0.915,2.238)–(0.945,2.109)
3pointgp mark 6(0.825,2.732)
3pointgp mark 6(1.005,1.879)
3pointgp mark 6(1.185,1.378)
3pointgp mark 6(1.365,1.141)
3pointgp mark 6(1.545,0.951)
3pointgp mark 6(1.725,0.797)
3pointgp mark 6(1.905,0.683)
3pointgp mark 6(3.264,2.971)
color=gp lt color border
[gp node right] at (2.844,2.746) step 2;
rgb color=0.000,0.000,0.545
[gp path] (2.954,2.746)–(3.574,2.746);
[gp path] (0.825,2.756)–(0.855,2.587)–(0.885,2.424)–(0.915,2.290)–(0.945,2.163)
3pointgp mark 5(0.825,2.756)
3pointgp mark 5(1.035,1.853)
3pointgp mark 5(1.245,1.336)
3pointgp mark 5(1.455,1.091)
3pointgp mark 5(1.665,0.942)
3pointgp mark 5(1.875,0.842)
3pointgp mark 5(2.085,0.757)
3pointgp mark 5(2.295,0.671)
3pointgp mark 5(3.264,2.746)
color=gp lt color border
[gp node right] at (2.844,2.521) step 3;
rgb color=0.000,0.000,0.545
[gp path] (2.954,2.521)–(3.574,2.521);
[gp path] (0.825,2.759)–(0.855,2.597)–(0.885,2.434)–(0.915,2.296)–(0.945,2.163)
3pointgp mark 3(0.825,2.759)
3pointgp mark 3(1.065,1.744)
3pointgp mark 3(1.305,1.243)
3pointgp mark 3(1.545,1.020)
3pointgp mark 3(1.785,0.881)
3pointgp mark 3(2.025,0.778)
3pointgp mark 3(2.265,0.683)
3pointgp mark 3(3.264,2.521)
color=gp lt color border
[gp path] (0.825,3.489)–(0.825,0.592)–(3.794,0.592)–(3.794,3.489)–cycle;
gp plot 10.825cm0.592cm3.794cm3.489cm
[Figure: residual error (log scale, 1e-07 to 1e+00) versus number of iterations, with curves for h=-400, h=-300 and h=-100.]
Convergence of residual errors for the 3 adaptation stages and associated steps. Lin/log scale.
As the crack grows, the conditioning of the matrix deteriorates, and we can again observe that the convergence rate of the residual error decreases (figure <ref>).
As in the other test cases, the solver provides consistent performance for all numbers of cores tested.
Only the "dd" solver achieves equal or better performance when used with a high number of processes.
This drop in scalability when using many cores is due, on the one hand, to the low number of patches per process and the high number of distributed patches and, on the other hand, to the poor scalability of the coarse-scale resolution (even with the use of the version).
This can be seen in figures <ref>, <ref> and <ref> where the times spent in the loop to solve the fine-scale problems ("loop fine-scale solv") and the global-scale problem ("loop global-scale solv") are shown.
We can observe the drop in scalability when reaching $nbp_{max}$ processes (see section <ref>) with the coarse scale resolution.
To prove that this last point, if addressed, can significantly improve the scalability of the solver, a new version ("tsdd" hereafter) has been implemented with the domain decomposition solver of <ref> used at the coarse-scale level ($\dagger$ of algorithm <ref>).
This change brings the scalability of the "dd" solver to the coarse scale.
And by always using the last solution of the global boundary problem (at the coarse scale) as the starting point for the "dd" conjugate gradient resolution (algorithm <ref>), the evolutionary aspect of the simulation is now taken into account in the coarse-scale resolution.
In addition, during the loop, restarting from the previous solution of the global boundary problem also reduces the number of conjugate gradient iterations.
The iteration gains appear in figure <ref>.
Note the oscillation that appears for step 0 around the 35th iteration, just as the residual curve in figure <ref> settles onto another slope.
This behavior should be investigated in future work.
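The gain from restarting an iterative solve near its solution can be illustrated with a small self-contained sketch (not the solver of this paper): a plain conjugate gradient applied to a synthetic SPD system whose right-hand side changes only slightly between two consecutive "steps", mimicking an evolutionary simulation.

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, max_iter=500):
    """Plain conjugate gradient; returns the solution and the iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(max_iter):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

# SPD system; the rhs of the "next step" differs only slightly from the previous one
rng = np.random.default_rng(0)
M = rng.standard_normal((80, 80))
A = M @ M.T + 80 * np.eye(80)                     # well-conditioned SPD matrix
b_prev = rng.standard_normal(80)
b_next = b_prev + 1e-3 * rng.standard_normal(80)  # slowly evolving problem

x_prev, _ = cg(A, b_prev, np.zeros(80))
_, it_cold = cg(A, b_next, np.zeros(80))  # cold start from zero
_, it_warm = cg(A, b_next, x_prev)        # warm start from the previous solution
print(it_cold, it_warm)  # the warm start needs fewer iterations
```

Because the warm start already has a small initial residual, it needs fewer iterations to reach the same fixed tolerance, which is the mechanism exploited above by restarting the "dd" conjugate gradient from the previous global boundary solution.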
This new "tsdd" version gives, as expected, better results (see figure <ref>), since the use of the "dd" solver at the coarse scale is more efficient than a direct/iterative resolution, as can be seen in <ref>.
The slope at both scales is correct, giving a good overall slope for the "tsdd" solver.
This proves that any solver that scales well to a large number of cores and benefits from an iterative resolution context will be a good candidate to solve large coarse-scale problems.
It is natural to think of the distributed solver for this, as it can provide such capabilities, as has been proven in this work.
This is left as a future prospect.
Since the patches do not have the same "weight", the local problem scheduling proposed in section <ref> can be fully validated by this test case.
In table <ref>, three variants of the algorithms <ref>, <ref>, <ref> and <ref>, called V0, V1 and V2, are tested for h=-300 and h=-100 with 64 and 512 processes.
[Figure: number of conjugate gradient iterations (0 to 350) versus number of iterations, with curves for steps 0 to 3.]
Number of iterations for solving the "dd" conjugate gradient at the coarse scale ("tsdd" version) for h=-100 with 1024 cores.
                                                      h=-300   h=-100   h=-100
  Number of processes                                     64       64      512
  Total number of patches                              26609    94388    94388
  Min number of patches per process                      456     1457      225
  Max number of patches per process                      610     1802      316
  Average number of patches per process                535.5   1702.1    272.6
  Average number of distributed patches per process    226.6    442.1    160.4
V0  Number of sequences                                303.3    586.3    221.3
    Average observed cost per process (elapsed, s)     158.4    908.7    184.0
        vs V2 (%)                                        8.1      3.7      7.3
    Observed cost standard deviation (s)                12.6     57.0      9.0
        vs V2 (%)                                       24.9     38.3     30.5
V1  Number of sequences                                301.3    591.7    216.7
    Average observed cost per process (elapsed, s)     167.7    986.1    186.0
        vs V2 (%)                                       14.4     12.5      8.4
    Observed cost standard deviation (s)                17.0     81.2     11.0
        vs V2 (%)                                       68.4     97.0     59.3
V2  Number of sequences                                303.3    580.0    213.7
    Average observed cost per process (elapsed, s)     146.5    876.4    171.6
    Observed cost standard deviation (s)                10.1     41.2      6.9
Table: Comparison of distributed patch sequencing algorithms. V2 corresponds to algorithms <ref>, <ref>, <ref> and <ref>. V0 corresponds to algorithms <ref>, <ref> and <ref> without taking the weight into account ($\mathcal{D}$ and $\mathcal{L}$ are not sorted; $mxwg$ and $mxwl$ are neither computed nor used; the first available patch in these groups is selected). V1 corresponds to algorithms <ref>, <ref> and <ref> with a patch selection that neither uses nor computes $mxwg$ and $mxwl$ (the first available patch in the group is selected). The results given are the average values of 3 runs with the same parameters. The observed cost corresponds to the creation, factorization and resolution of all patches for all iterations and steps.
The V0 version corresponds to algorithms <ref>, <ref> and <ref> where the weight is not used: $\mathcal{D}$ and $\mathcal{L}$ are not sorted, $mxwg$ and $mxwl$ are neither calculated nor used, and the first available patch in the group is selected (i.e. the one with a "random" weight that depends only on how the patches were constructed).
The V1 version uses the groups $\mathcal{D}$ and $\mathcal{L}$ ordered by descending weight but does not use or compute $mxwg$ and $mxwl$.
The first available patch in the group is selected (i.e. the one with the highest weight).
The V2 version corresponds to the proposed algorithms <ref>, <ref> and <ref>.
To compare these versions, a metric called "observed cost per process" measures the elapsed time (averaged over 3 runs) per process for all tasks connected with distributed patch sequencing (i.e. all creations, factorizations, and resolutions of all patches for all iterations and steps).
The mean and standard deviation of this metric, over all processes, are given both in absolute value and relative to the V2 performance.
In terms of average performance, the V0 and V1 versions are 3.7% to 8.1% and 8.4% to 14.4% slower than the V2 version, respectively.
Surprisingly, the "random" V0 version is not the worst, most likely because the groups happened to be built in a favorable order during construction.
The standard deviation of this metric reflects the load imbalance introduced by the sequencing algorithm and the coarse mesh distribution.
Between V0, V1 and V2, only the sequencing varies; the coarse mesh distribution remains the same.
Thus, the variation in standard deviation relative to V2 somewhat reflects the load imbalance added by V0 and V1, which are 24.9% to 38.3% and 59.3% to 97.0% more dispersed than the V2 version, respectively.
In any case, the V2 version is also the one that provides the smallest (or an equal) number of sequences compared to the V0 and V1 versions.
These numbers of sequences are, as expected, higher than but close to the average number of distributed patches per process.
All these observations validate V2 as the best of the three versions to use in this paper.
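The benefit of weight-aware sequencing can be illustrated with a deliberately simplified sketch. This is not the V0/V1/V2 algorithms of this paper (no distributed/local groups, no $mxwg$/$mxwl$): only the classic heaviest-first greedy idea applied to a small hypothetical set of patch weights.

```python
import heapq

def greedy_schedule(weights, n_procs, heaviest_first):
    """Greedy list scheduling: each patch goes to the currently least-loaded process."""
    order = sorted(weights, reverse=True) if heaviest_first else list(weights)
    loads = [0] * n_procs
    heap = [(0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    for w in order:
        load, p = heapq.heappop(heap)
        loads[p] = load + w
        heapq.heappush(heap, (loads[p], p))
    return max(loads) - min(loads)  # load imbalance across processes

# Hypothetical patch "weights" (costs) in construction order, scheduled on 3 processes
weights = [1, 8, 3, 8, 2, 8, 2]
imb_arrival = greedy_schedule(weights, 3, heaviest_first=False)  # V0-like: arrival order
imb_lpt = greedy_schedule(weights, 3, heaviest_first=True)       # V1/V2-like: by weight
print(imb_arrival, imb_lpt)  # → 4 1
```

Processing the heaviest patches first leaves the small ones to fill residual gaps, which is the same intuition behind sorting $\mathcal{D}$ and $\mathcal{L}$ by descending weight.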
Also note that this test case validates the enrichment function and the processing of hanging nodes in patches with a mixture of refined and unrefined elements.
Some tests not presented here confirm that these choices give the best convergence compared to some truncated-patch solutions.
§ CONCLUSIONS
This work introduces several novelties into the original method:
* a new criterion to control the iterations of the resolution loop;
* an efficient algebraic computation of the matrices of the linear systems at both scales;
* the use of a linearly interpolated enrichment function (with a shift);
* the treatment of the blending element problem with the use of additional enriched nodes corresponding to mixed-discretization patches;
* a distributed implementation of the method;
* a new algorithm for scheduling the computation of fine-scale problems;
* a proof, given in the sequential setting, that there is an optimal scale jump (confirmed in parallel);
* the use of a specific resolution (e.g. an iterative domain decomposition solver) at the global level to improve the scalability of this scale.
These new features improve the capability of the method in several ways.
The new stopping criterion is comparable to the error (in energy norm) of the boundary condition imposed on the fine-scale problem.
It thus makes it possible to obtain results with a given precision.
And when the solver uses it for evolutionary phenomena (where the resolution starts from a nearby previous solution), the number of iterations can be considerably reduced.
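The link between a residual-based stopping quantity and the energy-norm error can be checked on a toy SPD system. The damped Richardson sweep below merely stands in for the actual resolution loop, and the matrix is synthetic; the point is the identity $r^T A^{-1} r = e^T A e$ (with $r = A e$), which lets a residual-based test control the precision of the result.

```python
import numpy as np

def energy_norm(A, v):
    """||v||_A = sqrt(v^T A v) for SPD A."""
    return float(np.sqrt(v @ (A @ v)))

# Synthetic SPD "stiffness" matrix and a manufactured exact solution
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 40))
A = M @ M.T + 40 * np.eye(40)
x_exact = rng.standard_normal(40)
b = A @ x_exact

# Since r = A (x_exact - x), the quantity r^T A^{-1} r equals the squared
# energy-norm error, so it can serve as a precision-controlled stopping test.
x = np.zeros(40)
step = 1.0 / np.linalg.norm(A, 2)      # damped Richardson step size
tol = 1e-6 * energy_norm(A, x_exact)   # target precision in energy norm
err_est = np.inf
for _ in range(10000):
    r = b - A @ x
    err_est = float(np.sqrt(r @ np.linalg.solve(A, r)))  # == ||x_exact - x||_A
    if err_est < tol:
        break
    x += step * r
print(err_est < tol)  # True: iteration stopped at the requested precision
```

In practice $A^{-1} r$ is of course never formed explicitly; the sketch only verifies that stopping on this quantity stops exactly at the prescribed energy-norm precision.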
With respect to performance, the proposed distributed implementation is promising and performs efficiently compared to other parallel solvers for a wide range of core counts.
In particular, with few cores, it can handle larger problems than the other solvers tested.
Parallelism at both scales is clearly to be maintained, but some elements still need to be improved (at the global scale in particular).
The scalability of the local scale is good thanks to, among other things, the new scheduling of the local-scale problems.
In terms of future work, parallel performance and the use of the new criterion are two topics that can be explored further. The proposed residual error criterion has already given valuable insights into the convergence of the solver.
In particular, it made it possible to highlight the influence of the conditioning of the global-scale system matrix on the convergence.
A full analysis of this observation should be done to obtain a theoretical confirmation.
Regarding enrichment, a closer, quantitative comparison with SGFEM and other enrichment techniques could also be conducted in future work, thanks to this new convergence criterion.
As for parallel performance, when the global problem becomes large, both solutions, including the iterative domain decomposition solver, exploited the fact that, between iterations at the global scale, the linear system evolves relatively slowly.
Therefore, the iterative resolution of this type of problem can gain in performance by reusing the previously computed solution.
This leads to the conclusion that any solver that scales well over a wide range of core counts and takes advantage of an iterative context in its computation will be a good candidate for solving large coarse-scale problems.
And of course, the solver itself can be used for this.
But many questions need to be answered before embarking on such a process.
Would we act independently, with one solver solving the large problem at the coarse scale of another solver at each of its iterations?
Or should we just do some "fine to coarse" transfer, solve a smaller coarse problem, and do some "coarse to fine" transfers?
And in this last case, what should "some" be: 2, or more?
This last idea, which is somehow related to the multigrid methodology (e.g. in [30], grid transfer according to a V, W or other scheme), is an interesting direction for future work.
Still on the subject of parallel performance, one can also consider a hybrid multithreading/MPI implementation of the proposed distributed implementation to somehow select the right range of cores for the right scale.
Specifically, in algorithm <ref>, as mentioned in section <ref>, once all distributed patches have been calculated, the remaining local patches are computed independently (the processes are no longer synchronized).
For this, a sequential computation within each process was naturally used, but, as in [18], a multithreaded computation could also be a solution.
The main impact would be to provide a parameter to adjust the number of distributed patches and the memory consumption per process.
In a multithreaded context, the constraint added by the dynamic scheduling proposed in [18] (the largest local patch limits the number of threads) will certainly play a role in choosing the MPI/thread partition.
But this flexibility will certainly also make it possible to choose the right number of processes at the global scale depending on the solver used at that scale.
As a side effect, many other tasks can also benefit from this hybrid implementation: many loops over the macro elements can be nicely parallelized across many threads in algorithms <ref>, <ref>, <ref>, <ref> and <ref>, thanks to the per-macro-element storage and computation of data.
But for large global-scale problems, the question of which solver to choose for that scale will certainly remain open.
Finally, a last perspective on local scale performance concerns the use of a direct solver to solve patches (distributed or not).
It is possible to consider that the enrichment functions vary slowly during the iterations.
Thus, if conditioning is not an issue, using an iterative solver for the patches may be more efficient, because reusing the previous solutions at almost every iteration would certainly reduce the number of solver iterations.
And in this context, storing data by macro element could relatively naturally result in a versatile block Jacobi preconditioner (each block can be used for many patches).
This can also have an impact on the memory footprint (it is no longer necessary to store the factorization of each patch).
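As an illustration of that last remark, here is a generic sketch (synthetic block-structured matrix, not the solver of this paper) of a block Jacobi preconditioner, where each diagonal block plays the role of one "macro element" block, inside a preconditioned conjugate gradient.

```python
import numpy as np

def block_jacobi(A, r, bs):
    """Apply the inverse of the block-diagonal part of A to r."""
    z = np.empty_like(r)
    for s in range(0, len(r), bs):
        e = min(s + bs, len(r))
        z[s:e] = np.linalg.solve(A[s:e, s:e], r[s:e])
    return z

def pcg(A, b, precond, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient; returns solution and iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        if np.linalg.norm(r) < tol:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD matrix with strong diagonal blocks (one per "macro element") and weak coupling
rng = np.random.default_rng(7)
n, bs = 120, 10
A = 0.01 * rng.standard_normal((n, n))
A = A + A.T
for s in range(0, n, bs):
    B = rng.standard_normal((bs, bs))
    A[s:s+bs, s:s+bs] += B @ B.T + bs * np.eye(bs)
b = rng.standard_normal(n)

x_id, it_id = pcg(A, b, lambda r: r)                      # unpreconditioned CG
x_bj, it_bj = pcg(A, b, lambda r: block_jacobi(A, r, bs)) # block Jacobi PCG
print(it_id, it_bj)  # block Jacobi needs fewer iterations
```

When the coupling between blocks is weak, the block-diagonal inverse is close to $A^{-1}$ and the iteration count drops sharply; in the solver, the per-macro-element storage would make assembling such blocks natural, and a block could even be reused for several patches.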
§ ACKNOWLEDGEMENTS
The authors would like to thank Nicolas Chevaugeon and Gregory Legrain with whom the conversations about parallelism and the method were fruitful.
[1] P. Amestoy, C. Ashcraft, O. Boiteau, A. Buttari, J. L'Excellent, and C. Weisbecker. Improving multifrontal methods by means of block low-rank representations. SIAM Journal on Scientific Computing, 37(3):A1451–A1474, 2015.
[2] P. Amestoy, I. Duff, J. L'Excellent, and J. Koster. A fully asynchronous multifrontal solver using distributed dynamic scheduling. SIAM Journal on Matrix Analysis and Applications, 23(1):15–41, 2001.
[3] P. R. Amestoy, A. Guermouche, J.-Y. L'Excellent, and S. Pralet. Hybrid scheduling for the parallel solution of linear systems. Parallel Computing, 32(2):136–156, 2006. Parallel Matrix Algorithms and Applications (PMAA'04).
[4] I. Babuška and U. Banerjee. Stable Generalized Finite Element Method (SGFEM). Computer Methods in Applied Mechanics and Engineering, 201-204:91–111, 2012.
[5] S. Bordas, T. Rabczuk, and G. Zi. Three-dimensional crack initiation, propagation, branching and junction in non-linear materials by an extended meshfree method without asymptotic enrichment. Engineering Fracture Mechanics, 75(5):943–960, 2008.
[6] C. A. Duarte and D. J. Kim. Analysis and applications of a generalized finite element method with global-local enrichment functions. Computer Methods in Applied Mechanics and Engineering, 197(6-8):487–504, 2008.
[7] C. A. Duarte, D.-J. Kim, and I. Babuška. A global-local approach for the construction of enrichment functions for the generalized FEM and its application to three-dimensional cracks. In V. M. A. Leitão, C. J. S. Alves, and C. Armando Duarte, editors, Advances in Meshfree Techniques, pages 1–26, Dordrecht, 2007. Springer Netherlands.
[8] R. Eligehausen and G. Sawade. A fracture mechanics based description of the pull-out behavior of headed studs embedded in concrete. In RILEM Report, pages 281–299. L. Elfgren, Chapman and Hall, London, 1989.
[9] T. Fries. A corrected XFEM approximation without problems in blending elements. International Journal for Numerical Methods in Engineering, 75(5):503–532, 2008.
[10] T. C. Gasser and G. A. Holzapfel. Modeling 3D crack propagation in unreinforced concrete using PUFEM. Computer Methods in Applied Mechanics and Engineering, 194(25-26):2859–2896, 2005.
[11] R. Geelen, J. Plews, M. Tupek, and J. Dolbow. An extended/generalized phase-field finite element method for crack growth with global-local enrichment. International Journal for Numerical Methods in Engineering, 121(11):2534–2557, 2020.
[12] V. Gupta, C. Duarte, I. Babuška, and U. Banerjee. Stable GFEM (SGFEM): Improved conditioning and accuracy of GFEM/XFEM for three-dimensional fracture mechanics. Computer Methods in Applied Mechanics and Engineering, 289:355–386, 2015.
[13] V. Gupta, C. A. Duarte, I. Babuška, and U. Banerjee. A stable and optimally convergent generalized FEM (SGFEM) for linear elastic fracture mechanics. Computer Methods in Applied Mechanics and Engineering, 266:23–39, 2013.
[14] V. Gupta, D.-J. Kim, and C. A. Duarte. Analysis and improvements of global-local enrichments for the Generalized Finite Element Method. Computer Methods in Applied Mechanics and Engineering, 245-246:47–62, 2012.
[15] N. J. Higham and T. Mary. Solving block low-rank linear systems by LU factorization is numerically stable. IMA Journal of Numerical Analysis, 42(2):951–980, 2022.
[16] G. Karypis and V. Kumar. A coarse-grain parallel multilevel $k$-way partitioning algorithm. In Proceedings of the 8th SIAM Conference on Parallel Processing for Scientific Computing, 1997.
[17] D.-J. Kim, C. A. Duarte, and S. P. Proenca. A generalized finite element method with global-local enrichment functions for confined plasticity problems. Computational Mechanics, 50(5):563–578, 2012.
[18] D. J. Kim, C. A. Duarte, and N. A. Sobh. Parallel simulations of three-dimensional cracks using the generalized finite element method. Computational Mechanics, 47(3):265–282, 2011.
[19] D.-J. Kim, S.-G. Hong, and C. A. Duarte. Generalized finite element analysis using the preconditioned conjugate gradient method. Applied Mathematical Modelling, 39(19):5837–5848, 2015.
[20] D.-J. Kim, J. P. A. Pereira, and C. A. Duarte. Analysis of three-dimensional fracture mechanics problems: A two-scale approach using coarse-generalized FEM meshes. International Journal for Numerical Methods in Engineering, 81:335–365, 2010.
[21] H. Li and C. A. Duarte. A two-scale generalized finite element method for parallel simulations of spot welds in large structures. Computer Methods in Applied Mechanics and Engineering, 337:28–65, 2018.
[22] H. Li, P. O'Hara, and C. Duarte. Non-intrusive coupling of a 3-D Generalized Finite Element Method and Abaqus for the multiscale analysis of localized defects and structural [...]. Finite Elements in Analysis and Design, 193:103554.
[23] A. K. Noor. Global-local methodologies and their application to nonlinear analysis. Finite Elements in Analysis and Design, 2(4):333–346, 1986.
[24] P. O'Hara, C. A. Duarte, and T. Eason. Generalized finite element analysis of three-dimensional heat transfer problems exhibiting sharp thermal gradients. Computer Methods in Applied Mechanics and Engineering, 198(21-26):1857–1871, 2009.
[25] J. Ožbolt, R. Eligehausen, and H. Reinhardt. Size effect on the concrete cone pull-out load. International Journal of Fracture, 95(1):391–404, 1999.
[26] J. P. A. Pereira, D.-J. Kim, and C. A. Duarte. A two-scale approach for the analysis of propagating three-dimensional fractures. Computational Mechanics, 49(1):99–121, 2011.
[27] J. Plews and C. Duarte. Bridging multiple structural scales with a generalized finite element method. International Journal for Numerical Methods in Engineering, 102(3-4):180–201, 2015.
[28] J. Plews and C. Duarte. Generalized finite element approaches for analysis of localized thermo-structural effects. International Journal for Numerical Methods in Engineering, 104(6):408–438, 2015.
[29] J. A. Plews and C. A. Duarte. A two-scale generalized finite element approach for modeling localized thermoplasticity. International Journal for Numerical Methods in Engineering, 108(10):1123–1158, 2016.
[30] Y. Saad. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, second edition, 2003.
[31] A. Salzman. Thick Level Set model implementation in 3D parallel context. PhD thesis, Ecole Centrale de Nantes, Nantes, France, Oct. 2019.
§ NOTATION CONVENTION
The following conventions are used in the algorithms in this document:
* $\Leftarrow$ denotes assembling a vector or matrix defined on a space or set into the receiving space or set in which it is embedded
* $\left( A,B\right) \gets\left( C,D\right) $ is equivalent to $A \gets C$ and $B\gets D$
* $\left( A,B\right) \Leftarrow \left( C,D\right) $ is equivalent to $A \Leftarrow C$ and $B\Leftarrow D$
* $A \Leftarrow\left( C,D\right) $ is equivalent to $A \Leftarrow C$ and $A\Leftarrow D$
* $A\gets \left\lbrace B,C\right\rbrace $ indicate that the matrices $B$ and $C$ are merged by column to form a single matrix stored in $A$
* $\left( A,B,C\right) \gets$ PROCEDURE... indicates that the called PROCEDURE returns data which are stored in $A$,$B$ and $C$.
See return declaration in PROCEDURE.
* $\triangleright$ represents a task involving communication (point to point or collective)
* If $\mathcal{C}$ is a set of entities:
* $\mathcal{C} \setminus e$ is removing $e$ from $\mathcal{C}$
* $\mathcal{C} \cup e$ is adding $e$ to the end of $\mathcal{C}$
* $weight_k$ is the weight of a patch identified by the letter $k$: the number of micro-scale elements embedded in the patch $k$
* To simplify the algorithms, $A^{-1}$ expresses the inverse of the matrix $A$, but it is in many cases the factorization that is actually obtained and used.
Thus $X=A^{-1}\cdot B$ is calculated as the solution of $L\cdot D\cdot L^{t}\cdot X=B$ where $L$ and $D$ are the lower triangular and diagonal matrices obtained by factoring $A$.
* The subscripted letters, corresponding to the sets of values defined in the table <ref>, give the dofs on which the matrix or the vector is defined. Thus, $\vm{A}_{gg}$ is a matrix of dimension $card(g)\times card(g)$ and $\vm{U}_{g}$ is a vector of size $card(g)$.
* The superscript $patches$ indicates the set of matrices or vectors of all fine-scale problems: $X^{patches}=\left\{X^p : p\in patches\right\}$ with $patches=\left\{1,2,...,card(I_e^g)\right\}$ (see section <ref> for definition of $I_e^g$)
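As a toy illustration of the $\Leftarrow$ (assemble) convention above, a local element matrix is added into a global matrix at the rows and columns given by the element's dof map; overlapping contributions accumulate. The dof indices and values below are made up for the example.

```python
import numpy as np

# assemble() mirrors the "<=" convention: the local matrix is added into
# the global one at the element's dof rows/columns (scatter-add).
def assemble(A_glob, A_loc, dofs):
    A_glob[np.ix_(dofs, dofs)] += A_loc

A = np.zeros((5, 5))
A_loc = np.array([[2.0, -1.0], [-1.0, 2.0]])
assemble(A, A_loc, [0, 1])   # first element couples dofs 0 and 1
assemble(A, A_loc, [1, 2])   # second element couples dofs 1 and 2

# dof 1 is shared, so its diagonal entry accumulates both contributions
assert A[1, 1] == 4.0
```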
§ VALUES SET
Coarse-level sets:
* $G$: global-scale values; $G=D\cup g$, $D\cap g=\oslash$
* $D$: fixed values (Dirichlet); $D\subset G$
* $C$: classical values; $C=D\cup c$
* $g$: free values; $g=c\cup e$
* $c$: classical free values; $c=C\setminus D$, $c=h\cup k$
* $e$: enriched free values; $e=G\setminus C$
* $m$: free values related to SP elements; $m=k\cup e$
* $k$: classical free values related to SP elements; $k\subset c$, $k\cap h=\oslash$
* $h$: classical free values not related to SP elements; $h=c\setminus k$
Patch-level sets (for the $p^{th}$ patch):
* $Q$: fine-scale values; $Q=L\cup DP\cup d\cup q$
* $L$: fixed values (by linear relation); $L\subset Q$, $L\cap DP=\oslash$, $L\cap d=\oslash$
* $DP$: fixed values ($D$ restricted to patch $p$); $DP=D\cap Q$
* $d$: fixed values (Dirichlet from fine solution $f$); $d=f\cap Q$
* $q$: free values; $q=Q\setminus(d\cup DP\cup L)$
Mixed fine/coarse-level sets:
* $R$: reference values; $R=M\cup L$, $M\cap L=\oslash$
* $L$: fixed values (by linear relation); $L\subset R$, $L\cap DR=\oslash$
* $M$: reference values without $L$; $M=r\cup DR=F\cup H$, $r\cap DR=\oslash$
* $DR$: fixed values (inherited from $D$); $DR\subset M$, $D\subseteq DR$
* $F$: reference values related to SP elements; $F=f\cup (DR\setminus (DR\cap H))$
* $H$: reference values not related to SP elements; $H=h\cup (DR\setminus (DR\cap F))$
* $r$: reference free values; $r=h\cup f$
* $f$: free values related to SP elements; $f\subset r$
* $h$: free values not related to SP elements; $h=r\setminus f$, same as $h$ at the coarse level
Value sets description (lowercase letters for free values, uppercase letters for sets gathering both free and fixed values)
A large number of sets are put in place to clarify the presentation of the different discretizations.
The value of the discretized field at a node is called the value.
This value can be fixed or free.
When it is free, it can be classical or enriched.
There are three reasons why a value may be fixed: it lies on the physical Dirichlet boundary, it lies on a patch boundary when solving fine-scale problems, or it is the hanging-node value of a micro node.
Regarding the set notations, lower case will be used for free values while upper case will be used for fixed values or for a set of values gathering both fixed and free values.
All value types are described in table <ref>.
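The coarse-level relations in the table can be checked mechanically on small hypothetical index sets; the letters mirror the table, but the indices are arbitrary.

```python
# Toy check of the coarse-level set relations (made-up dof indices).
D = {0, 1}            # fixed (Dirichlet) values
h = {2, 3}            # classical free values not related to SP elements
k = {4, 5}            # classical free values related to SP elements
e = {6, 7}            # enriched free values

c = h | k             # classical free values:      c = h U k
g = c | e             # free values:                g = c U e
C = D | c             # classical values:           C = D U c
G = D | g             # global-scale values:        G = D U g
m = k | e             # free values related to SP:  m = k U e

assert c == C - D       # c = C \ D
assert e == G - C       # e = G \ C
assert D & g == set()   # D and g are disjoint
assert k & h == set()   # k and h are disjoint
```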
§ MESH ADAPTATION
In this work, the mesh adaptation uses a simple splitting strategy.
For each level of refinement, we take the impacted elements (i.e. those that meet a certain criterion provided by a user-defined function) from the previous level and divide each of them in an encapsulated manner.
New nodes are added to the middle of the edges of impacted elements.
The affected elements are then replaced by a tetrahedronization based on the old and new nodes, so that the new elements are encapsulated in the destroyed old elements.
The figure <ref> illustrates this process, in 2D for simplicity.
Figure: mesh refinement principles (presented in 2D but the same principle applies in 3D), with panels showing the starting mesh, the wrong and correct L2 refinements, and the final adapted mesh. The dot represents the area of interest and the stars the hanging nodes. Only one hanging node is accepted per edge.
In the figure <ref>, two triangles represent the starting level of the adaptation, with a dot representing the area of interest.
The first level of refinement, L1 (figure <ref>), divides only the triangle covering the area of interest.
A hanging node (star in figure <ref>) appears in the middle of the edge common to the triangles in figure <ref>.
Then, when moving to the next level of refinement, L2, again only the element covering the area of interest is split.
If we simply split this element (figure <ref>), two hanging nodes appear on the same edge.
This is considered too abrupt in terms of mesh transition and is therefore avoided by forcing the splitting of the second initial unmodified triangle (figure <ref>).
This "one hanging node per edge" rule imposes a smoother mesh transition.
When the adaptation is complete (i.e. for example when the target size of the element is reached in the area of interest), the hanging nodes are removed by splitting the element with the hanging edge/face using only that node and the existing vertices (figure <ref>).
Note that in 3D, an additional node can be added to the center of gravity of the tetrahedron which is modified by removing the hanging node.
The algorithm associated with this process is implemented in parallel.
In short, for one level of refinement, it reviews all impacted elements.
Next, it examines whether splitting these elements would violate the "one hanging node per edge" rule.
If this is the case, the connected elements (i.e. those with a hanging edge/face) are added to the list of elements to divide.
This verification operation is repeated until no more elements are added.
There may be some communication during this task.
Then all selected elements are split.
The next level of refinement may occur or, if it is complete (i.e. there are no more elements selected for splitting), the final deletion of hanging nodes is performed.
More details can be found in [31].
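The selection closure enforcing the "one hanging node per edge" rule can be sketched as a fixed-point loop; the data layout below (edge lists, neighbour map) is hypothetical, not the library's actual structures.

```python
# Hypothetical mesh data: edge lists per element, the set of edges already
# carrying a hanging node, and the neighbour across each (element, edge).
def close_selection(selected, hanging_edges, edges_of, neighbour_across):
    """Grow `selected` until splitting it violates no 'one hanging node
    per edge' constraint (repeat the check until nothing is added)."""
    selected = set(selected)
    changed = True
    while changed:
        changed = False
        for elt in list(selected):
            for edge in edges_of[elt]:
                # splitting `elt` would add a second hanging node on an
                # edge that already has one: split the neighbour as well
                if edge in hanging_edges:
                    nb = neighbour_across.get((elt, edge))
                    if nb is not None and nb not in selected:
                        selected.add(nb)
                        changed = True
    return selected

# Two elements sharing edge 's', which already carries a hanging node:
selected = close_selection({'A'}, {'s'},
                           {'A': ['s'], 'B': ['s']},
                           {('A', 's'): 'B', ('B', 's'): 'A'})
assert selected == {'A', 'B'}   # B is forced into the selection
```

In the parallel implementation the same loop also exchanges selections across process boundaries, which is where the communication mentioned above occurs.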
Note that a constrained Delaunay tetrahedralization can be an interesting alternative for this work as long as the coarse mesh remains a skeleton of the new discretization: all faces/edges of the tetrahedra of the coarse mesh remain defined (possibly divided several times) in the new mesh.
§ $\VM{T}_{MG}$ CONSTRUCTION
The $\vm{T}_{MG}$ matrix is a linear interpolation operator with the following structure, considering that the M-set is ordered with the H-set first and the F-set second, and that the G-set is ordered with the D-set first, the c-set second and the e-set third with h-set first in the c-set:
\begin{equation}
\vm{T}_{MG}=\left( \begin{array}{cccc}
\vm{T}_{HD}&\vm{T}_{Hh}&\vm{0}&\vm{0}\\
\vm{T}_{FD}&\vm{0}&\vm{T}_{Fk}&\vm{T}_{Fe}
\end{array}\right)
\label{oper_rg}
\end{equation}
A term of $\vm{T}_{MG}$ at index $(a,b)$ is, according to equation (<ref>), either $N^i(\vm{x}^j)$ or $N^i(\vm{x}^j)\vm{F}^p(\vm{x}^j)\cdot\vm{e_k}$, where $\vm{x}^j$ is the node associated with dof $a$, $\vm{e_k}$ is the unit vector associated with the component that dof $a$ represents on the node $\vm{x}^j$, and $\vm{x}^i$ is the node associated with dof $b$.
When $a\in H$ and $b\in m$, since the node $\vm{x}^j$ is not in the support of the node $\vm{x}^i$, all $N^i(\vm{x}^j)$ terms are zero and the H-set$\times$m-set block is null.
The same holds when $a\in F$ and $b\in h$: the F-set$\times$h-set block is null.
When $b\in D$ only the nodes $\vm{x}^j$ on the derived Dirichlet boundary conditions give nonzero terms, i.e. when $a\in DR$.
All other terms are zero in $\vm{T}_{HD}$ and $\vm{T}_{FD}$.
For $a\in H$ and $b\in h$ the macro and micro meshes are the same and the only non-zero terms are when $a=b$.
The h-set$\times$h-set block is then an identity matrix and all the other terms of $\vm{T}_{Hh}$ are null.
The remaining blocks $\vm{T}_{Fk}$ and $\vm{T}_{Fe}$ correspond respectively to $N^i(\vm{x}^j)$ for a classical dof ($b\in k$) and to $N^i(\vm{x}^j)\vm{F}^p(\vm{x}^j)\cdot\vm{e_k}$ for an enriched dof ($b\in e$).
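The block pattern of $\vm{T}_{MG}$, with its identity h-set block and the two null blocks, can be illustrated with a toy NumPy assembly; all dimensions and interpolation weights below are made up.

```python
import numpy as np

# Made-up dimensions: rows are the H-set then the F-set, columns are the
# D-, h-, k- and e-sets, in that order.
nH, nF = 2, 3
nD, nh, nk, ne = 1, 2, 2, 1

T_HD = np.zeros((nH, nD))        # nonzero only on derived Dirichlet rows
T_Hh = np.eye(nH, nh)            # h-set block is an identity matrix
T_FD = np.zeros((nF, nD))
T_Fk = np.full((nF, nk), 0.25)   # placeholder N^i(x^j) weights
T_Fe = np.full((nF, ne), 0.10)   # placeholder enriched terms

T_MG = np.block([
    [T_HD, T_Hh,               np.zeros((nH, nk)), np.zeros((nH, ne))],
    [T_FD, np.zeros((nF, nh)), T_Fk,               T_Fe],
])
assert T_MG.shape == (nH + nF, nD + nh + nk + ne)
```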
§ GLOBAL SCALE PROBLEM CONSTRUCTION
The equality (<ref>) gives the following system:
\begin{equation}
\vm{T}_{MG}^t\cdot \vm{A}_{MM} \cdot \vm{T}_{MG}\cdot \vm{U}_g=\vm{T}_{MG}^t\cdot \vm{B}_{M}
\label{Agg_begin}
\end{equation}
It can be rewritten as follows using (<ref>) and the matrices partitioned on the sets H and F:
\begin{equation}
\left( \begin{array}{cc}
\vm{T}_{HD}^t&\vm{T}_{FD}^t\\
\vm{T}_{Hh}^t&\vm{0}\\
\vm{0}&\vm{T}_{Fk}^t\\
\vm{0}&\vm{T}_{Fe}^t
\end{array}\right) \cdot\left( \begin{array}{cc}
\vm{A}_{HH}&\vm{A}_{HF}\\
\vm{A}_{FH}&\vm{A}_{FF}
\end{array}\right) \cdot \left( \begin{array}{cccc}
\vm{T}_{HD}&\vm{T}_{Hh}&\vm{0}&\vm{0}\\
\vm{T}_{FD}&\vm{0}&\vm{T}_{Fk}&\vm{T}_{Fe}
\end{array}\right) \cdot\left( \begin{array}{c}
\vm{u}_D\\
\vm{u}_h\\
\vm{u}_k\\
\vm{u}_e
\end{array}\right)=\left( \begin{array}{cc}
\vm{T}_{HD}^t&\vm{T}_{FD}^t\\
\vm{T}_{Hh}^t&\vm{0}\\
\vm{0}&\vm{T}_{Fk}^t\\
\vm{0}&\vm{T}_{Fe}^t
\end{array}\right) \cdot \left( \begin{array}{c}
\vm{B}_H\\
\vm{B}_F
\end{array}\right)
\label{g_sys_detail}
\end{equation}
This gives the system:
\begin{equation}
\left( \begin{array}{cc}
\vm{A}_{DD}&\vm{A}_{Dg}\\
\vm{A}_{gD}&\vm{A}_{gg}
\end{array}\right)\cdot \left( \begin{array}{c}
\vm{U}_D\\
\vm{U}_g
\end{array}\right)=\left( \begin{array}{c}
\vm{B}_D\\
\vm{BN}_g
\end{array}\right)
\label{G_sys}
\end{equation}
where, considering here a symmetric system (there is no restriction for a non-symmetric system):
\begin{equation}
\vm{A}_{gg}=\left( \begin{array}{ccc}
\vm{T}_{Hh}^t\cdot \vm{A}_{HH} \cdot \vm{T}_{Hh}&\vm{T}_{Hh}^t\cdot \vm{A}_{HF}\cdot \vm{T}_{Fk}&\vm{T}_{Hh}^t\cdot \vm{A}_{HF}\cdot \vm{T}_{Fe}\\
\vm{T}_{Fk}^t \cdot \vm{A}_{HF}^t\cdot \vm{T}_{Hh}&\vm{T}_{Fk}^t \cdot \vm{P}_{Fk}&\vm{P}_{Fk}^t \cdot \vm{T}_{Fe} \\
\vm{T}_{Fe}^t \cdot \vm{A}_{HF}^t\cdot \vm{T}_{Hh}&\vm{T}_{Fe}^t \cdot \vm{P}_{Fk}& \vm{T}_{Fe}^t\cdot \vm{A}_{FF} \cdot \vm{T}_{Fe}
\end{array}\right)~\text{with}~\vm{P}_{Fk}=\vm{A}_{FF}\cdot \vm{T}_{Fk}
\label{Agg_1}
\end{equation}
\begin{equation}
\vm{BN}_{g}=\left( \begin{array}{c}
\vm{T}_{Hh}^t\cdot \vm{B}_{H}\\
\vm{T}_{Fk}^t \cdot \vm{B}_{F} \\
\vm{T}_{Fe}^t \cdot \vm{B}_{F}
\end{array}\right)
\label{Bg0}
\end{equation}
\begin{equation}
\vm{A}_{gD}=\left( \begin{array}{c}
\vm{T}_{Hh}^t\cdot \left( \vm{A}_{HH} \cdot \vm{T}_{HD} +\vm{A}_{HF} \cdot \vm{T}_{FD}\right)\\
\vm{T}_{Fk}^t\cdot \vm{A}_{HF}^t \cdot \vm{T}_{HD} +\vm{P}_{Fk}^t \cdot \vm{T}_{FD}\\
\vm{T}_{Fe}^t\cdot \left( \vm{A}_{HF}^t \cdot \vm{T}_{HD} +\vm{A}_{FF} \cdot \vm{T}_{FD}\right)
\end{array}\right)
\label{AgD}
\end{equation}
The matrices $\vm{A}_{DD}$, $\vm{A}_{Dg}$ and the vector $\vm{B}_{D}$ are not shown here because they are not used later.
The $\vm{A}_{HF}$ term represents the coupling of the NSP part with the SP boundary (the nodes surrounding SP, around the yellow part in figure <ref>, named SPF in this work).
For this boundary, we know that the nodes are not enriched, so the sub-block corresponding to these dofs in $\vm{T}_{Fe}$ is zero and $\vm{A}_{HF}\cdot \vm{T}_{Fe}=0$.
Moreover, for this boundary, all the fine-scale elements are identical to the global-scale elements, so the sub-block corresponding to these dofs in $\vm{T}_{Fk}$ is an identity matrix $\vm{I}_{Fk}$ and $\vm{A}_{HF}\cdot \vm{T}_{Fk}=\vm{A}_{HF}\cdot \vm{I}_{Fk}$.
In addition, the $\vm{T}_{Hh}$ operator, because of its structure (given in <ref>), reduces a block with H-set rows or columns to a block with h-set rows or columns. Thus $\vm{A}_{gg}$ can be written as:
\begin{equation}
\vm{A}_{gg}=\left( \begin{array}{ccc}
\vm{A}_{hh}&\vm{A}_{hF}\cdot \vm{I}_{Fk}&\vm{0}\\
\vm{I}_{Fk}^t\cdot \vm{A}_{hF}^t&\vm{T}_{Fk}^t \cdot \vm{P}_{Fk}&\vm{P}_{Fk}^t \cdot \vm{T}_{Fe} \\
\vm{0}&\vm{T}_{Fe}^t \cdot \vm{P}_{Fk}& \vm{T}_{Fe}^t\cdot \vm{A}_{FF} \cdot \vm{T}_{Fe}
\end{array}\right)
\label{Aggs}
\end{equation}
And the $\vm{BN}_{g}$ and $\vm{A}_{gD}$ matrices simplify as follows:
\begin{equation}
\vm{BN}_{g}=\left( \begin{array}{c}
\vm{B}_{h}\\
\vm{T}_{Fk}^t \cdot \vm{B}_{F} \\
\vm{T}_{Fe}^t \cdot \vm{B}_{F}
\end{array}\right)
\label{BN1}
\end{equation}
\begin{equation}
\vm{A}_{gD}=\left( \begin{array}{c}
\vm{A}_{hH}\cdot \vm{T}_{HD} +\vm{A}_{hF} \cdot \vm{T}_{FD}\\
\vm{I}_{Fk}^t\cdot \vm{A}_{HF}^t \cdot \vm{T}_{HD}+\vm{P}_{Fk}^t \cdot \vm{T}_{FD} \\
\vm{T}_{Fe}^t\cdot \vm{A}_{FF} \cdot \vm{T}_{FD}
\end{array}\right)
\label{AgD1}
\end{equation}
By eliminating the Dirichlet boundary conditions from the system (<ref>), the final system to solve is (<ref>) with
\begin{equation}
\vm{B}_{g}=\vm{BN}_{g}-\vm{A}_{gD}\cdot \vm{U}_D
\label{Bg1}
\end{equation}
and with full terms:
\begin{equation}
\vm{B}_{g}=\left( \begin{array}{c}
\vm{B}_{h} -\vm{A}_{hH}\cdot \vm{T}_{HD} \cdot \vm{U}_{D} -\vm{A}_{hF} \cdot \vm{T}_{FD}\cdot \vm{U}_{D}\\
\vm{T}_{Fk}^t \cdot \vm{B}_{F}- \vm{I}_{Fk}^t\cdot \vm{A}_{HF}^t \cdot \vm{T}_{HD}\cdot \vm{U}_{D} -\vm{P}_{Fk}^t \cdot \vm{T}_{FD}\cdot \vm{U}_{D} \\
\vm{T}_{Fe}^t \cdot \left( \vm{B}_{F}- \vm{A}_{FF} \cdot \vm{T}_{FD}\cdot \vm{U}_{D}\right)
\end{array}\right)
\label{Bg}
\end{equation}
In this work, the Dirichlet boundary conditions are zero imposed displacements in all tested cases.
Thus, to simplify the presentation, $\vm{U}_{D}$ is fixed at $\vm{0}$ and <ref> becomes:
\begin{equation}
\vm{B}_{g}=\left( \begin{array}{c}
\vm{B}_{h}\\
\vm{T}_{Fk}^t \cdot \vm{B}_{F} \\
\vm{T}_{Fe}^t \cdot \vm{B}_{F}
\end{array}\right)
\label{Bgs}
\end{equation}
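The chain above, from the Galerkin product $\vm{T}_{MG}^t\cdot\vm{A}_{MM}\cdot\vm{T}_{MG}$ to the Dirichlet-reduced right-hand side, can be checked numerically on a small random SPD system; this is a sketch, not the paper's matrices.

```python
import numpy as np

# Random SPD fine-scale matrix and a made-up interpolation operator whose
# columns are ordered D-set first, g-set second.
rng = np.random.default_rng(0)
nM, nD, ng = 6, 2, 3
X = rng.standard_normal((nM, nM))
A_MM = X @ X.T + nM * np.eye(nM)            # symmetric positive definite
T_MG = rng.standard_normal((nM, nD + ng))
B_M = rng.standard_normal(nM)

A_GG = T_MG.T @ A_MM @ T_MG                 # Galerkin triple product
BN_G = T_MG.T @ B_M
A_gg = A_GG[nD:, nD:]                       # free-free block
A_gD = A_GG[nD:, :nD]                       # free-Dirichlet coupling
U_D = np.zeros(nD)                          # zero imposed displacements
B_g = BN_G[nD:] - A_gD @ U_D                # reduces to BN_g when U_D = 0

U_g = np.linalg.solve(A_gg, B_g)
assert np.allclose(A_gg, A_gg.T)            # symmetry is preserved
```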
§ PROCEDURES
§.§ INIT
The INIT procedure is given by algorithm <ref>.
create empty $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$
for $e_{macro}\in SP$ do
    Initialize $\vm{u}_{R}^{e_{macro}}$ fine-scale dofs covered by $e_{macro}$
    Eliminate linear relation if any from $\vm{u}_{R}^{e_{macro}}$ to create $\vm{u}_{F}^{e_{macro}}$
    create empty $\vm{A}_{FF}^{e_{macro}}$ matrix and $\vm{B}_{F}^{e_{macro}}$ vector
    for $e_{micro}\in e_{macro}$ do
        compute $\vm{A}_{FF}^{e_{micro}}$ matrix and $\vm{B}_{F}^{e_{micro}}$ vector
        $\left( \vm{A}_{FF}^{e_{macro}},\vm{B}_{F}^{e_{macro}}\right) \Leftarrow \left( \vm{A}_{FF}^{e_{micro}},\vm{B}_{F}^{e_{micro}}\right) $
    Create $\vm{T}_{Fk}^{e_{macro}}$ operator based on classical form function of $e_{macro}$ at $\vm{u}_{F}^{e_{macro}}$ dof location
    $\vm{P}_{Fk}^{e_{macro}}\gets \vm{A}_{FF}^{e_{macro}}\cdot \vm{T}_{Fk}^{e_{macro}}$
    $\vm{A}_{kk}^{e_{macro}}\gets \vm{T}_{Fk}^{t~e_{macro}}\cdot \vm{P}_{Fk}^{e_{macro}}$
    $\vm{B}_{k}^{e_{macro}}\gets \vm{T}_{Fk}^{t~e_{macro}}\cdot \vm{B}_{F}^{e_{macro}}$
    $\left(\vm{A}_{gg}^{ini},\vm{B}_{g}^{ini}\right) \Leftarrow \left( \vm{A}_{kk}^{e_{macro}},\vm{B}_{k}^{e_{macro}}\right)$
    Create a zero value $\vm{T}_{Fe}^{e_{macro}}$ operator, component product of shifted enrichment function with classical form function of $e_{macro}$ at $\vm{u}_{F}^{e_{macro}}$ dof location
    $\vm{T}_{Fm}^{e_{macro}}\gets \left\{\vm{T}_{Fk}^{e_{macro}},\vm{T}_{Fe}^{e_{macro}}\right\}$
for $e_{macro}\in NSP$ do
    compute $\vm{A}_{hh}^{e_{macro}}$, $\vm{A}_{hF}^{e_{macro}}$ matrices and $\vm{B}_{h}^{e_{macro}}$ vector
    $\left( \vm{A}_{gg}^{ini},\vm{B}_{g}^{tmp}\right) \Leftarrow\left( \left( \vm{A}_{hh}^{e_{macro}},\vm{A}_{hF}^{e_{macro}} \cdot \vm{I}_{Fk}^{e_{macro}}\right) ,\vm{B}_{h}^{e_{macro}} \right) $
$\left\|\vm{B}_r\right\|\gets$ COMPUTE_B_NORM($\vm{B}_{g}^{tmp}$, $\vm{B}_F$)
$\vm{B}_{g}^{ini} \gets \vm{B}_{g}^{ini}+\vm{B}_{g}^{tmp}$
for $p\in patches$ do
    Create $\vm{u}_{Q}^{p}$ fine-scale dofs covered by patch $p$
    Eliminate fixed values from $\vm{u}_{Q}^{p}$ to create $\vm{u}_{q}^{p}$
    Create empty $\vm{A}_{qq}^{p}$ matrix and $\vm{BI}_{q}^{p}$ vector
    for $e_{macro}\in p$ do
        $\left( \vm{A}_{qq}^{p}~\text{and}~\vm{A}_{qd}^{p}, \vm{BI}_{q}^{p} \right) \Leftarrow \left( \vm{A}_{FF}^{e_{macro}},\vm{B}_{F}^{e_{macro}}\right) $
    $\vm{D}_{qd}^{p}\gets -\vm{A}_{qd}^{p}$
Algorithm: initialization part. The procedure COMPUTE_B_NORM is given by algorithm <ref>.
In this procedure, the constant sub-blocks of <ref> and <ref> are stored in memory as $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$ which will be reused in the loop to compute the final matrices $\vm{A}_{gg}$ and $\vm{B}_g$.
In INIT, the first loop on SP macro-elements computes and attaches to a macro-element $e_{macro}$ the matrices $\vm{A}_{FF}^{e_{macro}}$ and $\vm{B}_{F}^{e_{macro}}$, computed by integration and assembly of (<ref>) over $\omega^{e_{macro}}$ with elimination of (<ref>) but keeping Dirichlet values.
In this loop, the vectors $\vm{u}_F^{e_{macro}}$, dedicated to the storage of the $e_{macro}$ part of the $\vm{u}_F$ vector, are also allocated and attached to $e_{macro}$.
Still in this loop, $\vm{T}_{Fk}^{e_{macro}}$ and $\vm{P}_{Fk}^{e_{macro}}$ are computed and attached to the macro-elements because they will also be reused in the loop (in fact it is the $\vm{T}_{Fm}^{e_{macro}}$ operator, with a null $\vm{T}_{Fe}^{e_{macro}}$ at this stage, that is saved in memory).
Finally, the $\vm{T}_{Fk}^{t~e_{macro}} \cdot \vm{P}_{Fk}^{e_{macro}}$ and $\vm{T}_{Fk}^{t~e_{macro}}\cdot \vm{B}_{F}^{e_{macro}}$ terms are calculated and assembled into $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$.
The second loop on the NSP elements (if any) computes the terms $\vm{A}_{hh}$, $\vm{A}_{hF}$ and $\vm{B}_h$ and assembles them into $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{tmp}$.
This last temporary vector contains the boundary conditions for the NSP elements and is used as an argument to the COMPUTE_B_NORM procedure (algorithm <ref> below), called by INIT to obtain $\left \|\vm{B}_{r} \right \|$, the constant denominator of $resi$.
Once this is done, $\vm{B}_{g}^{tmp}$ is added to $\vm{B}_{g}^{ini}$ to save it.
The INIT procedure, with a loop over the patches, is also responsible for defining and partially creating the fine-scale linear systems (<ref>).
The construction of one of these systems, for $p\in I_e^g$, is simply obtained by assembling $\vm{A}_{FF}^{e_{macro}}$ and $\vm{B}_{F}^{e_{macro}}$ with $e_{macro}\in J^p$ to form the following system:
\begin{equation}
\left( \begin{array}{ccc}
\vm{A}_{DR\,DR}^p&\vm{A}_{DRd}^p&\vm{A}_{DRq}^p\\
\vm{A}_{dDR}^p&\vm{A}_{dd}^p&\vm{A}_{dq}^p\\
\vm{A}_{qDR}^p&\vm{A}_{qd}^p&\vm{A}_{qq}^p
\end{array}\right)\cdot \left( \begin{array}{c}
\vm{u}_{DR}^p\\
\vm{u}_d^p\\
\vm{u}_q^p
\end{array}\right)=\left( \begin{array}{c}
\vm{BN}_{DR}^p\\
\vm{BN}_d^p\\
\vm{BN}_q^p
\end{array}\right)
\label{QMF_sys}
\end{equation}
Eliminating from (<ref>) the Dirichlet boundary condition on $\partial\omega_p$ we obtain:
\begin{equation}
\vm{A}_{qq}^p
\cdot \vm{u}_q^p =
\vm{BN}_q^p-\vm{A}_{qDR}^p\cdot\vm{u}_{DR}^p-\vm{A}_{qd}^p\cdot\vm{u}_d^p
\label{QMF_sys_red}
\end{equation}
The right-hand side of (<ref>) (which corresponds to the vector $\vm{B}_q^p$) is divided into two contributions: $\vm{BI}_q^p=\vm{BN}_q^p-\vm{A}_{qDR}^p\cdot\vm{u}_{DR}^p$, which is constant during the loop, and $\vm{D}_{qd}^p\cdot\vm{u}_d^p$ (with $\vm{D}_{qd}^p=-\vm{A}_{qd}^p$), which varies during the loop as does $\vm{u}_d^p$ (a subset of $\vm{u}_F$).
Thus, the INIT procedure creates for all patches $\vm{A}_{qq}^p$, $\vm{BI}_q^p$ and $\vm{D}_{qd}^p$.
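The constant/varying split of the patch right-hand side can be written out directly; the small matrices below are made up, only the split itself is illustrated.

```python
import numpy as np

# Made-up patch blocks standing in for A_qd^p, A_qDR^p and BN_q^p.
rng = np.random.default_rng(1)
nq, nd, nDR = 4, 2, 1
A_qd = rng.standard_normal((nq, nd))
A_qDR = rng.standard_normal((nq, nDR))
BN_q = rng.standard_normal(nq)
u_DR = np.zeros(nDR)

BI_q = BN_q - A_qDR @ u_DR     # constant part: computed once in INIT
D_qd = -A_qd                   # stored instead of A_qd

u_d = rng.standard_normal(nd)  # boundary values change every iteration
B_q = BI_q + D_qd @ u_d        # cheap per-iteration rebuild
assert np.allclose(B_q, BN_q - A_qDR @ u_DR - A_qd @ u_d)
```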
§.§ UPDATE_MICRO_DOFS
The UPDATE_MICRO_DOFS procedure (algorithm <ref>) updates the values of $\vm{u}_F$ from a $\vm{U}_G$ vector given as argument according to the equation (<ref>).
This is done by a simple loop over the SP elements where for $e_{macro}\in SP$ the following is computed :
$\vm{u}_F^{e_{macro}}=\vm{T}_{Fm}^{e_{macro}}\cdot \vm{U}_m^{e_{macro}}$ where $\vm{U}_m^{e_{macro}}$ is the restriction of $\vm{U}_G$ to $e_{macro}$ dofs.
The $\vm{T}_{Fm}^{e_{macro}}$ operator is assumed to be up to date when entering this procedure (i.e. its $\vm{T}_{Fe}^{e_{macro}}$ part is either null or has been updated by the UPDATE_MACRO_PRB procedure).
§.§ MICRO-SCALE_RESOLUTION
The MICRO-SCALE_RESOLUTION procedure (algorithm <ref>) loops over the patches to compute, for each patch $p$, the term $\vm{D}_{qd}^p\cdot\vm{u}_d^p$ (given in (<ref>)) with $\vm{u}_d^p$ being the current $\vm{u}_F$ restricted to $\partial\omega_p\setminus\partial\Omega^D$.
This term added to $\vm{BI}_q^p$ gives the right-hand side $\vm{B}_q^p$ of the system (<ref>).
Then, as mentioned earlier, this system of equations is solved with a direct solver ($\ddagger$ in algorithm <ref>) that first performs a factorization of $\vm{A}_{qq}^p$ before doing a backward/forward substitution.
Once the factorization is done, since $\vm{A}_{qq}^p$ is constant during the loop, only the factors are kept in memory and $\vm{A}_{qq}^p$ is freed, reducing the memory footprint.
Subsequent resolutions use the factors directly, which saves factorization time.
This procedure, when completed, provides all $\vm{u}_q^p$ solutions.
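The factor-once, solve-many pattern used for the patch systems corresponds, in SciPy terms, to computing a sparse LU factorization once and reusing it for every new right-hand side; this is a sketch on a toy tridiagonal matrix, not the paper's direct solver.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy tridiagonal stand-in for a patch matrix A_qq^p.
n = 20
A_qq = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                [-1, 0, 1], format="csc")

lu = splu(A_qq)        # factor once; A_qq itself could now be freed
rng = np.random.default_rng(2)
for _ in range(3):     # later loop iterations reuse only the factors
    b = rng.standard_normal(n)
    u = lu.solve(b)    # backward/forward substitution only
    assert np.allclose(A_qq @ u, b)
```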
§.§ UPDATE_MACRO_PRB
The UPDATE_MACRO_PRB procedure (algorithm <ref> ) is dedicated to the update of the problem at the global-scale (local to global transfer).
It starts by copying $\vm{A}_{gg}^{ini}$ and $\vm{B}_{g}^{ini}$ into $\vm{A}_{gg}$ and $\vm{B}_{g}$ respectively.
Then, a loop on the SP elements finishes the computation of $\vm{A}_{gg}$ and $\vm{B}_{g}$ as follows.
The $\vm{T}_{Fe}^{e_{macro}}$, based on all $\vm{u}_q^p$ covering $e_{macro}$, is constructed by following the enrichment function defined in (<ref>) and the kinematic equation (<ref>).
This operator is used to update the $\vm{T}_{Fm}^{e_{macro}}$ operator (for the algorithm <ref>) and compute the terms $\vm{T}_{Fe}^{t~e_{macro}} \cdot \vm{B}_{F}^{e_{macro}}$, $\vm{T}_{Fe}^{t~e_{macro}} \cdot \vm{P}_{Fk}^{e_{macro}}$ and $\vm{T}_{Fe}^{t~e_{macro}}\cdot \vm{A}_{FF}^{e_{macro}}\cdot \vm{T}_{Fe}^{e_{macro}}$ which are assembled into $\vm{A}_{gg}$ and $\vm{B}_{g}$.
Algorithm: update macro problem. Here $\vm{1}_F^{e_{macro}}$ is a unit vector corresponding to the F-set dofs (restricted to the dofs covered by $e_{macro}$). The $e^{p}$ corresponds to the index of the enriched dof for the patch $p$ and $k^{p}$ to the index of the classical dof related to $e^{p}$.
UPDATE_MACRO_PRB($\vm{A}_{gg}^{ini}$, $\vm{B}_{g}^{ini}$, $\vm{u}_q^{patches}$)
$\left( \vm{A}_{gg},\vm{B}_{g} \right) \gets \left( \vm{A}_{gg}^{ini},\vm{B}_{g}^{ini}\right) $
for $e_{macro}\in SP$ do
    for $p\in$ patches including $e_{macro}$ do
        $\vm{X}_F^{p,e_{macro}}\gets$ restriction of $\vm{u}_{q}^{p}$ to dofs covered by $e_{macro}$
        $shift^p \gets \vm{X}_{F}^{p,e_{macro}}[e^{p}]$
        $\vm{T}_{Fe^{p}}^{e_{macro}} \gets \vm{T}_{Fk^{p}}^{e_{macro}}*\left( \vm{X}_F^{p,e_{macro}} -\vm{1}_F^{e_{macro}}\times shift^{p} \right)$
    $\vm{T}_{Fm}^{e_{macro}}\gets \left\{\vm{T}_{Fk}^{e_{macro}},\vm{T}_{Fe}^{e_{macro}}\right\}$
    $\vm{A}_{ee}^{e_{macro}}\gets \vm{T}_{Fe}^{t~e_{macro}}\cdot \vm{A}_{FF}^{e_{macro}}\cdot \vm{T}_{Fe}^{e_{macro}}$
    $\vm{A}_{ek}^{e_{macro}}\gets \vm{T}_{Fe}^{t~e_{macro}}\cdot \vm{P}_{Fk}^{e_{macro}}$
    $ \vm{A}_{gg} \Leftarrow\left( \vm{A}_{ee}^{e_{macro}},\vm{A}_{ek}^{e_{macro}} \right) $
    $\vm{B}_{g} \Leftarrow \vm{T}_{Fe}^{t~e_{macro}}\cdot \vm{B}_{F}^{e_{macro}} $
Algorithm: update micro-scale field.
for $e_{macro}\in SP$ do
    $U_m^{e_{macro}}\gets \text{restriction~of~}U_{g}~\text{to}~e_{macro}~\text{dofs}$
    $\vm{u}_F^{e_{macro}} \gets \vm{T}_{Fm}^{e_{macro}}\cdot U_m^{e_{macro}}$
Algorithm: fine-scale problem computation.
MICRO-SCALE_RESOLUTION($\vm{u}_F$, $\vm{A}_{qq}^{patches}$, $\vm{BI}_q^{patches}$, $\vm{D}_{qd}^{patches}$)
for $p\in patches$ do
    for $e_{macro}\in p$ do
        $\vm{u}_{d}^{p} \Leftarrow \vm{u}_{F}^{e_{macro}}$
    $\vm{B}_{q}^{p}\gets \vm{BI}_{q}^{p}+\vm{D}_{qd}^{p}\cdot\vm{u}_{d}^{p}$
    $\vm{u}_{q}^{p}\gets \vm{A}_{qq}^{-1~p}\cdot \vm{B}_{q}^{p}$ $\ddagger$
§.§ COMPUTE_RESIDUAL
The COMPUTE_RESIDUAL procedure (algorithm <ref>) computes only the term $\left \| \vm{A}_{fr}\cdot \vm{u}_{r}-\vm{B}_{f} \right \|$ of (<ref>) considering that it is mainly a scalar product of the vector $\vm{A}_{fr}\cdot \vm{u}_{r}-\vm{B}_{f}$ by itself.
In this procedure, since the information is stored on the F-set, the computations are performed on this set.
But since the organization of the data is by macro-element, the residual vector is not assembled into a complete F-set vector.
During the first loop on SP, its contribution per macro-element ($\vm{A}_{FF}^{e_{macro}}\cdot \vm{u}_{F}^{e_{macro}}-\vm{B}_{F}^{e_{macro}}$) is stored in an accumulation buffer called $\vm{VR}_{F}^{e_{macro}}$.
It is an accumulation buffer because the residual contributions of $e_{macro}$ boundaries from other adjacent macro-elements (remote or local to the process) are added to it.
This accumulation makes it possible to compute part of the dot product of the residual F-set vector as a local dot product (appearing in the second loop over the SP elements).
The sum of these local dot products is the square of the numerator of $resi$.
But since this local computation is redundant at the boundaries of the macro-elements, a diagonal scaling matrix, applied to the local dot product, is computed once for the entire loop, so that the boundary terms are counted only once.
This scaling matrix is also used, with a zero scaling factor, to remove the Dirichlet DR-set contributions in the local dot product.
This design maintains a semi-independent computation of these accumulation buffers and local dot product which can be treated in parallel (MPI and even with threads as proposed in conclusion).
Note that, with the enrichment strategy adopted in section <ref>, since the h-set part of the residual vector has been eliminated, the rows associated with the SPF nodes (see <ref>) can also be considered null.
They are eliminated by using a zero scale factor in the local scalar product for the SPF node dofs.
The $resi$ value is obtained by dividing the computed numerator by the $\left \|\vm{B}_{r} \right \|$ term given as an argument to the procedure.
§.§ COMPUTE_B_NORM
The COMPUTE_B_NORM procedure (algorithm <ref> ) provides the term $\left \|\vm{B}_{r} \right \|$ from the vectors $\vm{B}_g^{tmp}$ and $\vm{B}_F$ containing the Neumann boundary condition and volume loading related to the NSP and SP elements respectively.
It uses an algorithm very similar to that of <ref>, in that the scalar product of the vector $\vm{B}_r$ with itself is transformed into a sum of local dot products.
The scalar product of $\vm{B}_g^{tmp}$ with itself initializes the sum of the local scalar products of $\vm{B}_r$.
The same diagonal scaling matrix as in <ref> is applied to the local dot product to correct for the redundant contributions at the macro-element boundaries.
Algorithm: compute residual value.
Return $ \frac{\left \| \vm{A}_{rr}\cdot \vm{u}_{r}-\vm{B}_{r} \right \|}{\left \|\vm{B}_{r} \right \|} $
COMPUTE_RESIDUAL$\vm{u}_F$, $\vm{A}_{FF}$, $\vm{B}_F$, $\left\|\vm{B}_r\right\|$
$resi \gets 0$
$e_{macro}\in SP$
$\vm{V}_{F}^{e_{macro}}\gets \vm{A}_{FF}^{e_{macro}}\cdot \vm{u}_{F}^{e_{macro}}-\vm{B}_{F}^{e_{macro}}$
$\vm{VR}_{F}^{e_{macro}}\gets \vm{VR}_{F}^{e_{macro}}+\vm{V}_{F}^{e_{macro}}$
$e_{adj}\in e_{macro}$ Adjacency
$\vm{VR}_{F}^{e_{adj}}\gets \vm{VR}_{F}^{e_{adj}}+\vm{V}_{F}^{e_{macro}~\cap ~e_{adj}}$
$e_{macro}\in SP$
$\vm{V}_{F}^{e_{macro}}\gets \text{scaled}~\vm{VR}_{F}^{e_{macro}}$
$resi\gets resi+ \vm{V}_{F}^{t~e_{macro}}\cdot \vm{VR}_{F}^{e_{macro}}$
reduce $resi$ on all processes
$resi\gets \sqrt{resi}/\left\|\vm{B}_r\right\|$
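To make the scaled local dot product concrete, the following is a small self-contained sketch (with a hypothetical two-macro-element layout, not taken from the paper): per-macro-element accumulation buffers plus a $1/2$ scaling on shared boundary dofs reproduce the global residual dot product.

```python
# Sketch: recover a global residual dot product from per-macro-element
# accumulation buffers with a diagonal scaling that counts shared boundary
# dofs only once, mimicking the structure of COMPUTE_RESIDUAL.
import math

# Two macro-elements with 3 local dofs each; local dof 2 of element 0 and
# local dof 0 of element 1 are the same global (shared boundary) dof.
V = {0: [1.0, 2.0, 3.0], 1: [4.0, 5.0, 6.0]}  # local residual contributions

# Accumulation buffers: add the neighbour's contribution on the shared dof.
VR = {e: list(v) for e, v in V.items()}
VR[0][2] += V[1][0]
VR[1][0] += V[0][2]

# Diagonal scaling: shared boundary dofs weighted by 1/2 so they count once.
scale = {0: [1.0, 1.0, 0.5], 1: [0.5, 1.0, 1.0]}

local = sum(s * vr * vr
            for e in V for s, vr in zip(scale[e], VR[e]))

# Reference: assemble the full residual on distinct global dofs.
full = [V[0][0], V[0][1], V[0][2] + V[1][0], V[1][1], V[1][2]]
ref = sum(x * x for x in full)
assert math.isclose(local, ref)
print(math.sqrt(local))
```

In the distributed case, `local` would be reduced over all processes before taking the square root, exactly as in the algorithm above.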
Algorithm: compute $\left\|\vm{B}_r\right\|$.
COMPUTE_B_NORM$\vm{B}_{g}^{tmp}$, $\vm{B}_F$
$\left\|\vm{B}_r\right\| \gets \vm{B}_{g}^{t~tmp}\cdot \vm{B}_{g}^{tmp}$
$e_{macro}\in SP$
$\vm{VR}_{F}^{e_{macro}}\gets \vm{VR}_{F}^{e_{macro}}+\vm{B}_{F}^{e_{macro}}$
$e_{adj}\in e_{macro}$ Adjacency
$\vm{VR}_{F}^{e_{adj}}\gets \vm{VR}_{F}^{e_{adj}}+\vm{B}_{F}^{e_{macro}~\cap ~e_{adj}}$
$e_{macro}\in SP$
$\vm{V}_{F}^{e_{macro}}\gets \text{scaled}~\vm{VR}_{F}^{e_{macro}}$
$\left\|\vm{B}_r\right\| \gets \left\|\vm{B}_r\right\| + \vm{V}_{F}^{t~e_{macro}}\cdot \vm{VR}_{F}^{e_{macro}}$
reduce $\left\|\vm{B}_r\right\| $ on all processes
$\left\|\vm{B}_r\right\| \gets \sqrt{\left\|\vm{B}_r\right\| }$
§ PATCH COST ESTIMATE
In the example of the cube, we have 4 types of patches depending on whether the enriched dof is on the corners, edges, faces or volumes of the cube.
Following the octree refinement strategy, for a coarse level $L_c$ and a final level $L$ ($L_c\leqslant L$), the numbers of these patches and their number of dofs are given by the following:
\begin{gather}
nbpatch_{corner}\left( L_c \right)=8\\
nbpatch_{edge}\left( L_c\right) =12 \left( {{2}^{L_c}}-1\right)\\
nbpatch_{face}\left( L_c\right) =6 {{\left( {{2}^{L_c}}-1\right) }^{2}}\\
nbpatch_{vol}\left( L_c\right) ={{\left( {{2}^{L_c}}-1\right) }^{3}}
\end{gather}
\begin{gather}
nbdofs_{corner}\left( L_c,L\right) = 3 {{\left( {{2}^{L-L_c}}+1\right) }^{3}} \\
nbdofs_{edge}\left( L_c,L\right) =3 \left( 2 {{\left( {{2}^{L-L_c}}+1\right) }^{3}}-{{\left( {{2}^{L-L_c}}+1\right) }^{2}}\right)\\
nbdofs_{face}\left( L_c,L\right) =3 \left( 4 {{\left( {{2}^{L-L_c}}+1\right) }^{3}}-4 {{\left( {{2}^{L-L_c}}+1\right) }^{2}}+\left( {{2}^{L-L_c}}+1\right) \right)\\
nbdofs_{vol}\left( L_c,L\right) =3 \left( 8 {{\left( {{2}^{L-L_c}}+1\right) }^{3}}-12 {{\left( {{2}^{L-L_c}}+1\right) }^{2}}+6 \left( {{2}^{L-L_c}}+1\right) -1\right)
\end{gather}
Solving a patch with $N_d$ dofs and a sparse ratio $SR_p$, for $N_l$ iterations using (<ref>), can be expressed as
\begin{equation}
\label{TAP_anexe_c1}
cost_{1\_patch}(N_d,N_l,SR_p)=count_{fact}(N_d,SR_p)+N_l\times count_{bf}(N_d,SR_p)
\end{equation}
since only one factorization is performed and a backward/forward resolution is done at each iteration.
The final cost for all patches using (<ref>, <ref>, <ref>) will be the following:
\begin{equation}
\label{TAP_anexe_cp}
\begin{split}
cost_{patch}(L_c,L,N_l,SR_p)=nbpatch_{corner}\left( L_c \right)\times cost_{1\_patch}(nbdofs_{corner}\left( L_c,L\right),N_l,SR_p)+\\ nbpatch_{edge}\left( L_c \right)\times cost_{1\_patch}(nbdofs_{edge}\left( L_c,L\right),N_l,SR_p)+\\ nbpatch_{face}\left( L_c \right)\times cost_{1\_patch}(nbdofs_{face}\left( L_c,L\right),N_l,SR_p)+\\ nbpatch_{vol}\left( L_c \right)\times cost_{1\_patch}(nbdofs_{vol}\left( L_c,L\right),N_l,SR_p)
\end{split}
\end{equation}
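The counting formulas above can be sketched in Python; note that `count_fact` and `count_bf` below are hypothetical placeholder cost models, since the actual operation counts come from the referenced formulas.

```python
# Sketch of the patch cost estimate for the cube example with octree
# refinement from coarse level L_c to final level L.
def nbpatch(L_c):
    m = 2 ** L_c - 1
    return {"corner": 8, "edge": 12 * m, "face": 6 * m ** 2, "vol": m ** 3}

def nbdofs(L_c, L):
    n = 2 ** (L - L_c) + 1  # nodes per edge of one refined coarse cell
    return {
        "corner": 3 * n ** 3,
        "edge": 3 * (2 * n ** 3 - n ** 2),
        "face": 3 * (4 * n ** 3 - 4 * n ** 2 + n),
        "vol": 3 * (8 * n ** 3 - 12 * n ** 2 + 6 * n - 1),
    }

def count_fact(n_d, sr_p):  # placeholder cost model, not the paper's
    return sr_p * n_d ** 2

def count_bf(n_d, sr_p):    # placeholder cost model, not the paper's
    return sr_p * n_d

def cost_1_patch(n_d, n_l, sr_p):
    # one factorization, then one backward/forward solve per iteration
    return count_fact(n_d, sr_p) + n_l * count_bf(n_d, sr_p)

def cost_patch(L_c, L, n_l, sr_p):
    return sum(nbpatch(L_c)[k] * cost_1_patch(nbdofs(L_c, L)[k], n_l, sr_p)
               for k in ("corner", "edge", "face", "vol"))

# Sanity check: a volume patch spans 2x2x2 coarse cells, so its dof count is
# that of a (2n-1)^3 node grid with 3 displacement components per node.
n = 2 ** (2 - 1) + 1
assert nbdofs(1, 2)["vol"] == 3 * (2 * n - 1) ** 3
print(nbpatch(2), nbdofs(1, 2), cost_patch(1, 2, 10, 0.5))
```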
§ STATIC SCHEDULING
The scheduling algorithm, formalized in algorithms <ref>, <ref> and <ref>, consists in arbitrarily selecting a vertex for $\mathcal{S}$, in ascending order of process identifier.
Create $\mathcal{D}$ and $\mathcal{L}$ sets of distributed and local patches sorted in decreasing weight order
Create $\mathcal{D}_o$ and $\mathcal{L}_o$ empty sets that will store, in sequence ordering, distributed and local patches
Create color vector $\vm{col}$ of size $nbpid-pid$
$nbs\gets card\left( \mathcal{D}\right)$
$nbs\gets \max_{i\in \mathcal{P}} {nbs}_i$
$sid \gets 1 $
$nbe \gets nbs $
$nbs\gets nbe$
$sid\leqslant nbs$
$mxwg \gets$COMPUTE_WG$\mathcal{D}$,$pid$
$\left( mxwl,\mathcal{D}, \mathcal{L},\mathcal{D}_o,\mathcal{L}_o, \vm{col}\right) \gets$PICK_FIRST$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$, $pid$, $mxwg$
Receive from $pid-1$ $\vm{col}$ and $mxwl$
$\left( mxwl,\mathcal{D}, \mathcal{L},\mathcal{D}_o,\mathcal{L}_o, \vm{col}\right) \gets$PICK$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$, $pid$, $mxwg$,$mxwl$
$patch_s\gets$ pick in $\mathcal{D}$ the patch with identifier$=\vm{col}[0]$
$\mathcal{D}_o\gets \mathcal{D}_o \cup patch_s$
$\mathcal{D}\gets \mathcal{D} \setminus patch_s$
Send to $pid+1$ last $nbpid-1-pid$ components of $\vm{col}$ and $mxwl$
Store $\vm{col}$ as the new last column of $M$
$sid\gets sid+1$
$nbe\gets nbe+card\left( \mathcal{D}\right)$
$nbe\gets \max_{i\in \mathcal{P}} {nbe}_i$
$nbs < nbe$
$\mathcal{L}_o\gets \mathcal{L}_o \cup \mathcal{L}$
Create communicators based on colored sequence of $M$
Sequencing algorithm: Let $\mathcal{P}$ be the set of process identifiers (starting at 0) and $nbpid= card\left( \mathcal{P}\right) $.
This algorithm is executed on each process $pid\in \mathcal{P}$ containing patches that are either local (entirely defined in $pid$) or distributed (on $pid$ and other processes).
In this algorithm, $nbs$ is the maximum number of sequences over all processes in $\mathcal{P}$, $nbe$ is the maximum number of enlarged sequences over all processes in $\mathcal{P}$, $mxwg$ is the maximum weight of all distributed patches of all processes with an identifier greater than $pid$ (given by algorithm <ref>), $mxwl$ is the maximum weight of all selected patches of all processes with an identifier less than $pid$, $M$ is a matrix storing per column each $\mathcal{S}$ as a set of colors for all sequences, and PICK_FIRST, PICK and COMPUTE_WG are procedures described in algorithms <ref>, <ref> and <ref>, respectively.
See <ref> for other conventions.
Algorithm (pick first): On $pid=0$ the choice of the candidate is independent of the choices made in the other processes. First try in $\mathcal{D}$, then in $\mathcal{L}$. If both are empty, this process does not participate in the current sequence. See algorithm <ref> for the notation.
PICK_FIRST$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$, $pid$, $mxwg$
$\vm{col}\gets -\vm{1}$
$patch_s\gets$ pick out first patch in $\mathcal{D}$
$\mathcal{A}_s \gets \{n,n\in \mathcal{P} , n\geqslant pid, n~\text{participate to}~ patch_s\}$
$\forall l \in \mathcal{A}_s ~\vm{col}[l-pid]\gets {patch_s}$ Identifier
$\mathcal{D}_o\gets \mathcal{D}_o \cup patch_s$
$\mathcal{D}\gets \mathcal{D} \setminus patch_s$
$patch_s\gets$ the first patch $s$ in $\mathcal{L}$ with $weight_{s}<mxwg$
$patch_s=\varnothing$ $patch_s\gets$ pick out last patch in $\mathcal{L}$
$\vm{col}[0]\gets {patch_s}$ Identifier
$\mathcal{L}_o\gets \mathcal{L}_o \cup patch_s$
$\mathcal{L}\gets \mathcal{L} \setminus patch_s$
$patch_s\gets \varnothing$
$\vm{col}[0]\gets -1$
$mxwl\gets weight_{patch_s}$
$mxwl$,$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$
Algorithm (pick): First, try in $\mathcal{D}$. The choice of the candidate depends on the choices made by all the processes having an identifier $< pid$. If no candidate is available, try in $\mathcal{L}$, respecting a certain condition on the weights. If no candidate is found, this process does not participate in the current sequence. See algorithm <ref> for the notation.
PICK$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$, $pid$, $mxwg$,$mxwl$
$patch_s\gets \varnothing$
$patch_s\gets$ the first patch $s$, if any, in $\mathcal{D}$ not already selected in $\vm{col}$
$patch_s \neq\varnothing$
$\mathcal{A}_s \gets \{n,n\in \mathcal{P} , n\geqslant pid, n~\text{participate to}~ patch_s\}$
$\forall l \in \mathcal{A}_s ~\vm{col}[l-pid]\gets {patch_s}$ Identifier
$\mathcal{D}_o\gets \mathcal{D}_o \cup patch_s$
$\mathcal{D}\gets \mathcal{D} \setminus patch_s$
$patch_s=\varnothing~\text{and}~ \mathcal{L}\neq\varnothing$
$crit\gets \max(mxwg,mxwl)$
$patch_s\gets$ the first patch s in $\mathcal{L}$ with $weight_{s}<crit$
$patch_s=\varnothing$ $patch_s\gets$ pick out last patch in $\mathcal{L}$ $\vm{col}[0]\gets {patch_s}$ identifier
$\mathcal{L}_o\gets \mathcal{L}_o \cup patch_s$
$\mathcal{L}\gets \mathcal{L} \setminus patch_s$
$patch_s\gets \varnothing$
$\vm{col}[0]\gets -1$
$mxwl\gets \max (mxwl,weight_{patch_s})$
$mxwl$,$\mathcal{D}$, $\mathcal{L}$,$\mathcal{D}_o$,$\mathcal{L}_o$, $\vm{col}$
A color is imposed on the distributed vertex with the highest weight in the lowest process (see figures <ref> to <ref> for illustration).
In the algorithms <ref> and <ref> this color is the identifier of the selected patch.
All its adjacent vertices are frozen to avoid choosing one of them for a new color (this corresponds to filling $\vm{col}$ with a color in algorithms <ref> and <ref>).
Then, if a vertex remains uncolored and unfrozen, it can be assigned a new color by choosing again the one with the highest weight in the lowest process.
It will freeze its own set of vertices.
The selection continues until all vertices of the graph have a color or are frozen.
The set of colored vertices in $\mathcal{S}$ can be used directly to create a colored MPI communicator (with MPI_Comm_split).
During selection, the lowest process may not have a distributed vertex available.
In this case, a local patch with an appropriate weight is chosen if it is present, otherwise this process will not participate in this sequence.
At each sequence construction, two weights, $mxwl$ and $mxwg$ (in algorithm <ref>), are exchanged between the processes, from the lowest identifier to the highest and in the reverse direction, respectively.
The $mxwg$ weight is the maximum weight of all distributed patches stored in all processes whose process identifier is greater than the current one.
It is obtained with the algorithm <ref>.
$mxwg\gets 0$
$mxwg\gets mxwg_{pid+1}$
$mxwg\gets \max(mxwg,\max_{k\in \mathcal{D}} (weight_k))$
$pid>0$$mxwg_{pid-1}\gets mxwg$
Compute the maximum weight of all distributed patches of all processes whose identifier is greater than $pid$. See algorithm <ref> for the notation.
It is used directly in algorithm <ref> when a local patch has to be selected (i.e. when all the distributed patches of process 0 have been consumed): the selected local patch must have a weight lower than $mxwg$, so as not to introduce an imbalance with respect to the weights of the remaining distributed patches on the other processes that may be selected in later sequences.
In algorithm <ref>, $mxwg$ is used in conjunction with $mxwl$, which corresponds to the maximum weight of all selected patches in all processes whose identifier is less than the current one.
In this algorithm, again when a local patch is to be selected, one whose weight is lower than $\max(mxwg,mxwl)$ is chosen.
Thus, the selected local patch will have a cost lower than the worst cost among the patches already selected and those that may still be selected.
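The sequence-building idea can be sketched in a simplified serial form (the real algorithm runs distributed and exchanges the $mxwg$/$mxwl$ weights; here only process compatibility drives the choice, and the patch layout and weights are purely illustrative):

```python
# Simplified sequential sketch of the sequencing idea: repeatedly build
# "sequences" of patches that can be solved concurrently, always taking the
# heaviest compatible patch first so that each sequence groups patches of
# comparable weight.
patches = {                 # id: (set of participating processes, weight)
    "A": ({0, 1}, 9.0),
    "B": ({1, 2}, 7.0),
    "C": ({0}, 5.0),
    "D": ({2}, 4.0),
    "E": ({0, 2}, 3.0),
}
remaining = sorted(patches, key=lambda k: -patches[k][1])
sequences = []
while remaining:
    seq, busy = [], set()
    for pid_ in list(remaining):
        procs, _ = patches[pid_]
        if not procs & busy:      # all participating processes still free
            seq.append(pid_)
            busy |= procs         # freeze these processes for this sequence
    for pid_ in seq:
        remaining.remove(pid_)
    sequences.append(seq)
print(sequences)
```

Each inner pass corresponds to one colored sequence: the selected patches share no process, so they can be solved concurrently within one colored communicator.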
§ COMPUTATIONAL CONDITION
This work was performed using HPC resources of the Centrale Nantes Supercomputing Center ICI-CNSC on the Liger cluster (France).
It is an Intel-based computer composed of 252 nodes, each with 24 cores (2 x 12-core Xeon E5-2680v3 at 2.5GHz).
These nodes use a GPFS network drive for I/O and have 128GB of memory.
The software used on this platform is eXlibris 2018 (https://git.gem.ec-nantes.fr/explore), the GCC 9.2 compiler, OpenMPI 4.03, Mumps 5.4.1, Intel MKL Scalapack, ParMetis 4.0.3, Intel MKL lapack and Intel MKL blas.
All Intel software comes from the Parallel Studio 2020 suites.
Integers are compiled with 4-byte precision.
§ MICRO STRUCTURE PLAN DEFINITION
The equations of the 64 planes are given below:
$\footnotesize \begin{array}{cc}
0.8609265\times x+0.1956651\times y+0.4695963\times z-1.748384=0&
0.7195847\times x+0.5102510\times y+0.4710009\times z-2.943090=0\\
0.5688037\times x+0.06691808\times y+0.8197465\times z-1.924178=0&
0.1071920\times x+0.6029550\times y+0.7905410\times z-1.271770=0\\
0.7416569\times x+0.2076639\times y+0.6378250\times z-2.580966=0&
0.4863807\times x+0.4607817\times y+0.7423705\times z-1.215735=0\\
0.2568314\times x+0.9470659\times y+0.1926236\times z-1.424000=0&
0.7151591\times x+0.3916348\times y+0.5789384\times z-1.921521=0\\
0.1054744\times x+0.9844276\times y+0.1406325\times z-1.712618=0&
0.6326948\times x+0.2613305\times y+0.7289744\times z-1.181465=0\\
0.2716518\times x+0.9055059\times y+0.3259821\times z-1.135105=0&
0.6398444\times x+0.3745431\times y+0.6710563\times z-2.180073=0\\
0.5836437\times x+0.5275241\times y+0.6173154\times z-2.052645=0&
0.5684090\times x+0.1764028\times y+0.8036127\times z-1.657721=0\\
0.8197048\times x+0.1024631\times y+0.5635471\times z-2.360125=0&
0.1274946\times x+0.7139698\times y+0.6884709\times z-2.370968=0\\
0.7305503\times x+0.1398926\times y+0.6683758\times z-0.9220794=0&
0.4882044\times x+0.5858453\times y+0.6468708\times z-0.5461269=0\\
0.7241275\times x+0.5738369\times y+0.3825579\times z-2.412523=0&
0.3979635\times x+0.3684848\times y+0.8401452\times z-1.930610=0\\
0.8650248\times x+0.1441708\times y+0.4805693\times z-1.795212=0&
0.8572357\times x+0.4653565\times y+0.2204320\times z-1.902939=0\\
0.8669178\times x+0.1284323\times y+0.4816210\times z-2.635038=0&
0.1543033\times x+0.7715167\times y+0.6172134\times z-1.961483=0\\
0.4793697\times x+0.7989495\times y+0.3631589\times z-1.759905=0&
0.1659080\times x+0.2156804\times y+0.9622663\times z-0.6748800=0\\
0.2468854\times x+0.7715167\times y+0.5863527\times z-0.6423204=0&
0.5649452\times x+0.6013933\times y+0.5649452\times z-1.464101=0\\
0.4231527\times x+0.8711968\times y+0.2489134\times z-1.657172=0&
0.6388813\times x+0.5525460\times y+0.5352789\times z-1.462725=0\\
0.6333450\times x+0.4446890\times y+0.6333450\times z-2.730715=0&
0.2433962\times x+0.2920754\times y+0.9249055\times z-2.773892=0\\
0.3298492\times x+0.7985822\times y+0.5034540\times z-2.098559=0&
0.1632993\times x+0.4082483\times y+0.8981462\times z-2.001109=0\\
0.5423839\times x+0.6693248\times y+0.5077637\times z-0.9228155=0&
0.6018227\times x+0.5249942\times y+0.6018227\times z-1.695434=0\\
0.9093977\times x+0.3247849\times y+0.2598279\times z-2.058806=0&
0.8230470\times x+0.5534282\times y+0.1277142\times z-0.7484869=0\\
0.4433384\times x+0.1313595\times y+0.8866768\times z-0.8443746=0&
0.6734445\times x+0.6884099\times y+0.2693778\times z-1.324568=0\\
0.2972254\times x+0.9145396\times y+0.2743619\times z-2.464607=0&
0.7218661\times x+0.3925938\times y+0.5698943\times z-2.207026=0\\
0.5392394\times x+0.6564654\times y+0.5275168\times z-2.088212=0&
0.3743731\times x+0.06606583\times y+0.9249217\times z-1.333261=0\\
0.1427762\times x+0.1665723\times y+0.9756376\times z-0.5186730=0&
0.4939317\times x+0.2798946\times y+0.8232196\times z-1.744109=0\\
0.7648147\times x+0.07114555\times y+0.6403100\times z-1.913092=0&
0.8026276\times x+0.2390806\times y+0.5464699\times z-2.138411=0\\
0.7220829\times x+0.6804243\times y+0.1249759\times z-1.493591=0&
0.5806682\times x+0.5806682\times y+0.5706566\times z-1.237695=0\\
0.8945864\times x+0.4388537\times y+0.08439495\times z-1.095704=0&
0.3212124\times x+0.4534764\times y+0.8313734\times z-1.009433=0\\
0.9747546\times x+0.05415304\times y+0.2166121\times z-1.855889=0&
0.5572679\times x+0.6868651\times y+0.4665499\times z-2.181626=0\\
0.4943023\times x+0.8687737\times y+0.02995771\times z-0.7418343=0&
0.4652615\times x+0.6203487\times y+0.6314263\times z-1.781062=0\\
0.3647265\times x+0.4103173\times y+0.8358315\times z-0.9762100=0&
0.5807795\times x+0.5915347\times y+0.5592691\times z-1.467443=0\\
0.9473874\times x+0.2368468\times y+0.2153153\times z-1.995499=0&
0.9486833\times x+0.3162278\times y-1.061239=0\\
0.8602915\times y+0.5098024\times z-0.3024251=0&
0.9363292\times x+0.3511234\times z-0.9363292=0\\
0.9805807\times x+0.1961161\times z-1.668649=0&
0.1240347\times y+0.9922779\times z-0.9292094=0
\end{array}$
§ PULL-OUT DAMAGE COMPUTATION
The damage field is computed using level-set technology: the closest distance to the surfaces given by (<ref>) is interpreted, after scaling, as a damage value in $[0,1]$.
The conical envelope in which the damage is computed (non-zero) is bounded by the surfaces given by the following equations:
\begin{equation}
\begin{array}{c}
(x^2+z^2)(\cos\theta)^2 - (y-o_1)^2 (\sin\theta)^2 =0\\
(x^2+z^2)(\cos\theta)^2 - (y-o_2)^2 (\sin\theta)^2 =0\\
\end{array}
\label{PO_annex_cone_eq}
\end{equation}
* $\theta=35$° is the angle at the apex between the axis $\overrightarrow{e_y}$ and the generating line of the lateral surface.
* $o_1=-545.08$ and $o_2=-531.31$ are $y$ coordinates of the two apexes.
* $h$ is the parameter that controls the stage of disk damage.
The conical envelope then consists of the points $P$ such that:
\begin{equation}
P(x,y,z)\in \text{conic envelope if}~\left\lbrace \begin{array}{c}
(x^2+z^2)(\cos\theta)^2 < (y-o_1)^2 (\sin\theta)^2\\
(x^2+z^2)(\cos\theta)^2 > (y-o_2)^2 (\sin\theta)^2\\
\end{array}\right.
\label{PO_annex_cone_env}
\end{equation}
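The membership test (<ref>) can be written as a small Python predicate, using the $\theta$, $o_1$, $o_2$ values given above:

```python
# Membership test for the conical envelope: a point is inside when it lies
# between the two offset cones (condition (<ref>) above).
import math

THETA = math.radians(35.0)
O1, O2 = -545.08, -531.31

def in_cone_envelope(x, y, z):
    r2 = (x * x + z * z) * math.cos(THETA) ** 2
    s2 = math.sin(THETA) ** 2
    # (y-o1)^2 sin^2 > (x^2+z^2) cos^2 > (y-o2)^2 sin^2
    return (y - O1) ** 2 * s2 > r2 > (y - O2) ** 2 * s2

# A point above both apexes at moderate radius lies between the cones;
# a point on the axis never does, since r2 = 0 there.
print(in_cone_envelope(10.0, -520.0, 0.0), in_cone_envelope(0.0, -520.0, 0.0))
```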
Note that using planes to bound the conical envelope is a simple and convenient way to obtain a bounded region of easily adjustable size.
Of course, it gives a completely unrealistic shape of the damage front.
§ DOMAIN DECOMPOSITION RESOLUTION ALGORITHM
The domain decomposition method used, given by the algorithm <ref>, is a distributed non-overlapping Schur complement method.
Each process holds a domain that is condensed on the process boundary (using the incomplete LDL$^t$ MUMPS factorization).
The global boundary problem (i.e. the global Schur complement problem) is solved with a distributed preconditioned (block Jacobi) conjugate gradient given by algorithm <ref>.
This distributed version of the conjugate gradient works only with local contributions and communicates mainly when computing the scalar product.
The block Jacobi preconditioner simply uses the factorization of the global boundary matrix diagonal block owned by the current process.
For a given process, this diagonal block is arbitrarily chosen as its local boundary dofs, excluding all dofs present in any process whose identifier is less than that of the current process.
Communication only occurs at the time of the construction of the diagonal block, when all the contributions of the other processes are added to the process that owns the block.
The factorization of this block is then local to each process.
This arbitrary choice, easy to implement, implies that some processes (at least the last one) may not have a diagonal block to treat, which induces some imbalance in the preconditioning task.
Note that this choice ensures that the diagonal block can always be factorized as long as the global matrix is not ill-conditioned.
Algorithm: block Jacobi preconditioned iterative parallel domain decomposition.
The CONJGRAD procedure is given in algorithm <ref>.
$\vm{S}_{bb} \gets \vm{A}_{bb}-\vm{A}_{bI}\cdot \vm{A}_{II}^{-1}\cdot \vm{A}_{Ib}$
$\vm{B}_{b} \gets \vm{B}_{b}-\vm{A}_{bI}\cdot \vm{A}_{II}^{-1}\cdot \vm{B}_{I}$
$ \vm{M}_{jj} \Leftarrow \bigcup\limits_{\forall p \in \mathcal{P}~\text{with}~t_p=b_p\cap j\neq \oslash} \vm{S}_{t_pt_p} $
Factorize $\vm{M}_{jj}$ to form the block Jacobi preconditioner
initialize a null vector $\vm{X}_s^0$ (or use a previously computed one)
$\left( \vm{X}_s,crit\right) \gets$CONJGRAD$\vm{X}_s^0$,$\vm{S}_{bb}$,$\vm{B}_b$,$\vm{M}_{jj}^{-1}$,$\epsilon$
$\vm{X}_b\gets \vm{X}_{s \cap b}$
$\vm{X}_I\gets \vm{A}_{II}^{-1} \cdot \left( \vm{B}_I-\vm{A}_{Ib}\cdot \vm{X}_b \right) $
Algorithm: parallel conjugate gradient.
$\vm{X}_s \gets \vm{X}_s^0$
$\vm{R}_s \Leftarrow \vm{B}_b$
$stop \gets \vm{R}_s\cdot \vm{R}_s\times \epsilon^2$
$\vm{SP}_s \Leftarrow \vm{S}_{bb}\cdot \vm{X}_{s\cap b}$
$\vm{R}_s \gets \vm{R}_s-\vm{SP}_s$
$\vm{Z}_s \Leftarrow \vm{M}_{jj}^{-1}\cdot \vm{R}_{s\cap j}$
$res_o \gets \vm{R}_s\cdot \vm{Z}_s$
$\vm{P}_s \gets \vm{Z}_s$
$alt \gets 0$
$res_n \gets res_o$
$\vm{SP}_s \Leftarrow \vm{S}_{bb}\cdot \vm{P}_{s\cap b}$
$\alpha \gets \frac{res_n}{\vm{P}_s\cdot \vm{SP}_s}$
$\vm{X}_s \gets \vm{X}_s + \alpha\times \vm{P}_s$
$\vm{R}_s \gets \vm{R}_s - \alpha\times \vm{SP}_s$
$crit^2 \gets \vm{R}_s\cdot \vm{R}_s$
$crit^2 \geqslant stop$
$\vm{Z}_s \Leftarrow \vm{M}_{jj}^{-1}\cdot \vm{R}_{s\cap j}$
$res_o \gets \vm{R}_s\cdot \vm{Z}_s$
$\beta \gets \frac{res_o}{res_n}$
$\vm{P}_s \gets \vm{Z}_s + \beta\times \vm{P}_s$
$iter \gets iter+1$
$alt \gets 1$
$alt=1$ or $iter>iter_{max}$
$crit \gets \epsilon.\sqrt{\frac{crit^2}{stop}}$
In algorithms <ref> and <ref>, let $\mathcal{P}$ be the set of process identifiers (starting at 0) and $nbpid= card\left( \mathcal{P}\right)$.
These algorithms are executed on each process $pid\in \mathcal{P}$ with:
* $K$ the set of dofs of the domain held by the process $pid$, $K= I\cup b$ , $I\cap b=\oslash$
* $I$ the set of dofs eliminated by condensation in the $pid$ process
* $b$ the set of boundary dofs of the domain $pid$ (i.e. the Schur complement dofs)
* $s$ the set of all domain boundary dofs, $s=\bigcup\limits_{p\in \mathcal{P}} b_{p}$
* $j$ the set of boundary dofs owned by the $pid$ process: $j=b\setminus \left( \bigcup\limits_{p=0,pid-1} b_{p}\right) $
The following applies for these sets:
* $\forall p \in \mathcal{P}$ and $\forall q \in \mathcal{P}$,$p \neq q$ then $j_p\cap j_q = \oslash$
* $\forall p \in \mathcal{P}$ and $\forall q \in \mathcal{P}$,$p \neq q$ then
$b_p\cap b_q \neq \oslash$ if $p$ and $q$ are connected by at least one mesh node
* for $pid=0$ $j=b$
* for $pid=nbpid-1$ $j=\oslash$
* $j$ can also be $\oslash$ for any process $p\in \mathcal{P}\setminus (nbpid-1)$ if the domain $p$ is surrounded by domains of lower process identifier
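For reference, the preconditioned conjugate gradient of algorithm <ref> can be sketched serially in Python (a diagonal Jacobi preconditioner stands in for the block Jacobi one, all MPI reductions are omitted, and the matrix and right-hand side are illustrative):

```python
# Serial sketch of preconditioned conjugate gradient (CONJGRAD):
# x0 starting point, A SPD matrix, b right-hand side, Minv the diagonal of
# the (here Jacobi) preconditioner, eps the relative residual threshold.
def conjgrad(A, b, x, Minv, eps, max_iter=100):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    r = [bi - ai for bi, ai in zip(b, mv(A, x))]       # initial residual
    stop = dot(b, b) * eps * eps                       # squared stop criterion
    z = [m * ri for m, ri in zip(Minv, r)]             # preconditioned residual
    res = dot(r, z)
    p = list(z)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = res / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < stop:
            break
        z = [m * ri for m, ri in zip(Minv, r)]
        res_new = dot(r, z)
        p = [zi + (res_new / res) * pi for zi, pi in zip(z, p)]
        res = res_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjgrad(A, b, [0.0, 0.0], [1.0 / 4.0, 1.0 / 3.0], 1e-10)
print(x)  # close to the exact solution [1/11, 7/11]
```

In the distributed version, each dot product is followed by a reduction over all processes, which is where most of the communication of algorithm <ref> occurs.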
§ MUMPS BLOCK LOW-RANK RESOLUTION
The block low-rank (BLR) feature has been introduced in Mumps based on [1].
We choose to activate it with the automatic choice of the BLR option by the library (ICNTL(35)=1; see the Mumps 5.4.1 Users' guide).
The BLR factorization variant is the default (ICNTL(36)=0, UFSC variant).
The dropping parameter used during BLR compression (CNTL(7)) is chosen to be identical to the residual threshold $\epsilon$ of algorithms <ref> and <ref>.
This choice is guided by the fact that the dropping factor and residual error are expected to be strongly related, as shown in [15].
To enforce the condition on the residual error, the BLR resolution is followed by a parallel preconditioned conjugate gradient resolution (algorithm <ref>) using both the factorization (as a preconditioner) and the solution (as a starting point) of the low-rank resolution (algorithm <ref>).
$\left( \tilde{\vm{A}}_{rr}^{-1},\tilde{\vm{X}}_r\right) \gets \text{Mumps BLR resolution of } \vm{A}_{rr}\cdot \vm{X}_{r}=\vm{B}_r$
$\left( \vm{X}_r,crit\right) \gets$CONJGRAD$\tilde{\vm{X}}_r$,$\vm{A}_{rr}$,$\vm{B}_r$,$\tilde{\vm{A}}_{rr}^{-1}$,$\epsilon$
Block low-rank algorithm:
the CONJGRAD procedure is given in algorithm <ref>. Here $\tilde{\cdot}$ represents the approximate factorization and the solution of the low-rank resolution.
This second resolution is expected to require only a few iterations, just enough to force the residual error below $\epsilon$.
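This refinement pattern can be sketched as follows (the approximate BLR solve is emulated here by a deliberately perturbed exact inverse of an illustrative 2x2 system; the real code uses the Mumps BLR factorization both as preconditioner and to compute the starting point):

```python
# Minimal sketch of the refinement idea: an approximate ("compressed") solve
# provides both a starting point and a preconditioner; a few refinement
# iterations push the relative residual below eps.
def refine(A, b, apply_approx_inv, eps, max_iter=50):
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = apply_approx_inv(b)                     # BLR-like approximate solve
    nb = sum(bi * bi for bi in b) ** 0.5
    for _ in range(max_iter):
        r = [bi - ai for bi, ai in zip(b, mv(x))]
        if sum(ri * ri for ri in r) ** 0.5 <= eps * nb:
            break
        dx = apply_approx_inv(r)                # preconditioned correction
        x = [xi + di for xi, di in zip(x, dx)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
# The exact inverse of A is [[3,-1],[-1,4]]/11; perturb it slightly to mimic
# the error introduced by low-rank compression.
approx = lambda v: [(3.02 * v[0] - 1.0 * v[1]) / 11.0,
                    (-1.0 * v[0] + 3.97 * v[1]) / 11.0]
x = refine(A, b, approx, 1e-10)
print(x)  # close to the exact solution [1/11, 7/11]
```

The closer the compressed factorization is to the exact one, the fewer refinement iterations are needed, which is the rationale for tying CNTL(7) to $\epsilon$.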
# Asymptotic mean value properties for the elliptic and parabolic double phase
equations
Weili Meng and Chao Zhang∗ Weili Meng School of Mathematics, Harbin Institute
of Technology, Harbin 150001, P.R. China<EMAIL_ADDRESS>Chao Zhang
School of Mathematics, Harbin Institute of Technology, Harbin 150001, P.R.
China<EMAIL_ADDRESS>
###### Abstract.
We characterize an asymptotic mean value formula in the viscosity sense for
the double phase elliptic equation
$-\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla u+a(x)\lvert\nabla
u\rvert^{q-2}\nabla u)=0$
and the normalized double phase parabolic equation
$u_{t}=\lvert\nabla u\rvert^{2-p}\text{\rm{div}}(\lvert\nabla
u\rvert^{p-2}\nabla u+a(x,t)\lvert\nabla u\rvert^{q-2}\nabla u),\quad 1<p\leq
q<\infty.$
This is the first mean value result for this kind of nonuniformly elliptic and
parabolic equations. In addition, the results obtained can also be applied to
the $p(x)$-Laplace equations and the variable coefficient $p$-Laplace type
equations.
###### Key words and phrases:
Mean value property; Viscosity solutions; Elliptic and parabolic double phase
equations
###### 2020 Mathematics Subject Classification:
35B05, 35D40, 35J92, 35K92
∗Corresponding author.
## 1. Introduction
Let $\Omega$ be a bounded domain in $\mathbb{R}^{N}(N\geq 2)$. We consider the
following double phase elliptic equation
$\displaystyle-\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla
u+a(x)\lvert\nabla u\rvert^{q-2}\nabla u)=0\quad\text{in }\Omega,$ (1.1)
where $1<p\leq q<\infty$ and $a(x)\geq 0$. It is the Euler-Lagrange equation
of the non-autonomous functional
$W^{1,1}(\Omega)\ni w\mapsto\int_{\Omega}\left(\frac{1}{p}\lvert\nabla
w\rvert^{p}+\frac{a(x)}{q}\lvert\nabla w\rvert^{q}\right)dx.$
Originally, this functional is connected to homogenization theory and the
Lavrentiev phenomenon [22, 28, 33], which reflects the behavior of strongly
anisotropic materials, where the coefficient $a(\cdot)$ is used to regulate
two mixtures with $p$ and $q$ hardening, respectively.
In recent years, problems of the type considered in (1.1) have received
great attention from the variational point of view. The regularity of
minimizers and weak solutions is determined via a delicate interaction between
the growth conditions and the pointwise behaviour of $a(\cdot)$. Starting from
a series of remarkable works by Colombo, Mingione et al. [2, 10, 11], and
despite its relatively short history, the theory of double phase problems has
already become elaborate, with several connections to other branches. We
refer the readers to [1, 7, 8, 9, 12, 13, 14, 15, 16, 17, 19, 29] and the
references therein.
It is well-known that a continuous function $u$ is harmonic if and only if it
obeys the mean value formula discovered by Gauss. That is, $u$ solves the
Laplace equation $\Delta u=0$ in $\Omega$ if and only if
$u(x)=\dfrac{1}{|B_{\varepsilon}(x)|}\int_{B_{\varepsilon}(x)}u(y)\,dy=\fint_{B_{\varepsilon}(x)}u(y)\,dy$
holds for all $x\in\Omega$ and $B_{\varepsilon}(x)\subset\Omega$. In fact, an
asymptotic version of the mean value property
$u(x)=\fint_{B_{\varepsilon}(x)}u(y)\,dy+o(\varepsilon^{2})\quad\text{as }\varepsilon\rightarrow 0$
suffices to characterize harmonic functions (see [6, 23, 32]). Moreover, a
nonlinear mean value property was explored in [26]: a continuous function
$u$ is a viscosity solution of the $p$-Laplace equation
$-\Delta_{p}u=-\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla
u)=0\quad\text{in }\Omega$
if and only if the asymptotic expansion
$u(x)=\frac{\alpha_{p}}{2}\left\{\max\limits_{\overline{B_{\varepsilon}(x)}}u+\min\limits_{\overline{B_{\varepsilon}(x)}}u\right\}+\beta_{p}\fint_{B_{\varepsilon}(x)}u(y)\,dy+o(\varepsilon^{2})\quad\text{as }\varepsilon\rightarrow 0$
holds for all $x\in\Omega$ in the viscosity sense, where
$\alpha_{p}+\beta_{p}=1$ and $\frac{\alpha_{p}}{\beta_{p}}=\frac{p-2}{N+2}$.
The second term is linear, while the first term accounts for the
nonlinearity: the greater the $p$, the more nonlinear the formula. The
expression holds in the viscosity sense, which means that when the $C^{2}$
test function $\phi$ with non-vanishing gradient touches $u$ from below
(above), the expression is satisfied with $\geq$ ($\leq$) for the test
function at $x$, respectively.
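As a quick sanity check of the constants (not part of the original argument): setting $p=2$ in the relations $\alpha_{p}+\beta_{p}=1$ and $\alpha_{p}/\beta_{p}=(p-2)/(N+2)$ gives $\alpha_{2}=0$, $\beta_{2}=1$, so the nonlinear term drops out and the expansion collapses to the classical linear mean value property:

```latex
% For p = 2: alpha_p/beta_p = (p-2)/(N+2) = 0 and alpha_p + beta_p = 1
% force alpha_2 = 0, beta_2 = 1, hence the expansion becomes
u(x)=\fint_{B_{\varepsilon}(x)}u(y)\,dy+o(\varepsilon^{2})
\quad\text{as }\varepsilon\rightarrow 0,
% i.e. the asymptotic characterization of harmonic functions recalled above.
```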
These mean value formulas originate from the study of dynamic programming for
tug-of-war games. The viscosity solution of the normalized parabolic
$p$-Laplace equation is characterized by an asymptotic mean value formula,
which is related to the tug-of-war game with noise, see [20, 24, 25, 27, 30,
31]. For more related asymptotic mean value results, we refer to [21] for
$p$-harmonic functions in the Heisenberg group, [3] for Monge-Ampère equation,
[4, 27] for the nonlinear parabolic equations and the recently published
monograph [5] for historical references and more general equations.
From the results mentioned above, we can see that there are few results
concerning the asymptotic mean value properties for the general nonuniformly
elliptic and parabolic equations. Motivated by the previous works [26, 27],
our intention in the present paper is to build a new bridge between the
viscosity solutions and the asymptotic mean value formula for the double phase
equations (1.1) and (1.4). In addition, the method developed here can also be
applied to other equations, such as the $p(x)$-Laplace equations and the variable
coefficient $p$-Laplace type equations. The first result is stated as follows.
###### Theorem 1.1.
Let $1<p\leq q<\infty$, the non-negative function $a(x)$ be a $C^{1}$ function
in $\Omega$ and let $u(x)$ be a continuous function in $\Omega$. Then Eq.
(1.1) holds in the viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x)$
$\displaystyle=\frac{\alpha_{p}+M_{u}(x)\alpha_{q}}{2(1+M_{u}(x))}\left\{\max\limits_{\overline{B_{\varepsilon}(x)}}u+\min\limits_{\overline{B_{\varepsilon}(x)}}u\right\}+\dfrac{\beta_{p}+M_{u}(x)\beta_{q}}{1+M_{u}(x)}\fint_{B_{\varepsilon}(x)}u(y)\,dy$
$\displaystyle\quad+\frac{\varepsilon\lvert\nabla u(x)\rvert^{q-p}}{4(N+p)(1+M_{u}(x))}\fint_{B_{\varepsilon}(x)}\bigl(u(y+\varepsilon\nabla a(x))-u(y-\varepsilon\nabla a(x))\bigr)\,dy+o(\varepsilon^{2})$ (1.2)
as $\varepsilon\rightarrow 0$, holds for all $x\in\Omega$ in the viscosity
sense. Here
$\displaystyle\alpha_{p}+\beta_{p}=1,\quad\frac{\alpha_{p}}{\beta_{p}}=\frac{p-2}{N+2},$
$\displaystyle\alpha_{q}+\beta_{q}=1,\quad\frac{\alpha_{q}}{\beta_{q}}=\frac{q-2}{N+2},$
(1.3) $\displaystyle M_{u}(x)=a(x)\dfrac{N+q}{N+p}\lvert\nabla
u(x)\rvert^{q-p}.$
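The normalization conditions in (1.3) determine the weights explicitly: $\alpha_{p}=\frac{p-2}{N+p}$ and $\beta_{p}=\frac{N+2}{N+p}$, and analogously for $q$. As an illustrative sketch (not part of the paper; the helper name `mv_weights` is ours), the following Python snippet solves these conditions:

```python
# Illustrative sketch: closed-form solution of the normalization
# conditions (1.3). The helper name mv_weights is ours, not the paper's.
def mv_weights(p, N):
    """Return (alpha_p, beta_p) with alpha + beta = 1 and
    alpha / beta = (p - 2) / (N + 2)."""
    return (p - 2) / (N + p), (N + 2) / (N + p)

alpha, beta = mv_weights(p=4, N=2)
```

Note that for $p=2$ the weights reduce to $\alpha_{2}=0$, $\beta_{2}=1$, recovering the classical mean value average.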
###### Remark 1.2.
Note that here we only require $u$ to be a continuous function. Nevertheless,
$\nabla u$ appears in the formula (1.2) because the formula is understood in
the viscosity sense; that is, we work with $C^{2}$ test functions $\phi$ that
approach $u$ from above and below. More details will be given in Section 2.
###### Remark 1.3.
From the formula (1.2), we can see that all the terms on the right-hand side
are nonlinear, which is different from the standard $p$-Laplace equation. The
exponents $p,q$ and the non-negative coefficient $a(x)$ together influence the
nonlinearity in a delicate way. In particular, when $a(x)$ is a positive
constant, Eq. (1.1) is nothing but the $(p,q)$-Laplace equation.
Then the third term
$\frac{\varepsilon\lvert\nabla
u(x)\rvert^{q-p}}{4(N+p)(1+M_{u}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla
a(x))-u(y-\varepsilon\nabla a(x))dy$
will vanish.
Next, we turn to the parabolic case. Let $T>0$, let $\Omega_{T}=\Omega\times(0,T)$
be a space-time cylinder, and let $a(x,t)\geq 0$ be a function that is $C^{1}$
in the space variable and continuous in the time variable. We consider the
following parabolic equation
$u_{t}=\lvert\nabla u\rvert^{2-p}\text{\rm{div}}(\lvert\nabla
u\rvert^{p-2}\nabla u+a(x,t)\lvert\nabla u\rvert^{q-2}\nabla
u)\quad\text{\rm{in }}\Omega_{T},$ (1.4)
which is called the normalized double phase parabolic equation. The difference
between the elliptic case and the normalized parabolic case is that in the
parabolic setting we have to take the influence of the time variable $t$ into
account. To this end, we separate the estimates according to $p$ and $q$ and
consider the integrals over different time intervals. We then find that, when
the two time lags satisfy a certain viscosity condition, $u$ satisfies the
asymptotic mean value formula in the viscosity sense if and only if $u$ is a
viscosity solution to Eq. (1.4). The second result is stated as follows.
###### Theorem 1.4.
Let $1<p\leq q<\infty$, let the non-negative function $a(x,t)$ be $C^{1}$ in
the space variable and continuous in the time variable, and let
$u(x,t)$ be a continuous function in $\Omega_{T}$. Then Eq. (1.4) holds in the
viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x,t)$
$\displaystyle=\dfrac{1}{1+M_{u}(x,t)}\left(\dfrac{\alpha_{p}}{2}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{t-\frac{\varepsilon^{2}}{A_{u}(x,t)}}^{t}\left\\{\max\limits_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)+\min\limits_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)\right\\}ds\right.$
$\displaystyle\quad+\left.\beta_{p}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{t-\frac{\varepsilon^{2}}{A_{u}(x,t)}}^{t}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y,s)dyds\right)$
$\displaystyle\quad+\dfrac{M_{u}(x,t)}{1+M_{u}(x,t)}\left(\dfrac{\alpha_{q}}{2}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{t-\frac{\varepsilon^{2}}{B_{u}(x,t)}}^{t}\left\\{\max\limits_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)+\min\limits_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)\right\\}ds\right.$
$\displaystyle\quad\left.+\beta_{q}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{t-\frac{\varepsilon^{2}}{B_{u}(x,t)}}^{t}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y,s)dyds\right)$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla
u(x,t)\rvert^{q-p}}{4(N+p)(1+M_{u}(x,t))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla
a(x,t),t)-u(y-\varepsilon\nabla a(x,t),t)dy$
$\displaystyle\quad+o(\varepsilon^{2})$ (1.5)
as $\varepsilon\rightarrow 0$, holds for all $(x,t)\in\Omega_{T}$ in the
viscosity sense. Here
$\displaystyle\begin{split}&\alpha_{p}+\beta_{p}=1,\quad\frac{\alpha_{p}}{\beta_{p}}=\dfrac{p-2}{N+2},\\\
&\alpha_{q}+\beta_{q}=1,\quad\frac{\alpha_{q}}{\beta_{q}}=\dfrac{q-2}{N+2},\\\
&M_{u}(x,t)=a(x,t)\frac{N+q}{N+p}\lvert\nabla u(x,t)\rvert^{q-p},\\\
&\frac{N+p}{A_{u}(x,t)}+\frac{a(x,t)(N+q)\lvert\nabla
u(x,t)\rvert^{q-p}}{B_{u}(x,t)}=1,\quad A_{u}(x,t),B_{u}(x,t)>0.\end{split}$
(1.6)
###### Remark 1.5.
It is worth mentioning that the positive functions $A_{u}(x,t)$ and $B_{u}(x,t)$
depend on the test function $\phi$. This means that
$\frac{N+p}{A_{u}(x,t)}+\frac{a(x,t)(N+q)\lvert\nabla
u(x,t)\rvert^{q-p}}{B_{u}(x,t)}=1$
holds in the viscosity sense, which we call the viscosity condition.
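The condition in (1.6) does not pin down $A_{u}$ and $B_{u}$ uniquely. Since $a(x,t)(N+q)\lvert\nabla u(x,t)\rvert^{q-p}=(N+p)M_{u}(x,t)$, one admissible choice is $A_{u}=B_{u}=(N+p)(1+M_{u})$. The following snippet checks this choice numerically; it is our own illustration, with arbitrarily chosen parameter values.

```python
# Our illustration: verify that A_u = B_u = (N+p)(1+M_u) satisfies the
# viscosity condition (1.6). The parameter values are arbitrary.
def viscosity_condition(N, p, q, a, grad_norm):
    M = a * (N + q) / (N + p) * grad_norm**(q - p)   # M_u from (1.6)
    A = B = (N + p) * (1 + M)                        # our candidate choice
    return (N + p) / A + a * (N + q) * grad_norm**(q - p) / B

val = viscosity_condition(N=3, p=2.5, q=4.0, a=0.7, grad_norm=1.3)
```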
This manuscript is organized as follows. In Section 2, we introduce the basic
definitions and give some necessary lemmas that will be used later. In Section
3, we give the proof of Theorem 1.1 and present some corollaries, including
the $p(x)$-Laplace type equation. Finally, we prove Theorem 1.4 in Section 4.
## 2\. Preliminaries
In this section, inspired by the ideas developed in [26], we first give the
definition of the asymptotic mean value formula for $u$ at $x\in\Omega$.
###### Definition 2.1.
A continuous function $u$ satisfies
$\displaystyle u(x)$
$\displaystyle=\dfrac{\alpha_{p}+M_{u}(x)\alpha_{q}}{2(1+M_{u}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}u+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}u\right\\}+\dfrac{\beta_{p}+M_{u}(x)\beta_{q}}{1+M_{u}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla
u(x)\rvert^{q-p}}{4(N+p)(1+M_{u}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla
a(x))-u(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2})$
as $\varepsilon\rightarrow 0$, in the viscosity sense if
* (i)
for every $\phi\in C^{2}$ such that $u-\phi$ has a strict minimum at the point
$x\in\Omega$ with $u(x)=\phi(x)$ and $\nabla\phi(x)\neq 0$, we have
$\displaystyle\begin{split}0&\geq-\phi(x)+\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{2(1+M_{\phi}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}+\dfrac{\beta_{p}+M_{\phi}(x)\beta_{q}}{1+M_{\phi}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy\\\
&\quad+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).\end{split}$ (2.1)
* (ii)
for every $\phi\in C^{2}$ such that $u-\phi$ has a strict maximum at the point
$x\in\Omega$ with $u(x)=\phi(x)$ and $\nabla\phi(x)\neq 0$, we have
$\displaystyle\begin{split}0&\leq-\phi(x)+\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{2(1+M_{\phi}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}+\dfrac{\beta_{p}+M_{\phi}(x)\beta_{q}}{1+M_{\phi}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy\\\
&\quad+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).\end{split}$ (2.2)
Next, we consider viscosity solutions of the double phase elliptic
equation. Let us expand the divergence term in Eq. (1.1) as follows:
$\displaystyle\quad\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla
u+a(x)\lvert\nabla u\rvert^{q-2}\nabla u)$ $\displaystyle=\lvert\nabla
u\rvert^{p-2}((p-2)\Delta_{\infty}u+\Delta u)+a(x)\lvert\nabla
u\rvert^{q-2}((q-2)\Delta_{\infty}u+\Delta u)$
$\displaystyle\quad+\lvert\nabla u\rvert^{q-2}\langle\nabla a,\nabla
u\rangle,$
where $\Delta_{\infty}u=|\nabla u|^{-2}\left<D^{2}u\nabla u,\nabla u\right>$.
Suppose that $u$ is a smooth function with $\nabla u\neq 0$. Then
$u$ is a solution to Eq. (1.1) if and only if
$\displaystyle-(p-2)\Delta_{\infty}u-\Delta u-a(x)\lvert\nabla
u\rvert^{q-p}((q-2)\Delta_{\infty}u+\Delta u)$
$\displaystyle\quad-\lvert\nabla u\rvert^{q-p}\langle\nabla a,\nabla
u\rangle=0.$ (2.3)
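The expansion above can be verified symbolically. The following SymPy sketch is our own (not part of the paper): the test function $u$, the coefficient $a$, and the evaluation point are arbitrary choices, made so that $\nabla u\neq 0$ at the point.

```python
import sympy as sp

# Our sketch: check the expansion of div(|grad u|^{p-2} grad u
# + a(x)|grad u|^{q-2} grad u) used to derive (2.3). The concrete u, a
# and the evaluation point are arbitrary choices.
x, y, p, q = sp.symbols('x y p q')
u = x**2 + x*y + sp.Rational(1, 3)*y**3
a = 1 + x**2 + y**2

gu = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
norm = sp.sqrt(gu.dot(gu))
flux = norm**(p - 2)*gu + a*norm**(q - 2)*gu
div_flux = sp.diff(flux[0], x) + sp.diff(flux[1], y)

lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)
H = sp.hessian(u, (x, y))
lap_inf = (gu.T * H * gu)[0] / norm**2          # normalized infinity Laplacian
ga = sp.Matrix([sp.diff(a, x), sp.diff(a, y)])
rhs = (norm**(p - 2)*((p - 2)*lap_inf + lap)
       + a*norm**(q - 2)*((q - 2)*lap_inf + lap)
       + norm**(q - 2)*ga.dot(gu))

# Evaluate the residual at a point where grad(u) = (1.0, 0.86) != 0.
point = {x: sp.Rational(7, 10), y: sp.Rational(-2, 5), p: 3, q: 5}
residual = float((div_flux - rhs).subs(point))
```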
Then we give the definition of viscosity solutions to Eq. (1.1).
###### Definition 2.2 ([18], Definition 2.5).
Let $1<p\leq q<\infty$ and consider the equation
$-\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla u+a(x)\lvert\nabla
u\rvert^{q-2}\nabla u)=0.$
* (i)
A lower semi-continuous function $u$ is a viscosity supersolution if for every
$\phi\in C^{2}$ such that $u-\phi$ has a strict minimum at the point
$x\in\Omega$ with $\nabla\phi(x)\neq 0$ we have
$\displaystyle-\left((p-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)\right)-a(x)\lvert\nabla\phi(x)|^{q-p}\left((q-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)\right)$
$\displaystyle\quad-\lvert\nabla\phi(x)|^{q-p}\langle\nabla
a(x),\nabla\phi(x)\rangle\geq 0.$ (2.4)
* (ii)
An upper semi-continuous function $u$ is a viscosity subsolution if for every
$\phi\in C^{2}$ such that $u-\phi$ has a strict maximum at the point
$x\in\Omega$ with $\nabla\phi(x)\neq 0$ we have
$\displaystyle\begin{split}&-\left((p-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)\right)-a(x)\lvert\nabla\phi(x)|^{q-p}\left((q-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)\right)\\\
&\quad-\lvert\nabla\phi(x)|^{q-p}\langle\nabla a(x),\nabla\phi(x)\rangle\leq
0.\end{split}$ (2.5)
* (iii)
Finally, $u$ is a viscosity solution if and only if $u$ is both a viscosity
supersolution and a viscosity subsolution.
We next state the following useful results (Lemmas 2.3–2.5), which can be
found in [26, Section 2].
###### Lemma 2.3.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $x$ and let
$x_{1}^{\varepsilon}$ and $x_{2}^{\varepsilon}$ be the points at which $\phi$
attains its minimum and maximum in $\overline{B_{\varepsilon}(x)}$
respectively. We have
$\displaystyle-\phi(x)+\frac{1}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}\geq\frac{1}{2}\langle
D^{2}\phi(x)(x^{\varepsilon}_{1}-x),(x^{\varepsilon}_{1}-x)\rangle+o(\varepsilon^{2})$
(2.6)
and
$\displaystyle-\phi(x)+\frac{1}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}\leq\frac{1}{2}\langle
D^{2}\phi(x)(x^{\varepsilon}_{2}-x),(x^{\varepsilon}_{2}-x)\rangle+o(\varepsilon^{2}).$
(2.7)
###### Lemma 2.4.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $x$ with
$\nabla\phi(x)\neq 0$. We have
$\displaystyle\lim\limits_{\varepsilon\rightarrow
0+}\dfrac{x^{\varepsilon}_{1}-x}{\varepsilon}=-\dfrac{\nabla\phi}{\lvert\nabla\phi\rvert}(x),$
(2.8)
where $x_{1}^{\varepsilon}$ is defined as in Lemma 2.3.
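A quick numerical illustration of Lemma 2.4 (our own sketch, with an arbitrarily chosen $\phi$): since $\nabla\phi\neq 0$ near $x$, for small $\varepsilon$ the minimizer of $\phi$ over $\overline{B_{\varepsilon}(x)}$ lies on the boundary, roughly opposite to the gradient direction.

```python
import numpy as np

# Our sketch for Lemma 2.4: locate the boundary minimizer of phi over the
# closed ball B_eps(x) on a fine angular grid (N = 2) and compare the
# normalized direction (x_1^eps - x)/eps with -grad(phi)/|grad(phi)|.
# Since grad(phi) != 0 near x, the minimum is attained on the boundary.
phi = lambda y: 2*y[..., 0] + y[..., 1] + y[..., 0]**2
x = np.array([0.0, 0.0])
grad = np.array([2.0, 1.0])          # grad(phi)(x), computed by hand
eps = 1e-3

theta = np.linspace(0.0, 2*np.pi, 200_001)
boundary = x + eps*np.stack([np.cos(theta), np.sin(theta)], axis=-1)
x1 = boundary[np.argmin(phi(boundary))]
direction = (x1 - x) / eps
```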
###### Lemma 2.5.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $x$. We have
$\displaystyle-\phi(x)+\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy=\dfrac{\varepsilon^{2}}{2(N+2)}\Delta\phi(x)+o(\varepsilon^{2})\quad\text{as
}\varepsilon\rightarrow 0.$
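Lemma 2.5 can be checked numerically: for a quadratic $\phi$ the expansion is exact, so a Monte Carlo ball average should match $\frac{\varepsilon^{2}}{2(N+2)}\Delta\phi(x)$ up to sampling noise. The snippet below is our own illustration with an arbitrary quadratic $\phi$.

```python
import numpy as np

# Our illustration of Lemma 2.5: for a quadratic phi the expansion is
# exact, so the ball average of phi minus phi(x) equals
# eps^2 / (2(N+2)) * Laplacian(phi) up to Monte Carlo error.
rng = np.random.default_rng(0)
N, eps, n = 3, 0.5, 200_000
phi = lambda y: y[..., 0]**2 + 2*y[..., 1]**2 + 3*y[..., 2]**2
lap_phi = 2 + 4 + 6                      # Laplacian of phi (constant)

# Uniform samples in the unit ball: normalized Gaussians times U^(1/N).
g = rng.standard_normal((n, N))
z = g / np.linalg.norm(g, axis=1, keepdims=True)
z *= rng.random((n, 1))**(1 / N)

x = np.zeros(N)
ball_avg = phi(x + eps*z).mean() - phi(x)
prediction = eps**2 / (2*(N + 2)) * lap_phi
```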
Although Lemma 2.3 and Lemma 2.5 provide the bridge between the viscosity
solution of the $p$-Laplace equation $-\Delta_{p}u=0$ and the asymptotic mean
value formula in [26], they are not enough for the double phase elliptic equation
due to the presence of the term $\left<\nabla a,\nabla u\right>$ in (2.3).
Therefore, we need the following lemma.
###### Lemma 2.6.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $x$. We have
$\displaystyle\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy=2\varepsilon\langle\nabla\phi(x),\nabla
a(x)\rangle+o(\varepsilon^{2})$
as $\varepsilon\rightarrow 0$.
###### Proof.
Observe that
$\displaystyle\quad\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))dy$
$\displaystyle=\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B(0,1)}\phi(x+(\nabla
a(x)+z)\varepsilon)dz$
$\displaystyle=\phi(x)+\varepsilon\langle\nabla\phi(x),\nabla
a(x)\rangle+\frac{\varepsilon^{2}}{2}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B(0,1)}\langle
D^{2}\phi(x)\nabla a(x),\nabla a(x)\rangle+\langle D^{2}\phi(x)z,z\rangle dz$
$\displaystyle\quad+o(\varepsilon^{2})$
and
$\displaystyle\quad\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y-\varepsilon\nabla
a(x))dy$
$\displaystyle=\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B(0,1)}\phi(x+(z-\nabla
a(x))\varepsilon)dz$
$\displaystyle=\phi(x)-\varepsilon\langle\nabla\phi(x),\nabla
a(x)\rangle+\frac{\varepsilon^{2}}{2}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B(0,1)}\langle
D^{2}\phi(x)\nabla a(x),\nabla a(x)\rangle+\langle D^{2}\phi(x)z,z\rangle dz$
$\displaystyle\quad+o(\varepsilon^{2}).$
Thus, we obtain
$\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy=2\varepsilon\langle\nabla\phi(x),\nabla
a(x)\rangle+o(\varepsilon^{2})\quad\text{as }\varepsilon\rightarrow 0.$
This finishes the proof. ∎
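The cancellation in this proof can also be seen numerically. In the sketch below (our own; $\phi$, the vector standing in for $\nabla a(x)$, and all parameters are arbitrary choices), the averaged difference agrees with $2\varepsilon\langle\nabla\phi(x),\nabla a(x)\rangle$ up to Monte Carlo error, since for a quadratic $\phi$ the second-order terms cancel exactly.

```python
import numpy as np

# Our sketch of Lemma 2.6: for a quadratic phi the two second-order terms
# cancel exactly, so the averaged difference equals
# 2*eps*<grad(phi)(x), v>, where v plays the role of grad(a)(x).
rng = np.random.default_rng(2)
n, eps = 300_000, 0.3
phi = lambda y: y[..., 0]**2 + 3*y[..., 1]**2 + y[..., 0]
x = np.array([0.2, 0.1])
grad_phi = np.array([2*x[0] + 1, 6*x[1]])    # gradient computed by hand
v = np.array([0.5, -0.3])                    # stand-in for grad(a)(x)

g = rng.standard_normal((n, 2))
z = g / np.linalg.norm(g, axis=1, keepdims=True) * rng.random((n, 1))**0.5
w = x + eps*z                                # uniform points in B_eps(x)
lhs = (phi(w + eps*v) - phi(w - eps*v)).mean()
rhs = 2*eps*np.dot(grad_phi, v)
```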
## 3\. Elliptic case
In this section, we will prove Theorem 1.1 and consider several special cases
as corollaries. Then we apply the ideas to the $p(x)$-Laplace type equations
and give the corresponding conclusions.
###### Proof of Theorem 1.1.
For the sufficiency, we need to show that if $u$ satisfies the asymptotic mean
value formula, then $u$ is a viscosity solution to Eq. (1.1). We first
prove that $u$ is a viscosity supersolution; to be precise, we derive (2.4) in
Definition 2.2 (i) from (2.1).
For the case $p>2$, we know from (1.3) that $\alpha_{p}>0$ and
$\alpha_{q}>0$. Suppose that the function $u$ satisfies the asymptotic mean
value formula in the viscosity sense. Recalling (2.1), we have
$\displaystyle 0$
$\displaystyle\geq-\phi(x)+\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{2(1+M_{\phi}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}+\dfrac{\beta_{p}+M_{\phi}(x)\beta_{q}}{1+M_{\phi}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).$
From $\alpha_{p}+\beta_{p}=1,\alpha_{q}+\beta_{q}=1$, we write
$\displaystyle 0$
$\displaystyle\geq\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{1+M_{\phi}(x)}\@slowromancap
i@+\dfrac{\beta_{p}+M_{\phi}(x)\beta_{q}}{1+M_{\phi}(x)}\@slowromancap
ii@+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x))}\@slowromancap
iii@+o(\varepsilon^{2}),$
where
$\displaystyle\@slowromancap
i@=-\phi(x)+\frac{1}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\},$
$\displaystyle\@slowromancap
ii@=-\phi(x)+\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy,$
$\displaystyle\@slowromancap
iii@=\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy.$
The non-negativity of $M_{\phi}(x)$ implies that
$\displaystyle 0$ $\displaystyle\geq\alpha_{p}\@slowromancap
i@+\beta_{p}\@slowromancap ii@+M_{\phi}(x)\left(\alpha_{q}\@slowromancap
i@+\beta_{q}\@slowromancap
ii@\right)+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)}\@slowromancap
iii@+o(\varepsilon^{2}).$
It follows from Lemmas 2.3, 2.5 and 2.6 that
$\displaystyle 0$
$\displaystyle\geq\frac{\alpha_{p}}{2}\left<D^{2}\phi(x)(x^{\varepsilon}_{1}-x),(x^{\varepsilon}_{1}-x)\right>+\dfrac{\varepsilon^{2}\beta_{p}}{2(N+2)}\Delta\phi(x)$
$\displaystyle\quad+M_{\phi}(x)\left(\frac{\alpha_{q}}{2}\langle D^{2}\phi(x)(x^{\varepsilon}_{1}-x),(x^{\varepsilon}_{1}-x)\rangle+\dfrac{\varepsilon^{2}\beta_{q}}{2(N+2)}\Delta\phi(x)\right)$
$\displaystyle\quad+\dfrac{\varepsilon^{2}\lvert\nabla\phi(x)\rvert^{q-p}}{2(N+p)}\left<\nabla\phi(x),\nabla
a(x)\right>+o(\varepsilon^{2}).$
Dividing by $\frac{\varepsilon^{2}}{2}$, taking the limit as
$\varepsilon\rightarrow 0$ and by Lemma 2.4, we have
$\displaystyle 0$
$\displaystyle\geq\alpha_{p}\Delta_{\infty}\phi(x)+\dfrac{\beta_{p}}{N+2}\Delta\phi(x)+M_{\phi}(x)\left(\alpha_{q}\Delta_{\infty}\phi(x)+\dfrac{\beta_{q}}{N+2}\Delta\phi(x)\right)$
$\displaystyle\quad+\dfrac{\lvert\nabla\phi(x)\rvert^{q-p}}{N+p}\langle\nabla\phi(x),\nabla
a(x)\rangle.$
Multiplying by $N+p$, we get
$\displaystyle 0$
$\displaystyle\geq(p-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)+a(x)\lvert\nabla\phi(x)\rvert^{q-p}((q-2)\Delta_{\infty}\phi(x)+\Delta\phi(x))$
$\displaystyle\quad+\lvert\nabla\phi(x)\rvert^{q-p}\langle\nabla
a(x),\nabla\phi(x)\rangle.$
Therefore, $u$ is a viscosity supersolution according to Definition 2.2 (i).
Using (2.7) instead of (2.6), one proves in the same way that $u$ is a
viscosity subsolution; we omit the details.
For the necessity of the theorem, we need to prove that $u$ satisfies the
asymptotic mean value formula in the viscosity sense if $u$ is a viscosity
solution to Eq. (1.1). Assume that $u$ is a viscosity solution to Eq. (1.1).
In particular, $u$ is a viscosity subsolution. From (2.5), we have
$\displaystyle 0$
$\displaystyle\leq(p-2)\Delta_{\infty}\phi(x)+\Delta\phi(x)+a(x)\lvert\nabla\phi(x)\rvert^{q-p}((q-2)\Delta_{\infty}\phi(x)+\Delta\phi(x))$
$\displaystyle\quad+\lvert\nabla\phi(x)\rvert^{q-p}\left<\nabla
a(x),\nabla\phi(x)\right>.$
By Lemma 2.4,
$\displaystyle 0$
$\displaystyle\leq(p-2)\left<D^{2}\phi(x)\left(\dfrac{x^{\varepsilon}_{1}-x}{\varepsilon}\right),\left(\dfrac{x^{\varepsilon}_{1}-x}{\varepsilon}\right)\right>+\Delta\phi(x)$
$\displaystyle\quad+a(x)\lvert\nabla\phi(x)|^{q-p}\left((q-2)\left<D^{2}\phi(x)\left(\dfrac{x^{\varepsilon}_{1}-x}{\varepsilon}\right),\left(\dfrac{x^{\varepsilon}_{1}-x}{\varepsilon}\right)\right>+\Delta\phi(x)\right)$
$\displaystyle\quad+\lvert\nabla\phi(x)|^{q-p}\langle\nabla
a(x),\nabla\phi(x)\rangle+o(1).$
Multiplying the inequality above by $\varepsilon^{2}$, we get
$\displaystyle 0$ $\displaystyle\leq(p-2)\langle
D^{2}\phi(x)\left(x^{\varepsilon}_{1}-x\right),\left(x^{\varepsilon}_{1}-x\right)\rangle+\varepsilon^{2}\Delta\phi(x)$
$\displaystyle\quad+a(x)\lvert\nabla\phi(x)|^{q-p}\left((q-2)\langle
D^{2}\phi(x)\left(x^{\varepsilon}_{1}-x\right),\left(x^{\varepsilon}_{1}-x\right)\rangle+\varepsilon^{2}\Delta\phi(x)\right)$
$\displaystyle\quad+\varepsilon^{2}\lvert\nabla\phi(x)|^{q-p}\langle\nabla
a(x),\nabla\phi(x)\rangle+o(\varepsilon^{2}).$
By Lemmas 2.3, 2.5 and 2.6, we have
$\displaystyle 0$ $\displaystyle\leq 2(p-2)\@slowromancap
i@+2(N+2)\@slowromancap
ii@+a(x)\lvert\nabla\phi(x)|^{q-p}(2(q-2)\@slowromancap
i@+2(N+2)\@slowromancap ii@)$
$\displaystyle\quad+\frac{\varepsilon}{2}\lvert\nabla\phi(x)|^{q-p}\@slowromancap
iii@+o(\varepsilon^{2}).$
Furthermore, dividing by $2(N+p)$, we obtain
$\displaystyle 0$ $\displaystyle\leq\left(\alpha_{p}\@slowromancap
i@+\beta_{p}\@slowromancap
ii@\right)+a(x)\dfrac{N+q}{N+p}\lvert\nabla\phi(x)|^{q-p}\left(\alpha_{q}\@slowromancap
i@+\beta_{q}\@slowromancap
ii@\right)+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)}\@slowromancap
iii@+o(\varepsilon^{2}).$
Then separating $\phi(x)$ from $\@slowromancap i@$ and $\@slowromancap ii@$,
we get
$\displaystyle(1+M_{\phi}(x))\phi(x)$
$\displaystyle\leq\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}$
$\displaystyle\quad+\left(\beta_{p}+M_{\phi}(x)\beta_{q}\right)\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).$
Thus
$\displaystyle 0$
$\displaystyle\leq-\phi(x)+\dfrac{\alpha_{p}+M_{\phi}(x)\alpha_{q}}{2(1+M_{\phi}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}\phi+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}\phi\right\\}+\dfrac{\beta_{p}+M_{\phi}(x)\beta_{q}}{1+M_{\phi}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla\phi(x)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla
a(x))-\phi(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).$
Similarly, one can prove that $u$ satisfies (2.1) if $u$ is a viscosity
supersolution.
When $1<p\leq 2$, we distinguish the following cases: $p=2,q>2$;
$p=2,q=2$; $1<p<2,q=2$; $1<p<2,q>2$; $1<p\leq q<2$. The proofs in these cases
are similar to that of the case $p>2$, using (2.7) instead of (2.6) in Lemma 2.3
where necessary.
Combining the arguments above, we complete the proof. ∎
From the proof of Theorem 1.1, the following corollaries follow.
###### Corollary 3.1 ($p$-Laplace equation).
Let $1<p<\infty$ and $u(x)$ be a continuous function in a domain
$\Omega\subset\mathbb{R}^{N}$. The equation
$-\text{\rm{div}}(\lvert\nabla u\rvert^{p-2}\nabla u)=0\quad\text{in }\Omega$
holds in the viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x)$
$\displaystyle=\dfrac{\alpha_{p}}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}u+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}u\right\\}+\beta_{p}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y)dy+o(\varepsilon^{2})\quad\text{as
}\varepsilon\rightarrow 0$
holds for all $x\in\Omega$ in the viscosity sense. Here
$\alpha_{p}+\beta_{p}=1,\frac{\alpha_{p}}{\beta_{p}}=\frac{p-2}{N+2}.$
###### Remark 3.2.
Corollary 3.1 is the main result in [26]. In fact, Corollary 3.1 also holds
for $p=\infty$ with $\alpha_{p}=1$, $\beta_{p}=0$.
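For $p=2$, Corollary 3.1 gives $\alpha_{2}=0$, $\beta_{2}=1$, and the formula reduces to the classical mean value property, which is exact for harmonic functions. The Monte Carlo check below is our own illustration with an arbitrary harmonic $u$.

```python
import numpy as np

# Our illustration: for p = 2 the weights are alpha = 0, beta = 1, and
# Corollary 3.1 reduces to the classical mean value property, exact for
# harmonic functions such as u(x, y) = x^2 - y^2.
rng = np.random.default_rng(1)
u = lambda y: y[..., 0]**2 - y[..., 1]**2
n, eps, x = 400_000, 0.7, np.array([0.3, -0.2])

# Uniform samples in the planar unit disk.
g = rng.standard_normal((n, 2))
z = g / np.linalg.norm(g, axis=1, keepdims=True) * rng.random((n, 1))**0.5
ball_avg = u(x + eps*z).mean()
```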
###### Corollary 3.3 (Variable coefficient $p$-Laplace equation).
Let $1<p<\infty$, $\tilde{a}(x)$ be a $C^{1}$ function in a domain
$\Omega\subset\mathbb{R}^{N}$ with $\tilde{a}(x)\geq 1$ and let $u(x)$ be a
continuous function in $\Omega$. The equation
$\displaystyle-\text{\rm{div}}(\tilde{a}(x)\lvert\nabla u\rvert^{p-2}\nabla
u)=0\quad\text{in }\Omega$
holds in the viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x)$
$\displaystyle=\dfrac{\alpha_{p}}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}u+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}u\right\\}+\beta_{p}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon}{4(N+p)\tilde{a}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla\tilde{a}(x))-u(y-\varepsilon\nabla\tilde{a}(x))dy+o(\varepsilon^{2})$
as $\varepsilon\rightarrow 0$, holds for all $x\in\Omega$ in the viscosity
sense. Here
$\alpha_{p}+\beta_{p}=1,\frac{\alpha_{p}}{\beta_{p}}=\frac{p-2}{N+2}$.
###### Proof.
When $q=p$ in Eq. (1.1), we have
$-\text{\rm{div}}((a(x)+1)\lvert\nabla u\rvert^{p-2}\nabla u)=0.$
In this situation, we have
$\alpha_{p}=\alpha_{q},\quad\beta_{p}=\beta_{q},\quad M_{u}(x)=a(x).$
Thus, the asymptotic mean value formula (1.2) reads as
$\displaystyle u(x)$
$\displaystyle=\dfrac{\alpha_{p}+M_{u}(x)\alpha_{q}}{2(1+M_{u}(x))}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}u+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}u\right\\}+\dfrac{\beta_{p}+M_{u}(x)\beta_{q}}{1+M_{u}(x)}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon\lvert\nabla
u(x)\rvert^{q-p}}{4(N+p)(1+M_{u}(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla
a(x))-u(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2})$
$\displaystyle=\dfrac{\alpha_{p}}{2}\left\\{\mathop{\rm{max}}\limits_{\overline{B_{\varepsilon}(x)}}u+\mathop{\rm{min}}\limits_{\overline{B_{\varepsilon}(x)}}u\right\\}+\beta_{p}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y)dy$
$\displaystyle\quad+\dfrac{\varepsilon}{4(N+p)(1+a(x))}\mathchoice{{\vbox{\hbox{$\textstyle-$}}\kern-4.86108pt}}{{\vbox{\hbox{$\scriptstyle-$}}\kern-3.25pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-2.29166pt}}{{\vbox{\hbox{$\scriptscriptstyle-$}}\kern-1.875pt}}\\!\int_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla
a(x))-u(y-\varepsilon\nabla a(x))dy+o(\varepsilon^{2}).$
Letting $\tilde{a}(x)=a(x)+1$ completes the proof. ∎
###### Remark 3.4.
In fact, the condition $\tilde{a}(x)\geq 1$ can be replaced by
$\tilde{a}(x)>0$ and we can prove the conclusion by the same method as in
Theorem 1.1.
Finally, we consider the $p(x)$-Laplace equation
$\displaystyle-\text{\rm{div}}(\lvert\nabla u\rvert^{p(x)-2}\nabla
u)=0\quad\text{in }\Omega.$ (3.1)
Let us formally expand the left-hand side of Eq. (3.1) as follows:
$\displaystyle\quad\text{\rm{div}}(\lvert\nabla u\rvert^{p(x)-2}\nabla u)$
$\displaystyle=\sum_{i=1}^{N}\partial_{i}(\lvert\nabla u\rvert^{p(x)-2}u_{i})$
$\displaystyle=\sum_{i=1}^{N}\partial_{i}(\lvert\nabla
u\rvert^{p(x)-2})u_{i}+\lvert\nabla u\rvert^{p(x)-2}\Delta u$
$\displaystyle=\sum_{i=1}^{N}\partial_{i}(e^{(p(x)-2)\ln\lvert\nabla
u\rvert})u_{i}+\lvert\nabla u\rvert^{p(x)-2}\Delta u$
$\displaystyle=\sum_{i=1}^{N}\lvert\nabla u\rvert^{p(x)-2}(\ln\lvert\nabla
u\rvert\partial_{i}p+(p(x)-2)\partial_{i}\ln\lvert\nabla
u\rvert)u_{i}+\lvert\nabla u\rvert^{p(x)-2}\Delta u$
$\displaystyle=\sum_{i=1}^{N}\lvert\nabla u\rvert^{p(x)-2}(\ln\lvert\nabla
u\rvert p_{i}+(p(x)-2)\lvert\nabla
u\rvert^{-2}\sum_{j=1}^{N}u_{ij}u_{j})u_{i}+\lvert\nabla
u\rvert^{p(x)-2}\Delta u$ $\displaystyle=\lvert\nabla
u\rvert^{p(x)-2}\left(\ln\lvert\nabla u\rvert\langle\nabla p,\nabla
u\rangle+(p(x)-2)\Delta_{\infty}u+\Delta u\right).$
Suppose that $u$ is a smooth function with $\nabla u\neq 0$. Then $u$ is a
solution to Eq. (3.1) if and only if
$-(p(x)-2)\Delta_{\infty}u-\Delta u-\ln\lvert\nabla u\rvert\langle\nabla
p,\nabla u\rangle=0.$
Note that the term $\langle\nabla p,\nabla u\rangle$ appears in the equation
above. We can still use Lemma 2.6 to obtain the asymptotic mean value formula
by the same method as in Theorem 1.1; it is given in the following theorem
without proof.
###### Theorem 3.5 ($p(x)$-Laplace equation).
Let $p(x)$ be a $C^{1}$ function in a domain $\Omega\subset\mathbb{R}^{N}$
with $1<p(x)<\infty$ and $u(x)$ be a continuous function in $\Omega$. Then Eq.
(3.1) holds in the viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x)$
$\displaystyle=\dfrac{\alpha_{p}(x)}{2}\left\{\max_{\overline{B_{\varepsilon}(x)}}u+\min_{\overline{B_{\varepsilon}(x)}}u\right\}+\beta_{p}(x)\fint_{B_{\varepsilon}(x)}u(y)\,dy$
$\displaystyle\quad+\dfrac{\varepsilon\ln\lvert\nabla u(x)\rvert}{4(N+p(x))}\fint_{B_{\varepsilon}(x)}u(y+\varepsilon\nabla p(x))-u(y-\varepsilon\nabla p(x))\,dy+o(\varepsilon^{2})$
as $\varepsilon\rightarrow 0$, holds for all $x\in\Omega$ in the viscosity
sense. Here
$\alpha_{p}(x)+\beta_{p}(x)=1,\frac{\alpha_{p}(x)}{\beta_{p}(x)}=\frac{p(x)-2}{N+2}$.
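For reference, the two normalization conditions above determine the coefficients explicitly; a direct computation gives

```latex
% From \alpha_p(x)+\beta_p(x)=1 and \alpha_p(x)/\beta_p(x)=(p(x)-2)/(N+2):
\alpha_{p}(x)=\frac{p(x)-2}{N+p(x)},\qquad
\beta_{p}(x)=\frac{N+2}{N+p(x)}.
```

Note that for $1<p(x)<2$ the coefficient $\alpha_{p}(x)$ is negative, as in the constant-exponent case.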
## 4\. Parabolic case
In this section, we start from the definition of viscosity solutions to the
normalized double phase parabolic equation, and combine the ideas in [27] to
investigate the possible form of the mean value formula. We integrate the
terms with $p$ and $q$ over different time intervals, and find that when these
two time lags satisfy the viscosity condition, the mean value formula holds.
We first give the definition of viscosity solutions to Eq. (1.4). A similar
definition can be found in [27, Definition 1].
###### Definition 4.1.
A function $u:\Omega_{T}\rightarrow\mathbb{R}$ is a viscosity solution to
(1.4) if $u$ is continuous and whenever $(x,t)\in\Omega_{T}$ and $\phi\in
C^{2}(\Omega_{T})$ is such that
* (i)
$u(x,t)=\phi(x,t)$.
* (ii)
$u(y,s)>\phi(y,s)$ for all $(y,s)\in\Omega_{T},(y,s)\neq(x,t)$, then we have
at the point $(x,t)$
$\begin{cases}\phi_{t}\geq(p-2)\Delta_{\infty}\phi+\Delta\phi+a\lvert\nabla\phi\rvert^{q-p}\left((q-2)\Delta_{\infty}\phi+\Delta\phi\right)\\
\qquad+\lvert\nabla\phi\rvert^{q-p}\langle\nabla a,\nabla\phi\rangle&\text{ if }\nabla\phi(x,t)\neq 0,\\
\phi_{t}\geq\lambda_{\min}((p-2)D^{2}\phi)+\Delta\phi&\text{ if }\nabla\phi(x,t)=0.\end{cases}$
In addition, when the test function $\phi$ touches $u$ from above, all
inequalities are reversed and $\lambda_{\min}((p-2)D^{2}\phi)$ is replaced by
$\lambda_{\max}((p-2)D^{2}\phi)$.
In fact, the number of test functions $\phi$ can be reduced: if the gradient
of a test function $\phi$ vanishes, we may suppose $D^{2}\phi=0$. Nothing is
required if $\nabla\phi=0$ and $D^{2}\phi\neq 0$. We state the following lemma
without proof (see [27, Lemma 2] for details).
###### Lemma 4.2.
A function $u:\Omega_{T}\rightarrow\mathbb{R}$ is a viscosity solution to Eq.
(1.4) if $u$ is continuous and whenever $(x,t)\in\Omega_{T}$ and $\phi\in
C^{2}(\Omega_{T})$ is such that
* (i)
$u(x,t)=\phi(x,t)$.
* (ii)
$u(y,s)>\phi(y,s)$ for all $(y,s)\in\Omega_{T},(y,s)\neq(x,t)$, then at the
point $(x,t)$, if $\nabla\phi(x,t)\neq 0$, we have
$\displaystyle\begin{split}\phi_{t}&\geq(p-2)\Delta_{\infty}\phi+\Delta\phi+a\lvert\nabla\phi\rvert^{q-p}\left((q-2)\Delta_{\infty}\phi+\Delta\phi\right)\\
&\quad+\lvert\nabla\phi\rvert^{q-p}\langle\nabla a,\nabla\phi\rangle;\end{split}$
(4.1)
if $\nabla\phi(x,t)=0$ and $D^{2}\phi(x,t)=0$, we have
$\displaystyle\phi_{t}(x,t)\geq 0.$ (4.2)
In addition, when the test function $\phi$ touches $u$ from above, all
inequalities are reversed.
The definition of $u$ satisfying the asymptotic mean value formula (1.4) at
the point $(x,t)$ in the viscosity sense is similar to Definition 2.1, so we
omit it. It is worth mentioning, however, that $\nabla\phi(x,t)=0$ is allowed
in the parabolic case, which is consistent with Definition 4.1.
Similar to the elliptic case, we also need the following lemmas. The ideas of
Lemmas 4.3–4.5 come from [27, Section 3] and the proofs are similar.
###### Lemma 4.3.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $(x,t)$,
$\varepsilon>0,A(x,t)>0,s\in(t-\frac{\varepsilon^{2}}{A(x,t)},t)$. Denote by
$x^{\varepsilon,s}_{1},x^{\varepsilon,s}_{2}$ points in which $\phi$ attains
its minimum and maximum over a ball $\overline{B_{\varepsilon}(x)}$ at time
$s$ respectively. We have
$\displaystyle\begin{split}&\quad\dfrac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{A(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds-\phi(x,t)\\
&\geq\frac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{A(x,t)}}^{t}\left<D^{2}\phi(x,t)(x^{\varepsilon,s}_{1}-x),(x^{\varepsilon,s}_{1}-x)\right>ds-\frac{\varepsilon^{2}}{2A(x,t)}\phi_{t}(x,t)+o(\varepsilon^{2})\end{split}$
(4.3)
and
$\displaystyle\begin{split}&\quad\dfrac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{A(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds-\phi(x,t)\\
&\leq\frac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{A(x,t)}}^{t}\left<D^{2}\phi(x,t)(x^{\varepsilon,s}_{2}-x),(x^{\varepsilon,s}_{2}-x)\right>ds-\frac{\varepsilon^{2}}{2A(x,t)}\phi_{t}(x,t)+o(\varepsilon^{2}).\end{split}$
(4.4)
###### Lemma 4.4.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $(x,t)$ with
$\nabla\phi(x,t)\neq 0$. We have
$\displaystyle\lim\limits_{\varepsilon\rightarrow
0+}\frac{x^{\varepsilon,s}_{1}-x}{\varepsilon}=-\dfrac{\nabla\phi}{\lvert\nabla\phi\rvert}(x,t),$
(4.5)
where $x^{\varepsilon,s}_{1}$ is defined as in Lemma 4.3.
###### Lemma 4.5.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $(x,t)$, where $s$ and
$A(x,t)$ are defined as in Lemma 4.3. Then
$\displaystyle\fint_{t-\frac{\varepsilon^{2}}{A(x,t)}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds-\phi(x,t)=\dfrac{\varepsilon^{2}}{2(N+2)}\Delta\phi(x,t)-\dfrac{\varepsilon^{2}}{2A(x,t)}\phi_{t}(x,t)+o(\varepsilon^{2}).$
###### Lemma 4.6.
Let $\phi$ be a $C^{2}$ function in a neighborhood of $(x,t)$. We have
$\displaystyle\fint_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla a(x,t),t)-\phi(y-\varepsilon\nabla a(x,t),t)\,dy=2\varepsilon\left<\nabla\phi(x,t),\nabla a(x,t)\right>+o(\varepsilon^{2}).$
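Lemma 4.6 follows from a direct Taylor expansion; a brief sketch, using only the $C^{2}$ regularity assumed above:

```latex
\phi(y+\varepsilon\nabla a(x,t),t)-\phi(y-\varepsilon\nabla a(x,t),t)
  =2\varepsilon\left<\nabla\phi(y,t),\nabla a(x,t)\right>+o(\varepsilon^{2}),
```

since the even-order terms of the two expansions cancel. Writing $\nabla\phi(y,t)=\nabla\phi(x,t)+D^{2}\phi(x,t)(y-x)+o(\varepsilon)$ and averaging over $B_{\varepsilon}(x)$, the term $D^{2}\phi(x,t)(y-x)$ integrates to zero by symmetry, leaving $2\varepsilon\left<\nabla\phi(x,t),\nabla a(x,t)\right>+o(\varepsilon^{2})$.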
Now we are ready to prove the second main result.
###### Proof of Theorem 1.4.
We first prove the sufficiency: if $u$ satisfies the asymptotic mean value
formula in Theorem 1.4 in the viscosity sense, we need to prove that $u$ is a
viscosity solution. In this case, for a test function $\phi$, we have
$\displaystyle 0$
$\displaystyle\geq-\phi(x,t)+\dfrac{1}{1+M_{\phi}(x,t)}\left(\dfrac{\alpha_{p}}{2}\fint_{t-\frac{\varepsilon^{2}}{A_{\phi}(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds\right.$
$\displaystyle\quad\left.+\beta_{p}\fint_{t-\frac{\varepsilon^{2}}{A_{\phi}(x,t)}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds\right)$
$\displaystyle\quad+\dfrac{M_{\phi}(x,t)}{1+M_{\phi}(x,t)}\left(\dfrac{\alpha_{q}}{2}\fint_{t-\frac{\varepsilon^{2}}{B_{\phi}(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds\right.$
$\displaystyle\quad\left.+\beta_{q}\fint_{t-\frac{\varepsilon^{2}}{B_{\phi}(x,t)}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds\right)$
$\displaystyle\quad+\frac{\varepsilon\lvert\nabla\phi(x,t)\rvert^{q-p}}{4(N+p)(1+M_{\phi}(x,t))}\fint_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla a(x,t),t)-\phi(y-\varepsilon\nabla a(x,t),t)\,dy$
$\displaystyle\quad+o(\varepsilon^{2}).$
By the non-negativity of $M_{\phi}(x,t)$ and splitting $\phi(x,t)$, we get
$\displaystyle 0$
$\displaystyle\geq\alpha_{p}\left(\dfrac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{A_{\phi}(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds-\phi(x,t)\right)$
$\displaystyle\quad+\beta_{p}\left(\fint_{t-\frac{\varepsilon^{2}}{A_{\phi}(x,t)}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds-\phi(x,t)\right)$
$\displaystyle\quad+M_{\phi}(x,t)\left[\alpha_{q}\left(\dfrac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{B_{\phi}(x,t)}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds-\phi(x,t)\right)\right.$
$\displaystyle\quad\left.+\beta_{q}\left(\fint_{t-\frac{\varepsilon^{2}}{B_{\phi}(x,t)}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds-\phi(x,t)\right)\right]$
$\displaystyle\quad+\frac{\varepsilon\lvert\nabla\phi(x,t)\rvert^{q-p}}{4(N+p)}\fint_{B_{\varepsilon}(x)}\phi(y+\varepsilon\nabla a(x,t),t)-\phi(y-\varepsilon\nabla a(x,t),t)\,dy+o(\varepsilon^{2}),$ (4.6)
where
$\alpha_{p},\alpha_{q},\beta_{p},\beta_{q},M_{\phi}(x,t),A_{\phi}(x,t),B_{\phi}(x,t)$
are determined by (1.6).
Assume that $p>2$, so that $\alpha_{p},\alpha_{q}>0$. For inequality (4.6), we
apply Lemmas 4.3, 4.5 and 4.6 to obtain
$\displaystyle 0$
$\displaystyle\geq\frac{\alpha_{p}}{2}\fint_{t-\frac{\varepsilon^{2}}{A_{\phi}(x,t)}}^{t}\left<D^{2}\phi(x,t)(x^{\varepsilon,s}_{1}-x),(x^{\varepsilon,s}_{1}-x)\right>ds+\dfrac{\varepsilon^{2}\beta_{p}}{2(N+2)}\Delta\phi(x,t)$
$\displaystyle\quad-\dfrac{\varepsilon^{2}}{2A_{\phi}(x,t)}\phi_{t}(x,t)+M_{\phi}(x,t)\Bigg(\frac{\alpha_{q}}{2}\fint_{t-\frac{\varepsilon^{2}}{B_{\phi}(x,t)}}^{t}\left<D^{2}\phi(x,t)(x^{\varepsilon,s}_{1}-x),(x^{\varepsilon,s}_{1}-x)\right>ds$
$\displaystyle\quad+\dfrac{\varepsilon^{2}\beta_{q}}{2(N+2)}\Delta\phi(x,t)-\dfrac{\varepsilon^{2}}{2B_{\phi}(x,t)}\phi_{t}(x,t)\Bigg)+\dfrac{\varepsilon^{2}\lvert\nabla\phi(x,t)\rvert^{q-p}}{2(N+p)}\left<\nabla\phi(x,t),\nabla a(x,t)\right>$
$\displaystyle\quad+o(\varepsilon^{2}).$
When $\nabla\phi(x,t)\neq 0$, multiplying the inequality above by
$\dfrac{2}{\varepsilon^{2}}$ and taking the limit as $\varepsilon\rightarrow
0$, by Lemma 4.4, we have
$\displaystyle 0$
$\displaystyle\geq\alpha_{p}\Delta_{\infty}\phi(x,t)+\dfrac{\beta_{p}}{N+2}\Delta\phi(x,t)-\frac{1}{A_{\phi}(x,t)}\phi_{t}(x,t)$
$\displaystyle\quad+M_{\phi}(x,t)\left(\alpha_{q}\Delta_{\infty}\phi(x,t)+\dfrac{\beta_{q}}{N+2}\Delta\phi(x,t)-\frac{1}{B_{\phi}(x,t)}\phi_{t}(x,t)\right)$
$\displaystyle\quad+\dfrac{\lvert\nabla\phi(x,t)\rvert^{q-p}}{N+p}\left<\nabla\phi(x,t),\nabla
a(x,t)\right>.$
Multiplying by $N+p$, we get
$\displaystyle 0$
$\displaystyle\geq(p-2)\Delta_{\infty}\phi(x,t)+\Delta\phi(x,t)+a(x,t)\lvert\nabla\phi(x,t)\rvert^{q-p}((q-2)\Delta_{\infty}\phi(x,t)$
$\displaystyle\quad+\Delta\phi(x,t))-\left(\dfrac{N+p}{A_{\phi}(x,t)}+\dfrac{a(x,t)(N+q)\lvert\nabla\phi(x,t)\rvert^{q-p}}{B_{\phi}(x,t)}\right)\phi_{t}(x,t)$
$\displaystyle\quad+\lvert\nabla\phi(x,t)\rvert^{q-p}\left<\nabla\phi(x,t),\nabla
a(x,t)\right>.$
Recalling (1.6), we have
$\displaystyle\dfrac{N+p}{A_{\phi}(x,t)}+\dfrac{a(x,t)(N+q)\lvert\nabla\phi(x,t)\rvert^{q-p}}{B_{\phi}(x,t)}=1.$
(4.7)
Therefore, we obtain
$\phi_{t}\geq(p-2)\Delta_{\infty}\phi+\Delta\phi+a\lvert\nabla\phi|^{q-p}\left((q-2)\Delta_{\infty}\phi+\Delta\phi\right)+\lvert\nabla\phi|^{q-p}\langle\nabla
a,\nabla\phi\rangle.$
It follows that (4.1) holds when $\nabla\phi(x,t)\neq 0$. When
$\nabla\phi(x,t)=0$ and $D^{2}\phi(x,t)=0$, by (4.7), we get
$A_{\phi}(x,t)=N+p$ and $M_{\phi}(x,t)=0$. According to the asymptotic mean
value formula, we have
$\displaystyle 0$
$\displaystyle\geq-\phi(x,t)+\frac{\alpha_{p}}{2}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds$
$\displaystyle\quad+\beta_{p}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\fint_{B_{\varepsilon}(x)}\phi(y,s)\,dyds+o(\varepsilon^{2}).$
By Lemma 4.5 and the expansion
$\phi(y,s)-\phi(x,t)=\phi_{t}(x,t)(s-t)+o(|s-t|+|y-x|^{2}),$
we have
$\displaystyle 0$
$\displaystyle\geq\alpha_{p}\left(\frac{1}{2}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}\phi(y,s)\right\}ds-\phi(x,t)\right)$
$\displaystyle\quad-\frac{\varepsilon^{2}\beta_{p}}{2(N+p)}\phi_{t}(x,t)+o(\varepsilon^{2})$
$\displaystyle=\frac{\alpha_{p}}{2}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}(\phi(y,s)-\phi(x,t))+\min_{y\in\overline{B_{\varepsilon}(x)}}(\phi(y,s)-\phi(x,t))\right\}ds$
$\displaystyle\quad-\dfrac{\varepsilon^{2}\beta_{p}}{2(N+p)}\phi_{t}(x,t)+o(\varepsilon^{2})$
$\displaystyle=\alpha_{p}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\phi_{t}(x,t)(s-t)\,ds-\frac{\varepsilon^{2}\alpha_{p}}{2(N+p)}\phi_{t}(x,t)\cdot 0-\frac{\varepsilon^{2}\beta_{p}}{2(N+p)}\phi_{t}(x,t)+o(\varepsilon^{2})$
$\displaystyle=-\frac{\varepsilon^{2}\alpha_{p}}{2(N+p)}\phi_{t}(x,t)-\frac{\varepsilon^{2}\beta_{p}}{2(N+p)}\phi_{t}(x,t)+o(\varepsilon^{2})$
$\displaystyle=-\frac{\varepsilon^{2}}{2(N+p)}\phi_{t}(x,t)+o(\varepsilon^{2}).$
Dividing by $\varepsilon^{2}$ and taking the limit as $\varepsilon\rightarrow
0$, we have
$\phi_{t}(x,t)\geq 0.$
Thus, we have proved that $u$ is a viscosity supersolution. The same method
shows that $u$ is a viscosity subsolution. The proof of the necessity and of
the other cases is similar to the elliptic case, so we omit it. The proof is
complete. ∎
In particular, we consider the case that $a\equiv 0$. For this case, it
follows from (1.6) that $A_{u}(x,t)=N+p$. Then the following corollary holds.
###### Corollary 4.7 (Normalized parabolic $p$-Laplace equation).
Let $1<p<\infty$ and $u(x,t)$ be a continuous function in a domain
$\Omega_{T}$. The equation
$u_{t}=\lvert\nabla u\rvert^{2-p}\text{\rm{div}}(\lvert\nabla
u\rvert^{p-2}\nabla u)\quad\text{in }\Omega_{T}$
holds in the viscosity sense if and only if the asymptotic expansion
$\displaystyle u(x,t)$
$\displaystyle=\dfrac{\alpha_{p}}{2}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\left\{\max_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)+\min_{y\in\overline{B_{\varepsilon}(x)}}u(y,s)\right\}ds$
$\displaystyle\quad+\beta_{p}\fint_{t-\frac{\varepsilon^{2}}{N+p}}^{t}\fint_{B_{\varepsilon}(x)}u(y,s)\,dyds+o(\varepsilon^{2})\quad\text{as }\varepsilon\rightarrow 0$
holds for all $(x,t)\in\Omega_{T}$ in the viscosity sense. Here
$\alpha_{p}+\beta_{p}=1,\frac{\alpha_{p}}{\beta_{p}}=\frac{p-2}{N+2}.$
###### Remark 4.8.
Corollary 4.7 is the main result in [27]. In fact, Corollary 4.7 also holds
for $p=\infty$ with $\alpha_{p}=1$, $\beta_{p}=0$.
## Acknowledgment
This work was supported by the National Natural Science Foundation of China
(No. 12071098) and the Fundamental Research Funds for the Central Universities
(No. 2022FRFK060022).
## References
* [1] S. Baasandorj, S. S. Byun and J. Oh, Calderón-Zygmund estimates for generalized double phase problems, J. Funct. Anal. 279 (7) (2020), 108670, 57 pp.
* [2] P. Baroni, M. Colombo and G. Mingione, Regularity for general functionals with double phase, Calc. Var. Partial Differential Equations 57 (2) (2018), Paper No. 62, 48 pp.
* [3] P. Blanc, F. Charro, J. J. Manfredi and J. D. Rossi, A nonlinear mean value property for the Monge-Ampère operator, J. Convex Anal. 28 (2) (2021), 353–386.
* [4] P. Blanc, F. Charro, J. J. Manfredi and J. D. Rossi, Asymptotic mean value formulas for parabolic nonlinear equations, Rev. Un. Mat. Argentina 64 (1) (2022), 137–164.
* [5] P. Blanc and J. D. Rossi, Game Theory and Partial Differential Equations, De Gruyter, Berlin-Boston, 2019.
* [6] W. Blaschke, Ein Mittelwertsatz und eine kennzeichnende Eigenschaft des logarithmischen Potentials, Ber. Verh. Sächs. Akad. Wiss. Leipzig 68 (1916), 3–7.
* [7] V. Bögelein, F. Duzaar, P. Marcellini and C. Scheven, Boundary regularity for elliptic systems with $p,q$-growth, J. Math. Pures Appl. 159 (2022), 250–293.
* [8] S. S. Byun and J. Oh, Global gradient estimates for non-uniformly elliptic equations, Calc. Var. Partial Differential Equations 56 (2) (2017), Paper No. 46, 36 pp.
* [9] F. Colasuonno and M. Squassina, Eigenvalues for double phase variational integrals, Ann. Mat. Pura Appl. 195 (2016), 1917–1959.
* [10] M. Colombo and G. Mingione, Regularity for double phase variational problems, Arch. Rational Mech. Anal. 215 (2015), 443–496.
* [11] M. Colombo and G. Mingione, Bounded minimisers of double phase variational integrals, Arch. Ration. Mech. Anal. 218 (2015), 219–273.
* [12] M. Colombo and G. Mingione, Calderón-Zygmund estimates and non-uniformly elliptic operators, J. Funct. Anal. 270 (2016), 1416–1478.
* [13] C. De Filippis and G. Mingione, A borderline case of Calderón-Zygmund estimates for non-uniformly elliptic problems, St. Petersburg Math. J. 31 (3) (2019), 82–115.
* [14] C. De Filippis and G. Mingione, Lipschitz bounds and nonautonomous integrals, Arch. Ration. Mech. Anal. 242 (2021), 973–1057.
* [15] C. De Filippis and G. Mingione, Nonuniformly elliptic Schauder theory, arXiv:2201.07369.
* [16] C. De Filippis and G. Palatucci, Hölder regularity for nonlocal double phase equations, J. Differential Equations 267 (1) (2019) 547–586.
* [17] Y. Fang, V. Rădulescu, C. Zhang and X. Zhang, Gradient estimates for multi-phase problems in Campanato spaces, Indiana Univ. Math. J. 71 (3) (2022), 1079–1099.
* [18] Y. Fang and C. Zhang, Equivalence between distributional and viscosity solutions for the double-phase equation, Adv. Calc. Var. 15 (4) (2022), 811–829.
* [19] Y. Fang and C. Zhang, On weak and viscosity solutions of nonlocal double phase equations, Int. Math. Res. Not. IMRN, https://doi.org/10.1093/imrn/rnab351.
* [20] Y. Fang and C. Zhang, Regularity for quasi-linear parabolic equations with nonhomogeneous degeneracy or singularity, Calc. Var. Partial Differential Equations 62 (1) (2023), Paper No. 2, 46pp.
* [21] F. Ferrari, Q. Liu and J. J. Manfredi, On the characterization of $p$-harmonic functions on the Heisenberg group by mean value properties, Discrete Contin. Dyn. Syst. 34 (7) (2014), 2779–2793.
* [22] V. V. Jikov, S. M. Kozlov and O. A. Oleinik, Homogenization of Differential Operators and Integral Functionals, Springer, Berlin, 1994.
* [23] Ü. Kuran, On the mean-value property of harmonic functions, Bull. London Math. Soc. 4 (1972), 311–312.
* [24] E. Le Gruyer, On absolutely minimizing Lipschitz extensions and PDE $\Delta_{\infty}u=0$, NoDEA Nonlinear Differential Equations Appl. 14 (2007), 29–55.
* [25] E. Le Gruyer and J. C. Archer, Harmonious extensions, SIAM J. Math. Anal. 29 (1998), 279–292.
* [26] J. J. Manfredi, M. Parviainen and J. D. Rossi, An asymptotic mean value characterization of $p$-harmonic functions, Proc. Amer. Math. Soc. 138 (3) (2010), 881–889.
* [27] J. J. Manfredi, M. Parviainen and J. D. Rossi, An asymptotic mean value characterization for a class of nonlinear parabolic equations related to tug-of-war games, SIAM J. Math. Anal. 42 (5) (2010), 2058–2081.
* [28] P. Marcellini, Regularity and existence of solutions of elliptic equations with $p,q$-growth conditions, J. Differential Equations 90 (1991), 1–30.
* [29] N. S. Papageorgiou, A. Pudełko and V. Rădulescu, Non-autonomous $(p,q)$-equations with unbalanced growth, Math. Ann. (2022), https://doi.org/10.1007/s00208-022-02381-0.
* [30] Y. Peres, O. Schramm, S. Sheffield and D. B. Wilson, Tug-of-war and the infinity Laplacian, J. Amer. Math. Soc. 22 (2009), no. 1, 167–210.
* [31] Y. Peres and S. Sheffield, Tug-of-war with noise: A game-theoretic view of the $p$-Laplacian, Duke Math. J. 145 (1) (2008), 91–120.
* [32] I. Privaloff, Sur les fonctions harmoniques, Mat. Sb. 32 (1925), 464–471.
* [33] V. V. Zhikov, On Lavrentiev’s phenomenon, Russ. J. Math. Phys. 3 (1995), 249–269.
|
Incorporating prior knowledge of physics laws and structural properties of dynamical systems into the design of deep learning architectures has proven to be a powerful technique for improving their computational efficiency and generalization capacity. Learning accurate models of robot dynamics is critical for safe and stable control. Autonomous mobile robots, including wheeled, aerial, and underwater vehicles, can be modeled as controlled Lagrangian or Hamiltonian rigid-body systems evolving on matrix Lie groups. In this paper, we introduce a new structure-preserving deep learning architecture, the Lie group Forced Variational Integrator Network (LieFVIN), capable of learning controlled Lagrangian or Hamiltonian dynamics on Lie groups, either from position-velocity or position-only data. By design, LieFVINs preserve both the Lie group structure on which the dynamics evolve and the symplectic structure underlying the Hamiltonian or Lagrangian systems of interest. The proposed architecture learns surrogate discrete-time flow maps allowing accurate and fast prediction without numerical-integrator, neural-ODE, or adjoint techniques, which are needed for vector fields. Furthermore, the learnt discrete-time dynamics can be utilized with computationally scalable discrete-time (optimal) control strategies.
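As background for the discrete-time flow maps mentioned above, here is a minimal sketch of a symplectic (symplectic-Euler) flow map for a separable Hamiltonian $H(q,p)=p^2/2+V(q)$; structure-preserving networks such as LieFVIN learn surrogates of maps of this kind. This is not the paper's Lie-group method, and the harmonic potential $V(q)=q^2/2$ is illustrative only.

```python
import numpy as np

def symplectic_euler_step(q, p, grad_V, h):
    """One step of the symplectic Euler map for H(q, p) = p^2/2 + V(q).

    The update (q, p) -> (q_new, p_new) preserves the symplectic form,
    the structural property that symplectic architectures are built to
    retain in their learned discrete-time flow maps.
    """
    p_new = p - h * grad_V(q)      # momentum kick from the potential force
    q_new = q + h * p_new          # drift with the updated momentum
    return q_new, p_new

# Illustrative harmonic oscillator: V(q) = q^2 / 2, so grad_V(q) = q.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = symplectic_euler_step(q, p, lambda x: x, h=0.01)
# Symplecticity keeps the energy bounded (oscillating at O(h)) over
# long horizons, instead of drifting as with explicit Euler.
```
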
Dynamics Learning, Variational Integrators, Symplectic Integrators, Structure-Preserving Neural Networks, Physics-Informed Machine Learning, Predictive Control, Lie Group Dynamics
The authors gratefully acknowledge support from NSF under grants CCF-2112665, DMS-1345013, DMS-1813635 and from AFOSR under grant FA9550-18-1-0288.
|
T. Miener
# The performance of the MAGIC telescopes using deep convolutional neural
networks with CTLearn
D. Nieto, R. López-Coto, J. L. Contreras, J. G. Green, D. Green, E. Mariotti
###### Abstract
The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope system is
located on the Canary Island of La Palma and inspects the very high-energy
(VHE, few tens of GeV and above) gamma-ray sky. MAGIC consists of two imaging
atmospheric Cherenkov telescopes (IACTs), which capture images of the air
showers originating from the absorption of gamma rays and cosmic rays by the
atmosphere, through the detection of Cherenkov photons emitted in the shower.
The sensitivity of IACTs to gamma-ray sources is mainly determined by the
ability to reconstruct the properties (type, energy, and arrival direction) of
the primary particle generating the air shower. The state-of-the-art IACT
pipeline for shower reconstruction is based on the parameterization of the
shower images through extracted geometric and stereoscopic features, combined
with machine learning algorithms such as random forests or boosted decision
trees. In this
contribution, we explore deep convolutional neural networks applied directly
to the pixelized images of the camera as a promising method for IACT full-
event reconstruction and present the performance of the method on
observational data using CTLearn, a package for IACT event reconstruction that
exploits deep learning.
## 1 Introduction
In this contribution, we show how deep convolutional neural networks (CNNs)
can be utilized to detect astrophysical gamma-ray sources like the Crab Nebula
using CTLearn (https://github.com/ctlearn-project/ctlearn) [1, 2, 3, 4], a
deep learning (DL) framework for IACT event reconstruction, and DL1-Data-Handler
(https://github.com/cta-observatory/dl1-data-handler, DL1DH) [5], a
package designed for the data management of machine learning image analysis
techniques for IACT data. The results are compared to the standard analysis
(random forest (RF) for the background rejection, Look-Up tables (LUTs) for
the energy estimation and RF for bidimensional direction reconstruction)
obtained with MAGIC Analysis and Reconstruction Software MARS [6, 7]. Previous
DL analyses of MAGIC data [8] were carried out with CTLearn v0.5 based on
TensorFlow (https://www.tensorflow.org/) v1, while this work used CTLearn
v0.6, which adopted the Keras (https://keras.io/) API [9] from TensorFlow v2
[10].
The workflow of the MAGIC DL analysis with CTLearn is illustrated in Fig. 1.
First, the images are calibrated and cleaned by MARS to suppress the major
fraction of the Night Sky Background (NSB). Crucial information is translated
into uproot-readable (https://github.com/scikit-hep/uproot4) branches [11]
using a complementary macro. Then, the DL1DH assembles several data levels
from MARS and unifies them in a common data format in HDF5 designed for DL
purposes. The image preprocessing and data reading is managed by the DL1DH.
Bilinear interpolation is used to map the hexagonal pixel layout of the MAGIC
cameras to a Cartesian lattice to directly apply CNNs [12]. Finally, CTLearn
performs training and prediction with CNN-based models, allowing for full-
event reconstruction.
Figure 1: Workflow of the MAGIC DL analysis with CTLearn [8].
## 2 DL analysis with the MAGIC telescopes
### 2.1 Model selection
For this work, we selected CTLearn’s Thin-ResNet (TRN) [15] model [13, 14], a
shallow residual neural network [16] with 33 layers (the first initialization
layer of the original Thin-ResNet [15] is skipped to accommodate the specific
input shape of the MAGIC images). In each of the residual blocks, we deploy a
dual squeeze-and-excitation (SE) attention mechanism [17] to focus on the
channel relationship. We perform either particle classification or regression
(energy or arrival direction reconstruction) with a fully-connected head
(FCH), a traditional multi-layer perceptron (MLP) neural network. The
properties (type, energy, and arrival direction) of the primary particle
generating the air shower are reconstructed in single-task learning mode (see
[18] for an IACT-based multi-task learning architecture), where each task is
trained with a separate network. We exploit stereoscopic information by
concatenating the images (integrated pixel charges and signal arrival times)
of the two MAGIC telescopes channel-wise before feeding them to the network,
as depicted in Fig. 2.
Figure 2: CTLearn’s TRN model with channel-wise concatenation of the two
stereoscopic images recorded by the MAGIC telescopes (M1 and M2).
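As an illustration of the channel-wise concatenation described above, the
stereoscopic input tensor can be sketched with NumPy. The 78x78 image size is
a placeholder assumption, not the actual resolution of the MAGIC camera
mapping:

```python
import numpy as np

# Hypothetical interpolated image size (placeholder, not the real MAGIC
# mapping resolution).
H = W = 78

def stack_stereo_images(m1_charge, m1_time, m2_charge, m2_time):
    """Channel-wise concatenation of the two telescope images.

    Each telescope contributes two channels (integrated pixel charge and
    signal arrival time), giving a single (H, W, 4) tensor that a 2D CNN
    can consume directly.
    """
    return np.stack([m1_charge, m1_time, m2_charge, m2_time], axis=-1)

event = stack_stereo_images(*(np.zeros((H, W)) for _ in range(4)))
print(event.shape)  # (78, 78, 4)
```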
### 2.2 Validation on simulations
The evaluation of the performance using common metrics such as ROC curves and
energy and angular resolution curves with the same quality cuts (see Fig. 3)
is taken from [8]. A similar performance is also observed with CTLearn v0.6.
Monte Carlo (MC) gamma simulations coming uniformly from a $0.4^{\circ}$
offset of the telescope pointing (ringwobble) are used to obtain the
reconstruction performance. For the background rejection (see _left panel_ of
Fig. 3), we tested against MC proton simulations and observational off-source
data, where we do not expect any gamma-ray signal.
Figure 3: The validation of the performance is taken from [8]. _Left)_ ROC
curves with MC proton simulations and observational off data. _Center)_
Angular resolution vs. reconstructed energy. _Right)_ Energy resolution vs.
simulated energy.
### 2.3 Results on observational data
We analyzed 5.38 h of observations of the standard gamma-ray candle, the Crab
Nebula, taken with the MAGIC telescopes on four different nights in 2016 under
good weather conditions at low zenith distance (zd < $35^{\circ}$). We used
MARS and CTLearn with two settings of analysis cuts (in background suppression
and reconstructed energy), focusing on the medium energy (ME; E > $250$ GeV)
and low energy (LE; E > $100$ GeV) ranges. For a fair comparison between the
different analysis methods, the background (bkg) rates of the CTLearn analyses
are adjusted, through a fine-tuning of the background suppression cut, to
match those of the corresponding standard MARS analyses (ME or LE). The Crab
Nebula is detected using $\theta^{2}$ plots (see Fig. 4 for the CTLearn ME
analysis), where $\theta$ is the angular separation between the source
position and the reconstructed arrival direction of the very high-energy
photon. The main results of all analyses are summarized in Tab. 1. The same
arrival direction cuts, which define the fiducial gamma-ray signal region in
the $\theta^{2}$ plots, are applied to all analysis methods. Three off-source
positions are considered to evaluate the background distributions. The
sensitivity is computed as the strength of the source that gives
$\mathrm{excess}/\sqrt{\mathrm{background}} = 5$ after 50 h, with the
additional condition excess/background > 5%, and is given as a percentage of
the Crab Nebula flux. The significance is calculated following Li & Ma [19].
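As a sketch (not part of the MARS/CTLearn pipelines), the significance of
Eq. (17) in Li & Ma [19] can be computed as follows. The raw off count of
roughly $3 \times 45.3 \approx 136$ for the ME analysis is an inference from
Tab. 1, since the tabulated $N_{off}$ is already scaled by the number of
off-source positions:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. (17): significance of an on-source excess.

    n_on and n_off are raw event counts in the on- and off-source regions;
    alpha is the on/off exposure ratio (1/3 here, since three off-source
    positions are used).
    """
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (term_on + term_off))

# Inferred raw counts for the ME analysis (N_off ~ 3 x 45.3); this roughly
# reproduces the 66.6 sigma quoted in Tab. 1.
print(round(li_ma_significance(1934, 136, alpha=1 / 3), 1))
```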
Analysis | $N_{on}$ | $N_{off}$ | $N_{ex}$ | $\gamma$ rate [/min] | bkg rate [/min] | Sen. [% Crab] | Sig. (Li&Ma)
---|---|---|---|---|---|---|---
MARS – ME | $1934$ | $45.3\pm 3.9$ | $1888.7\pm 44.1$ | $5.85\pm 0.14$ | $0.140\pm 0.012$ | $0.58\pm 0.03$ | $66.6\sigma$
CTLearn – ME | $1907$ | $46.0\pm 3.9$ | $1861.0\pm 43.8$ | $5.77\pm 0.14$ | $0.143\pm 0.012$ | $0.60\pm 0.03$ | $66.0\sigma$
MARS – LE | $7933$ | $1827.3\pm 24.7$ | $6105.7\pm 92.4$ | $18.91\pm 0.29$ | $5.661\pm 0.076$ | $1.50\pm 0.01$ | $83.7\sigma$
CTLearn – LE | $7889$ | $1826.3\pm 24.7$ | $6062.7\pm 92.2$ | $18.78\pm 0.29$ | $5.658\pm 0.076$ | $1.51\pm 0.01$ | $83.2\sigma$
Table 1: Summary of all performed analyses (LE/ME and MARS/CTLearn) of the
same Crab Nebula sample.
Figure 4: $\theta^{2}$ plot for the CTLearn ME analysis.
## 3 Conclusions and outlook
This contribution shows that CNN-based full-event reconstruction works for MC
simulations and observational data of the MAGIC telescopes. The performance
obtained with CTLearn v0.6 matches the detection sensitivity of the
conventional analysis on real data. The selected TRN model is relatively
shallow, and further performance enhancements are foreseen by increasing the
model depth and complexity. In the future, we plan to evaluate the full
performance of the MAGIC telescopes with CNN-based analyses under various
observation conditions.
## References
* [1] Brill et al. CTLearn v0.6.0: Deep learning for imaging atmospheric Cherenkov telescopes event reconstruction, Zenodo [10.5281/zenodo.6842323], (2022).
* [2] Nieto et al. Exploring deep learning as an event classification method for the Cherenkov Telescope Array, Proceedings of $35^{\text{th}}$ International Cosmic Ray Conference (ICRC) 301, 809 (2017).
* [3] Nieto et al. CTLearn: Deep Learning for Gamma-ray Astronomy, Proceedings of $36^{\text{th}}$ International Cosmic Ray Conference (ICRC) 358, 752 (2019).
* [4] Nieto et al. Reconstruction of IACT events using deep learning techniques with CTLearn, Proceedings of XXX Astronomical Data Analysis Software and Systems (ADASS) conference 532, 191 (2022).
* [5] Kim et al. DL1-Data-Handler v0.10.8: DL1 HDF5 writer, reader, and processor for IACT data, Zenodo [10.5281/zenodo.7053921], (2022).
* [6] Zanin et al. MARS, The MAGIC Analysis and Reconstruction Software, Proceedings of $33^{\text{rd}}$ International Cosmic Ray Conference (ICRC), 773 (2013).
* [7] Aleksić et al. The major upgrade of the MAGIC telescopes, Part II: A performance study using observations of the Crab Nebula, Astroparticle Physics 72, 76 (2016).
* [8] Miener et al. IACT event analysis with the MAGIC telescopes using deep convolutional neural networks with CTLearn, Proceedings of XXXI Astronomical Data Analysis Software and Systems (ADASS) conference [arXiv:2112.01828], (2021).
* [9] Chollet et al. Keras, https://keras.io (2015).
* [10] TensorFlow Developers TensorFlow v2.8.0, Zenodo [10.5281/zenodo.5949125], (2022).
* [11] Pivarski et al. scikit-hep/uproot4: 4.1.4, Zenodo [10.5281/zenodo.5567737], (2021).
* [12] Nieto et al. Studying Deep Convolutional Neural Networks With Hexagonal Lattices for Imaging Atmospheric Cherenkov Telescope Event Reconstruction, Proceedings of $36^{\text{th}}$ International Cosmic Ray Conference (ICRC) 358, 753 (2019).
* [13] Grespan et al. Deep-learning-driven event reconstruction applied to simulated data from a single Large-Sized Telescope of CTA, Proceedings of $37^{\text{th}}$ International Cosmic Ray Conference (ICRC) 395, 771 (2021).
* [14] Miener et al. Reconstruction of stereoscopic CTA events using deep learning with CTLearn, Proceedings of $37^{\text{th}}$ International Cosmic Ray Conference (ICRC) 395, 730 (2021).
* [15] Xie et al. Utterance-level Aggregation For Speaker Recognition In The Wild, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , 5791 (2019).
* [16] He et al. Deep Residual Learning for Image Recognition, Proceedings of the IEEE conference on computer vision and pattern recognition , 770 (2016).
* [17] Hu et al. Squeeze-and-excitation networks, Proceedings of the IEEE conference on computer vision and pattern recognition , 7132 (2018).
* [18] Vuillaume et al. Analysis of the Cherenkov Telescope Array first Large-Sized Telescope real data using convolutional neural networks, Proceedings of $37^{\text{th}}$ International Cosmic Ray Conference (ICRC) 395, 703 (2021).
* [19] Li and Ma Analysis methods for results in gamma-ray astronomy, Astrophys. J. 272, 317 (1983).
## Acknowledgments
We would like to thank the Instituto de Astrofísica de Canarias for the
excellent working conditions at the Observatorio del Roque de los Muchachos in
La Palma. The financial support of the German BMBF, MPG and HGF; the Italian
INFN and INAF; the Swiss National Fund SNF; the ERDF under the Spanish
Ministerio de Ciencia e Innovación (MICINN) (FPA2017-87859-P, FPA2017-
85668-P, FPA2017-82729-C6-5-R, FPA2017-90566-REDC, PID2019-104114RB-C31,
PID2019-104114RB-C32, PID2019- 105510GB-C31C42, PID2019- 107847RB-C44,
PID2019-107988GB-C22); the Indian Department of Atomic Energy; the Japanese
ICRR, the University of Tokyo, JSPS, and MEXT; the Bulgarian Ministry of
Education and Science, National RI Roadmap Project DO1-268/16.12.2019 and the
Academy of Finland grant nr. 317637 and 320045 are gratefully acknowledged.
This work was also supported by the Spanish Centro de Excelencia “Severo
Ochoa” SEV-2016-0588, SEV-2017-0709 and CEX2019-000920-S, and “María de
Maeztu” CEX2019-000918-M, the Unidad de Excelencia “María de Maeztu”
MDM-2015-0509-18-2 and the “la Caixa” Foundation (fellowship
LCF/BQ/PI18/11630012), by the Croatian Science Foundation (HrZZ) Project
IP-2016-06-9782 and the University of Rijeka Project 13.12.1.3.02, by the DFG
Collaborative Research Centers SFB823/C4 and SFB876/C3, the Polish National
Research Centre grant UMO-2016/22/M/ST9/00382 and by the Brazilian MCTIC, CNPq
and FAPERJ. TM acknowledges support from PID2019-104114RB-C32. JLC and DN
acknowledges partial support from The European Science Cluster of Astronomy &
Particle Physics ESFRI Research Infrastructures funded by the European Union’s
Horizon 2020 research and innovation program under Grant Agreement no. 824064.
We acknowledge the support of NVIDIA Corporation with the donation of a Titan
X Pascal GPU used for part of this research.
This paper has gone through internal review by the MAGIC Collaboration.
Graph Search based Polar Code Design
Marvin Geiselhart, Andreas Zunker, Ahmed Elkelesh, Jannis Clausius and Stephan ten Brink
Institute of Telecommunications, Pfaffenwaldring 47, University of Stuttgart, 70569 Stuttgart, Germany
This work is supported by the German Federal Ministry of Education and Research (BMBF) within the project Open6GHub (grant no. 16KISK019).
Acronyms:
ML: maximum likelihood
BP: belief propagation
BPL: belief propagation list
LDPC: low-density parity-check
BER: bit error rate
BPSK: binary phase shift keying
AWGN: additive white Gaussian noise
LLR: log-likelihood ratio
MAP: maximum a posteriori
FER: frame error rate
BLER: block error rate
SCL: successive cancellation list
SC: successive cancellation
BI-DMC: Binary Input Discrete Memoryless Channel
CRC: cyclic redundancy check
CA-SCL: CRC-aided successive cancellation list
BEC: Binary Erasure Channel
BSC: Binary Symmetric Channel
3GPP: 3rd Generation Partnership Project
eMBB: enhanced Mobile Broadband
CN: check node
VN: variable node
GenAlg: Genetic Algorithm
CSI: Channel State Information
OSD: ordered statistic decoding
MWPC-BP: minimum-weight parity-check BP
FFG: Forney-style factor graph
MBBP: multiple-bases belief propagation
URLLC: ultra-reliable low-latency communications
DMC: discrete memoryless channel
SGD: stochastic gradient descent
NN: neural network
5G: fifth generation mobile telecommunication
SCAN: soft cancellation
AED: automorphism ensemble decoding
CCDF: complementary cumulative distribution function
It is well known that, to reach their full potential, polar codes must be designed with their intended decoding algorithm in mind. While for SC decoding information-theoretically optimal constructions are available, the code design for other decoding algorithms (such as BP decoding) can only be optimized using extensive Monte Carlo simulations.
We propose to view the design process of polar codes as a graph search problem and thereby approach it more systematically. Based on this formalism, the design-time complexity can be significantly reduced compared to state-of-the-art GenAlg and deep learning-based design algorithms. Moreover, sequences of rate-compatible polar codes can be found efficiently. Finally, we analyze both the complexity of the proposed algorithm and the error-rate performance of the constructed codes.
§ INTRODUCTION
Polar codes, introduced by Arıkan, have attracted much interest due to their theoretical capability to achieve the capacity of the BI-DMC under SC decoding [1] and their standardization in 5G.
In the short blocklength regime, however, the performance of polar codes under SC decoding is not satisfactory. Therefore, alternative decoding algorithms have been proposed to improve the error-rate performance (e.g., SCL decoding [2], AED [3]), providing soft output (e.g., SCAN decoding [4]) or reducing the latency (e.g., BP decoding [5]).
It has been shown that different channels and decoding algorithms require different code designs to achieve the best possible error-rate performance [6].
While polar code design for BI-DMC under SC decoding is well studied, there exists no explicit construction optimized for other decoding algorithms.
Consequently, finding suitable code designs for these decoders is either based on sub-optimal approximations and heuristics or requires extensive Monte Carlo simulations.
In [7], codes for SCL decoding are designed based on a heuristic, while in [8], designs are hand-crafted based on an information-theoretic analysis of the decoder.
Density evolution and its Gaussian approximation have been used in [9] and [10, 11, 12], respectively, to design polar codes. For iterative BP decoding, LLR evolution has been proposed in [13].
More generally applicable code design algorithms are based on Monte Carlo methods. In [14], the bitwise BER is used to find reliable synthetic channels and successively generate the code design in a greedy fashion.
A similar approach is used in [15], where the actual performance of the codes is simulated instead of the BER.
To allow for a broader search than greedy algorithms, the use of a GenAlg has been proposed in [6]. Here, each code design is treated as an individual in a population that evolves over multiple generations using selection, crossover and mutation. Since then, the efficiency of GenAlg has been improved by better crossover algorithms and caching [16].
Further, machine learning methods were applied to polar code design. In [17], the code design is learned via gradient descent through an unrolled BP decoder. More recently, polar codes were learned via reinforcement learning [18]. In [19], a NN is trained to predict the FER performance of polar code designs and then, a projected gradient algorithm is used to find the input to the NN (i.e., a polar code design) that minimizes the FER.
The main contributions of this paper can be summarized as follows:
* We present a new perspective on polar code design as a problem on a graph.
* First algorithms to optimize single code designs and rate-compatible reliability sequences are proposed.
* We propose the use of confidence intervals as a general method to reduce the complexity of Monte Carlo simulation based code search.
§ PRELIMINARIES
Figure 1: Sequence of $(N=32,k)$ polar codes for the AWGN channel and BP decoding. Black dots represent the best possible polar code for each code dimension $k$, while gray dots are sub-optimal codes required to create a reliability sequence.
§.§ Polar Codes
Polar codes, as introduced in [1], are based on the $n$-fold application of the basic channel transformation, transforming $N=2^n$ identical channels into $N$ polarized synthetic channels. The subset $\mathcal{A}\subseteq \{0,\dots,N-1\}$ of synthetic channels with $|\mathcal{A}|=k$ is said to be reliable (reliability refers to the information after decoding and is thus not a universal code property, but also depends on the decoder) and carries the information (i.e., the information set), while the remaining $N-k$ synthetic channels $\mathcal{A}^c$ are said to be unreliable and thus transmit a frozen 0 (i.e., the frozen set).
The code $\mathcal C$ is defined by the encoding rule
\begin{equation*}
\mathbf x = \mathbf u \cdot \mathbf G_N, \qquad \mathbf G_N = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}^{\otimes n},
\end{equation*}
with $\mathbf u_{\mathcal A} \in \{0,1\}^k \text{, } \mathbf u_{\mathcal A^c} = \mathbf 0$. Thus, the code rate is $R=\nicefrac{k}{N}$.
The choice of $\mathcal A$ is called polar code design and optimal solutions are dependent on both the channel and the decoding algorithm [6].
An alternative notation for specifying the sets $\mathcal{A}$ and $\mathcal{A}^c$, respectively, is the binary vector $\mathbf{A}$ with
\begin{equation*}
A_i = \begin{cases}
1 & \text{if } i \in \mathcal A\\
0 & \text{if } i \in \mathcal A^c
\end{cases}.
\end{equation*}
Throughout this paper, we will use $\mathcal{A}$-set, $\mathbf{A}$-vector and code $\mathcal{C}$ notation interchangeably.
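For illustration, a minimal NumPy implementation of this encoding rule, building $\mathbf G_N$ via repeated Kronecker products, might look as follows:

```python
import numpy as np

def polar_encode(u, n):
    """Encode a length-2^n vector u (frozen positions already set to 0)
    with the polar transform G_N = [[1, 0], [1, 1]]^{kron n}."""
    G = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G_N = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G_N = np.kron(G_N, G)  # n-fold Kronecker power of the kernel
    return (u @ G_N) % 2       # encoding is over GF(2)

# N = 4 with information set A = {3}: u = (0, 0, 0, 1)
x = polar_encode(np.array([0, 0, 0, 1], dtype=np.uint8), 2)
print(x)  # [1 1 1 1]
```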
§.§ Polar Code Reliability Sequences
Practical applications require a simple change of the code rate whenever the channel conditions vary. For a fixed blocklength $N$, the code rate can be changed by moving some indices from $\mathcal{A}^c$ to $\mathcal{A}$ or vice versa. A common way to specify the order of freezing/unfreezing is in the form of a reliability sequence $\mathbf{Q}$ that lists the indices of the synthetic channels in descending reliability order (in the literature, ascending reliability is commonly used; however, descending order results in easier notation). To construct a polar code with a desired $k$, the $k$ most reliable (i.e., the first $k$) indices are chosen as the information set, i.e.,
\begin{equation*}
\mathcal{A} = \{Q_j \mid j < k\}.
\end{equation*}
Examples of reliability sequences include those based on the Bhattacharyya parameter [1] and the $\beta$-expansion [20], as well as the 5G sequence [21].
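A minimal sketch of this selection rule; the example sequence for $N=8$ is illustrative, not an optimized design:

```python
def information_set(Q, k):
    """Information set A: the k most reliable indices, i.e. the first k
    entries of the reliability sequence Q (descending reliability order)."""
    return set(Q[:k])

Q = [7, 6, 5, 3, 4, 2, 1, 0]  # illustrative sequence for N = 8
print(sorted(information_set(Q, 4)))  # [3, 5, 6, 7]
```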
Remark: Reliability sequences are in general sub-optimal. In other words, given a channel and decoding algorithm, the optimal code designs $\mathcal{A}_k$ for each $k$ do not necessarily fulfill $\mathcal{A}_{k-1} \subset \mathcal{A}_k$ and hence do not necessarily form a sequence. Fig. 1 illustrates this property for $N=32$ and BP decoding. Each black dot corresponds to the optimal code design for the respective code dimension $k$. There is no consecutive sequence of synthetic channels that contains all the best codes. Instead, for some code dimensions, sub-optimal codes (gray dots) must be included to create a sequence.
§ POLAR CODE DESIGN ON GRAPHS
§.§ Monte Carlo Simulation Based Code Search
For most polar decoding algorithms besides SC decoding, optimal explicit code constructions are unknown. Hence, one has to select good codes based on their measured performance. The performance is estimated at a pre-defined SNR using Monte Carlo simulation.
With the number of simulated frame errors $N_\mathrm{FE}$ and trials $N_{\mathrm{T}}$, the accuracy of the simulation can be evaluated by a confidence interval $(P_\mathrm{FE,LB}, P_\mathrm{FE,UB})$ which contains the actual FER $P_\mathrm{FE}$ of the code with a chosen probability $\gamma$, called the confidence level.
The frame errors are independent events, and hence the number of observed frame errors $N_\mathrm{FE}$ is binomially distributed (in contrast, bit errors after decoding are not independent events, and hence the outlined method only works for the FER). The confidence intervals can thus be computed using the relationship between the binomial cumulative distribution and the incomplete beta function [22].
However, according to the central limit theorem, for $N_\mathrm{T} \rightarrow \infty$ the distribution of the observed FER $\hat P_\mathrm{FE} = N_\mathrm{FE}/N_\mathrm{T}$ approaches a normal distribution with mean $\mu = P_\mathrm{FE}$ and variance
\begin{equation*}
\sigma^2 = \frac{P_\mathrm{FE} \cdot \left(1 - P_\mathrm{FE} \right)}{N_\mathrm{T}}.
\end{equation*}
The confidence interval $(P_\mathrm{FE, LB}, P_\mathrm{FE,UB})$ of a Monte Carlo simulation can be approximated as ${(\hat P_\mathrm{FE,LB}, \hat P_\mathrm{FE,UB})} = {(\hat P_\mathrm{FE} - \delta, \hat P_\mathrm{FE} + \delta)}$ with
\begin{equation}
\delta =\sqrt{\frac{\hat P_\mathrm{FE} \cdot \left(1 - \hat P_\mathrm{FE} \right)}{N_\mathrm{T}}} \cdot Q^{-1}(\alpha),\label{eq:confint}
\end{equation}
where $\alpha = \frac{1-\gamma}{2}$ and $Q^{-1}(\alpha)$ is the inverse of the CCDF of the standard normal distribution [22].
Note that the approximation becomes inaccurate if $N_\mathrm{FE}$ or $P_\mathrm{FE} \cdot N_\mathrm T$ are too small.
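A minimal Python sketch of the interval $(\hat P_\mathrm{FE} - \delta, \hat P_\mathrm{FE} + \delta)$, using the standard library's inverse normal CDF to evaluate $Q^{-1}(\alpha)$ as the $(1-\alpha)$-quantile:

```python
import math
from statistics import NormalDist

def fer_confidence_interval(n_fe, n_trials, gamma=0.95):
    """Normal-approximation confidence interval for the FER, Eq. (1).

    Q^{-1}(alpha) is the inverse CCDF of the standard normal distribution,
    i.e. the (1 - alpha)-quantile.
    """
    p_hat = n_fe / n_trials
    alpha = (1.0 - gamma) / 2.0
    delta = math.sqrt(p_hat * (1.0 - p_hat) / n_trials) * NormalDist().inv_cdf(1.0 - alpha)
    return p_hat - delta, p_hat + delta

# 100 frame errors in 10,000 trials at the 95% confidence level
lb, ub = fer_confidence_interval(n_fe=100, n_trials=10_000)
```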
Confidence intervals can be used to compare two codes $\mathcal{C}_0$ and $\mathcal{C}_1$.
If ${P_{\mathrm{FE,UB}}(\mathcal{C}_0) < P_{\mathrm{FE,LB}}(\mathcal{C}_1)}$ holds, then the FER of $\mathcal{C}_0$ is lower than that of $\mathcal{C}_1$ with probability
\begin{equation*}
P\left[P_\mathrm{FE}(\mathcal{C}_0)<P_\mathrm{FE}(\mathcal{C}_1)\right] > 1 - \frac{(1-\gamma)^2}{4}.
\end{equation*}
Furthermore, if an accurate estimation of the FER is not required, the computational complexity can be reduced by terminating the Monte Carlo simulations as soon as it is determined which code is better.
Algorithm 1 generalizes this to finding the best $L$ codes out of a set of codes ${\mathcal{L}=\{\mathcal{C}_0,\mathcal{C}_1,\dots\}}$.
Algorithm 1: Monte Carlo simulation based search of the best $L$ code designs with early termination based on confidence intervals.
Input: list $\mathcal{L}$ of codes $\mathcal{C}$, target number of codes $L$, confidence level $\gamma$, $E_\mathrm{b}/N_0$
Output: list $\mathcal{L}^*$ of the $L$ best codes
1. $N_\mathrm{FE} \gets 0$
2. $N_{\mathrm{T},\mathcal{C}} \gets 0 \quad \forall \mathcal{C} \in \mathcal{L}$
3. while $|\mathcal{L}| > L$:
4. $\quad N_\mathrm{FE} \gets N_\mathrm{FE} + 1$
5. $\quad$ for each $\mathcal{C} \in \mathcal{L}$:
6. $\quad\quad$ simulate code $\mathcal{C}$ for 1 more frame error, taking $N_\mathrm{T}$ trials at $E_\mathrm{b}/N_0$
7. $\quad\quad$ $N_{\mathrm{T},\mathcal{C}} \gets N_{\mathrm{T},\mathcal{C}} + N_\mathrm{T}$
8. $\quad\quad$ $\hat P_{\mathrm{FE},\mathcal{C}} \gets N_\mathrm{FE}/N_{\mathrm{T},\mathcal{C}}$
9. $\quad\quad$ compute $\hat P_{\mathrm{FE,LB},\mathcal{C}}$ and $\hat P_{\mathrm{FE,UB},\mathcal{C}}$ from $\gamma, \hat P_{\mathrm{FE},\mathcal{C}}, N_{\mathrm{T},\mathcal{C}}$ according to (1)
10. $\quad \hat P_\mathrm{FE,cutoff} \gets L$-th smallest $\hat P_{\mathrm{FE,UB},\mathcal{C}}$
11. $\quad \mathcal{L} \gets \{\mathcal{C}\in \mathcal{L} \mid \hat P_{\mathrm{FE,LB},\mathcal{C}} < \hat P_\mathrm{FE,cutoff}\}$
12. $\mathcal{L}^* \gets \mathcal{L}$
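A compact Python sketch of Algorithm 1. The decoder is replaced here by a Bernoulli stand-in driven by assumed true FERs (`true_fer`), purely for illustration; a real search would simulate encoding, channel, and decoding instead:

```python
import math
import random
from statistics import NormalDist

def confidence_interval(p_hat, n_trials, gamma):
    """Normal-approximation confidence interval for the FER, Eq. (1)."""
    alpha = (1.0 - gamma) / 2.0
    delta = math.sqrt(p_hat * (1.0 - p_hat) / n_trials) * NormalDist().inv_cdf(1.0 - alpha)
    return p_hat - delta, p_hat + delta

def best_codes(codes, true_fer, L, gamma=0.95, seed=0):
    """Algorithm 1 sketch: simulate until only the L best codes survive."""
    rng = random.Random(seed)
    n_fe = 0
    n_trials = {c: 0 for c in codes}
    survivors = list(codes)
    while len(survivors) > L:
        n_fe += 1
        bounds = {}
        for c in survivors:
            # simulate code c until one more frame error occurs
            while True:
                n_trials[c] += 1
                if rng.random() < true_fer[c]:
                    break
            p_hat = n_fe / n_trials[c]
            bounds[c] = confidence_interval(p_hat, n_trials[c], gamma)
        # keep only codes whose lower bound beats the L-th smallest upper bound
        cutoff = sorted(ub for lb, ub in bounds.values())[L - 1]
        survivors = [c for c in survivors if bounds[c][0] < cutoff]
    return survivors

print(best_codes(["a", "b", "c"], {"a": 0.5, "b": 0.2, "c": 0.01}, L=1))  # ['c']
```

Note how the clearly inferior codes are discarded long before their FERs are estimated accurately, which is exactly the complexity saving the early termination provides.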
§.§ The Graph of Polar Code Designs
Figure 2: Excerpt of the code design graph for $N=8$.
Figure 3: Complete graph with all polar code designs of length $N=4$.
To relate different polar code designs to each other, we propose to use a (directed) graph. Each polar code design (i.e., $\mathbf{A}$-vector) corresponds to a vertex. Two codes $\mathcal{A}$ and $\mathcal{A}'$ differing in exactly one frozen/unfrozen bit are connected by an edge, and the edge label indicates the bit position in which they differ, i.e.,
\begin{equation*}
\mathcal{A} \xrightarrow{\;j\;} \mathcal{A}' \quad \Leftrightarrow \quad \mathcal{A}' = \mathcal{A} \cup \{j\}.
\end{equation*}
Note that this is identical to the Hasse diagram of all information sets ordered by inclusion. We define the partial order
\begin{equation*}
\mathcal{A} \prec \mathcal{A}' \quad \Leftrightarrow \quad \mathcal{A} \subset \mathcal{A}'
\end{equation*}
that can also compare codes not directly neighboring, but connected via a chain of edges.
This notion of order is motivated by the fact that the FER of two codes $\mathcal{A}$ and $\mathcal{A}'$ with $\mathcal{A} \prec \mathcal{A}'$ at identical $E_\mathrm{s}/N_0$ fulfill $P_\mathrm{FE}(\mathcal{A}) \le P_\mathrm{FE}(\mathcal{A}')$, as the decoder of $\mathcal{A}$ has access to more a priori information (additional frozen bits) than the decoder of $\mathcal{A}'$. Therefore, the graph implies some local “smoothness” of the FER in the neighborhood around each code.
Fig. 2 shows an excerpt of the graph for $N=8$. Note that we implicitly assume increasing code dimensions from left to right and thus omit the direction of the edges for readability.
Fig. 3 shows the complete graph for all polar codes with blocklength $N=4$.
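The neighbors of a design in this graph are easily enumerated; a minimal sketch using the $\mathcal{A}$-set notation:

```python
def left_neighbors(A):
    """Designs with one fewer information bit (code dimension k - 1)."""
    return [A - {j} for j in sorted(A)]

def right_neighbors(A, N):
    """Designs with one more information bit (code dimension k + 1)."""
    return [A | {j} for j in range(N) if j not in A]

A = {0, 1}  # an (N = 4, k = 2) design
print([sorted(B) for B in left_neighbors(A)])      # [[1], [0]]
print([sorted(B) for B in right_neighbors(A, 4)])  # [[0, 1, 2], [0, 1, 3]]
```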
§.§ Optimization of a Single Polar Code Design
Figure 4: Example of graph search for a single polar code design, $N=8$.
A first algorithm to traverse the graph in order to find an optimized single code design is the bit swapping algorithm shown in Algorithm 2. Starting from any code design $\mathcal{C}_0$ with the desired code dimension $k$ (e.g., obtained using $\beta$-expansion), information and frozen bits are alternately exchanged. The algorithm keeps a list of the $L$ best candidates and estimates the performance of their left neighbors using Algorithm 1. Then, the right neighbors of these codes are simulated. This way, the algorithm “zig-zags” through the graph between $k$ and $k-1$ until no more progress is made. An example of a single iteration of the algorithm is illustrated in Fig. 4 for $N=8$.
Algorithm 2: Optimization of a single code design.
Input: start code $\mathcal{C}_0$, list size $L$, confidence level $\gamma$, $E_\mathrm{b}/N_0$
Output: list $\mathcal{L}^*$ of the $L$ best codes
1. $\mathcal{L} \gets \{ \mathcal{C}_0\}$
2. while further improvement is made:
3. $\quad \mathcal{L} \gets \bigcup_{\mathcal{C}\in \mathcal{L}} (\text{left neighbors of } \mathcal{C})$
4. $\quad \mathcal{L} \gets \operatorname{Algorithm~1}(\mathcal{L}, L, \gamma, E_\mathrm{b}/N_0)$
5. $\quad \mathcal{L} \gets \bigcup_{\mathcal{C}\in \mathcal{L}} (\text{right neighbors of } \mathcal{C})$
6. $\quad \mathcal{L} \gets \operatorname{Algorithm~1}(\mathcal{L}, L, \gamma, E_\mathrm{b}/N_0)$
7. $\mathcal{L}^* \gets \mathcal{L}$
§.§ Optimizing a Bit Reliability Sequence
Figure 5: Example of graph search for a rate-compatible sequence, $N=8$.
A similar approach to Algorithm 2 can be used to optimize a rate-compatible sequence of codes. This procedure is listed in Algorithm 3. Starting from a list of good codes found using Algorithm 2 for some starting code dimension $k_\mathrm{start}$, the algorithm develops sequences of neighboring codes outwards to $k=0$ and $k=N$. In each step, the best $L$ sequences $S$ are kept based on a path metric
\begin{equation}
\tau(S) = \sum_{\mathcal{C} \in S} \log\frac{P_\mathrm{FE}(\mathcal{C})}{P_{\mathrm{FE,best},k(\mathcal{C})}} = \sum_{\mathcal{C} \in S} \log P_\mathrm{FE}(\mathcal{C}) + c, \label{eq:pathmetric}
\end{equation}
where $P_{\mathrm{FE,best},k(\mathcal{C})}$ is the FER of the best found code for the same code dimension as $\mathcal{C}$. This path metric can be interpreted as the error-rate loss of the codes in the sequence versus the best codes that are possible for each $k$.
In this way, the algorithm seeks a good compromise: individually well-performing codes under the constraint that together they form a rate-compatible sequence.
This constraint is enforced by lines <ref> and <ref>, where the currently found paths are augmented by appending (or pre-pending, respectively) only neighboring codes in the currently simulated batch $\mathcal{L}_k$. If multiple codes neighbor the last code $S_\mathrm{last}$ (or first code $S_\mathrm{first}$) in the sequence $S$, the sequence is duplicated for each option. Likewise, the work-list of codes to simulate in the next step includes all codes neighboring $S_\mathrm{last}$ (line <ref>) and $S_\mathrm{first}$ (line <ref>), respectively.
For a list size of $L=1$ and starting code dimension $k_\mathrm{start}=0$, the algorithm degenerates to the greedy procedure presented in [15].
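The path metric in (<ref>) is straightforward to evaluate once per-code FER estimates are available. The following sketch uses hypothetical FER values (not from the paper) purely to illustrate the computation:

```python
import math

def path_metric(fers, best_fers):
    """Path metric tau(S): sum over the codes C in a sequence S of
    log(P_FE(C) / P_FE,best,k(C)), i.e. the accumulated log-FER loss
    versus the best known code at each code dimension k."""
    return sum(math.log(p / b) for p, b in zip(fers, best_fers))

# A sequence whose codes all match the per-dimension optimum has tau = 0;
# any suboptimal member adds a positive penalty.
print(path_metric([1e-3, 1e-2], [1e-3, 1e-2]))  # 0.0
print(path_metric([2e-3, 1e-2], [1e-3, 1e-2]))  # log(2) ~ 0.693
```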
Fig. <ref> illustrates Algorithm <ref> for $N=8$ and $k_\mathrm{start}=4$. The bit reliability sequence $\mathbf Q$ can be extracted as the sequence of edge labels on the path from $k=0$ to $k=N$; in this example $\mathbf Q = [7,6,3,5,4,1,2,0]$.
Input: List of start codes $\mathcal{L}_{k_\mathrm{start}}$, $k_\mathrm{start}$, list size $L$, confidence level $\gamma$, $E_\mathrm{b}/N_0$
Output: Best sequence $S^*$
  $k_\mathrm{min}\gets k_\mathrm{start}$, $k_\mathrm{max}\gets k_\mathrm{start}$
  $\mathcal{L}_{k_\mathrm{start}} \gets \operatorname{Algorithm~1}(\mathcal{L}_{k_\mathrm{start}}, L, \gamma, E_\mathrm{b}/N_0)$
  $\mathcal{S}_\mathrm{paths} \gets \{ [\mathcal{C}] \mid \mathcal{C} \in \mathcal{L}_{k_\mathrm{start}} \}$
  while $k_\mathrm{min}>0$ or $k_\mathrm{max}<N$:
    if $k_\mathrm{max}<N$:
      $k_\mathrm{max} \gets k_\mathrm{max}+1$
      $\mathcal{L}_{k_\mathrm{max}} \gets \bigcup_{S\in \mathcal{S}_\mathrm{paths}} (\text{right neighbors of } S_\mathrm{last})$
      $\mathcal{L}_{k_\mathrm{max}} \gets \operatorname{Algorithm~1}(\mathcal{L}_{k_\mathrm{max}}, L, \gamma, E_\mathrm{b}/N_0)$
      Augment each $S \in \mathcal{S}_\mathrm{paths}$ using codes $\mathcal{C} \in \mathcal{L}_{k_\mathrm{max}}$
    if $k_\mathrm{min}>0$:
      $k_\mathrm{min}\gets k_\mathrm{min}-1$
      $\mathcal{L}_{k_\mathrm{min}} \gets \bigcup_{S\in \mathcal{S}_\mathrm{paths}} (\text{left neighbors of } S_\mathrm{first})$
      $\mathcal{L}_{k_\mathrm{min}} \gets \operatorname{Algorithm~1}(\mathcal{L}_{k_\mathrm{min}}, L, \gamma, E_\mathrm{b}/N_0)$
      Augment each $S \in \mathcal{S}_\mathrm{paths}$ using codes $\mathcal{C} \in \mathcal{L}_{k_\mathrm{min}}$
    Prune $\mathcal{S}_\mathrm{paths}$ to the best $L$ paths w.r.t. $\tau(S)$ from (<ref>)
  $S^* \gets \arg\min_{S\in \mathcal{S}_\mathrm{paths}} \tau(S)$
Rate-compatible polar code sequence optimization
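The listing above can be sketched compactly in Python. The per-code FER oracle below is a toy stand-in (a sum of assumed per-bit "unreliabilities", loosely in the spirit of a union bound), not the Monte Carlo estimator of Algorithm 1, and the tiny $N=4$ example is purely illustrative:

```python
import math
from itertools import combinations

def fer_proxy(code, u):
    """Toy FER stand-in: sum of per-bit 'unreliabilities' of the info set.
    In the paper this would be a Monte Carlo estimate from Algorithm 1."""
    return sum(u[i] for i in code) if code else 1e-12

def best_fer(k, u, bits):
    return min(fer_proxy(frozenset(c), u) for c in combinations(bits, k))

def tau(path, u, bits):
    """Path metric: accumulated log-FER loss vs. the best code at each k."""
    return sum(math.log(fer_proxy(C, u) / best_fer(len(C), u, bits))
               for C in path)

def sequence_search(u, k_start, L):
    N, bits = len(u), range(len(u))
    starts = sorted((frozenset(c) for c in combinations(bits, k_start)),
                    key=lambda c: fer_proxy(c, u))[:L]
    paths, k_min, k_max = [[c] for c in starts], k_start, k_start
    while k_min > 0 or k_max < N:
        if k_max < N:                      # grow outwards to the right
            k_max += 1
            paths = [S + [S[-1] | {b}] for S in paths
                     for b in bits if b not in S[-1]]
        if k_min > 0:                      # grow outwards to the left
            k_min -= 1
            paths = [[S[0] - {b}] + S for S in paths for b in S[0]]
        paths = sorted(paths, key=lambda S: tau(S, u, bits))[:L]  # prune
    return paths[0]

def reliability_sequence(path):
    """Edge labels along the path from k = 0 to k = N."""
    return [next(iter(b - a)) for a, b in zip(path, path[1:])]

# With monotone unreliabilities the search recovers the obvious ordering:
u = [0.4, 0.3, 0.2, 0.1]
Q = reliability_sequence(sequence_search(u, k_start=2, L=2))
print(Q)  # [3, 2, 1, 0]
```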
§ RESULTS
§.§ Single Code Design Optimization
Design complexity in terms of simulation effort vs. the achievable FER. The lines record the mean and median of 11 independent optimization runs for each optimizer.
[Figure: FER vs. $E_\mathrm{b}/N_0$ in dB for (512,128) polar codes, comparing the 5G construction, the $\beta$-expansion with $\beta = 1.159$, the neural network design @ 2.5 dB [19], the genetic algorithm @ 2.5 dB [6], and the greedy graph search ($L=1$) @ 2.5 dB.]
Performance of (512,128) polar codes under BP decoding with $N_\mathrm{it,max}~=~20$ iterations.
We evaluate Algorithm <ref> for designing polar codes for the AWGN channel and BP decoding. For more information on BP, we refer the interested reader to [23]. We compare the proposed method to optimizations using the deep learning approach from [19] and the GenAlg proposed in [6] with the complexity reduction improvements from [16].
For the deep learning based method, the NN consists of three dense layers with 128 neurons each and it is trained for 100 epochs per design algorithm iteration.
The GenAlg uses a population size of 50.
First, we design $(128,64)$ polar codes for $N_\mathrm{it,max}=100$ BP decoding iterations at an SNR $E_\mathrm{b}/N_0~=~3~\text{dB}$. The graph search algorithm uses a list size $L=4$ and $\gamma=0.8$. We notice that all algorithms converge to the identical, presumably globally optimal code design with the same FER performance.
Therefore, to compare the algorithms quantitatively, we record the total number of frames transmitted in the Monte Carlo simulation. As all algorithms are incremental and intermediate solutions can be taken at any step of the optimization, we plot the mean and median FER performance of the best codes from 11 independent runs of each optimizer in Fig. <ref>. The NN-based method has the largest design complexity, as it requires a large data set before the projected gradient method can start to produce gains. The GenAlg starts off fastest but then converges more slowly than the proposed graph search, which requires the least complexity to reliably converge to the optimal code design.
Next, we design longer (512,128) codes for $N_\mathrm{it,max}=20$ BP iterations at $E_\mathrm{b}/N_0~=~2.5~\text{dB}$. We compare the three Monte Carlo based designs, the 5G design, and the $\beta$-expansion based design with an optimized value of $\beta=1.159$ in Fig. <ref>. Here, the Monte Carlo optimized code designs perform better than the standardized codes and the $\beta$-expansion. Moreover, the graph search designed a code that also outperforms the GenAlg, even without a list (i.e., $L=1$).
§.§ Bit Reliability Sequence
[Figure: $E_\mathrm{b}/N_0$ required for $P_\mathrm{FE}\le 10^{-3}$ vs. code dimension $k$, comparing the $\beta$-expansion ($\beta=2^{\nicefrac{1}{4}}$), the 5G construction, the greedy graph search with $k_\mathrm{start}=64$, and the graph search with $L=40$, $k_\mathrm{start}=32$.]
Performance of rate-compatible polar code sequences with $N=128$ for BP decoding with $N_\mathrm{it,max}=200$ iterations.
To evaluate Algorithm <ref>, we design polar codes with blocklength $N=128$ for BP decoding with $N_\mathrm{it,max}=200$ iterations. As neither GenAlg nor NN based methods can optimize a rate-compatible sequence, we compare to the 5G and the $\beta$-expansion (with the standard parameter $\beta=2^{\nicefrac{1}{4}}$) sequences. To visualize the performance of the code sequences over a wide range of code rates, we plot the required $E_\mathrm{b}/N_0$ to reach an FER of $10^{-3}$ versus the code dimension $k$ in Fig. <ref>. First, a greedy search ($L=1$) from $k_\mathrm{start}=64$ is performed. The resulting sequence already outperforms both the 5G and the $\beta$-expansion sequences in the vicinity of the expansion point; however, its performance deteriorates for very high and, in particular, very low rates. Hence, we choose a lower-rate expansion point $k_\mathrm{start}=32$ and also use a list of size $L=40$. This way, a code sequence is found that outperforms the 5G and $\beta$-expansion designs over all rates, with a maximum improvement of roughly half a dB at $k=49$. We also note that the graph search produces a sequence with much smoother transitions from one code rate to the next, i.e., more predictable performance when the rate is changed, while the curves for the traditional code designs are very jagged.
§ CONCLUSION
In this paper, we introduced a new perspective on polar code design as a search on a graph. This makes it possible to systematically optimize a single code design and also find reliability sequences for rate-compatible polar codes. To this end, we proposed two algorithms for traversing the graph and showed that they provide lower computational complexity than other Monte Carlo simulation based design methods and can result in better code designs with respect to the error-rate performance.
The proposed methods are very general and can be easily applied to other decoding algorithms such as SCAN, BPL and AED. In particular, the graph can be altered such that the resulting codes follow desired properties such as the partial order of synthetic channels.
[1]
E. Arıkan, “Channel Polarization: A Method for Constructing
Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels,”
IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3051–3073, Jul. 2009.
[2]
I. Tal and A. Vardy, “List Decoding of Polar Codes,” IEEE Trans. Inf.
Theory, vol. 61, no. 5, pp. 2213–2226, May 2015.
[3]
M. Geiselhart, A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink, “On the
Automorphism Group of Polar Codes,” in IEEE Inter. Symp. Inf. Theory
(ISIT), 2021, pp. 1230–1235.
[4]
U. U. Fayyaz and J. R. Barry, “Low-Complexity Soft-Output Decoding of
Polar Codes,” IEEE J. Sel. Areas Commun., vol. 32, no. 5, 2014.
[5]
E. Arıkan, “A Performance Comparison of Polar Codes and Reed-Muller
Codes,” IEEE Commun. Lett., vol. 12, no. 6, pp. 447–449, Jun. 2008.
[6]
A. Elkelesh, M. Ebada, S. Cammerer, and S. ten Brink, “Decoder-Tailored Polar
Code Design Using the Genetic Algorithm,” IEEE Transactions on
Communications, vol. 67, no. 7, pp. 4521–4534, 2019.
[7]
P. Yuan, T. Prinz, G. Böcherer, O. İşcan, R. Böhnke, and W. Xu,
“Polar Code Construction for List Decoding,” in IEEE Inter. ITG
Conf. on Syst., Commun. and Coding (SCC), Feb. 2019, pp. 1–6.
[8]
M. C. Coşkun and H. D. Pfister, “An information-theoretic perspective on
successive cancellation list decoding and polar code design,” IEEE
Trans. Inf. Theory, vol. 68, no. 9, pp. 5779–5791, 2022.
[9]
R. Mori and T. Tanaka, “Performance of Polar Codes with the Construction
using Density Evolution,” IEEE Commun. Lett., vol. 13, no. 7, pp.
519–521, July 2009.
[10]
P. Trifonov, “Efficient Design and Decoding of Polar Codes,” IEEE
Trans. Commun., vol. 60, no. 11, pp. 3221–3227, Nov. 2012.
[11]
D. Wu, Y. Li, and Y. Sun, “Construction and Block Error Rate Analysis of
Polar Codes Over AWGN Channel Based on Gaussian Approximation,” IEEE
Commun. Lett., vol. 18, no. 7, pp. 1099–1102, July 2014.
[12]
R. M. Oliveira and R. C. De Lamare, “Polar codes based on piecewise gaussian
approximation: Design and analysis,” IEEE Access, vol. 10, pp.
73 571–73 582, 2022.
[13]
M. Qin, J. Guo, A. Bhatia, A. G. i Fabregas, and P. Siegel, “Polar Code
Constructions Based on LLR Evolution,” IEEE Commun. Lett., vol. 21,
no. 6, pp. 1221–1224, June 2017.
[14]
S. Sun and Z. Zhang, “Designing Practical Polar Codes Using Simulation-Based
Bit Selection,” IEEE J. Emerging and Sel. Topics Circuits Syst.,
vol. 7, no. 4, pp. 594–603, Dec. 2017.
[15]
J. Liu and J. Sha, “Frozen bits selection for polar codes based on simulation
and BP decoding,” IEICE Electronics Express, Mar. 2017.
[16]
H. Zhou, W. J. Gross, Z. Zhang, X. You, and C. Zhang, “Low-complexity
construction of polar codes based on genetic algorithm,” IEEE
Communications Letters, vol. 25, no. 10, pp. 3175–3179, 2021.
[17]
M. Ebada, S. Cammerer, A. Elkelesh, and S. ten Brink, “Deep learning-based
polar code design,” in 2019 57th Annual Allerton Conference on
Communication, Control, and Computing (Allerton), 2019, pp. 177–183.
[18]
Y. Liao, S. A. Hashemi, J. M. Cioffi, and A. Goldsmith, “Construction of polar
codes with reinforcement learning,” IEEE Transactions on
Communications, vol. 70, no. 1, pp. 185–198, 2022.
[19]
M. Léonardon and V. Gripon, “Using Deep Neural Networks to Predict and
Improve the Performance of Polar Codes,” IEEE 11th Inter. Symp. on
Topics in Coding (ISTC), 2021.
[20]
G. He, J. C. Belfiore, I. Land, G. Yang, X. Liu, Y. Chen, R. Li, J. Wang,
Y. Ge, R. Zhang, and W. Tong, “$\beta$-expansion: A Theoretical Framework
for Fast and Recursive Construction of Polar Codes,” in IEEE Global
Commun. Conf. (GLOBECOM), Dec. 2017, pp. 1–6.
[21]
“Technical Specification Group Radio Access Network,” 3GPP, 2018, TS
38.212 V.15.1.1. [Online]. Available:
[22]
J. Hamkins, “Confidence Intervals for Error Rates Observed in Coded
Communications Systems,” in The Interplanetary Network Progress
Report, vol. 42-201, 2015, pp. 1–17.
[23]
A. Elkelesh, S. Cammerer, M. Ebada, and S. ten Brink, “Mitigating Clipping
Effects on Error Floors under Belief Propagation Decoding of Polar Codes,”
in Inter. Symp. Wireless Commun. Syst., Aug. 2017.
# Low coherence interferometric detection of the spectral dependence of the
retro-reflection coefficient of an anti-reflective coated interface
Michel Lequime*, Imran Khan, Myriam Zerrad, and Claude Amra
Aix Marseille Univ, CNRS, Centrale Marseille, Institut Fresnel, Marseille, France
<EMAIL_ADDRESS>
###### Abstract
The measurement of very low reflection coefficients of anti-reflective coated interfaces has become a key issue for the realization of precision instruments such as the giant interferometers used for the detection of gravitational waves. We propose in this paper a method, based on low-coherence interferometry and balanced detection, which not only yields the spectral dependence of this reflection coefficient in amplitude and phase, with a sensitivity of the order of 0.1 ppm and a spectral resolution of 0.2 nm, but also eliminates any spurious influence related to the possible presence of uncoated interfaces. This method also implements data processing similar to that used in Fourier-transform spectrometry. After establishing the formulas that govern the accuracy and the signal-to-noise ratio of this method, we present results that provide a complete demonstration of its successful operation under various experimental conditions.
Journal: oe. Article type: Research Article.
## 1 Introduction
Anti-reflective coatings are undoubtedly one of the most important categories
of optical interference coatings [1, 2]. They are used in a wide range of
applications, from consumer optics (photography, eyewear, LCD display) to high
performance scientific instrumentation (Earth observation from space,
interferometric detection of gravitational waves). Because of this broad range
of applications, the reflection specifications of these coatings are extremely varied, ranging from a percent or a fraction of a percent over large spectral ranges [3, 4] down to a few ppm at specific wavelengths corresponding to laser emissions [5].
Here, we are interested in the latter; the corresponding coatings are often designated as V-coat or V-shape coatings, because the theoretical variation of their reflectivity in logarithmic units presents a V shape whose minimum is centered at the design wavelength. Such an AR coating response corresponds, for example, to the one obtained by depositing on a substrate two layers of materials with high and low refractive indices, respectively, whose thicknesses are adjusted in accordance with the values of their indices and the required central wavelength of operation [6].
Thus, to obtain a theoretical zero in reflection on an N-BK7 window at the wavelength of 1064 nm, using niobium pentoxide (Nb2O5) as the high-index material and silicon dioxide (SiO2) as the low-index material, we can use the following stack formula
N-BK7 / 1.690H / 0.681L / Air
where H and L are quarter-wave thicknesses of high and low index materials
respectively.
Independently of the manufacturing challenges that the reliable realization of such a deposition is likely to raise, even with deposition machines using stable and energetic processes (Ion Beam Sputtering, Plasma Assisted Reactive Magnetron Sputtering) and high-performance in situ optical monitoring systems (see for instance [5]), it is also challenging to accurately characterize the residual reflection of the coated face with an accuracy of a few ppm and a perfect insensitivity to the optical properties (reflection, scattering) of the other face. Classical techniques [7], such as subtracting the theoretical contribution of the rear face from the experimental reflection factor, roughly sanding this rear face, or coating it with black paint, are not suitable here, as they are adapted to cases where a precision of 0.1% is sufficient.
Two measurement methods that achieve the required sensitivity levels are the two-channel cavity ring-down technique [8] and the use of a tunable laser with a high side-mode suppression ratio [9]. The former achieves sub-ppm accuracy, but is monochromatic (635 nm) and requires a two-side coated component illuminated at a typical incidence angle of 5 degrees. The latter gives access to the reflection spectrum of one of the faces, but again, it is necessary to illuminate the sample at a non-zero angle of incidence (2 degrees, for instance), and the absolute calibration of the set-up with ppm accuracy seems very difficult.
The method we describe in this paper provides an effective solution to these challenges: it makes it possible to measure the reflection spectrum of the anti-reflection coated face of a plane window under normal incidence and with a sensitivity well below one ppm. Moreover, it determines the spectral dependence of the phase shift upon reflection on this stack. Section 2.1 provides a description of the low-coherence, balanced-detection interferometric set-up used to record the reflected flux, while Section 2.2 details the Fourier-transform processing scheme implemented to extract the spectral dependence of this reflection coefficient, in amplitude and phase.
Section 2.3 analyzes the theoretical SNR of this measurement method, the influence on this SNR of reducing the width of the data-processing windows, and the consequences for the measurement result of an angular misalignment of one of the faces of the sample. Section 3 describes the experimental results obtained on a 2 mm thick silica wafer with a V-shaped anti-reflection coating on one side, while Section 4 provides a critical analysis of these results. Finally, Section 5 summarizes our main achievements and outlines possible next steps in the development of this technique.
## 2 Method
### 2.1 Set-up description
The set-up used to measure this reflection coefficient is referred to as
BARRITON (for Back-scattering And Retro-Reflection by InterferomeTry with lOw
cohereNce). It is an upgraded version of the one described in Reference [10]
and is shown schematically in Fig. 1.
Figure 1: Low-coherence balanced-detection interferometric set-up (BARRITON).
The linearly polarized light flux provided by a superluminescent diode (SLD
1050) centered around 1050 nm is coupled into a PM980-XP polarization-
maintaining single-mode fiber (mode field diameter $2w_{0}=6.6$ $\mu$m @ 980
nm), whose output end is placed at the focus of a reflective collimator RC of
$f=7$ mm focal length. The resulting low-divergence Gaussian beam passes through a single-order half-wave plate HWP, whose angular position sets the orientation of the beam's polarization direction with respect to that of a fixed linear polarizer LP0. This allows independent adjustment of the useful power of the light beam and of its emission spectrum (central wavelength and line-width), while keeping its polarization orientation (TE or TM) fixed.
This polarized light beam is divided into two sub-beams by a non-polarizing
cube splitter BS1. The reflected beam is sent towards the sample to be
characterized and the light flux retro-reflected by this sample is transmitted
by the same cube splitter and forms the Signal channel (SIG). The beam
transmitted through BS1 forms the Reference channel (REF): this beam is retro-
reflected by a hollow retro-reflector HRR which also laterally shifts it with
respect to the incidence direction. These two channels are then superimposed
through a second non-polarizing cube splitter BS2 and the two complementary
outputs of this coherent mixer are detected by the two photodiodes (PD1 and
PD2) of a NIRVANA balanced receiver [11]. Four mirrors (from M1 to M4) and a
total reflection prism (RAP, right angle prism) are used to adjust the
position or orientation of the different beams inside the set-up.
From now on, we will assume that the sample is a flat silica window with an
anti-reflective coating on one side. In a very general way, the currents
delivered by each of the two photodiodes are thus described by the following
equations [10, 12]:
$I_{1}=I_{\text{dc,1}}+I_{\text{ac},1}\qquad
I_{2}=I_{\text{dc,2}}-I_{\text{ac},2}$ (1)
with, for $j=1,2$:
$I_{\text{dc},j}=\eta_{a}T_{\text{ref},j}\int\limits_{0}^{\infty}S(f)\mathcal{P}(f)\thinspace
df+\eta_{a}T_{\text{sig},j}\int\limits_{0}^{\infty}S(f)\mathcal{P}(f)\left|r(f)\right|^{2}\thinspace
df$ (2)
and
$I_{\text{ac},j}=2\eta_{a}\sqrt{T_{\text{ref},j}T_{\text{sig},j}}\thinspace\Re\left\\{\int\limits_{0}^{\infty}S(f)\mathcal{P}(f)r(f)\thinspace
e^{-ik_{\text{a}}\Delta L}\thinspace df\right\\}$ (3)
where $f$ is the frequency of the optical field, $S(f)$ is the spectral
dependence of the photodiode responsivity, $\mathcal{P}(f)$ is the power
spectral density of the light source, $r(f)$ is the coherent coefficient of
reflection of the plane glass window [6], $k_{a}$ is the wave vector in air,
$\Delta L$ is the optical path difference between SIG and REF channels,
$T_{\text{ref},j}$ (respectively $T_{\text{sig},j}$) is the transmission
coefficient of all the optical elements crossed by the reference (respectively
signal) beam between the source and the photodiode $j$ ($j=1,2$), and
$\eta_{a}$ is a factor that quantifies the geometrical overlap between the
Gaussian profile of the light beam and the sensitive area of the photodiode,
i.e.
$\eta_{a}=\frac{\displaystyle\int\limits_{0}^{a}e^{-2r^{2}/w_{d}^{2}}r\thinspace
dr}{\displaystyle\int\limits_{0}^{\infty}e^{-2r^{2}/w_{d}^{2}}r\thinspace
dr}=1-e^{-2a^{2}/w_{d}^{2}}$ (4)
where $a$ is the radius of the sensitive area of the photodiodes and $w_{d}$
is the modal radius of the Gaussian beam after a propagation distance $d$ from
the exit pupil of the RC reflective collimator.
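Equation (4) follows from the closed form of the truncated Gaussian integral; a quick numerical check (with hypothetical values for $a$ and $w_d$) confirms it:

```python
import numpy as np

a, w_d = 1.5e-3, 1.0e-3          # hypothetical detector radius, beam radius (m)

eta_closed = 1.0 - np.exp(-2.0 * a**2 / w_d**2)   # closed form of Eq. (4)

# Numerical ratio of the truncated to the full Gaussian integral
def trap(y, dr):                 # simple trapezoidal rule
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * dr

dr = w_d / 1e4
g = lambda r: np.exp(-2.0 * r**2 / w_d**2) * r
eta_num = (trap(g(np.arange(0.0, a + dr, dr)), dr)
           / trap(g(np.arange(0.0, 20.0 * w_d + dr, dr)), dr))

print(eta_closed)  # ~0.989 for these values
```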
The RF output $V$ of the balanced receiver corresponds to the voltage
resulting from the amplification of the difference between the currents of the
two photodiodes whose dc components are balanced, i.e.
$V=G(I_{2}-\alpha I_{1})\quad\text{with}\quad I_{\text{dc},2}=\alpha
I_{\text{dc},1}$ (5)
where $G$ is the transimpedance gain of the RF channel. LP1 and LP2 linear
polarizers (in Fig. 1) are used to fine tune the balancing of the dc
components. Consequently, we have
$V=G\mathcal{T}\thinspace\Re\left\\{\int\limits_{0}^{\infty}S(f)\mathcal{P}(f)r(f)\thinspace
e^{-ik_{a}\Delta L}\thinspace df\right\\}$ (6)
where $\mathcal{T}$ is a global transmission factor given by
$\mathcal{T}=2\eta_{a}\left(\sqrt{T_{\text{ref},1}T_{\text{sig},1}}+\alpha\sqrt{T_{\text{ref},2}T_{\text{sig},2}}\right)$
(7)
### 2.2 Data Processing
The voltage $V$ is digitized over 16 bits while the corner cube is translated
at a constant speed $v$ along the $z$ axis. Figure 2 shows the time dependence
of this digitized voltage when the front face of the sample corresponds to the
uncoated face of the plane window (raw data).
Figure 2: Time dependence of the voltage $V$ recorded by the set-up when the
front side of the silica window corresponds to the uncoated side (SLD driving
current 190 mA, translation speed 0.5 mm/s, window thickness 2 mm) - Left,
enlarged view of the first echo (uncoated face); center, full scan in optical
path difference (OPD); right, enlarged view of the second echo (coated face).
The first echo (the resulting interference signal from a broadband light
source) is obviously the highest amplitude signal, but the detection of the
second echo is obtained with a very good signal-to-noise ratio (SNR), even
when the anti-reflection coating is, as here, of very good quality (average
reflection coefficient of about 300 ppm on the spectral bandwidth of the
source).
To obtain the mathematical expression of the time dependence of the voltage
$V$, we must take into account the uniform translational motion of the
retroreflector used in the reference channel [$\Delta L=2vt$] as well as the
frequency expression of the wave vector in air [$k_{\text{a}}\approx
k_{v}=2\pi f/c$, where $k_{v}$ is the wave vector in vacuum], which leads to
$V(t)=G\mathcal{T}\thinspace\Re\left\\{\int\limits_{0}^{\infty}S(f)\mathcal{P}(f)r(f)\thinspace
e^{-2i\pi\frac{2v}{c}ft}\thinspace df\right\\}$ (8)
Moreover, in the case of a window with plane and parallel faces, the
reflection coefficient $r(f)$ can be put in the form of an infinite sum of
elementary reflections, namely
$r=r_{1}+t_{1}r_{2}t^{\prime}_{1}\thinspace
e^{2ik_{v}d_{s}n_{s}}+t_{1}r_{2}r^{\prime}_{1}r_{2}t^{\prime}_{1}\thinspace
e^{4ik_{v}d_{s}n_{s}}+...$ (9)
where $d_{s}$ is the thickness of the window, $n_{s}$ is its refractive index,
and the coefficients $r$, $r^{\prime}$, $t$, and $t^{\prime}$ are as shown in
Fig. 3. The frequency dependence of these quantities has been omitted here for
the sake of simplicity.
Figure 3: Schematic view of the multiple reflections inside the plane-parallel
window.
Combining (8) and (9), we get
$V(t)=\sum\limits_{m=1}^{\infty}V_{m}(t)=\sum\limits_{m=1}^{\infty}G\mathcal{T}\thinspace\Re\left\\{\int\limits_{0}^{\infty}\mathcal{B}_{m}(f)\thinspace
e^{-2i\pi\frac{2v}{c}ft}\thinspace df\right\\}$ (10)
where
$\mathcal{B}_{m}(f)=S(f)\mathcal{P}(f)\rho_{m}(f)\quad\text{with}\quad\left\{\begin{aligned} &\rho_{1}=r_{1}\\ &\rho_{2}=t_{1}r_{2}t^{\prime}_{1}\thinspace e^{2i\pi\frac{2d_{s}n_{s}}{c}f}\\ &\rho_{3}=t_{1}r_{2}r^{\prime}_{1}r_{2}t^{\prime}_{1}\thinspace e^{2i\pi\frac{4d_{s}n_{s}}{c}f}\\ &\;\ldots\end{aligned}\right.$ (11)
The quantities $S(f)$ and $\mathcal{P}(f)$ are real functions with bounded
support in $\mathbb{R}^{+}$ and the spectral profile of their product is very
close to a Gaussian centered at $f_{0}=c/\lambda_{0}$ and whose full width at
half maximum is $\Delta f$ [10]. This boundedness allows us to replace the
lower limit of integration of equation (10), i.e. 0, by $-\infty$. Each
function $V_{m}(t)$ is therefore proportional to the real part of the Fourier
transform of a Gaussian, whose shape is both strongly attenuated and slightly
modulated (a single oscillation within the frequency support) by the
reflection coefficient $r_{2}(f)$. Consequently, the total width $\Delta t$ of
the support of this function is defined in order of magnitude by
$\Delta\left(\frac{2v}{c}t\right)\sim 4\frac{4\pi}{\Delta
f}\quad\Rightarrow\quad\Delta
t\sim\frac{8\pi\lambda_{0}^{2}}{v\thinspace\Delta\lambda}$ (12)
The spectral width $\Delta\lambda$ of the superluminescent diode is an
increasing function of the driving current and varies between 27 nm and 82 nm,
full width at half maximum (FWHM), while its central wavelength $\lambda_{0}$
varies correspondingly between 1076 nm and 1042 nm. Therefore, the width of
the support of the functions $V_{m}(t)$ is, in the worst case, on the order of
$1/v$ seconds for $v$ in mm/s.
Besides, the time interval $\Delta T$ separating two consecutive echoes is
defined by [10]
$\Delta T=\frac{n_{g}(\lambda_{0})d_{s}}{v}$ (13)
where $n_{g}$ is the group index of the window glass. In order to ensure no
overlap of the functions $V_{m}(t)$, the following condition must be satisfied
$\Delta T>\Delta t\quad\Rightarrow\quad n_{g}(\lambda_{0})d_{s}>1\text{ mm}$
(14)
or, for a silica window: $d_{s}>0.7$ mm. The samples we use have thicknesses
of 2 mm, so this non-overlapping condition is largely respected.
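Plugging numbers into (12) and (13) (with the group index of fused silica assumed to be about 1.462) confirms both the ~1 mm bound and the comfortable margin for the 2 mm samples:

```python
import math

lam0, dlam = 1076e-9, 27e-9     # worst case: lowest drive current (m)
v = 0.5e-3                      # translation speed quoted in Fig. 2 (m/s)
n_g, d_s = 1.462, 2e-3          # assumed group index of silica; window thickness

dt_echo = 8 * math.pi * lam0**2 / (v * dlam)   # echo support width, Eq. (12)
dT = n_g * d_s / v                             # echo spacing, Eq. (13)

d_min = v * dt_echo / n_g       # minimum thickness for non-overlap, Eq. (14)
assert dT > dt_echo             # the 2 mm window satisfies Eq. (14)
print(d_min * 1e3)              # ~0.74 mm, i.e. the "0.7 mm" bound for silica
```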
The data processing that we implement consists of
1.
isolating in the signal $V(t)$ temporal windows of width $\Delta T$ centered
on each of the echoes
$W_{m}(t)=\text{Rect}\left[\frac{t-t_{m}}{\Delta
T}\right]V(t)\quad\text{where}\quad t_{m}=(m-1)\Delta T$ (15)
2.
using the non-overlapping condition to replace $W_{m}(t)$ by $V_{m}(t)$
3.
calculating numerically the discrete Fourier transform (DFT) of the windowed
signals $V_{m}(t)$, i.e.
$\mathcal{S}_{m}(F_{l})=\sum\limits_{k=-N/2}^{k=N/2-1}V_{m}(t_{k})\thinspace
e^{-2i\pi F_{l}t_{k}}\thinspace dt\quad\text{for }l=-N/2,-N/2+1,...,N/2-1$
(16)
where
$dt=\frac{1}{F_{s}}=\frac{\Delta T}{N-1}\quad\text{;}\quad
t_{k}=t_{m}+k.dt\quad\text{;}\quad F_{l}=l.dF\quad\text{;}\quad
dF=\frac{1}{(N-1)dt}=\frac{1}{\Delta T}$ (17)
This discrete Fourier transform is associated with a continuous Fourier
transform defined by
$\widetilde{V}_{m}(F)=\int\limits_{-\infty}^{+\infty}V_{m}(t)\thinspace
e^{-2i\pi Ft}\thinspace
dt=G\mathcal{T}\int\limits_{-\infty}^{+\infty}\Re\left\\{\int\limits_{-\infty}^{+\infty}\mathcal{B}_{m}(f)\thinspace
e^{-2i\pi\frac{2v}{c}ft}\thinspace df\right\\}\thinspace e^{-2i\pi
Ft}\thinspace dt$ (18)
which can be easily calculated by transforming the real part into a half-sum
of conjugated complex quantities, which leads to
$\widetilde{V}_{m}(F)=\frac{1}{2}G\mathcal{T}\left\\{\frac{c}{2v}\mathcal{B}_{m}\left(-\frac{c}{2v}F\right)+\frac{c}{2v}\mathcal{B}_{m}^{*}\left(\frac{c}{2v}F\right)\right\\}$
(19)
4.
using the latter result to identify the DFT terms $\mathcal{S}_{m}(F_{l})$
with the continuous Fourier transform $\widetilde{V}_{m}(F)$ sampled at
$F=F_{l}$, or
$\mathcal{S}_{m}(F_{l})=G\mathcal{T}\frac{c}{4v}\mathcal{B}_{m}^{*}\left(\frac{c}{2v}F_{l}\right)=G\mathcal{T}\frac{c}{4v}S(f_{l})\mathcal{P}(f_{l})\rho_{m}^{*}(f_{l})\quad\text{where}\quad
f_{l}=\frac{c}{2v}F_{l}$ (20)
5.
removing the unknown terms by taking the ratio between two DFT samples, one of
which is chosen as calibration term
$\frac{\mathcal{S}_{m}^{*}(F_{l})}{\mathcal{S}_{c}^{*}(F_{l})}=\frac{\rho_{m}(f_{l})}{\rho_{c}(f_{l})}$
(21)
If we assume, for example, that the front face of the window corresponds to
the uncoated side, we will choose the first echo as calibration echo ($c=1$),
which will allow us to write
$\frac{|\mathcal{S}_{2}(F_{l})|^{2}}{|\mathcal{S}_{1}(F_{l})|^{2}}=\frac{|\rho_{2}(f_{l})|^{2}}{|\rho_{1}(f_{l})|^{2}}=\frac{|t_{1}(f_{l})t^{\prime}_{1}(f_{l})r_{2}(f_{l})|^{2}}{|r_{1}(f_{l})|^{2}}=\frac{T_{1}^{2}(f_{l})R_{2}(f_{l})}{R_{1}(f_{l})}$
(22)
or
$R_{\text{coat}}(f_{l})=\frac{R_{s}(f_{l})}{[1-R_{s}(f_{l})]^{2}}\frac{|\mathcal{S}_{2}(F_{l})|^{2}}{|\mathcal{S}_{1}(F_{l})|^{2}}\quad\text{where}\quad
R_{s}(f_{l})=\left[\frac{n_{s}(f_{l})-1}{n_{s}(f_{l})+1}\right]^{2}$ (23)
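For the "uncoated face first" orientation, the steps above reduce to windowing the first two echoes, taking their DFTs, and applying Eq. (23). A minimal sketch, using synthetic data, a hypothetical sampling rate, and a scalar substrate index for simplicity:

```python
import numpy as np

def reflection_spectrum(V, Fs, dT, n_s):
    """Window the first two echoes of the digitized voltage V (sampled at
    Fs), take their DFTs, and apply Eq. (23) to recover R_coat(f_l)."""
    n = int(round(Fs * dT))
    S1 = np.fft.rfft(V[:n])                 # calibration echo, uncoated face
    S2 = np.fft.rfft(V[n:2 * n])            # echo from the coated face
    R_s = ((n_s - 1.0) / (n_s + 1.0)) ** 2  # Fresnel reflectance of substrate
    ratio = np.abs(S2) ** 2 / np.maximum(np.abs(S1) ** 2, 1e-30)
    return R_s / (1.0 - R_s) ** 2 * ratio   # R_coat at f_l = (c / 2v) F_l

# Synthetic check: make echo 2 an exact attenuated copy of echo 1, so the
# recovered spectrum must equal R_s/(1-R_s)^2 times the power ratio.
Fs, dT, n_s = 10_000.0, 0.1, 1.45
t = np.arange(int(Fs * dT)) / Fs
echo = np.exp(-((t - 0.05) / 0.01) ** 2) * np.cos(2 * np.pi * 500.0 * t)
V = np.concatenate([echo, 0.1 * echo])
R = reflection_spectrum(V, Fs, dT, n_s)
R_s = ((n_s - 1.0) / (n_s + 1.0)) ** 2
assert abs(R[50] - R_s / (1.0 - R_s) ** 2 * 0.01) < 1e-9  # bin 50 <-> 500 Hz
```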
Since the spectral dependence of the refractive index of the substrate is perfectly known, the ratio of the power spectral densities of the Fourier transforms of the first two echoes allows us to determine the spectral dependence of the reflection coefficient of the coated side. If we now reverse the orientation of the window, the same approach leads to
$\frac{R_{\text{coat}}(f_{l})}{[1-R_{\text{coat}}(f_{l})]^{2}}=R_{s}(f_{l})\frac{|\mathcal{S}_{1}(F_{l})|^{2}}{|\mathcal{S}_{2}(F_{l})|^{2}}$
(24)
which is a bit more complicated to process numerically than the result obtained with the first orientation. But with this second orientation, we can also write
$\text{arg}[\rho_{1}(f_{l})]=\text{arg}[r_{\text{coat}}(f_{l})]=-\text{arg}[\mathcal{S}_{1}(F_{l})]$
(25)
and thus determine, in a very simple way, the spectral dependence of the phase
shift on the anti-reflection coating. Note that all these measurements are
performed at very low frequencies ($F$ is about 2 kHz for a translation speed
$v$ of 1 mm/s), while the results obtained are at optical frequencies ($f$
about 300 THz). This frequency down-conversion is one of the key advantages of
Fourier transform spectrometry [13, 14].
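The frequency down-conversion mentioned above is simply the scaling $F=(2v/c)f$; a one-line check with the quoted numbers:

```python
# Frequency down-conversion of Fourier transform spectrometry: F = (2v/c) f.
c = 299_792_458.0      # speed of light, m/s
v = 1e-3               # retro-reflector translation speed, m/s
f_optical = 300e12     # optical frequency (~1 micron light), Hz
F_detected = 2.0 * v / c * f_optical   # detected (electrical) frequency, Hz
```

With $v=1$ mm/s this gives $F\approx 2$ kHz, as stated.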
As discussed in the introduction, the processing method that we propose allows
us to determine the spectral dependence of the reflection coefficient of the
coated interface, in amplitude and phase. However, the precision and spectral
resolution achieved by this method remain to be quantified. Because of the
complexity involved, the precision is addressed in a dedicated section
(2.3.1), while the spectral resolution can be estimated directly from the
processing method just described. Indeed, the DFT samples
$\mathcal{S}_{m}(F_{l})$ introduced in (16) can be expressed in terms of the
continuous Fourier transform $\widetilde{V}_{m}(F)$ as [15]
$\mathcal{S}_{m}(F_{l})=\frac{1}{dt\sqrt{N}}\int\limits_{-\infty}^{+\infty}\left\\{\widetilde{V}_{m}(F)\star\left[\frac{1}{dF}\text{sinc}\left(\frac{F}{dF}\right)\right]\star\left[\frac{1}{NdF}\text{comb}\left(\frac{F}{NdF}\right)\right]\right\\}\delta(F-l.dF)\thinspace
dF$ (26)
where the $\star$ symbol represents a convolution operation, sinc is the sine
cardinal function [$\text{sinc}(x)=\sin(\pi x)/(\pi x)$] and comb is the Dirac
comb function. The presence of a convolution by a cardinal sine in equation
(26) shows that the spectral resolution of this method is defined by the
frequency pitch $dF$, that is
$dF=\frac{1}{\Delta T}\quad\Rightarrow\quad df=\frac{c}{2v\Delta
T}\quad\Rightarrow\quad d\lambda=\frac{\lambda_{0}^{2}}{2n_{g}d_{s}}$ (27)
or about 0.2 nm for a 2 mm-thick silica window.
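Eq. (27) with the window parameters gives this resolution directly; a minimal check, where the group index value for fused silica near 1.07 µm is an assumed round number:

```python
# Spectral resolution of the method, Eq. (27): d_lambda = lambda0^2 / (2 n_g d_s).
lambda0 = 1068e-9    # center wavelength, m
n_g = 1.46           # group index of fused silica (assumed value)
d_s = 2e-3           # window thickness, m
d_lambda = lambda0 ** 2 / (2.0 * n_g * d_s)
```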
### 2.3 Sensitivity to detection noise and alignment bias
#### 2.3.1 Detection noise
The sources of noise which could affect the measurement of a reflection
coefficient $|\rho_{m}|^{2}$ associated with the echo $m$ are essentially the
quantum noise associated with the DC component of the current delivered by
each photodiode and the residual effect of the intensity noise of the
superluminescent diode. This last term results from the imperfect rejection of
common-mode disturbances, which is quantified by the CMRR (Common Mode
Rejection Ratio) of the balanced receiver.
The variance of the shot noise affecting the voltage $V$ provided by the
balanced receiver is proportional to the sum of the quantum noise affecting
each of the two photodiodes (the two noises are indeed independent), i.e.
$\sigma_{I}^{2}=2e(I_{1}+I_{2})\thinspace
B\quad\Rightarrow\quad\sigma_{V}^{2}=G^{2}\sigma_{I}^{2}\sim
2G^{2}e(I_{\text{dc},1}+I_{\text{dc},2})\thinspace
B=2G^{2}(1+\alpha)eI_{\text{dc},1}\thinspace B$ (28)
where $B$ is the detection bandwidth and $e$ the elementary charge. We must
also consider the contribution related to the dark current $I_{\text{dark}}$
of each of these photodiodes, i.e.
$\sigma_{I}^{2}=4eI_{\text{dark}}\thinspace
B\quad\Rightarrow\quad\sigma_{V}^{2}=4G^{2}eI_{\text{dark}}\thinspace B\sim
2G^{2}S^{2}\text{NEP}^{2}\thinspace B$ (29)
where $S$ is the responsivity of the photodiode ($S\sim 0.8$ A/W) and NEP its
noise equivalent power (3 pW/$\sqrt{\text{Hz}}$). If we only consider noise of
quantum origin, the variance of $V$ is thus defined by
$\sigma_{V}^{2}=2G^{2}(1+\alpha)eI_{\text{dc},1}\thinspace
B+2G^{2}S^{2}\text{NEP}^{2}\thinspace B$ (30)
while the corresponding signal to noise ratio is written
$\text{SNR}_{q}=\frac{V^{2}}{\sigma_{V}^{2}}=\frac{G^{2}(I_{\text{ac,2}}+\alpha
I_{\text{ac},1})^{2}}{2G^{2}(1+\alpha)eI_{\text{dc},1}\thinspace
B+2G^{2}S^{2}\text{NEP}^{2}\thinspace B}$ (31)
We will now assume that the set-up is spontaneously balanced ($\alpha=1$) and
take into account that the power detected by the receiver must not exceed a
maximum value $P_{\text{max}}$, due either to the saturation of the two
photodiodes or the digitizing range of the voltage $V$.
Assume first that this maximum value is set by the requirement that the
photodiodes do not saturate. Consequently
$I_{\text{dc},1}=SP_{\text{sat}}$ (32)
For a window that is anti-reflection coated on one side, the multiple
reflections are dominated by the single-bounce reflection off the uncoated
side. Therefore, equation (2) becomes
$I_{\text{dc},1}=\eta_{a}\left\\{T_{\text{ref},1}+T_{\text{sig},1}R_{\text{uncoat}}\right\\}SP$
(33)
where $P$ is the total power emitted by the source. The main difference
between the signal and reference channels is the presence of an additional
reflection on the BS1 cube splitter in the case of the signal channel.
Therefore, to a first approximation
$T_{\text{sig,1}}\sim\frac{T_{\text{ref,1}}}{2}$ (34)
Combining (32), (33) and (34), we get
$P=\frac{P_{\text{sat}}}{\eta_{a}T_{\text{ref},1}(1+R_{\text{uncoat}}/2)}\sim\frac{P_{\text{sat}}}{\eta_{a}T_{\text{ref},1}}$
(35)
To conclude, we need to know:
•
the value of the geometric overlap factor $\eta_{a}$; the modal radius $w_{d}$
of the Gaussian beam after a propagation over a distance $d$ is given by :
$w_{d}=w_{f}\sqrt{1+\left(\frac{\lambda d}{\pi
w_{f}^{2}}\right)^{2}}\quad\text{with}\quad w_{f}=\frac{f\lambda}{\pi w_{0}}$
(36)
In our set-up, $d=1275$ mm, $w_{f}=0.67$ mm, and $w_{d}=0.92$ mm, which leads
to a geometric overlap factor $\eta_{a}$ of about 0.44.
•
the transmission of the reference channel; as can be seen in Fig. 1, we
essentially have to consider two crossings of a splitter cube ($\times 0.5$
each), two crossings of a polarizer ($\times 0.87$ each) and four reflections
on a silver coating ($\times 0.95$ each). Therefore
$T_{\text{ref,1}}=(0.5)^{2}\times(0.87)^{2}\times(0.95)^{4}\sim 0.15$ (37)
The saturation power of the Nirvana receiver is 0.5 mW. So, to reach
saturation, the total power $P$ delivered by the superluminescent diode must
be equal to 7.5 mW, which corresponds to a driving current of about 180 mA
(for a maximum value of 1000 mA).
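The power budget of Eqs. (35)–(37) can be reproduced numerically. One assumption in this sketch: we read $\eta_{a}$ as the fraction of a centered Gaussian beam captured by a circular detector of radius 0.5 mm, consistent with the disk integral of Eq. (69) and the 1 mm photodiode size quoted in Sec. 2.3.2.

```python
import math

lam = 1.068e-6    # center wavelength, m
w_f = 0.67e-3     # collimated beam waist, m
d = 1.275         # propagation distance, m
# Beam radius after propagation, Eq. (36)
w_d = w_f * math.sqrt(1.0 + (lam * d / (math.pi * w_f ** 2)) ** 2)

# Geometric overlap factor: fraction of a centered Gaussian beam captured by a
# circular detector of radius a (assumed reading, consistent with Eq. (69))
a = 0.5e-3
eta_a = 1.0 - math.exp(-2.0 * a ** 2 / w_d ** 2)

# Reference-channel transmission, Eq. (37)
T_ref1 = 0.5 ** 2 * 0.87 ** 2 * 0.95 ** 4

# Source power needed to reach receiver saturation, Eq. (35)
P_sat = 0.5e-3    # Nirvana saturation power, W
P = P_sat / (eta_a * T_ref1)
```

This reproduces $w_{d}\approx 0.92$ mm, $\eta_{a}\approx 0.44$, $T_{\text{ref},1}\approx 0.15$ and $P\approx 7.5$ mW.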
Now assume that the maximum power is defined by the digitizing range of the
voltage $V$, which is 10 volts. At the top of echo $m$, it follows that
$V=G(I_{\text{ac},1}+\alpha I_{\text{ac,2}})=G\mathcal{T}SP|\rho_{m}|\leqslant
10$ (38)
where
$\mathcal{T}\approx 4\eta_{a}\sqrt{T_{\text{ref},1}T_{\text{sig},1}}\approx
2\sqrt{2}\eta_{a}T_{\text{ref},1}$ (39)
The highest amplitude echo is obviously the one corresponding to the uncoated
face, and therefore
$P\leqslant\frac{10}{2\sqrt{2}\eta_{a}T_{\text{ref},1}GS\sqrt{R_{s}}}\sim
3.6\text{ mW}$ (40)
This last condition is the most restrictive one, and it thus defines the
maximum power $P_{\text{max}}$ on the photodiodes, namely
$P_{\text{max}}\approx\eta_{a}T_{\text{ref},1}P\sim 250\text{ $\mu$W}$ (41)
Accordingly
$I_{\text{dc},1}=SP_{\text{max}}\quad\text{;}\quad
I_{\text{ac},1}=\frac{S}{\sqrt{2}}|\rho_{m}|P_{\text{max}}$ (42)
By combining the equations (31) and (42), we obtain the following final
expression for the signal-to-noise ratio
$\text{SNR}_{q}=\frac{S^{2}|\rho_{m}|^{2}P_{\text{max}}^{2}}{(2eSP_{\text{max}}+S^{2}\text{NEP}^{2})\thinspace
B}$ (43)
The smallest value of the reflection coefficient $|\rho_{m}|^{2}$ that we are
able to measure with a signal to noise ratio of 10 is therefore defined by
$|\rho_{m}|_{q}^{2}=10\frac{2eSP_{\text{max}}+S^{2}\text{NEP}^{2}}{S^{2}P_{\text{max}}^{2}}B\approx
10\frac{2e}{SP_{\text{max}}}B\sim 1.6\times 10^{-14}B$ (44)
or $2\times 10^{-9}$ if the balanced receiver is used at its maximum bandwidth
($B=125$ kHz).
Using a similar approach, we can estimate the minimum value of the reflection
coefficient that can be detected in the presence of a residual impact of
source intensity noise. If the balanced receiver were operating perfectly, the
source intensity noise would not affect the voltage $V$. But the rejection of
these correlated noise sources is not perfect, which is quantified by the
measure of CMRR in balanced photodetection. Therefore
$\sigma_{V}^{2}=G^{2}\sigma_{I}^{2}=G^{2}S^{2}\sigma_{P}^{2}=G^{2}\times
10^{(\text{RIN}-\text{CMRR})/10}S^{2}P_{\text{max}}^{2}B$ (45)
where RIN is the relative intensity noise of the source ($-105$ dB/Hz for the
SLD) and the maximum attainable CMRR of NIRVANA receiver is 50 dB.
The expression of the signal $V$ is identical to that established in the shot
noise study, i.e.
$V=G(I_{\text{ac},2}+\alpha I_{\text{ac},1})\approx
2GI_{\text{ac},1}=2\sqrt{2}G\eta_{a}T_{\text{ref},1}S|\rho_{m}|P=2\sqrt{2}GS|\rho_{m}|P_{\text{max}}$
(46)
Therefore, the signal-to-noise ratio is expressed as
$\text{SNR}_{\text{RIN}}=\frac{V^{2}}{\sigma_{V}^{2}}=\frac{8|\rho_{m}|^{2}}{10^{(\text{RIN}-\text{CMRR})/10}B}$
(47)
and the smallest value of the reflection coefficient $|\rho_{m}|^{2}$ that we
are able to measure, this time with a signal-to-noise ratio of 10, is defined by
$|\rho_{m}|_{\text{RIN}}^{2}\sim 10^{(\text{RIN}-\text{CMRR})/10}B$ (48)
or $4\times 10^{-11}$ if the balanced receiver is used at its maximum
bandwidth ($B=125$ kHz). This result is important because it shows that the
resolution of the measurement will remain limited by quantum noise for any
CMRR value between 35 dB and 50 dB. We will now assume that this condition is
satisfied in our theoretical estimation.
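The two detection floors, Eqs. (44) and (48), can be compared directly with the quoted parameter values; this is a numerical sketch, not a claim about the actual receiver.

```python
# Detection floors for |rho_m|^2 at SNR = 10: quantum noise, Eq. (44),
# versus residual intensity noise, Eq. (48).
e = 1.602e-19       # elementary charge, C
S = 0.8             # responsivity, A/W
P_max = 0.25e-3     # maximum detected power, W
NEP = 3e-12         # noise equivalent power, W/sqrt(Hz)
B = 125e3           # detection bandwidth, Hz
RIN = -105.0        # source relative intensity noise, dB/Hz

# Quantum-noise floor, Eq. (44)
rho2_q = 10.0 * (2.0 * e * S * P_max + S ** 2 * NEP ** 2) / (S ** 2 * P_max ** 2) * B

# Residual-intensity-noise floor, Eq. (48), at the two extreme CMRR values
rho2_rin_50 = 10.0 ** ((RIN - 50.0) / 10.0) * B
rho2_rin_35 = 10.0 ** ((RIN - 35.0) / 10.0) * B
```

With these numbers $|\rho_{m}|_{q}^{2}\approx 2\times 10^{-9}$, while the RIN floor stays below it for any CMRR between 35 and 50 dB, so the quantum floor dominates, as stated above.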
However, all the results we have just presented are related to the direct use
of the measurement signal $V(t)$, and not to its discrete Fourier transforms
$\mathcal{S}_{m}(F_{l})$, as defined by equation (16). It is therefore
necessary to take into account this key step of processing in the estimation
of the performance of our measurement method.
In the expression of the discrete Fourier transform (16), let us make the
changes of variable
$\bar{t}_{m}=t_{m}-\frac{N}{2}dt\quad\text{,}\quad
p=k+\frac{N}{2}\quad\text{and}\quad t_{p}=\bar{t}_{m}+p.dt$ (49)
which leads to
$\mathcal{S}_{m,l}=e^{-2i\pi
l\bar{t}_{m}dF}\sum\limits_{p=0}^{p=N-1}V_{m,p}\thinspace e^{-2i\pi
lpdFdt}\thinspace dt\quad\text{where}\quad dFdt=\frac{1}{(N-1)}$ (50)
This equation can be put in the following matrix form
$\vec{\mathcal{S}}_{m}=\textbf{A}.\vec{V}_{m}\quad\text{or}\quad\mathcal{S}_{m,l}=\sum\limits_{p=0}^{p=N-1}a_{lp}V_{m,p}\quad\text{where}\quad
a_{lp}=e^{-2i\pi l\bar{t}_{m}dF}\thinspace e^{-2i\pi\frac{lp}{N-1}}\thinspace
dt$ (51)
In the presence of shot noise on the measurement of the voltage $V(t)$, i.e.
on the components of the vector $\vec{V}_{m}$, this matrix equation becomes
$\vec{\mathcal{S}}_{m}+\vec{n}^{\prime}_{m}=\textbf{A}.(\vec{V}_{m}+\vec{n}_{m})$
(52)
where $\vec{n}_{m}$ and $\vec{n}^{\prime}_{m}$ are the noise vectors affecting
respectively the measurement of the vectors $\vec{V}_{m}$ and
$\vec{\mathcal{S}}_{m}$. We then introduce the covariance matrices of these
noise vectors, defined by [16]
$\textbf{$\Gamma$}^{n}_{m}=\langle\vec{n}_{m}.^{t}\vec{n}_{m}^{*}\rangle\quad\text{;}\quad\textbf{$\Gamma$}^{n^{\prime}}_{m}=\langle\vec{n^{\prime}}_{m}.^{t}\vec{n^{\prime}}_{m}^{*}\rangle$
(53)
where the brackets denote ensemble averaging, t the transpose, and ∗ the
complex conjugate. Using the definition of the vector $\vec{n}^{\prime}_{m}$,
this becomes
$\textbf{$\Gamma$}^{n^{\prime}}_{m}=\langle\textbf{A}.\vec{n}_{m}.^{t}\vec{n}_{m}^{*}.^{t}\textbf{A}^{*}\rangle=\langle\textbf{A}.\textbf{$\Gamma$}^{n}_{m}.^{t}\textbf{A}^{*}\rangle$
(54)
The noise affecting the measurement of the components of the vector $\vec{n}$
is a Gaussian white noise, of mean zero and variance $\sigma^{2}_{V}$ defined
by [see (30)]
$\sigma_{V}^{2}=4G^{2}eSP_{\text{max}}B+2G^{2}S^{2}\text{NEP}^{2}B=2G^{2}\left\\{2eSP_{\text{max}}+S^{2}\text{NEP}^{2}\right\\}B$
(55)
The covariance matrix $\textbf{$\Gamma$}^{n}_{m}$ is diagonal [16]. Moreover,
this variance depends neither on the component of the vector $\vec{V}_{m}$,
nor on the order $m$ of the echo. Therefore
$\textbf{$\Gamma$}^{n}_{m}=\sigma_{V}^{2}\textbf{$\mathbb{I}$}$ (56)
where $\mathbb{I}$ is the identity matrix. We deduce the expression of the
covariance matrix $\textbf{$\Gamma$}^{n^{\prime}}_{m}$, that is
$\textbf{$\Gamma$}^{n^{\prime}}_{m}=\sigma_{V}^{2}\langle\textbf{A}.^{t}\textbf{A}^{*}\rangle$
(57)
The elements of the covariance matrix $\textbf{$\Gamma$}^{n^{\prime}}_{m}$ are
therefore defined by
$[\textbf{$\Gamma$}^{n^{\prime}}_{m}]_{lp}=e^{-2i\pi(l-p)\bar{t}_{m}dF}\sum\limits_{q=0}^{N-1}\thinspace
e^{-2i\pi\frac{q(l-p)}{N-1}}\thinspace\sigma_{V}^{2}(dt)^{2}$ (58)
or, for the diagonal elements, the only ones necessary to estimate the noise
affecting the measurement of $\mathcal{S}_{m,l}$ [16, 17]
$[\textbf{$\Gamma$}^{n^{\prime}}_{m}]_{ll}=N\sigma_{V}^{2}(dt)^{2}$ (59)
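The diagonal result of Eq. (59) is easy to verify numerically from the matrix A of Eq. (51); the phase prefactor has unit modulus, so only the $dt$ factor matters.

```python
import numpy as np

# Numerical check of Eq. (59): the diagonal of A A^H is N dt^2, so white noise
# of variance sigma_V^2 on the interferogram maps to N sigma_V^2 dt^2 on the DFT.
N, dt = 128, 1e-5
l = np.arange(N)[:, None]
p = np.arange(N)[None, :]
# Matrix of Eq. (51); the prefactor exp(-2i pi l tbar dF) has |.| = 1 and is omitted.
A = np.exp(-2j * np.pi * l * p / (N - 1)) * dt
diag_AAH = np.sum(np.abs(A) ** 2, axis=1)     # sigma_V^2 factored out
```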
Finally, the signal to noise ratio of our measurement is
$\text{SNR}_{\mathcal{S}_{m,l}}=\frac{|\mathcal{S}_{m}(F_{l})|^{2}}{[\textbf{$\Gamma$}^{n^{\prime}}_{m}]_{ll}}=\frac{|G\mathcal{T}\frac{c}{4v}\mathcal{B}_{m}^{*}\left(\frac{c}{2v}F_{l}\right)|^{2}}{N\sigma_{V}^{2}(dt)^{2}}=\left[G\mathcal{T}\frac{c}{4v}\right]^{2}\frac{|S(f_{l})\mathcal{P}(f_{l})\rho_{m}^{*}(f_{l})|^{2}}{N\sigma_{V}^{2}(dt)^{2}}$
(60)
where
$f_{l}=\frac{c}{2v}F_{l}\quad F_{l}>0\quad\text{;}\quad
P_{\text{max}}=\eta_{a}T_{\text{ref},1}\int\limits_{0}^{\infty}\mathcal{P}(f)\thinspace
df=\int\limits_{0}^{\infty}\mathcal{P}_{\text{max}}(f)\thinspace df$ (61)
Using the expression for the variance of the noise affecting the interferogram
measurement defined by (55), we get
$\text{SNR}_{\mathcal{S}_{m,l}}=\left[\mathcal{T}\frac{c}{4v}\right]^{2}\frac{S^{2}\mathcal{P}^{2}(f_{l})}{2(2eSP_{\text{max}}+S^{2}\text{NEP}^{2})B}\frac{|\rho_{m}(f_{l})|^{2}}{N(dt)^{2}}$
(62)
In addition
$\frac{1}{N(dt)^{2}}=\frac{1}{Ndt\times dt}=\frac{F_{s}}{\Delta T}$ (63)
Using the equations (39) and (63), the signal-to-noise ratio can be written as
follows
$\text{SNR}_{\mathcal{S}_{m,l}}=\left[\frac{c}{2v}\right]^{2}\frac{S^{2}\mathcal{P}_{\text{max}}^{2}(f_{l})}{(2eSP_{\text{max}}+S^{2}\text{NEP}^{2})B}\frac{F_{s}}{\Delta
T}\thinspace|\rho_{m}(f_{l})|^{2}$ (64)
The smallest detectable reflection coefficient with a signal-to-noise ratio of
at least 10 will furthermore be defined by
$|\rho_{m}(f_{l})|_{q}^{2}=10\left[\frac{2v}{c}\right]^{2}\frac{(2eSP_{\text{max}}+S^{2}\text{NEP}^{2})B}{S^{2}\mathcal{P}_{\text{max}}^{2}(f_{l})}\frac{\Delta
T}{F_{s}}$ (65)
and thus depends on the optical frequency $f_{l}$ through the saturation power
spectral density $\mathcal{P}_{\text{max}}(f_{l})$.
Let us suppose that this power spectral density is a rectangle function of
width $\Delta f$. Then
$P_{\text{max}}=\mathcal{P}_{\text{max}}.\Delta
f=\mathcal{P}_{\text{max}}\frac{c\Delta\lambda}{\lambda_{0}^{2}}\quad\Rightarrow\quad
c\mathcal{P}_{\text{max}}=\frac{\lambda_{0}^{2}}{\Delta\lambda}P_{\text{max}}$
(66)
The equation (65) then becomes
$|\rho_{m}(f_{l})|_{q}^{2}=10\left(\frac{\Delta\lambda}{\lambda_{0}}\right)^{2}\frac{(2eSP_{\text{max}}+S^{2}\text{NEP}^{2})B}{S^{2}P_{\text{max}}^{2}}\frac{F_{0}^{2}}{F_{s}}\Delta
T\approx
10\left(\frac{\Delta\lambda}{\lambda_{0}}\right)^{2}\frac{2eB}{SP_{\text{max}}}\frac{F_{0}^{2}}{F_{s}}\Delta
T$ (67)
Using the numerical values listed below
$\lambda_{0}$ = 1068 nm ; $\Delta\lambda$ = 33 nm
$e=1.6\times 10^{-19}$ C ; $S=0.8$ A/W ; $P_{\text{max}}=0.25$ mW ; NEP = 3
pW/$\sqrt{\text{Hz}}$
$B=125$ kHz ; $v=1$ mm/s ; $F_{0}=1.87$ kHz
$F_{s}=100$ kHz ; $\Delta T=2.92$ s
we find that the smallest measurable reflection coefficient with a signal-to-
noise ratio of 10 is on the order of $2\times 10^{-10}$ over a spectral band
of 30 nm, with a spectral resolution of 0.2 nm.
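Plugging the listed values into the approximate form of Eq. (67) reproduces this floor:

```python
# Quantum-limited detection floor, approximate form of Eq. (67), with the
# numerical values listed above.
lambda0, d_lambda = 1068e-9, 33e-9
e, S, P_max = 1.6e-19, 0.8, 0.25e-3
B, F0, Fs, dT = 125e3, 1.87e3, 100e3, 2.92
rho2_min = (10.0 * (d_lambda / lambda0) ** 2
            * (2.0 * e * B) / (S * P_max)
            * F0 ** 2 / Fs * dT)
```

The result is indeed on the order of $2\times 10^{-10}$.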
Note that if we reduce the duration $\Delta T$ of the processing window by a
factor of 10, this detection floor is lowered by the same ratio, and thus
reaches $2\times 10^{-11}$. The penalty to pay is the degradation of the
spectral resolution by the same factor, i.e. 2 nm.
#### 2.3.2 Alignment bias
So far, we have implicitly assumed that the window has its two sides perfectly
parallel and perpendicular to the direction of the incident beam. In practice,
however, these assumptions are most likely not satisfied, and we therefore
need to analyze the possible consequences of small alignment biases (typically
about 10 arc seconds).
The most general situation is schematically represented in Fig. 4, where
$\beta_{1}$ and $\beta_{2}$ are the angles between the incident beam and the
beams respectively reflected from the front and rear faces of the window.
Figure 4: Influence of the wedge and misalignment on the direction of the
beams reflected from the two window faces
All angles here are very small, so we can replace their sines by the angles
themselves, expressed in radians. Angles are counted positive in the clockwise
direction. If the wedge angle lies in the plane of incidence (which is
the worst case), then it is easy to show that
$\beta_{1}=2\theta_{1}\quad\text{;}\quad\beta_{2}=\beta_{1}-2n_{s}\alpha$ (68)
When the wedge angle $\alpha$ of the window is zero, the two angles
$\beta_{1}$ and $\beta_{2}$ are equal, and the truncation effects induced on
the spatial distribution of illumination of the two beams by the small size of
the photodiodes (1 mm) are thus identical, and taken into account by the
calibration procedure described in the section 2.2. Note that the beam
reflected by the rear face has a lateral shift compared to that reflected by
the front face, but it is extremely small (the lever arm is indeed the
thickness of the window, or 2 mm), and can therefore be neglected.
On the other hand, when the wedge angle of the window is not zero, the beam
reflected from the rear face has an angular bias $\delta\beta=-2n_{s}\alpha$
with respect to the beam reflected by the front face. This angular bias
$\delta\beta$ causes the appearance of a gap $\delta x$ between the centroids
of the two Gaussian beams during their detection by the photodiodes. This
offset is defined by $\delta x=d.\delta\beta$, where $d$ is the propagation
distance between the emission of the beam and its reception [$d\sim 1275$ mm,
see equation (36)]. If the window has, for example, a wedge angle $\alpha$ of
2 arc seconds, the relative displacement $\delta x$ will be on the order of 35
$\mu$m, which cannot be neglected a priori. Moreover, since we are interested
in measuring the spectral dependence of the reflection coefficient, our
modeling of the consequences of this lateral shift must take into account all
possible spectral dependencies.
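The lateral offset quoted above follows directly from Eq. (68):

```python
import math

# Lateral offset between the centroids of the two reflected beams for a
# 2 arc second wedge, using delta_beta = -2 n_s alpha from Eq. (68).
arcsec = math.pi / (180.0 * 3600.0)
n_s = 1.45          # substrate refractive index (assumed constant)
alpha = 2.0 * arcsec
d = 1.275           # propagation distance, m
delta_beta = 2.0 * n_s * alpha     # magnitude of the angular bias
delta_x = d * delta_beta           # centroid offset at the detector
```

This gives about 36 µm, consistent with the 35 µm quoted above.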
In the presence of a misalignment $\theta_{1}$ and a wedge angle $\alpha$, the
overlap factor $\eta_{a}$ will depend on both the order $m$ of the echo and
the wavelength $\lambda$, and is written
$\eta_{a,m}(\lambda)=\frac{2}{\pi
w_{d}^{2}(\lambda)}\displaystyle\iint\limits_{\mathcal{C}}e^{\displaystyle-2\frac{[x-\beta_{m}d]^{2}+y^{2}}{w_{d}^{2}(\lambda)}}dxdy$
(69)
where $\mathcal{C}$ is a disk with center (0,0) and radius $a$, and
$w_{d}(\lambda)$ is defined by equation (36), in which the wavelength
dependence of the modal radius $w_{0}$ is now considered, namely [18]
$w_{0}(\lambda)=r_{0}\left(0.65+\frac{1.619}{[V(\lambda)]^{3/2}}+\frac{2.879}{[V(\lambda)]^{6}}\right)$
(70)
where $r_{0}$ is the core radius, and $V$ the normalized frequency
$V(\lambda)=\frac{2\pi
r_{0}\times\text{NA}}{\lambda}=2.405\frac{\lambda_{c}}{\lambda}$ (71)
NA being the numerical aperture of the fiber, and $\lambda_{c}$ its cutoff
wavelength, below which it is no longer single mode. In our case (PM980-XP),
$2r_{0}=5.5$ $\mu$m, $\lambda_{c}=870$ nm, and NA = 0.12.
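Eqs. (70)–(71) give the wavelength dependence of the mode-field radius; for the PM980-XP parameters quoted above:

```python
import math

# Mode-field radius of the fiber (Marcuse formula), Eqs. (70)-(71),
# with the PM980-XP parameters: 2 r0 = 5.5 um, lambda_c = 870 nm.
r0 = 2.75e-6
lambda_c = 870e-9

def w0(lam):
    V = 2.405 * lambda_c / lam                                  # Eq. (71)
    return r0 * (0.65 + 1.619 / V ** 1.5 + 2.879 / V ** 6)      # Eq. (70)

w0_1068 = w0(1068e-9)   # about 3.55 um at the center wavelength
```

As expected for a step-index fiber, the mode-field radius grows slowly with wavelength over the band of interest.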
Under these conditions, equation (21) becomes
$\frac{\mathcal{S}_{m}^{*}(F_{l})}{\mathcal{S}_{c}^{*}(F_{l})}=\frac{\eta_{a,m}(f_{l})\rho_{m}(f_{l})}{\eta_{a,c}(f_{l})\rho_{c}(f_{l})}\quad\text{where}\quad\lambda_{l}=\frac{c}{f_{l}}$
(72)
or, in the case of the determination of the spectral dependence of the
reflection coefficient of the coated face
$R_{2}^{\text{exp}}(\lambda_{l})=\left[\frac{\eta_{a,1}(\lambda_{l})}{\eta_{a,2}(\lambda_{l})}\right]^{2}R_{2}(\lambda_{l})=\mathcal{K}(\theta_{1},\lambda_{l};\alpha)R_{2}(\lambda_{l})$
(73)
The error induced on the measurement is therefore multiplicative and the
corrective factor $\mathcal{K}$ is equal to the square of the ratio of the
overlap factors of the first two echoes in the presence of a misalignment
$\theta_{1}$ and a wedge angle $\alpha$.
Figure 5 shows the dependence of this corrective factor $\mathcal{K}$ on the
angle of incidence $\theta_{1}$ and the wavelength $\lambda$, for two wedge
angles $\alpha$ equal to 2 arc seconds and 5 arc seconds respectively.
Figure 5: Dependence of the corrective factor $\mathcal{K}$ on the angle of
incidence $\theta_{1}$ (a) and the wavelength $\lambda$ (b) for wedge angle
$\alpha$ equal to 2 arcseconds and 5 arcseconds.
It can be seen that a very small wedge angle window ($\alpha\leqslant 2$
arcseconds) and a high alignment quality (better than $\pm 5$ arcseconds) are
needed to guarantee a relative measurement accuracy better than 1%. Our
alignment procedure optimizes the recoupling of the SIG and REF beams into a
single-mode optical fiber placed at the image focus of a reflective
collimator, the fiber and collimator being identical to those used in
emission. This ensures that the error on the angle of incidence remains
between 5 and 10 arc seconds. On the other hand, the spectral dependence of
this corrective factor is extremely small under all circumstances, and can
therefore be neglected.
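The corrective factor of Eq. (73) can be estimated by evaluating the overlap integral of Eq. (69) on a grid. In this sketch the detector radius of 0.5 mm and the fixed beam radius are our assumptions; the full calculation uses the wavelength-dependent $w_{d}(\lambda)$.

```python
import numpy as np

def eta_overlap(beta, w_d, a=0.5e-3, d=1.275, n=400):
    """Overlap factor of Eq. (69): Gaussian beam displaced by beta*d,
    integrated over a centered disk of radius a (grid quadrature)."""
    x = np.linspace(-a, a, n)
    X, Y = np.meshgrid(x, x)
    inside = X ** 2 + Y ** 2 <= a ** 2
    g = np.exp(-2.0 * ((X - beta * d) ** 2 + Y ** 2) / w_d ** 2)
    dA = (x[1] - x[0]) ** 2
    return 2.0 / (np.pi * w_d ** 2) * g[inside].sum() * dA

arcsec = np.pi / (180.0 * 3600.0)
n_s, w_d = 1.45, 0.92e-3
theta1, alpha = 5.0 * arcsec, 2.0 * arcsec
beta1 = 2.0 * theta1                   # Eq. (68)
beta2 = beta1 - 2.0 * n_s * alpha
K = (eta_overlap(beta1, w_d) / eta_overlap(beta2, w_d)) ** 2   # Eq. (73)
```

For these angles K stays within a few percent of unity, in line with Fig. 5.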
## 3 Experimental demonstration
### 3.1 Operating conditions
In order to minimize measurement errors due to possible alignment bias, we
have procured from Light Machinery [19] a high quality test component,
consisting of a 7980 A grade fused silica window of 2 mm thickness and 25 mm
diameter, with a wedge angle $\alpha$ less than or equal to 2 arc seconds and
one side of which is coated with a V-shaped anti-reflection coating centered
at 1055 nm.
This component is installed in a piezoelectric gimbal (Thorlabs PGM1SE)
comprising two independently controllable rotations about the gimbal axes,
which allow the center of the front face of the window to be kept in a fixed
position while its angular orientation is adjusted. The minimum
adjustment step is 0.1 arc second for an angular range of approximately 1
degree.
The interferogram recordings were all made with a sampling rate of 100
kSamples/s and a digitizing range of $\pm 10$ Volts.
### 3.2 Results
The first interferogram (see Fig. 6) is recorded for a window orientation
where the front face is uncoated, a SLD driving current of 190 mA
($\lambda_{0}=1068$ nm, $\Delta\lambda=33$ nm), and a translation speed $v$ of
0.5 mm/s.
Figure 6: Interferogram obtained from a 2 mm thick silica wafer with uncoated
front side.
The upper graph represents the whole recording (20 seconds, i.e. 2 Msamples),
the black curve corresponding to the function $V(t)$ while the colored curves
define the processing windows associated with the functions $V_{m}(t)$ for
$m=1$ (in red), $m=2$ (in blue) and $m=3$ (in green). The lower graphs
correspond to zoomed-in views of individual echoes, with the voltage scale
adapted to the amplitude level of each echo.
The portion of the interferogram before the first processing window can be
used to estimate the variance of the voltage $V(t)$. We find:
$\sigma_{V,\text{exp}}^{2}=1.93\times 10^{-6}\text{ Volts}^{2}$. This value
must be compared to the one predicted by our theoretical approach [cf. section
2.3.1, equations (45) and (55)], namely
$\sigma_{V,\text{th}}^{2}=\sigma_{V,q}^{2}+\sigma_{V,\text{RIN}}^{2}=2G^{2}\left\\{2eSP_{\text{max}}+S^{2}\text{NEP}^{2}\right\\}B+G^{2}10^{(\text{RIN}-\text{CMRR})/10}S^{2}P_{\text{max}}^{2}B$
(74)
Experimentally, the maximum power $P_{\text{max}}$, as expected, is defined by
the digitizing range (0.45 mW). The theoretical contributions are thus
distributed as follows
$\sigma_{V,q}^{2}=3.02\times 10^{-7}\text{
Volts}^{2}\quad\text{;}\quad\sigma_{V,\text{RIN}}^{2}=5.12\times
10^{-(3+\text{CMRR}/10)}\text{ Volts}^{2}$ (75)
By comparing all these values, we can deduce a likely estimate of the CMRR of
the NIRVANA receiver under our conditions of use, i.e. CMRR = 35 dB, which is
in agreement with the manufacturer’s data, the 50 dB of rejection being
reached only in the autobalanced mode (which cannot be implemented in our case
due to carrier frequency constraints).
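This CMRR estimate can be reproduced from the measured variance. The transimpedance gain G is not quoted in the text, so the value below is a hypothetical one chosen for consistency with the numbers in Eq. (75).

```python
import math

# Back out the receiver CMRR from the measured interferogram noise variance,
# following Eqs. (45), (55) and (74)-(75).
G = 1e5             # transimpedance gain, V/A (assumed, not quoted in the text)
e, S = 1.602e-19, 0.8
NEP, B = 3e-12, 125e3
P_max = 0.45e-3     # set here by the digitizing range
RIN = -105.0        # dB/Hz

sigma2_exp = 1.93e-6                                                      # measured, V^2
sigma2_q = 2.0 * G ** 2 * (2.0 * e * S * P_max + S ** 2 * NEP ** 2) * B   # Eq. (55)
sigma2_rin = sigma2_exp - sigma2_q
# Invert Eq. (45) for the CMRR
cmrr_db = RIN - 10.0 * math.log10(sigma2_rin / (G ** 2 * S ** 2 * P_max ** 2 * B))
```

With this assumed gain, the quantum contribution matches the $3.02\times 10^{-7}$ V² of Eq. (75) and the inferred CMRR comes out close to 35 dB.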
As shown in Fig. 6, we have chosen to take into account the third echo in
order to be able to apply our method to the measurement of ultra-low
reflection coefficients and thus to estimate its ultimate sensitivity. For
this third echo, the equations (22) and (23) become
$\frac{|\mathcal{S}_{3}(F_{l})|^{2}}{|\mathcal{S}_{1}(F_{l})|^{2}}=\frac{|\rho_{3}(f_{l})|^{2}}{|\rho_{1}(f_{l})|^{2}}=\frac{|t_{1}(f_{l})r_{2}(f_{l})r^{\prime}_{1}(f_{l})r_{2}(f_{l})t^{\prime}_{1}(f_{l})|^{2}}{|r_{1}(f_{l})|^{2}}=T_{1}^{2}(f_{l})R_{2}^{2}(f_{l})$
(76)
and
$R_{\text{coat}}(f_{l})=\frac{1}{1-R_{s}(f_{l})}\frac{|\mathcal{S}_{3}(F_{l})|}{|\mathcal{S}_{1}(F_{l})|}$
(77)
The graphs in Fig. 7 illustrate the different steps of our data processing.
The graph a) shows the frequency dependence of the modulus squared of
$\mathcal{S}_{1}(F)$, the discrete Fourier transform of the signal $V_{1}(t)$.
The continuous red curve corresponds to the case where the width $\Delta T$ of
the processing window is equal to the time interval separating two consecutive
echoes, i.e. 5.86 s, while the red dots correspond to the result of the same
DFT calculation, but for a window width 10 times smaller ($\Delta T=0.59$ s).
This second curve is identical to the previous one, except that the frequency
sampling pitch is 10 times larger.
Figure 7: Illustration of the different steps used in the data processing in
the case of a 2 mm thick silica window coated with anti-reflection coating on
the rear side (see text for more details).
The graph b) shows the frequency dependence of the coefficients
$|\rho_{2}|^{2}$ (blue) and $|\rho_{3}|^{2}$ (green). The continuous curves
are associated with the maximum time window width, while the discrete points
are obtained with a window width reduced by a factor 10. The two processing
modalities lead to identical results in the case of the second echo, while the
correspondence is much less clear in the case of the third echo. This is
simply due to the fact that the signal-to-noise ratio is much lower in this
case, and that it is thus necessary to reduce the spectral resolution, by
varying the width of the processing window, to obtain a better quality result.
This conclusion is perfectly confirmed by the curves gathered in the last
graph (Fig. 7c), which present the wavelength dependence of the reflection
coefficient of the anti-reflective coated face. The continuous blue curve
corresponds to the measurement result obtained from the second echo and
presents a maximum spectral resolution (0.2 nm), while the green points
correspond to the one obtained from the third echo with a lower resolution (2
nm). Note that we observe a very good agreement between these two independent
determinations of the reflection coefficients above 100 ppm. It should be kept
in mind that the signal levels used for the second determination are analogous
to those that would be produced by the reflection on a coated face with a
reflection coefficient between 0.1 and 40 ppb!
By introducing in equation (67) the result of our experimental determination
of the variance of the noise affecting the interferogram measurement, we can
estimate the detection floor of our set-up in terms of measurement of the
$|\rho_{m}|^{2}$ coefficients, namely
$|\rho_{m}|_{\text{min}}^{2}\approx
10\left(\frac{\Delta\lambda}{\lambda_{0}}\right)^{2}\frac{\sigma_{V,\text{exp}}^{2}}{G^{2}S^{2}P_{\text{max}}^{2}}\frac{F_{0}^{2}}{F_{s}}\Delta
T=\left\\{\begin{aligned} &7.3\times 10^{-10}\quad\text{for }\Delta
T=5.86\text{ s}\\\ &7.3\times 10^{-11}\quad\text{for }\Delta T=0.59\text{
s}\end{aligned}\right.$ (78)
The second of these two values, for which the signal level is sufficient for
the comparison to be valid, is in satisfactory agreement with the one deduced
from Fig. 7b ($|\rho_{3}|_{\text{min}}^{2}\sim 3\times 10^{-10}$).
We note on Fig. 7c that the V-shape characteristic of the behavior of this
type of anti-reflection coating is only partially obtained, because the
spectral range covered (1050 nm to 1090 nm, for the driving current used, i.e.
190 mA) does not allow its full measurement. Increasing the supply current to
its maximum value (1000 mA) allowed us to cover a wider range of wavelengths
(typically 1000 nm to 1090 nm), and thus to have access to the complete
expected V-shape, for both possible orientations of the sample used. Figures 8
and 9 show the results obtained when the front side of the window corresponds
to the coated side and the translation speed is 1 mm/s.
Figure 8: Interferogram obtained on a 2 mm thick silica wafer with coated
front side.
The highest amplitude echo is now the second one (the one corresponding to the
reflection on the uncoated back side).
For this new window orientation, the equations (76) and (77) become (the
calibration echo is indeed the second)
$\frac{|\mathcal{S}_{3}(F_{l})|^{2}}{|\mathcal{S}_{2}(F_{l})|^{2}}=\frac{|\rho_{3}(f_{l})|^{2}}{|\rho_{2}(f_{l})|^{2}}=\frac{|t_{1}(f_{l})r_{2}(f_{l})r^{\prime}_{1}(f_{l})r_{2}(f_{l})t^{\prime}_{1}(f_{l})|^{2}}{|t_{1}(f_{l})r_{2}(f_{l})t^{\prime}_{1}(f_{l})|^{2}}=R_{1}(f_{l})R_{2}(f_{l})$
(79)
and
$R_{\text{coat}}(f_{l})=\frac{1}{R_{s}(f_{l})}\frac{|\mathcal{S}_{3}(F_{l})|^{2}}{|\mathcal{S}_{2}(F_{l})|^{2}}$
(80)
Figure 9: Illustration of the different steps used in the data processing in
the case of a 2 mm thick silica window coated with anti-reflection coating on
the front side (see text for more details).
Fig. 9b shows the frequency dependence of the coefficients $|\rho_{1}|^{2}$
(red) and $|\rho_{3}|^{2}$ (green): as before, the continuous curves are
associated with the maximum width of the processing time window, while the
discrete points are obtained with a window width reduced by a factor of 10.
The two processing modalities lead this time to identical results in the case
of the two echoes. Indeed, the measurement on the third echo is now identical
to the one that would be produced by the reflection on a coated face with a
reflection coefficient between 80 ppb and 1.6 ppm (two reflections on the
uncoated face and one reflection on the coated face), to be compared to the
previous case (0.1 ppb and 40 ppb, for two reflections on the coated face and
one reflection on the uncoated face): the signal-to-noise ratio is therefore
much better.
Using in equation (78) the parameters corresponding to this second
experimental test [$\lambda_{0}=1042$ nm, $\Delta\lambda=82$ nm, $F_{0}=1.92$
kHz, $\Delta T=2.93$ s], the minimum value of the coefficients
$|\rho_{m}|^{2}$ that can be measured with a signal to noise ratio of 10 is
$|\rho_{m}|_{\text{min}}^{2}\approx\left\\{\begin{aligned}
&10^{-8}\quad\text{for }\Delta T=2.93\text{ s}\\\ &10^{-9}\quad\text{for
}\Delta T=0.29\text{ s}\end{aligned}\right.$ (81)
Again, these theoretical estimations are consistent with the experimental
results obtained with this reversed window orientation.
The agreement between the two experimental determinations of the reflection
coefficient of the coated face presented on Fig. 9c is very satisfactory in
terms of overall shape, but reveals the presence of a slight spectral shift
between the two curves (on the order of 5 nm).
Before proposing possible explanations for this experimental observation in
Section 4, it seems important to present in a comparative way all the
measurements we have made on this sample in a simple bounce configuration (see
Fig. 10).
Figure 10: Comparative presentation of the measured reflection coefficient of
the coated face obtained under different experimental conditions.
Note that the results obtained are not only independent of the translation
speed of the hollow retro-reflector and the driving current of the
superluminescent diode, but also of the orientation of the window (UNC for
uncoated, ARC for anti-reflection coated).
Let us now turn to the phase measurements that are possible thanks to our
processing method. In accordance with the description of this method given in
Section 2.2, the phase shift at the reflection on the coated face is equal to
the opposite of the argument of the discrete Fourier transform (DFT) of the
signal associated with the first echo, when the front face of the window
corresponds to its coated face [see equation (25)].
The experimental result is presented in Fig. 11a (dark blue curve).
Figure 11: a) spectral dependence of the unwrapped DFT argument for the two
window orientations (light blue, uncoated face in front; dark blue, coated
face in front) - b) spectral dependence of the phase change at the reflection
on the coated face.
The general appearance of the spectral dependence of this phase corresponds to
what we would expect in the case of a V-coating, but presents some small
oscillations, whose origin is difficult to determine since the exact
layer-stack formula is not known to us. We have therefore chosen to apply the same
experimental approach to the case where the front face of the sample
corresponds to its uncoated face. Indeed, in this case, the theoretical result
is known (phase shift equal to $\pi$ on the whole spectral range). The
experimentally obtained curve corresponds to the light blue curve of Fig. 11a.
It is approximately constant (average value of about 0.05$\pi$), but also
presents the same small oscillations as those observed in the case where the
front face of the sample corresponds to its coated face. A possible solution
to this problem consists of transposing to this phase measurement the
calibration principle used during the determination of the $|\rho_{m}|^{2}$
coefficients, but this time taking into account the two orientations of the
test window. Under these conditions, equation (25) becomes
$\text{arg}[r_{\text{coat}}(f_{l})]-\text{arg}[r_{\text{uncoat}}(f_{l})]=-\text{arg}[\mathcal{S}_{1,\text{coat}}(F_{l})]+\text{arg}[\mathcal{S}_{1,\text{uncoat}}(F_{l})]$
(82)
or, taking into account the value of the phase shift on the uncoated side
$\phi_{\text{coat}}(f_{l})=\pi+\text{arg}[\mathcal{S}_{1,\text{uncoat}}(F_{l})]-\text{arg}[\mathcal{S}_{1,\text{coat}}(F_{l})]$
(83)
The result of this calibration procedure is shown in Fig. 11b (black curve). The
shape of the spectral dependence thus obtained is much more satisfactory, with
in particular the existence of a very clear symmetry around the wavelength
corresponding to the coating reflection minimum $\lambda_{\text{min}}$ (marked
by a vertical red line on Fig. 11b).
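As a sanity check of this calibration principle, equation (83) can be applied directly to two first-echo records, one per window orientation. The following Python sketch does this on synthetic signals; the function name, sampling parameters, and analysis band are illustrative assumptions, not part of the original processing chain.

```python
import numpy as np

def calibrated_phase(s1_coat, s1_uncoat, fs, f_band):
    """Phase shift at reflection on the coated face, per equation (83):
    phi_coat(f) = pi + arg[S1_uncoat(f)] - arg[S1_coat(f)],
    where S1 is the DFT of the first-echo signal for each window
    orientation. fs: sampling frequency; f_band = (f_lo, f_hi)."""
    freqs = np.fft.rfftfreq(len(s1_coat), d=1.0 / fs)
    sel = (freqs >= f_band[0]) & (freqs <= f_band[1])
    phi = (np.pi + np.angle(np.fft.rfft(s1_uncoat)[sel])
           - np.angle(np.fft.rfft(s1_coat)[sel]))
    return freqs[sel], np.unwrap(phi)

# Two synthetic echoes differing by a known 0.3 rad reflection phase:
fs = 1000.0
t = np.arange(1000) / fs
uncoat = np.cos(2 * np.pi * 100 * t)
coat = np.cos(2 * np.pi * 100 * t + 0.3)
f, phi = calibrated_phase(coat, uncoat, fs, (99.5, 100.5))
# phi[0] recovers pi - 0.3
```

Subtracting the two DFT arguments before adding $\pi$ cancels any phase contribution common to both orientations, which is exactly why the calibration removes the spurious oscillations.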
## 4 Discussion
The experimental results presented in Section 3.2 are in very good agreement
with the predictions of our theoretical model. The only two small deviations
that we could note are:
* •
the wavelength shift between the reflection spectra of the coated face
obtained in single bounce and double bounce for a 1000 mA driving current (see
Fig. 9),
* •
the necessity to implement a calibration procedure in the case of the
measurement of the phase shift induced by the reflection on the coated face.
The objective of the following sections is to identify possible explanations
for these two discrepancies.
### 4.1 Wedge angle
As pointed out in Section 2.3.2, the presence of a wedge angle on the
substrate causes a lateral shift between the beam reflected from its rear side
and that reflected from its front side. Obviously, the same phenomenon occurs
between the double bounce and the single bounce inside the window, the beam
deviation associated with the third echo being defined by
$\beta_{3}=2\theta_{1}-4n_{s}\alpha=\beta_{1}-4n_{s}\alpha=\beta_{2}-2n_{s}\alpha$
(84)
However, the modeling result presented in Section 2.3.2 shows that the
spectral dependence of this effect is negligible, and remains so in the case
of the differential shift between third and second echo. Furthermore, the
results obtained with a driving current of 190 mA (see Fig. 7c) show that this
effect is not always present and is therefore not solely attributable to the
wedge angle between the window faces.
### 4.2 Far field filtering
Although the spectral shift is unlikely to be related to it, it seemed useful
to test the influence, on the measurement result, of the far-field geometric
filtering carried out by the detection photodiodes.
To do this, we replaced the Nirvana receiver with a Thorlabs PDB210C receiver,
in which the diameter $2a$ of the active area of the InGaAs photodiodes is 3 mm. At
the same time, the bandwidth $B$ of the RF output is increased to 1 MHz, while
the CMRR is simply 30 dB, for a trans-impedance gain $G$ of $5\times 10^{5}$
V/A and a NEP of 16 pW/$\sqrt{\text{Hz}}$.
Using the equations (4) and (36), we deduce from these functional
characteristics the new value of the geometric overlap coefficient, i.e.
$\eta_{a}=0.995$, which confirms that far-field filtering is indeed removed.
As in the case of the Nirvana balanced receiver, the maximum power is defined
by the condition (40) on the digitizing range
$P\leqslant\frac{10}{2\sqrt{2}\eta_{a}T_{\text{ref},1}GS\sqrt{R_{s}}}\sim
0.3\text{ mW}$ (85)
We deduce the value of $P_{\text{max}}$
$P_{\text{max}}=\eta_{a}T_{\text{ref},1}P=44\thinspace\mu\text{W}$ (86)
then that of the variance of voltage fluctuations using equation (74)
$\sigma_{V,\text{th}}^{2}=\sigma_{V,q}^{2}+\sigma_{V,\text{RIN}}^{2}=8.75\times
10^{-5}\text{ Volts}^{2}+9.8\times 10^{-6}\text{ Volts}^{2}=9.73\times
10^{-5}\text{ Volts}^{2}$ (87)
Figure 12 shows the interferogram obtained with the PDB210C Thorlabs receiver
on our test sample when the front side of the silica window corresponds to its
coated side (SLD driving current 450 mA, i.e. $\lambda_{0}=1060$ nm,
$\Delta\lambda_{0}=44$ nm; translation speed 1 mm/s).
Figure 12: Interferogram obtained with the PDB210C THORLABS receiver on the
2mm thick silica window coated on one side (driving current SLD 450 mA,
translation speed 1 mm/s).
As mentioned in Section 3.2, the portion of the interferogram before the first
processing window can be used to estimate the variance of the voltage $V(t)$.
We get: $\sigma_{V,\text{exp}}^{2}=1.05\times 10^{-4}\text{ Volts}^{2}$, which
is in very good agreement with our theoretical prediction. This result allows
us to estimate the value of the smallest coefficient $|\rho_{m}|^{2}$ that can
be detected with a signal-to-noise ratio of 10 using this set-up. We obtain
$|\rho_{m}|_{\text{min}}^{2}\approx 10\left(\frac{\Delta\lambda}{\lambda_{0}}\right)^{2}\frac{\sigma_{V,\text{exp}}^{2}}{G^{2}S^{2}P_{\text{max}}^{2}}\frac{F_{0}^{2}}{F_{s}}\Delta T=\left\{\begin{aligned}&6\times 10^{-7}\quad\text{for }\Delta T=2.93\text{ s}\\&6\times 10^{-8}\quad\text{for }\Delta T=0.29\text{ s}\end{aligned}\right.$ (88)
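The structure of equation (88) is easy to evaluate numerically. In the sketch below, the responsivity $S$ and the sampling frequency $F_{s}$ are not quoted in the text, so the values used here are illustrative assumptions; only the scaling behaviour is meaningful.

```python
def rho2_min(dlam, lam0, sigma_v2, G, S, P_max, F0, Fs, dT, snr=10.0):
    """Smallest measurable |rho_m|^2 for a target signal-to-noise ratio,
    following the structure of equation (88). SI units throughout."""
    return (snr * (dlam / lam0) ** 2 * sigma_v2 / (G * S * P_max) ** 2
            * F0 ** 2 / Fs * dT)

# Illustrative evaluation: S (photodiode responsivity, A/W) and Fs
# (sampling frequency, Hz) are assumed values, not quoted in the text.
floor_full = rho2_min(44e-9, 1060e-9, 1.05e-4, 5e5, 0.7, 44e-6,
                      1.89e3, 1e5, 2.93)
floor_reduced = rho2_min(44e-9, 1060e-9, 1.05e-4, 5e5, 0.7, 44e-6,
                         1.89e3, 1e5, 0.29)
# Reducing the processing-window width by 10 (dT: 2.93 s -> 0.29 s)
# lowers the detection floor by the same factor of 10.
```

This makes explicit why the shorter processing window trades spectral resolution for a tenfold lower floor: the floor is linear in $\Delta T$.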
This detection floor is significantly higher than what we obtained with the
Nirvana receiver [see equations (78) and (81)] and may make it difficult to
detect a possible shift between the reflection spectra obtained in single
bounce or double bounce configuration. To try to circumvent this problem, we
decided to carry out, under the same experimental conditions, 4 successive
recordings of the same interferogram and to average the wavelength dependence
of the reflection coefficients thus obtained. The result of this approach is
shown in Fig. 13a.
Figure 13: Comparative presentation of the reflection spectra of the coated
face obtained (a) with the Thorlabs PDB210C receiver in single and double
bounce configuration - (b) with the Nirvana receiver (red, blue and green
curves) and the Thorlabs PDB210C receiver (magenta curve).
It is quite clear, in particular for reflection coefficients above 100 ppm,
where the signal-to-noise ratio is sufficient, that these two curves show no
detectable shift (Figure 9c can usefully be consulted to better perceive the
difference in behavior).
Moreover, this change of detector has confirmed that the modeling of the
signal-to-noise ratio that we presented in Section 2.3.1 accurately describes
the influence of the various characteristic parameters of the balanced
receiver on the quality of the measurement. Finally, Fig. 13b shows
that the results we obtain with this method do not depend on the receiver
used, which is again very satisfactory and guarantees the reproducibility of
these measurements.
### 4.3 Dispersive OPD
To establish equation (8), we explicitly assumed that the optical path
difference $\Delta$ did not depend on the frequency $f$. This is a reasonable
assumption, as the configuration has been defined to approximate this
condition [10]. Let us explain this point further: after the separation of
the two channels by the splitter cube BS1, each of them travels the following
path lengths in glass
$\text{SIG: }c_{1}/2+c_{1}+c_{2}/2\quad\text{;}\quad\text{REF:
}c_{1}/2+p+c_{2}/2$ (89)
where $c_{j}$ is the size of the splitter cube BSj, ($j=1,2$) and $p$ is the
size of the right angle prism RAP.
Equation (89) shows that this right-angle prism plays the same role as the
compensating plate in a Michelson interferometer. However, this compensation
is achieved only if $p=c_{1}$. The optical components used are made of N-BK7
($n=1.505@1064$ nm) with a size of 25.4 mm and a manufacturing tolerance of
$\pm 0.25$ mm. This means that the OPD compensation is realized with an
uncertainty of $\delta\Delta=(n-1)(p-c_{1})\sim\pm 250$ $\mu$m.
We must therefore take into account the possible presence of a compensation
error in our set-up. This will induce a spectral dependence of the position of
the zero OPD and thus make the shape of the interferogram associated with the
first echo asymmetric. When this compensation is perfectly realized in a
Fourier transform spectrometer, the resulting symmetry makes it unnecessary to
record the interferogram for both positive and negative values of the optical
path difference [13].
First of all, we will analyze whether the first echoes recorded from the
uncoated side of our test sample are symmetrical or not, and quantify their
possible asymmetry. To this end, we start by calculating the signal envelope
of the recorded voltage $V_{1}$ using a Hilbert transform [20, 21, 22], namely
$\mathcal{E}_{1}(t)=\mathcal{H}\{V_{1}\}(t)=\frac{1}{\pi}\text{PV}\int\limits_{-\infty}^{+\infty}\frac{V_{1}(\tau)}{t-\tau}d\tau$ (90)
where PV denotes the Cauchy principal value. Then we decompose this envelope
into even and odd parts
$\mathcal{E}_{1}(t)=\frac{\mathcal{E}_{1}(t)+\mathcal{E}_{1}(-t)}{2}+\frac{\mathcal{E}_{1}(t)-\mathcal{E}_{1}(-t)}{2}=\mathcal{E}_{1,e}(t)+\mathcal{E}_{1,o}(t)$
(91)
and quantify the asymmetry using the following quantity $\epsilon$
$\epsilon=\frac{\displaystyle\int\limits_{-\infty}^{+\infty}|\mathcal{E}_{1,o}(t)|\thinspace
dt}{\displaystyle\int\limits_{-\infty}^{+\infty}\mathcal{E}_{1,e}(t)\thinspace
dt}$ (92)
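A minimal numerical version of this asymmetry analysis can be written with SciPy's analytic-signal implementation of the Hilbert transform. Two implementation choices below are ours, not spelled out in the text: the envelope is taken as the modulus of the analytic signal, and the record is centred on the envelope peak before the even/odd split.

```python
import numpy as np
from scipy.signal import hilbert

def asymmetry(v):
    """Asymmetry metric epsilon of equations (90)-(92): split the signal
    envelope into even and odd parts about its maximum and compare their
    integrals. Uniform sampling is assumed, so plain sums stand in for
    the integrals (the common sample spacing cancels in the ratio)."""
    env = np.abs(hilbert(v))              # interferogram envelope
    k0 = int(np.argmax(env))              # centre of the echo
    half = min(k0, len(env) - 1 - k0)
    e = env[k0 - half:k0 + half + 1]      # window symmetric about k0
    even = 0.5 * (e + e[::-1])
    odd = 0.5 * (e - e[::-1])
    return np.sum(np.abs(odd)) / np.sum(even)

# A symmetric Gaussian-windowed cosine should give epsilon close to zero,
# while a skewed envelope gives a clearly larger value:
t = np.linspace(-1, 1, 2001)
eps_sym = asymmetry(np.exp(-t**2 / 0.02) * np.cos(2*np.pi*50*t))
eps_asym = asymmetry(np.exp(-t**2 / 0.02) * (1 + 0.4*t) * np.cos(2*np.pi*50*t))
```

The few-percent $\epsilon$ values reported below for the real interferograms sit between these two synthetic extremes, consistent with a weakly dispersive OPD.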
Figure 14 shows the result of this analysis for two different driving
currents, namely 190 mA and 1000 mA.
Figure 14: Analysis of the asymmetric character of the interferograms recorded
when reflection is from the uncoated face of the test sample [a) and d)
normalized power spectral density - b) and e) variation of the voltage $V_{1}$
as a function of the displacement $z_{1}$ of the translation stage (red curve)
and envelope of this signal (black curve) - c) and f) decomposition of the
envelope (black curve) into even (red curve) and odd (blue curve) parts ; a),
b), and c), driving current of 190 mA - d), e), and f), driving current of
1000 mA].
The asymmetry of the interferogram varies between 3.9% for a driving current
of 190 mA and 5.6% for a driving current of 1000 mA. This clearly confirms the
presence of a weakly dispersive part in the optical path difference of our
set-up.
To determine the influence of such a dispersive part, we have developed a
simulation program of our set-up under MATLAB and introduced in the expression
of the optical path of the REF channel, the crossing of a thickness $e$ of
N-BK7. The expression of the optical path difference then becomes
$\Delta(f)=2vt+e[n_{\text{BK7}}(f)-1]$ (93)
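The simulation hinges on the dispersion $n_{\text{BK7}}(f)$. A minimal sketch of equation (93), with the index computed from the standard three-term Sellmeier form (the coefficients are the usual glass-catalogue values and should be treated as indicative):

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light, m/s

def n_bk7(lam_um):
    """N-BK7 refractive index from the standard three-term Sellmeier
    equation (glass-catalogue coefficients; wavelength in microns)."""
    B = (1.03961212, 0.231792344, 1.01046945)
    C = (0.00600069867, 0.0200179144, 103.560653)
    l2 = np.asarray(lam_um, dtype=float) ** 2
    n2 = 1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

def opd(t, f, v=1e-3, e=50e-6):
    """Optical path difference of equation (93):
    Delta(f) = 2 v t + e [n_BK7(f) - 1], with v the translation speed
    and e the residual (uncompensated) N-BK7 thickness (50 microns here,
    the value used in the simulation described in the text)."""
    lam_um = C_LIGHT / np.asarray(f, dtype=float) * 1e6
    return 2.0 * v * t + e * (n_bk7(lam_um) - 1.0)
```

At $t=0$ the frequency-dependent term $e\,[n_{\text{BK7}}(f)-1]$ is what displaces the zero-OPD position across the spectrum and produces the asymmetric interferogram.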
Figure 15: Influence of the spectral dispersion of the optical path difference
on the measurement result of the reflection phase shift on an anti-reflection
coated interface - a) Spectral dependence of the phase of the discrete Fourier
transform of the echo associated with the uncoated face (UNC-ARC, magenta
curve in the absence of dispersion, cyan curve in the presence of dispersion)
and that associated with the coated face (ARC-UNC, red curve in the absence of
dispersion, blue curve in the presence of dispersion) ; b) Spectral dependence
of the phase shift on the reflection obtained by using the calibration
procedure described in the text (green curve in the absence of dispersion,
black curve in the presence of dispersion, the vertical red line indicating
the centering wavelength of the anti-reflection coating).
This numerical modeling shows that the presence of such a dispersion of
optical path difference has no influence on the result of the reflection
coefficient measurements. Figure 15 shows the impact of this dispersion on the
results of a phase measurement, when the anti-reflection coating is a
SiO2/Nb2O5 bilayer deposited on a fused silica substrate and centered at 1050
nm. On graph a), the magenta and red curves are relative to the case without
dispersion for the two possible orientations of the sample, respectively UNC-
ARC and ARC-UNC. The cyan and blue curves are relative to the dispersive case
(N-BK7 thickness of 50 microns) for these two orientations. The presence of a
dispersive optical path difference causes the appearance of a linear variation
of the phase as a function of wavelength, but the calibration procedure
defined at the end of Section 3.2 makes it possible to fully recover the
sought phase-shift information (see Fig. 15b). This validates
and justifies the implementation of such a procedure. On the other hand, the
presence of a dispersive OPD does not explain the slight oscillations
appearing on our experimental results. Further analysis will therefore be
necessary to understand their origin and to give our phase measurement results
the ultimate metrological quality.
## 5 Conclusion
In this paper, we have demonstrated that the implementation of a numerical
processing transposed from that used in Fourier transform spectrometry to
retro-reflection data acquired by a low-coherence balanced detection
interferometer scanned in optical path difference allows the determination of
the spectral dependence of the reflection coefficient of V-shaped anti-
reflection coatings over a spectral range on the order of 100 nm, with a
spectral resolution of up to 0.2 nm and a detection floor on the order of 0.1
ppm. Moreover, an a posteriori choice of the processing-window width makes it
possible to improve this performance by a factor of 10 (down to 10 ppb) at the
cost of a correlated degradation of the spectral resolution (2 nm).
Our method also allows the determination of the spectral dependence of the
phase shift due to the reflection on a coated interface. However, further
tests seem necessary to quantify the possible influence of a residual spectral
dispersion of the optical path difference of the interferometer on the
metrological quality of this phase measurement.
The modeling of the key parameters of the balanced receiver and their
influence on the signal-to-noise ratio using this new measurement method has
been experimentally verified, in particular by using detection systems from
two different manufacturers.
The lowest coefficient of reflection that can be measured with our set-up is
defined by equation (67), in which two quantities are implicit functions of
$v$, the translation speed, namely
$F_{0}=\frac{2v}{\lambda_{0}}\quad\text{and}\quad\Delta
T=\frac{n_{g}d_{s}}{v}$ (94)
Accordingly, equation (67) can be written in the following equivalent form
$|\rho_{m}(f_{l})|_{q}^{2}=10\left(\frac{\Delta\lambda}{\lambda_{0}}\right)^{2}\frac{2eB}{SP_{\text{max}}}\frac{4v}{F_{s}}\frac{n_{g}d_{s}}{\lambda_{0}^{2}}$
(95)
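Written in this form, the floor's dependence on each experimental parameter can be read off directly. A short sketch of equation (95) follows ($e$ is the electron charge; all parameter values below are illustrative assumptions, not those of the set-up):

```python
E_CHARGE = 1.602176634e-19  # electron charge, C

def rho2_q(dlam, lam0, B, S, P_max, v, Fs, n_g, d_s):
    """Quantum-noise-limited detection floor in the form of equation (95):
    10 (dlam/lam0)^2 * (2 e B)/(S P_max) * (4 v / Fs) * n_g d_s / lam0^2."""
    return (10.0 * (dlam / lam0) ** 2 * (2.0 * E_CHARGE * B) / (S * P_max)
            * (4.0 * v / Fs) * (n_g * d_s) / lam0 ** 2)

# Illustrative parameter set (responsivity S and bandwidth B assumed):
base = rho2_q(82e-9, 1042e-9, 1e5, 0.7, 44e-6, 1e-3, 1e5, 1.46, 2e-3)
# Dividing v by 10 while multiplying Fs by 10 lowers the floor by 100:
slow = rho2_q(82e-9, 1042e-9, 1e5, 0.7, 44e-6, 1e-4, 1e6, 1.46, 2e-3)
```

This is the quantitative basis for the projected three-decade sensitivity gain: the floor scales as $v/F_{s}$, so slowing the scan and sampling faster compound multiplicatively with the receiver improvements.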
This shows that an optimized choice of two of the key parameters of the
balanced receiver (saturation power of the photodiodes $P_{\text{max}}$ and
detection bandwidth $B$), as well as the modification of some of the
experimental conditions (decrease of the translation speed $v$ down to 0.1
mm/s, increase of the sampling frequency $F_{s}$ up to 1 MHz) should make it
possible to significantly improve the sensitivity of our setup. Moreover, this increase in
saturation power will require the use of larger diameter photodiodes, which
will offer more flexibility and tolerance in the alignment procedure. The
application of all these modifications should result in an overall gain of 3
decades and thus would pave the way for the use of this new measurement method
for the characterization of light scattered by optical interfaces, coated or
not.
## Acknowledgments
This work is part of the StrayLight Working Group for the Laser Instrument
Group of the LISA Consortium. The authors would like to thank the AMIdex
Talents Management Program, Aix Marseille University and the French National
Space Center (CNES) for their financial support.
## Disclosures
The authors declare no conflicts of interest.
## Data availability
Data underlying the results presented in this paper are not publicly available
at this time but may be obtained from the authors upon reasonable request.
## References
* [1] H. A. Macleod, Thin-Film Optical Filters, 5th ed. (CRC Press, 2018).
* [2] H. K. Raut, V. A. Ganesh, A. S. Nair, and S. Ramakrishna, "Anti-reflective coatings: A critical, in-depth review," Energy Environ. Sci. 4, 3779 (2011).
* [3] J. A. Dobrowolski and B. T. Sullivan, "Universal antireflection coatings for substrates for the visible spectral region," Appl. Opt. 35, 4993-4997 (1996).
* [4] F. Lemarquis, T. Begou, J. Ruscica, D. Turover, and J. Lumeau "Broadband antireflection coatings for visible and infrared ranges", in International Conference on Space Optics - ICSO 2018, Proc. SPIE 11180, 1118043 (2019).
* [5] R. Okuda, R. Otowa, N. Uehara, "A high isolation thin-film filter coated on a GRIN lens for triple-play services," in Passive Components and Fiber-based Devices III, S. B. Lee, Y. Sun, K. Qiu, S. C. Fleming, I. H. White, eds., Proc. SPIE 6351, 63510S (2006).
* [6] C. Amra, M. Lequime, and M. Zerrad, Electromagnetic Optics of Thin-Film Coatings - Light Scattering, Giant Field Enhancement, and Planar Microcavities, (Cambridge University Press, 2021).
* [7] R. R. Willey, "Witness sample preparation for measuring antireflection coatings," Appl. Opt. 53, A52-A55 (2014).
* [8] H. Cui, B. Li, S. Xiao, Y. Han, J. Wang, C. Gao, and Y. Wang, "Simultaneous mapping of reflectance, transmittance and optical loss of highly reflective and anti-reflective coatings with two-channel cavity ring-down technique," Opt. Express 25, 5807-5820 (2017).
* [9] N. Uehara, R. Okuda, and T. Shidara, "Super antireflection coating at 1.5 $\mu$m," in Optical Interference Coatings, OSA Technical Digest Series (Optica Publishing Group, 2004), paper WA5.
* [10] I. Khan, M. Lequime, M. Zerrad, and C. Amra, "Detection of Ultralow Light Power Back-Reflected or Back-Scattered by Optical Components Using Balanced Low-Coherence Interferometry," Phys. Rev. Applied 16, 044055 (2021).
* [11] P. C. D. Hobbs, "Ultrasensitive laser measurements without tears," Appl. Opt. 36, 903-920 (1997).
* [12] I. Khan, M. Lequime, M. Zerrad, and C. Amra, "Measurement of the spectral dependence of the amplitude and phase properties of laser line antireflection coatings using balanced low coherence interferometry," in Optical Interference Coatings Conference (OIC) 2022, R. Sargent and A. Sytchkova, eds., Technical Digest Series (Optica Publishing Group, 2022), paper ThB.8.
* [13] P. Fellgett, "I. - les principes généraux des méthodes nouvelles en spectroscopie interférentielle - A propos de la théorie du spectromètre interférentiel multiplex," J. Phys. Radium 19, 187-191 (1958).
* [14] J. Connes and P. Connes, "Near-Infrared Planetary Spectra by Fourier Spectroscopy. I. Instruments and Results," J. Opt. Soc. Am. 56, 896-910 (1966).
* [15] S. T. Thurman and J. R. Fienup, "Signal-to-noise ratio trade-offs associated with coarsely sampled Fourier transform spectroscopy," J. Opt. Soc. Am. A 24, 2817-2821 (2007).
* [16] N. Matallah, H. Sauer, F. Goudail, J.-C. Fontanella, Y. Ferrec, J. Taboury, P. Chavel, "Design and first results of a Fourier Transform imaging spectrometer in the 3-5 $\mu$m range," in Optical Design and Engineering IV, L. Mazuray, R. Wartmann, A. Wood, J.-L. M. Tissot, J. M. Raynor, eds., Proc. SPIE 8167, 81671S (2011).
* [17] Y. Ferrec, PhD thesis, "Spectro-imagerie aéroportée par transformée de Fourier avec un interféromètre à décalage latéral : réalisation et mise en oeuvre", Université Paris Sud 11, Orsay, France (2008).
* [18] D. Marcuse, "Gaussian approximation of the fundamental modes of graded-index fibers," J. Opt. Soc. Am. 68, 103-109 (1978).
* [19] https://lightmachinery.com/
* [20] P. Pavlicek and V. Michalek, “White-light interferometry - envelope detection by Hilbert transform and influence of noise,” Opt. Lasers Eng. 50, 1063–1068 (2012).
* [21] T. Pikalek, T. Fort, and Z. Buchta, "Detection techniques in low-coherence interferometry and their impact on overall measurement accuracy," Appl. Opt. 53, 8463-8470 (2014).
* [22] L. Xin, Z. Yang, J. Dou, Z. Liu, Z. Gao, and X. Zhang, "Hilbert transform-based envelope substitution method for non-uniform sampling signal correction in white-light interferometry," OSA Continuum 3, 824-834 (2020).
# Making hot Jupiters in stellar clusters: the importance of binary exchange
Daohai Li,1 Alexander J. Mustill,2 Melvyn B. Davies3 and Yan-Xiang Gong4
1Department of Astronomy, Beijing Normal University, No.19, Xinjiekouwai St,
Haidian District, Beijing, 100875, P.R.China
2Lund Observatory, Department of Astronomy and Theoretical Physics, Lund
University, Box 43, SE-221 00 Lund, Sweden
3Centre for Mathematical Sciences, Lund University, Box 118, 221 00 Lund,
Sweden
4College of Physics and Electronic Engineering, Taishan University, Taian 271000, China
E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>(DL)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
It has been suggested that the occurrence rate of hot Jupiters (HJs) in open
clusters might reach several per cent, significantly higher than that of the
field ($\sim$ a per cent). In a stellar cluster, when a planetary system
scatters with a stellar binary, it may acquire a companion star which may
excite large amplitude von Zeipel-Lidov-Kozai oscillations in the planet’s
orbital eccentricity, triggering high-eccentricity migration and the formation
of an HJ. We quantify the efficiency of this mechanism by modelling the
evolution of a gas giant around a solar mass star under the influence of
successive scatterings with binary and single stars. We show that the chance
that a planet $\in(1,10)$ au becomes an HJ in a Gyr in a cluster of stellar
density $n_{*}=50$ pc-3 and binary fraction $f_{\mathrm{bin}}=0.5$ is about 2%
and an additional 4% are forced by the companion star into collision with or
tidal disruption by the central host. An empirical fit shows that the total
percentage of those outcomes asymptotically reaches an upper limit determined
solely by $f_{\mathrm{bin}}$ (e.g., $10\%$ at $f_{\mathrm{bin}}=0.3$ and 18%
at $f_{\mathrm{bin}}=1$) on a timescale inversely proportional to $n_{*}$
($\sim$ Gyr for $n_{*}\sim 100$ pc-3). The ratio of collisions to tidal
disruptions is roughly a few, and depends on the tidal model. Therefore, if
the giant planet occurrence rate is 10 %, our mechanism implies an HJ
occurrence rate of a few times 0.1 % in a Gyr and can thus explain a
substantial fraction of the observed rate.
###### keywords:
planets and satellites: dynamical evolution and stability – planets and
satellites: formation – open clusters and associations: general – binaries:
general
††pubyear: 2015††pagerange: Making hot Jupiters in stellar clusters: the
importance of binary exchange–Making hot Jupiters in stellar clusters: the
importance of binary exchange
## 1 Introduction
Most stars form in a cluster together with tens to thousands of siblings (Lada
& Lada, 2003). As a prominent example, our Sun probably originated from a
cluster with a few thousand stars (Adams, 2010). Such a cluster environment
may intuitively seem hostile to planet formation, as the UV radiation from the
more massive cluster members may destroy the proto-planetary disc (e.g.,
Scally & Clarke, 2001; Adams et al., 2006; Winter et al., 2018; Nicholson et
al., 2019) and the stellar scattering may disperse either the disc (e.g.,
Pfalzner et al., 2005; Olczak et al., 2012; Vincke & Pfalzner, 2016; Portegies
Zwart, 2016) or the already-formed planets (e.g. Spurzem et al., 2009;
Malmberg et al., 2011; Li & Adams, 2015; Cai et al., 2017; Li et al., 2019;
van Elteren et al., 2019).
However, it turns out for open clusters in general, these effects are mild and
the planet formation/survivability within a few tens of au is not likely to be
affected by the cluster environment (Laughlin & Adams, 1998; Adams & Laughlin,
2001; Adams et al., 2006; Malmberg et al., 2011; Hao et al., 2013; Li & Adams,
2015; Cai et al., 2017; Fujii & Hori, 2019; Li et al., 2019, 2020a, 2020b). An
obvious example is again our solar system that originated from a sizeable
cluster but managed to retain objects out to at least tens of au. Thus one
would expect that planets around stars in open clusters should look similar to
those orbiting field stars. The observations are still sparse with only a
dozen planets found in clusters (Meibom et al., 2013; Quinn et al., 2012,
2014; Brucalassi et al., 2016; Obermeier et al., 2016; Ciardi et al., 2017;
Rizzuto et al., 2018; Livingston et al., 2019) but the data do not seem to
disagree with this inference (e.g., Meibom et al., 2013; Brucalassi et al.,
2017; Takarada et al., 2020).
Nonetheless, one exception may be the hot Jupiters (HJs) which seem
tentatively more populous in (some) open clusters. A radial velocity survey by
Quinn et al. (2012) reported the discovery of two HJs in the metal rich
([Fe/H]$\sim$0.19) 600-Myr-old Praesepe cluster and an occurrence rate of 3.8
% was derived. The detection of an HJ in the Hyades cluster (also metal rich
with [Fe/H]$\sim$0.13 and about 600 Myr old) was made by Quinn et al. (2014)
and combining the previous (non-detection) result (Paulson et al., 2004) the
authors estimated that the average HJ occurrence rate in Praesepe and Hyades
was 2.0 %. However, it is well known that the HJ/giant planet occurrence rate
in the field correlates with the host star’s metallicity (e.g., Gonzalez,
1997). After correcting for the solar metallicity, the derived occurrence rate
of HJs for the two open clusters became 1 % (Quinn et al., 2014), in good
agreement with that of the field ($\sim 1.2\%$; e.g., Wright et al., 2012).
Another radial velocity survey by Brucalassi et al. (2014, 2016) of the solar-
metallicity and solar-age cluster M67 yielded 3 HJs, leading to an occurrence
rate of 5.6 % considering only single-star hosts and 4.5 % considering all
stars. More recently, Takarada et al. (2020) looked into close-in giant
planets in the young 100-Myr-old, solar-metallicity open cluster Pleiades and
their non-detection has given rise to an upper limit of 11 % for the HJ
occurrence rate in that cluster.
It therefore seems that the HJ occurrence rate in open clusters is no smaller
than that of the field, and in some occasions appreciably higher. This is,
however, under debate, and more data are needed.
The formation of HJs has recently been reviewed by Dawson & Johnson (2018).
There are three competing theories: in-situ, disc migration, and high
eccentricity (high-$e$) migration. In the former two cases, the HJs form in
the presence of the gaseous disc so the planet’s eccentricity probably remains
low because of disc damping while in the latter scenario, the planet’s
eccentricity is highly excited, leading to a very small pericentre distance;
then strong tidal interactions are activated, which shrink and circularise the
planet’s orbit, forming an HJ. Notably, four out of the six HJs discovered in
open clusters as discussed above have moderate eccentricities $\gtrsim 0.1$,
lending support to high-$e$ migration formation. Mechanisms able to excite
high eccentricities include the planets’ interaction (either direct scattering
or secular forcing; e.g., Rasio & Ford, 1996; Wu & Lithwick, 2011) or
perturbation by a distant companion star/planet (e.g., Wu & Murray, 2003;
Fabrycky & Tremaine, 2007; Malmberg et al., 2007a; Naoz et al., 2011) through
the von Zeipel-Kozai-Lidov mechanism (ZKL; von Zeipel, 1909; Kozai, 1962;
Lidov, 1962).
As mentioned above, in an open cluster, the planetary system’s configuration
is not expected to be modified by the environment, but perhaps the cluster can
boost the formation of HJs via nudging the much further-out stellar companion
(or planets, see Wang et al., 2020b; Rodet et al., 2021; Wang et al., 2022,
and Section 6 for a discussion). But how does a planetary system acquire such
a companion star in the first place?
Stellar binary-single scattering might be a solution. When a stellar binary
scatters with a planetary system (effectively a single star as far as the
stellar dynamics are concerned), the latter may exchange with a component of
the binary and a new binary composed of the planetary system and the other
original component forms (e.g. Heggie, 1975; Hut & Bahcall, 1983). Li et al.
(2020b) showed that when scattering with a binary star, a planetary system
may, while remaining intact during the scattering, acquire a distant companion
star. For the Sun-Jupiter system, this scenario happens at a rate an order of
magnitude higher than that of the planet’s ejection. For an open cluster with
a stellar number density of 50 pc-3 and a binarity (the fraction of binary
systems among all systems) of 0.5, the Sun-Jupiter system has a chance of 10%
to obtain a companion within 100 Myr. Li et al. (2020b) also estimated that
half of those so-acquired companion stars can activate ZKL cycles in the
planet’s orbital evolution. But how efficiently can this process initiate
high-$e$ migration and create HJs? This is the question we want to answer in
this work.
The paper is organised as follows. In Section 2.2, we describe the simulation
strategy, detailing the modelling of the scattering and the tidal dissipation
model. The relevant timescales are compared in Section 3. We present two sets
of test simulations in Section 4 where a Jupiter mass planet is placed at 5
au. In Section 5 we show our population synthesis simulations where the
planets’ initial orbits are taken from the observed population and the cluster
properties are varied within reasonable ranges. We discuss the implications
and present the main results in Section 6 and Section 7, respectively.
## 2 Method
In open clusters, a planetary system may encounter more than one stellar
system (e.g., Malmberg et al., 2007b) and when not interacting with scattering
stars, the system evolves on its own. In the following, we first describe how
the stellar scatterings are generated and then how the simulation is designed.
### 2.1 Creation of the scatterings
We refer to the stellar systems that scatter with the planetary system as
scatterers; each can be a single star or a binary. The rate at which the
planetary system encounters a scatterer can be estimated with
$\Gamma=n_{*}\sigma v_{\mathrm{inf}},$ (1)
where $n_{*}$ is the number density of the stellar systems in the cluster
(including both single and binary systems), $v_{\mathrm{inf}}$ the relative
velocity of the scattering and $\sigma$ the encounter cross section. While
typical values for the former two can be found in standard references (e.g.,
Binney & Tremaine, 2008), $\sigma$ will be defined here and therefore
prescribes $\Gamma$. We want $\sigma$ to be large enough that no important
encounter is missed, but not so large that numerous weak, unimportant
scatterings overwhelm the simulation. Therefore, $\sigma$ has to be chosen
according to the outcome of the encounter.
The outcome of a scattering event critically depends on how close the
planetary system and the scatterer get and this distance can be linked to the
scatterer mass, impact parameter $b$, and $v_{\mathrm{inf}}$ via gravitational
focusing. Li et al. (2020b) performed scattering experiments between the Sun-
Jupiter system and a stellar binary/single star, varying the stellar mass,
orbital semimajor axis (if the scatterer is a binary), and $v_{\mathrm{inf}}$.
They found that the maximum impact parameter $b_{\mathrm{max}}$ at which an
encounter might still lead to disruption/exchange events can be approximated
by (the top equation in their figure 2 and after a little algebraic
manipulation)
$\log{b_{\mathrm{max}}(m_{\mathrm{tot}},a_{\mathrm{tot}},v_{\mathrm{inf}})\over
1\,\mathrm{au}}=2.04+0.51\log{m_{\mathrm{tot}}\over
1\,\mathrm{M}_{\odot}}+0.49\log{a_{\mathrm{tot}}\over
1\,\mathrm{au}}-1.00\log{v_{\mathrm{inf}}\over 1\,\mathrm{km\,s}^{-1}},$ (2)
in which $a_{\mathrm{tot}}=a_{\mathrm{pl}}+a_{\mathrm{bin}}$ is the sum of the
planetary semimajor axis and that of the scattering binary in au (if the
scatterer is a single star, $a_{\mathrm{bin}}=0$); $m_{\mathrm{tot}}$ is the
total mass of all the objects in Solar masses (M⊙); and $v_{\mathrm{inf}}$ is
in km s-1. Therefore, in order not to miss any important encounter, the cross
section has to be at least $\pi b^{2}_{\mathrm{max}}$ which depends on the
property of the scatterer and the planetary system as above.
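Equation (2) and the associated minimum cross section are straightforward to evaluate; the following is a minimal Python sketch (the function names are ours, and base-10 logarithms are assumed, as in the original fit):

```python
import math

def b_max_au(m_tot_msun, a_tot_au, v_inf_kms):
    """Maximum impact parameter (au) for a potentially disruptive
    encounter, Equation (2) (fit from Li et al. 2020b)."""
    log_b = (2.04
             + 0.51 * math.log10(m_tot_msun)
             + 0.49 * math.log10(a_tot_au)
             - 1.00 * math.log10(v_inf_kms))
    return 10.0 ** log_b

def min_cross_section_au2(m_tot_msun, a_tot_au, v_inf_kms):
    """Smallest cross section that misses no important encounter,
    pi * b_max**2 (au^2)."""
    return math.pi * b_max_au(m_tot_msun, a_tot_au, v_inf_kms) ** 2
```

For the extreme combination considered in this section (total mass of roughly 21 M⊙, $a_{\mathrm{tot}}\approx 1000$ au, $v_{\mathrm{inf}}=0.01$ km s-1), `b_max_au` returns about $1.5\times 10^{6}$ au, i.e. roughly 7 pc, consistent with the $b_{\mathrm{ext}}\sim 8$ pc estimate quoted below.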
Now we describe how the scatterers are created. Whether the scatterer is a
single star or a binary stellar system, the stellar mass is drawn independently from the
initial mass function by Kroupa (2001) with a lower limit of 0.1 M⊙ and an
upper limit of 10 M⊙. For a binary scatterer, the relative orbit is created
following the observed distribution of solar type binaries in the field
(Duquennoy & Mayor, 1991; Raghavan et al., 2010) as done in Li et al. (2020b).
As tight binaries behave effectively like a single star (Li et al., 2020b), in
our simulation, if the binary semimajor axis $a_{\mathrm{bin}}<1$ au, the two
components are merged as a single object. The upper limit for the
$a_{\mathrm{bin}}$ has been set to 1000 au, roughly where the binary becomes
soft in an open cluster (see Figure 1 below). The value of $v_{\mathrm{inf}}$
is drawn from a Maxwellian distribution with a mean of 1 km s-1. Finally, we draw
$b\in(0,b_{\mathrm{ext}})$ such that the probability distribution function is
proportional to $b$. Here the constant $b_{\mathrm{ext}}$ is the extreme value
for $b$ such that any encounter with $b>b_{\mathrm{ext}}$ cannot be important
in our simulations (so $b_{\mathrm{ext}}\geq b_{\mathrm{max}}$ for any
scattering parameter) and its determination will be discussed next.
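The two random draws described above can be sketched in a few lines of Python (the function names are ours; the scatterer masses are assumed to come from a separate Kroupa-IMF sampler not shown here):

```python
import math
import random

def sample_v_inf_kms(mean_kms=1.0, rng=random):
    """Draw v_inf from a Maxwellian with the given mean; the scale
    sigma follows from mean = 2*sigma*sqrt(2/pi)."""
    sigma = mean_kms * math.sqrt(math.pi / 8.0)
    return math.hypot(rng.gauss(0.0, sigma),
                      rng.gauss(0.0, sigma),
                      rng.gauss(0.0, sigma))

def sample_b_au(b_ext_au, rng=random):
    """Draw the impact parameter with PDF proportional to b on
    (0, b_ext): inverse-transform sampling gives b = b_ext*sqrt(u)."""
    return b_ext_au * math.sqrt(rng.random())
```

The square-root transform follows because the CDF of a PDF proportional to $b$ is $(b/b_{\mathrm{ext}})^{2}$; note that `math.hypot` with three arguments requires Python 3.8 or later.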
The value of $b_{\mathrm{ext}}$ is effectively the same as the largest
possible $b_{\mathrm{max}}$ required by the combination of the highest
scatterer mass, largest $a_{\mathrm{bin}}$, and smallest $v_{\mathrm{inf}}$.
We consider a binary scatterer with the two components each of
$10\mathrm{M}_{\odot}$, $a_{\mathrm{bin}}=1000$ au, and
$v_{\mathrm{inf}}=0.01$ km s-1. Though $v_{\mathrm{inf}}$ can be arbitrarily
small, we note 0.01 km s-1 $\approx$ 0.01 pc Myr-1 and it takes such an
encounter a few hundred Myr to traverse a typical open cluster of a few pc
so the chance of these slow encounters is small. With these extreme values,
$b_{\mathrm{ext}}\sim 8$ pc and thus $\sigma_{\mathrm{ext}}\sim 2\times
10^{2}$ pc2. The scattering frequency is consequently $\Gamma=200n_{*}$ Myr-1
(where $n_{*}$ is in pc-3 and we have taken $v_{\mathrm{inf}}=1$ km s-1),
suggesting that the scatterings are extremely frequent. But a significant
proportion of the encounters will not do anything appreciable either to the
planetary system or to the scatterer (if binary) and do not need to be
considered because their $b$ is larger than the respective $b_{\mathrm{max}}$.
Therefore, upon creating a scatterer with its time of encounter, mass, orbit,
$b$ and $v_{\mathrm{inf}}$, if $b>b_{\mathrm{max}}$, we simply omit this
encounter and proceed to the next.
If the planetary system has an outer stellar companion, we will need to
account for the evolution of the companion orbit during the encounters as
well. Hence, in Equation (2), now
$a_{\mathrm{tot}}=a_{\mathrm{com}}+a_{\mathrm{bin}}$, the sum of the binary
and the companion semimajor axes.
Those scattering events are simulated in a way similar to FEWBODY (Fregeau et
al., 2004). On initiation, with the scatterer mass, $v_{\mathrm{inf}}$ and
$b$, we analytically move it to a distance such that the tidal perturbation on
the star-planet/stellar binary system relative to their internal forcing is
smaller than $10^{-4}$. During the scattering, we look for stable
binary/triple systems recursively and the scattering is deemed finished if all
triples are stable and/or the tidal perturbation on any binary by any other
object is small again.
### 2.2 Simulation strategy
In between the stellar encounters, the planetary system (if there is no
companion star) evolves on its own and this can be tracked analytically
following a two-body prescription. If there is a companion star or if the
planet’s pericentre distance $r_{\mathrm{peri,pl}}$ is small so general
relativity (GR) and tides are important, the planet’s orbit is propagated
numerically (GR and tides are also included during the scattering if needed).
In our implementation, the GR effect is approximated by the leading order
post-Newtonian potential (Kidder, 1995) following Bolmont et al. (2015). The
equilibrium tidal model (Hut, 1981) is adopted in this work as done in Bolmont
et al. (2015). In the formation of HJs, however, the planetary orbit can be
extremely eccentric and thus dynamical tides involving different modes in the
planet’s/star’s oscillation become important. Here we follow Beauge & Nesvorny
(2012) and simply vary the tidal quality factor $Q_{\mathrm{tide}}$ according
to $r_{\mathrm{peri,pl}}$ and the planet’s eccentricity $e_{\mathrm{pl}}$ to
mimic the said effect
$Q_{\mathrm{tide}}=10^{200e_{\mathrm{pl}}^{2}({r_{\mathrm{peri,pl}}\over
1\mathrm{au}}-0.022)}Q_{\mathrm{tide,0}},$ (3)
where $Q_{\mathrm{tide,0}}$ is $10^{7}$ and $5\times 10^{6}$ for the host star
and the planet respectively. Beauge & Nesvorny (2012) found the above formula
able to reproduce the migration and circularisation timescale predicted by the
dynamical tidal model of Ivanov & Papaloizou (2011) fairly well. Equation (3)
allows for an easy correction for the qualitative features of dynamical tides
within the framework of the equilibrium tidal model. When
$r_{\mathrm{peri,pl}}<0.022$ au, $Q_{\mathrm{tide}}<Q_{\mathrm{tide,0}}$ and
in the meantime, if $1-e_{\mathrm{pl}}\ll 1$, $Q_{\mathrm{tide}}$ can be much
smaller than $Q_{\mathrm{tide,0}}$ and tidal dissipation is efficient.
Otherwise, $Q_{\mathrm{tide}}>Q_{\mathrm{tide,0}}$ and tides are ineffective.
This set of tidal parameters is adopted throughout the work unless explicitly
stated otherwise. In a subset of the simulations, we have also introduced an
enhanced tidal model, where $Q_{\mathrm{tide,0}}$ is reduced by a factor of
ten compared to the values above to mimic more efficient tidal damping models
(e.g., Wu, 2018).
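Equation (3) translates into a one-line function; a minimal sketch (the function name is ours):

```python
def q_tide(e_pl, r_peri_au, q_tide_0):
    """Orbit-dependent tidal quality factor, Equation (3)
    (Beauge & Nesvorny 2012 parametrisation).  For
    r_peri < 0.022 au and e_pl close to 1, Q drops well below
    Q_tide_0 and tidal dissipation becomes efficient."""
    return 10.0 ** (200.0 * e_pl ** 2 * (r_peri_au - 0.022)) * q_tide_0
```

For example, at $e_{\mathrm{pl}}=0.99$ and $r_{\mathrm{peri,pl}}=0.01$ au the planetary quality factor drops from $5\times 10^{6}$ to roughly $2\times 10^{4}$.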
Tidal evolution has to do with the exchange of energy and angular momenta of
the orbital motion and spins of the star/planet. While the tidal deformation
in the planet is the main driver for the orbital circularisation, that in the
star is able to further modify the orbit afterwards. As the planet’s spin
angular momentum is much less than that of its orbital motion, the orbital
angular momentum is effectively conserved in the first stage.
In this work, we stop the propagation of a planetary system once the planet’s
apocentre distance $r_{\mathrm{apo,pl}}$ drops below 1 au in order to save
computational time. We deem that beyond this point the outcome is settled: the
small planetary orbit makes the system effectively immune to further external
perturbation, so the formation of an HJ can no longer be disrupted by
subsequent encounters. Up to this point, the planet’s orbit is still highly
eccentric and only the planetary tide is important in our simulation, enabling
a simple handling of the spins of the two objects.
small mass and physical radius, carries a much smaller spin angular momentum
than that of the orbital motion and its spin is aligned and
(pseudo-)synchronised with the orbital angular velocity at pericentre on a
timescale much shorter than that of orbital evolution (e.g., Hut, 1981;
Correia, 2009). Hence, we simply let the spin of the planet be in that status
in our simulation (e.g., Hamers & Tremaine, 2017). Though during the ZKL
cycles, the stellar spin may evolve in a very complicated manner (Storch et
al., 2014), it is not expected to affect the orbital dynamics during the
planet’s orbital shrinkage but may play a significant role later on (e.g.,
Fabrycky & Tremaine, 2007), beyond the scope of this work. We let the stellar
spin be the current solar value and align it with the initial orbital plane of
the planet.
Both GR and tides are only effective when $r_{\mathrm{peri,pl}}$ is small. In
our code, the two are activated only if $r_{\mathrm{peri,pl}}<0.05$ au (ten
solar radii).
The code checks the planet’s orbital elements at the beginning and at the end
of all scatterings and also routinely does so outside scatterings. If the
planet’s apocentre distance drops below 1 au and, at the same time, the
pericentre distance is below 0.02 au, we deem that an HJ forms.
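This HJ criterion amounts to a simple check on the osculating elements; a sketch (the function name is ours):

```python
def is_hot_jupiter(a_pl_au, e_pl):
    """HJ criterion used in the text: apocentre below 1 au and
    pericentre below 0.02 au."""
    r_apo = a_pl_au * (1.0 + e_pl)
    r_peri = a_pl_au * (1.0 - e_pl)
    return r_apo < 1.0 and r_peri < 0.02
```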
Finally, the Bulirsch-Stoer integrator available in the MERCURY package
(Chambers, 1999) is adopted for propagating the state vectors of the objects
using an error tolerance of $10^{-12}$. And collisions between the objects are
also detected using the subroutines in MERCURY. The planetary system is
followed for 1 Gyr and the simulation is stopped if the planet becomes an HJ
or does not orbit the original host anymore.
## 3 Timescales
Numerous authors have examined high-$e$ migration in different contexts (e.g.,
Wu & Murray, 2003; Fabrycky & Tremaine, 2007, and see Naoz (2016) for a
review). We would like to briefly review some of the relevant timescales.
Under the point mass Newtonian gravity assumption, a companion star may
excite the planet’s orbital eccentricity to arbitrarily large values
via the ZKL mechanism (e.g., Ford et al., 2000; Takeda et al., 2008). This
picture changes as the ZKL cycles may be suppressed by other perturbations,
for example, the short-range forces GR and/or tides as discussed in this work,
exerting faster precession in the planet’s orbit (e.g., Naoz et al., 2013; Liu
et al., 2015; Naoz, 2016). The respective expressions for these timescales are
as follows. That of the ZKL cycle is (Antognini, 2015)
$t_{\mathrm{ZKL}}\sim{8\over 15\pi}{m_{\mathrm{host}}+m_{\mathrm{com}}\over
m_{\mathrm{com}}}{P^{2}_{\mathrm{com}}\over
P_{\mathrm{pl}}}(1-e^{2}_{\mathrm{com}})^{3/2},$ (4)
where $m_{\mathrm{host}}=1$ M⊙ is the mass of the planetary host star and the
companion mass $m_{\mathrm{com}}=0.3$ M⊙; $P_{\mathrm{com}}$ and
$P_{\mathrm{pl}}$ are the orbital periods of the companion star and the
planet, respectively. The timescale of the leading order GR precession is
(Naoz et al., 2013)
$t_{\mathrm{GR}}\sim{2\pi\over
3}{a^{5/2}_{\mathrm{pl}}c^{2}\over(Gm_{\mathrm{host}})^{3/2}}(1-e^{2}_{\mathrm{pl}}),$
(5)
where $c$ is the speed of light and $G$ the gravitational constant. Finally,
the tidal bulges raised on the planet lead to orbital precession on a
timescale (Naoz, 2016)
$t_{\mathrm{Tide}}\sim{m_{\mathrm{pl}}a^{13/2}_{\mathrm{pl}}\over
G^{1/2}k_{2}m_{\mathrm{host}}(m_{\mathrm{host}}+m_{\mathrm{pl}})^{1/2}R^{5}_{\mathrm{pl}}}{(1-e^{2}_{\mathrm{pl}})^{5}\over
1+{3\over 2}e^{2}_{\mathrm{pl}}+{1\over 8}e^{4}_{\mathrm{pl}}},$ (6)
where $k_{2}=0.38$ is the planet’s Love number (the same as the Jovian value
Bolmont et al., 2015) and $R_{\mathrm{pl}}=7\times 10^{4}$ km its physical
radius.
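To make the comparison in Figure 1 concrete, the ZKL and GR timescales of Equations (4) and (5) can be evaluated numerically; a sketch in Python (the function names and unit choices are ours):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
MSUN = 1.989e30    # solar mass, kg
AU = 1.496e11      # astronomical unit, m
YR = 3.156e7       # year, s

def period_yr(a_au, m_msun):
    """Keplerian period in years (a in au, mass in solar masses)."""
    return math.sqrt(a_au ** 3 / m_msun)

def t_zkl_yr(a_pl_au, a_com_au, e_com, m_host=1.0, m_com=0.3):
    """Quadrupole ZKL timescale, Equation (4)."""
    p_pl = period_yr(a_pl_au, m_host)
    p_com = period_yr(a_com_au, m_host + m_com)
    return (8.0 / (15.0 * math.pi)
            * (m_host + m_com) / m_com
            * p_com ** 2 / p_pl
            * (1.0 - e_com ** 2) ** 1.5)

def t_gr_yr(a_pl_au, e_pl, m_host=1.0):
    """Leading-order GR precession timescale, Equation (5)."""
    a = a_pl_au * AU
    t_s = (2.0 * math.pi / 3.0
           * a ** 2.5 * C ** 2 / (G * m_host * MSUN) ** 1.5
           * (1.0 - e_pl ** 2))
    return t_s / YR
```

For $a_{\mathrm{pl}}=5$ au, $a_{\mathrm{com}}=400$ au, and $e_{\mathrm{com}}=0.7$, the ZKL timescale is of order 1 Myr, while at $r_{\mathrm{peri,pl}}=0.022$ au ($e_{\mathrm{pl}}\approx 0.9956$) the GR timescale is of order 10 Myr, so the ZKL effect dominates the precession there.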
In the top panel of Figure 1, we show the precession timescales by the leading
order ZKL effect in red, GR in blue and tides in purple (the relevant
expressions are taken from Antognini, 2015; Naoz et al., 2013; Naoz, 2016) as
a function of $a_{\mathrm{pl}}$ fixing the planet pericentre
$r_{\mathrm{peri,pl}}=0.022$ au (where tidal effects become efficient in our
model; see Equation (3)). The companion orbit has been fixed at
$a_{\mathrm{com}}=400$ (solid line) or 200 au (dashed) and
$e_{\mathrm{com}}=0.7$ (see Figure 5 below for companion orbits from the
simulations). Reading from the plot, the ZKL timescale decreases with
$a_{\mathrm{pl}}$ while those of GR and tides increase with it. For
$a_{\mathrm{pl}}\gtrsim 1-2$ au, the planet’s orbital precession is mainly
driven by the companion star. Otherwise, those by GR and tides take over.
Whereas the ZKL timescale is insensitive to $e_{\mathrm{pl}}$, both GR and
tides depend critically on it (or $r_{\mathrm{peri,pl}}$) and the larger the
$r_{\mathrm{peri,pl}}$, the longer the latter two timescales. This means for
$r_{\mathrm{peri,pl}}>0.022$ au, the ZKL effect prevails (at least for
$a_{\mathrm{pl}}\gtrsim 1-2$ au) and can excite $e_{\mathrm{pl}}$ to the point
where tides are important.
In the middle panel, these timescales are shown as a function of
$r_{\mathrm{peri,pl}}$ (or equivalently $e_{\mathrm{pl}}$), now fixing
$a_{\mathrm{pl}}$ at 5 au. The timescale of the ZKL mechanism does not depend
on $e_{\mathrm{pl}}$/$r_{\mathrm{peri,pl}}$ and is shown as the red horizontal
lines. Both GR and tides depend on $r_{\mathrm{peri,pl}}$ and the latter more
steeply. For $r_{\mathrm{peri,pl}}\gtrsim 0.012$ au, the ZKL timescale is the
shortest among the three. This strengthens the above argument and means that
for $a_{\mathrm{pl}}=5$ au, $r_{\mathrm{peri,pl}}$ may be lowered to $\sim$
0.012 au by the ZKL mechanism uninterruptedly.
Embedded in an open cluster, the companion star, once obtained by the
planetary system, is subject to further stellar scattering and may be thus
stripped. The lifetime of the central host–companion binary can be estimated
through
$\tau_{\mathrm{com}}\sim{1\over n_{*}\sigma_{\mathrm{disp}}v_{\mathrm{inf}}},$
(7)
where the scattering velocity $v_{\mathrm{inf}}=1$ km s-1 and the cluster’s
stellar density $n_{*}\in(10,200)$ pc-3. And $\sigma_{\mathrm{disp}}$ is the
cross section for the companion star’s disruption from the planetary host
star. When the binary is hard, the term disruption means exchange so the
original host-companion pair ceases to exist and the relevant expression can
be found in Bacon et al. (1996); when the binary is soft, disruption
additionally includes ionisation and we refer to Hut & Bahcall (1983) for the
formulae. From those expressions, $\sigma_{\mathrm{disp}}$ depends on
$m_{\mathrm{host}}=1$ M⊙, $m_{\mathrm{com}}=0.3$ M⊙, the scatterer mass
$m_{\mathrm{scat}}=0.3$ M⊙, $a_{\mathrm{com}}$, $e_{\mathrm{com}}=0.7$ and
$v_{\mathrm{inf}}=1$ km s-1. The bottom panel of Figure 1 displays this
timescale in black compared to that of the ZKL mechanism in red as a function
of $a_{\mathrm{com}}$ for different $a_{\mathrm{pl}}$ and $n_{*}$. The former
behaves discontinuously at $a_{\mathrm{com}}\sim 1000$ au where the hard–soft
boundary lies. Importantly, the figure shows clearly that for
$a_{\mathrm{pl}}\in(1,10)$ au and $n_{*}\in(10,200)$ pc-3, a companion star
a few hundred au away can enforce full ZKL cycles before it is removed by
stellar scattering.
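Equation (7) is a straightforward rate inversion; a minimal sketch (the function name is ours, and the disruption cross section $\sigma_{\mathrm{disp}}$ must be supplied from the Bacon et al. (1996) / Hut & Bahcall (1983) formulae, which we do not reproduce):

```python
def companion_lifetime_myr(n_star_pc3, sigma_disp_pc2, v_inf_kms=1.0):
    """Companion lifetime against scattering disruption, Equation (7).
    Uses 1 km/s ~ 1.0227 pc/Myr so the result comes out in Myr."""
    v_pc_per_myr = 1.0227 * v_inf_kms
    return 1.0 / (n_star_pc3 * sigma_disp_pc2 * v_pc_per_myr)
```

The lifetime scales inversely with the stellar density, which is why the black curves in the bottom panel of Figure 1 shift down by a factor of 20 between $n_{*}=10$ and 200 pc-3.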
Figure 1: Timescales of orbital precession caused by different mechanisms. The
three panels show the timescales for the planet’s orbital precession owing to
the ZKL mechanism of a companion star in red (different line type for
different $a_{\mathrm{pl}}$ and $a_{\mathrm{com}}$), the GR effect in blue,
tides in purple, and the lifetime of the companion star in black (different
line type for different $n_{*}$).
We have shown that in our model, a companion star at a few hundred au can,
via the ZKL mechanism, pump the planet’s eccentricity high enough to trigger
efficient tidal evolution, without being interrupted by tidal/GR precession or
by scattering stripping of the companion. But we caution that the planet’s
tidal orbital shrinkage and dynamical decoupling from the companion star may
take many ZKL cycles (e.g., Wu & Murray, 2003; Fabrycky & Tremaine, 2007;
Anderson et al., 2016, and see Figure 2 below), so the companion star has to
survive much longer than the ZKL timescale for this process to complete.
Therefore, the requirement that the ZKL timescale be shorter than the
companion lifetime is necessary but not sufficient for high-$e$ migration. In
the following, we present a few concrete examples.
## 4 Test simulations
To validate our code, we first perform two sets of test simulations. In these
simulations, the planet host is the Sun. In the first, the planet is much like
our Jupiter with a circular orbit at 5 au from the central host. The normal
tidal model (with $Q_{\mathrm{tide,0}}=10^{7}$ and $5\times 10^{6}$ for the
host star and the planet) is adopted. A total of 3000 runs are done. This set
of simulations is referred to as our “Jupiter” run. In the second set, the
only difference is that the tidal quality factors are reduced by a factor of
ten (with $Q_{\mathrm{tide,0}}=10^{6}$ and $5\times 10^{5}$) and called
“JupEnT” (Jupiter enhanced tides). In both, the cluster property is $n_{*}=50$
pc-3 and binarity $f_{\mathrm{bin}}=0.5$ (the number of binary systems divided by
the sum of single and binary systems). The simulation parameters are listed in
Table 1.
Table 1: Initial setup of the simulations. The first column is the simulation designation, the second the cluster stellar number density $n_{*}$, the third the binarity $f_{\mathrm{bin}}$ (the total number of binary star systems divided by the sum of the number of binary and single star systems), the fourth and the fifth the planet’s orbital semimajor axis $a_{\mathrm{pl}}$ and eccentricity $e_{\mathrm{pl}}$, the sixth the tidal model, and the last the number of runs.

sim ID | $n_{*}$ (pc-3) | $f_{\mathrm{bin}}$ | $a_{\mathrm{pl}}$ (au) | $e_{\mathrm{pl}}$ | tidal model | $\\#_{\mathrm{run}}$
---|---|---|---|---|---|---
Jupiter | 50 | 0.5 | 5 | 0 | normal | 3000
JupEnT | 50 | 0.5 | 5 | 0 | enhanced | 3000
Nominal | 50 | 0.5 | 1-10 | 0-0.95 | normal | 30000
LowDen | 10 | 0.5 | 1-10 | 0-0.95 | normal | 3000
HighDen | 200 | 0.5 | 1-10 | 0-0.95 | normal | 3000
LowBin | 50 | 0.1 | 1-10 | 0-0.95 | normal | 3000
HighBin | 50 | 0.9 | 1-10 | 0-0.95 | normal | 3000
### 4.1 Example planet evolution
Figure 2 shows the formation of an example HJ from the Jupiter run. In the
plot, the grey regions represent ongoing stellar scattering. Before 400 Myr,
the system experiences only one scattering as without a companion, the system
is only 5 au wide so a scatterer has to come with a very small impact
parameter according to Equation (LABEL:eq-b-max) to be potentially important
but these are rare. This scattering event does not lead to appreciable changes
in the planet’s $r_{\mathrm{peri,pl}}$ (red, bottom panel, left ordinate) or
$a_{\mathrm{pl}}$ (blue, bottom panel, left ordinate). Another scattering with
a stellar binary occurs at 420 Myr where the planetary system acquires a
companion with pericentre distance $r_{\mathrm{peri,com}}=170$ au (red, top
panel, left ordinate) and $a_{\mathrm{com}}=850$ au (blue, top panel, left
ordinate); and the planet’s inclination with respect to the companion’s
orbital plane is $i_{\mathrm{pl,com}}=57^{\circ}$ (purple, bottom panel, right
ordinate). Now ZKL cycles are activated in the planet’s orbit shown as the
phase-correlated oscillations in $r_{\mathrm{peri,pl}}$ and
$i_{\mathrm{pl,com}}$. The purple line in the top panel shows the planet’s
normalised vertical orbital angular momentum
$h_{\mathrm{z}}=\sqrt{1-e^{2}_{\mathrm{pl}}}\cos i_{\mathrm{pl,com}}$ relative
to the companion’s orbital plane (plotted against the right $y$-axis), which
is a conserved quantity in the lowest order ZKL theory. As expected, $h_{\mathrm{z}}$ is
quasi-constant, at least before the next stellar scattering.
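The quasi-constancy of $h_{\mathrm{z}}$ can be checked against the standard ZKL result that, in the test-particle quadrupole limit, an initially circular orbit with inclination $i_{0}$ reaches a maximum eccentricity $e_{\mathrm{max}}=\sqrt{1-(5/3)\cos^{2}i_{0}}$, at which point the inclination has dropped to $\arccos\sqrt{3/5}\approx 39.2^{\circ}$. A sketch (the function names are ours; this textbook limit is an approximation to the full dynamics simulated here):

```python
import math

def h_z(e_pl, i_deg):
    """Normalised vertical angular momentum sqrt(1-e^2)*cos(i),
    conserved in lowest-order (quadrupole) ZKL theory."""
    return math.sqrt(1.0 - e_pl ** 2) * math.cos(math.radians(i_deg))

def e_max_quadrupole(i0_deg):
    """Maximum eccentricity reached from an initially circular orbit
    (test-particle quadrupole limit)."""
    c = math.cos(math.radians(i0_deg))
    return math.sqrt(max(0.0, 1.0 - 5.0 / 3.0 * c * c))
```

For the $i_{\mathrm{pl,com}}=57^{\circ}$ geometry of this example, $e_{\mathrm{max}}\approx 0.71$; evaluating $h_{\mathrm{z}}$ at $(e=0,\,57^{\circ})$ and at $(e_{\mathrm{max}},\,39.2^{\circ})$ gives the same value, as conservation requires.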
The wide orbit of the companion means that more distant scatterings need to be
taken into account, indicated by the increase in the number of the grey
regions after the acquisition of the companion. During each of these (distant)
scatterings, the planet’s orbit is not affected but the companion’s
$r_{\mathrm{peri,com}}$ and $a_{\mathrm{com}}$ as well as its inclination
$i_{\mathrm{pl,com}}$ can be instantly altered. This also changes
$h_{\mathrm{z}}$ (because the reference plane changes) so after each
scattering, the planet’s orbital elements evolve with a new pattern and
$r_{\mathrm{peri,pl}}$ and $i_{\mathrm{pl,com}}$ reach different extrema.
Finally, during the scattering at 509 Myr, $a_{\mathrm{com}}$ becomes 380 au,
and $i_{\mathrm{pl,com}}$ reaches almost $90^{\circ}$. Immediately after this
encounter, $r_{\mathrm{peri,pl}}$ is driven to $<0.01$ au. Then tidal effects
quickly shrink the orbit to completely within 1 au during the first minimum of
$r_{\mathrm{peri,pl}}$ so an HJ has formed and the simulation is stopped. This
type of outcome is called HJ_ZKL (formation of HJ by a companion star). It is
common (60 % of all the HJ_ZKL cases) for a companion star to experience
stellar scatterings before it leads to the formation of an HJ.
Is the high-$e$ migration process enhanced by these scatterings? To answer
this question we perform a simple test. For each system of the outcome HJ_ZKL,
we have taken a snapshot of it at the moment the system acquires the companion
star that later leads to the HJ formation. From this snapshot, the system is
propagated with the companion star in isolation without any scattering until 1
Gyr. It turns out that an HJ only forms in 40 % of these simulations. This
suggests that the scatterings between the companion star and other stars in
the cluster have boosted the HJ formation.
Figure 2: Formation of an HJ via HJ_ZKL. The bottom panel shows the time
evolution of the planet’s $r_{\mathrm{peri,pl}}$ (red) and $a_{\mathrm{pl}}$
(blue) in the left $y$-axis and $i_{\mathrm{pl,com}}$ (purple) in the right
$y$; the top panel shows the companion’s $r_{\mathrm{peri,com}}$ (red) and
$a_{\mathrm{com}}$ (blue) in left $y$-axis and the planet’s $h_{\mathrm{z}}$
(purple) in right $y$. The shaded regions represent ongoing stellar
scattering; quantities related to the companion’s orbit are not shown when the
scattering is strong, as the orbit may then not be well defined.
Not all planets where ZKL cycles of appreciable amplitudes are enabled turn
into HJs. For the vast majority of such planet orbits, the maximum
$e_{\mathrm{pl}}$ is simply not large enough: $r_{\mathrm{peri,pl}}$ stays
above 0.022 au and efficient tidal dissipation is never activated. For another
substantial fraction, the planet is driven into the central host by the
companion: the tide- and GR-induced orbital precession is outpaced by that of
the ZKL effect, so the latter goes untamed. The top panel
of Figure 3 shows such an example. At 494 Myr into the simulation, the
planetary system obtains a companion star of $a_{\mathrm{com}}=700$ au and
$e_{\mathrm{com}}=0.92$ and $i_{\mathrm{pl,com}}=30^{\circ}$. Subsequently
around 508 Myr, a scattering changes the companion orbit to
$a_{\mathrm{com}}=720$ au, $e_{\mathrm{com}}=0.96$ and
$i_{\mathrm{pl,com}}=67^{\circ}$. Now the ZKL cycles are greatly amplified and
after several cycles, noticeable higher-order effects manifest by driving down
the extreme $r_{\mathrm{peri,pl}}$ in each successive ZKL cycle (e.g., Naoz et
al., 2011). Then at 501 Myr, when $r_{\mathrm{peri,pl}}$ reaches 0.01 au,
$a_{\mathrm{pl}}$ drops by $\sim$ 10% by tides. During the subsequent dip of
$r_{\mathrm{peri,pl}}$, the planet dives into the central host directly before
tides are able to do anything. We note that the planet may be tidally
disrupted by the star en route to a collision. But the tidal disruption limit
for Jupiter around the Sun is about a few solar radii so we do not detect
tidal disruption and generally call those collisions. The outcome of the
collision with the central star as a result of the ZKL effect by the companion
is referred to as COL_ZKL.
Figure 3: Time evolution of a planet that ends up in the fates COL_ZKL and
HJ_ZKL with different tidal $Q$. The left ordinate marks the planet’s
$r_{\mathrm{peri,pl}}$ (red) and $a_{\mathrm{pl}}$ (blue) and the right
ordinate $i_{\mathrm{pl,com}}$ (purple). The top panel shows the case of
COL_ZKL where the planet’s $Q_{\mathrm{tide,pl,0}}=5\times 10^{6}$ and the
bottom panel of HJ_ZKL where $Q_{\mathrm{tide,pl,0}}=5\times 10^{5}$; all the
other parameters are the same in the two.
### 4.2 Statistics
We count the number of planets that have different fates and show their
percentages in Table 2. The details of our simulations are presented in
Section 2.2. The second column shows the percentage of the outcome HJ_ZKL, the
third column HJ_SCAT (an HJ forms where the small $r_{\mathrm{peri,pl}}$ is
established during the scattering, but not forced by a bound companion star),
the fourth column COL_ZKL, and the fifth column EJEC (Jupiter turns into a
free floating planet without a host star).
Table 2: Percentage of planets with different fates. The first column shows the ID of the simulation set; from the second to the fifth, those for HJ_ZKL (formation of HJ via the ZKL mechanism by a companion), HJ_SCAT (formation of HJ where the small pericentre distance is achieved directly during the scattering), COL_ZKL (collision forced by the ZKL mechanism by a companion), and EJEC (ejection) are shown. The errors are 1-$\sigma$ dispersion from random resampling. The nominal set has 30000 runs while the others have 3000 for each. In Section 5.4, the sum of HJ_ZKL and COL_ZKL will be also referred to as ZCT_ZKL.

sim ID | HJ_ZKL | HJ_SCAT | COL_ZKL | EJEC
---|---|---|---|---
Jupiter | $2.43_{-0.26}^{+0.30}$ | $0$ | $5.80_{-0.41}^{+0.47}$ | $23.6_{-0.7}^{+0.8}$
JupEnT | $4.27_{-0.40}^{+0.30}$ | $0.0667_{-0.0667}^{+0.0333}$ | $4.17_{-0.37}^{+0.33}$ | $23.4_{-0.8}^{+0.7}$
Nominal | $2.43_{-0.09}^{+0.09}$ | $0.127_{-0.017}^{+0.023}$ | $3.64_{-0.11}^{+0.13}$ | $20.1_{-0.2}^{+0.3}$
LowDen | $0.667_{-0.133}^{+0.133}$ | $0$ | $1.17_{-0.23}^{+0.17}$ | $4.47_{-0.37}^{+0.37}$
HighDen | $4.74_{-0.46}^{+0.46}$ | $0.0641_{-0.0641}^{0.0641}$ | $7.37_{-0.64}^{+0.71}$ | $52.4_{-1.2}^{+1.0}$
LowBin | $0.667_{-0.133}^{+0.133}$ | 0 | $0.967_{-0.167}^{+0.200}$ | $8.87_{-0.50}^{+0.43}$
HighBin | $3.77_{-0.37}^{+0.34}$ | $0.100_{-0.067}^{+0.039}$ | $6.23_{-0.37}^{+0.50}$ | $29.6_{-0.8}^{+0.7}$
Table 2 shows that about 24% of the planets are ejected for both the Jupiter
and the JupEnT runs. This can be compared to the simple predictions from
Equation (1). That equation, when integrated over time, prescribes the chance
that an event happens for a planetary system, if knowing the respective cross
section $\sigma$. A number of authors have estimated that for EJEC under
different assumptions (e.g., Laughlin & Adams, 1998; Adams et al., 2006; Li &
Adams, 2015; Wang et al., 2020a); here we take the value from Li et al.
(2020b) where the setup was the most similar to this work. From there,
$\sigma_{\mathrm{EJEC}}=9.7\times 10^{4}$ au2 implies a percentage of 12% for
Jupiter’s ejection (EJEC) in 1 Gyr. So the two differ by a factor of two. Li
et al. (2020b) also measured the $\sigma$ for the Sun-Jupiter pair to acquire
a companion star and the inference was that almost all are expected to have a
companion within 1 Gyr. Here we find that 46% of the planetary systems in the
Jupiter run obtain at least one companion at some point in the simulation.
Therefore, the percentages in this work agree with the expectations reasonably
well.
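This comparison can be reproduced by integrating Equation (1) over time; a minimal sketch (the function name is ours, and we treat the encounters as a Poisson process, so $P=1-\exp(-\Gamma t)$):

```python
import math

AU_PER_PC = 206265.0  # au per parsec

def event_probability(sigma_au2, n_star_pc3, t_myr, v_inf_kms=1.0):
    """Probability that an event with cross section sigma occurs
    within t_myr, from the rate Gamma = n * sigma * v (Equation 1);
    1 km/s ~ 1.0227 pc/Myr."""
    sigma_pc2 = sigma_au2 / AU_PER_PC ** 2
    rate_per_myr = n_star_pc3 * sigma_pc2 * 1.0227 * v_inf_kms
    return 1.0 - math.exp(-rate_per_myr * t_myr)
```

With $\sigma_{\mathrm{EJEC}}=9.7\times 10^{4}$ au2, $n_{*}=50$ pc-3, and $t=1$ Gyr, this gives about 0.11, close to the 12% quoted above.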
Considering the HJs in the Jupiter run, the percentage of HJ_ZKL is 2.4% and
that of HJ_SCAT is a hundred times smaller. So the formation of an HJ as a direct
result of a scattering is extremely rare (Hamers & Tremaine, 2017). Compared
to HJ_ZKL, a significantly larger proportion, 5.8% end up as COL_ZKL, meaning
that in many cases, the ZKL cycle is not quenched by GR or tides. The temporal
evolution of the percentages will be deferred to Section 5.4 where we derive
their time dependence.
In our treatment of tides, Equation (3) prescribes how $Q_{\mathrm{tide}}$
varies depending on the planet orbit (Beauge & Nesvorny, 2012) and is a fit to
the model of Ivanov & Papaloizou (2011). Taking our nominal simulation as an
example, the minimum possible planetary $Q_{\mathrm{tide}}$, corresponding to
the most efficient tidal damping, is achieved when the planet is just touching
the surface of the central host and is about 2000 for $a_{\mathrm{pl}}=5$ au.
However, works looking into different modes have suggested that for extremely
eccentric orbits, the equivalent $Q$ can be much smaller, possibly reaching a
few tens or even a few (e.g., Wu, 2018; Yu et al., 2022). With a more
efficient tidal model, planets of the fate COL_ZKL may end up HJ_ZKL.
Figure 3 shows such an example. The initial conditions of the planetary system
as well as the sequences of the stellar scatterings are exactly the same for
the two panels. As we discussed earlier, when $Q_{\mathrm{tide,pl,0}}=5\times
10^{6}$ (top panel), tidal dissipation is not fast enough and the ZKL effect
goes unsuppressed and forces the planet onto the star (COL_ZKL). When
$Q_{\mathrm{tide,pl,0}}=5\times 10^{5}$ (bottom panel), tides efficiently
shrinks the planet’s orbit, detaches it from the companion star, and hence
stops further eccentricity excitation by the ZKL mechanism and an HJ forms
(HJ_ZKL).
As Table 2 shows, for the JupEnT run, the percentages for HJ_ZKL and COL_ZKL
are almost the same, both about 4.2 %, so the creation of HJ_ZKL is boosted by
70 %. But the sum of HJ_ZKL and COL_ZKL is 8.3 %, which is in excellent
agreement with the Jupiter run, a phenomenon seen also in Petrovich (2015);
Anderson et al. (2016); Muñoz et al. (2016). In the JupEnT set, the
percentages of HJ_SCAT and EJEC are not affected by enhanced tides, both only
related to the scattering process.
Additionally, about 1.5% of the planets collide with their host star during
the scattering and 1.2% acquire orbits bound to the scatterer. We have omitted
discussion on these two states as they will not affect the creation of HJs.
But we note that both percentages are roughly a tenth of that of EJEC,
consistent with the ratios of their respective cross sections as derived in Li
et al. (2020b).
## 5 Population synthesis
In the previous section, we have shown with concrete examples that HJs may
form via high-$e$ migration initiated by a companion star that the planetary
host star acquires during a binary–single scattering in a stellar cluster. In
this section, we perform sets of population synthesis simulations and explore
the dependence of the efficiency of this mechanism on the properties of the
cluster and the planetary system.
### 5.1 Simulation parameters
We fix the central host to be the Sun and the planet’s physical parameters to
be those of Jupiter. For all the runs, the tidal model is the normal one
(Equation (3)) and no enhancement is applied. The planet’s orbital distribution as we
detail below is also the same for all following runs.
We take the orbital parameters from the observed population. The distribution
of the planet’s orbital period $P$ follows a broken power law as derived in
Fernandes et al. (2019) for radial velocity planets
$\mathrm{PDF}(P)\propto\begin{cases}\left({P/P_{\mathrm{b}}}\right)^{p_{1}}&\mathrm{if}\ P\leq P_{\mathrm{b}}\\ \left({P/P_{\mathrm{b}}}\right)^{p_{2}}&\mathrm{if}\ P>P_{\mathrm{b}}.\end{cases}$ (8)
Here $P_{\mathrm{b}}=2075$ d, $p_{1}=0.7$ and $p_{2}=-1.2$. The inner boundary
is 1 au because, for closer-in planets, the ZKL timescales for the typical
companion orbits from binary–single exchange are longer than those of GR/tides
(see Figure 1), so $e_{\mathrm{pl}}$ cannot be excited to values high enough to
initiate efficient tidal damping. The outer boundary is somewhat arbitrary and
we simply set it to 10 au. The observed population of wide-orbit ($>$ 10 au)
exoplanets is sparse and the error bars on their distribution are large (e.g.,
Nielsen et al., 2019; Wagner et al., 2022). The grey histogram in Figure 4
shows the initial orbital distribution of the planet.
Figure 4: The initial orbital distribution of the planets and their fates for
the Nominal setup. The grey histogram in the big panel shows the planets’
distribution in the $a_{\mathrm{pl}}-e_{\mathrm{pl}}$ plane, darker colours
meaning more planets, as shown in the colour bar above. In that panel, the
scattered points show those that have the fates HJ_ZKL (red), COL_ZKL (blue),
and HJ_SCAT (purple). The bottom and the right panels show the percentage of
planets with those fates as a function of the initial orbit; the error bars
are 1-$\sigma$ dispersions from a bootstrapping process; the points are slightly
shifted for better presentation.
Our eccentricity distribution follows the Beta distribution proposed by
Kipping (2013) for radial velocity planets (random number generators by
Richard Chandler and Paul Northrop have been used:
https://www.ucl.ac.uk/~ucakarc/work/randgen.html). An upper limit of
$e_{\mathrm{pl}}=0.95$ is set, as this coincides roughly with the highest
observed eccentricity among the radial velocity planets (e.g., HD 20782 b,
though in a very wide binary system; see Jones et al., 2006) and also ensures
that the initial tidal effect is negligible even for $a_{\mathrm{pl}}=1$ au.
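The initial-orbit sampling described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the period bounds converting 1-10 au to days around a solar-mass host are our own, and the Beta shape parameters (0.867, 3.03) are the Kipping (2013) radial-velocity fit, which we assume is the one used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Broken power law for the orbital period, Eq. (8); values from the text.
P_b, p1, p2 = 2075.0, 0.7, -1.2  # break period [d] and the two exponents

def sample_period(n, P_min=365.25, P_max=11550.0):
    """Rejection-sample P (days) from Eq. (8).

    The default bounds correspond roughly to 1-10 au around a solar-mass
    star (an assumption for this sketch).
    """
    samples = []
    while len(samples) < n:
        P = rng.uniform(P_min, P_max, 4 * n)
        # PDF normalised so that it equals 1 at the break P_b.
        pdf = np.where(P <= P_b, (P / P_b) ** p1, (P / P_b) ** p2)
        keep = P[rng.uniform(0.0, 1.0, P.size) < pdf]
        samples.extend(keep.tolist())
    return np.array(samples[:n])

def sample_ecc(n):
    """Beta-distributed eccentricities, truncated at 0.95 as in the text."""
    e = rng.beta(0.867, 3.03, 3 * n)  # Kipping (2013) RV-planet fit (assumed)
    e = e[e <= 0.95]
    return e[:n]

P = sample_period(10_000)
e = sample_ecc(10_000)
```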
In one set of the runs, the cluster parameters are the same as the Jupiter
run, i.e., $n_{*}=50$ pc$^{-3}$ and $f_{\mathrm{bin}}=0.5$. This forms our main
simulation set and is called the “Nominal” set. A total of 30000 runs are done
for this set.
Four additional sets of simulations with different cluster properties are
performed. In the sets “LowDen” and “HighDen”, $n_{*}=10$ and 200 pc$^{-3}$,
respectively, both with $f_{\mathrm{bin}}=0.5$. In the sets “LowBin” and
“HighBin”, $f_{\mathrm{bin}}=0.1$ and 0.9, respectively, both with $n_{*}=50$ pc$^{-3}$.
For those, 3000 runs are done for each. These parameters are listed in Table
1.
### 5.2 Results of the Nominal simulation set
We first analyse the results of the Nominal set. Table 2 shows that the
percentage of HJ_ZKL is 2.4%, that of HJ_SCAT 0.13%, that of COL_ZKL 3.6%, and
that of EJEC 20%. In comparison to the Jupiter run, the change is mild:
the creation of HJ_ZKL has the same efficiency and that of EJEC decreases by
less than 20%; HJ_SCAT is enhanced by a factor of a few but its contribution
to the formation of HJs is in any case less efficient by a factor of at least 20
compared to HJ_ZKL. For COL_ZKL, there is a 40% boost (at the $\sim 5\sigma$
level) in the Nominal run compared to the Jupiter run.
How does the planet’s initial orbit affect its fate? The large panel of Figure
4 displays as scattered points the initial orbital distribution of the planets
in the final states HJ_ZKL (red), COL_ZKL (blue), and HJ_SCAT (purple). The
bottom and the right panels of that figure present the percentage of planets
with the three fates as a function of $a_{\mathrm{pl}}$ and $e_{\mathrm{pl}}$.
The figure suggests that HJ_ZKL does not depend on the initial
$e_{\mathrm{pl}}$ and seems to show a weak negative dependence on
$a_{\mathrm{pl}}$ (e.g., Muñoz et al., 2016). The planet’s $a_{\mathrm{pl}}$
affects the planet’s evolution in many aspects. Figure 1 shows that for a
fixed $a_{\mathrm{com}}$, a larger $a_{\mathrm{pl}}$ means a smaller ZKL
timescale, facilitating HJ_ZKL. But this can turn into an adverse
effect if the ZKL oscillations go untamed (by GR/tides) so that the planet
collides with the central host. On the other hand, efficient tidal dissipation
has to be activated so that the planet’s orbit can shrink. From our tidal model,
this means $e_{\mathrm{pl}}>1-0.022\,\mathrm{au}/a_{\mathrm{pl}}$: the
larger the $a_{\mathrm{pl}}$, the higher the $e_{\mathrm{pl}}$ needed, which
works against HJ_ZKL for a larger $a_{\mathrm{pl}}$. Moreover, embedded
in a cluster, the constant stellar scatterings may alter the companion star’s
orbit and therefore interrupt the ZKL cycle. Overall, HJ_ZKL shows a weak
negative dependence on $a_{\mathrm{pl}}$ while for COL_ZKL, a clearer positive
dependence is seen.
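The tidal-activation threshold used above, $e_{\mathrm{pl}}>1-0.022\,\mathrm{au}/a_{\mathrm{pl}}$, is just the pericentre condition $r_{\mathrm{peri,pl}}=a_{\mathrm{pl}}(1-e_{\mathrm{pl}})<0.022$ au rearranged; a one-line sketch:

```python
def e_required(a_pl_au, r_tide_au=0.022):
    """Minimum e_pl for efficient tidal dissipation, i.e. r_peri < r_tide."""
    return 1.0 - r_tide_au / a_pl_au

for a in (1.0, 5.0, 10.0):
    print(f"a_pl = {a:5.1f} au  ->  e_pl > {e_required(a):.4f}")
# a_pl = 5 au gives e_pl > 0.9956, i.e. the 0.996 quoted in the text
```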
Similarly, the effect of the initial $e_{\mathrm{pl}}$ on HJ_ZKL is weak. This
is seemingly counter-intuitive since a larger initial $e_{\mathrm{pl}}$
reduces the requirement on $i_{\mathrm{pl,com}}$ to excite $e_{\mathrm{pl}}$
to the same level (e.g., Li et al., 2014). Taking $a_{\mathrm{pl}}=5$ au as an
example, $e_{\mathrm{pl}}$ has to reach 0.996 to enable tidal dissipation (so
that $r_{\mathrm{peri,pl}}=0.022$ au). Using the leading-order ZKL theory, we have
performed a simple Monte Carlo simulation fixing the initial $e_{\mathrm{pl}}$,
randomly drawing $i_{\mathrm{pl,com}}$ and the phase angles, and calculating the
fraction of orbits that can achieve a maximum $e_{\mathrm{pl}}$ of at least 0.996.
We find that this fraction depends on the initial $e_{\mathrm{pl}}$ only
mildly: an increase of the initial $e_{\mathrm{pl}}$ from $\sim 0$ to $\sim
0.9$ only boosts the fraction by 100%. But planets with initial
$e_{\mathrm{pl}}\gtrsim 0.9$ are rare in our simulations. This seems at odds
with Mustill et al. (2022a). In explaining the observed high eccentricity of
HR5183b, Mustill et al. (2022a) found that a high initial $e_{\mathrm{pl}}$ (which
might be caused by planet–planet scattering) would enhance the chance that a
companion excites $e_{\mathrm{pl}}$ to the observed value. In that work, the
authors were examining the fraction of time that $e_{\mathrm{pl}}$ is higher
than a certain value whereas here it is the maximum $e_{\mathrm{pl}}$ ever
attained that matters.
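The weak dependence can be illustrated in an even simpler limit. In the quadrupole, test-particle approximation with an initially near-circular orbit, $e_{\mathrm{max}}=\sqrt{1-(5/3)\cos^{2}i}$, so the fraction of isotropic inclinations reaching $e_{\mathrm{max}}\geq 0.996$ is easy to estimate. This is a deliberate simplification of the Monte Carlo described above (which also varies the initial eccentricity and phase angles), shown only to convey the order of magnitude:

```python
import numpy as np

rng = np.random.default_rng(7)

def frac_reaching(e_req, n=400_000):
    """Fraction of isotropic orientations with e_max >= e_req.

    Quadrupole, test-particle ZKL with initial e ~ 0:
    e_max^2 = 1 - (5/3) cos^2(i_pl,com).
    """
    cos_i = rng.uniform(-1.0, 1.0, n)  # isotropic: cos(i) uniform on [-1, 1]
    e_max_sq = 1.0 - (5.0 / 3.0) * cos_i ** 2
    return float(np.mean(e_max_sq >= e_req ** 2))

print(frac_reaching(0.996))  # ~0.07: only ~7% of orientations suffice
```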
We would like to assess what kind of scattering binaries help create HJ_ZKL
and how the properties of the companions are distributed for these systems.
But this may not be as straightforward as it seems. As Figure 2 shows, the
orbit of the planetary system’s companion is subject to further alteration by
stellar scattering. In that example, the companion star remains bound
to the planetary system, so the binary scatterer that this companion originally
belonged to is the one contributing directly to HJ_ZKL. But matters can be more
complicated: we have registered cases in our simulations where the
planetary system obtains a companion star after scattering with a binary, yet
an HJ does not form; after interactions with other scatterers (single or
binary), the original companion is swapped for another star, and this new
companion triggers the process of HJ_ZKL. In the latter case, if the companion
at the time of the formation of the HJ comes from a binary scatterer, we record
the orbit of that binary; if not, we trace back to see whether the predecessor
of the companion is from a binary, and so on. The top left panel of
Figure 5 shows the semimajor axis of the scattering binary $a_{\mathrm{bin}}$
as a function of the mass $m_{\mathrm{com}}$ of the companion of the planetary
system. The top right and the bottom panels show the histogram of
$a_{\mathrm{bin}}$ and $m_{\mathrm{com}}$, respectively. The mass
$m_{\mathrm{com}}$ covers the full range of our initial mass function with a
median of 0.42 M⊙ while $a_{\mathrm{bin}}$ is broadly distributed from tens to
hundreds of au with a median of 110 au. The middle left panel of the figure
shows the companion’s $a_{\mathrm{com}}$ as a function of $m_{\mathrm{com}}$
and the middle right the histogram of $a_{\mathrm{com}}$. Compared to the
broad distribution of $a_{\mathrm{bin}}$, $a_{\mathrm{com}}$ is more centred
around its median of 220 au.
Figure 5: The distribution of the scattering binary’s $a_{\mathrm{bin}}$
leading to the outcome of HJ_ZKL and the planetary system’s companion’s
$a_{\mathrm{com}}$ as a function of the companion mass $m_{\mathrm{com}}$ in
the nominal set. From top to bottom, the three histograms show the
distribution of $a_{\mathrm{bin}}$, $a_{\mathrm{com}}$, and
$m_{\mathrm{com}}$.
Many workers have carried out population synthesis studies on HJ formation via
the ZKL mechanism (Naoz et al., 2012; Petrovich, 2015; Anderson et al., 2016;
Muñoz et al., 2016; Vick et al., 2019). Due to the inherently different
assumptions (like the usage of secular/full equations of motion, the tidal
model, and the cluster environment), our HJ formation rate HJ_ZKL cannot be
compared directly with theirs. But many common characteristics are observed:
e.g., the general HJ_ZKL rate of a few per cent and the invariability of the
sum of the rates of COL_ZKL and HJ_ZKL under different tidal efficiencies (see
Table 2 and Petrovich, 2015; Anderson et al., 2016; Muñoz et al., 2016). And
the cluster environment also introduces new features. For instance, the
preferred companion separation for HJ_ZKL here under stellar scatterings (a
few hundreds of au) is appreciably smaller than if the systems are in
isolation (wider than several hundreds of au; Naoz et al., 2012; Petrovich,
2015), as a result of the disruption of the companion orbit.
### 5.3 Results of the other runs
The percentages of different fates for the other simulation sets are presented
in Table 2.
First, what role does the stellar number density $n_{*}$ play? By comparing
the sets LowDen and HighDen with the Nominal simulation, we observe that
decreasing/increasing $n_{*}$ has the effect of suppressing/boosting the
percentage of HJ_ZKL. Obviously, a lower/higher $n_{*}$ implies a lower/higher
scattering rate, which diminishes/enhances the chance that a planetary system
acquires a companion star and therefore the probability of HJ_ZKL. Also, a
lower/higher $n_{*}$ implies a longer/shorter lifetime of the so-acquired
companion star, allowing the ZKL effect more/less time to operate. According
to Figure 1, even for the HighDen run, for any $a_{\mathrm{com}}\lesssim$ 1000
au, the ZKL timescale is much shorter than the companion lifetime. Therefore,
the constraint from the companion star’s survivability is weak (but higher
order effects, not reflected in that figure, can operate on much longer
timescales; see e.g., Ford et al., 2000; Naoz et al., 2011; Antognini, 2015)
and as a consequence, for the parameter range considered here, a higher
$n_{*}$ means a higher percentage of HJ_ZKL. But the dependence may not be
linear. Comparing the Nominal with the HighDen run, an increase of $n_{*}$
from 50 to 200 pc$^{-3}$ only increases the percentage of HJ_ZKL by a factor of
two, the main reason being that 1.5 times more planets are ejected in the
denser environment so the reservoir for HJ_ZKL is significantly smaller. A
comparison between the Nominal and the LowDen runs shows that decreasing
$n_{*}$ by 80% leads to a drop in the percentage of HJ_ZKL by more than 70%,
so the linearity towards smaller $n_{*}$ is more pronounced.
Much like the influence of $n_{*}$, $f_{\mathrm{bin}}$ affects a planetary
system in two ways: increasing the prospect of the acquisition of a companion
star and decreasing the lifetime of the companion. It turns out that a higher
$f_{\mathrm{bin}}$ (HighBin) gives rise to a higher percentage of HJ_ZKL and
vice versa (LowBin). We note that in the LowBin run, the effective density for
binaries ($n_{\mathrm{bin}}=n_{*}f_{\mathrm{bin}}$) is 50 pc${}^{-3}\times
0.1=5$ pc$^{-3}$, coincident with that in the LowDen run (10 pc${}^{-3}\times 0.5=5$
pc$^{-3}$), and the percentages of HJ_ZKL are in excellent agreement in the two sets
of simulations. This suggests that (when EJEC is not overwhelming) the
percentage of HJ_ZKL depends on the binary spatial density $n_{\mathrm{bin}}$
of the cluster only.
Unsurprisingly, in all simulations, the percentage of HJ_SCAT is
smaller than that of HJ_ZKL by at least an order of magnitude, so we omit
discussion of the former. And for all these simulation sets, the ratio of the
percentages of COL_ZKL and HJ_ZKL is roughly constant at $\sim 1.5$, consistent
with the expectation that both are results of the ZKL mechanism and depend on
the cluster properties in similar ways. Therefore, we may broadly refer to both
COL_ZKL and HJ_ZKL as ACT_ZKL, meaning that extreme ZKL cycles are activated
and the planet either turns into an HJ or plummets into the central host,
i.e., ACT_ZKL=COL_ZKL+HJ_ZKL.
### 5.4 Empirical dependences on cluster parameters
In general, the rate that an event happens can be estimated with Equation (1)
by plugging in the appropriate $\sigma$. In calibrating $\sigma$, previous
works have often separated the effects of binary and single stars (Adams et
al., 2006; Li & Adams, 2015; Li et al., 2020b). Here we follow the same
approach.
In our scenario, ACT_ZKL is only affected by the binaries; single stars
cannot contribute. So the rate of ACT_ZKL can be approximated by
$A_{\mathrm{z}}{n_{\mathrm{bin}}\over 1\,\mathrm{pc}^{-3}}$, where
$A_{\mathrm{z}}$ is a constant to be determined.
Apparently, ACT_ZKL may only occur for a planet that is still revolving around
the host star (excluding those that have already turned into HJs). The size of
this reservoir is declining because of ACT_ZKL itself, ejection, and capture
and collision during the scattering (the latter two are minor and are not
discussed in detail in this work). Suppose the rate of all these effects
combined is $A_{\mathrm{r}}{n_{\mathrm{bin}}\over
1\,\mathrm{pc}^{-3}}+B_{\mathrm{r}}{n_{\mathrm{sin}}\over
1\,\mathrm{pc}^{-3}}$ (where $A_{\mathrm{r}}$ and $B_{\mathrm{r}}$ are
constants). Then the fraction of the reservoir remaining at time $t$
compared to its initial size is
$e^{-(A_{\mathrm{r}}{n_{\mathrm{bin}}\over
1\,\mathrm{pc}^{-3}}+B_{\mathrm{r}}{n_{\mathrm{sin}}\over
1\,\mathrm{pc}^{-3}}){t\over 1\,\mathrm{Myr}}}\times 100\%,$ (9)
where $t$ is the current time. Therefore, the rate of ACT_ZKL at $t$ is
${\mathrm{d}\,\mathrm{ACT\\_ZKL}\over\mathrm{d}t}=e^{-(A_{\mathrm{r}}{n_{\mathrm{bin}}\over 1\,\mathrm{pc}^{-3}}+B_{\mathrm{r}}{n_{\mathrm{sin}}\over 1\,\mathrm{pc}^{-3}}){t\over 1\,\mathrm{Myr}}}A_{\mathrm{z}}{n_{\mathrm{bin}}\over 1\,\mathrm{pc}^{-3}}.$
(10)
Integrating from time 0 to $t$, the percentage of ACT_ZKL as a function
of time is
$\displaystyle\mathrm{ACT\\_ZKL}=$
$\displaystyle{A_{\mathrm{z}}{n_{\mathrm{bin}}\over 1\,\mathrm{pc}^{-3}}\over
A_{\mathrm{r}}{n_{\mathrm{bin}}\over
1\,\mathrm{pc}^{-3}}+B_{\mathrm{r}}{n_{\mathrm{sin}}\over
1\,\mathrm{pc}^{-3}}}$ (11)
$\displaystyle\times(1-e^{-(A_{\mathrm{r}}{n_{\mathrm{bin}}\over
1\,\mathrm{pc}^{-3}}+B_{\mathrm{r}}{n_{\mathrm{sin}}\over
1\,\mathrm{pc}^{-3}}){t\over 1\,\mathrm{Myr}}})\times 100\%.$
The top panel of Figure 6 shows the time evolution of the percentage of
ACT_ZKL for all five population synthesis simulation sets. We have
fitted those curves using Equation (11) above; the fitting parameters are
$A_{\mathrm{z}}=(3.0\pm 0.05)\times 10^{-6}$, $A_{\mathrm{r}}=(1.6\pm
0.2)\times 10^{-5}$, and $B_{\mathrm{r}}=(6.3\pm 1)\times 10^{-6}$. The result
from the fit is also presented. The agreement is fairly good and the largest
deviation is within two sigma. While the percentage of ACT_ZKL
for the HighDen set is plateauing toward the end of the simulation, those for
the other sets are still steadily increasing.
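As a consistency check, Equation (11) can be evaluated directly with these fitted constants; this is our own sketch, using the central values of the fit:

```python
import math

# Fitted constants from the text (units: per Myr per pc^-3).
A_z, A_r, B_r = 3.0e-6, 1.6e-5, 6.3e-6

def act_zkl_pct(t_myr, n_bin, n_sin):
    """Percentage of ACT_ZKL after t_myr, Eq. (11)."""
    k = A_r * n_bin + B_r * n_sin          # reservoir depletion rate [1/Myr]
    limit = A_z * n_bin / k * 100.0        # late-time plateau [per cent]
    return limit * (1.0 - math.exp(-k * t_myr))

# Nominal set after 1 Gyr: n_* = 50 pc^-3, f_bin = 0.5 -> n_bin = n_sin = 25.
print(f"{act_zkl_pct(1000.0, 25.0, 25.0):.1f}%")    # 5.7%, cf. 2.4% + 3.6% in Table 2
# HighDen (n_* = 200): already close to its ~13.5% plateau after 1 Gyr.
print(f"{act_zkl_pct(1000.0, 100.0, 100.0):.1f}%")  # 12.0%
```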
Now we rewrite Equation (11) using $n_{*}$ and $f_{\mathrm{bin}}$ as
$\displaystyle\mathrm{ACT\\_ZKL}=$
$\displaystyle{A_{\mathrm{z}}f_{\mathrm{bin}}\over
A_{\mathrm{r}}f_{\mathrm{bin}}+B_{\mathrm{r}}(1-f_{\mathrm{bin}})}$ (12)
$\displaystyle\times(1-e^{-[A_{\mathrm{r}}f_{\mathrm{bin}}+B_{\mathrm{r}}(1-f_{\mathrm{bin}})]{n_{*}\over
1\,\mathrm{pc}^{-3}}{t\over 1\,\mathrm{Myr}}})\times 100\%.$
Equation (12) shows that, given enough time, the percentage of ACT_ZKL will
eventually reach an upper limit
$A_{\mathrm{z}}f_{\mathrm{bin}}/[A_{\mathrm{r}}f_{\mathrm{bin}}+B_{\mathrm{r}}(1-f_{\mathrm{bin}})]\times
100\%$, which is determined solely by $f_{\mathrm{bin}}$. As shown in the
bottom panel of Figure 6, this limiting value increases with
$f_{\mathrm{bin}}$, reaching 10% at $f_{\mathrm{bin}}=0.3$ and slowly
levelling off toward 18% at $f_{\mathrm{bin}}=1$.
The density $n_{*}$ prescribes how quickly the percentage of ACT_ZKL
approaches that limiting value. In the bottom panel of Figure 6, we plot the
percentage of ACT_ZKL as a function of $f_{\mathrm{bin}}$ at 100 Myr (black),
200 Myr (red), and 1 Gyr (blue) for $n_{*}=20$ pc$^{-3}$ (solid line) and 200 pc$^{-3}$
(dash-dotted line). For the lower density, the percentages at all times are
quasi-linearly dependent on $f_{\mathrm{bin}}$, but for the higher density, the
percentage of ACT_ZKL saturates toward the upper limit, implying that the
limit can be reached within a few Gyr for $n_{*}$ of a few hundred pc$^{-3}$.
Figure 6: Percentage of planets with the outcome ACT_ZKL, the sum of HJ_ZKL
and COL_ZKL. The top panel shows the time evolution of ACT_ZKL from different
runs (points) and the respective fits (line) in different colours. The bottom
panel shows the percentage of ACT_ZKL as a function of $f_{\mathrm{bin}}$ for
$n_{*}=20$ pc$^{-3}$ (solid line) and 200 pc$^{-3}$ (dash-dotted line) at 100 Myr (black),
200 Myr (red) and 1 Gyr (blue). The thick purple line is the upper limit of
the percentage for a given $f_{\mathrm{bin}}$.
Finally, we note that a fraction of those ACT_ZKL will indeed be HJ_ZKL while
the others will be COL_ZKL. The exact division depends on the details of the
tidal interaction; see the simulation sets Jupiter and JupEnT in Section 4.2
for a discussion. But the chances for COL_ZKL and HJ_ZKL are comparable.
## 6 Discussion
### 6.1 Observational implications
As reviewed in the Introduction, observations of planets
in clusters have been sparse, with only a few HJs detected so far. Here we
discuss only those found in dedicated surveys, not those found otherwise (e.g.,
Obermeier et al., 2016; Ciardi et al., 2017; Rizzuto et al., 2018; Livingston
et al., 2019).
In total, 3 HJs have been found around 160 stars in Praesepe (NGC 2632) and
the Hyades (Paulson et al., 2004; Quinn et al., 2012, 2014), so the HJ
occurrence rate is 2%. Both clusters are $\sim$ 600 Myr old and metal rich.
After correcting to solar metallicity, a rate of 1% was derived (Quinn et al.,
2014), consistent with that of the field (Wright et al., 2012).
Brucalassi et al. (2016) surveyed 66 stars in M67 (NGC 2682), which is of solar
metallicity and age. Three HJs were found, so the occurrence rate is 4.5%;
removing the 12 stars that are in binaries, the HJ occurrence rate around
single stars was 5.6%. These numbers are much higher than those of the field
(e.g., Wright et al., 2012). M67 has a high $f_{\mathrm{bin}}\sim 30-40\%$ on
average but could be as high as 70 % near the centre (Davenport & Sandquist,
2010; Geller et al., 2021). Being among the oldest open clusters, M67 is
highly evolved. From $N$-body simulations whose predictions are consistent
with the observations, the cluster has probably lost the majority of its total
mass through its lifetime, while $n_{*}$ at the core has remained largely
around 100 pc$^{-3}$ and $f_{\mathrm{bin}}$ has not evolved significantly either
(Hurley et al., 2005, 2007). Combined, this means that at the core (where
solar-mass stars sink to), $n_{\mathrm{bin}}$ is perhaps several tens to a
hundred pc$^{-3}$, within the optimal range for HJ_ZKL production from the
bottom panel of Figure 6. If, as in the field, the primordial giant planet
occurrence rate is 10-20% within a few au (e.g., Cumming et al., 2008) at the
core of M67, our mechanism would predict an HJ occurrence rate of 1-2%. But we
note these inferences are to be treated with caution; see Section 6.3 for a
brief discussion.
Curiously, the sample of Brucalassi et al. (2016) contained 12 stars with
companions, but no planet was detected around those. This may seem at
odds with our mechanism, in which a companion star is likely present when the
HJ forms. We note, however, that in their binaries, the change in radial
velocity is at least 1.7 $\mathrm{km\,s}^{-1}$ within a few hundred to a
thousand days (Pasquini et al., 2012). Take a binary of a solar mass on a
circular orbit of semimajor axis $a$ for example. The orbital velocity is
$v\sim{30\,\mathrm{km\,s}^{-1}/\sqrt{a}}$ and the angular velocity
$\omega\sim{2\pi\,\mathrm{yr}^{-1}/\sqrt{a^{3}}}$ (with $a$ in au). In an
edge-on configuration, if the angle between the orbital velocity and the line
of sight is $\theta$, the radial velocity is $v_{\mathrm{r}}=v\cos\theta$ and
its change after some time $T$ is $\delta v_{\mathrm{r}}=v\omega
T\sin\theta>1.7\,\mathrm{km\,s}^{-1}$ according to Pasquini et al. (2012).
Substituting the respective values, the binaries in Brucalassi et al. (2016)
have
$a<11\sqrt{T\over 1\,\mathrm{yr}}\sqrt{\sin\theta}\,\mathrm{au}\lesssim
20\,\mathrm{au}.$ (13)
In our mechanism, HJs tend to form with companions a few hundreds of au away
(Figure 5), not included in the sample of Brucalassi et al. (2016); moreover,
such companions may well be disrupted during the cluster evolution (e.g.,
Parker et al., 2009). Hence, it is no surprise that in the binary sample of
Brucalassi et al. (2016), no HJs were observed.
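The bound in Equation (13) follows directly from the numbers above; a quick numeric check, taking the longest baseline quoted in the text and $\sin\theta=1$ as illustrative values:

```python
import math

def a_max_au(T_yr, sin_theta=1.0):
    """Upper bound on the binary semimajor axis from Eq. (13)."""
    return 11.0 * math.sqrt(T_yr) * math.sqrt(sin_theta)

# Longest baseline quoted in the text: about a thousand days.
T_yr = 1000.0 / 365.25
print(f"a < {a_max_au(T_yr):.0f} au")  # a < 18 au, i.e. <~ 20 au as in Eq. (13)
```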
### 6.2 Comparison with other mechanisms
Several works have been dedicated to the formation of HJs in star clusters.
Shara et al. (2016) used fully-fledged $N$-body cluster simulations to address
this issue, propagating the evolution of massive $\sim 2\times 10^{4}$-member
clusters with a binarity of 0.1 for a few Gyr. Their derived HJ formation rate
is 0.4% per star (or 0.2% per planet), and they suggested that tripling the
binarity might increase the formation rate by 200%. Hamers & Tremaine (2017)
examined how (multiple) stellar scattering helps create HJs via high-$e$
migration in globular clusters. Unlike in open clusters, in these
densely-populated environments, ejection is likely to remove most of the
planets (e.g., Davies & Sigurdsson, 2001). After a careful search, Hamers &
Tremaine (2017) found that for an initial semimajor axis of a few au, the
favourable stellar density for making HJs is a few times $10^{4}$ pc$^{-3}$.
Wang et al. (2020b)
investigated the long-term evolution of a two-planet system after stellar
flybys, concluding that high-$e$ migration could be triggered by
the interplanetary ZKL mechanism and/or pure scattering, the former being more
efficient when the two orbits are wide apart. Rodet et al. (2021) looked at a similar
two-object scenario but concentrated on the case of wide-separation orbits.
More recently, Wang et al. (2022) found that a multi-planet system may gain
enough angular momentum deficit such that the system may become unstable
afterwards and one of the planets might become an HJ.
Due to the different assumptions, it is impossible to make a full comparison
between these works and ours. We just make some comments below.
Hamers & Tremaine (2017) considered a single-planet system and omitted binary
stars, so therein, the planet’s small pericentre distance can only be achieved
during the scattering, in some sense close to our case HJ_SCAT. From our
simulations, the rate for HJ_SCAT is $\lesssim 0.1\%$ for a typical open
cluster setup. This implies that for single-planet systems in open clusters,
our HJ_ZKL mechanism is the most efficient.
Shara et al. (2016); Wang et al. (2020b); Rodet et al. (2021); Wang et al.
(2022) have all considered multi-planet systems. In order for instability or
ZKL effects to occur within the planetary system, significant orbital angular
momentum must be extracted from the outermost object during the scattering, and
the closest distance between the scatterer and the planetary system must be
comparable to the size of the latter. Therefore, a “close scattering” for the
planetary system is needed, and the wider the planetary system, the more
efficiently their models work. In contrast, in our model, the scattering
occurs between a planetary and a stellar binary, and the former, as a whole,
exchanges with a component of the latter. Hence, the closest distance during
the scattering only needs to be comparable to the size of the stellar binary,
which is often much larger than that of the planetary system. In this sense, a
“close scattering” for the planetary system is not needed and the system never
experiences instantaneous orbital alterations during the scattering. Not
relying on a wide planetary system, our mechanism probably works better for
compact systems.
### 6.3 Caveats
In this work, we have tracked the evolution of a one-planet system in an open
cluster, simulating its scattering with single and binary stars using a simple
Monte Carlo approach. In order to study the effect of our proposed formation
mechanism for HJs, several potentially important factors have been omitted.
Open clusters, as the name suggests, slowly lose their mass owing to
member star ejection and stellar evolution (Lamers & Gieles, 2006). In the
meantime, cluster properties, like $f_{\mathrm{bin}}$ and $n_{*}$, may also
evolve considerably over many-Myr timescales (e.g., Kroupa, 1995). Moreover,
the parent cluster may be born with substructures, but these diffuse on
many-Myr timescales (e.g., Goodwin & Whitworth, 2004; Parker & Meyer, 2012;
Parker & Quanz, 2012). Section 5.4 suggests that the rate of ACT_ZKL
asymptotically approaches a value determined by the cluster’s
$f_{\mathrm{bin}}$ on a timescale typically of 1 Gyr. Therefore, the cluster’s
parameters used in this work are Gyr-averaged values.
Binary evolution has also been ignored. Wider binaries may be subject to
disruption owing to stellar scattering (e.g., Kroupa, 1995; Parker et al.,
2009). Figure 1 shows that those wider than a few hundreds of au would have
been disrupted after a few hundreds of Myr, so they cannot contribute to the
formation of HJs at later times. However, Figure 5 shows that about half of
the binary scatterers that lead to the formation of HJs via HJ_ZKL have
$a_{\mathrm{bin}}<100$ au and are largely immune from breakup. So the
disruption of wide binaries in the cluster will at most roughly halve the HJ
formation percentage we predict.
Stellar evolution is also omitted. Within a Gyr, a $\sim 2$ M⊙ star will
evolve off the main sequence, shedding a large fraction of its initial mass
(e.g., Hurley et al., 2000). If in a binary, this may cause the binary orbit
to expand or even disrupt the binary entirely (e.g., Veras et al., 2011).
Figure 5 shows that most of the companion stars (78%) that contribute to
HJ_ZKL are below 1 M⊙ and only 11% of the companions are above 2 M⊙. All such
binaries are wide, so when the massive companion evolves, the Roche lobe
will not be filled and the two stellar components evolve in isolation. Then, as
stellar mass is lost, the companion’s orbit expands, making the ZKL timescale
longer and the companion itself vulnerable to scattering disruption, and the
outcome of HJ_ZKL unlikely. (This is unlike the case where the planet host is
a massive star, whose mass loss may enhance the ZKL effect, e.g., Shappee &
Thompson, 2013; Stephan et al., 2021, and even lead to dynamical instability,
e.g., Kratter & Perets, 2012; Veras et al., 2017.) Removing those
stars, the percentage of HJ_ZKL would drop by a few tens of per cent.
Studies of planets in clusters are limited by the relatively small number of
stars in clusters compared to the field. Recently, Winter et al. (2020)
calculated the phase space density for field stars using the full stellar
kinematic information (position and velocity) and found the HJ occurrence rate
was higher for stars in overdensities (which arise largely as a result of
small relative velocities rather than spatial proximity). Further analyses
suggested that the multiplicity of a planetary system (Longmore et al., 2021),
the architecture of multi-planet systems (Chevance et al., 2021), and the
occurrence rates of some types of planets (Dai et al., 2021) also have to do
with the overdensities. If high phase space density were indeed to arise from a
high-density birth environment, this would be a powerful tool to study the
effects of birth environments on planetary system formation and early
evolution. However, the statistical significance of these findings is
questionable (Adibekyan et al., 2021). And it has been found that the stellar
overdensities reflect the galactic kinematic/dynamical evolution (Mustill et
al., 2022b; Kruijssen et al., 2021) and are not necessarily relics of a
clustered star formation. Therefore, we refrain from discussing the
implications of these findings for our results.
Finally, we have only examined a lone planet around a solar mass star.
Statistically, how a multi-planet system evolves under stellar scattering
depends on the architecture of the system, and instant or delayed
instability may result (e.g., Malmberg et al., 2011; Li et al., 2019, 2020a;
Wang et al., 2022). If the system acquires a companion star, whether ZKL
cycles/instability are initiated also depends on the planets’
configuration (e.g., Innanen et al., 1997; Malmberg et al., 2007a; Marzari et
al., 2022). A thorough discussion of this is beyond the scope of
this work.
## 7 Conclusions
We have proposed a formation channel for HJs in open star clusters: a
planetary system, through binary–single interactions, acquires a companion
star, which then excites the planet’s orbit through the ZKL mechanism,
activating high-eccentricity migration and giving rise to the creation of an
HJ. Using Monte Carlo simulations, we have modelled how a solar mass star
hosting a lone gas giant planet scatters with binary and single stars
successively in an open cluster, tracking the evolution of the planet under
Newtonian gravity, GR, and tides. Our main findings are as follows.
* •
If a solar mass star hosts a giant planet at a few au and acquires a companion
star a few hundreds of au distant, that companion is able to excite the
planet’s orbit through the ZKL mechanism before it is stripped by stellar
scattering in the cluster.
* •
As a consequence, the planet’s pericentre distance $r_{\mathrm{peri,pl}}$ may
reach a few solar radii. If so, the planet’s orbit can be shrunk by tidal
dissipation in a few Myr and an HJ results.
* •
In our nominal cluster with $n_{*}=50$ pc$^{-3}$ and $f_{\mathrm{bin}}=0.5$, $\sim
2\%$ of single gas giants orbiting a solar mass star between 1 and 10 au will
become HJs through the above channel within a Gyr.
* •
In the meantime, $\sim 4\%$ of the planets collide with or are tidally
disrupted by the host star because of the large-amplitude ZKL oscillations
forced by the companion star.
* •
About 20% of the planets are ejected from their host stars owing to stellar
scattering.
* •
A far smaller percentage, $\lesssim 0.1\%$, of the planets can acquire a small
pericentre distance directly during stellar scattering and become HJs without
the need for a companion star.
* •
The total percentage of HJ formation and collision/tidal disruption
depends on the cluster properties. The cluster $f_{\mathrm{bin}}$ sets an
upper limit that will be reached given enough time ($10\%$ at
$f_{\mathrm{bin}}=0.3$ and 18% at $f_{\mathrm{bin}}=1$). How quickly this
limit is reached depends linearly on $n_{*}$: a few Gyr for $n_{*}$ of a
few hundred pc$^{-3}$.
* •
Adopting a more efficient tidal model turns a fraction of the planets with the
collision outcome into HJs. In general, the likelihoods of HJ formation and
collision are comparable.
## Acknowledgements
The authors are grateful to the anonymous referee for comments and
suggestions that helped improve the manuscript. The authors acknowledge
financial support from the National Natural Science Foundation of China
(grants 12103007 and 12073019), the Swedish Research Council (grant
2017-04945), the Swedish National Space Agency (grant 120/19C), and the
Fundamental Research Funds for the Central Universities (grant 2021NTST08).
This work has made use of the HPC facilities at Beijing Normal University.
## Data Availability
The data underlying this paper will be shared on reasonable request to the
corresponding author.
Shigemi Ohta
# Isovector nucleon form factors from 2+1-flavor dynamical domain-wall lattice
QCD at the physical mass
###### Abstract
KEK-TH-2457
Nucleon isovector form factors calculated on a 2+1-flavor domain-wall-fermions
ensemble with strange and degenerate up and down quarks at physical mass, and
a lattice cutoff, $a^{-1}$, of about 1.730(4) GeV, are reported. The ensemble
was generated jointly by RBC and UKQCD collaborations with a spatial extent of
$48a$ or about 5.5 fm. The form factors are calculated in collaboration with
the LHP collaboration. The resulting shape parameters of the form factors,
such as vector-charge mean squared radius, $\langle r_{1}^{2}\rangle$, or
anomalous magnetic moment, $F_{2}(0)$, appear less dependent on possible
excited-state contaminations than the corresponding charges. Preliminary
estimates are $\langle r_{1}^{2}\rangle\sim 0.142(13)\,\mbox{fm}^{2}$ and
$F_{2}(0)\sim 3.22(8)$.
We report the isovector vector- and axialvector-current form factors of the
nucleon calculated using a 2+1-flavor dynamical domain-wall fermions (DWF)
numerical lattice quantum chromodynamics (QCD) ensemble with a lattice cutoff,
$a^{-1}$, of about 1.730(4) GeV and physical light- and strange-quark mass.
DWF lattice QCD preserves continuum-like chiral and flavor symmetries at
relatively large lattice spacing, such as 0.1 fm. RIKEN-BNL-Columbia (RBC) and
UKQCD collaborations have been jointly generating dynamical 2+1-flavor DWF
numerical lattice-QCD ensembles for over a decade now [1, 2, 3, 4, 5, 6, 7, 8,
9, 10]. We have been working at physical mass for a while [9, 10]. We have
used some of these DWF ensembles for studying nucleon [11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25]. We found a deficit [11] in the calculated
isovector axial charge, $g_{A}$, in comparison with its experimental value
[26]. This roughly ten-percent deficit, seen at pion masses from about 420 MeV
down to 170 MeV, did not move much [14, 15, 16, 17, 18, 20, 21, 22, 25] as we
refined our analysis with lighter-mass ensembles. Almost all other
calculations confirmed this at similar lattice cutoffs and quark masses [27,
28, 29, 30, 31]. Since then, more calculations at almost physical mass have
been conducted, bringing the calculated values closer to the experiment [31,
28, 32, 33, 34], sometimes covering the experimental value within relatively
large statistical and systematic errors.
However, our unitary DWF calculations, with better chiral and flavor
symmetries and consequently smaller systematic and statistical errors, still
observe the deficit. The statistical significance of this result depends on
the renormalization used for the axialvector current [23, 24, 35]: from about
three standard deviations with the renormalization obtained in the
meson-sector calculation to about five standard deviations with the
renormalization using the nucleon vector charge. We note the corresponding
vector-charge calculation suggests possible contamination from nearby excited
states [23, 24, 35, 36, 37], in contrast to earlier DWF calculations that did
not find any evidence for such contamination [25, 35].
On the other hand, the form factors calculated at finite momentum transfer
fluctuate statistically more than the charges calculated at zero momentum
transfer. As a result, possible contamination from excited states is less
detectable: it could be hidden by the larger statistical fluctuations in the
shape parameters of the form factors, such as mean squared radii or magnetic
moments [35].
The results presented here were calculated using the “48I” $48^{3}\times 96$
2+1-flavor dynamical Möbius DWF ensemble at physical mass with Iwasaki gauge
action at gauge coupling $\beta=2.13$, corresponding to a lattice cutoff of
$a^{-1}=1.730(4)$ GeV, jointly generated by the RBC and UKQCD collaborations
[9]. In total, 130 configurations, separated by 20 MD trajectories in the
range of trajectory number 620 to 980 and by 10 MD trajectories in the range
of trajectory number from 990 to 2160, except the missing 1050, 1070, 1150,
1170, 1250, 1270, and 1470, are used. Each configuration is deflated [38] with
2000 low Dirac eigenvalues. The “AMA” statistics trick [39], with $4^{4}=256$
AMA sloppy samples unbiased by four precision ones from each configuration, is
used. Gauge-invariant Gaussian smearing [40, 41] with similar parameters as in
the past RBC nucleon structure calculations is applied to nucleon source and
sink, separated in time by $8\leq T\leq 12$ lattice units. We obtained a
nucleon mass estimate of 947(6) MeV from this ensemble [23, 24, 35].
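The statistical errors quoted for the results below are single-elimination jackknife estimates over these configurations. A minimal pure-Python sketch of that procedure, applied here to toy numbers rather than actual ensemble data:

```python
import math

def jackknife(samples, estimator):
    """Single-elimination jackknife: return the full-sample estimate and
    the jackknife error of `estimator` over the list `samples`."""
    n = len(samples)
    full = estimator(samples)
    # Leave-one-out estimates
    loo = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    loo_mean = sum(loo) / n
    err = math.sqrt((n - 1) / n * sum((x - loo_mean) ** 2 for x in loo))
    return full, err

# Toy stand-ins for per-configuration plateau values:
values = [0.95, 0.97, 0.96, 0.94, 0.98, 0.96]
est, err = jackknife(values, lambda xs: sum(xs) / len(xs))
```

For the simple mean the jackknife error reproduces the usual standard error of the mean; its value lies in its direct applicability to nonlinear estimators such as fitted radii.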
The nucleon isovector vector- and axialvector-current form factors are
experimentally measured in lepton elastic scatterings off, or $\beta$ decay
of, or muon capture by nucleons:
$\langle p|V^{+}_{\mu}(x)|n\rangle=\bar{u}_{p}\left[\gamma_{\mu}F_{1}(q^{2})-i\sigma_{\mu\lambda}q_{\lambda}\frac{F_{2}(q^{2})}{2m_{N}}\right]u_{n}e^{iq\cdot x},$

$\langle p|A^{+}_{\mu}(x)|n\rangle=\bar{u}_{p}\left[\gamma_{\mu}\gamma_{5}F_{A}(q^{2})+\gamma_{5}q_{\mu}\frac{F_{P}(q^{2})}{2m_{N}}\right]u_{n}e^{iq\cdot x}.$
They are related to various important nucleon observables: the mean-squared
charge radius, $\langle r_{1}^{2}\rangle$, through the expansion of the vector
form factor, $\displaystyle F_{1}(Q^{2})=F_{1}(0)-\frac{1}{6}\langle
r_{1}^{2}\rangle Q^{2}+\dots$, in terms of momentum transfer squared,
$Q^{2}=|q^{2}|$; the anomalous magnetic moment, $F_{2}(0)$; and the isovector
axial charge, $g_{A}=F_{A}(0)=1.2754(13)g_{V}$ [26], which determines the
neutron lifetime and nuclear $\beta$-decay strengths that in turn determine
nuclear abundances.
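As a concrete illustration of this expansion, the difference between $F_{1}$ at $Q^{2}=0$ and at the smallest lattice momentum gives a linear estimate of $\langle r_{1}^{2}\rangle$. The sketch below uses the quoted $a^{-1}=1.730$ GeV and spatial extent $48a$; the two form-factor values are hypothetical placeholders, not measured numbers:

```python
import math

AINV = 1.730      # lattice cutoff in GeV (from the text)
L = 48            # spatial lattice extent in lattice units
HBARC = 0.1973    # hbar*c in GeV*fm, for unit conversion

# One lattice unit of momentum transfer squared, in GeV^2
Q2_one = (2.0 * math.pi * AINV / L) ** 2

# Hypothetical form-factor values at Q^2 = 0 and 1 lattice unit
F1_0, F1_1 = 1.000, 0.956

# Linear estimate <r_1^2> = 6 [F_1(0) - F_1(Q^2)] / (F_1(0) Q^2), in fm^2
r1_sq_fm2 = 6.0 * (F1_0 - F1_1) / (F1_0 * Q2_one) * HBARC ** 2
```

With these placeholder values the estimate comes out near $0.20\ \mbox{fm}^{2}$, the scale seen in Fig. 2.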
Figure 1: Nucleon isovector vector form factor $F_{1}$ with one lattice unit
of momentum transfer squared, $Q^{2}=1$, plotted against source-sink
separations, $T$, of 8, 9, 10, 11, and 12 lattice units. The plateaux are
well-defined and consistent with each other: suspected excited-state
contamination is not detected.
To extract the form factors, we use the standard ratios,
$\displaystyle\frac{C_{\rm 3pt}^{\Gamma,O}(t_{\rm src},t,t_{\rm snk})}{C_{\rm
2pt}(t_{\rm src},t_{\rm snk})}$, of two-point,
$C^{(2)}(t_{\rm src},t_{\rm
snk})=\sum_{\alpha,\beta}\left(\frac{1+\gamma_{t}}{2}\right)_{\alpha\beta}\langle
N_{\beta}(t_{\rm snk})\bar{N}_{\alpha}(t_{\rm src})\rangle,$
and three-point,
$C^{(3)\Gamma,O}(t_{\rm src},t,t_{\rm
snk})=\sum_{\alpha,\beta}\Gamma_{\alpha\beta}\langle N_{\beta}(t_{\rm
snk})O(t)\bar{N}_{\alpha}(t_{\rm src})\rangle,$
correlators with a nucleon operator,
$N=\epsilon_{abc}(u_{a}^{T}C\gamma_{5}d_{b})u_{c}$. Plateaux of these ratios
in time between the source and sink are obtained with appropriate spin
($\Gamma=(1+\gamma_{t})/2$ or $(1+\gamma_{t})i\gamma_{5}\gamma_{k}/2$) or
momentum-transfer projections, which in turn give lattice bare value estimates
for the expected values, $\langle O\rangle$, for the relevant observables.
More specifically, for the form factors, ratios such as
$\frac{C^{(3)\Gamma,O}_{\rm GG}(t_{\rm src},t,t_{\rm snk},\vec{p}_{\rm
src},\vec{p}_{\rm snk})}{C^{(2)}_{\rm GG}(t_{\rm src},t_{\rm snk},\vec{p}_{\rm
src},\vec{p}_{\rm snk})}\times\sqrt{\frac{C^{(2)}_{\rm LG}(t,t_{\rm
snk},\vec{p}_{\rm src})\,C^{(2)}_{\rm GG}(t_{\rm src},t,\vec{p}_{\rm
snk})\,C^{(2)}_{\rm LG}(t_{\rm src},t_{\rm snk},\vec{p}_{\rm
snk})}{C^{(2)}_{\rm LG}(t,t_{\rm snk},\vec{p}_{\rm snk})\,C^{(2)}_{\rm
GG}(t_{\rm src},t,\vec{p}_{\rm src})\,C^{(2)}_{\rm LG}(t_{\rm src},t_{\rm
snk},\vec{p}_{\rm src})}}$
with point (L) or Gaussian (G) smearings, give plateaux dependent only on
momentum transfer. Further details can be found in our earlier publications,
such as Ref. [12].
We use source-sink separations, $T$, from 8 to 12, following the earlier
studies of isovector charges and couplings [23, 24, 35]. All 147 three-
momentum transfers $\vec{Q}$ with $Q^{2}\leq 10$ are included; note there is
no such three-momentum with $Q^{2}=7$ lattice units. The results for the
vector form factor at the minimum finite momentum transfer of $Q^{2}=1$ in
lattice units, presented in Fig. 1, are encouraging: since the numbers from
the shortest source-sink separation of $T=8$, with the smallest statistical
fluctuations, agree with the numbers from the longer separations of $T=9$, 10,
11, and 12, with successively larger statistical fluctuations, we should be
able to extract shape parameters such as mean squared radii or magnetic
moments from these form factors without detectable contamination from excited
states. As even the ground-state signals deteriorate for $T\geq 11$, it is
best to use the calculations with $T\leq 10$.
Indeed, in the mean-squared charge radius, defined by $\displaystyle\langle
r_{1}^{2}\rangle=\frac{6[F_{1}(Q^{2}=0)-F_{1}(Q^{2}=1)]}{Q^{2}=1}$ and shown
in Fig. 2, no excited-state contamination is detectable above the statistical
errors: as can be expected from the form factor values, $F_{1}(Q^{2}=1)$,
shown in Fig. 1, the estimates from the shorter separations, $T=8$ and 9, with
smaller statistical fluctuations, agree with those from the longer
separations, $T=10$, 11, and 12, with larger fluctuations.
Figure 2: The mean-squared radius, $\langle
r_{1}^{2}\rangle=\frac{6[F_{1}(Q^{2}=0)-F_{1}(Q^{2}=1)]}{Q^{2}=1}$, does not
seem to depend on the source-sink separation. The values average $\sim
0.20(2)\,\mbox{fm}^{2}$, compared with the experimental value of
$[(0.8409(4))^{2}+0.1161(22)=0.8682(29)]\,\mbox{fm}^{2}$.
The whole shape of the vector form factor, $F_{1}$, is presented in Fig. 3:
Figure 3: Isovector vector form factor, $F_{1}(Q^{2})$, scaled by the
corresponding charge, $F_{1}(0)$. The shape does not depend on source-sink
separation $T$. Note that the values for $Q^{2}=7$ are absent as there is no
lattice momentum transfer combination with $Q^{2}=7$.
The shape does not depend on source-sink separation $T$ in the sense that the
values calculated with shorter separations are well contained within the error
bars of the values calculated with longer separations. The suspected excited-
state contamination is not detected in the whole shape either.
This form factor is easily fit by a wide range of multipole forms,
$\displaystyle F(Q^{2})\sim F(0)\left(1+\frac{Q^{2}}{M_{p}^{2}}\right)^{-p}$,
not only with $p=1$, 2, and 3 presented in Fig. 4
Figure 4: The isovector vector form factor is easily fit by a wide range of
multipole forms, $F(Q^{2})\sim
F(0)\left(1+\frac{Q^{2}}{M_{p}^{2}}\right)^{-p}$. Here vector form factor is
compared with multipoles $p=1$, 2, and 3, as well as a linear fit using
$Q^{2}=0$ and 1 only. All the fits result in similar estimates for the mean-
square radius.
but also with $p=4$, 5, 6, and 7, all resulting in $\chi^{2}$ per degree of
freedom below unity. The resulting mean-squared charge radius estimates of
$\langle r_{1}^{2}\rangle=6p/M_{p}^{2}\sim 0.14\ \mbox{\rm fm}^{2}$ do not
depend much on the multipolarity, $p$, in a wide range of $1\leq p\leq 7$. Nor
do they differ from the linear extrapolation using only $Q^{2}=0$ and 1.
However, the multipole form with $p\leq 1/2$ or $8\leq p$ does not work.
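The fit logic can be sketched in a few lines. For noise-free multipole pseudo-data, two $Q^{2}$ points fix $M_{p}^{2}$ in closed form, and $\langle r^{2}\rangle=6p/M_{p}^{2}$ converts to $\mbox{fm}^{2}$ using the quoted cutoff; apart from $a^{-1}=1.730$ GeV and $L=48$, all numbers below are illustrative:

```python
import math

AINV, L, HBARC = 1.730, 48, 0.1973         # GeV, lattice sites, GeV*fm
q2_unit = (2.0 * math.pi * AINV / L) ** 2  # one lattice unit of Q^2 in GeV^2

def multipole(Q2, F0, M2, p):
    """F(Q^2) = F0 (1 + Q^2/M2)^(-p), with Q^2 and M2 in lattice units."""
    return F0 * (1.0 + Q2 / M2) ** (-p)

# Noise-free dipole (p = 2) pseudo-data on the available lattice momenta
# (Q^2 = 7 is absent, as in the text); M2 chosen purely for illustration.
M2_true, F0, p = 65.0, 1.0, 2
data = {q: multipole(q, F0, M2_true, p) for q in range(11) if q != 7}

# Two points determine M2 exactly for such data:
# F(0)/F(Q^2) = (1 + Q^2/M2)^p  =>  M2 = Q^2 / ((F(0)/F(Q^2))^(1/p) - 1)
M2_fit = 1.0 / ((data[0] / data[1]) ** (1.0 / p) - 1.0)

# Mean-squared radius <r^2> = 6p/M2 (lattice units), converted to fm^2
r2_fm2 = 6.0 * p / (M2_fit * q2_unit) * HBARC ** 2
```

A real fit would minimize $\chi^{2}$ over all momenta with jackknife errors; the closed form above only illustrates the parameter-to-radius conversion.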
The induced tensor form factor, $F_{2}$, is presented in Fig. 5:
Figure 5: Isovector induced tensor form factor, $F_{2}(Q^{2})$, scaled by the
corresponding charge, $F_{1}(0)$. The shape does not depend on source-sink
separation $T$. These extrapolate to $\sim 3.2(2)$; the experimental value is
$2.7928473446(8)+1.9130427(5)-1=3.705874(5)$ [26].
Figure 6: Isovector axialvector form factor, $F_{A}$, of the axialvector
current, renormalized with $Z_{A}=0.71191(5)$ obtained in the meson sector
[9]. These extrapolate to $\langle r_{A}^{2}\rangle\sim
0.20(2)\,\mbox{\rm fm}^{2}$.
Again, the shape is not affected by excited states, in the sense that the
values calculated with shorter separations are well contained within the error
bars of the values calculated with longer separations.
This form factor also is well fitted by the same wide range of multipole forms
with $1\leq p\leq 7$. The resulting extrapolation estimates for the isovector
anomalous magnetic moment, $F_{2}(0)\sim 3.2$, do not depend on the
multipolarity in this range. Nor do they differ much from linear extrapolation
using the two smallest available momenta transfer, $Q^{2}=1$ and 2. The
multipole form with $p\leq 1/2$ or $8\leq p$ does not work.
The axialvector form factor, $F_{A}$, of the axialvector current is presented
in Fig. 6. It is an important observable for the ongoing neutrino experiments
but is known only poorly, from bubble-chamber experiments in the 1970s.
The induced pseudoscalar form factor, $F_{P}$, of the axialvector current is
presented in Fig. 7:
Figure 7: Isovector pseudoscalar form factor, $F_{P}$, of the axialvector
current, renormalized with $Z_{A}=0.71191(5)$ [9]. These extrapolate to
$F_{P}(0)\sim 25(2)$, likely resulting in a much smaller $g_{P}$ for muon
capture.
This is another crucial observable for processes such as muon capture.
Both these form factors are easily fitted by the same wide range of multipole
forms with multipolarity $1\leq p\leq 7$. The resulting estimates of $\langle
r_{A}^{2}\rangle\sim 0.18\ \mbox{\rm fm}^{2}$ and
$F_{P}(0)\sim 25$ do not depend much on the multipolarity $p$. Nor do they differ
from the linear extrapolations using only the two smallest available $Q^{2}$
values of either 0 and 1 or 1 and 2. The multipole form with $p\leq 1/2$ or
$8\leq p$ does not work.
In Table 1, the isovector form factor shape parameters from the dipole fits
are compared with those from the linear extrapolations using only the two
smallest available $Q^{2}$ values: the two sets agree well. The corresponding
extrapolations using the other multipolarities, $1\leq p\leq 7$, do not
differ.

| | | $T=8$ | $T=9$ | $T=10$ | $T=11$ | $T=12$ | experiment |
|---|---|---|---|---|---|---|---|
| $\langle r_{1}^{2}\rangle$ $[\mbox{fm}^{2}]$ | linear | 0.134(14) | 0.14(2) | 0.13(3) | 0.16(5) | 0.13(8) | 0.868(3) |
| | dipole | 0.135(6) | 0.143(8) | 0.142(13) | 0.14(2) | 0.13(3) | |
| $F_{2}(0)$ | linear | 3.159(4) | 3.250(6) | 3.242(8) | 3.252(13) | 3.61(2) | 3.705874(5) |
| | dipole | 3.10(5) | 3.15(6) | 3.22(8) | 3.24(11) | 3.5(2) | |
| $\langle r_{A}^{2}\rangle$ $[\mbox{fm}^{2}]$ | linear | 0.177(2) | 0.174(2) | 0.182(4) | 0.192(5) | 0.066(8) | – |
| | dipole | 0.177(7) | 0.174(10) | 0.176(14) | 0.18(2) | 0.15(3) | |
| $F_{P}(0)$ | linear | 21.01(3) | 22.61(5) | 23.90(7) | 23.04(11) | 26.5(2) | – |
| | dipole | 23(2) | 25(2) | 26(2) | 26(2) | 30(2) | |

Table 1: The isovector form factor shape parameters obtained by dipole fits
agree with those from linear extrapolations using only the smallest two
$Q^{2}$ values. The vector-current parameters, however, disagree with well-
established experiments [26]. The errors are single-elimination jackknife
statistical.
Consequently, the shape parameter estimates from other fit ansatzes, such as
bounded $z$ expansion, should not differ either, though we are yet to complete
such analyses.
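For reference, the bounded $z$ expansion replaces $Q^{2}$ by the conformal variable $z=(\sqrt{t_{\rm cut}+Q^{2}}-\sqrt{t_{\rm cut}})/(\sqrt{t_{\rm cut}+Q^{2}}+\sqrt{t_{\rm cut}})$, with $t_{\rm cut}=4m_{\pi}^{2}$ for the vector form factor. A minimal sketch of the map (taking $t_{0}=0$; the pion mass value is approximate):

```python
import math

MPI = 0.135             # pion mass in GeV (approximate physical value)
T_CUT = 4.0 * MPI ** 2  # two-pion threshold for the vector form factor

def z_of_Q2(Q2_gev2):
    """Conformal map z(Q^2) used in the bounded z expansion, with t0 = 0."""
    a = math.sqrt(T_CUT + Q2_gev2)
    b = math.sqrt(T_CUT)
    return (a - b) / (a + b)

# One lattice unit of Q^2 (~0.0513 GeV^2 at a^{-1} = 1.730 GeV, L = 48)
z1 = z_of_Q2(0.0513)
```

All lattice momenta used here map into $|z|$ below about 0.5, which is what makes a low-order polynomial in $z$ a controlled fit ansatz.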
However, the vector-current form factor shape parameters we obtain here do not
agree with the experiments. We should investigate smaller momentum transfers
than in the present study. Such an investigation can be conducted with twisted
boundary conditions in spatial directions for valence quark propagators.
The author thanks the members of LHP, RBC, and UKQCD collaborations,
particularly Sergey Syritsyn. The “48I” ensemble was generated using the IBM
Blue Gene/Q (BG/Q) “Mira” machines at the Argonne Leadership Class Facility
(ALCF) provided under the Incite Program of the US DOE, on the “DiRAC” BG/Q
system funded by the UK STFC in the Advanced Computing Facility at the
University of Edinburgh, and on the BG/Q machines at the Brookhaven National
Laboratory. The nucleon calculations were done using ALCF Mira. The author was
partially supported by the Japan Society for the Promotion of Science,
Kakenhi grant 15K05064.
## References
* Antonio _et al._ [2007] D. Antonio _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 75, 114501 (2007), arXiv:hep-lat/0612005 .
* Allton _et al._ [2007] C. Allton _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 76, 014504 (2007), arXiv:hep-lat/0701013 .
* Antonio _et al._ [2008a] D. Antonio _et al._ (RBC and UKQCD Collaborations), Phys. Rev. Lett. 100, 032001 (2008a), arXiv:hep-ph/0702042 .
* Antonio _et al._ [2008b] D. Antonio _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 77, 014509 (2008b), arXiv:0705.2340 [hep-lat] .
* Allton _et al._ [2008] C. Allton _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 78, 114509 (2008), arXiv:0804.0473 [hep-lat] .
* Aoki _et al._ [2011a] Y. Aoki _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 83, 074508 (2011a), arXiv:1011.0892 [hep-lat] .
* Aoki _et al._ [2011b] Y. Aoki _et al._ , Phys. Rev. D 84, 014503 (2011b), arXiv:1012.4178 [hep-lat] .
* Arthur _et al._ [2013] R. Arthur _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 87, 094514 (2013), arXiv:1208.4412 [hep-lat] .
* Blum _et al._ [2016] T. Blum _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 93, 074505 (2016), arXiv:1411.7017 [hep-lat] .
* Boyle _et al._ [2016] P. A. Boyle _et al._ , Phys. Rev. D 93, 054502 (2016), arXiv:1511.01950 [hep-lat] .
* Yamazaki _et al._ [2008] T. Yamazaki _et al._ (RBC and UKQCD Collaborations), Phys. Rev. Lett. 100, 171602 (2008), arXiv:0801.4016 [hep-lat] .
* Yamazaki _et al._ [2009] T. Yamazaki, Y. Aoki, T. Blum, H.-W. Lin, S. Ohta, _et al._ (RBC and UKQCD Collaborations), Phys. Rev. D 79, 114505 (2009), arXiv:0904.2039 [hep-lat] .
* Aoki _et al._ [2010] Y. Aoki, T. Blum, H.-W. Lin, S. Ohta, S. Sasaki, _et al._ , Phys. Rev. D 82, 014501 (2010), arXiv:1003.3387 .
* Ohta [2011] S. Ohta (RBC and UKQCD Collaborations), PoS LATTICE2011, 168 (2011), arXiv:1111.5269 [hep-lat] .
* Lin and Ohta [2012a] M. Lin and S. Ohta (RBC and UKQCD Collaborations), Prog. Part. Nucl. Phys. 67, 218 (2012a), arXiv:1112.5489 [hep-lat] .
* Lin and Ohta [2012b] M. Lin and S. Ohta (RBC and UKQCD Collaborations), PoS LATTICE2012, 171 (2012b), arXiv:1212.3235 [hep-lat] .
* Ohta [2013] S. Ohta (RBC and UKQCD Collaborations), PoS LATTICE2013, 274 (2013), arXiv:1309.7942 [hep-lat] .
* Ohta [2014] S. Ohta (RBC and UKQCD Collaborations), _Proceedings, 32nd International Symposium on Lattice Field Theory (Lattice 2014)_ , PoS LATTICE2014, 149 (2014), arXiv:1410.8353 [hep-lat] .
* Syritsyn _et al._ [2015] S. Syritsyn _et al._ , _Proceedings, 32nd International Symposium on Lattice Field Theory (Lattice 2014): Brookhaven, NY, USA, June 23-28, 2014_ , PoS LATTICE2014, 134 (2015), arXiv:1412.3175 [hep-lat] .
* Ohta [2016] S. Ohta (LHP, RBC, and UKQCD Collaborations), _Proceedings, 33rd International Symposium on Lattice Field Theory (Lattice 2015): Kobe, Japan, July 14-18, 2015_ , PoS LATTICE2015, 124 (2016), arXiv:1511.05126 [hep-lat] .
* Abramczyk _et al._ [2016] M. Abramczyk, M. Lin, A. Lytle, and S. Ohta (RBC and UKQCD Collaborations), _Proceedings, 34th International Symposium on Lattice Field Theory (Lattice 2016): Southampton, UK, July 24-30, 2016_ , PoS LATTICE2016, 150 (2016), arXiv:1610.09773 [hep-lat] .
* Ohta [2018a] S. Ohta (RBC and UKQCD Collaborations), _Proceedings, 35th International Symposium on Lattice Field Theory (Lattice 2017): Granada, Spain, June 18-24, 2017_ , EPJ Web Conf. 175, 06012 (2018a), arXiv:1710.06656 [hep-lat] .
* Ohta [2018b] S. Ohta, _Proceedings, 36th International Symposium on Lattice Field Theory (Lattice 2018): East Lansing, MI, United States, July 22-28, 2018_ , PoS LATTICE2018, 128 (2018b), arXiv:1810.09737 [hep-lat] .
* Ohta [2019] S. Ohta (LHP, RBC and UKQCD Collaborations), PoS LATTICE2019, 051 (2019), arXiv:1910.13860 [hep-lat] .
* Abramczyk _et al._ [2020] M. Abramczyk, T. Blum, T. Izubuchi, C. Jung, M. Lin, A. Lytle, S. Ohta, and E. Shintani, Phys. Rev. D 101, 034510 (2020), arXiv:1911.03524 [hep-lat] .
* Workman and Others [2022] R. L. Workman and Others (Particle Data Group), PTEP 2022, 083C01 (2022).
* Dragos _et al._ [2016] J. Dragos, R. Horsley, W. Kamleh, D. B. Leinweber, Y. Nakamura, P. E. L. Rakow, G. Schierholz, R. D. Young, and J. M. Zanotti, Phys. Rev. D94, 074505 (2016), arXiv:1606.03195 [hep-lat] .
* Bhattacharya _et al._ [2016] T. Bhattacharya, V. Cirigliano, S. Cohen, R. Gupta, H.-W. Lin, and B. Yoon, Phys. Rev. D94, 054508 (2016), arXiv:1606.07049 [hep-lat] .
* Liang _et al._ [2017] J. Liang, Y.-B. Yang, K.-F. Liu, A. Alexandru, T. Draper, and R. S. Sufian, Phys. Rev. D96, 034519 (2017), arXiv:1612.04388 [hep-lat] .
* Ishikawa _et al._ [2018] K.-I. Ishikawa, Y. Kuramashi, S. Sasaki, N. Tsukamoto, A. Ukawa, and T. Yamazaki (PACS), Phys. Rev. D98, 074510 (2018), arXiv:1807.03974 [hep-lat] .
* Chang _et al._ [2018] C. C. Chang _et al._ , Nature 558, 91 (2018), arXiv:1805.12130 [hep-lat] .
* Shintani _et al._ [2019] E. Shintani, K.-I. Ishikawa, Y. Kuramashi, S. Sasaki, and T. Yamazaki, Phys. Rev. D99, 014510 (2019), arXiv:1811.07292 [hep-lat] .
* Hasan _et al._ [2019] N. Hasan, J. Green, S. Meinel, M. Engelhardt, S. Krieg, J. Negele, A. Pochinsky, and S. Syritsyn, Phys. Rev. D99, 114505 (2019), arXiv:1903.06487 [hep-lat] .
* Harris _et al._ [2019] T. Harris, G. von Hippel, P. Junnarkar, H. B. Meyer, K. Ottnad, J. Wilhelm, H. Wittig, and L. Wrang, Phys. Rev. D100, 034513 (2019), arXiv:1905.01291 [hep-lat] .
* Ohta [2022] S. Ohta (UKQCD), PoS LATTICE2021, 529 (2022), arXiv:2111.12972 [hep-lat] .
* Bar and Colic [2021a] O. Bar and H. Colic, Phys. Rev. D 103, 114514 (2021a), arXiv:2104.00329 [hep-lat] .
* Bar and Colic [2021b] O. Bar and H. Colic, in _38th International Symposium on Lattice Field Theory_ (2021) arXiv:2109.14930 [hep-lat] .
* Clark _et al._ [2018] M. A. Clark, C. Jung, and C. Lehner, _Proceedings, 35th International Symposium on Lattice Field Theory (Lattice 2017): Granada, Spain, June 18-24, 2017_ , EPJ Web Conf. 175, 14023 (2018), arXiv:1710.06884 [hep-lat] .
* Shintani _et al._ [2015] E. Shintani, R. Arthur, T. Blum, T. Izubuchi, C. Jung, and C. Lehner, Phys. Rev. D91, 114511 (2015), arXiv:1402.0244 [hep-lat] .
* Alexandrou _et al._ [1994] C. Alexandrou, S. Gusken, F. Jegerlehner, K. Schilling, and R. Sommer, Nucl.Phys. B414, 815 (1994), arXiv:hep-lat/9211042 .
* Berruto _et al._ [2006] F. Berruto, T. Blum, K. Orginos, and A. Soni, Phys.Rev. D73, 054509 (2006), arXiv:hep-lat/0512004 [hep-lat] .
|
# PatchMix Augmentation to Identify Causal Features in Few-shot Learning
Chengming Xu∗, Chen Liu∗, Xinwei Sun, Siqian Yang, Yabiao Wang, Chengjie Wang,
Yanwei Fu ∗ indicates equal contributions. Xinwei Sun and Yanwei Fu are the
corresponding authors. This work was supported in part by the National Natural
Science Foundation of China Grant (62076067), Shanghai Municipal Science and
Technology Major Project (2021SHZDZX0103), and ZJ Lab. Chengming Xu, Xinwei
Sun and Yanwei Fu are with the School of Data Science and MOE Frontiers Center
for Brain Science, Fudan University. Yanwei Fu is also with Fudan ISTBI—ZJNU
Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University,
Jinhua, China. E-mail: {cmxu18, sunxinwei<EMAIL_ADDRESS>Chen Liu is
with the Department of Mathematics at the Hong Kong University of Science and
Technology. E-mail<EMAIL_ADDRESS>Siqian Yang, Yabiao Wang and
Chengjie Wang are with Youtu Lab, Tencent. Chengjie Wang is also with Shanghai
Jiao Tong University. E-mail: {seasonsyang, caseywang,
<EMAIL_ADDRESS>
###### Abstract
The task of few-shot learning (FSL) aims to transfer the knowledge learned
from base categories with sufficient labelled data to novel categories with
scarce known information. It is currently an important research question with
great practical value in real-world applications. Despite extensive previous
efforts on few-shot learning tasks, we emphasize that most existing methods
did not take into account the distributional shift caused by sample selection
bias in the FSL scenario. Such a selection bias can induce spurious
correlation between the semantic _causal_ features, which are causally and
semantically related to the class label, and the other _non-causal_ features.
Critically, the former should be invariant across changes in distributions,
highly related to the classes of interest, and thus well generalizable to
novel classes, while the latter are not stable under changes in the
distribution. To resolve this problem, we propose a novel data augmentation
strategy dubbed _PatchMix_ that can break this spurious dependency by
replacing the patch-level information and supervision of the query images
with random gallery images from classes different from the query ones. We
theoretically show that such an augmentation mechanism, unlike existing ones,
is able to identify the causal features. To further make these features
discriminative enough for classification, we propose a _Correlation-guided
Reconstruction_ (CGR) module and a _Hardness-Aware_ module for instance
discrimination and easier discrimination between similar classes. Moreover,
such a framework can be adapted to the unsupervised FSL scenario. The utility
of our method is demonstrated by the state-of-the-art results consistently
achieved on several benchmarks including miniImageNet, tieredImageNet,
CIFAR-FS, CUB, Cars, Places and Plantae, in all settings of single-domain,
cross-domain and unsupervised FSL. By studying the intra-variance property of
the learned features and visualizing them, we further show quantitatively and
qualitatively that this promising result is due to the effectiveness of
learning causal features.
###### Index Terms:
Few-Shot Learning, spurious correlation, causal features, intra-variance
regularization
## 1 Introduction
Among the many factors behind successful modern deep learning applications,
collecting a large amount of labeled training data is necessary and decisive.
Typically, prevailing computer vision models such as ResNet [1] and Faster
R-CNN [2] are trained on millions of labeled examples and thus achieve decent
generalization ability. Unfortunately, in some cases, such as rare species,
it is not feasible to collect a large amount of labeled and diverse data for
training.
Motivated by humans' ability to learn new objects/concepts from few
references, the task of Few-Shot Learning (FSL) has recently been studied in
the computer vision and machine learning communities. Generally, FSL aims at
learning a model on a base/source dataset of vast annotated data that can
generalize to a novel/target dataset with only few labelled data (_i.e._,
support samples) available.
Existing FSL methods [3, 4, 5] exploit knowledge transfer from the base to
novel categories via the meta-learning paradigm. One of the most commonly
used supervision signals in these meta-learning methods is the image class
labels from the base dataset [6], the same as in many-shot learning. In
classical many-shot learning, the guidance from class labels is common and
effective under the independent and identically distributed (_i.i.d._)
assumption. Unfortunately, FSL suffers from the problem of distribution shift
(Chap. 20 in [7]): the testing distribution of novel target classes is quite
different from the training distribution of the source/base classes. This is
caused by the sample selection bias that exists in data collection and
support/query set splitting.
Figure 1: Existing FSL methods typically train models directly with
image-level annotations. In this way both the causal features (_e.g._, the
dogs) and the non-causal features (_e.g._, the people, trees and grass) are
learned to build up correlation with the class label, which can lose
predictive ability and be less effective on novel data due to the domain gap.
Compared with those methods, our proposed PatchMix helps the model learn
disentangled causal features by breaking up the dependency between causal and
non-causal features, thus generalizing better to novel categories.
Such a selection bias can induce spurious correlation between causal and non-
causal features of the classes of interest; these two kinds of features do
and do not causally influence the class label, respectively. For example, if
the label is ”dog”, its texture and shape are causal features, while features
related to the ”people” are non-causal features. Critically, the causal
features should be reliably predictive of the classes of interest, invariant
across changes in distributions, and thus well generalizable to novel
classes. As deep models are typically optimized to fit the training data
well, such ”shortcut” non-causal features are easier to learn by inheriting
the spurious correlation [8, 9]. While the non-causal features may still
benefit supervised learning on i.i.d. data [10], they can hurt the
performance of generalizing to novel data, whose distribution differs from
that of the base data.
We give a detailed example in Fig. 1. As shown in the feature map, the model
has learned to strongly associate the grass and the people with the dogs to
help recognize the ‘dog’ class. This makes sense, as people often play with
dogs on the grass, which leads to a biased combination of these elements in
the sampled training data. Nevertheless, the learned non-causal features,
_i.e._, the grass and the people, are no longer correlated with other
categories like horse or lion. Compared with IFSL [11], which also considers
spurious correlation but only partly studies the bias inherited from
pre-trained backbones, our paper understands and addresses this challenge for
the general distribution shift in FSL. Even when there is no pre-training
stage, the spurious correlation can still commonly exist via such a shift.
To this end, we propose a novel data-augmentation mechanism dubbed PatchMix
that is effective in learning causal features and hence generalizes better to
novel categories. Specifically, our PatchMix replaces some patches of each
query image with class label $A$ with patches from gallery images of a random
different class $B$. Meanwhile, the replaced patches are labelled as $B$
(_i.e._, the gallery category), rather than the query category $A$. In this
regard, the spurious correlations between causal and non-causal factors that
often arise from different patches (_e.g._, the dog and the people) are
largely weakened, endowing the model with the ability to disentangle the
causal features from the others. In particular, we provide a theoretical
analysis from the _Structural Causal Model_ (SCM) perspective. We show that
our PatchMix operation is able to reduce the correlation between
spurious/non-causal and causal features, and is thus also able to suppress
the non-causal features during testing on novel classes. Such a
disentanglement makes our PatchMix prominent among other data augmentation
methods like CutMix [12] (see Sec. 3.3 for a detailed analysis). After
dropping the non-causal features, our learned features correlate better with
class labels, and thus have smaller variances among instances from the same
novel category. As a result, models trained with PatchMix enjoy easier
classification between different classes.
To make our causal features more discriminative for classification, we
further propose two modules to enhance PatchMix, namely the _hardness-aware_
and _Correlation-Guided Reconstruction_ (CGR) modules. Specifically, in CGR,
the original query image is reconstructed by selecting informative patches
from both the query and gallery images, based on the similarity between each
patch and the query patches. This module helps our model tell apart features
from different images, thus fulfilling instance discrimination with better
learned features. To further learn discriminative features, especially
between similar classes, in the _hardness-aware_ module we propose selecting
the query and gallery images from two classes that are globally most similar
to each other. Specifically, we formulate this as a Travelling Salesman
Problem (TSP) on a distance graph, in which categories and the negative
similarities among them are the nodes and edge weights of the graph. Equipped
with such a mixture strategy, we can control the hardness of training
episodes, leading to more robustness and better representations.
Moreover, based on CACTUs [13], our PatchMix can be well adapted to
unsupervised FSL (_i.e._, only unlabeled base data is provided), which is
meaningful when the labeling cost is high. Concretely, the unsupervised
pre-training stage is replaced by our novel PatchMoCo, which enforces
patch-level contrast among different images. We then incorporate PatchMix
into the pseudo-label training stage.
To validate the efficacy of our method, we conduct experiments on
single-domain, cross-domain and unsupervised FSL with a wide range of
benchmark datasets including miniImageNet, tieredImageNet, CIFAR-FS and CUB.
Extensive results show that PatchMix achieves state-of-the-art performance in
all settings compared with previous methods, indicating its generalization
ability. Besides, we show that this improvement can be attributed to the
ability to learn causal features, indicated by the concentration on the class
of interest in the visualized feature maps and the smaller intra-variance of
learned features among instances from the same category (_a.k.a._, neural
collapse [14]).
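The intra-variance quantity referenced above can be measured as the mean
within-class feature variance; this is our own minimal formulation of the
metric, with the function name assumed for illustration:

```python
import numpy as np

def intra_class_variance(feats, labels):
    """Mean within-class variance of features: the quantity inspected to
    check that learned features collapse per class (smaller is better).

    feats: (n, d) feature matrix, labels: (n,) integer class labels.
    Returns the average over classes of the per-class total variance
    (sum of per-dimension variances).
    """
    return float(np.mean([feats[labels == c].var(axis=0).sum()
                          for c in np.unique(labels)]))
```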
The contributions of this paper are as follows: (1) We propose a novel data
augmentation method tailored for few-shot learning, dubbed PatchMix, that can
remove the spurious correlation and identify the causal features. Critically,
we leverage the well-established causal framework, the _Structural Causal
Model_, to explain and understand our PatchMix. To the best of our knowledge,
this is the first work that presents a unified causal learning framework for
data augmentation to identify causal features in few-shot learning tasks.
(2) We enhance our PatchMix model with the newly proposed _correlation-guided
reconstruction_ and _hardness-aware_ modules, which facilitate better feature
learning. (3) We propose a novel unsupervised FSL method comprising a new
unsupervised pre-training strategy named PatchMoCo, which is built upon
PatchMix, and a pseudo-label training stage. (4) We conduct extensive
experiments on different settings and various datasets. The results verify
the effectiveness of our proposed method.
Extensions. This paper is an extension of [15]. We have added the following
aspects to the conference version: (1) We explain how PatchMix can help
few-shot learning from the perspective of learning disentangled causal
features, by providing a theoretical interpretation. (2) We further enhance
our PatchMix with two novel modules, _i.e._, correlation-guided
reconstruction and hardness-aware PatchMix, which further improve its
efficacy. (3) We empirically show that the proposed CGR and hardness-aware
PatchMix significantly improve the model. Meanwhile, our proposed method
achieves state-of-the-art results in several settings. The code and models
will be released on the project pages.
## 2 Related Work
Few-shot recognition. Few-shot learning (FSL) aims to recognize target
classes by adapting the prior ‘knowledge’ learned from base categories. Such
knowledge usually resides in a deep embedding model for the general-purpose
matching of support and query image pairs. The embedding is normally learned
with sufficient training instances of base categories and updated with a few
training instances of novel categories. Recent efforts in FSL focus on
optimization, metric learning and augmentation.
Optimization based methods [16, 5, 17, 18, 19, 20, 21, 22, 23] learn on the
base dataset a good initialization that can be quickly adapted to the novel
dataset. Metric-learning based methods [3, 6, 24, 25, 4, 26, 27, 28, 29, 30,
31, 32, 33, 34, 35, 36, 37] learn a good embedding and an appropriate
comparison metric. Specifically, these methods provide a simple way to
recognize novel data by calculating distance metrics between the image
features and the prototypes of each class. Augmentation based methods [38,
39, 40, 41] directly alleviate the data gap in few-shot learning by producing
various kinds of data. Typically, these methods use generative models to
create new training samples from available ones and random noise, or use the
augmented data in the testing phase to learn a more robust classifier for
each task. Essentially, our PatchMix can still be categorized as an
augmentation method. However, it differs from those, as we are the _first_ to
leverage augmentation to learn causal features by removing the spurious
correlation caused by sample selection bias in the few-shot learning
scenario. As we will show later, such a disentanglement of causal features
decreases the variance of intra-class instance features, leading to easier
classification between classes.
Note that some earlier FSL methods like MTL [19] use hard sample mining,
taking the classes with low accuracy as the hard classes. In contrast, our
hardness-aware PatchMix encourages mixing each class with its most similar
class, so that the generated images contain more distracting information and
thus improve model robustness. Besides, by generating images from similar
classes, the learned causal features become more discriminative for classes
that would otherwise be too similar to classify (_e.g._, to distinguish the
dog and the wolf, even causal features concentrating on shape are close to
each other).
Data Augmentation. In vanilla supervised learning, data augmentation is a
widely used technique to facilitate training deep models. Naïve augmentation
strategies such as random flip and random crop serve as data regularization
and have been applied to many practical topics and learning tasks [1, 42].
Recently, many works [43, 12, 44, 45, 46, 47, 48] have been proposed that mix
or substitute content between images. Specifically, Mixup [43] mixes two
images and their corresponding labels. CutMix [12] directly replaces a
randomly selected area. Empirically, their efficacy has been evaluated on
both fully-supervised and semi-supervised classification. In FSL, some works
[49] attempt to utilize manifold Mixup [50] to enhance the pretraining
process. However, these works only directly apply existing data augmentation
methods to FSL. As we will discuss below, our PatchMix, rather than acting as
data regularization, is exclusively effective at removing spurious
correlation for FSL, which cannot be achieved by other data augmentation
methods; see the comparison between CutMix and our PatchMix in Sec. 3.3 in
terms of the identification of causal features.
Causal Learning. Due to the invariance property of causal relations,
increasing attention has been paid to the intersection of causal inference
and machine learning; see [51, 52, 53, 15, 54, 55] for out-of-domain
generalization by removing the confounding bias. These works target learning
causal semantic features for better generalization, in the framework of the
_Structural Causal Model_ (SCM) pioneered by Judea Pearl [56, 57]. Although
it is common for FSL to have distributional shifts between training and test
data due to sampling bias in data collection, few attempts have been made to
address this issue. The work most related to ours is IFSL [11], which
proposed to remove the bias from pre-trained knowledge by considering an
intervened predictor, in the scenario where a pre-training step is adopted.
Our departure here is to leverage the SCM to explicitly model the spurious
correlation between causal and non-causal features, which can arise from
sample selection bias even without a pre-training stage. With such a model,
we theoretically show that our PatchMix is guaranteed to identify only the
causal semantic features during learning. The empirical comparison with IFSL
is in Sec. 4.3.
## 3 Methodology
Problem Formulation. We formulate few-shot learning in the meta-learning
paradigm: the FSL model is learned via episodes. Each episode imitates the
few-shot learning task: a few support and query instances are sampled from
several categories to train/evaluate the embedding model; the sampled support
set is fed to the learner to produce a classifier, and then the loss and
accuracy computed on the sampled query set are used in the training and
testing phases, respectively. In general, we have two sets of data, namely
the meta-train set
$\mathcal{D}_{s}=\left\\{\left(\mathbf{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{s}\right\\}$
and the meta-test set
$\mathcal{D}_{t}=\left\\{\left(\mathbf{I}_{i},y_{i}\right),y_{i}\in\mathcal{C}_{t}\right\\}$,
corresponding to the base and novel datasets, respectively. $\mathcal{C}_{s}$
and $\mathcal{C}_{t}$ ($\mathcal{C}_{s}\cap\mathcal{C}_{t}=\emptyset$)
represent the base and novel category sets. The goal of FSL is to train a
model on $\mathcal{D}_{s}$ that generalizes well to $\mathcal{D}_{t}$. By the
definition of the FSL task, the model can learn from few (_e.g._, one or
five) labelled data from each category of $\mathcal{C}_{t}$.
We follow prior methods [3, 4] in adopting an $N$-way $K$-shot meta-learning
strategy, where $N$ denotes the number of categories in one episode and $K$
the number of samples per category in the support set. Specifically, for each
episode $\mathcal{T}$, $N$ categories are randomly sampled from
$\mathcal{C}_{s}$ for training (or $\mathcal{C}_{t}$ for testing), with $K$
instances of each selected category constructing a support set
$\mathcal{S}=\left\\{\left(\mathbf{I}_{i}^{\mathrm{supp}},y_{i}^{\mathrm{supp}}\right)\right\\}$.
Similarly, we sample $M$ query samples per category to construct the query
set
$\mathcal{Q}=\left\\{\left(\mathbf{I}_{i}^{\mathrm{q}},y_{i}^{\mathrm{q}}\right)\right\\}$,
with $\mathcal{S}\cap\mathcal{Q}=\emptyset$. The episode can then be
represented as $\mathcal{T}=\left\\{\mathcal{S},\mathcal{Q}\right\\}$. In
total, each episode has $NK$ support images and $NM$ query images. Note that
while some methods, _e.g._ [3], use different shot numbers during training
and testing, we keep $K$ the same when training and evaluating our model.
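The episode construction described above can be sketched in a few lines; the
dataset format (a dict of class id to image list) and the function name are
our own assumptions:

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, m_query=15, seed=0):
    """Sample an N-way K-shot episode from a {class_id: [images]} dict.

    Returns disjoint support and query lists of (image, label) pairs,
    mirroring the episode construction of the meta-learning protocol:
    N classes, K support and M query samples per class.
    """
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for cls in classes:
        picks = rng.sample(dataset[cls], k_shot + m_query)
        support += [(img, cls) for img in picks[:k_shot]]
        query += [(img, cls) for img in picks[k_shot:]]
    return support, query
```

A 5-way 1-shot episode with 15 queries per class thus yields $NK=5$ support
and $NM=75$ query images, with no image shared between the two sets.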
This section is organized as follows. We first introduce a base model called
DProto in Sec. 3.1 to help define our PatchMix. We then introduce the whole
pipeline of our model, overviewed in Fig. 2. In particular, two training
stages are involved. In each stage, query image patches from training
episodes are exchanged via our proposed PatchMix (Sec. 3.2) to identify
causal features (Sec. 3.3). The features of mixed images are then used in
few-shot classification and in reconstruction via the correlation-guided
reconstruction (CGR) module (Sec. 3.4.1). In the second stage, each training
episode is further enhanced with hardness-aware PatchMix (Sec. 3.4.2), in
which images from similar categories are mixed to induce more hardness for
learning more discriminative features.
### 3.1 A Base Model by Prototypes
We introduce a simple model derived from ProtoNet [3]. Specifically, given a
few-shot episode $\mathcal{T}$, we first utilize a feature extractor network
$\phi$ to obtain the feature maps of all images in the episode as
$X=\phi(\mathbf{I}),\mathbf{I}\in\mathcal{T}$, where $X\in\mathbb{R}^{c\times
h\times w}$. The prototype of the $i$-th class is then calculated by taking
the spatial-wise average of all support features belonging to this category,
followed by the sample-wise average:
$p_{i}=\frac{1}{K}\sum_{j=1}^{NK}\bar{X}_{j}^{\mathrm{supp}}\cdot\mathbbm{1}(y_{j}^{\mathrm{supp}}=i),\quad i=1,\cdots,N$
(1)
where $\bar{X}_{j}^{\mathrm{supp}}\in\mathbb{R}^{c}$ is the spatial mean of
$X_{j}^{\mathrm{supp}}$. For each query feature map $X^{\mathrm{q}}$, we
build its prediction confidence map $\hat{X}^{\mathrm{q}}\in\mathbb{R}^{N\times
h\times w}$ as follows. For the $i$-th class and spatial position indexed by
$(s,t)$,
$\hat{X}^{\mathrm{q}}_{i,s,t}=\frac{<X^{\mathrm{q}}_{:,s,t},p_{i}>}{\|X^{\mathrm{q}}_{:,s,t}\|\|p_{i}\|}$,
which is the cosine similarity between the prototype of the $i$-th class and
the query feature vector at this position. Meanwhile, a global classifier
$f_{gc}$ consisting of a 1D convolutional layer is applied to
$X^{\mathrm{q}}$ to get a prediction map.
This baseline model is a ProtoNet modified in two aspects: (1) we guide the
model with patch-level labels instead of image-level ones to provide stronger
supervision; (2) we follow previous works [6] in adding a global classifier,
used in the training phase to enhance the supervision. Note that some
existing works propose other modifications of ProtoNet, such as learnable
normalization [58], attention modules [24] and more sophisticated distance
metrics [25]. We do not use these components, so that our model is simple
enough to highlight the effect of our proposed PatchMix. We refer to this
baseline model as DProto in the rest of this paper.
Figure 2: The training framework of our proposed model on a 5-way 1-shot
task. In the first stage of training, for each query image we sample a
gallery image, a random patch of which is inserted into the corresponding
position of the query image. After feature extraction, the classification
result is obtained by comparing the feature vector at each position of the
query feature map with the averaged support feature vector, from which the
classification loss is computed together with the mixed label map.
Meanwhile, the mixed query and gallery feature maps are processed by the
correlation-guided module to reorganize them into one feature map that can
restore the original query image. In the second stage, PatchMix is further
enhanced with the hardness-aware module, which controls the difficulty of
the mixture based on the distances among classes.
Training. For each few-shot input with inputs and outputs
$\\{X^{q},y^{q},\hat{X}^{q}\\}$ (where $y^{q}$ is a one-hot encoded vector
over the $|\mathcal{C}_{s}|$ classes), the objective function can be written
as follows:
$\mathcal{L}=\ell_{f}+\frac{1}{2}\ell_{g}$ (2)
$\ell_{g}=-(\mathrm{log}\,\textrm{softmax}(f_{gc}(X^{\mathrm{q}})))^{T}y^{q}$ (3)
$\ell_{f}=\frac{1}{hw}\sum_{s,t}\mathrm{log}\frac{e^{-\hat{X}^{\mathrm{q}}_{y^{\mathrm{q}},s,t}}}{\sum_{i=1}^{N}e^{-\hat{X}^{\mathrm{q}}_{i,s,t}}}$ (4)
where $\ell_{f}$ is the few-shot $N$-way classification loss, and $\ell_{g}$
is the global many-shot $|\mathcal{C}_{s}|$-way (e.g., 64 for miniImageNet)
classification loss. The feature extractor network $\phi$ is optimized with
$\mathcal{L}$ across randomly sampled episodes from the base data.
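Eqs. (2)-(4) can be transcribed directly; below is a minimal NumPy sketch
(function names are ours, and no numerical-stability tricks such as
log-sum-exp shifting are applied):

```python
import numpy as np

def patch_fewshot_loss(conf_map, label_map):
    """Few-shot patch-level loss l_f of Eq. (4): spatial average of the
    log-ratio of negated-confidence exponentials at the true patch label.
    Minimizing it pushes the true-class confidence up at every position.

    conf_map: (N, h, w) confidence map, label_map: (h, w) integer labels.
    """
    e = np.exp(-conf_map)                       # e^{-X_hat}, as in Eq. (4)
    true = np.take_along_axis(e, label_map[None], axis=0)[0]
    return np.log(true / e.sum(axis=0)).mean()  # average over positions (s, t)

def global_loss(logits, y_onehot):
    """Global classifier loss l_g of Eq. (3): negative log-softmax at y."""
    logp = logits - np.log(np.exp(logits).sum())
    return -(logp @ y_onehot)

def total_loss(conf_map, label_map, logits, y_onehot):
    """Combined objective of Eq. (2)."""
    return patch_fewshot_loss(conf_map, label_map) + 0.5 * global_loss(logits, y_onehot)
```

Note that Eq. (4) uses negated confidences inside the softmax: minimizing it
still increases the true-class confidence, since the loss decreases as
$\hat{X}^{\mathrm{q}}_{y^{\mathrm{q}},s,t}$ grows.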
Testing. In the testing phase, only the feature extractor $\phi$ is used; the
global classifier $f_{gc}$ is not engaged. For each query image in a testing
episode, we first obtain the confidence map $\hat{X}^{q}$ as above; the
image-level prediction is then obtained by spatially averaging the confidence
map.
### 3.2 Image Augmentation by PatchMix
In this part we formally present PatchMix as an augmentation method tailored
for few-shot learning. Concretely, for each query image
$\\{\mathbf{I}^{\mathrm{q}},y^{\mathrm{q}}\\}$, we randomly sample another
query image as the gallery image $\\{\mathbf{I}^{g},y^{g}\\}$, from which we
collect the information to switch. Next, we follow [12] to randomly select a
box with width and height $(\hat{w},\hat{h})$ sampled from
$\lambda\sim\mathrm{Unif}(0,1)$ (5)
$\hat{w}=W\sqrt{1-\lambda},\quad\hat{h}=H\sqrt{1-\lambda},$ (6)
where $W$ and $H$ are the width and height of the images. The center
coordinates $(c_{w},c_{h})$ are sampled from
$c_{w}\sim\mathrm{Unif}(\lceil{\hat{w}/2}\rceil,W-\lceil{\hat{w}/2}\rceil)$,
$c_{h}\sim\mathrm{Unif}(\lceil{\hat{h}/2}\rceil,H-\lceil{\hat{h}/2}\rceil)$.
Denote $w_{1},w_{2},h_{1},h_{2}$ as the left, right, lower and upper
boundaries of the box $(c_{w},c_{h},\hat{w},\hat{h})$. Then we can generate
the mask $M$ and the mixed image $\tilde{\mathbf{I}}^{\mathrm{q}}$ as:
$M_{i,j}=\left\\{\begin{array}[]{rcl}1&&{w_{1}\leq i\leq w_{2},h_{1}\leq j\leq h_{2}}\\\ 0&&{o.w.}\\\ \end{array}\right.$ (9)
$\tilde{\mathbf{I}}^{\mathrm{q}}=M\odot\mathbf{I}^{g}+(1-M)\odot\mathbf{I}^{\mathrm{q}}$ (10)
In contrast to CutMix [12], which adopts an image-level soft label as the
ground truth, we keep patch-level hard labels as the supervision information.
We first rescale the selected box to the feature-map size $w\times h$ as
$w_{1}^{{}^{\prime}},w_{2}^{{}^{\prime}}=\frac{w}{W}w_{1},\frac{w}{W}w_{2}$ (11)
$h_{1}^{{}^{\prime}},h_{2}^{{}^{\prime}}=\frac{h}{H}h_{1},\frac{h}{H}h_{2}$ (12)
Then the new label map is set as
$Y_{i,j}=\left\\{\begin{array}[]{rcl}y^{g}&&{w_{1}^{{}^{\prime}}\leq i\leq w_{2}^{{}^{\prime}},h_{1}^{{}^{\prime}}\leq j\leq h_{2}^{{}^{\prime}}}\\\ y^{q}&&{o.w.}\\\ \end{array}\right.$ (13)
Finally, the mixed image $\tilde{\mathbf{I}}^{\mathrm{q}}$ is fed into the
network, classified by the method of Sec. 3.1 and guided by the label map
$Y$. In the subsequent section, we explain why such a simple operation helps
learn causal features for better generalization.
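The whole PatchMix operation (Eqs. 5-13) fits in a short routine. The sketch
below is ours: the function name, array layout and the integer rounding of
the box coordinates are assumptions, approximating the continuous-uniform
sampling of the paper with integer draws:

```python
import numpy as np

def patchmix(img_q, img_g, y_q, y_g, feat_hw, rng):
    """Sketch of PatchMix: paste a random box from a gallery image into the
    query image (Eqs. 5-10) and build the patch-level hard label map at
    feature-map resolution (Eqs. 11-13).

    img_q, img_g: (H, W, C) arrays; feat_hw: (h, w) of the feature map;
    rng: a numpy Generator.
    """
    H, W = img_q.shape[:2]
    h, w = feat_hw
    lam = rng.uniform(0, 1)
    bw, bh = int(W * np.sqrt(1 - lam)), int(H * np.sqrt(1 - lam))  # Eq. (6)
    cw = rng.integers(int(np.ceil(bw / 2)), W - int(np.ceil(bw / 2)) + 1)
    ch = rng.integers(int(np.ceil(bh / 2)), H - int(np.ceil(bh / 2)) + 1)
    w1, w2 = cw - bw // 2, cw + bw // 2
    h1, h2 = ch - bh // 2, ch + bh // 2
    mixed = img_q.copy()
    mixed[h1:h2, w1:w2] = img_g[h1:h2, w1:w2]                      # Eqs. (9)-(10)
    label_map = np.full((h, w), y_q)                               # Eq. (13)
    label_map[int(h / H * h1):int(h / H * h2),                     # scaled box,
              int(w / W * w1):int(w / W * w2)] = y_g               # Eqs. (11)-(12)
    return mixed, label_map
```

By construction, the fraction of gallery pixels in the mixed image matches
the fraction of gallery labels in the label map.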
### 3.3 Causal Explanation of PatchMix
In this section, we explain the effectiveness of PatchMix in removing the
spurious correlation from the learned representation, in the framework of the
_Structural Causal Model_ (SCM) [56]. To describe the causal relations over
all variables, the SCM incorporates a directed acyclic graph (DAG)
$G=(\mathcal{V},\mathcal{E})$, such that $X\to Y\in\mathcal{E}$ means $X$ has
a direct causal effect on $Y$. Due to its ability to encode priors beyond
data, it has been increasingly leveraged to disentangle the _causal semantic
features_ [53, 15] from other features for out-of-distribution
generalization. In this spirit, we provide an ad-hoc analysis to explain how
the PatchMix operator can remove the spurious correlation and keep only the
causal features for prediction.
Figure 3: Causal graphs of models (a) with sample selection bias, (b) using
PatchMix for patches consisting of a single image, (c) using PatchMix for
patches consisting of two images, (d) using PatchMix for patches whose causal
features are completely removed, and (e) using CutMix. $S$ and $Z$
respectively denote the causal and non-causal features. $Y$ indicates the
class label. $C$ is an indicator for the train/test set. Red arrows mark the
source of the spurious correlation between the class label and non-causal
features. A patch image under PatchMix can be generated either from a single
image or from two images, as marked by the dotted blue boxes. For both types,
PatchMix removes the dependency induced by sampling bias. CutMix replaces the
original label $Y$ with an interpolated one $Y_{c}$, which is hence determined
by features from both images.
The data collection procedure can often incur sampling bias, inducing
dependency among originally independent concepts; _e.g._ , in the collected
dataset the dog is associated with the grass more often than with the river.
This causes a _spurious_ correlation between causal and non-causal features,
so that non-causal features are learned during training, which can hurt
performance.
We formulate this spurious correlation in Fig. 3 (a), in which $S$ and $Z$
respectively denote the causal features (_e.g._ , the texture and shape of the
dog) and the non-causal features (_e.g._ , features related to other objects
such as people) with respect to the outcome $Y$. The label $Y$ and the non-
causal features $Z$ jointly generate the sampling indicator $C$; _e.g._ , the
sampler collected the dog (outcome $Y$) on the grass ($Z$) in communities
where people walked their pets. Here $C=1$ (resp. $C=0$) means the
corresponding sample is in the training (resp. test) set. As $C$ is the
collider on the path $S\leftarrow Y\to C\leftarrow Z$, it induces a
correlation between $S$ and $Z$, _i.e._ , $S\not\perp Z\,|\,C$. Moreover, for
any sample $I$ we have $Y\not\perp Z\,|\,S,I,C=1$, so non-causal features are
learned during training. However, the model may fail to generalize since
$P(S,Z|C=1)$ need not equal $P(S,Z|C=0)$.
On the other hand, as shown in Fig. 3 (b), (c), (d), three cases of patch
images can arise after the PatchMix operation: i) in Fig. 3 (b), the patch
contains only causal features; ii) in Fig. 3 (c), parts of both causal and
non-causal features are replaced by those of another image; iii) in Fig. 3
(d), all causal information is removed.
For those patches in $\mathbf{I}^{\mathrm{q}}$ that contain causal
information, the case in Fig. 3 (b) naturally decreases the influence of non-
causal features: it is less correlated with the spurious features, as it
directly removes features from other patches that may be spurious. In the
case of Fig. 3 (c), the generated patch integrates $S$ and $Z$ randomly from
two different images, breaking the original sampling procedure and thus the
selection bias it induces. In this regard, the causal features $S$ no longer
depend on $Z$. Formally, we have $Y\perp Z\,|\,S,I$ for these patches in the
training set, making it possible to eliminate the non-causal features if the
model is trained well.
Note that our PatchMix can also handle the extreme case in which some
patches' causal features are completely removed, corresponding to the graph
in Fig. 3 (d). Theoretically, PatchMix can still reduce the correlation
between spurious and causal features. Specifically, under PatchMix these
patches also randomly mix in information from other patches; the patterns of
the original non-causal features are broken and randomly replaced, so the
correlations between these non-causal features and $Y$ no longer exist. In
this way, compared to vanilla patch-level training without any mixing
strategy (_i.e._ , the base model) or with other mixing strategies, our
PatchMix enforces the model to pay more attention to causal features and
removes the spurious correlation with non-causal features. This is verified
by the noticeable improvement of our method over the base models in Tab. I
and Tab. V, together with the interpretable visualization results in Fig. 7.
###### Theorem 3.1 (Disentangling Causal Features).
Suppose the neural network parameter is composed of two parts
$(\psi,\theta)$, where $\psi:\mathcal{I}\to\mathbb{R}^{\mathrm{dim}(S)}$
extracts a representation $\psi(I)$ from the image, followed by $\theta$ for
predicting the label. Denote by $p^{o}(\psi,\theta)(y|I)$ and
$p^{\mathrm{patch}}(\psi,\theta)(y|I)$ the model trained on the original data
(Fig. 3 (a)) and on the data after PatchMix (Fig. 3 (b),(c),(d)),
respectively. Assume that the structural equation for the image $I$, _i.e._ ,
$I\leftarrow f_{I}(S,Z)$, is injective. Then, if both
$p^{o}(\psi,\theta)(y|I)$ and $p^{\mathrm{patch}}(\psi,\theta)(y|I)$ are
trained to perfectly fit the ground-truth distribution $p^{*}(y|I)$, we have:
* •
In $p^{o}(\psi,\theta)(y|I)$, the representation $\psi^{o}(I)$ depends on the
non-causal features $Z$.
* •
In $p^{\mathrm{patch}}(\psi,\theta)(y|I)$, the set of parameters for which
$\psi^{\mathrm{patch}}(I)$ depends on $Z$ has Lebesgue measure 0.
###### Remark 3.2.
The injectivity assumption on $f_{I}$ is widely adopted in the literature: in
(variational) auto-encoders [59], since it has been empirically verified that
an image can be recovered almost perfectly from its latent embedding; in
causal inference [60, 61] for identifiability; and in few-shot learning [62].
###### Proof.
For both $p^{o}(\psi,\theta)(y|I)$ and $p^{\mathrm{patch}}(\psi,\theta)(y|I)$,
we have $Y\perp I\,|\,\psi(I)$, since $\psi(I)$ blocks all paths in the neural
network from the input $I$ to the output $Y$, and the predicted label agrees
with the ground-truth label $Y$ because both models equal $p^{*}(y|I)$.
According to Fig. 3 (a), we have $Y\not\perp I\,|\,S$; therefore, if
$\psi^{o}(I)$ did not depend on $Z$, _i.e._ , $\psi^{o}(I)=h(S)$ for some $h$,
it would fail to make $Y$ and $I$ conditionally independent. On the other
hand, it is sufficient for $\psi^{\mathrm{patch}}(I)$ to depend only on $S$ to
make $Y\perp I\,|\,\psi^{\mathrm{patch}}(I)$. Suppose
$\psi^{\mathrm{patch}}(I)=h(S,Z)$ for some $h$. For a patch from a single
whole image (marked by the dotted blue box in Fig. 3 (b)), the representation
can only contain the causal features $S$, as the other, spuriously correlated
features are removed. To show that $h$ is independent of $Z$ for a patch
mixing two images (Fig. 3 (c)), recall that
$p^{\mathrm{patch}}_{\theta}(y|\psi^{\mathrm{patch}}(I))=p^{*}(y|[f_{I}^{-1}]_{\mathcal{S}}(I))=p^{*}(y|s)$
with $I=f_{I}(s,z)$. By the injectivity of $f_{I}$, for each $s$ and any two
different $z_{1},z_{2}$ we have
$p^{\mathrm{patch}}_{\theta}(y|h(s,z_{1}))=p^{\mathrm{patch}}_{\theta}(y|h(s,z_{2}))=p^{*}(y|s)$,
so $p^{\mathrm{patch}}_{\theta}(y|h(s,\cdot))$ does not depend on $z$. Since
the set of parameters $\theta$ that vanish on the $Z$-component of $h(S,Z)$
has Lebesgue measure 0, $h$ depending on $Z$ would leave an unblocked path
from $Z$ to $Y$, a contradiction. Therefore, the set on which $h$ depends on
the non-causal feature $Z$ has Lebesgue measure 0. For Fig. 3 (d), since the
original non-causal features $Z$ are randomly mixed with other features, the
correlation between $Z$ and $Y$ is broken. ∎
This conclusion means that, generically, the learned representation
eliminates the information of the non-causal features. It explains the
benefit of PatchMix in removing non-causal features during learning, by
breaking the dependency between the causal features and the non-causal ones.
Comparison with CutMix. The difference between our method and CutMix is that
the latter assigns a soft label ($Y_{c}$) to the mixed image (as shown in
Fig. 3 (e)), which is determined by the labels of both images. Such a soft
label $Y_{c}$ is therefore related to features from both images. In this
regard, $Y\not\perp Z\,|\,S,I$ for each $I$, which prevents the
disentanglement between $S$ and $Z$. Hence this learning mechanism is not
endowed with the disentanglement ability that helps transfer to novel
categories, as summarized in the following:
###### Theorem 3.3.
Denote by $p^{\mathrm{cut}}(\psi,\theta)(y|I)$ the model trained on the data
after CutMix (Fig. 3 (e)). Under the same injectivity assumption on $f_{I}$,
the representation $\psi^{\mathrm{cut}}(I)$ depends on the non-causal
features $Z$ if $p^{\mathrm{cut}}(\psi,\theta)(y|I)$ is trained to equal the
ground-truth distribution $p^{*}(y|I)$.
###### Proof.
Similar to the proof of Thm. 3.1, we have $Y_{c}\perp
I\,|\,\psi^{\mathrm{cut}}(I)$, while in Fig. 3 (e) we have $Y\not\perp
Z\,|\,S,I$. Thus it is necessary for $\psi^{\mathrm{cut}}(I)$ to depend on
both $S$ and $Z$. Further, if the spurious correlation is strong enough, it
is impossible to disentangle $S$ from $Z$, as they play nearly symmetric
roles in affecting $Y$ and generating the input $I$. ∎
Relationship with neural collapse. As introduced in Sec. 1, a recent work
[14] studies neural collapse, whereby the within-class variance of features
from a novel category is small enough to separate different categories; it
claims that the neural collapse property transfers well to novel data given
sufficient base training data. Intuitively, the learned disentangled causal
features correlate better with novel class labels than the non-causal
features do. Therefore, the model trained with PatchMix is expected to
exhibit better neural collapse on novel data, since PatchMix helps break up
the dependency between causal and non-causal features during training.
Figure 4: The module for correlation-guided reconstruction. Each position of
the mixed query and gallery feature maps is compared with the known area from
the original image. The similarity is then normalized with Gumbel-softmax to
sample an action map that is used to select the patches.
### 3.4 Discriminative Feature Enhancement
Although the non-causal features can be eliminated, the learned causal
features are not necessarily discriminative enough for classification between
similar classes. For example, the shape feature is causally related to both
the dog and the wolf; however, the features learned for the two classes may
be too similar to separate them. To make the causal features more
discriminative, we further propose two modules, _i.e._ , the _correlation-
guided reconstruction_ module in Sec. 3.4.1 and the _hardness-aware PatchMix_
module in Sec. 3.4.2, for instance discrimination and for classifying between
similar classes, respectively.
#### 3.4.1 Correlation-guided Reconstruction Module
Inspired by recent works on self-supervised learning [63], we propose a novel
module that takes advantage of the PatchMix procedure to contrast image
features from different categories. Specifically, given a query image
$\mathbf{I}^{\mathrm{q}}$, a gallery image $\mathbf{I}^{\mathrm{g}}$ and the
corresponding mixture mask $M$, we obtain the mixed image
$\tilde{\mathbf{I}}^{\mathrm{q}}$ and its feature map
$\tilde{X}^{\mathrm{q}}$. Meanwhile, by exchanging the roles of
$\mathbf{I}^{\mathrm{q}}$ and $\mathbf{I}^{\mathrm{g}}$ and using $1-M$ as
the mask, we obtain the image $\tilde{\mathbf{I}}^{\mathrm{g}}$ and feature
map $\tilde{X}^{\mathrm{g}}$, which contain the counterpart of
$\tilde{\mathbf{I}}^{\mathrm{q}}$. In this way, $\mathbf{I}^{\mathrm{q}}$ and
$\mathbf{I}^{\mathrm{g}}$ can be properly reconstructed by selecting the
correct patches from these two feature maps. The selection process relies on
successfully telling apart patch-level features from different images.
Therefore, we make the reconstruction of both query and gallery images the
goal of this module.
For the $(i,j)$-th patch in $\tilde{\mathbf{I}}^{\mathrm{q}}$, we evaluate
the confidence of it belonging to $\mathbf{I}^{\mathrm{q}}$ and
$\mathbf{I}^{\mathrm{g}}$ as

$$\alpha^{\mathrm{q}}_{i,j}=\sum_{\substack{m,n=1,1\\ (m,n)\neq(i,j)}}^{h,w}M_{m,n}\cdot\langle\tilde{f}_{m,n}^{\mathrm{q}},\tilde{f}_{i,j}^{\mathrm{q}}\rangle\tag{14}$$

$$\alpha^{\mathrm{g}}_{i,j}=\sum_{\substack{m,n=1,1\\ (m,n)\neq(i,j)}}^{h,w}M_{m,n}\cdot\langle\tilde{f}_{m,n}^{\mathrm{g}},\tilde{f}_{i,j}^{\mathrm{q}}\rangle\tag{15}$$

In other words, we compare each patch to the known patches from
$\mathbf{I}^{\mathrm{q}}$ except itself. The confidence of belonging to
$\mathbf{I}^{\mathrm{g}}$ can be calculated in the same way by replacing $M$
and $\tilde{f}_{m,n}^{\mathrm{q}}$ with $1-M$ and
$\tilde{f}_{m,n}^{\mathrm{g}}$, respectively. We then select these patches
according to the confidence,

$$\bar{X}^{\mathrm{q}}_{i,j}=\hat{\alpha}^{\mathrm{g}}_{i,j}\tilde{X}_{i,j}^{\mathrm{g}}+\hat{\alpha}^{\mathrm{q}}_{i,j}\tilde{X}_{i,j}^{\mathrm{q}}\tag{16}$$

$$\hat{\alpha}^{\mathrm{g}}_{i,j}=\sigma(\alpha^{\mathrm{g}}_{i,j}/T)\tag{17}$$

$$\hat{\alpha}^{\mathrm{q}}_{i,j}=\sigma(\alpha^{\mathrm{q}}_{i,j}/T)\tag{18}$$

where $T$ is a temperature and $\sigma$ is a normalization function
implemented as Gumbel-softmax [64]. After obtaining the merged features
$\bar{X}^{\mathrm{q}},\bar{X}^{\mathrm{g}}$, we process them with a decoder,
whose structure is a reversal of the feature extractor, to generate the
reconstructed images
$\bar{\mathbf{I}}^{\mathrm{q}},\bar{\mathbf{I}}^{\mathrm{g}}$.
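The patch-confidence score of Eq. (14) can be sketched as below (a slow but
explicit NumPy version written for clarity, not the authors' code; the
feature map `feat` and mask `M` are assumed inputs, and the Gumbel-softmax
normalization of Eqs. (16)-(18) is omitted):

```python
import numpy as np

def patch_confidence(feat, M):
    """alpha_{i,j} of Eq. (14): sum of inner products between patch (i, j)
    and the known patches marked by M == 1, excluding (i, j) itself.

    feat: (h, w, d) patch features; M: (h, w) binary mask.
    """
    h, w, _ = feat.shape
    alpha = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for m in range(h):
                for n in range(w):
                    if (m, n) != (i, j):
                        alpha[i, j] += M[m, n] * float(feat[m, n] @ feat[i, j])
    return alpha
```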
The objective function of this module can be formulated as

$$L_{CR}=L_{sel}+\lambda L_{rec}\tag{19}$$

$$L_{sel}=\mathrm{CE}(\hat{\alpha}^{\mathrm{q}},M)+\mathrm{CE}(\hat{\alpha}^{\mathrm{g}},1-M)\tag{20}$$

$$L_{rec}=\|\bar{\mathbf{I}}^{\mathrm{q}}-\mathbf{I}^{\mathrm{q}}\|_{1}+\|\bar{\mathbf{I}}^{\mathrm{g}}-\mathbf{I}^{\mathrm{g}}\|_{1}\tag{21}$$
It is noteworthy that, unlike the common practice in conditional image
generation, our CGR does not directly concatenate $\bar{X}^{\mathrm{q}}$ and
$\bar{X}^{\mathrm{g}}$ as input, nor do we adopt techniques such as adaptive
normalization [65]. Compared with reconstruction-based methods, which were
empirically shown [66] to be unhelpful for classification, we additionally
separate the features of different classes by successfully distinguishing
query images from their gallery counterparts, which have different labels.
Without the correlation guidance, the task easily degenerates to image
repair, _e.g._ , image inpainting, which is not necessarily helpful for few-
shot classification.
#### 3.4.2 Hardness-aware PatchMix Module
Since all query images are mixed with randomly sampled images from the same
episode, the vanilla PatchMix lacks control over difficulty, so the learned
causal features may not be discriminative enough, especially between similar
classes. To resolve this problem, we propose mixing images from similar
classes, where the similarity among base categories is estimated by a coarse
prediction model trained with PatchMix in the first stage.
Specifically, given an episode with support set $\mathcal{S}$ and a trained
feature extractor $\phi$, we first obtain the prototype of each category
$p_{i},i=1,\cdots,N$ in the same way as Eq. 1. The similarity between the
$i$-th and $j$-th classes is then represented by the cosine similarity
between their prototypes:
$$\mathrm{sim}(i,j)=\frac{\langle p_{i},p_{j}\rangle}{\|p_{i}\|\|p_{j}\|}\tag{22}$$
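Eq. (22) amounts to normalizing the prototypes and taking inner products; a
minimal sketch (the prototype matrix `protos` is an assumed input):

```python
import numpy as np

def prototype_similarity(protos):
    """Pairwise cosine similarity of Eq. (22) between N class prototypes,
    given as an (N, d) matrix with one prototype per row."""
    normed = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    return normed @ normed.T
```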
In this way, we can treat the relationship among the $N$ categories as a
complete $N$-node graph whose edge weights are the negative similarities of
the corresponding prototypes. We then solve a Travelling Salesman Problem
(TSP) with a random class as the starting point, which yields the path with
the lowest cost and thus visits in sequence classes that are easily confused
owing to high similarity. Each node is assigned its parent node on the path
as its gallery class, followed by the PatchMix training scheme introduced
before. The objective function of hardness-aware PatchMix can be written as
$$\mathcal{L}_{ha}=\mathcal{L}+\ell_{kd}\tag{23}$$

$$\ell_{kd}=\frac{1}{NQ}\sum_{i=1}^{NQ}\psi\big(F(\mathbf{I}_{i}^{\mathrm{q}}),\hat{F}(\mathbf{I}_{i}^{\mathrm{q}})\big)\tag{24}$$

where $\mathcal{L}$ is defined in Eq. 2 and $\psi$ is a knowledge
distillation loss [67], implemented as MSE or KL divergence.
In all, the workflow of our algorithm is separated into two stages, as stated
in Alg. 1: training the baseline model of Sec. 3.1 together with PatchMix and
CGR in the first stage, and hardness-aware PatchMix in the second stage.
Algorithm 1 Training FSL Model with PatchMix
0: $\mathcal{D}_{s}$: meta-train set
1: # coarse PatchMix
2: while not converge do
3: Sample batch of episodes $B$ from $\mathcal{D}_{s}$
4: for all episode do
5: for all query sample $\mathbf{I}_{i}^{\mathrm{q}}$ do
6: Apply PatchMix to $\mathbf{I}_{i}^{\mathrm{q}}$ according to Eq. 9
7: end for
8: end for
9: Calculate objective function $\mathcal{L}$ as in Eq. 2
10: Update model parameters according to $\mathcal{L}$
11: end while
12:
13: # hardness-aware PatchMix
14: while not converge do
15: Sample batch of episodes $B$ from $\mathcal{D}_{s}$
16: for all episode do
17: Calculate class-wise similarity according to Eq. 22
18: Assign gallery classes
19: for all query sample $\mathbf{I}_{i}^{\mathrm{q}}$ do
20: Apply PatchMix to $\mathbf{I}_{i}^{\mathrm{q}}$ according to Eq. 9
21: end for
22: end for
23: Calculate objective function $\mathcal{L}$ as in Eq. 23
24: Update model parameters according to $\mathcal{L}$
25: end while
### 3.5 Adaptation to Unsupervised FSL
In few-shot learning, it is sometimes hard to obtain labeled samples,
especially when the labeling cost is high. To address this problem, some
recent works [13, 68] study unsupervised few-shot learning. These works first
train the networks with unsupervised learning methods and then use the
trained networks for testing. For instance, CACTUs [13] uses unsupervised
learning methods such as DeepCluster [69] to obtain a representation vector
for each sample. These representation vectors are then used to cluster the
samples so that each sample is assigned a pseudo label. After obtaining the
pseudo labels, standard few-shot learning methods can be applied to train the
networks.
Following the standard practice above, this section further extends PatchMix
to unsupervised FSL. Specifically, we adopt CACTUs-ProtoNet as our baseline.
There are two main components for CACTUs, _unsupervised pretraining_ and
_pseudo label training_. Our adaptation considers using PatchMix for both
parts.
(1) Unsupervised pretraining. We take into account recent progress in
unsupervised learning and employ a modified version of MoCo [63] based on our
PatchMix. In the original MoCo, a batch of sampled images is augmented to
generate two sets of images, called the key images and the query images. For
each query image $\mathbf{I}^{\mathrm{q}}_{i}$, the key image
$\mathbf{I}^{key}_{i}$ that shares the same image before augmentation is its
positive sample, and the other key images are negative samples. These images
are fed into the networks to obtain spatially averaged feature vectors, where
the vector of the $i$-th key image is denoted as
$\bar{X}_{i}^{key}\in\mathbb{R}^{N}$ and the vector of the $i$-th query image
as $\bar{X}_{i}^{\mathrm{q}}\in\mathbb{R}^{N}$. We use
$\hat{X}^{\mathrm{q}}_{ij}=\frac{\langle\bar{X}_{i}^{\mathrm{q}},\bar{X}_{j}^{key}\rangle}{\|\bar{X}_{i}^{\mathrm{q}}\|\|\bar{X}_{j}^{key}\|}$
to denote the cosine similarity between two features. For each batch of query
images, the loss takes the form

$$\mathcal{L}_{MoCo}=\sum_{i}-\log\frac{e^{\hat{X}^{\mathrm{q}}_{ii}/T}}{\sum_{j}e^{\hat{X}^{\mathrm{q}}_{ij}/T}}\tag{25}$$

where $T$ is the temperature.
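The InfoNCE loss of Eq. (25) can be sketched as follows (a plain NumPy
illustration without the momentum encoder and queue that MoCo uses in
practice):

```python
import numpy as np

def moco_loss(q, k, T=0.07):
    """InfoNCE of Eq. (25): the positive of query i is key i; all other
    keys in the batch serve as negatives. q, k: (B, d) feature matrices."""
    qn = q / np.linalg.norm(q, axis=1, keepdims=True)
    kn = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = qn @ kn.T / T                      # cosine similarity / T
    log_norm = np.log(np.exp(logits).sum(axis=1))
    return float(np.sum(log_norm - np.diag(logits)))
```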
To implement our modified version, which we call PatchMoCo, we first consider
a dense variant of MoCo. We remove the global average pooling layer for the
query features and obtain the spatial feature
$X_{i}^{\mathrm{q}}\in\mathbb{R}^{N\times h\times w}$, whose entry at spatial
position $(s,t)$ is denoted as $X_{i,s,t}^{\mathrm{q}}\in\mathbb{R}^{N}$.
Similarly, the cosine similarity for each position $(s,t)$ is
$\hat{X}^{\mathrm{q}}_{istj}=\frac{\langle X_{ist}^{\mathrm{q}},\bar{X}_{j}^{key}\rangle}{\|X_{ist}^{\mathrm{q}}\|\|\bar{X}_{j}^{key}\|}$.
The altered loss has the form

$$\mathcal{L}_{d}=\sum_{i}\sum_{s,t}-\log\frac{e^{\hat{X}^{\mathrm{q}}_{isti}/T}}{\sum_{j}e^{\hat{X}^{\mathrm{q}}_{istj}/T}}\tag{26}$$
Afterwards, we add our PatchMix to the unsupervised pretraining. Patches are
mixed in the same way as in Sec. 3.2, and we modify the loss function
accordingly. For each image, the patches switched in from another image are
negative samples; since we cannot find positive samples for them, we exclude
them from the final loss. For convenience, we define a mask
$K^{\mathrm{q}}_{i}\in\\{0,1\\}^{h\times w}$, where 1 marks an unswitched
patch and 0 a switched patch. The new loss function is

$$\mathcal{L}_{PMC}=\sum_{i}\sum_{s,t}-K^{\mathrm{q}}_{ist}\log\frac{e^{\hat{X}^{\mathrm{q}}_{isti}/T}}{\sum_{j}e^{\hat{X}^{\mathrm{q}}_{istj}/T}}\tag{27}$$
(2) Pseudo-label training: After obtaining pseudo labels by clustering the
features from unsupervised pretraining, each training sample can be used as
in supervised FSL, and PatchMix is applied during the training phase as in
Sec. 3.2.
## 4 Experiments
### 4.1 Datasets and setting
Datasets. We mainly adopt four datasets for our experiments. i) The
miniImageNet dataset [70], containing 600 images in each of 100 categories,
is a small subset of ImageNet. We follow the split in [16], where 64, 16 and
20 categories are used for the train, validation and test sets, respectively.
ii) The tieredImageNet dataset [71] is a larger subset of the ILSVRC-12
dataset. It consists of 34 super-categories with 779,165 images in total,
which are further divided into 608 categories: 351 for training, 97 for
validation and 160 for testing. iii) CIFAR-FS [72] divides CIFAR-100 into 64
meta-train, 16 meta-val and 20 meta-test categories. iv) CUB [73] is a bird
dataset with 200 categories and 6,033 images in total; in few-shot learning,
100, 50 and 50 species are used for the training, validation and test sets,
respectively. Besides, Cars [74], Places [75] and Plantae [76] are also used
in the cross-domain setting, following [77]. Images in all datasets are
resized to $84\times 84$ before training and testing.
Experimental setup. Stochastic Gradient Descent (SGD) [78] with a weight
decay of 5e-4 and cosine learning rate decay [79] is used to optimize our
model. The initial learning rate is set to 0.15 for miniImageNet and
CIFAR-FS, and 0.05 for tieredImageNet. Random cropping, horizontal flipping
and color jittering are adopted for data augmentation during training, the
same as in CAN [6]. For all experiments, we test on 2000 episodes sampled
from the meta-test set. For correlation-guided reconstruction, $\lambda_{1}$
is set to 0.5 for all datasets, and $\lambda_{2}$ to 0.1 for tieredImageNet
and 0.25 for the other datasets. For hardness-aware PatchMix, we empirically
set the distillation function $\psi$ to MSE for 1-shot tasks and KL
divergence for 5-shot tasks. Models and code will be released.
### 4.2 Comparison with state-of-the-art methods
To extensively show the effectiveness of our method, we test PatchMix across
three commonly used settings, _i.e._ , single-domain, cross-domain and
unsupervised few-shot learning. For each setting, we compare our model with
recent state-of-the-art competitors and report the average accuracy. For the
supervised settings, we additionally report the 95% _confidence interval_
(CI).
Model | Backbone | 1-shot | 5-shot
---|---|---|---
ProtoNet [3] | Conv4 | 49.42$\pm$0.78 | 68.20$\pm$0.72
MatchingNet [70] | Conv4 | 43.56$\pm$0.84 | 55.31$\pm$0.73
RelationNet [4] | Conv4 | 50.44$\pm$0.82 | 65.32$\pm$0.70
MAML [5] | Conv4 | 48.70$\pm$1.75 | 63.11$\pm$0.92
Dynamic Few-shot [80] | Conv4 | 56.20$\pm$0.86 | 72.81$\pm$0.62
LEO [20] | WRN-28 | 61.76$\pm$0.08 | 77.59$\pm$0.12
PPA [81] | WRN-28 | 59.60$\pm$0.41 | 73.74$\pm$0.19
Robust dist++ [82] | WRN-28 | 63.28$\pm$0.62 | 81.17$\pm$0.43
wDAE [83] | WRN-28 | 61.07$\pm$0.15 | 76.75$\pm$0.11
CC+rot [84] | WRN-28 | 62.93$\pm$0.45 | 79.87$\pm$0.33
DC [41]∗ | WRN-28 | 67.96$\pm$0.45 | 83.45$\pm$0.31
FEAT [24] | WRN-28 | 65.10$\pm$0.20 | 81.11$\pm$0.14
TapNet [85] | Res-12 | 61.65$\pm$0.15 | 76.36$\pm$0.10
MetaOptNet [86] | Res-12 | 62.64$\pm$0.61 | 78.63$\pm$0.46
CAN [6] | Res-12 | 63.85$\pm$0.48 | 79.44$\pm$0.34
FEAT [24] | Res-12 | 66.78$\pm$0.20 | 82.05$\pm$0.14
E3BM [87] | Res-12 | 63.80$\pm$0.40 | 80.10$\pm$0.30
DSN-MR [88] | Res-12 | 64.60$\pm$0.72 | 79.51$\pm$0.50
Net-Cosine [89] | Res-12 | 63.85$\pm$0.81 | 81.57$\pm$0.56
FRN [30] | Res-12 | 66.45$\pm$0.19 | 82.83$\pm$0.13
Tian et al. [90] | Res-12 | 64.82$\pm$0.60 | 82.14$\pm$0.43
ConstNet [91] | Res-12 | 64.89$\pm$0.23 | 79.95$\pm$0.37
IEPT [29] | Res-12 | 67.05$\pm$0.44 | 82.90$\pm$0.30
MELR [28] | Res-12 | 67.40$\pm$0.43 | 83.40$\pm$0.28
DMF [92] | Res-12 | 67.76$\pm$0.46 | 82.71$\pm$0.31
DeepEMD [25] | Res-12 | 68.77$\pm$0.29 | 84.13$\pm$0.53
Base Model | Res-12 | 64.96$\pm$0.51 | 80.51$\pm$0.33
Ours | Res-12 | 69.38$\pm$0.46 | 84.14$\pm$0.30
Ours+DC | Res-12 | 69.75$\pm$0.44 | 84.88$\pm$0.30
TABLE I: 5-way few-shot accuracies with $95\%$ confidence interval on miniImageNet. * denotes results reproduced by us.

Model | tieredImageNet | CIFAR-FS
---|---|---
1-shot | 5-shot | 1-shot | 5-shot
CC+rot [84] | 70.53$\pm$0.51 | 84.98$\pm$0.36 | 75.38$\pm$0.31 | 87.25$\pm$0.21
DC [41]∗ | 74.05$\pm$0.48 | 88.30$\pm$0.32 | — | —
MetaOptNet [86] | 65.99$\pm$0.72 | 81.56$\pm$0.53 | 72.11$\pm$0.96 | 84.32$\pm$0.65
CAN [6] | 69.89$\pm$0.51 | 84.23$\pm$0.37 | — | —
FEAT [24] | 70.80$\pm$0.23 | 84.79$\pm$0.16 | — | —
E3BM [87] | 71.20$\pm$0.40 | 85.30$\pm$0.30 | — | —
DSN-MR [88] | 67.39$\pm$0.82 | 82.85$\pm$0.56 | — | —
FRN [30] | 72.06$\pm$0.25 | 86.89$\pm$0.14 | — | —
Tian et.al. [90] | 71.52$\pm$0.69 | 86.03$\pm$0.49 | 73.90$\pm$0.80 | 86.90$\pm$0.50
ConstNet [91] | — | — | 75.40$\pm$0.20 | 86.80$\pm$0.20
IEPT [29] | 72.24$\pm$0.50 | 86.73$\pm$0.34 | — | —
MELR [28] | 72.14$\pm$0.51 | 87.01$\pm$0.35 | — | —
DMF [92] | 71.89$\pm$0.52 | 85.96$\pm$0.35 | — | —
DeepEMD [25] | 74.29$\pm$0.32 | 87.08$\pm$0.60 | — | —
Ours | 73.48$\pm$0.51 | 87.35$\pm$0.32 | 77.87$\pm$0.49 | 88.94$\pm$0.32
Ours+DC | 75.06$\pm$0.48 | 88.92$\pm$0.32 | — | —
TABLE II: 5-way few-shot accuracies with $95\%$ confidence interval on
tieredImageNet and CIFAR-FS. * denotes results reproduced by us.
Single-domain results. We adopt three datasets, miniImageNet, tieredImageNet
and CIFAR-FS, in the single-domain setting, where the model is trained on the
meta-train set of each dataset and tested on the corresponding meta-test set.
The results are shown in Tab. I and Tab. II.
In both settings on miniImageNet, our model surpasses all competitors that
share the same backbone with us. In particular, we improve over DeepEMD v2
[25] (_i.e._ , the current state-of-the-art method) by 0.61% in the 1-shot
setting. Besides, our method also performs better (_e.g._ , outperforming
FEAT by 4.28% on 1-shot and 3.03% on 5-shot) even compared with methods using
WRN-28, which is much larger and hence has more capacity than Res-12. We
observe that the improvement is smaller in the 5-shot setting, which may be
due to extra parameters (_e.g._ , the attention module in FEAT and the meta-
filter in DMF) or specifically designed but computationally expensive
algorithms (_e.g._ , the Earth Mover's Distance in DeepEMD) that perform
better with more support data. In contrast, our PatchMix, whose testing
procedure is similar to that of the basic ProtoNet, is more effective and
efficient in terms of prediction power and implementation. This can be
attributed to the better representation (more specifically, the causally
semantic features, as will be shown in Sec. 4.3) for metric-based
classification. Meanwhile, our model also enjoys a tight confidence interval,
indicating robustness across episodes with different categories and
difficulty.
On tieredImageNet and CIFAR-FS the results are largely consistent with those
on miniImageNet (_e.g._ , our model leads by 2.47% and 1.69% in 1-shot and
5-shot on CIFAR-FS). Besides, our method can be further improved when
combined with orthogonal methods, as shown by the gain when combined with DC
(_i.e._ , Ours+DC), which enables data augmentation in both the training and
testing phases. In particular, this combined method achieves the best
performance on tieredImageNet111Note that as the pre-trained weights for
tieredImageNet are not released in DC, we reproduce it with the official DC
code. We also adopt the weights provided in S2M2 [49] (link), strictly
following [41]..
Cross-domain results. In this setting, we follow previous methods to train
our model on the meta-train set of miniImageNet and test it on the meta-test
sets of CUB, Cars, Places and Plantae. Since these are fine-grained datasets,
successful classification mainly requires the model to concentrate on details
that may not be useful on miniImageNet; collapsed base-class features can
therefore do more serious damage in this setting. As shown in Tab. III, our
model outperforms the best competitor by up to 5.82% on 1-shot and 3.14% on
5-shot across these datasets. This means that, when trained on miniImageNet,
our model learns not only domain-specific information but also knowledge that
is useful in other domains.
Unsupervised few-shot learning results. In this setting, we use miniImageNet
as the target dataset. The available information is the same as in the
supervised single-domain FSL setting, except that no labels are assigned to
the base-category images. We compare our method in Tab. IV with CACTUs [13]
and UMTRA [68]. As mentioned in Sec. 3.5, our model is built on CACTUs-
ProtoNet with a changed clustering method, which yields improvements of 1.80%
and 2.56% on the 1-shot and 5-shot tasks, respectively. This superiority
further reflects the efficacy of our proposed PatchMix.
Model | CUB | Cars | Places | Plantae
---|---|---|---|---
1-shot | 5-shot | 1-shot | 5-shot | 1-shot | 5-shot | 1-shot | 5-shot
RelationNet [4] | 42.44$\pm$0.77 | 57.77$\pm$0.69 | 29.11$\pm$0.60 | 37.33$\pm$0.68 | 48.64$\pm$0.85 | 63.32$\pm$0.76 | 33.17$\pm$0.64 | 44.00$\pm$0.60
GNN [93] | 45.69$\pm$0.68 | 62.25$\pm$0.65 | 31.79$\pm$0.51 | 44.28$\pm$0.63 | 53.10$\pm$0.80 | 70.84$\pm$0.65 | 35.60$\pm$0.56 | 52.53$\pm$0.59
LFT [77] | 47.47$\pm$0.75 | 66.98$\pm$0.68 | 31.61$\pm$0.53 | 44.90$\pm$0.64 | 55.77$\pm$0.79 | 73.94$\pm$0.67 | 35.95$\pm$0.58 | 53.85$\pm$0.62
LRP [94] | 48.29$\pm$0.51 | 64.44$\pm$0.48 | 32.78$\pm$0.39 | 46.20$\pm$0.46 | 54.83$\pm$0.56 | 74.45$\pm$0.47 | 37.49$\pm$0.43 | 54.46$\pm$0.46
Ours | 49.47$\pm$0.45 | 68.90$\pm$0.40 | 33.78$\pm$0.37 | 46.78$\pm$0.43 | 60.65$\pm$0.48 | 77.59$\pm$0.38 | 40.22$\pm$0.39 | 56.01$\pm$0.37
TABLE III: 5-way cross-domain few-shot accuracies with $95\%$ confidence interval on CUB, Cars, Places and Plantae. Model | Clustering | miniImageNet
---|---|---
1-shot | 5-shot
kNN | BiGAN | 25.56 | 31.10
linear | 27.08 | 33.91
MLP | 22.91 | 29.06
CACTUs-MAML [13] | 36.24 | 51.28
CACTUs-ProtoNets [13] | 36.62 | 50.16
kNN | DeepCluster | 28.90 | 42.25
linear | 29.44 | 39.79
MLP | 29.09 | 39.67
CACTUs-MAML [13] | 39.90 | 53.97
CACTUs-ProtoNets [13] | 39.18 | 53.36
CACTUs-ProtoNets [13]∗ | MoCo | 39.18 | 53.36
UMTRA [68] | N/A | 39.93 | 50.73
Ours | PatchMoCo | 41.73 | 55.92
TABLE IV: 5-way few-shot accuracies on miniImageNet in the unsupervised
setting. * denotes results reproduced by us.
(a) Augment | K=1 | K=5
---|---|---
CutMix | 65.91 | 79.10
Mixup | 64.69 | 79.08
Man. Mixup | 64.53 | 77.95
PatchMix | 68.34 | 83.16
(b) CAN | K=1 | K=5
---|---|---
base | 65.05 | 81.42
+CutMix | 65.27 | 79.53
+PatchMix | 67.77 | 82.54
+Imp. PM | 67.79 | 82.76
(c) Recons | K=1 | K=5
---|---|---
w/o | 68.34 | 83.16
vanilla | 68.59 | 83.25
Softmax | 68.81 | 83.78
CGR | 69.38 | 84.14
(d) Distill | K=1 | K=5
---|---|---
vanilla | 68.61 | 83.46
w/o H | 68.82 | 83.62
local | 69.07 | 83.69
global | 69.38 | 84.14
(e) Unsup. | K=1 | K=5
---|---|---
DC | 40.69 | 54.41
Ours | 41.73 | 55.92
(f) Method | K=1 | K=5
---|---|---
IFSL | 64.78 | 80.08
PatchMix | 68.34 | 83.16
(g) Strategy | K=1 | K=5
---|---|---
mix+ori | 68.06 | 82.95
all mix | 68.61 | 83.46
(h) Grid size | K=1 | K=5
---|---|---
$6\times 6$ | 67.95 | 82.69
$11\times 11$ | 68.61 | 83.46
TABLE V: Ablation studies on miniImageNet 5-way tasks. We show 1-shot (K=1) and 5-shot (K=5) results. (a) PatchMix on our baseline: we compare PatchMix with other commonly used data augmentation methods based on our baseline model. (b) Plug-in: we apply our proposed method to CAN [6]. (c) CGR: we compare different instantiations of reconstruction as an auxiliary task for our CGR module. (d) Hardness: we try different implementations of the second-stage training, including vanilla distillation, PatchMix, and PatchMix with two kinds of hardness. (e) Unsupervised: we test the proposed substitution for DeepCluster in unsupervised FSL. (f) Comparison with IFSL: we compare our model with IFSL, which also explores causal inference in FSL. (g) Mixture strategy: we try different strategies of using mixed images. (h) Pooling: we compare models trained with and without the last pooling layer in the Res-12 backbone.
Model | tieredImageNet 1-shot | tieredImageNet 5-shot | CIFAR-FS 1-shot | CIFAR-FS 5-shot
---|---|---|---|---
w/o | 72.28 | 86.24 | 76.57 | 88.15
vanilla | 72.13 | 86.71 | 76.42 | 88.21
Softmax | 72.54 | 86.78 | 77.06 | 87.27
CGR | 73.48 | 87.35 | 77.87 | 88.94
vanilla | 72.56 | 86.39 | 76.97 | 87.83
w/o H | 72.78 | 86.67 | 77.02 | 88.10
local | 72.81 | 86.81 | 77.30 | 88.52
global | 73.48 | 87.35 | 77.87 | 88.94
TABLE VI: Ablation study of CGR and hardness-aware PatchMix on tieredImageNet
and CIFAR-FS.
### 4.3 Ablation Study
To comprehensively validate the effectiveness of our method, we conduct a
series of ablation studies on the design of each sub-module. The accuracies in
both 1-shot and 5-shot settings on miniImageNet are reported in Tab. V.
#### 4.3.1 Substitution experiments for PatchMix
Comparison of PatchMix with other augmentation methods. First and most
importantly, we test whether the data augmentation approach in the proposed
PatchMix benefits few-shot learning. To this end, we compare PatchMix with
the baseline model introduced in Sec. 3.1, together with three commonly
used data augmentation techniques: Mixup, Manifold Mixup and CutMix.
For simplicity we omit the absolute performance of the baseline and report only the
improvement or degradation of each model relative to it. The results
in Tab. V(a) indicate that CutMix improves the baseline by only 0.49%
on 1-shot tasks, while in the other settings these methods bring no improvement
(in particular, Manifold Mixup decreases the 5-shot accuracy by 1.39%). In
contrast, our proposed PatchMix increases accuracy by 2.92% and 3.82%
on 1-shot and 5-shot tasks, respectively. Such a noticeable improvement, which can be
attributed to the ability to disentangle causal features from others,
establishes PatchMix as an effective data augmentation method.
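To make the augmentation concrete, the patch swapping with proportionally mixed supervision can be sketched as follows. This is a minimal NumPy illustration with hypothetical names (`patchmix`, `swap_prob`); the actual method additionally rearranges the patch-level confidence-map supervision rather than mixing a single label.

```python
import numpy as np

def patchmix(img_a, img_b, label_a, label_b, grid=4, swap_prob=0.5, rng=None):
    """Split two images into a grid of patches, randomly replace patches of
    img_a with the corresponding patches of img_b, and mix the (one-hot)
    labels in proportion to the number of swapped patches."""
    rng = rng or np.random.default_rng()
    _, h, w = img_a.shape                    # images are (C, H, W)
    ph, pw = h // grid, w // grid
    mixed = img_a.copy()
    # Binary mask: True means the patch is taken from img_b.
    mask = rng.random((grid, grid)) < swap_prob
    for i in range(grid):
        for j in range(grid):
            if mask[i, j]:
                mixed[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw] = \
                    img_b[:, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
    lam = 1.0 - mask.mean()                  # fraction of patches kept from img_a
    return mixed, lam * label_a + (1.0 - lam) * label_b
```

With `swap_prob=0` the pair `(img_a, label_a)` is returned unchanged; with `swap_prob=1` every patch and the full label weight come from `img_b`.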
Moreover, our PatchMix enjoys a better "neural collapse" property in FSL.
Specifically, this property, as observed in [14] for FSL, means that the intra-
variance, defined as the variance of features within each novel category, can
collapse to 0 for a properly trained neural network on sufficient base data.
Moreover, such novel features form a simplex equiangular tight frame. However,
as the experiments in [14] show, the intra-variance of novel categories
is generally larger than that of the base categories on miniImageNet,
which limits the transferability of the vanilla training strategy from base
categories to novel ones. To show that our proposed PatchMix helps
FSL models achieve better neural collapse on novel categories, we
visualize the intra-variance of different training methods, including the
baseline model where no data augmentation is used as in [14] and commonly used
augmentation methods such as CutMix and Mixup. The results are presented in Fig.
5(a) and (b).
We observe that the model trained with PatchMix consistently has a larger
intra-variance than the baseline model on base categories; the gap is about
17% in the first 40 epochs and 5% in the last 10 epochs. As for the novel
data, we imitate the testing process by randomly sampling 5 novel classes each
time for evaluation and calculating the intra-variance over all samples of these
classes. We then repeat this procedure 20 times and visualize the averaged
intra-variance. The result is shown in Fig. 5(c). While the intra-
variance with PatchMix is larger than that of the baseline in the first 7 epochs,
it decreases much faster and reaches a much smaller value by the end of
training. This result, together with the comparison between the baseline and
PatchMix on CIFAR-FS in Fig. 6, empirically shows that our PatchMix can
indeed improve the neural-collapse behavior in FSL by generating more
collapsed novel features, owing to its ability to learn causal features.
Specifically, as the novel features are not influenced by the non-causal
features, which are less correlated with the novel categories, they have
decreased variance, which is beneficial for separating different categories
in classification. Beyond the quantitative results, we further visualize
the features of several images from different novel categories, extracted
by models trained with and without the proposed PatchMix, as shown in Fig. 7.
We find that while the model without PatchMix can hardly focus on the
target objects, our proposed method fixes this problem, leading to better feature maps.
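The intra-variance statistic used above can be computed straightforwardly; the sketch below is a simplified NumPy version (the helper name `intra_variance` is ours, and the paper's exact normalization may differ):

```python
import numpy as np

def intra_variance(features, labels):
    """Average within-class variance: for each class, the mean squared
    distance of its feature vectors to the class centroid, averaged over
    classes. Smaller values indicate more 'collapsed' features."""
    per_class = []
    for c in np.unique(labels):
        f = features[labels == c]
        centroid = f.mean(axis=0)
        per_class.append(((f - centroid) ** 2).sum(axis=1).mean())
    return float(np.mean(per_class))
```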
Figure 5: (a) Intra-variance of all 64 base classes on miniImageNet over all
epochs when training with 4 different methods. (b) A zoomed-in view of epochs 80 to
95 of the left image. (c) Comparison of intra-variance of the selected
5 novel classes between baseline and PatchMix. Our PatchMix not only
maintains a larger IV during training, but also produces better novel-class
representations with higher accuracy. Figure 6: Intra-variance of left: all 64
base classes, right: repeatedly sampled 5 novel classes on CIFAR-FS over all
epochs when training with and without PatchMix. Similar to the results on
miniImageNet, PatchMix leads to both higher base IV and lower novel IV. Figure
7: Visualization of features from models trained left: with or without PatchMix
and right: with IFSL or PatchMix, using images in the novel set of
miniImageNet. Objects of interest are highlighted with yellow boxes. The
visualization further illustrates that our PatchMix can disentangle the
features to some extent.
Applying PatchMix to existing FSL models. In fact, our proposed PatchMix can
be directly applied to any few-shot learning method whose output is a
confidence map. To demonstrate the utility of such an application, we
employ CAN [6] as a base model and compare the two stages of PatchMix applied
to CAN against CutMix. Results in Tab. V(b) reveal that applying CutMix
to CAN raises the 1-shot accuracy but does not improve the 5-shot performance,
which is consistent with the results on our baseline. In contrast,
utilizing PatchMix and improved PatchMix boosts accuracy.
Concretely, the first stage is better than the basic CAN by 2.72% and 1.12% on
1-shot and 5-shot tasks, and the second stage results in a further improvement
on both. Such results reflect the potential of our
method as a plug-in when solving few-shot learning problems.
Figure 8: The results of using different methods in our correlation-guided
reconstruction. While vanilla reconstruction and the model
using softmax produce many artifacts, the images generated by the model
using gumbel-softmax as the normalization function are smoother, restoring
almost all basic information in the original images.
Effectiveness of PatchMix in unsupervised representation learning. As
introduced in Sec. 3.5, one of our main adaptations of PatchMix to unsupervised
FSL is the substitution of PatchMix for DeepCluster. To show its effectiveness,
we instead train our model with DeepCluster features, with results shown
in Tab. V(e). We find that our model with DeepCluster as the clustering method
still outperforms the previous methods, and replacing DeepCluster
with PatchMix brings further improvements of 1.04% and 1.51% on 1-shot and 5-shot tasks,
which reflects the efficacy of the two modules in the unsupervised setting.
Comparison with IFSL. In [11] the authors proposed an intervened predictor in
the testing pipeline from the perspective of causal inference. To compare
against it, we re-implement this method with Res-12 as the backbone and
report the comparison in Tab. V(f). We follow the original paper and
perform IFSL based on MTL [19]. As shown, our method significantly
outperforms IFSL in both settings (improvements of 3.10% and 2.86% on
1-shot and 5-shot tasks, respectively). Moreover, this phenomenon also
holds even without the CGR and hardness-aware modules. These results imply the
benefit of removing spurious correlations when the pre-training stage is
missing.
#### 4.3.2 Ablation study for model variants among different design choices
Variants for hardness-aware PatchMix. To illustrate the role of our proposed
hardness-aware PatchMix, we compare several variants: vanilla
knowledge distillation, PatchMix without hardness, PatchMix with local hardness,
and PatchMix with global hardness. The results are shown in Tab. V(d) and Tab. VI. We
find that while vanilla knowledge distillation helps boost the
performance, the improvement is relatively small. Moreover, PatchMix without
hardness brings no enhancement on 5-shot tasks. This may be because
the knowledge imposed by PatchMix has already been learned by the teacher model
and is transferred to the student model via distillation. However, by introducing more hard
examples into the training stage, obtained by mixing images from similar classes, we can
further promote the distillation process. We suspect that the resulting gap is
attributable to the more discriminative features learned for classification.
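One way to realize this "mix images from similar classes" idea is to sample the mixing partner with probability increasing in class similarity. The sketch below is ours, with hypothetical names (`sample_hard_partner`, class prototypes as similarity proxies); the paper's local and global hardness definitions differ in detail:

```python
import numpy as np

def sample_hard_partner(class_id, prototypes, temperature=0.1, rng=None):
    """Sample a partner class for mixing: classes whose prototypes are
    closer (in cosine similarity) to the anchor class are sampled more
    often, so mixed pairs tend to come from confusable, 'hard' classes."""
    rng = rng or np.random.default_rng()
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = protos @ protos[class_id]      # cosine similarity to the anchor
    sim[class_id] = -np.inf              # never mix a class with itself
    probs = np.exp(sim / temperature)
    probs /= probs.sum()
    return int(rng.choice(len(prototypes), p=probs))
```

Lower temperatures concentrate the sampling on the most similar classes; higher temperatures approach uniform sampling over the other classes.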
Variants for correlation-guided reconstruction. We test the model with and
without the proposed CGR. Specifically, the variants include the model without
reconstruction; the model with vanilla reconstruction, in which the query and
gallery feature maps are directly concatenated and used as input to the
decoder; the model with correlation-guided reconstruction where softmax is used as
the normalization function; and our final CGR model where gumbel-softmax is used.
As can be seen in Tab. V(c) and Tab. VI: (1) The vanilla variant cannot
improve the base model. This means that a simple recovery without explicit
modelling of the patch selection cannot help the model learn better
representations, which is consistent with the discussion in Sec. 3.4.1. (2)
Using softmax as the normalization function brings a smaller improvement and
sometimes even degradation. In fact, the reconstruction results of this
method are clearly worse than those of the other two variants, as shown in Fig. 8. We
note that this method is the only one that cannot restore the basic shapes and
colors from the input. The likely reason is that the softmax function
makes the weights $\alpha_{i,j}\in(0,1)$, which means that it introduces
information from both patches at each position even if one of them is
uncorrelated with the target image, potentially confusing the decoder. Thus even if
the model learns correct similarities between patches from different images,
poor supervision on the reconstruction hinders training. (3) Our final
choice of gumbel-softmax benefits the model with 0.53% and 0.52% higher
accuracies on 1-shot and 5-shot tasks on miniImageNet, and consistent
improvements on CIFAR-FS and tieredImageNet. This method provides both
better restoration and more precise classification compared with the former two
variants, which justifies the efficacy of CGR and the necessity of using
gumbel-softmax. The discrete distribution generated by gumbel-softmax does not
change the values of the original features but instead reorganizes them between the two
feature maps; consequently, a good reconstruction is solely
conditioned on a correct selection of patches.
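The role of gumbel-softmax here, producing near one-hot patch-selection weights instead of softmax's dense blends, can be sketched as follows (a NumPy illustration with hypothetical names; in CGR the scores are patch correlations and the decoder consumes the selected patches):

```python
import numpy as np

def gumbel_softmax_select(scores, tau=1.0, rng=None):
    """For each row of `scores` (one target position over candidate
    patches), draw gumbel-softmax weights. With small tau the weights are
    near one-hot, so the reconstruction copies a single candidate patch
    rather than blending all of them as plain softmax would."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.random(scores.shape)))  # Gumbel(0, 1) noise
    logits = (scores + gumbel) / tau
    logits -= logits.max(axis=-1, keepdims=True)         # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)
```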
Mixture strategy. In Tab. V(g) we compare two different strategies of using
PatchMix, _i.e._, using only mixed images, or using them together with the
original images. The results demonstrate that discarding the images before
mixture is better by 0.55% and 0.51% on 1-shot and 5-shot tasks, and that mixing
the two types of images leads to performance similar to that of the baseline
model. One reason may be that such a mixture fails to disentangle causal and
non-causal features, in the same way as CutMix, as analyzed in Sec. 3.3.
Does the size of the feature map affect performance? One may ask whether it is
necessary to modify the ResNet-12. We thus compare the model with and without
the last max-pooling layer in Tab. V(h). The results show that removing the
pooling brings an improvement of 0.74% on 1-shot and 0.77% on 5-shot tasks. We
argue that the reason is twofold. First, without the max pooling, we
avoid information loss to some extent and can compare the support and
query features in more detail. Second, while larger feature maps may
introduce confusing supervision, in that some patches do not contain the target
object but are still guided by the corresponding labels, our PatchMix alleviates
this problem by injecting random information from other categories into these
patches, making the supervision more robust.
## 5 Conclusion
In this paper we analyze the necessity of learning disentangled causal
features in order to remove the sample selection bias commonly encountered in
FSL. To solve this problem, we propose PatchMix to swap patches and their
corresponding supervision, which we theoretically show learns causal
features by removing the spurious dependency between causal and non-causal
features across patches. Additionally, we propose two extra modules to endow
PatchMix with more discriminative features. Besides, we present an adaptation
of our model for unsupervised FSL. Experimental results across three different
settings reveal the efficacy of our proposed method.
## References
* [1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [2] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” _arXiv preprint arXiv:1506.01497_ , 2015.
* [3] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in _Adv. Neural Inform. Process. Syst._ , 2017.
* [4] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [5] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in _ICML_ , 2017.
* [6] R. Hou, H. Chang, M. Bingpeng, S. Shan, and X. Chen, “Cross attention network for few-shot classification,” in _Adv. Neural Inform. Process. Syst._, 2019.
* [7] K. P. Murphy, _Probabilistic Machine Learning: Advanced Topics_. Cambridge: MIT Press, 2022.
* [8] K. Xiao, L. Engstrom, A. Ilyas, and A. Madry, “Noise or signal: The role of image backgrounds in object recognition,” in _ICLR_ , 2021.
* [9] H. Shah, K. Tamuly, A. Raghunathan, P. Jain, and P. Netrapalli, “The pitfalls of simplicity bias in neural networks,” in _NeurIPS_, 2020.
* [10] F. Khani and P. Liang, “Removing spurious features can hurt accuracy and affect groups disproportionately,” in _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_ , 2021, pp. 196–205.
* [11] Z. Yue, H. Zhang, Q. Sun, and X.-S. Hua, “Interventional few-shot learning,” _Advances in neural information processing systems_ , vol. 33, pp. 2734–2746, 2020.
* [12] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, “Cutmix: Regularization strategy to train strong classifiers with localizable features,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 6023–6032.
* [13] K. Hsu, S. Levine, and C. Finn, “Unsupervised learning via meta-learning,” _arXiv preprint arXiv:1810.02334_ , 2018.
* [14] T. Galanti, A. György, and M. Hutter, “On the role of neural collapse in transfer learning,” _arXiv preprint arXiv:2112.15121_ , 2021.
* [15] C. Liu, Y. Fu, C. Xu, S. Yang, J. Li, C. Wang, and L. Zhang, “Learning a few-shot embedding model with contrastive learning,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 35, no. 10, 2021, pp. 8635–8643.
* [16] S. Ravi and H. Larochelle, “Optimization as a model for few-shot learning,” in _Int. Conf. Learn. Represent._ , 2017.
* [17] A. Nichol, J. Achiam, and J. Schulman, “On first-order meta-learning algorithms,” _arXiv preprint arXiv:1803.02999_ , 2018.
* [18] Z. Li, F. Zhou, F. Chen, and H. Li, “Meta-sgd: Learning to learn quickly for few-shot learning,” _arXiv preprint arXiv:1707.09835_ , 2017.
* [19] Q. Sun, Y. Liu, T.-S. Chua, and B. Schiele, “Meta-transfer learning for few-shot learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 403–412.
* [20] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell, “Meta-learning with latent embedding optimization,” _arXiv preprint arXiv:1807.05960_ , 2018.
* [21] X. Li, Q. Sun, Y. Liu, Q. Zhou, S. Zheng, T.-S. Chua, and B. Schiele, “Learning to self-train for semi-supervised few-shot classification,” _Advances in Neural Information Processing Systems_ , vol. 32, pp. 10 276–10 286, 2019.
* [22] Z. Peng, Z. Li, J. Zhang, Y. Li, G.-J. Qi, and J. Tang, “Few-shot image recognition with knowledge transfer,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 441–449.
* [23] C. Xing, N. Rostamzadeh, B. Oreshkin, and P. O. O Pinheiro, “Adaptive cross-modal few-shot learning,” _Advances in Neural Information Processing Systems_ , vol. 32, pp. 4847–4857, 2019.
* [24] H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha, “Few-shot learning via embedding adaptation with set-to-set functions,” in _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 8808–8817.
* [25] C. Zhang, Y. Cai, G. Lin, and C. Shen, “Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 12 203–12 213.
* [26] Y. Hu, V. Gripon, and S. Pateux, “Leveraging the feature distribution in transfer-based few-shot learning,” _arXiv preprint arXiv:2006.03806_, 2020.
* [27] J. Snell and R. Zemel, “Bayesian few-shot classification with one-vs-each Pólya-gamma augmented Gaussian processes,” _arXiv preprint arXiv:2007.10417_, 2020.
* [28] N. Fei, Z. Lu, T. Xiang, and S. Huang, “Melr: Meta-learning via modeling episode-level relationships for few-shot learning,” in _International Conference on Learning Representations_ , 2020.
* [29] M. Zhang, J. Zhang, Z. Lu, T. Xiang, M. Ding, and S. Huang, “Iept: Instance-level and episode-level pretext tasks for few-shot learning,” in _International Conference on Learning Representations_ , 2020.
* [30] D. Wertheimer, L. Tang, and B. Hariharan, “Few-shot classification with feature map reconstruction networks,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 8012–8021.
* [31] J. Oh, H. Yoo, C. Kim, and S.-Y. Yun, “Boil: Towards representation change for few-shot learning,” _arXiv preprint arXiv:2008.08882_ , 2020.
* [32] J.-C. Su, S. Maji, and B. Hariharan, “When does self-supervision improve few-shot learning?” in _European Conference on Computer Vision_. Springer, 2020, pp. 645–666.
* [33] F. Wu, J. S. Smith, W. Lu, C. Pang, and B. Zhang, “Attentive prototype few-shot learning with capsule network-based embedding,” in _European Conference on Computer Vision_. Springer, 2020, pp. 237–253.
* [34] G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto, “A baseline for few-shot image classification,” _arXiv preprint arXiv:1909.02729_, 2019.
* [35] A. Afrasiyabi, J.-F. Lalonde, and C. Gagné, “Associative alignment for few-shot image classification,” in _European Conference on Computer Vision_. Springer, 2020, pp. 18–35.
* [36] K. Li, Y. Zhang, K. Li, and Y. Fu, “Adversarial feature hallucination networks for few-shot learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 13 470–13 479.
* [37] J. Zhang, C. Zhao, B. Ni, M. Xu, and X. Yang, “Variational few-shot learning,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 1685–1694.
* [38] Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan, “Low-shot learning from imaginary data,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7278–7286.
* [39] E. Schwartz, L. Karlinsky, J. Shtok, S. Harary, M. Marder, R. Feris, A. Kumar, R. Giryes, and A. M. Bronstein, “Delta-encoder: an effective sample synthesis method for few-shot object recognition,” _arXiv preprint arXiv:1806.04734_ , 2018.
* [40] J. Kim, H. Kim, and G. Kim, “Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning.” in _ECCV (1)_ , 2020, pp. 599–617.
* [41] S. Yang, L. Liu, and M. Xu, “Free lunch for few-shot learning: Distribution calibration,” _arXiv preprint arXiv:2101.06395_ , 2021.
* [42] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _IEEE Conf. Comput. Vis. Pattern Recog._, 2017.
* [43] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, “mixup: Beyond empirical risk minimization,” _arXiv preprint arXiv:1710.09412_ , 2017.
* [44] C. Michaelis, B. Mitzkus, R. Geirhos, E. Rusak, O. Bringmann, A. S. Ecker, M. Bethge, and W. Brendel, “Benchmarking robustness in object detection: Autonomous driving when winter is coming,” _arXiv preprint arXiv:1907.07484_ , 2019.
* [45] Y. Gong, Z. Zeng, L. Chen, Y. Luo, B. Weng, and F. Ye, “A person re-identification data augmentation method with adversarial defense effect,” _arXiv preprint arXiv:2101.08783_ , 2021.
* [46] P. Chen, S. Liu, H. Zhao, and J. Jia, “Gridmask data augmentation,” _arXiv preprint arXiv:2001.04086_ , 2020.
* [47] E. D. Cubuk, B. Zoph, J. Shlens, and Q. V. Le, “Randaugment: Practical automated data augmentation with a reduced search space,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_ , 2020, pp. 702–703.
* [48] G. Ghiasi, Y. Cui, A. Srinivas, R. Qian, T.-Y. Lin, E. D. Cubuk, Q. V. Le, and B. Zoph, “Simple copy-paste is a strong data augmentation method for instance segmentation,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 2918–2928.
* [49] P. Mangla, N. Kumari, A. Sinha, M. Singh, B. Krishnamurthy, and V. N. Balasubramanian, “Charting the right manifold: Manifold mixup for few-shot learning,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , 2020, pp. 2218–2227.
* [50] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, and Y. Bengio, “Manifold mixup: Better representations by interpolating hidden states,” in _International Conference on Machine Learning_. PMLR, 2019, pp. 6438–6447.
* [51] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz, “Invariant risk minimization,” _arXiv preprint arXiv:1907.02893_ , 2019.
* [52] B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio, “Toward causal representation learning,” _Proceedings of the IEEE_ , vol. 109, no. 5, pp. 612–634, 2021.
* [53] X. Sun, B. Wu, X. Zheng, C. Liu, W. Chen, T. Qin, and T.-Y. Liu, “Recovering latent causal factor for generalization to distributional shifts,” _Advances in Neural Information Processing Systems_ , vol. 34, 2021.
* [54] J. Peters, P. Bühlmann, and N. Meinshausen, “Causal inference by using invariant prediction: identification and confidence intervals,” _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , vol. 78, no. 5, pp. 947–1012, 2016.
* [55] D. Rothenhäusler, P. Bühlmann, and N. Meinshausen, “Causal dantzig: fast inference in linear structural equation models with hidden variables under additive interventions,” _The Annals of Statistics_ , vol. 47, no. 3, pp. 1688–1722, 2019.
* [56] J. Pearl, _Causality_. Cambridge University Press, 2009.
* [57] J. Pearl _et al._, “Models, reasoning and inference,” _Cambridge, UK: Cambridge University Press_, vol. 19, p. 2, 2000.
* [58] B. Oreshkin, P. R. López, and A. Lacoste, “Tadam: Task dependent adaptive metric for improved few-shot learning,” in _Adv. Neural Inform. Process. Syst._ , 2018.
* [59] I. Khemakhem, D. Kingma, R. Monti, and A. Hyvarinen, “Variational autoencoders and nonlinear ica: A unifying framework,” in _International Conference on Artificial Intelligence and Statistics_. PMLR, 2020, pp. 2207–2217.
* [60] D. Janzing, J. Peters, J. Mooij, and B. Schölkopf, “Identifying confounders using additive noise models,” in _Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence_ , 2009, pp. 249–257.
* [61] J. Peters, J. M. Mooij, D. Janzing, and B. Schölkopf, “Causal discovery with continuous additive noise models,” 2014.
* [62] T. Teshima, I. Sato, and M. Sugiyama, “Few-shot domain adaptation by causal mechanism transfer,” in _International Conference on Machine Learning_. PMLR, 2020, pp. 9458–9469.
* [63] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 9729–9738.
* [64] E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with gumbel-softmax,” _arXiv preprint arXiv:1611.01144_ , 2016.
* [65] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2017, pp. 1501–1510.
* [66] T. Robert, N. Thome, and M. Cord, “Hybridnet: Classification and reconstruction cooperation for semi-supervised learning,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 153–169.
* [67] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” _arXiv preprint arXiv:1503.02531_ , 2015.
* [68] S. Khodadadeh, L. Bölöni, and M. Shah, “Unsupervised meta-learning for few-shot image classification,” _arXiv preprint arXiv:1811.11819_, 2018.
* [69] M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for unsupervised learning of visual features,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 132–149.
* [70] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra _et al._ , “Matching networks for one shot learning,” in _Adv. Neural Inform. Process. Syst._ , 2016.
* [71] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel, “Meta-learning for semi-supervised few-shot classification,” 2018.
* [72] L. Bertinetto, J. F. Henriques, P. H. Torr, and A. Vedaldi, “Meta-learning with differentiable closed-form solvers,” _arXiv preprint arXiv:1805.08136_ , 2018.
* [73] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, “Caltech-ucsd birds 200,” 2010.
* [74] J. Krause, M. Stark, J. Deng, and L. Fei-Fei, “3d object representations for fine-grained categorization,” in _Proceedings of the IEEE international conference on computer vision workshops_ , 2013, pp. 554–561.
* [75] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba, “Places: A 10 million image database for scene recognition,” _IEEE transactions on pattern analysis and machine intelligence_, vol. 40, no. 6, pp. 1452–1464, 2017.
* [76] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie, “The inaturalist species classification and detection dataset,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8769–8778.
* [77] H.-Y. Tseng, H.-Y. Lee, J.-B. Huang, and M.-H. Yang, “Cross-domain few-shot classification via learned feature-wise transformation,” _arXiv preprint arXiv:2001.08735_ , 2020.
* [78] L. Bottou, “Large-scale machine learning with stochastic gradient descent,” in _Proceedings of COMPSTAT’2010_ , 2010.
* [79] I. Loshchilov and F. Hutter, “Sgdr: Stochastic gradient descent with warm restarts,” _arXiv preprint arXiv:1608.03983_ , 2016.
* [80] S. Gidaris and N. Komodakis, “Dynamic few-shot visual learning without forgetting,” in _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [81] S. Qiao, C. Liu, W. Shen, and A. L. Yuille, “Few-shot image recognition by predicting parameters from activations,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2018, pp. 7229–7238.
* [82] N. Dvornik, C. Schmid, and J. Mairal, “Diversity with cooperation: Ensemble methods for few-shot classification,” in _Int. Conf. Comput. Vis._ , 2019\.
* [83] S. Gidaris and N. Komodakis, “Generating classification weights with gnn denoising autoencoders for few-shot learning,” in _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [84] S. Gidaris, A. Bursuc, N. Komodakis, P. Pérez, and M. Cord, “Boosting few-shot visual learning with self-supervision,” in _Int. Conf. Comput. Vis._ , 2019.
* [85] S. W. Yoon, J. Seo, and J. Moon, “Tapnet: Neural network augmented with task-adaptive projection for few-shot learning,” 2019.
* [86] K. Lee, S. Maji, A. Ravichandran, and S. Soatto, “Meta-learning with differentiable convex optimization,” in _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [87] Y. Liu, B. Schiele, and Q. Sun, “An ensemble of epoch-wise empirical bayes for few-shot learning,” in _European Conference on Computer Vision_. Springer, 2020, pp. 404–421.
* [88] C. Simon, P. Koniusz, R. Nock, and M. Harandi, “Adaptive subspaces for few-shot learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 4136–4145.
* [89] B. Liu, Y. Cao, Y. Lin, Q. Li, Z. Zhang, M. Long, and H. Hu, “Negative margin matters: Understanding margin in few-shot classification,” _arXiv preprint arXiv:2003.12060_ , 2020.
* [90] Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola, “Rethinking few-shot image classification: a good embedding is all you need?” _arXiv preprint arXiv:2003.11539_ , 2020.
* [91] W. Xu, Y. Xu, H. Wang, and Z. Tu, “Attentional constellation nets for few-shot learning,” 2021.
* [92] C. Xu, Y. Fu, C. Liu, C. Wang, J. Li, F. Huang, L. Zhang, and X. Xue, “Learning dynamic alignment via meta-filter for few-shot learning,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 5182–5191.
* [93] V. Garcia and J. Bruna, “Few-shot learning with graph neural networks,” _arXiv preprint arXiv:1711.04043_ , 2017.
* [94] J. Sun, S. Lapuschkin, W. Samek, Y. Zhao, N.-M. Cheung, and A. Binder, “Explanation-guided training for cross-domain few-shot classification,” in _2020 25th International Conference on Pattern Recognition (ICPR)_. IEEE, 2021, pp. 7609–7616.
Chengming Xu received his Bachelor’s degree in computer science from Fudan
University in 2018 and is now a fourth-year PhD student in statistics, advised
by Prof. Yanwei Fu. His research interests include action analysis and
few-shot learning.
Chen Liu is a PhD student at the Department of Mathematics at the Hong Kong
University of Science and Technology under the supervision of Prof. Yuan Yao.
He received the Bachelor’s degree in Engineering from the School of Mechanical
Engineering, Shanghai Jiaotong University, in 2018 and the Master’s degree in
Statistics from the School of Data Science, Fudan University, in 2021. His
current research interests include machine learning and its application to
computer vision.
Xinwei Sun is currently an assistant professor with the School of Data Science,
Fudan University. He received his Ph.D. from the School of Mathematical
Sciences, Peking University, in 2018. His research interests mainly focus on
statistical machine learning and causal inference, with applications to
medical imaging, computer vision and few-shot learning.
Siqian Yang received his Ph.D. in Computer Science and Technology from
Tongji University in 2018. He is currently a researcher at Tencent YouTu Lab,
China. His research interests include image processing, computer vision, and
vehicular networks.
Yabiao Wang is a Senior Researcher at Tencent YouTu Lab, China. He received
his master’s degree from Zhejiang University in 2016. He has published more
than 30 conference papers at venues including CVPR, ICCV, ECCV, and AAAI, and
has won more than 20 challenge titles. His research interests are object
detection/segmentation, few-shot learning and domain adaptation/generalization.
Chengjie Wang received the B.S. degree in computer science from Shanghai
Jiao Tong University, China, in 2011, and double M.S. degrees in computer
science from Shanghai Jiao Tong University, China, and Waseda University,
Japan, in 2014. He is currently the Research Director of Tencent YouTu Lab.
His research interests include computer vision and machine learning. He has
published more than 70 refereed papers at major computer vision and artificial
intelligence conferences and holds over 120 patents in these areas.
Yanwei Fu received his PhD degree from the Queen Mary University of
London in 2014. He worked as a post-doctoral researcher at Disney Research,
Pittsburgh, PA, from 2015 to 2016. He is currently a professor with Fudan
University. He was appointed Professor of Special Appointment (Eastern
Scholar) at Shanghai Institutions of Higher Learning in 2017, and was named a
1000 Young Talents scholar in 2018. He has published more than 100
journal/conference papers in venues including IEEE TPAMI, TMM, ECCV, and CVPR.
His research interests are one-shot learning for images and videos,
learning-based 3D reconstruction for modelling objects/bodies, robotic
grasping, and image generation/inpainting/editing.
# A Catalog of the Highest-Energy Cosmic Rays Recorded During Phase I of
Operation of the Pierre Auger Observatory
###### Abstract
A catalog containing details of the highest-energy cosmic rays recorded
through the detection of extensive air-showers at the Pierre Auger Observatory
is presented with the aim of opening the data to detailed examination.
Descriptions of the 100 showers created by the highest-energy particles
recorded between 1 January 2004 and 31 December 2020 are given for cosmic rays
that have energies in the range 78 EeV to 166 EeV. Details are also given of a
further nine very-energetic events that have been used in the calibration
procedure adopted to determine the energy of each primary. A sky plot of the
arrival directions of the most energetic particles is shown. No
interpretations of the data are offered.
Ultra-high-energy cosmic radiation (1733), Cosmic ray showers (327),
Experimental data (2371), Catalogs (205)
## 1 Introduction
The energy spectrum of cosmic rays extends to beyond 100 EeV. Where and how
these particles, dominantly the nuclei of the common elements up to iron, are
accelerated is one of the major puzzles of astroparticle physics. The flux
above 50 EeV is about 0.5 particles per km$^{2}$ per century, so that measuring
their properties requires the detection of the cascades or air showers that
the particles create in the atmosphere. In this paper, the methods used by the
Pierre Auger Collaboration to obtain the arrival directions and energies of
the 100 highest-energy particles in the range 78 EeV to 166 EeV are outlined,
and details of the main features of the air showers produced by the cosmic
rays are presented. Phase I of operation of the Observatory ended on 31
December 2020. It is thus timely to release a catalog to demonstrate the
quality of the data that lie behind measurements of the energy spectrum, the
distribution of arrival directions, and the mass of the highest-energy cosmic
rays that have been reported elsewhere (Aab et al. (2020a), Aab et al.
(2017a), Aab et al. (2014a) and Aab et al. (2014b)). The events discussed here
are included in the data set recently used in a discussion of the arrival
directions of events above 32 EeV (Abreu et al. (2022)). (Two events with
energies close to 100 EeV, used in a recent study of mass composition (Yushkov
(2020)), are not included here, or in Abreu et al. (2022), as different
selection criteria were adopted.) No interpretations of the data are offered
in this paper. Recent reviews, together with some interpretations, of data on
high-energy cosmic-rays can be found in Mollerach & Roulet (2018) and in Alves
Batista et al. (2019). A discussion of present data on the highest-energy
cosmic-rays is included in the US Community Study on the Future of Particle
Physics 2021 (Coleman et al. (2022)).
The structure of the paper is as follows. In Section 2, after a brief outline
of the methods used to detect the highest-energy cosmic rays, the
instrumentation of the Auger Observatory, relevant to this paper, is
described. In Section 3, brief accounts of the techniques developed by the
Collaboration are given, including that used to assign the energy of the
primary particle that initiates each air shower, or event. In Section 4, the
catalog is described and some events within it are discussed in detail. These
descriptions have been prepared to aid scrutiny of the complete sample
publicly available at https://opendata.auger.org/catalog/. In Section 5, a sky
map of the arrival directions of the events is shown.
## 2 The detection of high-energy cosmic rays and the Pierre Auger
Observatory
### 2.1 The Detection of High-Energy Cosmic Rays
Above an energy of about 100 TeV, the flux of cosmic rays is so low that
detectors flown using balloons, or deployed in space, are insufficiently large
to detect useful numbers of primary particles directly. At higher energies,
the particles create cascades, largely of electrons, positrons, photons, and
muons, that propagate through the atmosphere as extensive air-showers. Such
showers can be detected through the charged particles and photons that reach
ground level, and by observing light emitted from the atmosphere. Properties
of the primary cosmic rays are inferred from studies of these showers.
If the incoming particle is a proton or an atomic nucleus, then, in the first
interaction with a nucleus in the atmosphere (usually nitrogen or oxygen),
hundreds of pions are created. The neutral pions decay rapidly into photons
that initiate electromagnetic cascades through pair production, with the
electrons and positrons subsequently producing bremsstrahlung radiation. The
electromagnetic cascade grows until the rates of energy loss through these two
processes are exceeded by the rate of energy loss by ionization. Charged pions
interact with nuclei to produce additional pions that further enrich the
cascade until their energy falls below $\sim$300 GeV when charged-pion decay
becomes more probable than interaction with nuclei. The nucleons of the
incoming primary lose, on average, about 50% of their energy in the first
interaction, and in further similar interactions, thus enhancing the number of
secondary particles in the shower. The charged particles and the accompanying
photons spread out laterally because of scattering, and because of the
transverse momentum of the particles produced in the collisions.
The shower of secondary particles can be detected in several ways. One method
is to spread a number of detectors over the ground: currently scintillation
counters or water-Cherenkov detectors are the most widely adopted. At
${\sim}1$ PeV, the footprint of the shower is about $10^{4}$ m$^{2}$, while, for the
energies of interest here, the equivalent scale is many square kilometres. The
number of detectors deployed in any shower array is, of necessity, a
compromise dictated by cost.
The particles of the shower can be thought of as traveling at close to the
speed of light in a slightly curved, disc-like, configuration similar to a
giant dinner-plate, with the density of particles falling-off rapidly from the
centre of the disc. The fall-off is described by a lateral distribution
function (LDF), knowledge of which is important for the reconstruction of
events. The zenith angle of a shower is determined typically to $\sim$1∘ from
the times of arrival of the first particles in the shower-disc at the
detectors.
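The timing fit just described can be illustrated with a minimal plane-front reconstruction. This is a sketch only, not the Observatory's algorithm (which uses a curved shower front and a full timing model); the station layout, times, and function names below are invented for illustration.

```python
import numpy as np

C = 0.299792458  # speed of light in m per ns


def plane_front_direction(xy_m, t_ns):
    """Fit a plane shower front to detector positions (m, at z = 0) and
    trigger times (ns). Model: c*t_i = c*t0 - ax*x_i - ay*y_i, where
    (ax, ay, az) is the unit vector pointing back along the shower axis.
    Returns (zenith, azimuth) in degrees, azimuth counter-clockwise from East."""
    x, y = xy_m[:, 0], xy_m[:, 1]
    design = np.column_stack([np.ones_like(x), -x, -y])  # unknowns: c*t0, ax, ay
    _, ax, ay = np.linalg.lstsq(design, C * t_ns, rcond=None)[0]
    az = np.sqrt(max(0.0, 1.0 - ax ** 2 - ay ** 2))      # complete the unit vector
    zenith = np.degrees(np.arccos(az))
    azimuth = np.degrees(np.arctan2(ay, ax)) % 360.0
    return zenith, azimuth


# Toy event: five stations on a 1500 m grid, times generated from a known
# axis (zenith 30 deg, azimuth 120 deg, impact time t0 = 0).
theta, phi = np.radians(30.0), np.radians(120.0)
n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi)])
stations = np.array([[0.0, 0.0], [1500.0, 0.0], [750.0, 1300.0],
                     [-750.0, 1300.0], [2250.0, 1300.0]])
times_ns = -(stations @ n) / C
zen_deg, azi_deg = plane_front_direction(stations, times_ns)
```

With at least three non-collinear triggered stations the linear system is solvable, which is why the trigger requires a three-fold coincidence.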
Other methods of shower detection make use of the fluorescence radiation that
results from the excitation of molecules of atmospheric nitrogen by the
charged particles in the shower and of the Cherenkov light created as these
particles cross the atmosphere. Fluorescence radiation is emitted
isotropically and can be observed at large distances from the shower.
Detection is technically demanding as only about 5.6 photons are emitted in
the 300 to 400 nm band for each MeV of energy deposited (Ave et al. (2013)).
The challenge of detecting such light from a shower produced by a particle of
$\sim$3 EeV at 15 km is akin to trying to observe a 5-Watt light bulb moving
at the speed of light at this distance. By contrast, Cherenkov radiation is
much brighter with around 30 photons emitted between 400 and 700 nm per metre
of track (Galbraith (1958)), with the light concentrated along the direction
of travel of the shower and with a lateral spread dictated by that of the
electrons. In the events described below, scattered Cherenkov light is a
background that must be accounted for when reconstructing the properties of
showers with the fluorescence detectors.
Other aspects of shower detection, specific to the Auger Observatory, are
discussed in Section 2.2.
### 2.2 The Pierre Auger Observatory
The Pierre Auger Observatory is the largest cosmic-ray detector ever
constructed. It was designed to explore the properties of the highest-energy
cosmic rays with unprecedented statistical precision and this objective has
been achieved. The primary experimental targets were the determination of the
energy spectrum, the distribution of arrival directions and the mass
composition of cosmic rays above $\sim$1 EeV. Studies of lower-energy cosmic
rays, of particle physics, and of geophysical phenomena, now form important
additions to the scope of the project.
The Observatory is located near the city of Malargüe, Mendoza Province,
Argentina, between latitudes 35.0∘S and 35.3∘S and longitudes 69.0∘W and
69.4∘W. The mean altitude of the site is about 1400 m above sea-level,
corresponding to an atmospheric overburden of about
$875\,\text{g}/\text{cm}^{2}$. The Observatory comprises an installation of
about 1600 water-Cherenkov detectors, separated by 1500 m, laid out on a
triangular grid over an area of 3000 km$^{2}$ (the Surface Detector, SD), and
overlooked by a Fluorescence Detector (FD) comprising four stations, each
containing 6 telescopes, each with 440 photomultipliers and a 13 m$^{2}$ mirror. A
map of the site, showing the features relevant to this paper, is presented in
Figure 1. A detailed description of the instrumentation can be found in Aab et
al. (2015a).
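As a quick consistency check on these numbers (illustrative arithmetic only, not from the cited description): each station of a triangular grid with spacing $d$ covers a hexagonal unit cell of area $(\sqrt{3}/2)\,d^{2}$, so about 1600 stations at 1500 m spacing tile roughly the quoted 3000 km$^{2}$.

```python
import math

d_km = 1.5                                    # detector spacing in km
cell_km2 = math.sqrt(3.0) / 2.0 * d_km ** 2   # unit-cell area per detector, ~1.95 km^2
total_km2 = 1600 * cell_km2                   # ~3100 km^2, consistent with ~3000 km^2
```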
The water-Cherenkov detectors (each $10\,\text{m}^{2}\times~{}1.2\,\text{m}$)
are used to measure the energy flow at the ground level carried by the flux of
muons, electrons, positrons, and photons in the air showers generated by the
primary particles. In near-vertical events, there are 10 times as many photons
as electrons and positrons, which in turn exceed the number of muons by about
the same factor. The average energy of the muons in a near-vertical shower is
$\sim$1 GeV, while the mean energy of the entities of the electromagnetic
component is $\sim$10 MeV. Thus, the electromagnetic radiation is largely
absorbed in the 3.2 radiation lengths of the 1.2 m depth of the water-
Cherenkov detectors, whereas most of the muons pass straight through, losing
energy only through ionization. The energy deposited in the water by the
shower components is expressed in terms of the signal, measured using three
9-inch photomultipliers, produced by a muon traversing the detector
vertically. This unit, the ‘Vertical Equivalent Muon’ or VEM, corresponds to
an energy deposit of $\sim$250 MeV. In a vertical shower produced by a
particle of 10 EeV, the signal, S(1000), at 1000 m from the densest region of
the shower, called the core, is $\sim$40 VEM, and is roughly a 50/50 mixture
of signals from muons and the electromagnetic component.
The times of arrival of particles at the water-Cherenkov detectors are
measured using GPS signals that are also exploited to locate the position of
each detector to 20 cm and 50 cm in the horizontal and vertical directions,
respectively. At the highest energies, the incoming direction can be
determined to better than 0.4∘ (Aab et al. (2020b)).
Figure 1: The layout of the Pierre Auger Observatory covering 3000 km$^{2}$. Each
small dot corresponds to a water-Cherenkov detector. Fluorescence detectors
are located at Los Leones (LL), Coihueco (CO), Loma Amarilla (LA) and Los
Morados (LM). The 30∘ azimuthal fields of view of the six telescopes at each
site are shown by the radial lines emanating from them: the vertical reach of
the telescopes extends to an elevation of 28.6∘. Data are transmitted to the
central laboratory, located at a campus in Malargüe, using a purpose-built
communication network. The dashed lines show roads. Gaps in the layout of the
array arise due to difficulties with landowners. Steerable lasers (see text)
are located at the positions CLF and XLF.
The thickness of the shower disc is characterized by the time (in
nanoseconds) that it takes for the integrated signal to grow from 10% to 50%
of its total (this time, t1/2, is referred to as the risetime). In events that
arrive nearly vertically,
risetimes vary from a few nanoseconds close to the core, to $\sim$300 ns at
distances of $\sim$1 km, and decrease as the zenith angle increases.
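A minimal sketch of how t1/2 can be extracted from a single FADC trace: the cumulative signal is computed and the times at which it crosses 10% and 50% of its total are interpolated. The 25 ns bin width corresponds to the 40 MHz sampling quoted later in Section 3.1; baseline subtraction and electronics deconvolution, which the real analysis requires, are ignored here.

```python
import numpy as np


def risetime(trace_vem, bin_ns=25.0):
    """Risetime t1/2: the time for the cumulative signal to rise from 10%
    to 50% of its total, for an FADC trace sampled every bin_ns nanoseconds.
    The trace is assumed baseline-subtracted and non-negative."""
    trace = np.asarray(trace_vem, dtype=float)
    cum = np.cumsum(trace) / np.sum(trace)
    t = np.arange(len(trace)) * bin_ns
    t10 = np.interp(0.10, cum, t)   # linear interpolation between bins
    t50 = np.interp(0.50, cum, t)
    return t50 - t10


# A flat 10-bin trace accumulates linearly, so the 10%-to-50% span is
# four bins, i.e. 100 ns at 25 ns per bin.
t_half = risetime([1.0] * 10)
```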
The time-profiles of the signals recorded with the water-Cherenkov detectors
have been used in several studies. It has been possible to build observables
that allow inferences to be made about the mass composition, and to probe
hadronic interactions above the energies attained at the Large Hadron
Collider, with a statistical sample of $\sim$81,000 events (Aab et al.
(2017b)). Additionally, searches for photons and neutrinos in the cosmic
particle flux have been made (Aab et al. (2017c), Aab et al. (2019a)). Above
60∘, the risetimes of the signals are too fast to be measured accurately with
the electronics currently in use.
Measurements of the fluorescence light make possible a calorimetric estimate
of the energy of the primary particle (Aab et al. (2020a)) and provide a key
tool used in the determination of the mass of the primary particles (Aab et
al. (2014b)). For such studies, it is essential to monitor the atmosphere and
this is done using steerable lasers located at the positions marked CLF and
XLF in Figure 1 (Abreu et al. (2012)). These lasers are also used to make
independent checks on the accuracy of the reconstruction of the arrival
directions (Mostafá (2005)).
Data taking began on 1 January 2004 with 154 water-Cherenkov detectors and two
fluorescence stations partly operational. Observations with the
instrumentation of Figure 1 started in June 2008 and have been in progress
ever since. The surface detector is operated almost continuously, while
observations with the fluorescence detector are restricted to clear dark
nights. Phase I of the project was completed on 31 December 2020.
Instrumentation used in other Phase I studies is described in Aab et al.
(2015a). It is thus timely to release a catalog giving details
of the extensive air-showers produced by the highest-energy cosmic rays
observed thus far. In addition to the detailed information on the 100 events
of the highest energy recorded between 1 January 2004 and 31 December 2020,
which are part of the set of events discussed by Abreu et al. (2022), nine
events of slightly lower energy, used for energy calibration, have been
included to increase the number of fluorescence events presented.
## 3 Reconstruction of shower parameters
The properties that can be determined most directly are the arrival direction
and the energy of the primary particle that initiates each air shower.
Estimating the mass of the incoming particle is more complex as it requires
assumptions to be made about the hadronic physics associated with interactions
of nucleons and pions and, at present, it is not possible to identify the mass
of the primaries except on an average basis (e.g., Aab et al. (2014b)). No
discussion of measurements relating to mass determination is included in this
paper. In the following sections, brief descriptions of the methods used to
find the arrival directions and the energies are given.
### 3.1 Recording of the data
Data from the surface detectors to be used in reconstruction are derived from
a relatively complex triggering procedure described in Abraham et al. (2010a).
Briefly, triggers from each station, tagged with the GPS time, are sent at a
rate of $\sim$20 Hz to a computer located at the campus in Malargüe (Figure 1)
via a purpose-built link for communications. The computer is used to search
for spatial and temporal coincidences between triggers from the detectors.
When a coincidence is found between at least three stations, data from
triggered detectors are downloaded (Abraham et al. (2010a)). In addition to
the trigger-time, the data include read-outs from Flash Analog-to-Digital
converters (FADCs) associated with each of the three photomultipliers in the
water-Cherenkov detectors. GPS time stamps have a precision of 12 ns, while
the FADCs are 10-bit running at 40 MHz. From the FADC information, the
amplitude and time structure of each signal are obtained.
Data from the fluorescence detectors are recorded in a different manner
(Abraham et al. (2010b)). The telescopes at each of the four fluorescence
stations are operated remotely from the Malargüe Campus or, since 2017,
additionally from various locations around the world. The camera of each
telescope contains 440 photomultipliers (pixels): the recording of signals and
time-stamps is completely independent of that used for the surface detectors.
A very loose criterion of a localized pattern of four pulses in consecutive
time order is adopted as the trigger at each fluorescence telescope. Those
triggers where a shower track can be found are transmitted to the central
computer, together with information on the geometry of the shower candidate.
From this information, the time of impact of the shower at a ground position
in the region of the surface detectors is computed, so that all FADC traces in
the region, arriving within 20 ${\upmu}$s, are also centrally recorded. After
each night of operation, data from the fluorescence triggers are then merged
with those data collected with the surface detectors: these form the hybrid
data set. For high-level analyses, several quality cuts are applied to the
fluorescence events, including those relating to cloud cover and atmospheric
aerosols. Further cuts are made to ensure that the selection of events is
unbiased with respect to primary particle mass (Aab et al. (2014a)). The
overall efficiency of these cuts is such that approximately 25% of SD events
with energies above 10 EeV, registered during FD operation, have an
accompanying good quality and unbiased FD shower profile.
### 3.2 Reconstruction of the arrival direction and energy of showers
#### 3.2.1 Introduction
While the reconstruction of the arrival direction of an air shower is
relatively straight-forward, as outlined in Section 3.2.2, the determination
of the parameter of the shower adopted as a surrogate for primary energy is
more difficult. This is because, as the zenith angle increases, the shower
loses the near-perfect circular symmetry found in an event generated by a
cosmic ray entering the atmosphere at 0∘. The loss of symmetry of the
distribution of the signal size in the plane perpendicular to the arrival
direction of a shower arises for several reasons: from geometrical effects
associated with the angles at which high-energy particles are emitted in early
interactions, from geometrical effects relating to the direction of travel of
particles entering the detectors, from attenuation - particularly of the
electromagnetic component - as the shower crosses the array, and from the
effect of the geomagnetic field. The most direct experimental evidence of
asymmetry is found in studies of the risetime of the signals from the water-
Cherenkov detectors (Aab et al. (2016)).
The consequences of asymmetries of the signal sizes have been studied in some
detail using simulations. Luce et al. (2021) have examined the impact on the
electromagnetic component. At 1000 m from the shower axis, the amplitude of
the asymmetry of the signal size is $\sim$50% in a shower produced by a
primary of 10 EeV at a zenith angle of 45∘. However, estimates of the
parameter used to define the shower size (the signal size at 1000 m from the
shower axis, S(1000) - see below) are changed by less than 10%. This is
largely because the contribution of muons to the total signal in a detector
rises with increasing zenith angle.
At relatively small zenith angles, simulation studies have also been used to
show that the effect of the geomagnetic field changes estimates of S(1000) by
only a few percent for angles around 45∘ (Abreu et al. (2011)). However, as
the zenith angle increases, the effect of this field becomes more evident
because of the increasingly long path-length of the muons as they cross the
atmosphere. In Figure 2 the densities of muons reaching the ground, again
estimated through simulation, are shown for three zenith angles.
Figure 2: Parameterized densities of muons for a 10 EeV proton shower at
zenith angles of 60∘, 70∘ and 80∘ arriving from azimuth, $\phi$ = 0∘. Radial
units are in kilometers. The coordinate system is defined in the plane
perpendicular to the shower direction with the y-axis parallel to the
projection of the Earth’s magnetic field, $B_{\rm{proj}}$, on that plane. The
magnitudes of the muon densities are indicated (32, 16, 8… per m2).
It is evident that the asymmetry of the radial distribution of the muons in
the shower increases with zenith angle, becoming particularly apparent above
70∘. At such angles, the electromagnetic part of the shower, arising
dominantly from the decay of neutral pions, has been largely absorbed as the
atmospheric thickness exceeds $2440\,\text{g}/\text{cm}^{2}$. However, an
electromagnetic component, arising from muon-bremsstrahlung, knock-on
processes and muon decay, is present and is time-synchronous with the muons,
so that the time-spread of the signals is small, as will be seen in events
discussed in Section 4.
Novel methods have been developed to analyze events of large zenith angle (Ave
et al. (2000), Aab et al. (2014c)) as discussed in Section 3.2.3. There is, of
course, no sharp transition between the zenith angle range in which
atmospheric absorption dominates and that in which geomagnetic effects assume
the greater importance. Above $\sim$60∘ the accuracy of reconstruction of both
the direction and energy are increasingly improved using the new techniques
(Schmidt (2010)), and accordingly the different approaches have been adopted
above and below this zenith angle.
#### 3.2.2 Reconstruction of events with zenith angle $<$ 60∘
The methods used to reconstruct events with zenith angle, $\theta<$ 60∘
recorded by the water-Cherenkov detectors are described in detail by Aab et
al. (2020b). The zenith angle is measured from the vertical while the azimuth
angle, $\phi$, is measured counter-clockwise from East. For showers as large
as those described here, all arrival directions are determined to better than
0.4∘. Accordingly, as the deflections of protons by the Galactic magnetic
field exceed this value, even at the energies discussed here, no uncertainties
are given. An uncertainty of 0.4∘ in the zenith angle leads to an uncertainty in
energy of $<$ 0.2%.
The positions of the detectors with respect to the core of the shower are
found by fitting the observed signals to a lateral distribution function.
(When the core of a shower falls close to a detector, the signal can be so
large that the electronic recording channels may saturate. This usually occurs
for detectors within about 500 m of the core where the signal is greater than
1000 VEM. An algorithm is used to estimate the true magnitude of the signal
from the amplitude of the undershoot, which is introduced capacitively.
Moreover, for signals larger than 2000 VEM, the PMT response is highly non-
linear, so that only timing information is used and the signal is treated in
the LDF fit only as a lower limit to the actual size of the signal; the
estimated true signal value is used in the LDF fit for saturated signals
smaller than 2000 VEM. For 50% of the events contained in the full data set,
the signal in one station is saturated; 3 of the events discussed below have
two saturated stations. Examples of saturated signals can be found in Section
4 and in the larger database.) In general, because of the wide spacing of
the detectors, it is not possible to determine this function for every event.
Accordingly, an empirical description, based on the pioneering studies of
Greisen (1956), Greisen (1960) and Kamata & Nishimura (1958), has been
adopted:
$S_{\text{LDF}}(r)=\textit{S}(1000)\Biggl{[}\Biggl{(}\frac{r}{r_{\rm
opt}}\Biggr{)}\Biggl{(}\frac{r+r_{\rm s}}{r_{\rm opt}+r_{\rm
s}}\Biggr{)}\Biggr{]}^{\beta}$
with $r_{\rm s}$ fixed at 700 m. The slope factor, $\beta$, is negative,
changing from about $-2.6$ at $\theta$ = 0∘ to about $-1.9$ at 60∘. The
flattening of the lateral distribution function with increasing angle is
largely due to the increasing dominance of the muon component.
The quantity $r_{\rm opt}$ relates to the spacing of the detectors and is the
distance at which uncertainties in the reconstructed signal size, arising from
lack of knowledge of the lateral distribution function, is minimized (Hillas
(1977), Hillas et al. (1971)). For the detectors of the Auger Observatory,
where the spacing is 1500 m, $r_{\rm opt}$ has been shown to be close to 1000
m (Newton et al. (2007)). The signal size at this distance, S(1000), is used
to estimate the primary energy.
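For illustration, S(1000) and the slope $\beta$ can be recovered from a set of station signals by linear least squares in log space, since $\log S(r)=\log S(1000)+\beta\log\bigl[(r/r_{\rm opt})\,(r+r_{\rm s})/(r_{\rm opt}+r_{\rm s})\bigr]$. This is a sketch only: the actual reconstruction maximizes a likelihood that also handles saturated and non-triggered stations; the station distances and signals below are invented.

```python
import numpy as np

R_OPT, R_S = 1000.0, 700.0  # metres, with r_s fixed as in the text


def ldf_shape(r_m):
    """Dimensionless radial factor of the LDF, equal to 1 at r = r_opt."""
    return (r_m / R_OPT) * ((r_m + R_S) / (R_OPT + R_S))


def fit_s1000(r_m, signals_vem):
    """Fit log S(r) = log S(1000) + beta * log(shape(r)) by least squares."""
    design = np.column_stack([np.ones_like(r_m), np.log(ldf_shape(r_m))])
    log_s1000, beta = np.linalg.lstsq(design, np.log(signals_vem), rcond=None)[0]
    return np.exp(log_s1000), beta


# Toy stations with signals drawn exactly from the LDF: S(1000) = 40 VEM
# (the value quoted for a vertical 10 EeV shower) and beta = -2.
r = np.array([600.0, 900.0, 1200.0, 1600.0, 2000.0])
s = 40.0 * ldf_shape(r) ** -2.0
s1000_fit, beta_fit = fit_s1000(r, s)
```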
The average statistical uncertainty in the determination of S(1000) at the
highest energies is 8% (Aab et al. (2020b)). The uncertainty on the impact
point is $\sim$50 m. S(1000) is influenced by changes in atmospheric
conditions that affect the development of showers (Aab et al. (2017d)), and by
the geomagnetic field that impacts on the signal sizes in the shower (Abreu et
al. (2011)). Therefore, before using the shower-size estimator in the
calibration procedure (Section 3.3), corrections of $\sim$2% and $\sim$1% are
made for the atmospheric and geomagnetic effects, respectively.
#### 3.2.3 Reconstruction of events with zenith angles $>$ 60∘
The analysis of events with zenith angles $>$ 60∘ is important as extending
measurements to these angles enhances the exposure of the Observatory by 30%,
and extends sky coverage to regions that would otherwise be inaccessible.
However, as explained above, techniques different to those used to reconstruct
showers arriving at smaller zenith angles must be adopted. Showers with zenith
angles estimated to be as great as $\sim$90∘ have been recorded but, because
the distance between detectors, as seen by the shower, is substantially
shortened, the accuracy of reconstruction of the direction is badly degraded,
and we restrict selection to those with $\theta<$ 80∘, where the directional
uncertainties are $<$ 1∘. The procedures developed to analyze these events are
discussed in detail in Aab et al. (2014c).
Above 70∘ most of the particles at detector level are energetic muons
accompanied by an electromagnetic component in equilibrium with the muons
arising through bremsstrahlung, knock-on electrons and muon decay processes,
which makes up 25% of the signal beyond $\sim$1 km from the core and around
30% within 1 km. Except at extreme distances, approximately 80% of the signal
arrives within about 200 ns (see Figures 9 and 15 in Section 4 below). The
muons travel tens to hundreds of kilometers before detection and are deflected
significantly by the geomagnetic field. Thus, at ground level, the near-
cylindrical symmetry associated with near-vertical events is lost, as shown in
Figure 2.
For showers with an inclination between 60∘ to 70∘, and in particular at
distances closer than 1 km to the shower core, there is still a significant
contribution from the electromagnetic component, 67% at 60∘ and 100 m, and
accordingly this is included in the reconstruction (Valiño et al. (2010)).
The number of stations satisfying the trigger conditions above 60∘ increases
with $\sec\theta$ so that at 30 EeV the average number is $\sim$25 at 60∘,
while at 80∘ it is $\sim$45\. The method used for reconstruction is based on
fitting the signal pattern recorded to what is predicted from modeling the
shower development. The muon signal scales with energy as
$\rho_{\mu}\,(r)\propto E^{\alpha}$ with $\alpha$ in the range 0.90 to 0.95.
The expected density of muons at the ground is given by
$\rho_{\mu}(r)=N_{19}\,\rho_{\mu,19}(r,\theta,\phi)$, where $N_{19}$ is,
chosen by convention, as a measure of shower size using a reference shower
model and comparing the signals to those expected from simulated showers of 10
EeV with the same arrival direction. Simulations have shown that
$\rho_{\mu,19}(r,\theta,\phi)$, at fixed zenith and azimuth angle, varies by
only about 5% for changes in the energy and mass of the primary particle
(Dembinski et al. (2010)).
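The role of $N_{19}$ as a pure scale factor can be sketched with a one-parameter least-squares fit of observed muon densities to a reference map evaluated at the station positions. This stand-in replaces the maximum-likelihood fit used in the actual reconstruction, and the density values are invented.

```python
import numpy as np


def fit_n19(rho_obs, rho_ref):
    """Least-squares scale factor N19 in the model rho_obs ~ N19 * rho_ref,
    where rho_ref holds reference-model muon densities for a 10 EeV shower
    with the same arrival direction (illustrative stand-in for the
    likelihood fit used in practice)."""
    rho_obs = np.asarray(rho_obs, dtype=float)
    rho_ref = np.asarray(rho_ref, dtype=float)
    return float(rho_obs @ rho_ref / (rho_ref @ rho_ref))


# A shower three times the reference size gives N19 = 3 exactly.
n19 = fit_n19([12.0, 6.0, 3.0, 1.5], [4.0, 2.0, 1.0, 0.5])
```

Given the quoted scaling $\rho_{\mu}(r)\propto E^{\alpha}$, a raw energy estimate would behave roughly as $E\approx 10\,\text{EeV}\times N_{19}^{1/\alpha}$ before the calibration of Section 3.3 is applied.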
The absolute value of $N_{19}$ depends on the choice of mass composition and
hadronic model used in the simulation for the reference model, but the
dependence is constant with energy and between the primaries (Aab et al.
(2015b)). This uncertainty does not impact the estimate of the primary energy
because the constant shift is absorbed by the method used to determine the
energy scale, as outlined in Section 3.3.
#### 3.2.4 Reconstruction of events recorded with the Fluorescence Detectors
The Fluorescence Detectors provide calibration information from which the
energies of the more abundant events obtained with the water-Cherenkov
detectors alone can be derived. Measurements of the fluorescence emission also
give details of the longitudinal development of air showers in the atmosphere,
with the determination of the depth at which the deposition of energy is
greatest, the shower maximum. This is a key measurement for mass estimation.
Details of the reconstruction methods are discussed in Abraham et al. (2010b)
and Aab et al. (2014a) with only a brief description given here.
The pixels in each camera (440 in total) that are illuminated by light from
the air shower are used to reconstruct a plane that includes the axis of the
shower and the location of the telescope. Within this plane, a three-dimensional
reconstruction of the arrival direction is obtained by determining the
geometry from the arrival times of the shower light at each pixel, and from
the time of the arrival of the shower particles at the water-Cherenkov
detector closest to the core of the shower. This hybrid technique, implemented
for the first time at the Auger Observatory, enhances the precision with which
the shower geometry is determined: the direction is known to $\sim$0.6∘
(Bonifazi (2009)). The signal from each pixel is recorded in 100 ns intervals
and the time and amplitude data are used to delineate the profile of the
shower development using techniques described by Unger et al. (2008). This
method allows differentiation between the various sources of detected light,
namely the fluorescence light, direct Cherenkov light, and light scattered
from the Cherenkov beam into the fluorescence telescope from air molecules and
aerosols.
For each 100 ns interval, the energy deposited in the slant-depth interval
corresponding to the measured light flux is estimated. These individual
estimates are fitted using the universal shower profile function described in
Andringa et al. (2011),
$f(X)=\Biggl{(}\frac{{\rm{d}}E}{{\rm{d}}X}\Biggr{)}_{\rm{max}}\Biggl{(}1+\frac{R}{L}\bigl{(}X-X_{\rm{max}}\bigr{)}\Biggr{)}^{1/R^{2}}\exp\Biggl{(}-\frac{X-X_{\rm{max}}}{RL}\Biggr{)},$
where $f(X)$ is the energy deposit at slant depth $X$ and
$({\rm{d}}E/{\rm{d}}X)_{\rm{max}}$ is the energy deposit at the shower maximum.
$X_{\rm{max}}$ is the slant depth of the maximum of the energy deposit, while
$R$ and $L$ are shape parameters loosely constrained in the fit to the average of
measured values (Dawson (2020)). The universal shower profile function is a
re-casting of the Gaisser-Hillas functional form (Gaisser & Hillas (1977)):
its adoption diminishes correlations between shape parameters.
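As a check on the form above, the profile can be evaluated numerically. The parameter values below ($({\rm{d}}E/{\rm{d}}X)_{\rm{max}}$, $X_{\rm{max}}$, $R$, $L$) are illustrative placeholders, not fitted Auger values:

```python
import numpy as np

def usp_profile(X, dEdX_max, X_max, R, L):
    """Universal shower profile of Andringa et al. (2011):
    f(X) = (dE/dX)_max * (1 + R*(X - X_max)/L)**(1/R**2)
                       * exp(-(X - X_max)/(R*L)).
    The profile is physical only where the bracket is positive; the
    clip below just guards the power against non-positive arguments."""
    X = np.asarray(X, dtype=float)
    z = np.maximum(1.0 + R * (X - X_max) / L, 1e-12)
    return dEdX_max * z ** (1.0 / R ** 2) * np.exp(-(X - X_max) / (R * L))

# Illustrative values: depth in g/cm^2, energy deposit in arbitrary units.
X = np.linspace(200.0, 1600.0, 1401)
f = usp_profile(X, dEdX_max=50.0, X_max=770.0, R=0.26, L=225.0)
# By construction the maximum of f sits at X_max, where f = (dE/dX)_max.
```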
The energy of each event ($\mathrm{E_{FD}}$) is determined by integrating the
longitudinal profile, $f(X)$, which describes the rise and fall of the
deposition of energy by the shower in the atmosphere, and then adding a
correction of 20% at 0.1 EeV, falling to 12% at 100 EeV. This augmentation
accounts for energy that is not deposited in the atmosphere but is carried
into the ground, largely by muons and neutrinos. The model-independent methods
of determining this factor are discussed in Aab et al. (2019b). Above 10 EeV,
the energy is determined with a statistical precision
of 8% and with a systematic uncertainty of $\sim$14% (Dawson (2020)).
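The integration and augmentation steps can be sketched as follows. The log-linear interpolation of the invisible-energy fraction between the two quoted anchor points (20% at 0.1 EeV, 12% at 100 EeV) is an assumption of this sketch, not the published parametrization of Aab et al. (2019b):

```python
import numpy as np

def invisible_energy_fraction(e_cal_EeV):
    """Fraction of the primary energy carried into the ground by muons and
    neutrinos. Anchored at the two values quoted in the text; the
    log-linear interpolation between them is an assumption of this
    sketch."""
    return float(np.interp(np.log10(e_cal_EeV), [-1.0, 2.0], [0.20, 0.12]))

def fd_energy(X, dEdX):
    """E_FD sketch: trapezoidal integral of the longitudinal profile dE/dX
    over slant depth X (the calorimetric energy, here taken to come out
    in EeV), augmented by the invisible-energy correction."""
    X = np.asarray(X, dtype=float)
    dEdX = np.asarray(dEdX, dtype=float)
    e_cal = float(np.sum(0.5 * (dEdX[1:] + dEdX[:-1]) * np.diff(X)))
    return e_cal * (1.0 + invisible_energy_fraction(e_cal))
```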
### 3.3 Determination of the energy of the primary particles
The methods by which data from the surface detectors are calibrated to obtain
the energies of the primaries are detailed in Aab et al. (2020a). Use is made
of hybrid events, both for showers with $\theta$ $<$ 60∘ (referred to as
‘vertical events’) and for events of larger zenith angles (‘inclined events’).
For the vertical events, the measure of S(1000) is first adjusted to the value
that a shower would have had, had it arrived at 38∘ from the vertical,
$S\mathrm{{}_{38}}$, as this is the median zenith angle for the vertical
sample. Using the 3338 hybrid events that are available, the calibration
relationship is $E\mathrm{{}_{FD}}=A\,{S\rm{{}_{38}}}^{B}$, with $A=(0.186\pm
0.003)\,\text{EeV}$ and $B$ = 1.031 $\pm$ 0.004. The calibration constants $A$
and $B$ are then used to estimate the energy for all SD events,
$E\mathrm{{}_{SD}}$. The statistical uncertainty of $E\mathrm{{}_{SD}}$,
obtained by propagating the errors on $A$ and $B$, is 1% at the energies
considered in this paper. The energy resolution, obtained from the spread of
$E\mathrm{{}_{SD}}$ values at a given $E\mathrm{{}_{FD}}$ in the calibration
events, is $\sim$8% at the highest energies (Aab et al. (2020a)).
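The vertical calibration can be sketched as a one-liner with simple error propagation. Treating the errors on $A$ and $B$ as independent is an assumption here; the fitted constants are in fact correlated, which is why this naive propagation exceeds the $\sim$1% statistical uncertainty quoted above:

```python
import math

def sd_energy(S38, A=0.186, sA=0.003, B=1.031, sB=0.004):
    """E_SD = A * S38**B with the vertical-event calibration constants
    (A in EeV, S38 in VEM). Errors on A and B are propagated as if
    independent, which is an approximation: the calibration fit
    correlates them, so this overestimates the uncertainty."""
    E = A * S38 ** B
    rel = math.sqrt((sA / A) ** 2 + (math.log(S38) * sB) ** 2)
    return E, E * rel

# A hypothetical shower with S38 = 100 VEM lands at roughly 21.5 EeV.
E, sigma = sd_energy(100.0)
```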
A similar calibration procedure is adopted for the events with $\theta>$ 60∘.
Here the calibration is made using $N\mathrm{{}_{19}}$ as the surrogate for
the shower size. The value of $N\mathrm{{}_{19}}$ is then adjusted to the
value ($N\mathrm{{}_{19,68}}$) for a shower arriving at 68∘, the median
zenith angle of the sample. The calibration is made with 389 events and the
values of $A$ and $B$ are $A$ = (5.32 $\pm$ 0.07) EeV and $B$ = 1.05 $\pm$
0.02, where $N\mathrm{{}_{19}}$ replaces $S\mathrm{{}_{38}}$. The smaller
number of events available for evaluation of the energy of the more inclined
events arises from the higher energy threshold required (4 EeV as against 3
EeV), and because there is a requirement for the shower maximum to be in the
field of view of the FD telescopes. For inclined events the maximum is very
distant from the impact point, effectively placing an upper limit on the
zenith angle of $\sim$73∘ for both to be observable. For these events, the
energy resolution is estimated as 12%, at the highest energies, from a
comparison of $N\mathrm{{}_{19}}$ with $E\mathrm{{}_{FD}}$ (Pierre Auger
Collaboration, in preparation).
For hybrid events, two estimates of the energy are available, namely that from
the one, or more, fluorescence measurements, and that from the determination
of S(1000) and the use of the calibration data. For consistency, the latter
value has been quoted in all cases as it is available for all events. Average
uncertainties in energy of 8% for vertical events and 12% for inclined events
are given. The systematic uncertainty in the energy estimates coming from
that in S(1000) depends on the distance spread of the signals in an event and
on the presence, or otherwise, of saturated stations. The dominant systematic
uncertainty in the energy estimates, of 14%, comes from the FD measurements.
## 4 The Events of the Catalog
The catalog presented in this paper contains details of the 100 highest energy
events recorded using the array of water-Cherenkov detectors of the Pierre
Auger Observatory, together with similar data for a further nine events used
in the energy-calibration procedures outlined in Section 3.3. Full details of
all 109 events are available at https://opendata.auger.org/catalog/. A list
summarising the events is also included there. In this section, features of
eight exemplary events are discussed in some detail to enable features in the
full set of data to be appreciated. One of the two hybrid events discussed
below has an energy lying just outside of the range of the top 100.
The events are identified with a catalog number (#$N$) that can be used to
locate them in the depository, and by a name, PAOddmmyy, that indicates the
day, month and year of detection.
### 4.1 Description of Individual Events
#### 4.1.1 Vertical Events
####
PAO191110 (#1): Some properties of the most energetic air-shower registered
with the Surface Detector are shown in Figure 3. The primary energy is (166
$\pm$ 13) EeV with the shower impacting the surface array at a zenith angle
$\theta$ of 58.6∘. It has a right ascension $\alpha$ of 128.9∘ and a
declination $\delta$ of $-52.0^{\circ}$. The top-middle panel shows the event
footprint on the ground, which spans an area of approximately (13 $\times$ 6)
$\mathrm{km^{2}}$, with 34 water-Cherenkov detectors (WCDs) triggered. Black
dots correspond to stations that triggered randomly. The detectors struck are
shown in a plane perpendicular to the direction of arrival in the top right-
hand panel, where the red point corresponds to the position of the shower
core. The color coding and the blue arrow show the direction of propagation of
the air shower, evolving from green for detectors that trigger early through
to red for those that are triggered later. The radius of each circle is
proportional to $\log S$, where $S$ is the signal size measured in VEM.
Figure 3: Features of the most energetic event, (PAO191110, #1) recorded with
the Surface Detector of the Pierre Auger Observatory. See text for details.
In the left of the middle panel, the lateral distribution of the recorded
signals, as a function of the distance to the shower core, is shown. The
triggered (blue circles) and non-triggered stations (orange triangles) are
indicated. The event has two saturated stations (blue open circles) close to
the shower core. Events with two saturated detectors are rare occurrences:
only three events in the full data sample have two detectors that are
saturated simultaneously. The lateral spread of the signals is described by
the modified Nishimura-Kamata-Greisen (NKG) lateral distribution function
(LDF) discussed in Section 3.2.1. The value of the exponent $\beta$ in the LDF
is given in the top-left panel. In the right of the middle panel, the time
delays with respect to a fit that assumes a plane shower front are shown for
the triggered stations. The delays are measured in ns.
In the bottom three panels, the arrival time distributions of the signals
recorded at three detectors (marked 1 to 3 on the signal map) are displayed.
The different colors indicate the signals from the three photomultipliers in
each detector. These traces exemplify how signal shapes vary with respect to
the distance from the shower core. Here, and below, detectors have been
selected that lie close to the distance (1000 m) used to define the shower
size (Section 3.2.2), and at other distances, selected according to the
features being illustrated. It is known from direct measurements (Linsley &
Scarsi (1962)) that, except within a few meters of the shower axis, muons
precede the electromagnetic component. The arrival times of the two components
overlap to some extent, but the electromagnetic component lags the muon
signals by an amount that increases with distance from the shower core. At
1000 m, the risetime, ${t_{1/2}}(1000)$, in this event is close to 100 ns
(Section 3.2). The muons that are detected are typically minimum ionizing
particles: as a result their signals show a fast risetime and a decay time
that confines the signals to one to three 25 ns time bins. As the distance
to the shower core increases, there is more dispersion of the shower
particles, with smaller signals that are spread out in time.
####
PAO141021 (#4): An event of primary energy (155 $\pm$ 12) EeV arriving at the
ground-level at quasi-normal incidence (the measured zenith angle is 6.8∘) is
shown in Figure 4. The footprint of the event is more compact and less
elongated than that of PAO191110, (#1). The top-middle panel shows the
footprint on the ground, which spans an area of approximately (6 $\times$ 3)
km2: 13 WCDs are triggered. The middle panels show the lateral distribution of
the recorded signals as a function of the distance to the shower core on the
left, and on the right, the time delays with respect to a plane shower front,
perpendicular to the incoming direction of the shower.
Figure 4: Features of a vertical event with a reconstructed energy of 155 EeV,
(PAO141021, #4). See text for details.
The signals and arrival times (lower panels) of the particles recorded at the
three selected detectors are markedly different from those selected for event
PAO191110, (#1). The station with the largest signal (at $897\,$m from the
core) records more than 1000 VEM, a factor of 2.6 greater than the signal in
event PAO191110 recorded at a similar distance, 924 m, from the core. As the
distance travelled through the atmosphere is substantially shorter for this
near-vertical event, the particles suffer less attenuation, resulting in a
larger contribution to the signal from the electromagnetic component. This is
reflected in the slower risetime: $t_{1/2}(1000)=(360~{}\pm~{}10)\,\text{ns}$.
Likewise, a $\beta$ value of $-2.6$ indicates that the LDF of this event is
steeper than that of event PAO191110 for which $\beta$ is $-2.0$.
####
PAO171228 (#8): An event with primary energy (132 $\pm$ 11) EeV arriving with
zenith angle $\theta=41.7^{\circ}$ is shown in Figure 5. As can be seen in the
top-middle panel, only 19 WCDs have been triggered because the footprint of
this event extends beyond the limits of the array (the dashed grey line marks
the perimeter). Although the event is not fully contained, the reconstruction
of the main observables used in the various physics analyses (Section 3.2) is
of high-quality.
In the bottom-right panel (station id #1346) there is a signal of over 3 VEM
at about $6\,\upmu$s. Such signals are due to a contribution from direct light
reaching one photomultiplier and are likely caused by the passage of a
particle close to the location of the photomultiplier, perhaps moving in an
upward direction, or possibly due to light from an electron produced in a muon
decay where the decay electron has been emitted towards the photomultiplier.
Under these conditions, the Cherenkov photons are detected directly, and a
sharp, distinctive signal is recorded by a single photomultiplier, rather than
the broader signals produced when the light is scattered on the inner
reflective walls of the WCDs. The increase in signal size caused by the direct
light varies with distance and is typically about 1% at 1000 m for events
arriving close to the vertical.
Figure 5: Features of an event, (PAO171228, #8) recorded with WCDs located at
one of the boundaries of the Surface Detector. See text for details. Note that
in detector #1346 only two of the three photomultipliers were operational.
Reality dictates that it is impossible to keep all three photomultipliers
operational 100% of the time. Failures of two, or even all three,
photomultipliers inevitably occur. Typically 98% of all stations are active at
any time, sending triggers at 20 Hz to the central station (Section 3.1).
####
PAO110127 (#15): This event (Figure 6) has been selected to show some singular
signals that are relatively rare. In this event, 14 water-Cherenkov detectors
were triggered and used to measure the energy, (116 $\pm$ 9) EeV, the zenith
angle, $\theta=24.9^{\circ}$, and the risetime at 1000 m, (320 ${\pm}$ 10) ns.
However, the detector closest to the core (located at just over 500 m) shows a
saturated signal (see the bottom-left panel in the figure). In this case, the
saturation is due to the overflow of the finite dynamic range of the read-out
electronics. The procedure used to recover the majority of such signals is
discussed in Section 3.2.2 above.
The bottom-right panel (station id #1346) again exemplifies, as in Figure 5, a
signal of over 10 VEM at about 3.8 $\upmu$s that contains a contribution from
direct light reaching one photomultiplier.
Figure 6: Features of an event, (PAO110127, #15) in which one of the WCDs is
saturated. See text for details.
#### 4.1.2 Inclined Events
####
PAO150926 (#17): The inclined event, zenith angle $\theta=77.2^{\circ}$, with
the highest energy, (113 $\pm$ 14) EeV, is shown in Figure 7. The shower
triggered 75 WCDs in an elongated pattern on the ground, over an area close to
(35 $\times$ 6) km2. The shower particles must traverse long distances to
reach the ground at such inclinations. Thus, electromagnetic particles are
mostly absorbed in the atmosphere and the signals at the ground are produced
almost entirely by muons. In contrast to events with lower inclinations, most
of the signal arrives within a very short time of around 200 ns, independent
of the location within the shower footprint (see bottom row in Figure 7).
Likewise, the distribution of the integrated signal on the ground loses the
near-rotational symmetry of more vertical events (Section 3.2.1). Hence, the
distribution of the recorded signals as a function of the distance to the
shower core shown in the left middle panel cannot be described by a single
rotationally-symmetric function. In the middle-right panel, the delay of the
start of the signal in each triggered WCD with respect to a plane shower front
is presented. The shower is very asymmetric and cannot be well described by,
for example, a concentrically-inflated spherical model.
Figure 7: Features of the most energetic event, (PAO150926, #17) belonging to
the set of inclined showers ($\theta>~{}60^{\circ}$). See text for details.
The reconstruction, using a 2-dimensional pattern of muon densities at the
ground (Section 3.2.3) for this event, is presented in Figure 8. In the left
panel, the distribution of the triggered stations around the shower core in
the plane perpendicular to the shower direction (the shower plane) is shown in
polar coordinates. The coordinate system is such that the y-axis coincides
with the intersection of the ground plane with the shower plane (dashed line).
Polar angles close to zero (along the positive x-axis) correspond to stations
triggering before the shower core arrives at the ground (so-called ‘early
stations’), while angles towards 180∘ correspond to ‘late stations’. The
colored contour lines indicate the expected signal for the distribution of
muon densities that best fits the observed signals. The direction of the
component of Earth’s magnetic field in the shower plane is indicated by the
black arrow. Note how the signal pattern is distorted in the direction
perpendicular to the magnetic field. In addition to the distortion induced by
the geomagnetic field, there is a small difference between the signals of
early (right of dashed line) and late stations (left of dashed line). This
difference arises from the attenuation of muons, and also from the different
angles of incidence of muons on the detectors. In the right-hand panel slices
of the LDF parallel and perpendicular to the projected magnetic field are
shown.
Figure 8: _Left:_ 2D distribution of measured and expected signals in the
shower plane. Station markers are colored according to the signal. The
direction of the magnetic field in the shower plane ($B\mathrm{{}_{proj}}$) is
indicated by the black arrow. Triangular markers are stations which, seen from
the position of the core, lie within $\pm$ 45∘ of a direction perpendicular to
$B\mathrm{{}_{proj}}$. That is, these stations are in the direction of the
deflection that charged particles experience in the magnetic field. More
particles therefore reach those stations (enhancing the signal) compared to
stations that are at the same distance to the core but that lie along the
direction of the magnetic field (circular markers). The intersection of the
shower plane with the ground plane is shown by the dashed line. _Right:_
projection of the signal distributions as a function of the distance from the
shower core. The markers show the signal measured at the stations, while the
curves show the expected signal. Stations in the direction parallel to the
magnetic field are shown on the left, with stations in the direction
perpendicular to the magnetic field on the right.
####
PAO200313 (#30): This event (Figure 9) is the second highest-energy inclined
event with an energy of (104 $\pm$ 12) EeV. At a zenith angle of
$\theta=65.1^{\circ}$, this shower triggered 38 detector stations in an
elongated pattern on the ground covering about (19 $\times$ 6) km2. As in the
previous case,
the shower pattern at the ground shows some asymmetry. Even at this
inclination, there is a substantial electromagnetic component present and an
additional 3 km of atmosphere (the early-late effect) corresponds to more than
five radiation lengths. Thus, the asymmetry arises dominantly from the
difference in the attenuation of the electromagnetic component rather than
from deflections of the muons in the geomagnetic field. The effect is
illustrated in Figure 10.
Figure 9: Features of the second most energetic event, (PAO200330, #30)
belonging to the set of inclined showers ($\theta>60^{\circ}$). See text for
details.
Figure 10: _Left:_ 2D distribution of measured and expected signals in the
shower plane. _Right:_ the projection of the signal distribution onto the
distance from the shower core. Note the slight asymmetry between the left
(late stations) and right half (early stations) of the signal in the shower
plane (left panel) and the lack of asymmetry between stations parallel and
perpendicular to the magnetic field (right panel).
#### 4.1.3 Hybrid events
The first of the two events discussed here passes the high-quality criteria
applied to select the sub-sample of hybrid events used for energy calibration
(Section 3.3) of vertical events. The second event represents the most
energetic shower used in the calibration of inclined events. The details of
the ten most energetic hybrid events used for calibration, including those
described below, can be found at https://opendata.auger.org/catalog/.
####
PAO100815 (#84): This is the most energetic hybrid event, arriving at a zenith
angle $\theta=53.8^{\circ}$. Details of the event are shown in Figures 11 to
14. The energy estimate from the determination of S(1000) is (82 $\pm$ 7) EeV,
consistent with that from the fluorescence measurements of (85 $\pm$ 4) EeV.
There are 22 triggered stations with a footprint of about (7.5 $\times$ 6)
km2. The lateral distribution of signals is described by the modified NKG
function. The signals registered by the WCDs are shown in the bottom panels of
Figure 11. The light received at the station about 450 m from the shower core
(left panel) has saturated the dynamic range of the two photomultipliers that
were operational (see Section 3.2.2 and event PAO110127, #15 above). The
amplitude difference indicates the complexity of the saturation process. For
the two detectors with distances to the core larger than 1000 m, the FADC
traces show the typical structure of shower signals, where the early parts of
the FADC
traces are dominated by muons and the tails are populated with broader signals
due to photons, electrons and positrons. The risetime at 1000 m is (127 $\pm$
5) ns.
Figure 11: Features of the most energetic hybrid event, PAO100815, #84. See
text for details.
Fluorescence light was detected at all four FD stations. Each individual
hybrid-reconstruction passed the selection criteria. The reconstructed
profiles of the energy deposition in the atmosphere are shown in the lower
part of Figure 12, while the reconstructed energies (Section 3.3) and depths
of shower maximum ($X\mathrm{{}_{max}}$) are displayed in the upper section of
the figure.
Shower events crossing the field of view of a telescope at larger distances
have lower angular velocities than those that pass close to the telescope.
Additionally, when a shower is observed approaching the telescope, the signals
are registered more rapidly across the camera than for those from showers
moving away from it. These effects result in different angular velocities of
the shower images on the telescope cameras. Accordingly, the number of points
is different in the profiles of the energy deposit recorded at the individual
stations. The discrete binning of the energy deposits is a consequence of the
100 ns readout of the photomultipliers of the fluorescence telescopes.
The uncertainties in the energy and $X\mathrm{{}_{max}}$ estimates from
individual stations of the Fluorescence Detector differ mainly because
different amounts of Cherenkov light are detected at them. The relatively
large fraction of Cherenkov light (12%) at the Los Leones station results in
a larger uncertainty in the longitudinal profile because Cherenkov emission is
strongly beamed around the shower axis. Thus, a small uncertainty in the
shower geometry translates into a larger uncertainty in this profile when
compared with the estimate from Coihueco, where the Cherenkov light is only 5%
of the integral of the light flux. The uncertainty is also affected by other
effects, such as the distance of the shower to the FD sites, that result in
different numbers of photons being detected. At the Coihueco site, the shower
image is detected at two telescopes, giving rise to a gap in the
reconstruction of the profile of deposited energy. This occurs because the
times for which the shower image is close to the border of the field of view
of a telescope are rejected as it is not possible to make an accurate estimate
of the light flux. Overall, the $X\mathrm{{}_{max}}$ and energy estimates from
individual FD stations agree within quoted statistical uncertainties.
In Figure 13, the camera views are shown for all eight telescopes at the four
sites where the event was detected. The colors assigned to individual pixels
represent centroids of pulses in the photomultipliers, thus marking the
arrival time of fluorescence and Cherenkov light at the telescopes. Dark grey
pixels indicate pixels that triggered randomly and do not match the time fit
used to determine the shower geometry (Section 3.2.4). These random triggers
arise
from the night-sky background that varies for each detected shower and with
the direction in which a telescope is pointing. There are no such pixels in
the telescopes shown in event PAO140131, #101 (Figure 16). The horizontal axes
in the camera views correspond to local azimuth angles, defined counter-
clockwise from the back-wall of the FD station. The origin points to the
right, looking on to the shower from the position of the station. The vertical
axis is an angular elevation of the viewing direction of the FD pixels.
Figure 12: Reconstructed parameters of PAO100815, #84. FD stations used in
the reconstruction are distinguished by different colors. The red lines
correspond to fits to the profiles of the energy deposition using the
universal shower profile function (Section 3.2.4). The yellow bands are
centered on the combined weighted average of the measurements of
$X\mathrm{{}_{max}}$ and the energy at the FD sites. The widths of the bands
correspond to the statistical uncertainties of combinations. The uncertainty
in the SD energy is 8% (Section 3.3).
Figure 13: The camera views in all four telescopes for event PAO100815, #84.
The colors (violet to red) indicate the times (early to late) at which the
light reaches each pixel. Dark pixels are random coincidences and not used in
the reconstruction.
In Figure 14 a three-dimensional view of the event is exhibited.
Figure 14: Three-dimensional visualization of PAO100815, #84. The lines
correspond to the light rays and point to the telescopes of the fluorescence
detectors. The colors of the light rays and of the SD stations represent
trigger times of FD and SD PMTs, respectively.
####
PAO140131 (#101): This is the second most energetic hybrid event and belongs
to the dataset used to calibrate events with zenith angle above 60∘. The
zenith angle $\theta=60.8^{\circ}$. The energy reconstructed from the SD
signals is (78 $\pm$ 9) EeV, consistent with that from the fluorescence
measurement of (73 $\pm$ 8) EeV. With 30 triggered stations, the footprint is
elongated and covers an area of ($14\times 6$) km2. At 60∘, the depth of the
atmosphere is twice the atmospheric vertical depth. Thus the electromagnetic
component of the shower is partially quenched (see Section 3.2.2). The lateral
distribution function and the time delays of the signal start times are barely
asymmetric (see Figure 15) and can thus be described by the modified NKG
function used for the vertical reconstruction.
Fluorescence light was detected at three FD stations (Los Morados, Coihueco
and Loma Amarilla), but only the reconstruction for Loma Amarilla passed the
selection criteria. The profile of energy deposition (Figure 16, bottom-right)
is obtained from the profile of the detected light (Figure 16, bottom-left).
The color bands in the figure of the light flux profile show the contributions
from different light sources. Fluorescence light dominates, while Cherenkov
light scattered into the telescope makes up 10% of the integrated signal.
Figure 15: The parameters reconstructed using the data from the WCDs for
event, PAO140131, #101.
The top panels of Figure 16 show the camera views of the shower crossing two
adjacent telescopes at the Loma Amarilla site. The photomultipliers are
sequentially triggered (top-left panel with colors coding the trigger time).
The charges at each photomultiplier are proportional to the light flux
received at the entrance window of each telescope. The shower image is
detected in two telescopes giving rise to a gap in the reconstruction of the
profile.
Figure 16: Data from the fluorescence telescopes in event PAO140131, (#101).
The profile of the energy deposits (bottom right) is accompanied by the light
flux profile (bottom left) and camera views from two telescopes at Loma
Amarilla (top). The shower fell far from the telescopes with the closest point
to the shower axis being about 35 km.
## 5 A Sky Map of the 100 highest-energy events
A map showing the right ascensions and declinations of the 100 highest-energy
events is displayed in Figure 17.
Figure 17: The positions of the arrival directions of the 100 highest-energy
cosmic rays detected at the Pierre Auger Observatory shown in equatorial
coordinates. The open circles show the arrival directions of the inclined
events. The graded color scale shows how the exposure varies with declination
for the whole data set. The white region, above $\sim$45∘, is not accessible
from the latitude of Malargüe. The dashed line indicates the Galactic Plane.
## Acknowledgments
The successful installation, commissioning, and operation of the Pierre Auger
Observatory would not have been possible without the strong commitment and
effort from the technical and administrative staff in Malargüe. We are very
grateful to the following agencies and organizations for financial support:
Argentina – Comisión Nacional de Energía Atómica; Agencia Nacional de
Promoción Científica y Tecnológica (ANPCyT); Consejo Nacional de
Investigaciones Científicas y Técnicas (CONICET); Gobierno de la Provincia de
Mendoza; Municipalidad de Malargüe; NDM Holdings and Valle Las Leñas, in
gratitude for their continuing cooperation over land access; Australia – the
Australian Research Council; Belgium – Fonds de la Recherche Scientifique
(FNRS); Research Foundation Flanders (FWO); Brazil – Conselho Nacional de
Desenvolvimento Científico e Tecnológico (CNPq); Financiadora de Estudos e
Projetos (FINEP); Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro
(FAPERJ); São Paulo Research Foundation (FAPESP) Grants No. 2019/10151-2, No.
2010/07359-6 and No. 1999/05404-3; Ministério da Ciência, Tecnologia,
Inovações e Comunicações (MCTIC); Czech Republic – Grant No. MSMT CR LTT18004,
LM2015038, LM2018102, CZ.02.1.01/0.0/0.0/16_013/0001402,
CZ.02.1.01/0.0/0.0/18_046/0016010 and CZ.02.1.01/0.0/0.0/17_049/0008422;
France – Centre de Calcul IN2P3/CNRS; Centre National de la Recherche
Scientifique (CNRS); Conseil Régional Ile-de-France; Département Physique
Nucléaire et Corpusculaire (PNC-IN2P3/CNRS); Département Sciences de l’Univers
(SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No. LABEX
ANR-10-LABX-63 within the Investissements d’Avenir Programme Grant No.
ANR-11-IDEX-0004-02; Germany – Bundesministerium für Bildung und Forschung
(BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-
Württemberg; Helmholtz Alliance for Astroparticle Physics (HAP); Helmholtz-
Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium für Kultur und
Wissenschaft des Landes Nordrhein-Westfalen; Ministerium für Wissenschaft,
Forschung und Kunst des Landes Baden-Württemberg; Italy – Istituto Nazionale
di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero
dell’Istruzione, dell’Universitá e della Ricerca (MIUR); CETEMPS Center of
Excellence; Ministero degli Affari Esteri (MAE); México – Consejo Nacional de
Ciencia y Tecnología (CONACYT) No. 167733; Universidad Nacional Autónoma de
México (UNAM); PAPIIT DGAPA-UNAM; The Netherlands – Ministry of Education,
Culture and Science; Netherlands Organisation for Scientific Research (NWO);
Dutch national e-infrastructure with the support of SURF Cooperative; Poland –
Ministry of Education and Science, grant No. DIR/WK/2018/11; National Science
Centre, Grants No. 2016/22/M/ST9/00198, 2016/23/B/ST9/01635, and
2020/39/B/ST9/01398; Portugal – Portuguese national funds and FEDER funds
within Programa Operacional Factores de Competitividade through Fundação para
a Ciência e a Tecnologia (COMPETE); Romania – Ministry of Research, Innovation
and Digitization, CNCS/CCCDI UEFISCDI, grant no. PN19150201/16N/2019 and
PN1906010 within the National Nucleus Program, and projects number TE128, PN-
III-P1-1.1-TE-2021-0924/TE57/2022 and PED289, within PNCDI III; Slovenia –
Slovenian Research Agency, grants P1-0031, P1-0385, I0-0033, N1-0111; Spain –
Ministerio de Economía, Industria y Competitividad (FPA2017-85114-P and
PID2019-104676GB-C32), Xunta de Galicia (ED431C 2017/07), Junta de Andalucía
(SOMM17/6104/UGR, P18-FR-4314) Feder Funds, RENATA Red Nacional Temática de
Astropartículas (FPA2015-68783-REDT) and María de Maeztu Unit of Excellence
(MDM-2016-0692); USA – Department of Energy, Contracts No. DE-AC02-07CH11359,
No. DE-FR02-04ER41300, No. DE-FG02-99ER41107 and No. DE-SC0011689; National
Science Foundation, Grant No. 0450696; The Grainger Foundation; Marie Curie-
IRSES/EPLANET; European Particle Physics Latin American Network; and UNESCO.
A. Abdul Halim13, P. Abreu71, M. Aglietta53,51, I. Allekotte1, P. Allisona, K.
Almeida Cheminant69, A. Almela8,12, J. Alvarez-Muñiz78, J. Ammerman Yebra78,
G.A. Anastasi53,51, L. Anchordoqui85, B. Andrada8, S. Andringa71, C. Aramo49,
P.R. Araújo Ferreira41, E. Arnone62,51, J. C. Arteaga Velázquez66, H. Asorey8,
P. Assis71, M. Ave19, G. Avila11, E. Avocone56,45, A.M. Badescu74, A.
Bakalova31, A. Balaceanu72, F. Barbato44,45, J. Beattya, J.A. Bellido13,68, C.
Berat35, M.E. Bertaina62,51, X. Bertou1, G. Bhatta69, P.L. Biermannh, P.
Billoir34, V. Binet6, K. Bismark38,8, T. Bister41, J. Biteau36, J. Blazek31,
C. Bleve35, J. Blümer40, M. Boháčová31, D. Boncioli56,45, C. Bonifazi9,25, L.
Bonneau Arbeletche21, N. Borodai69, J. Bracki, T. Bretz41, P.G. Brichetto
Orchera8, F.L. Briechle41, P. Buchholz43, A. Bueno77, S. Buitink15, M.
Buscemi57,46, M. Büsken38,8, A. Bwembya79,80, K.S. Caballero-Mora65, L.
Caccianiga58,48, I. Caracas37, R. Caruso57,46, A. Castellina53,51, F.
Catalani18, G. Cataldi47, L. Cazon78, M. Cerda10, R. Cester62,51 J.A.
Chinellato21, J. Chirinos86 J. Chudoba31, L. Chytka32, R.W. Clay13, A.C. Cobos
Cerutti7, R. Colalillo59,49, A. Coleman89, M.R. Coluccia47, R. Conceição71, A.
Condorelli44,45, G. Consolati48,54, F. Contreras11, F. Convenga40, D. Correia
dos Santos27, C.E. Covault83, M. Cristinziani43, C.S. Cruz Sanchez4, S.
Dasso5,3, K. Daumiller40, B.R. Dawson13, R.M. de Almeida27, J. de Jesús8,40,
S.J. de Jong79,80, J.R.T. de Mello Neto25,26, I. De Mitri44,45, J. de
Oliveira17, D. de Oliveira Franco21, F. de Palma55,47, V. de Souza19, E. De
Vito55,47, A. Del Popolo57,46, O. Deligny33, L. Deval40,8, A. di Matteo51, M.
Dobre72, C. Dobrigkeit21, J.C. D’Olivo67, L.M. Domingues Mendes71, A.
Dorofeevi, R.C. dos Anjos24, J. Ebr31, M. Eman79,80, R. Engel38,40, I.
Epicoco55,47, M. Erdmann41, A. Etchegoyen8,12, H. Falcke79,81,80, J. Farmer88,
G. Farrar87, A.C. Fauth21, N. Fazzinif, F. Feldbusch39, F. Fenu62,51, B.
Fick86, J.M. Figueira8, A. Filipčič76,75, T. Fitoussi40, B. Flaggs89, T.
Fodran79, T. Fujii88,g, A. Fuster8,12, C. Galea79, C. Galelli58,48, B.
García7, H. Gemmeke39, F. Gesualdi8,40, A. Gherghel-Lascu72, P.L. Ghia33, U.
Giaccari79, M. Giammarchi48, J. Glombitza41, F. Gobbi10, F. Gollan8, G.
Golup1, M. Gómez Berisso1, P.F. Gómez Vitale11, J.P. Gongora11, J.M.
González1, N. González14, I. Goos1, D. Góra69, A. Gorgi53,51, M. Gottowik78,
T.D. Grubb13, F. Guarino59,49, G.P. Guedes22, E. Guido43, S. Hahn40,8, P.
Hamal31, M.R. Hampel8, P. Hansen4, D. Harari1, J. Hartoni, V.M. Harvey13, A.
Haungs40, T. Hebbeker41, D. Heck40, C. Hojvatf, J.R. Hörandel79,80, P.
Horvath32, M. Hrabovský32, T. Huege40,15, A. Insolia57,46, P.G. Isar73, P.
Janecek31, J.A. Johnsen84, J. Jurysek31, A. Kääpä37, K.H. Kampert37, B.
Keilhauer40, A. Khakurdikar79, V.V. Kizakke Covilakam8,40, H.O. Klages40, M.
Kleifges39, J. Kleinfeller10, F. Knapp38, J. Knappd,e, N. Kunka39, C.
Lachaudl, B.L. Lago16, N. Langner41, M.A. Leigui de Oliveira23, V. Lenok38, A.
Letessier-Selvon34, I. Lhenry-Yvon33, D. Lo Presti57,46, L. Lopes71, R.
López63, L. Lu90, Q. Luce38, J.P. Lundquist75, A. Machado Payeras21, D.
Mandat31, B.C. Manning13, J. Manshanden42, P. Mantschf, S. Marafico33, F.M.
Mariani58,48, A.G. Mariazzi4, I.C. Mariş14, G. Marsella60,46, D.
Martello55,47, S. Martinelli40,8, O. Martínez Bravo63, M.A. Martins78, M.
Mastrodicasa56,45, H.J. Mathes40, J. Matthewsb, G. Matthiae61,50, E.
Mayotte84,37, S. Mayotte84, P.O. Mazurf, G. Medina-Tanco67, J. Meinert37, D.
Melo8, A. Menshikov39, S. Michal32, M.I. Micheletti6, L. Miramonti58,48, S.
Mollerach1, F. Montanet35, L. Morejon37, C. Morello53,51, A.L. Müller31, K.
Mulrey79,80, R. Mussa51, M. Muzio87, W.M. Namasaka37, A. Nasr-Esfahani37, L.
Nellen67, G. Nicora2, M. Niculescu-Oglinzanu72, M. Niechciol43, D. Nitz86, I.
Norwood86, D. Nosek30, V. Novotny30, L. Nožka32, A Nucita55,47, L.A. Núñez29,
C. Oliveira19, M. Palatka31, J. Pallotta2, G. Parente78, A. Parra63, J.
Pawlowsky37, M. Pech31, J. Pȩkala69, R. Pelayo64, E.E. Pereira Martins38,8, J.
Perez Armand20, C. Pérez Bertolli8,40, L. Perrone55,47, S. Petrera44,45, C.
Petrucci56,45, T. Pierog40, M. Pimenta71, M. Platino8, B. Pont79, M.
Pothast80,79, M. Pourmohammad Shavar60,46, P. Privitera88, M. Prouza31, A.
Puyleart86, S. Querchfeld37, J. Rautenberg37, D. Ravignani8, M. Reininghaus38,
J. Ridky31, F. Riehn71, M. Risse43, V. Rizi56,45, W. Rodrigues de Carvalho79,
J. Rodriguez Rojo11, M.J. Roncoroni8, S. Rossoni42, M. Roth40, E. Roulet1,
A.C. Rovero5, P. Ruehl43, A. Saftoiu72, M. Saharan79, F. Salamida56,45, H.
Salazar63, G. Salina50, J.D. Sanabria Gomez29, F. Sánchez8, E.M. Santos20, E.
Santos31, F. Sarazin84, R. Sarmento71, R. Sato11, P. Savina90, C.M. Schäfer40,
V. Scherini55,47, H. Schieler40, M. Schimassek40, M. Schimp37, F.
Schlüter40,8, D. Schmidt38, O. Scholten15, H. Schoorlemmer79,80, P.
Schovánek31, F.G. Schröder89,40, J. Schulte41, T. Schulz40, S.J. Sciutto4, M.
Scornavacche8,40, A. Segreto52,46, S. Sehgal37, S.U. Shivashankara75, G.
Sigl42, G. Silli8, O. Sima72,c, R. Smau72, R. Šmída88, P. Sommersj, J.F.
Soriano85, R. Squartini10, M. Stadelmaier31, D. Stanca72, S. Stanič75, J.
Stasielak69, P. Stassi35, M. Straub41, A. Streich38,8, M. Suárez-Durán14, T.
Suomijärvi36, A.D. Supanitsky8, Z. Szadkowski70, A. Tapia28, C. Taricco62,51,
C. Timmermans80,79, O. Tkachenko40, P. Tobiska31, C.J. Todero Peixoto18, B.
Tomé71, Z. Torrès35, A. Travaini10, P. Travnicek31, C. Trimarelli56,45, M.
Tueros4, R. Ulrich40, M. Unger40, L. Vaclavek32, M. Vacula32, J.F. Valdés
Galicia67, L. Valore59,49, E. Varela63, A. Vásquez-Ramírez29, D. Veberič40, C.
Ventura26, I.D. Vergara Quispe4, V. Verzi50, J. Vicha31, L.M. Villaseñor
Cendejas63, J. Vink82, S. Vorobiov75, C. Watanabe25, A.A. Watsond, A.
Weindl40, L. Wiencke84, H. Wilczyński69, D. Wittkowski37, B. Wundheiler8, P.
Younkk A. Yushkov31, O. Zapparrata14, E. Zas78, D. Zavrtanik75,76, M.
Zavrtanik76,75
The Pierre Auger Collaboration
1. 1
Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET), San
Carlos de Bariloche, Argentina
2. 2
Centro de Investigaciones en Láseres y Aplicaciones, CITEDEF and CONICET,
Villa Martelli, Argentina
3. 3
Departamento de Física and Departamento de Ciencias de la Atmósfera y los
Océanos, FCEyN, Universidad de Buenos Aires and CONICET, Buenos Aires,
Argentina
4. 4
IFLP, Universidad Nacional de La Plata and CONICET, La Plata, Argentina
5. 5
Instituto de Astronomía y Física del Espacio (IAFE, CONICET-UBA), Buenos
Aires, Argentina
6. 6
Instituto de Física de Rosario (IFIR) – CONICET/U.N.R. and Facultad de
Ciencias Bioquímicas y Farmacéuticas U.N.R., Rosario, Argentina
7. 7
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET,
UNSAM), and Universidad Tecnológica Nacional – Facultad Regional Mendoza
(CONICET/CNEA), Mendoza, Argentina
8. 8
Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET,
UNSAM), Buenos Aires, Argentina
9. 9
International Center of Advanced Studies and Instituto de Ciencias Físicas,
ECyT-UNSAM and CONICET, Campus Miguelete – San Martín, Buenos Aires, Argentina
10. 10
Observatorio Pierre Auger, Malargüe, Argentina
11. 11
Observatorio Pierre Auger and Comisión Nacional de Energía Atómica, Malargüe,
Argentina
12. 12
Universidad Tecnológica Nacional – Facultad Regional Buenos Aires, Buenos
Aires, Argentina
13. 13
University of Adelaide, Adelaide, S.A., Australia
14. 14
Université Libre de Bruxelles (ULB), Brussels, Belgium
15. 15
Vrije Universiteit Brussels, Brussels, Belgium
16. 16
Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, Nova Friburgo,
Brazil
17. 17
Instituto Federal de Educação, Ciência e Tecnologia do Rio de Janeiro (IFRJ),
Brazil
18. 18
Universidade de São Paulo, Escola de Engenharia de Lorena, Lorena, SP, Brazil
19. 19
Universidade de São Paulo, Instituto de Física de São Carlos, São Carlos, SP,
Brazil
20. 20
Universidade de São Paulo, Instituto de Física, São Paulo, SP, Brazil
21. 21
Universidade Estadual de Campinas, IFGW, Campinas, SP, Brazil
22. 22
Universidade Estadual de Feira de Santana, Feira de Santana, Brazil
23. 23
Universidade Federal do ABC, Santo André, SP, Brazil
24. 24
Universidade Federal do Paraná, Setor Palotina, Palotina, Brazil
25. 25
Universidade Federal do Rio de Janeiro, Instituto de Física, Rio de Janeiro,
RJ, Brazil
26. 26
Universidade Federal do Rio de Janeiro (UFRJ), Observatório do Valongo, Rio de
Janeiro, RJ, Brazil
27. 27
Universidade Federal Fluminense, EEIMVR, Volta Redonda, RJ, Brazil
28. 28
Universidad de Medellín, Medellín, Colombia
29. 29
Universidad Industrial de Santander, Bucaramanga, Colombia
30. 30
Charles University, Faculty of Mathematics and Physics, Institute of Particle
and Nuclear Physics, Prague, Czech Republic
31. 31
Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic
32. 32
Palacky University, RCPTM, Olomouc, Czech Republic
33. 33
CNRS/IN2P3, IJCLab, Université Paris-Saclay, Orsay, France
34. 34
Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Sorbonne
Université, Université de Paris, CNRS-IN2P3, Paris, France
35. 35
Univ. Grenoble Alpes, CNRS, Grenoble Institute of Engineering Univ. Grenoble
Alpes, LPSC-IN2P3, 38000 Grenoble, France
36. 36
Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
37. 37
Bergische Universität Wuppertal, Department of Physics, Wuppertal, Germany
38. 38
Karlsruhe Institute of Technology (KIT), Institute for Experimental Particle
Physics, Karlsruhe, Germany
39. 39
Karlsruhe Institute of Technology (KIT), Institut für Prozessdatenverarbeitung
und Elektronik, Karlsruhe, Germany
40. 40
Karlsruhe Institute of Technology (KIT), Institute for Astroparticle Physics,
Karlsruhe, Germany
41. 41
RWTH Aachen University, III. Physikalisches Institut A, Aachen, Germany
42. 42
Universität Hamburg, II. Institut für Theoretische Physik, Hamburg, Germany
43. 43
Universität Siegen, Department Physik – Experimentelle Teilchenphysik, Siegen,
Germany
44. 44
Gran Sasso Science Institute, L’Aquila, Italy
45. 45
INFN Laboratori Nazionali del Gran Sasso, Assergi (L’Aquila), Italy
46. 46
INFN, Sezione di Catania, Catania, Italy
47. 47
INFN, Sezione di Lecce, Lecce, Italy
48. 48
INFN, Sezione di Milano, Milano, Italy
49. 49
INFN, Sezione di Napoli, Napoli, Italy
50. 50
INFN, Sezione di Roma “Tor Vergata”, Roma, Italy
51. 51
INFN, Sezione di Torino, Torino, Italy
52. 52
Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo (INAF), Palermo,
Italy
53. 53
Osservatorio Astrofisico di Torino (INAF), Torino, Italy
54. 54
Politecnico di Milano, Dipartimento di Scienze e Tecnologie Aerospaziali ,
Milano, Italy
55. 55
Università del Salento, Dipartimento di Matematica e Fisica “E. De Giorgi”,
Lecce, Italy
56. 56
Università dell’Aquila, Dipartimento di Scienze Fisiche e Chimiche, L’Aquila,
Italy
57. 57
Università di Catania, Dipartimento di Fisica e Astronomia “Ettore Majorana”,
Catania, Italy
58. 58
Università di Milano, Dipartimento di Fisica, Milano, Italy
59. 59
Università di Napoli “Federico II”, Dipartimento di Fisica “Ettore Pancini”,
Napoli, Italy
60. 60
Università di Palermo, Dipartimento di Fisica e Chimica “E. Segrè”, Palermo,
Italy
61. 61
Università di Roma “Tor Vergata”, Dipartimento di Fisica, Roma, Italy
62. 62
Università Torino, Dipartimento di Fisica, Torino, Italy
63. 63
Benemérita Universidad Autónoma de Puebla, Puebla, México
64. 64
Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas
del Instituto Politécnico Nacional (UPIITA-IPN), México, D.F., México
65. 65
Universidad Autónoma de Chiapas, Tuxtla Gutiérrez, Chiapas, México
66. 66
Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán, México
67. 67
Universidad Nacional Autónoma de México, México, D.F., México
68. 68
Universidad Nacional de San Agustin de Arequipa, Facultad de Ciencias
Naturales y Formales, Arequipa, Peru
69. 69
Institute of Nuclear Physics PAN, Krakow, Poland
70. 70
University of Łódź, Faculty of High-Energy Astrophysics,Łódź, Poland
71. 71
Laboratório de Instrumentação e Física Experimental de Partículas – LIP and
Instituto Superior Técnico – IST, Universidade de Lisboa – UL, Lisboa,
Portugal
72. 72
“Horia Hulubei” National Institute for Physics and Nuclear Engineering,
Bucharest-Magurele, Romania
73. 73
Institute of Space Science, Bucharest-Magurele, Romania
74. 74
University Politehnica of Bucharest, Bucharest, Romania
75. 75
Center for Astrophysics and Cosmology (CAC), University of Nova Gorica, Nova
Gorica, Slovenia
76. 76
Experimental Particle Physics Department, J. Stefan Institute, Ljubljana,
Slovenia
77. 77
Universidad de Granada and C.A.F.P.E., Granada, Spain
78. 78
Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago
de Compostela, Santiago de Compostela, Spain
79. 79
IMAPP, Radboud University Nijmegen, Nijmegen, The Netherlands
80. 80
Nationaal Instituut voor Kernfysica en Hoge Energie Fysica (NIKHEF), Science
Park, Amsterdam, The Netherlands
81. 81
Stichting Astronomisch Onderzoek in Nederland (ASTRON), Dwingeloo, The
Netherlands
82. 82
Universiteit van Amsterdam, Faculty of Science, Amsterdam, The Netherlands
83. 83
Case Western Reserve University, Cleveland, OH, USA
84. 84
Colorado School of Mines, Golden, CO, USA
85. 85
Department of Physics and Astronomy, Lehman College, City University of New
York, Bronx, NY, USA
86. 86
Michigan Technological University, Houghton, MI, USA
87. 87
New York University, New York, NY, USA
88. 88
University of Chicago, Enrico Fermi Institute, Chicago, IL, USA
89. 89
University of Delaware, Department of Physics and Astronomy, Bartol Research
Institute, Newark, DE, USA
90. 90
University of Wisconsin-Madison, Department of Physics and WIPAC, Madison, WI,
USA
91. —–
92. a
Ohio State University, Columbus, OH, USA
93. b
Louisiana State University, Baton Rouge, LA, USA
94. c
also at University of Bucharest, Physics Department, Bucharest, Romania
95. d
School of Physics and Astronomy, University of Leeds, Leeds, United Kingdom
96. e
now at Deutsches Elektronen-Synchrotron DESY, Zeuthen, Germany
97. f
Fermi National Accelerator Laboratory, Fermilab, Batavia, IL, USA
98. g
now at Graduate School of Science, Osaka Metropolitan University, Osaka, Japan
99. h
Max-Planck-Institut für Radioastronomie, Bonn, Germany
100. i
Colorado State University, Fort Collins, CO, USA
101. j
Pennsylvania State University, University Park, PA, USA
102. k
Los Alamos National Laboratory, Los Alamos, NM, USA
103. l
Laboratoire AstroParticule et Cosmologie (APC), Université Paris 7, CNRS-
IN2P3, Paris, France
# Minimal Zee model for lepton ${g-2}$ and $W$-mass shifts
R. Primulando <EMAIL_ADDRESS>
Center for Theoretical Physics, Department of Physics, Parahyangan Catholic University, Jl. Ciumbuleuit 94, Bandung 40141, Indonesia
J. Julio <EMAIL_ADDRESS>
National Research and Innovation Agency, KST B. J. Habibie, South Tangerang 15314, Indonesia
P. Uttayarat <EMAIL_ADDRESS>
Department of Physics, Srinakharinwirot University, 114 Sukhumvit 23 Road, Wattana, Bangkok 10110, Thailand
###### Abstract
We present a unique Yukawa structure of the Zee model that can accommodate
neutrino oscillation data, solve the muon ${g-2}$ problem, and explain the
recent $W$-boson mass measurement. Our Yukawa structure is minimal in the
sense that it contains the smallest possible number of parameters. In this
minimal scenario, neutrino masses are quasidegenerate and are compatible with
both normal and inverted orderings. The mixing angle $\theta_{23}$ is
predicted to lie in the second (first) octant for normal (inverted) ordering.
In both cases, the $CP$-violating phase is close to $3\pi/2$. The minimal
texture also predicts a large branching fraction of the heavy neutral Higgs
boson into an electron-muon pair.
## I INTRODUCTION
The massive nature of neutrinos is the key evidence that the Standard Model
(SM) is incomplete. For the past 20 years, it has been our only conclusive
experimental evidence of physics beyond the SM. Recently, however, two new
experimental results have emerged that are in serious tension with SM
predictions. First, the
measurement of the anomalous magnetic dipole moment of the muon by the
Fermilab Muon ${g-2}$ Collaboration Abi _et al._ (2021) confirms the previous
Brookhaven result Bennett _et al._ (2006). Together they worsen the
disagreement with the SM prediction Aoyama _et al._ (2020, 2012); Czarnecki
_et al._ (2003); Gnendiger _et al._ (2013); Davier _et al._ (2017);
Keshavarzi _et al._ (2018); Colangelo _et al._ (2019); Hoferichter _et al._
(2019); Davier _et al._ (2020); Keshavarzi _et al._ (2020); Kurz _et al._
(2014); Melnikov and Vainshtein (2004); Masjuan and Sánchez-Puertas (2017);
Colangelo _et al._ (2017); Hoferichter _et al._ (2018); Gérardin _et al._
(2019); Bijnens _et al._ (2019); Colangelo _et al._ (2020); Blum _et al._
(2020); Colangelo _et al._ (2014) to a $4.2\sigma$ level. This is commonly
referred to as the muon ${g-2}$ problem, and its resolution, barring a major
revision in the SM prediction, requires a new physics contribution with
$\delta a_{\mu}=(251\pm 59)\times 10^{-11}$. Second, the CDF measurement of
the $W$-boson mass has pushed the world average value to $m_{W}=80.4242\pm
0.0087$ GeV Aaltonen _et al._ (2022), resulting in a 6.1$\sigma$ tension with
the SM prediction Workman _et al._ (2022). We will refer to this as the
$W$-mass problem.
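As a rough numerical sanity check of the muon $g-2$ discrepancy quoted above (a sketch only; the quoted $4.2\sigma$ comes from the full combination of experimental and theoretical uncertainties, while the ratio below uses only the combined new-physics requirement):

```python
# delta a_mu = (251 +/- 59) x 10^-11 is the new-physics contribution
# required to resolve the muon g-2 problem.
delta_a_mu = 251e-11
sigma = 59e-11

# Naive significance: how many standard deviations the required
# contribution sits away from zero.
n_sigma = delta_a_mu / sigma
print(f"muon g-2 discrepancy: {n_sigma:.1f} sigma")
```

This simple ratio lands close to the quoted tension, which is what one expects when the experiment-theory difference dominates the combination.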
The Zee model of neutrino masses Zee (1980) contains enough ingredients to
address those problems. (For other works that simultaneously explain the muon
$g-2$ and $W$-mass problems, see Refs. Babu _et al._ (2022); Han _et al._
(2022); Kawamura _et al._ (2022); Nagao _et al._ (2022); Arcadi and Djouadi
(2022); Bhaskar _et al._ (2022); Baek (2022); Zhou and Han (2022); Kim _et
al._ (2022a); Kim (2022); Dcruz and Thapa (2022); Chowdhury and Saad (2022);
Kim _et al._ (2022b); Chakrabarty (2022); Batra _et al._ (2022a, b, c);
Chowdhury _et al._ (2022).) It induces neutrino masses via radiative
corrections involving TeV-scale physics. It has several Yukawa couplings
that, together with such a mass scale, can induce a sizable anomalous
magnetic moment of the muon. Thanks to its scalar content, the model can also
generate oblique parameters that deviate significantly from zero, which can
solve the $W$-mass problem. Recently, Ref. Chowdhury _et al._ (2022)
demonstrated one way to alleviate these problems in the Zee model; however,
that solution contains a large number of free parameters in the leptonic
Yukawa sector, making the model rather difficult to test quantitatively.
In its general form, the Zee model is no stranger to such a large number of
parameters. Being an extension of the two-Higgs-doublet model with an extra
singly charged scalar, it is natural to expect tree-level flavor-violation
processes. Some authors have tried to cope with these issues by introducing an
extra symmetry. The first attempt was done by Wolfenstein Wolfenstein (1980),
who imposed a discrete symmetry such that only one of the Higgs doublets can
couple to fermions. As a result, tree-level flavor-changing neutral currents
(FCNCs) are forbidden, but the induced neutrino mass matrix has vanishing
diagonal elements. Such a structure, once compatible with the bimaximal
neutrino mixing, has been ruled out by recent solar and KamLAND data Abe _et
al._ (2016), which prefer a large but not maximal solar mixing angle. For
earlier analyses on this matter, see Refs. Petcov (1982); Smirnov and Tao
(1994); Smirnov and Tanimoto (1997); Jarlskog _et al._ (1999); Frampton and
Glashow (1999); Koide (2001); Frampton _et al._ (2002a); He (2004).
It should be noted that, in a more relaxed assumption, in which both Higgs
doublets can couple to leptons, the Zee model is still viable (see, e.g.,
Balaji _et al._ (2001); Hasegawa _et al._ (2003); He (2004); Babu and Julio
(2014); Primulando _et al._ (2022); Nomura and Yagyu (2019)). Here, tree-
level lepton-flavor-violation (LFV) processes are not completely forbidden,
but their rates can be kept below their respective bounds. This is
particularly realized in cases with flavor-dependent symmetries Babu and Julio
(2014); Primulando _et al._ (2022); in addition, the number of free
parameters of the model is greatly reduced, so that the neutrino mass matrix
can be expressed in terms of a few parameters, allowing an interplay between
neutrino oscillation data and LFV rates.
A well-defined question can be asked: within the Zee model, how many
parameters are actually needed to explain neutrino oscillation data together
with muon ${g-2}$ and $W$-mass problems? The answer to this question is the
objective of this article. We shall systematically search for a minimal
leptonic Yukawa structure that can accommodate neutrino masses and mixing and
solve the muon ${g-2}$ and the $W$-mass problems. Any such solutions must also
be consistent with existing experimental constraints, in particular, those of
lepton-flavor violations.
## II OVERVIEW OF THE ZEE MODEL
The Zee model is an extension of the two-Higgs-doublet model (2HDM) by a
singly charged scalar $\eta^{+}$. The model can be most conveniently described
in the Higgs basis Georgi and Nanopoulos (1979), where the two Higgs doublets,
$H_{1,2}$, are
$H_{1}=\begin{pmatrix}G^{+}\\ \dfrac{v+h_{1}+iG}{\sqrt{2}}\end{pmatrix},\quad H_{2}=\begin{pmatrix}H^{+}\\ \dfrac{h_{2}+iA}{\sqrt{2}}\end{pmatrix}.$ (1)
Here $v$ is the vacuum expectation value, and $G^{+}$ and $G$ are the would-be
Goldstone bosons. In this work, we will assume $CP$ symmetry in the scalar
sector so that $h_{1,2}$ do not mix with $A$. Furthermore, motivated by the
current 125-GeV Higgs data, which are consistent with the SM predictions Aad
_et al._ (2016); Sirunyan _et al._ (2019a); Aad _et al._ (2020a), we shall
work in the decoupling limit of the 2HDM, so $h_{1}$ and $h_{2}$ do not mix.
This allows us to identify $h_{1}$ with the observed 125-GeV Higgs boson,
labeled $h$, and $h_{2}$ with the heavy $CP$-even Higgs boson, henceforth
called $H$. (Typically, the masses of the extra Higgs bosons are assumed to
be heavier than 125 GeV; however, to the best of our knowledge, current data
do allow for the possibility that one or more of these extra Higgs bosons are
lighter than 125 GeV.)
In our present work, we choose the basis where charged leptons are diagonal.
The leptonic Yukawa interactions responsible for neutrino mass generation are
given by
$\mathcal{L}\supset\sqrt{2}\,\frac{(M_{\ell})_{ij}}{v}\bar{L}_{i}e_{Rj}H_{1}+Y_{ij}\bar{L}_{i}e_{Rj}H_{2}+f_{ij}L_{i}^{T}C\epsilon L_{j}\eta^{+}+\text{h.c.},$ (2)
where $M_{\ell}={\rm diag}\left(m_{e},m_{\mu},m_{\tau}\right)$ is the charged
lepton mass matrix, $L_{i}$ and $e_{Ri}$ denote the lepton doublet and singlet
of the $i$th generation, respectively, $\epsilon\equiv i\sigma^{2}$ is an
antisymmetric tensor contracting the $SU(2)$ indices, and $C$ is the charge-
conjugation matrix. The antisymmetric coupling matrix $f$ can be made real by
phase rotations, leaving $Y$ complex. In our current setup, we
assume that $H_{2}$ does not couple with quarks. This implies that quark
interactions within this model mimic those of the SM. Quarks can only couple
to $H_{1}$, inducing their masses according to
$\mathcal{L}\supset Y_{u}\bar{Q}_{L}u_{R}(\epsilon H_{1}^{*})+Y_{d}\bar{Q}_{L}d_{R}H_{1}+\text{h.c.},$ (3)
where $Y_{u}$ and $Y_{d}$ are the corresponding up- and down-type quark Yukawa
couplings, respectively.
The presence of the Yukawa couplings $f_{ij}$ does not by itself break lepton
number, because one can always assign lepton number $-2$ to $\eta^{+}$.
Breaking lepton number requires, in addition, the trilinear coupling
$V\supset\mu H_{1}\epsilon H_{2}\eta^{-}+\text{h.c.}$, which is part of the
scalar potential. With both the $f$ and $\mu$ couplings on hand, lepton
number is broken by two units, generating Majorana neutrino masses at the
one-loop level; see Fig. 1. The trilinear coupling $\mu$ induces
a mixing between the two charged states $H^{+}$ and $\eta^{+}$. This mixing,
however, is expected to be small because it is proportional to neutrino
masses. Thus, to a good approximation, one can treat $H^{+}$ and $\eta^{+}$ as
(nearly) mass eigenstates.
Because the Zee model is rich in scalars with masses around the electroweak
scale, they may contribute significantly to the electroweak oblique
parameters. It has been shown that such parameters, which are sensitive to
the scalar mass splittings, can account for the new CDF $W$-mass measurement
Strumia (2022); Lu _et al._ (2022); Heo _et al._ (2022); Asadi _et al._
(2022).
In addition, the Yukawa couplings $f$ and $Y$ also induce lepton anomalous
magnetic dipole moments $\delta a_{\ell}$ and LFV decays. Since the $f_{ij}$
couplings (via $\eta^{+}$ exchange) always give a negative
$\delta a_{\ell}$, while the data require $\delta a_{\mu}$ to be positive, we
suppress their contributions by taking $\eta^{+}$ to be heavy and/or
$f_{ij}$ to be small; any flavor-violating processes induced by $\eta^{+}$
exchange are then negligible as well. As for the $Y_{ij}$, they cannot all be
diagonal; otherwise one cannot reproduce a correct fit to the neutrino data
(see, e.g., Ref. He (2004)). With off-diagonal couplings present,
flavor-violating processes are naturally expected in this model and also need
to be controlled.
## III PHENOMENOLOGY
### III.1 Neutrino masses and mixing
The Majorana neutrino mass matrix is induced by the Feynman diagram shown in
Fig. 1. Since a Majorana mass matrix must be symmetric in the flavor basis,
there is a similar diagram with the internal particles replaced by their
charge conjugates. Altogether they give
$M_{\nu}=\kappa(fM_{\ell}Y^{\dagger}+Y^{\ast}M_{\ell}f^{T}),$ (4)
where $\kappa$ is a constant containing a loop factor and the trilinear
coupling $\mu$. Note that in the above equation flavor indices are suppressed.
The flavor structures of the coupling matrices $f$ and $Y$ must be such that
they are able to support current neutrino oscillation data, shown in Table 1
together with their 1$\sigma$ ranges.
Figure 1: The one-loop diagram generating Majorana neutrino masses. A dot on
the internal fermion (scalar) line represents the chiral mass ($\mu$)
insertion.
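As a quick numerical illustration (not part of the original analysis), Eq. (4) can be evaluated for random $\mathcal{O}(1)$ couplings to confirm that $M_{\nu}$ is automatically symmetric: the second term $Y^{\ast}M_{\ell}f^{T}$ is the transpose of the first, $(fM_{\ell}Y^{\dagger})^{T}$. The value of $\kappa$ below is an illustrative placeholder.

```python
import random

# Charged-lepton mass matrix M_ell = diag(m_e, m_mu, m_tau) in GeV.
M_ell = [[0.000511, 0, 0], [0, 0.10566, 0], [0, 0, 1.77686]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(A):  # conjugate transpose
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

random.seed(1)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))

# f is antisymmetric (Zee singlet coupling); Y is a generic complex matrix.
a, b, c = rc(), rc(), rc()
f = [[0, a, b], [-a, 0, c], [-b, -c, 0]]
Y = [[rc() for _ in range(3)] for _ in range(3)]

kappa = 1e-3  # illustrative placeholder for the loop factor times mu
A = mat_mul(mat_mul(f, M_ell), dagger(Y))  # f M_ell Y^dagger
Mnu = [[kappa * (A[i][j] + A[j][i]) for j in range(3)] for i in range(3)]
# Mnu[i][j] == Mnu[j][i] holds identically, as required for a Majorana mass matrix.
```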
To determine the minimal textures leading to correct neutrino data, recall
that there are five observables to fit: the three mixing angles
$\theta_{23},\theta_{13},\theta_{12}$, the neutrino mass-splitting ratio
$R\equiv\Delta m_{\rm sol}^{2}/\Delta m_{\rm atm}^{2}$ (the individual values
of $\Delta m_{\rm sol}^{2}$ and $\Delta m_{\rm atm}^{2}$ can be recovered by
fixing the overall coupling $\kappa$ once the ratio is determined), and the
Jarlskog invariant $J_{CP}$. The two coupling matrices $f$
and $Y$ contain $m$ and $n$ independent, nonvanishing components,
respectively. Since one component of each $f$ and $Y$ can be scaled out,
effectively there are $m+2n-3$ independent real parameters that contribute to
explaining those five observables. We found that the texture where
$(m,n)=(3,2)$ contains the least number of real parameters that can
accommodate oscillation data. Other $(m,n)$ textures with equal or smaller
number of parameters lead to a neutrino mass matrix with three or more
independent zeros, and therefore they are incompatible with neutrino data
Frampton _et al._ (2002b). Note that, for the aforementioned minimal texture
$(m,n)=(3,2)$, the number of effective parameters is smaller than the number of
oscillation observables (i.e., 4 vs. 5). This indicates that at least one of the
oscillation parameters can be predicted in terms of the others.
Table 1: Central values and the 1$\sigma$ ranges for neutrino oscillation parameters obtained from the 2021 updated analysis of NuFIT 5.1 (http://www.nu-fit.org); see also Esteban _et al._ (2020). Here and in what follows we use $s^{2}_{ij}\equiv\sin^{2}\theta_{ij}$. For $s_{23}^{2}$, we allow for the possibility that $\theta_{23}$ lies in the shallow minimum.

Parameters | Normal ordering | Inverted ordering
---|---|---
$s^{2}_{12}$ | $0.304^{+0.012}_{-0.012}$ | $0.304^{+0.013}_{-0.012}$
$s^{2}_{23}$ | $0.450^{+0.019}_{-0.016}$ | $0.570^{+0.016}_{-0.022}$
$[s^{2}_{23}$ (shallow)] | $[0.565^{+0.016}_{-0.034}]$ | $[0.455^{+0.021}_{-0.022}]$
$s^{2}_{13}$ | $0.02246^{+0.00062}_{-0.00062}$ | $0.02241^{+0.00074}_{-0.00062}$
$\Delta m^{2}_{\rm sol}/10^{-5}~{}\text{eV}^{2}$ | $7.42^{+0.21}_{-0.20}$ | $7.42^{+0.21}_{-0.20}$
$\Delta m^{2}_{\rm atm}/10^{-3}~{}\text{eV}^{2}$ | $2.510^{+0.027}_{-0.027}$ | $2.490^{+0.026}_{-0.028}$
$R$ | $0.0296^{+0.0033}_{-0.0033}$ | $0.0298^{+0.0033}_{-0.0033}$
$J_{CP}$ | $-0.0254^{+0.0115}_{-0.0080}$ | $-0.0330^{+0.0044}_{-0.0011}$
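As a quick arithmetic check (added here for convenience), the tabulated splitting ratio $R$ follows directly from the central values of the two mass splittings:

```python
# Central values from Table 1.
dm_sol = 7.42e-5                          # eV^2 (both orderings)
dm_atm_NO, dm_atm_IO = 2.510e-3, 2.490e-3  # eV^2

R_NO = dm_sol / dm_atm_NO  # -> 0.0296, matching the normal-ordering entry
R_IO = dm_sol / dm_atm_IO  # -> 0.0298, matching the inverted-ordering entry
```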
Interestingly, with only two nonvanishing $Y_{ij}$, the neutrino mass matrix
will have at least two independent zeros, as already discussed in Ref.
Frampton _et al._ (2002b). Thus, only a few such $Y_{ij}$ pairings are
consistent with neutrino oscillation data while also being able to induce the muon
${g-2}$. In terms of nonvanishing $Y_{ij}$, they are (i) ($Y_{\mu\tau},Y_{\tau
e}$), (ii) ($Y_{e\mu},Y_{\tau e}$), (iii) ($Y_{\mu e},Y_{\tau\mu}$), (iv)
($Y_{\mu e},Y_{e\mu}$), and (v) ($Y_{\mu e},Y_{e\tau}$). Note that we do not
include in our list pairs of $Y$ couplings that cannot induce $\delta a_{\mu}$
at all (e.g., $Y_{e\tau},Y_{\tau e}$), although they may give a good fit to
neutrino oscillation data.
Further inspection reveals that coupling pairs (i)–(iv) cannot induce enough
$\delta a_{\mu}$ to account for the discrepancy. Pairs (i) and (ii), for
example, are restricted by their neutrino flavor structures, while pairs (iii)
and (iv) are restricted by flavor constraints. Coupling pair (i) induces a
neutrino mass matrix with vanishing $e$-$e$ and $e$-$\tau$ elements, so it
admits the normal mass ordering with $m_{1}<m_{2}\ll m_{3}$. From Eq. (4),
$Y_{\mu\tau}$ always couples with the tau mass, whereas $Y_{\tau e}$ with the
electron mass, and for such ordering it is required that $(M_{\nu})_{\mu\mu}$
and $(M_{\nu})_{\tau\tau}$ be of the same order, and so one would expect that
$|Y_{\tau e}/Y_{\mu\tau}|\sim{\cal{O}}(m_{\tau}/m_{e})$. Since $Y_{\mu\tau}$
needs to be of $\mathcal{O}(1)$ to accommodate $\delta a_{\mu}$ (see Sec.
III.2), this is a clear indication that this pair is not a viable option. In
addition, the product of those two couplings is severely constrained by
$\mu\to e\gamma$ arising via internal chirality flip of tau mass, which puts
$|Y_{\tau e}Y_{\mu\tau}|\lesssim\text{few}\times 10^{-7}$ for
$m_{H,A}\sim\mathcal{O}(100)$ GeV. Similarly, for pair (ii), neutrino data
imply $|Y_{\tau e}/Y_{e\mu}|\sim{\cal O}(m_{\mu}/m_{e})$, which also indicates
that a sizable $\delta a_{\mu}$ cannot be obtained.
As for pair (iii), neutrino fit requires that $|Y_{\mu
e}/Y_{\tau\mu}|\sim\mathcal{O}(m_{\mu}/m_{e})$. While it seems that it could
induce a correct $\delta a_{\mu}$, these couplings, however, are constrained
by LFV processes, in particular the tree-level $\tau\to\mu\mu e$ and the one-
loop $\tau\to e\gamma$. As a result, the correction to muon ${g-2}$ can only
be at most $115\times 10^{-11}$, which is more than $2\sigma$ away from the
global average value. Pair (iv), on the other hand, is free from such LFV
decay constraints. However, it suffers from the constraint on the electric dipole
moment of the electron, which puts ${\rm Im}(Y_{e\mu}Y_{\mu e})\lesssim 10^{-9}$
for $m_{H,A}\sim{\rm few~hundred~GeV}$. Together with $\delta a_{e}\sim
10^{-13}$ (see Sec. III.2), this will affect the coupling magnitudes,
resulting in a small $\delta a_{\mu}$.
It turns out that only (v) can solve the muon ${g-2}$ problem and is
consistent with lepton-flavor constraints. This texture leads to a neutrino
mass matrix with vanishing $e$-$\tau$ and $\tau$-$\tau$ elements. To get a
good fit, one needs $|Y_{\mu e}/Y_{e\tau}|\sim\mathcal{O}(m_{\tau}/m_{e})$.
Such a coupling hierarchy is central to obtaining a sizable shift of the muon
anomalous magnetic moment while, at the same time, suppressing LFV decay rates.
Recall that ${\rm diag}(m_{1},m_{2},m_{3})=U^{T}M_{\nu}U$, with $U$ being the
Pontecorvo-Maki-Nakagawa-Sakata matrix. The two zero conditions can then be
solved for $m_{1}$ and $m_{2}$, as in Frampton _et al._ (2002b). Coupled with
the fact that $\theta_{23}\simeq\pi/4$ and $\theta_{13}\ll 1$, one can
identify that the neutrino masses in this scenario are quasidegenerate.
$\theta_{23}$ itself is favored to lie in the second (first) octant for the normal
(inverted) neutrino mass ordering; either case corresponds to the shallow
minimum of the global neutrino fit. Together with $\Delta m_{\rm
sol}^{2}/\Delta m_{\rm atm}^{2}\ll 1$, the $CP$-violating phase needs to be
close to $3\pi/2$ (i.e., nearly maximal). In the upcoming sections, we
focus our analysis on the ($Y_{\mu e},Y_{e\tau}$) pair.
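The claimed texture zeros can be verified numerically: with only $Y_{\mu e}$ and $Y_{e\tau}$ nonvanishing in Eq. (4), the $e$-$\tau$ and $\tau$-$\tau$ elements of $M_{\nu}$ vanish for any antisymmetric $f$. The following sketch uses illustrative coupling values (the specific numbers are assumptions, not fit results):

```python
# Flavor indices: 0 = e, 1 = mu, 2 = tau.
M_ell = [[0.000511, 0, 0], [0, 0.10566, 0], [0, 0, 1.77686]]  # GeV

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dagger(A):  # conjugate transpose
    return [[A[j][i].conjugate() for j in range(3)] for i in range(3)]

# Arbitrary antisymmetric f; Y of texture (v): only Y_mu-e and Y_e-tau nonzero.
f = [[0, 0.3, 0.8], [-0.3, 0, -0.5], [-0.8, 0.5, 0]]
Y = [[0, 0, 0.001],   # Y[0][2] = Y_e-tau
     [3.4, 0, 0],     # Y[1][0] = Y_mu-e
     [0, 0, 0]]

A = mat_mul(mat_mul(f, M_ell), dagger(Y))
Mnu = [[A[i][j] + A[j][i] for j in range(3)] for i in range(3)]  # kappa dropped
# (Mnu)_{e tau} and (Mnu)_{tau tau} vanish identically for this texture.
```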
### III.2 Muon ${g-2}$ and flavor-violation constraints
At one-loop level, couplings $Y_{e\tau}$ and $Y_{\mu e}$ induce the following
shifts
$\displaystyle\delta a_{e}$
$\displaystyle=\frac{m_{e}^{2}}{96\pi^{2}}\left(\frac{|Y_{e\tau}|^{2}+|Y_{\mu
e}|^{2}}{m_{H}^{2}}+\frac{|Y_{e\tau}|^{2}+|Y_{\mu
e}|^{2}}{m_{A}^{2}}-\frac{|Y_{\mu e}|^{2}}{m_{H^{+}}^{2}}\right),$ (5)
$\displaystyle\delta a_{\mu}$ $\displaystyle=\frac{m_{\mu}^{2}|Y_{\mu
e}|^{2}}{96\pi^{2}}\left(\frac{1}{m_{H}^{2}}+\frac{1}{m_{A}^{2}}\right).$ (6)
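As a sketch of the magnitudes involved, Eqs. (5) and (6) can be evaluated at the illustrative parameter point used later in Sec. IV ($Y_{\mu e}=3.4$, $(m_{H},m_{A},m_{H^{+}})=(375,520,550)$ GeV, with $Y_{e\tau}$ negligible):

```python
import math

m_e, m_mu = 0.000511, 0.10566                   # lepton masses in GeV
Y_mue = 3.4                                      # O(1) coupling from Sec. IV
mH2, mA2, mHp2 = 375.0**2, 520.0**2, 550.0**2    # scalar masses squared, GeV^2

pref = abs(Y_mue)**2 / (96 * math.pi**2)
da_mu = m_mu**2 * pref * (1/mH2 + 1/mA2)             # Eq. (6)
da_e = m_e**2 * pref * (1/mH2 + 1/mA2 - 1/mHp2)      # Eq. (5), Y_e-tau -> 0
# da_mu ~ 147e-11 and da_e ~ 0.24e-13, the values quoted in Sec. IV.
```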
For exotic Higgs boson masses of a few hundred GeV, one needs $|Y_{\mu
e}|\sim\mathcal{O}(1)$ to account for the present discrepancy,
$\delta a_{\mu}=(251\pm 59)\times 10^{-11}$ Abi _et al._ (2021). Such an
${\cal O}(1)$ coupling also induces the electron anomalous magnetic moment, so
we must ensure that the induced $\delta a_{e}$ is also consistent with data.
The main issue with the electron magnetic moment is its SM value, which is
sensitive to the fine-structure constant. Recently, two measurements, one
using a cesium atom Parker _et al._ (2018) and the other a rubidium atom
Morel _et al._ (2020), have found $\alpha_{\rm Cs}^{-1}=137.035999046(27)$
and $\alpha_{\rm Rb}^{-1}=137.035999206(11)$, which differ by more than
$5\sigma$. When translated to the electron magnetic moment, the Cs result
suggests a $-2.4\sigma$ discrepancy, while the Rb result suggests $+1.6\sigma$,
each with respect to the direct measurement Hanneke _et al._ (2008). In our
analysis, we treat the two measurements as independent. Specifically, we
combine them and infer the SM prediction for the electron magnetic moment
Aoyama _et al._ (2019). It is then compared with the direct measurement to
get $\delta a_{e}^{\rm comb}=(2.8\pm 3.0)\times 10^{-13}$. Note that the
combined $\delta a_{e}$ is positive, thanks to the smaller error of the Rb
experiment compared to that of the Cs.
In the present case, since the neutrino data fit dictates
$|Y_{e\tau}|\ll|Y_{\mu e}|$, one can always ignore $Y_{e\tau}$, leading to an
interesting relation, namely,
$\displaystyle\delta a_{e}\simeq\delta
a_{\mu}\left(\frac{m_{e}^{2}}{m_{\mu}^{2}}\right)\left(1-\frac{m_{H}^{2}m_{A}^{2}}{(m_{H}^{2}+m_{A}^{2})m_{H^{+}}^{2}}\right).$
(7)
This shows that, if $\delta a_{\mu}$ is found to be of order $10^{-9}$ for
$m_{H},m_{A},m_{H^{+}}\sim~{}\text{few~{}hundred~{}GeV}$, $\delta a_{e}$ is
predicted to be less than $1\times 10^{-13}$, which is in perfect agreement
with the $\delta a_{e}^{\rm comb}$ given above.
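Equation (7) is in fact an exact rearrangement of Eqs. (5) and (6) in the $Y_{e\tau}\to 0$ limit, since $1/m_{H}^{2}+1/m_{A}^{2}=(m_{H}^{2}+m_{A}^{2})/(m_{H}^{2}m_{A}^{2})$. A quick numerical check of the identity (the masses are illustrative):

```python
mH2, mA2, mHp2 = 375.0**2, 520.0**2, 550.0**2  # GeV^2, illustrative values

# Mass bracket appearing in Eq. (5) with Y_e-tau -> 0 ...
direct = 1/mH2 + 1/mA2 - 1/mHp2
# ... versus the factorized form appearing in Eq. (7).
factored = (1/mH2 + 1/mA2) * (1 - mH2 * mA2 / ((mH2 + mA2) * mHp2))
# The two expressions agree identically, not just approximately.
```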
The couplings $Y_{e\tau}$ and $Y_{\mu e}$ lead to LFV decays $\tau\to\mu ee$
at tree level and $\tau\to\mu\gamma$ and $\tau\to 3\mu$ at one-loop level.
Note that the one-loop processes are suppressed by the internal electron mass,
so we expect the rates to be well below their bounds. Explicit expressions for
LFV observables in the decoupling limit can be found in Ref. Primulando _et
al._ (2022). Here, we note that amplitudes of these LFV decays are
proportional to $|Y_{e\tau}Y_{\mu e}|$. For $Y_{\mu e}\sim\mathcal{O}(1)$ and
the heavy Higgs masses around the weak scale, the tree-level LFV process
implies $|Y_{e\tau}|\lesssim 10^{-3}$–$10^{-2}$, also consistent with the
result from the neutrino data fit. On the other hand, LFV processes induced by
the antisymmetric couplings $f_{ij}$ are negligible because we assume that the
scale of $f$ is small and $\eta^{+}$ is heavy as mentioned at the end of Sec.
II.
### III.3 $W$-boson mass
With radiative corrections, the $W$ mass is expressed by the following
relation Sirlin (1980):
$\displaystyle m_{W}^{2}=\frac{\pi\alpha}{\sqrt{2}G_{F}s_{W}^{2}}(1+\Delta
r),$ (8)
where $\alpha=1/137.036$ is the QED fine-structure constant, $G_{F}$ is the
Fermi constant determined from the muon decay, $s_{W}^{2}\equiv
1-m_{W}^{2}/m_{Z}^{2}$, with $m_{Z}=91.1876$ GeV being the $Z$-boson mass,
represents the weak mixing angle, and $\Delta r$ is a parameter that contains
loop corrections to the tree-level relation.
The parameter $\Delta r$ has two parts: one due to the electroweak oblique
parameters and the other due to vertex and box contributions to the muon
decay Lopez-Val and Sola (2013). In the present scenario, the new physics
effects on the second part are dominated by diagrams modifying the
$W$-$\mu$-$\nu_{\mu}$ vertex via $g\to g(1+\delta g_{\mu})$, with $g$ being
the $SU(2)$ gauge coupling, similar to the one discussed in Abe _et al._
(2017). The shift is given by
$\displaystyle\delta g_{\mu}=\frac{|Y_{\mu
e}|^{2}}{32\pi^{2}}\left[1-\xi(m_{H}^{2},m_{H^{+}}^{2})-\xi(m_{A}^{2},m_{H^{+}}^{2})\right],$
(9)
with
$\displaystyle\xi(x,y)=\frac{x+y}{4(x-y)}\ln\frac{x}{y}.$ (10)
Such a gauge coupling correction might not be negligible, particularly for
large $Y_{\mu e}$. However, its value is restricted by the measurements of
lepton-flavor universality $g_{\mu}/g_{e}=1.0018\pm 0.0014$ Pich (2014), with
$g_{\ell}$ being the corresponding $W$-$\ell$-$\nu_{\ell}$ coupling. A similar
correction also exists for $g_{e}$, but it is deemed irrelevant because the
coupling $Y_{e\tau}$ is small as required by neutrino data.
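At the illustrative parameter point of Sec. IV, Eqs. (9)-(10) give a vertex correction that is negative but within the $2\sigma$ lepton-flavor-universality window (a sketch; the coupling and masses are the benchmark assumptions from Sec. IV):

```python
import math

def xi(x, y):  # Eq. (10)
    return (x + y) / (4 * (x - y)) * math.log(x / y)

Y_mue = 3.4
mH2, mA2, mHp2 = 375.0**2, 520.0**2, 550.0**2  # GeV^2

dg_mu = abs(Y_mue)**2 / (32 * math.pi**2) * \
        (1 - xi(mH2, mHp2) - xi(mA2, mHp2))    # Eq. (9); comes out ~ -9e-4

# Pull against g_mu/g_e = 1.0018 +/- 0.0014 (Pich 2014),
# treating the correction as acting on g_mu only.
pull = (1 + dg_mu - 1.0018) / 0.0014
```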
To calculate the $W$ mass, it is convenient to rewrite Eq. (8) such that
all SM contributions are subtracted (i.e., absorbed into $m_{W}^{2}|_{\rm
SM}$) Grimus _et al._ (2008). That is,
$\displaystyle m_{W}^{2}=$ $\displaystyle\left.m_{W}^{2}\right|_{\rm
SM}\left(1+\frac{s_{W}^{2}}{1-2s_{W}^{2}}\Delta r^{\prime}\right),$ (11)
where $m_{W}^{2}|_{\rm SM}=(80.357~\text{GeV})^{2}$ is the $W$-mass squared
predicted by the SM and
$\displaystyle\Delta
r^{\prime}=\frac{\alpha}{s_{W}^{2}}\left(-\frac{1}{2}S+(1-s_{W}^{2})T\right)-2\delta
g_{\mu},$ (12)
is the new-physics contribution. We have neglected the $U$-parameter
contribution because we find its value to be small compared to the $S$ and $T$
parameters. In the limit where $\eta^{+}$ is decoupled from the other scalars,
those parameters are given by Grimus _et al._ (2008)
$\displaystyle S$
$\displaystyle=\frac{1}{24\pi}\left[(1-2s_{W}^{2})^{2}G(m_{H^{+}}^{2},m_{H^{+}}^{2},m_{Z}^{2})+G(m_{H}^{2},m_{A}^{2},m_{Z}^{2})-\ln\frac{m_{H^{+}}^{2}}{m_{H}^{2}}-\ln\frac{m_{H^{+}}^{2}}{m_{A}^{2}}\right],$
(13) $\displaystyle T$ $\displaystyle=\frac{1}{16\pi^{2}\alpha
v^{2}}\left[F(m_{H^{+}}^{2},m_{H}^{2})+F(m_{H^{+}}^{2},m_{A}^{2})-F(m_{A}^{2},m_{H}^{2})\right],$
(14)
where
$\displaystyle F(x,y)=\frac{x+y}{2}-\frac{xy}{x-y}\ln\frac{x}{y}.$ (15)
The explicit form of the $G(x,y,z)$ function can be found in Appendix C of
Ref. Grimus _et al._ (2008). We only note that $G(x,y,z)$ is symmetric in its
first two arguments.
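With the benchmark masses of Sec. IV, Eqs. (14)-(15) yield a $T$ parameter of roughly $0.1$ (a numerical sketch; the masses are the illustrative values used later, and the Higgs vev $v=246$ GeV is assumed):

```python
import math

def F(x, y):  # Eq. (15)
    return (x + y) / 2 - x * y / (x - y) * math.log(x / y)

alpha, v = 1 / 137.036, 246.0                  # fine-structure constant, vev (GeV)
mH2, mA2, mHp2 = 375.0**2, 520.0**2, 550.0**2  # benchmark masses squared, GeV^2

T = (F(mHp2, mH2) + F(mHp2, mA2) - F(mA2, mH2)) \
    / (16 * math.pi**2 * alpha * v**2)          # Eq. (14); ~ 0.1
```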
The $T$ parameter is generally larger than the $S$ parameter, so the former is
expected to play the greater role in the $W$-mass shift. However, it is worth
noting that in our setup there is an interplay between the oblique parameters and
the vertex correction in obtaining the correct $W$ mass. The correction $\delta g_{\mu}$,
despite always inducing a positive shift in the $W$ mass, cannot be
arbitrarily large because its value is constrained by the lepton-flavor
universality measurements, taken in our calculation to be within $2\sigma$.
For that reason, a sizable, nonvanishing $T$ in the range of $T\in(0.06,0.2)$
is still needed to account for the new average of the $W$ mass,
$m_{W}=80.4242\pm 0.0087$ GeV Aaltonen _et al._ (2022). Since the $T$
parameter vanishes when either one of the neutral Higgs bosons is degenerate
with the charged Higgs, it is also necessary that the masses of the extra
Higgs bosons be sufficiently split.
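Putting the pieces together, Eqs. (11)-(12) can be evaluated for representative inputs of the size that Eqs. (9) and (14) give at the benchmark point of Sec. IV ($T\approx 0.1$, $\delta g_{\mu}\approx-9\times 10^{-4}$), with $S$ neglected as a rough sketch. Note that a negative $\delta g_{\mu}$ enters Eq. (12) with a minus sign and hence pushes $m_{W}$ up:

```python
import math

alpha = 1 / 137.036
mW_SM, mZ = 80.357, 91.1876        # GeV
sW2 = 1 - mW_SM**2 / mZ**2         # on-shell weak mixing angle, ~0.2234

S, T, dg_mu = 0.0, 0.10, -9.0e-4   # illustrative inputs; S neglected here
dr = alpha / sW2 * (-0.5 * S + (1 - sW2) * T) - 2 * dg_mu  # Eq. (12)
mW = mW_SM * math.sqrt(1 + sW2 / (1 - 2 * sW2) * dr)       # Eq. (11)
# mW comes out near 80.43 GeV, within 2 sigma of the CDF average 80.4242(87) GeV.
```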
It should be noted that the extra Higgs boson masses are constrained by
collider searches as well. In our minimal scenario, they do not couple to
quarks. As a result, the $H$ and $A$ can be pair produced via Drell-Yan
processes, leading to multilepton signatures. Similarly, the pair-produced
charged Higgs would lead to dilepton plus missing-energy signatures. This
process has been actively searched for at the LHC. The current experimental
limit, assuming $H^{+}H^{-}\to\mu^{+}\mu^{-}/e^{+}e^{-}+E_{T}^{\rm miss}$, puts
$m_{H^{+}}\gtrsim 550$ GeV Sirunyan _et al._ (2019b); Aad _et al._ (2020b).
## IV EXPLICIT EXAMPLE
As an illustrative example, let us consider two benchmarks for the Yukawa
couplings as follows
* •
Benchmark B1 (normal ordering)
$\displaystyle\frac{f_{e\mu}}{f_{\mu\tau}}$
$\displaystyle=1.082,\quad\frac{f_{e\tau}}{f_{\mu\tau}}=8.285,$ (16)
$\displaystyle\frac{Y_{\mu e}}{Y_{e\tau}}$
$\displaystyle=(-7.660+1.941i)\times 10^{3},$
* •
Benchmark B2 (inverted ordering)
$\displaystyle\frac{f_{e\mu}}{f_{\mu\tau}}$
$\displaystyle=1.240,\quad\frac{f_{e\tau}}{f_{\mu\tau}}=11.13,$ (17)
$\displaystyle\frac{Y_{\mu e}}{Y_{e\tau}}$ $\displaystyle=(6.191+0.956i)\times
10^{3},$
Both benchmark values give a good fit to all five neutrino oscillation
parameters, as shown in Table 2. Once we determine the mass splitting ratio,
the overall neutrino mass can be deduced. From here, we find $\sum
m_{\nu}=0.196\,(0.229)$ eV for the B1 (B2) case, which is consistent with the
bound from the cosmic microwave background and the baryon acoustic
oscillation measurements, which set $\sum m_{\nu}\leq 0.515$ eV Di Valentino
_et al._ (2020). It is not surprising that neutrino mass sums in both B1 and
B2 are comparable, thanks to the quasidegeneracy property of neutrino masses
in this scenario.
Given all coupling ratios above, we consider a set of scalar masses
$(m_{H},m_{A},m_{H^{+}})=(375,520,550)$ GeV and $Y_{\mu e}=3.4$. With these
values, the two benchmarks give $m_{W}=80.4295$ GeV and $\delta
a_{\mu}=147\times 10^{-11}$, which are within $2\sigma$ of the respective
experimental values. As anticipated, $\delta a_{e}=0.24\times 10^{-13}$ for
both B1 and B2. The branching ratios for the tree-level LFV decays $\tau\to\mu
ee$ are found to be about 2–3 orders of magnitude below the current
experimental bounds. The branching ratios for loop-induced processes, i.e.,
$\tau\to\mu\gamma$ and $\tau\to 3\mu$, owing to the electron mass suppression,
are found to be several orders of magnitude below their experimental bounds.
Table 2: Neutrino oscillation parameters and the sum of neutrino masses in the benchmark scenarios.

Benchmark | $s^{2}_{12}$ | $s^{2}_{23}$ | $s^{2}_{13}$ | $R$ | $J_{CP}$ | $\sum m_{\nu}$ (eV)
---|---|---|---|---|---|---
B1 | 0.305 | 0.567 | 0.0220 | 0.0298 | -0.0331 | 0.196
B2 | 0.299 | 0.442 | 0.0220 | 0.0296 | -0.0330 | 0.229
## V CONCLUSION AND DISCUSSION
We have identified the minimal Yukawa structure of the Zee model that can
accommodate neutrino oscillation data, the muon ${g-2}$, and the $W$ mass while
remaining consistent with LFV constraints. This structure consists of the $f$
coupling matrix with all three independent components and the $Y$ coupling
matrix with only $Y_{e\tau}$ and $Y_{\mu e}$ nonvanishing.
Our minimal structure features quasidegenerate neutrino masses, accommodating
both normal and inverted orderings. The mixing angle $\theta_{23}$ lies in the
second and first octants for normal and inverted mass orderings, respectively.
To guarantee a successful fit, one also needs to have $|Y_{\mu
e}/Y_{e\tau}|\sim m_{\tau}/m_{e}$. Such a large ratio plays an important role
in getting the desired value of $\delta a_{\mu}\sim 150\times 10^{-11}$, while
keeping $\delta a_{e}\lesssim 10^{-13}$ consistent with data. Furthermore,
rates for lepton-flavor-violation processes, such as $\tau\to\mu ee$, $\tau\to
e\gamma$, and $\tau\to 3\mu$, appear to be well below their respective bounds,
partly due to the large coupling hierarchy and electron mass suppression in
$\tau\to\mu$ transitions.
The flavor structure of the neutrino mass matrix in this scenario only
supports a shallow minimum of $\theta_{23}$. Should it be ruled out, one needs
to go to the next-to-minimal Yukawa structure. For example, the (2,3) Yukawa
texture with nonzero $f_{e\tau}$, $f_{\mu\tau}$, $Y_{e\tau}$, $Y_{\mu e}$ and
$Y_{\tau\tau}$ can accommodate the inverted mass ordering with $\theta_{23}$
in the second octant.
In our minimal scenario, we have $Y_{\mu e}\sim{\cal O}(1)$. This would lead
to large $H/A\to e^{\pm}\mu^{\mp}$ decays, which could be searched for at the
LHC. However, $H/A$ cannot be singly produced in our scenario. They can be
pair produced via Drell-Yan processes, leading to multilepton signatures. We
leave a careful collider study of such processes for possible future work.
## ACKNOWLEDGMENTS
The work of RP was supported by the Parahyangan Catholic University under
Grant No. III/LPPM/2023-02/32-P. The work of PU was supported in part by
Thailand National Science, Research and Innovation Fund (NSRF) via Program
Management Unit for Human Resources & Institutional Development, Research and
Innovation (PMU-B) under Grant No. B05F650021. PU also acknowledges the
supporting computing infrastructure provided by NSTDA, CU, CUAASC
(Chulalongkorn Academic Advancement into its 2nd Century Project, Thailand).
## References
* Abi _et al._ (2021) B. Abi _et al._ (Muon g-2), Phys. Rev. Lett. 126, 141801 (2021), arXiv:2104.03281 [hep-ex] .
* Bennett _et al._ (2006) G. W. Bennett _et al._ (Muon g-2), Phys. Rev. D 73, 072003 (2006), arXiv:hep-ex/0602035 .
* Aoyama _et al._ (2020) T. Aoyama _et al._ , Phys. Rept. 887, 1 (2020), arXiv:2006.04822 [hep-ph] .
* Aoyama _et al._ (2012) T. Aoyama, M. Hayakawa, T. Kinoshita, and M. Nio, Phys. Rev. Lett. 109, 111808 (2012), arXiv:1205.5370 [hep-ph] .
* Czarnecki _et al._ (2003) A. Czarnecki, W. J. Marciano, and A. Vainshtein, Phys. Rev. D67, 073006 (2003), [Erratum: Phys. Rev. D73, 119901 (2006)], arXiv:hep-ph/0212229 [hep-ph] .
* Gnendiger _et al._ (2013) C. Gnendiger, D. Stöckinger, and H. Stöckinger-Kim, Phys. Rev. D88, 053005 (2013), arXiv:1306.5546 [hep-ph] .
* Davier _et al._ (2017) M. Davier, A. Hoecker, B. Malaescu, and Z. Zhang, Eur. Phys. J. C77, 827 (2017), arXiv:1706.09436 [hep-ph] .
* Keshavarzi _et al._ (2018) A. Keshavarzi, D. Nomura, and T. Teubner, Phys. Rev. D97, 114025 (2018), arXiv:1802.02995 [hep-ph] .
* Colangelo _et al._ (2019) G. Colangelo, M. Hoferichter, and P. Stoffer, JHEP 02, 006 (2019), arXiv:1810.00007 [hep-ph] .
* Hoferichter _et al._ (2019) M. Hoferichter, B.-L. Hoid, and B. Kubis, JHEP 08, 137 (2019), arXiv:1907.01556 [hep-ph] .
* Davier _et al._ (2020) M. Davier, A. Hoecker, B. Malaescu, and Z. Zhang, Eur. Phys. J. C80, 241 (2020), [Erratum: Eur. Phys. J. C80, 410 (2020)], arXiv:1908.00921 [hep-ph] .
* Keshavarzi _et al._ (2020) A. Keshavarzi, D. Nomura, and T. Teubner, Phys. Rev. D101, 014029 (2020), arXiv:1911.00367 [hep-ph] .
* Kurz _et al._ (2014) A. Kurz, T. Liu, P. Marquard, and M. Steinhauser, Phys. Lett. B734, 144 (2014), arXiv:1403.6400 [hep-ph] .
* Melnikov and Vainshtein (2004) K. Melnikov and A. Vainshtein, Phys. Rev. D70, 113006 (2004), arXiv:hep-ph/0312226 [hep-ph] .
* Masjuan and Sánchez-Puertas (2017) P. Masjuan and P. Sánchez-Puertas, Phys. Rev. D95, 054026 (2017), arXiv:1701.05829 [hep-ph] .
* Colangelo _et al._ (2017) G. Colangelo, M. Hoferichter, M. Procura, and P. Stoffer, JHEP 04, 161 (2017), arXiv:1702.07347 [hep-ph] .
* Hoferichter _et al._ (2018) M. Hoferichter, B.-L. Hoid, B. Kubis, S. Leupold, and S. P. Schneider, JHEP 10, 141 (2018), arXiv:1808.04823 [hep-ph] .
* Gérardin _et al._ (2019) A. Gérardin, H. B. Meyer, and A. Nyffeler, Phys. Rev. D100, 034520 (2019), arXiv:1903.09471 [hep-lat] .
* Bijnens _et al._ (2019) J. Bijnens, N. Hermansson-Truedsson, and A. Rodríguez-Sánchez, Phys. Lett. B798, 134994 (2019), arXiv:1908.03331 [hep-ph] .
* Colangelo _et al._ (2020) G. Colangelo, F. Hagelstein, M. Hoferichter, L. Laub, and P. Stoffer, JHEP 03, 101 (2020), arXiv:1910.13432 [hep-ph] .
* Blum _et al._ (2020) T. Blum, N. Christ, M. Hayakawa, T. Izubuchi, L. Jin, C. Jung, and C. Lehner, Phys. Rev. Lett. 124, 132002 (2020), arXiv:1911.08123 [hep-lat] .
* Colangelo _et al._ (2014) G. Colangelo, M. Hoferichter, A. Nyffeler, M. Passera, and P. Stoffer, Phys. Lett. B735, 90 (2014), arXiv:1403.7512 [hep-ph] .
* Aaltonen _et al._ (2022) T. Aaltonen _et al._ (CDF), Science 376, 170 (2022).
* Workman _et al._ (2022) R. L. Workman _et al._ (Particle Data Group), PTEP 2022, 083C01 (2022).
* Zee (1980) A. Zee, Phys. Lett. B 93, 389 (1980), [Erratum: Phys.Lett.B 95, 461 (1980)].
* Babu _et al._ (2022) K. S. Babu, S. Jana, and V. P. K., Phys. Rev. Lett. 129, 121803 (2022), arXiv:2204.05303 [hep-ph] .
* Han _et al._ (2022) X.-F. Han, F. Wang, L. Wang, J. M. Yang, and Y. Zhang, Chin. Phys. C 46, 103105 (2022), arXiv:2204.06505 [hep-ph] .
* Kawamura _et al._ (2022) J. Kawamura, S. Okawa, and Y. Omura, Phys. Rev. D 106, 015005 (2022), arXiv:2204.07022 [hep-ph] .
* Nagao _et al._ (2022) K. I. Nagao, T. Nomura, and H. Okada, (2022), arXiv:2204.07411 [hep-ph] .
* Arcadi and Djouadi (2022) G. Arcadi and A. Djouadi, Phys. Rev. D 106, 095008 (2022), arXiv:2204.08406 [hep-ph] .
* Bhaskar _et al._ (2022) A. Bhaskar, A. A. Madathil, T. Mandal, and S. Mitra, (2022), arXiv:2204.09031 [hep-ph] .
* Baek (2022) S. Baek, (2022), arXiv:2204.09585 [hep-ph] .
* Zhou and Han (2022) Q. Zhou and X.-F. Han, (2022), arXiv:2204.13027 [hep-ph] .
* Kim _et al._ (2022a) J. Kim, S. Lee, P. Sanyal, and J. Song, Phys. Rev. D 106, 035002 (2022a), arXiv:2205.01701 [hep-ph] .
* Kim (2022) J. Kim, Phys. Lett. B 832, 137220 (2022), arXiv:2205.01437 [hep-ph] .
* Dcruz and Thapa (2022) R. Dcruz and A. Thapa, (2022), arXiv:2205.02217 [hep-ph] .
* Chowdhury and Saad (2022) T. A. Chowdhury and S. Saad, Phys. Rev. D 106, 055017 (2022), arXiv:2205.03917 [hep-ph] .
* Kim _et al._ (2022b) S.-S. Kim, H. M. Lee, A. G. Menkara, and K. Yamashita, Phys. Rev. D 106, 015008 (2022b), arXiv:2205.04016 [hep-ph] .
* Chakrabarty (2022) N. Chakrabarty, (2022), arXiv:2206.11771 [hep-ph] .
* Batra _et al._ (2022a) A. Batra, S. K. A, S. Mandal, H. Prajapati, and R. Srivastava, (2022a), arXiv:2204.11945 [hep-ph] .
* Batra _et al._ (2022b) A. Batra, P. Bharadwaj, S. Mandal, R. Srivastava, and J. W. F. Valle, Phys. Lett. B 834, 137408 (2022b), arXiv:2208.04983 [hep-ph] .
* Batra _et al._ (2022c) A. Batra, S. K. A., S. Mandal, and R. Srivastava, (2022c), arXiv:2204.09376 [hep-ph] .
* Chowdhury _et al._ (2022) T. A. Chowdhury, J. Heeck, A. Thapa, and S. Saad, Phys. Rev. D 106, 035004 (2022), arXiv:2204.08390 [hep-ph] .
* Wolfenstein (1980) L. Wolfenstein, Nucl. Phys. B 175, 93 (1980).
* Abe _et al._ (2016) K. Abe _et al._ (Super-Kamiokande), Phys. Rev. D 94, 052010 (2016), arXiv:1606.07538 [hep-ex] .
* Petcov (1982) S. T. Petcov, Phys. Lett. B 115, 401 (1982).
* Smirnov and Tao (1994) A. Y. Smirnov and Z.-j. Tao, Nucl. Phys. B 426, 415 (1994), arXiv:hep-ph/9403311 .
* Smirnov and Tanimoto (1997) A. Y. Smirnov and M. Tanimoto, Phys. Rev. D 55, 1665 (1997), arXiv:hep-ph/9604370 .
* Jarlskog _et al._ (1999) C. Jarlskog, M. Matsuda, S. Skadhauge, and M. Tanimoto, Phys. Lett. B 449, 240 (1999), arXiv:hep-ph/9812282 .
* Frampton and Glashow (1999) P. H. Frampton and S. L. Glashow, Phys. Lett. B 461, 95 (1999), arXiv:hep-ph/9906375 .
* Koide (2001) Y. Koide, Phys. Rev. D 64, 077301 (2001), arXiv:hep-ph/0104226 .
* Frampton _et al._ (2002a) P. H. Frampton, M. C. Oh, and T. Yoshikawa, Phys. Rev. D 65, 073014 (2002a), arXiv:hep-ph/0110300 .
* He (2004) X.-G. He, Eur. Phys. J. C 34, 371 (2004), arXiv:hep-ph/0307172 .
* Balaji _et al._ (2001) K. R. S. Balaji, W. Grimus, and T. Schwetz, Phys. Lett. B 508, 301 (2001), arXiv:hep-ph/0104035 .
* Hasegawa _et al._ (2003) K. Hasegawa, C. S. Lim, and K. Ogure, Phys. Rev. D 68, 053006 (2003), arXiv:hep-ph/0303252 .
* Babu and Julio (2014) K. S. Babu and J. Julio, Phys. Rev. D 89, 053004 (2014), arXiv:1310.0303 [hep-ph] .
* Primulando _et al._ (2022) R. Primulando, J. Julio, and P. Uttayarat, Eur. Phys. J. C 82, 253 (2022), arXiv:2201.01960 [hep-ph] .
* Nomura and Yagyu (2019) T. Nomura and K. Yagyu, JHEP 10, 105 (2019), arXiv:1905.11568 [hep-ph] .
* Georgi and Nanopoulos (1979) H. Georgi and D. V. Nanopoulos, Phys. Lett. B 82, 95 (1979).
* Aad _et al._ (2016) G. Aad _et al._ (ATLAS, CMS), JHEP 08, 045 (2016), arXiv:1606.02266 [hep-ex] .
* Sirunyan _et al._ (2019a) A. M. Sirunyan _et al._ (CMS), Eur. Phys. J. C 79, 421 (2019a), arXiv:1809.10733 [hep-ex] .
* Aad _et al._ (2020a) G. Aad _et al._ (ATLAS), Phys. Rev. D 101, 012002 (2020a), arXiv:1909.02845 [hep-ex] .
* Strumia (2022) A. Strumia, JHEP 08, 248 (2022), arXiv:2204.04191 [hep-ph] .
* Lu _et al._ (2022) C.-T. Lu, L. Wu, Y. Wu, and B. Zhu, Phys. Rev. D 106, 035034 (2022), arXiv:2204.03796 [hep-ph] .
* Heo _et al._ (2022) Y. Heo, D.-W. Jung, and J. S. Lee, Phys. Lett. B 833, 137274 (2022), arXiv:2204.05728 [hep-ph] .
* Asadi _et al._ (2022) P. Asadi, C. Cesarotti, K. Fraser, S. Homiller, and A. Parikh, (2022), arXiv:2204.05283 [hep-ph] .
* Frampton _et al._ (2002b) P. H. Frampton, S. L. Glashow, and D. Marfatia, Phys. Lett. B 536, 79 (2002b), arXiv:hep-ph/0201008 .
* Esteban _et al._ (2020) I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz, and A. Zhou, JHEP 09, 178 (2020), arXiv:2007.14792 [hep-ph] .
* Parker _et al._ (2018) R. H. Parker, C. Yu, W. Zhong, B. Estey, and H. Müller, Science 360, 191 (2018), arXiv:1812.04130 [physics.atom-ph] .
* Morel _et al._ (2020) L. Morel, Z. Yao, P. Cladé, and S. Guellati-Khélifa, Nature 588, 61 (2020).
* Hanneke _et al._ (2008) D. Hanneke, S. Fogwell, and G. Gabrielse, Phys. Rev. Lett. 100, 120801 (2008), arXiv:0801.1134 [physics.atom-ph] .
* Aoyama _et al._ (2019) T. Aoyama, T. Kinoshita, and M. Nio, Atoms 7, 28 (2019).
* Sirlin (1980) A. Sirlin, Phys. Rev. D 22, 971 (1980).
* Lopez-Val and Sola (2013) D. Lopez-Val and J. Sola, Eur. Phys. J. C 73, 2393 (2013), arXiv:1211.0311 [hep-ph] .
* Abe _et al._ (2017) T. Abe, R. Sato, and K. Yagyu, JHEP 07, 012 (2017), arXiv:1705.01469 [hep-ph] .
* Pich (2014) A. Pich, Prog. Part. Nucl. Phys. 75, 41 (2014), arXiv:1310.7922 [hep-ph] .
* Grimus _et al._ (2008) W. Grimus, L. Lavoura, O. M. Ogreid, and P. Osland, Nucl. Phys. B 801, 81 (2008), arXiv:0802.4353 [hep-ph] .
* Sirunyan _et al._ (2019b) A. M. Sirunyan _et al._ (CMS), Phys. Lett. B 790, 140 (2019b), arXiv:1806.05264 [hep-ex] .
* Aad _et al._ (2020b) G. Aad _et al._ (ATLAS), Eur. Phys. J. C 80, 123 (2020b), arXiv:1908.08215 [hep-ex] .
* Di Valentino _et al._ (2020) E. Di Valentino, A. Melchiorri, and J. Silk, JCAP 01, 013 (2020), arXiv:1908.01391 [astro-ph.CO] .
MmWave Mapping and SLAM for 5G and Beyond
Yu Ge, Ossi Kaltiokallio, Hyowon Kim, Jukka Talvitie, Sunwoo Kim, Lennart Svensson, Mikko Valkama, Henk Wymeersch
Yu Ge, Hyowon Kim, Lennart Svensson, Henk Wymeersch Chalmers University of Technology, 41296 Gothenburg, Sweden
Ossi Kaltiokallio, Jukka Talvitie, Mikko Valkama Tampere University, 33101 Tampere, Finland
Sunwoo Kim Hanyang University, Seoul, 04763, South Korea
The sensing aspect in integrated sensing and communication is provided through either mono-static or bi-static sensing. Mono-static sensing is most commonly associated with radar-like mapping, where a sensor maps an environment and tracks moving objects in a local reference frame. In contrast, bi-static sensing is most commonly associated with positioning / localization, where a user device's 3D position is determined in a global reference frame, while environment mapping is mainly a side product used to improve localization accuracy and coverage. In this chapter, we will provide an overview of the fundamental tools used to solve mapping, tracking, and simultaneous localization and mapping problems. In particular, methods based on random finite set theory and Bayesian graphical models will be introduced. A numerical study will compare these approaches in a variety of scenarios and contrast 5G mmWave with 6G D-band performance. The use of intelligent surfaces will also be considered.
Device localization and radar-like mapping are at the heart of integrated sensing and communication, enabling not only new services and applications but also improved communication quality with reduced overhead. These forms of sensing are, however, susceptible to data association problems, owing to the unknown relation between measurements and detected objects or targets.
In this chapter, we provide an overview of the fundamental tools used to solve mapping, tracking, and simultaneous localization and mapping (SLAM) problems.
We distinguish the different types of sensing problems and then focus on mapping and SLAM as running examples. Starting from the applicable models and definitions, we describe the different algorithmic approaches, with a particular focus on how to deal with data association problems.
In particular, methods based on random finite set theory and Bayesian graphical models are introduced in detail. A numerical study with synthetic and experimental data is then used to compare these approaches in a variety of scenarios.
§ MOTIVATION AND INTRODUCTION
Integrated sensing and communication (ISAC) has become one of the main differentiators of 6G with respect to previous generations of communication systems [1]. Foreseen applications of ISAC in 6G include providing useful information for optimizing communication metrics, as well as
the support of challenging use cases, such as extended reality (XR) and cooperating robots, which require high-data-rate communication and cm-level, ultra-fast sensing. Opening up operation at higher frequencies (such as the D-band between 130 GHz and 170 GHz), where more bandwidth is available for Gb/s data rates, complemented with highly directional transmission using large aperture arrays, will lead to
the high delay and angle resolution needed to provide accurate 6D (3D orientation and 3D position) localization and sensing. The high carrier frequencies also lead to new opportunities for material characterization and spectroscopy. While it is yet unclear which of these applications will be part of the 6G ecosystem, the widespread use of sensing is unquestionable. Already in 5G mmWave, sensing can play an important role, providing significant situational awareness with little or no change to existing signals and infrastructure.
The term sensing is often narrowed from its broader definition (i.e., to detect events, to measure changes in the environment, or to measure a physical property) to mean radar-like sensing. However, it can also cover uplink and downlink channel (parameter) estimation, radio frequency sensing, spectroscopy, weather monitoring, as well as any downstream processes that rely on sensing data. In this sense, 5G and earlier generations already perform some form of sensing, if only for channel estimation [2] and positioning [3]. This definition of sensing also implies that ISAC is much more than simply using a common waveform over common hardware and processing the backscattered signal: it also includes localizing connected user equipments (UEs) and improving communication with the aid of sensing information. This broader definition thus accounts for a large potential for 6G systems, a potential that already applies to 5G but is largely untapped.
Sensing can be roughly grouped into three categories: monostatic, bistatic, and multistatic sensing. In monostatic sensing, the transmitter and receiver are co-located and thus share complete knowledge of the transmitted signal and clock [4]. This type of sensing is, for example, used in automotive radar, where static landmarks are mapped and/or dynamic objects are tracked. In a communication system, monostatic sensing can rely on abundant data signals, which improves availability and accuracy compared to the use of sparse pilots. In bistatic sensing, the receiver may have only partial knowledge of the transmitted signal and may not have access to the transmitter clock. Finally, in multistatic sensing, several receivers are employed to perform the sensing task. In certain sensing problems, the receiver or transmitter may itself have an unknown state (e.g., position and orientation), which must be inferred while tracking objects [5]. An example is radio localization, which relies on dedicated pilot signals to determine a UE's position and clock bias [3].
The transmitter or receiver with unknown state is (confusingly) called the sensor in the tracking and mapping literature. Problems with an unknown sensor state include SLAM [6] (when the sensed objects/targets are static landmarks) and simultaneous localization and tracking (SLAT) [7] (when the sensed objects/targets are themselves moving). An overview of the different sensing modalities is depicted in Fig. <ref>.
Examples of different sensing modalities. In the flowchart, the main steps in radio SLAM are shown, starting with optimized signal shaping, based on prior sensor and map information, followed by signal transmission (either remote or at a sensor) and reception. Then a channel estimation routine is executed, which provides range and angle measurements. These measurements must be associated with previously seen targets, before the map and sensor state are updated.
Nearly all sensing applications that involve multiple objects/targets suffer from a so-called DA problem, though it manifests in different ways for different types of sensing tasks [8]. In monostatic sensing, DA arises from the need to relate measurements at one time to objects (or tracks) detected at a previous time. In multistatic sensing, DA arises from the unknown relation between measurements at one receiver and measurements at another receiver corresponding to the same object. In bistatic sensing, DA arises, similarly to monostatic sensing, from the unknown relation between measurements and previously detected objects or tracks. Hence, in monostatic and bistatic sensing, DA is due to tracking over time, while snapshot-based DA is only present in multistatic sensing.
To solve DA problems, a variety of solutions exist, ranging from classic linear assignment algorithms that find the best association, to probabilistic methods based on BP or RFS theory. Table <ref> provides an overview of the different mapping and tracking problems. Note that the pure localization problem (e.g., as in GPS or in 5G) does not suffer from any DA problem, since the different sources use distinct and separable signals.
Problem classification.
Term | Objects | Sensor state | Data association
mapping | static (landmarks) | known | unknown
tracking | moving (targets) | known | unknown
localization | – | unknown | known
SLAM | static (landmarks) | unknown | unknown
SLAT | moving (targets) | unknown | unknown
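To make the linear-assignment approach concrete, the following sketch (not from the chapter; the function names and the gating threshold are illustrative) associates predicted landmark measurements with received measurements by minimizing the total squared distance, discarding pairs whose cost exceeds a gate so that clutter stays unassociated:

```python
# Nearest-neighbour data association via the linear assignment problem:
# rows are predicted landmark measurements, columns are received measurements,
# and the cost is the squared Euclidean distance between them.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted, received, gate=1.0):
    """Return (landmark_idx, measurement_idx) pairs with cost below the gate."""
    # Pairwise squared distances between all predictions and measurements.
    cost = np.linalg.norm(predicted[:, None, :] - received[None, :, :], axis=-1) ** 2
    rows, cols = linear_sum_assignment(cost)  # globally optimal hard assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] < gate]

predicted = np.array([[0.0, 0.0], [5.0, 5.0]])
received = np.array([[5.1, 4.9], [0.2, -0.1], [9.0, 9.0]])  # last one is clutter
pairs = associate(predicted, received)  # clutter measurement 2 stays unassociated
```

Probabilistic methods such as BP or RFS filters replace this hard assignment with a distribution over possible associations.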
Given the importance of solving DA problems in all types of sensing tasks in ISAC, our ambition with this chapter is two-fold. First, we aim to give the reader an introduction to mapping, tracking, localization, and SLAM in the context of mmWave communication, considering both monostatic and bistatic sensing. Second, we aim to provide a gentle introduction to the fundamentals of RFS-theory filters for solving the corresponding DA problems. We note that mapping and tracking are special cases of SLAM and SLAT, respectively. For that reason, our focus will be on SLAM; the corresponding mapping approaches are then easily obtained by fixing the sensor state.
Chapter organization
The remainder of this chapter is structured as follows. In Sect. <ref>, we describe a scenario for SLAM, considering both monostatic and bistatic settings. Important concepts such as state, dynamics, and measurements are introduced, along with the associated notation. In Sect. <ref>, the different methodologies for solving the ISAC SLAM problem are covered, starting from a broad introduction of the field and then zooming in on two Bayesian approaches: one using RFS theory and one using BP. Without aiming for mathematical completeness, several practical SLAM methods are described at a high level. Exemplary results using simulated and experimental data are provided in Sect. <ref>. Finally, Sect. <ref> concludes this chapter with an outlook on the main problems we foresee and potential solution strategies. These unsolved problems give an indication of the richness of ISAC SLAM.
§ SCENARIOS AND MODELS
The considered scenario is illustrated in Fig. <ref>, in which a single BS periodically sends a downlink positioning reference signal (PRS) to the UE, and the signal can reflect and scatter as it propagates through the medium. The physical structures in the environment that reflect and scatter the signals can be decomposed into parametric point representations, referred to as landmarks. Reflecting surfaces are modeled by the mirror image of the BS, as virtual anchors (VAs). Small objects that scatter the signal are modeled by their location, as scattering points (SPs) [5]. The state of the $i$-th landmark is $\boldsymbol{x}^i = [(\boldsymbol{x}^i_\textrm{LM})^\top \; m^i]^\top$, where $\boldsymbol{x}^i_\textrm{LM} \in \mathbb{R}^3$ represents the position of landmark $i$ and $m^i \in \lbrace \mathrm{BS}, \mathrm{VA}, \mathrm{SP} \rbrace$ represents the landmark type. If the environment consists of $I$ landmarks in total, the map of the environment can be represented by the set of all landmarks $\mathcal{X} = \lbrace \boldsymbol{x}^1, \ldots, \boldsymbol{x}^I \rbrace$.
The full UE state comprises the pose and the clock. The pose represents the position and orientation of the UE, whereas the clock represents the parameters required to synchronize the local clock to the network clock (perfect synchronization of the UE to the BS is not a reasonable assumption in practice). Let $\boldsymbol{s}_{k-1}$ denote the UE state at time $k-1$; assuming the process noise is zero-mean Gaussian, the transition density of the UE state can be expressed as
\begin{equation}\label{transition_density}
f(\boldsymbol{s}_{k}|\boldsymbol{s}_{k-1},\boldsymbol{u}_{k})=\mathcal{N}(\boldsymbol{s}_{k};\boldsymbol{v}\left(\boldsymbol{s}_{k-1},\boldsymbol{u}_{k} \right),\boldsymbol{Q}_{k-1}),
\end{equation}
where $\boldsymbol{Q}_{k-1}$ is the process noise covariance and $\boldsymbol{v}(\cdot)$ the transition model in which $\boldsymbol{u}_{k}$ denotes a known control input.
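As a minimal illustration of this transition density, the sketch below assumes a hypothetical 2D constant-velocity state $s = [x, y, v_x, v_y]$ with an acceleration control input; the concrete motion model and names are our choice for illustration, not taken from the chapter:

```python
# Sampling from the transition density f(s_k | s_{k-1}, u_k) = N(v(s_{k-1}, u_k), Q)
# for an illustrative 2D constant-velocity model with acceleration control u.
import numpy as np

DT = 0.1  # sampling interval [s] (assumed)

def v(s, u):
    """Deterministic part of the transition model: s_k = F s_{k-1} + G u_k."""
    F = np.array([[1, 0, DT, 0],
                  [0, 1, 0, DT],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[0.5 * DT**2, 0],
                  [0, 0.5 * DT**2],
                  [DT, 0],
                  [0, DT]])
    return F @ s + G @ u

def sample_transition(s_prev, u, Q, rng):
    """Draw s_k ~ N(v(s_{k-1}, u_k), Q)."""
    return rng.multivariate_normal(v(s_prev, u), Q)

rng = np.random.default_rng(0)
s0 = np.array([0.0, 0.0, 1.0, 0.0])  # at the origin, moving along x at 1 m/s
s1 = sample_transition(s0, np.zeros(2), 1e-9 * np.eye(4), rng)
```

With near-zero process noise the sample stays close to the deterministic prediction, i.e., the UE advances by roughly $v_x \cdot \Delta t$ along $x$.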
A mmWave downlink scenario with a single BS and a moving UE. The location of the BS is known. The BS sends OFDM signals to the UE over the environment. The UE utilizes the received signals to estimate a set of channel parameters $\mathcal{Z}_k$, which depend on the underlying geometry and can be used by the UE to track its own state and build a map of the environment.
At time step $k$, the UE receives OFDM downlink signals from the BS, e.g., as modeled in [9], where the propagation channel over subcarrier $\kappa$ can be expressed as
\begin{align}
\boldsymbol{H}_{\kappa,k}=\sum_{i=1}^{I_k} g_{k}^{i} e^{-\jmath 2 \pi \kappa \Delta_f \tau^{i}_k} \boldsymbol{a}_{\text{UE}}(\boldsymbol{\theta}_{k}^{i})\boldsymbol{a}^\top_{\text{BS}}(\boldsymbol{\phi}_{k}^{i}),
\end{align}
where $\Delta_f$ is the subcarrier spacing, and $\boldsymbol{a}_{\text{UE}}(\cdot)$ and $\boldsymbol{a}_{\text{BS}}(\cdot)$ are the UE and BS array response vectors.
This channel consists of the LOS path, along which the signal reaches the UE directly, and/or NLOS paths, along which the signal is reflected by reflecting surfaces or scattered by small objects before reaching the UE. The $i$-th path is described by a complex gain $g_{k}^{i}$, a TOA $\tau_{k}^{i}$, an AOA pair $\boldsymbol{\theta}_{k}^{i}$ in azimuth and elevation, and an AOD pair $\boldsymbol{\phi}_{k}^{i}$ in azimuth and elevation, which depend on the hidden geometric relation among the BS, UE, and landmarks, as shown, e.g., in <cit.>.
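The channel model above can be sketched numerically as follows, assuming (for illustration only) half-wavelength uniform linear arrays at both ends and azimuth-only angles; the chapter's array geometry and angle conventions may differ:

```python
# Building H_{kappa,k} = sum_i g_i * exp(-j 2 pi kappa Delta_f tau_i)
#                        * a_UE(theta_i) a_BS(phi_i)^T
# for half-wavelength ULAs, using only azimuth angles for brevity.
import numpy as np

def ula_response(n_ant, az):
    """Half-wavelength ULA steering vector for azimuth angle az [rad]."""
    return np.exp(-1j * np.pi * np.arange(n_ant) * np.sin(az))

def channel(kappa, delta_f, gains, delays, aoa_az, aod_az, n_ue=4, n_bs=8):
    H = np.zeros((n_ue, n_bs), dtype=complex)
    for g, tau, th, ph in zip(gains, delays, aoa_az, aod_az):
        phase = np.exp(-1j * 2 * np.pi * kappa * delta_f * tau)  # delay phase ramp
        # Each path contributes a rank-one term: outer product of steering vectors.
        H += g * phase * np.outer(ula_response(n_ue, th), ula_response(n_bs, ph))
    return H

# Single-path example with illustrative parameter values.
H = channel(kappa=3, delta_f=120e3, gains=[1.0], delays=[100e-9],
            aoa_az=[0.2], aod_az=[-0.1])
```

Each path contributes a rank-one term to the channel matrix; this structure is what allows per-path angles and delays to be estimated from $\boldsymbol{H}_{\kappa,k}$ across subcarriers.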
Based on the downlink pilot signals received at time step $k$, the UE can execute a channel estimator, for example, [11, 12, 13], and acquire estimates of the multipath angles and delays as measurements, denoted $\mathcal{Z}_{k}=\{\boldsymbol{z}_{k}^{1},\dots, \boldsymbol{z}_{k}^{\hat{{I}}_{k}} \}$. Due to clutter and misdetected landmarks (e.g., due to low path amplitude $|g_{k}^{i}|$ or non-resolvable paths), $\hat{{I}}_{k}$ is usually not equal to the true number of paths $I_{k}$. Accounting for measurement noise, the measurement originating from landmark $\boldsymbol{x}^{i}$ can be modeled as
\begin{align}
\boldsymbol{z}_{k}^{i} = \boldsymbol{h}(\boldsymbol{x}^{i},\boldsymbol{s}_{k}) + \boldsymbol{r}_{k}^{i}, \qquad \boldsymbol{r}_{k}^{i} \sim \mathcal{N}(\boldsymbol{0},\boldsymbol{R}_{k}^{i}),
\end{align}
where $\boldsymbol{R}_k^i$ is the measurement covariance, which can be determined by the Fisher information matrix of channel parameters [14], and
$\boldsymbol{h}(\boldsymbol{x}^{i},\boldsymbol{s}_{k})=[\tau_{k}^{i},(\boldsymbol{\theta}_{k}^{i})^{\mathsf{T}},(\boldsymbol{\phi}_{k}^{i})^{\mathsf{T}}]^{\mathsf{T}}$ represents the nonlinear function that transforms the geometric information to the TOA, AOA and AOD.
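A sketch of such a geometric function $\boldsymbol{h}(\cdot)$ for a path scattered by a point landmark, under the simplifying assumption (ours, for illustration) that the UE and BS arrays are aligned with the global frame, i.e., ignoring UE orientation and clock bias:

```python
# Geometric map h(x, s) -> [TOA, AOA_az, AOA_el, AOD_az, AOD_el] for a path
# BS -> landmark -> UE, with arrays aligned to the global frame.
import numpy as np

C = 3e8  # speed of light [m/s]

def angles(frm, to):
    """Azimuth and elevation of the direction from `frm` towards `to`."""
    d = to - frm
    az = np.arctan2(d[1], d[0])
    el = np.arcsin(d[2] / np.linalg.norm(d))
    return az, el

def h(x_lm, p_ue, p_bs):
    # TOA: total path length (BS -> landmark -> UE) over the speed of light.
    toa = (np.linalg.norm(x_lm - p_bs) + np.linalg.norm(p_ue - x_lm)) / C
    aoa = angles(p_ue, x_lm)  # direction of arrival, seen from the UE
    aod = angles(p_bs, x_lm)  # direction of departure, seen from the BS
    return np.array([toa, *aoa, *aod])

# Landmark midway between BS (origin) and UE, all on the x-axis.
z = h(np.array([5.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0]), np.zeros(3))
```

In this geometry the path length is 10 m, the arrival direction at the UE points back along the negative $x$-axis (azimuth $\pi$), and the departure azimuth at the BS is zero.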
§ METHODS FOR MAPPING AND SLAM
The objective of SLAM is to determine or approximate the joint posterior density of the UE trajectory and landmark states $f(\boldsymbol{s}_{1:k},\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0})$, given the initial UE pose $\boldsymbol{s}_{0}$, measurements $\mathcal{Z}_{1:k}$, and controls $\boldsymbol{u}_{1:k}$ up to and including time instant $k$. In this section, we first briefly overview the different SLAM methods, including classical methods, GraphSLAM, and AI-based methods. Then, we dive deeper into two methods, based on RFS theory and on BP, which are natural choices for dealing with DA.
§.§ Overview of Different Methods
Notation for important variables.
$\boldsymbol{s}$: state of the user (UE)
$\boldsymbol{x}$: single landmark state
$\boldsymbol{x}_{\mathrm{LM}}$: landmark location
$m$: landmark type
$\boldsymbol{\theta}$: angle of arrival (AOA) pair
$\boldsymbol{\phi}$: angle of departure (AOD) pair
${\tau}$: time of arrival (TOA)
$g$: channel gain
$\mathcal{X}$: landmark set
$\mathcal{Z}$: measurement set
$\boldsymbol{z}$: single measurement
$c$: speed of light
$k$: time index
$n$: particle index
$i$: Bernoulli component index within a multi-Bernoulli; also used as GM component index
$j$: global hypothesis (multi-Bernoulli) index
$\kappa$: subcarrier index
$p_{\mathrm{D}}$: detection probability
$\boldsymbol{u}$: control input
$w^{(n)}$: weight of particle $n$
$M$: number of GM components
$\boldsymbol{\eta}^{(n,i)}$, $\boldsymbol{\mu}^{(n,i)}$, $\boldsymbol{\Sigma}^{(n,i)}$: weight, mean and covariance of GM component $i$
$\omega$: global hypothesis weight
Classical Methods
The SLAM problem requires solving the joint posterior density of the UE and landmarks, conditioned on the recorded observations and control inputs up to the current time instant. A common approach is to approximate the joint posterior density as a Gaussian distribution and to utilize, for example, an EKF to estimate the posterior [15]. An alternative solution to SLAM is based on an exact factorization of the posterior into a product of conditional landmark distributions and a distribution over UE trajectories, leading to the widely used FastSLAM algorithm [16]. The classical SLAM methods solve the problem in two steps: first, the DA is solved, and thereafter Bayesian filtering is used to approximate the posterior, assuming the DA does not contain any uncertainty. This two-tiered approach has been demonstrated to work well in practice [15, 16], but it is sensitive to DA uncertainty [17].
In GraphSLAM, the DA problem is usually addressed by the SLAM front-end [17, 18]. Once the DA is known, GraphSLAM algorithms compute the joint posterior density of the UE trajectory and landmarks by transforming the joint posterior density into a graphical network [19]. The nodes in the network represent the UE states at different points in time or the landmarks, and the edges represent the constraints between two UE states, or between a landmark state and a UE state, each consisting of a probability distribution over the relative transformation between two nodes [20]. These two types of constraints are obtained from the motion model of the UE or the measurement model of the UE and landmarks [21]. Once the graph is constructed, the SLAM problem becomes determining the most likely configuration of the nodes, i.e., the one that maximizes the posterior, which can be solved by standard optimization techniques [22, 23]. Unlike many other SLAM methods, e.g., EKF-SLAM and FastSLAM, which operate online, GraphSLAM performs batch estimation, where the received measurements are processed in batches.
AI-based SLAM
The combination of deep learning and SLAM, also known as Deep SLAM, is yet to become the default SLAM strategy but has received considerable attention in recent years, especially for vision SLAM (SLAM for vision sensors). The idea is often to maintain a traditional SLAM pipeline but to replace manually designed models and loss functions, which may not be optimal, with optimized neural networks.
For instance, [24, 25] describe methods to use deep learning for key-point detection, and [26] presents a technique to robustly estimate the fundamental matrix using deep learning. There are also promising attempts at replacing a larger part of the pipeline with deep learning. For instance, [27, 28] use deep learning to jointly estimate a depth map and a pose for each image, by pairwise comparison of images, and avoid some of the machinery which is common to most vision SLAM solutions.
Deep learning for mmWave SLAM remains virtually unexplored. However, in spite of the differences between mmWave and vision data, it is still possible that ideas from deep vision SLAM are applicable to mmWave SLAM. As an example, vision SLAM often involves a feature matching problem, where keypoints from different images are associated with each other, which resembles the DA problem that we face in mmWave SLAM. SuperGlue [29] demonstrated excellent performance on the feature matching problem using transformer-like architectures [30], and related architectures have also been used to handle DA in settings that are more closely related to mmWave data [31].
RFS-based SLAM
An RFS is a random variable whose possible outcomes are sets with a finite number of unique elements; both the number of elements in the set and the elements themselves are random. Therefore, unlike random vectors, where both the number and the order of the elements are fixed, RFSs are invariant to the order of elements and can easily add or remove elements [32, 6, 33]. These merits make RFSs particularly attractive for modeling the unknown environment and the detected measurements in SLAM problems, since uncertainty in both the number of landmarks/measurements and their individual states can be inherently modeled. In RFS-based SLAM methods, different RFSs, for example, the LMB RFS in [34, 35], the GLMB RFS in [36, 37], the PPP RFS in [5, 6], and the PMBM RFS in [10, 38], are utilized to model the unknown environment.
To solve the SLAM problem, the joint posterior density of the UE and landmarks can be approximated using particles [10, 39], sigma-points [40, 41], an EKF [38, 42], or the IPLF [43].
BP-based SLAM with Factor Graphs
The SLAM problem above can also be efficiently solved by computing the MPD of the hidden random variables, including the DA.
The MPDs for SLAM are approximately determined by BP (i.e., message passing) with the sum-product algorithm [44], an approach known as BP-SLAM [45, 46].
BP-SLAM was developed from BP-based MTT [7] and represents each landmark state by a random vector instead of an RFS.
To handle the unknown number of landmarks, the landmark vector is augmented with an existence variable.
Since the hidden variables are marginalized by BP, the correlation between the UE state and the landmark states cannot be tracked in BP-SLAM.
Similarly to PMBM-SLAM, the landmarks are divided into two parts: undetected landmarks and detected landmarks.
However, the Poisson part is only partially adopted for modeling undetected landmarks, implicitly requiring additional processing outside the BP framework.
Nevertheless, BP-SLAM is powerful due to its generality in complex scenarios (e.g., multiple BSs).
BP is also used in some RFS-based SLAM methods, e.g., for computing the marginal probabilities of the DA [47].
§.§ RFS-SLAM
The joint posterior RFS-SLAM density can be decomposed as [6]
\begin{equation}
f(\boldsymbol{s}_{1:k},\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0}) = f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0}) f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k}). \label{eq:jointPDF}
\end{equation}
The recursion for the joint RFS-SLAM density is equivalent to jointly propagating the posterior density of the UE trajectory, $f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0})$, and the posterior density of the map conditioned on the UE trajectory, $f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k})$. Since $\mathcal{X}$ is an RFS (i.e., an unordered set of arbitrary cardinality), $f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k})$ is a so-called set density. An example of a set density is a PPP density, given by
\begin{equation}
f_{\mathrm{PPP}}(\mathcal{X})=e^{-\int {\lambda}(\boldsymbol{x}')\text{d}\boldsymbol{x}'} \prod_{\boldsymbol{x} \in \mathcal{X}} {\lambda}(\boldsymbol{x}),\label{PPP}
\end{equation}
where the argument $\mathcal{X}$ can be a set of arbitrary cardinality and ${\lambda}(\boldsymbol{x})$ denotes the PPP's intensity function.
Further examples will be provided in Sect. <ref>.
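As a quick numerical illustration of the PPP set density, the sketch below evaluates it for an intensity chosen, purely for illustration, to be uniform over a square region:

```python
# Evaluating the PPP set density f(X) = exp(-∫λ) * Π_{x in X} λ(x)
# for a uniform intensity λ(x) = MU / AREA over a square region.
import numpy as np

MU, AREA = 4.0, 100.0  # expected number of landmarks, region area (assumed values)

def lam(x):
    """Uniform intensity; integrates to MU over the region."""
    return MU / AREA

def f_ppp(X):
    """Set density of a PPP, evaluated on a list of landmark positions."""
    val = np.exp(-MU)          # exp(-∫λ(x') dx') = exp(-MU)
    for x in X:
        val *= lam(x)          # one intensity factor per element of the set
    return val
```

Note that the same function handles sets of any cardinality: the empty map has density $e^{-\mu}$, and each additional landmark multiplies in one intensity factor.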
We consider the point target measurement model, where each landmark can give at most one measurement. Therefore, the RFS likelihood function $g(\mathcal{Z}_{k}|\boldsymbol{s}_{k},\mathcal{X})$ is given by <cit.>
\begin{align}
& g(\mathcal{Z}_{k}|\boldsymbol{s}_{k},\mathcal{X}) = e^{-\int c(\boldsymbol{z}) \mathrm{d} \boldsymbol{z}} \sum_{\mathcal{Z}_{k}^c\uplus\mathcal{Z}_{k}^1 \ldots \uplus \mathcal{Z}_{k}^{|\mathcal{X}|}=\mathcal{Z}_{k}}\prod_{\boldsymbol{z} \in \mathcal{Z}_{k}^c}c(\boldsymbol{z})\prod_{i=1}^{|\mathcal{X}|}\ell(\mathcal{Z}_{k}^i|\boldsymbol{s}_{k},\boldsymbol{x}^i),\label{likelihood}
\end{align}
where $\mathcal{Z}_{k}^c$ is the clutter measurement set with clutter intensity $c(\boldsymbol{z})$, and
\begin{align}
\ell(\mathcal{Z}_{k}^i |\boldsymbol{s}_{k},\boldsymbol{x}^i)=
\begin{cases}
1-p_{\text{D}}(\boldsymbol{x}^{i},\boldsymbol{s}_{k}) \quad & \mathcal{Z}_{k}^{i}=\emptyset, \\ p_{\text{D}}(\boldsymbol{x}^{i},\boldsymbol{s}_{k})f(\boldsymbol{z}|\boldsymbol{x}^{i},\boldsymbol{s}_{k}) \quad& \mathcal{Z}_{k}^{i}=\{\boldsymbol{z} \}, \\0 \quad & \mathrm{otherwise},
\end{cases}
\end{align}
where $p_{\text{D}}(\boldsymbol{x}^{i},\boldsymbol{s}_{k})\in [0,1]$ is the detection probability, and $f(\boldsymbol{z}|\boldsymbol{x}^{i},\boldsymbol{s}_{k})$ is given by (<ref>).
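The per-landmark factor $\ell(\cdot)$ can be sketched directly from its three cases, here with a constant detection probability and a Gaussian stand-in for $f(\boldsymbol{z}|\boldsymbol{x}^i,\boldsymbol{s}_k)$ (both are illustrative assumptions, not the chapter's exact model):

```python
# The per-landmark likelihood factor ℓ(Z^i | s, x^i): a point landmark is
# either missed (empty set), generates exactly one measurement, or the
# hypothesis is impossible (more than one measurement).
import numpy as np
from scipy.stats import multivariate_normal

P_D = 0.9  # constant detection probability (assumed)

def ell(Z_i, z_mean, R):
    """Z_i: list of measurements assigned to this landmark."""
    if len(Z_i) == 0:
        return 1.0 - P_D                                        # missed detection
    if len(Z_i) == 1:
        return P_D * multivariate_normal.pdf(Z_i[0], mean=z_mean, cov=R)
    return 0.0                  # a point landmark cannot generate >1 measurement
```

Here `z_mean` plays the role of $\boldsymbol{h}(\boldsymbol{x}^i,\boldsymbol{s}_k)$ and `R` that of $\boldsymbol{R}_k^i$.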
UE trajectory density
Given the priors of the UE $f(\boldsymbol{s}_{1:k-1}\lvert \mathcal{Z}_{1:k-1}, \boldsymbol{u}_{1:k-1},\boldsymbol{s}_{0})$ and the map $f(\mathcal{X} \lvert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k-1})$, the likelihood $g(\mathcal{Z}_{k}|\boldsymbol{s}_{k},\mathcal{X})$, and the motion model $f(\boldsymbol{s}_{k}|\boldsymbol{s}_{k-1},\boldsymbol{u}_{k})$, the updated UE density $f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0})$ can be obtained by
\begin{align}
&f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0}) = \frac{ f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k-1}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0})f(\mathcal{Z}_{k}\lvert\mathcal{Z}_{1:k-1},\boldsymbol{s}_{0:k})}{f(\mathcal{Z}_{k}\lvert\mathcal{Z}_{1:k-1})},
\end{align}
with $f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k-1}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0})= f(\boldsymbol{s}_{1:k-1}\lvert \mathcal{Z}_{1:k-1}, \boldsymbol{u}_{1:k-1},\boldsymbol{s}_{0})f(\boldsymbol{s}_{k}\lvert \boldsymbol{u}_{k},\boldsymbol{s}_{k-1})$ denoting the prediction of the UE trajectory.
Map density
The posterior map density $f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k})$ can be obtained by
\begin{align}
&f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k}) = \frac{f(\mathcal{X} \lvert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k})g(\mathcal{Z}_{k}|\boldsymbol{s}_{k},\mathcal{X})}{f(\mathcal{Z}_{k}\lvert\mathcal{Z}_{1:k-1},\boldsymbol{s}_{0:k})},
\end{align}
where $f(\mathcal{X} \lvert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k})=f(\mathcal{X} \lvert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k-1})$, as the landmarks are all fixed.
§.§.§ UE Trajectory Density Representation
We follow the RBP approach <cit.> and approximate the posterior density of the UE trajectory using a weighted set of $N$ particles [50]
\begin{equation}
f(\boldsymbol{s}_{1:k}\lvert \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k},\boldsymbol{s}_{0}) \approx \sum_{n=1}^N w_k^{(n)} \delta \left(\boldsymbol{s}_{1:k} - \boldsymbol{s}_{1:k}^{(n)} \right),
\end{equation}
where $\delta(\cdot)$ is the Dirac delta distribution, $\boldsymbol{s}_{1:k}^{(n)}$ denotes the $n$-th particle trajectory and ${w}_k^{(n)}$ the associated weight. In RFS-SLAM, the UE trajectory is estimated using a PF and the posterior RFS-SLAM density is represented by a weighted set of $N$ particles $\lbrace w_k^{(n)}, \boldsymbol{s}_{0:k}^{(n)}, f(\mathcal{X} \lvert \mathcal{Z}_{1:k}, \boldsymbol{s}^{(n)}_{0:k}) \rbrace_{n=1}^N.$
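The bookkeeping behind this weighted-particle representation can be sketched as follows (log-domain weight normalization and systematic resampling; these are standard PF routines, not specific to the chapter's filter):

```python
# Standard particle-filter housekeeping: normalize log-weights stably and
# resample particle indices with systematic resampling.
import numpy as np

def normalize(log_w):
    """Convert unnormalized log-weights to normalized weights (stable)."""
    w = np.exp(log_w - np.max(log_w))  # subtract max to avoid overflow
    return w / np.sum(w)

def systematic_resample(w, rng):
    """Return N particle indices drawn with systematic resampling."""
    N = len(w)
    positions = (rng.random() + np.arange(N)) / N  # one offset, N strata
    return np.searchsorted(np.cumsum(w), positions)

rng = np.random.default_rng(1)
w = normalize(np.log(np.array([1.0, 3.0, 6.0])))
idx = systematic_resample(w, rng)  # indices of particles surviving resampling
```

Resampling is what keeps the particle set from degenerating onto a single trajectory hypothesis.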
In the following, we present two map representations based on PPP and PMBM RFS. Then, we will describe how the particle weights $w_k^{(n)}$ are computed.
§.§.§ PHD Map Representation
In PHD-SLAM, the posterior RFS map is approximated by a PPP RFS following assumed-density filtering [6, 51]
\begin{equation}
f(\mathcal{X} \vert \mathcal{Z}_{1:k}, \boldsymbol{s}_{0:k}^{(n)}) \approx \frac{\prod_{\boldsymbol{x} \in \mathcal{X}} v_{k \vert k}(\boldsymbol{x} \vert \boldsymbol{s}_{0:k}^{(n)})}{\exp \left( \int v_{k \vert k}(\boldsymbol{x} \vert \boldsymbol{s}_{0:k}^{(n)}) \textrm{d}x \right)},
\end{equation}
where $v_{k \vert k}(\boldsymbol{x} \vert \boldsymbol{s}_{0:k}^{(n)})$ denotes the PHD, which is equal to the PPP intensity.
The PHD is parametrized using a GM [52]
\begin{equation}\label{eq:phd_gm}
v^{(n)}_{k \vert k}(\boldsymbol{x} \vert \boldsymbol{s}_{0:k}^{(n)}) = \sum_{i=1}^{M_{k}^{(n)}} \eta_{k}^{(n,i)} \mathcal{N}(\boldsymbol{x}; \boldsymbol{\mu}_{k}^{(n,i)}, \boldsymbol{\Sigma}_{k}^{(n,i)}),
\end{equation}
where $M_{k}^{(n)}$ is the number of GM components at time $k$, and $\eta_{k}^{(n,i)}$, $\boldsymbol{\mu}_{k}^{(n,i)}$ and $\boldsymbol{\Sigma}_{k}^{(n,i)}$ are the weight, mean and covariance, respectively, of landmark $i$ for particle $n$. In PHD-SLAM, the trajectory-conditioned map is estimated using a PHD filter and the overall PHD-SLAM density at time $k$ is represented by a set of $N$ particles $ \lbrace w_{k \vert k}^{(n)}, \boldsymbol{s}_{0:k}^{(n)}, v_{k \vert k}^{(n)}(\boldsymbol{x} \vert \boldsymbol{s}_{0:k}^{(n)}) \rbrace_{n=1}^N$.
It is worth noting that the integral of the PHD over any region gives the expected number of landmarks in that region, and the regions with the highest local concentration of expected landmarks are captured by the local maxima of the PHD [52, 6]. Next, we summarize the PHD filtering recursion, that is, the prediction and update steps of the filter.
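The two facts above, pointwise evaluation of the GM-parametrized PHD and its integral being the sum of the component weights, can be sketched as follows (component values are illustrative):

```python
# Evaluating a GM-parametrized PHD v(x) = sum_i eta_i N(x; mu_i, Sigma_i),
# and computing the expected number of landmarks as the sum of the weights
# (each Gaussian integrates to 1, so ∫v(x)dx = sum_i eta_i).
import numpy as np
from scipy.stats import multivariate_normal

def phd_eval(x, etas, mus, Sigmas):
    """Pointwise value of the PHD at x."""
    return sum(eta * multivariate_normal.pdf(x, mean=mu, cov=S)
               for eta, mu, S in zip(etas, mus, Sigmas))

def expected_landmarks(etas):
    """Integral of the PHD over the whole space."""
    return float(np.sum(etas))

# Two components: a well-established landmark and a more tentative one.
etas = [0.8, 0.5]
mus = [np.zeros(2), np.array([10.0, 0.0])]
Sigmas = [np.eye(2), 4 * np.eye(2)]
```

Landmark estimates are typically extracted from the local maxima of `phd_eval`, keeping components whose weight exceeds a threshold.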
Prediction step
If the PHD at the previous time instant, $v^{(n)}_{k-1 \vert k-1}(\cdot)$, is a GM of the form given in (<ref>), then it follows that the predicted PHD is also a GM given by [52]
\begin{equation}
v^{(n)}_{k \vert k-1}(\boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) = v^{(n)}_{k-1 \vert k-1}(\boldsymbol{x} \vert \boldsymbol{s}_{k-1}^{(n)}) + v^{(n)}_{\textrm{B},k}(\boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}).
\end{equation}
The parameters of $v^{(n)}_{k-1 \vert k-1}(\cdot)$ are unchanged since the landmarks are static, $v^{(n)}_{\textrm{B},k}(\cdot)$ is the birth process with $M_{\textrm{B},k}^{(n)}$ GM components, and the number of components in the predicted PHD is $M_{k \vert k-1}^{(n)} = M_{k-1 \vert k-1}^{(n)} + M_{\textrm{B},k}^{(n)}$. The birth process indicates where, and with which intensities, new landmarks appear; commonly, the birth process is either measurement-driven [5] or assumed to be known a priori [52].
Update step
Once the measurement set $\mathcal{Z}_k$ is observed at time $k$, the predicted PHD $v^{(n)}_{k \vert k-1}(\cdot)$ can be updated, and the posterior map PHD is given by [52, 6]
\begin{equation}\label{eq:phd_update}
v^{(n)}_{k \vert k}(\boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) = \big[1 - p_\textrm{D}(\boldsymbol{x}, \boldsymbol{s}_{k}^{(n)}) \big]v^{(n)}_{k \vert k-1}(\boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) + \sum_{\boldsymbol{z} \in \mathcal{Z}_k} \frac{\boldsymbol{\Lambda}(\boldsymbol{z}, \boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) }{c(\boldsymbol{z}) + \int \boldsymbol{\Lambda}(\boldsymbol{z}, \boldsymbol{x}' \vert \boldsymbol{s}_{k}^{(n)}) \textrm{d}\boldsymbol{x}'},
\end{equation}
where
\begin{equation}\label{eq:lambda}
\boldsymbol{\Lambda}(\boldsymbol{z}, \boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) = p_\textrm{D}(\boldsymbol{x}, \boldsymbol{s}_{k}^{(n)}) f(\boldsymbol{z} \vert \boldsymbol{x},\boldsymbol{s}_k^{(n)}) v^{(n)}_{k \vert k-1}(\boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}).
\end{equation}
In (<ref>), the first term on the right-hand side represents the undetected landmarks and the second term the detected landmarks. Since every component of $v^{(n)}_{k \vert k -1}(\cdot)$ is updated by a missed detection and by every measurement, the number of GM components in the updated PHD $v^{(n)}_{k \vert k}(\cdot)$ is $M_{k \vert k}^{(n)} = M_{k \vert k-1}^{(n)} \times (\vert \mathcal{Z}_k \vert + 1)$. In practice, the weight, mean and covariance of the detected landmarks can be updated using any standard Kalman filtering technique, such as the EKF; for details on how to compute the GM parameters, the reader is referred to [5, 6, 52].
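A minimal sketch of this update, assuming (for illustration) a linear-Gaussian measurement model $\boldsymbol{z} = \boldsymbol{x} + \boldsymbol{r}$, constant $p_{\mathrm{D}}$ and uniform clutter intensity; in the SLAM filter proper, the nonlinear $\boldsymbol{h}(\cdot)$ is handled by an EKF linearization instead:

```python
# GM-PHD update: every prior component yields one missed-detection component
# (weight scaled by 1 - p_D) plus one Kalman-updated component per measurement,
# normalized against the clutter intensity c(z) = CLUTTER.
import numpy as np
from scipy.stats import multivariate_normal

P_D, CLUTTER = 0.9, 1e-3  # assumed constants

def gm_phd_update(components, Z, R):
    """components: list of (eta, mu, Sigma); Z: list of measurements."""
    updated = [(eta * (1 - P_D), mu, S) for eta, mu, S in components]  # missed
    for z in Z:
        new = []
        for eta, mu, S in components:
            S_z = S + R                      # innovation covariance (H = I)
            K = S @ np.linalg.inv(S_z)       # Kalman gain
            lik = multivariate_normal.pdf(z, mean=mu, cov=S_z)
            new.append((P_D * eta * lik,     # unnormalized detection weight
                        mu + K @ (z - mu),   # updated mean
                        S - K @ S))          # updated covariance
        norm = CLUTTER + sum(w for w, _, _ in new)  # denominator of the update
        updated += [(w / norm, m, P) for w, m, P in new]
    return updated

comps = [(0.5, np.zeros(2), np.eye(2)), (0.5, np.array([10.0, 0.0]), np.eye(2))]
out = gm_phd_update(comps, [np.array([0.1, -0.1])], 0.01 * np.eye(2))
```

The output indeed has $M (\vert\mathcal{Z}\vert + 1)$ components, which is why pruning and merging of low-weight components are needed in any practical implementation.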
§.§.§ PMBM Map Representation
For notational convenience, we will drop the particle index $n$ in this section. However, please note that all the densities are conditioned on a given particle.
The PMBM-based SLAM filter utilizes the PMBM RFS to represent the environment, conditioned on the ue state. In [47, 48], the PMBM density is proven to be a conjugate prior for (<ref>). Therefore, the posterior keeps the PMBM format if the prior is a PMBM density, so that we can directly update the PMBM components.
Unlike the PHD-based SLAM filter, where the environment is modeled as a PPP RFS, the PMBM RFS considers two types of landmarks: undetected and detected landmarks. The undetected landmarks are landmarks that exist but have never been detected up to the current time, and they are modeled as a PPP RFS $\mathcal{X}_{\mathrm{U}}$. The detected landmarks are landmarks that have been detected at least once up to the current time, and they are modeled as an MBM RFS $\mathcal{X}_{\mathrm{D}}$ [47, 53, 48]. As undetected and detected landmarks are disjoint, a PMBM RFS $\mathcal{X}$ can be expressed as the union of $\mathcal{X}_{\mathrm{U}}$ and $\mathcal{X}_{\mathrm{D}}$. By applying the FISST convolution, the density of the PMBM RFS $\mathcal{X}$ follows [33]
\begin{equation}
f_{\mathrm{PMBM}}(\mathcal{X})=\sum_{\mathcal{X}_{\mathrm{U}}\uplus\mathcal{X}_{\mathrm{D}}=\mathcal{X}} f_{\mathrm{PPP}}(\mathcal{X}_{\mathrm{U}})f_{\mathrm{MBM}}(\mathcal{X}_{\mathrm{D}}),
\end{equation}
where $\uplus$ represents the union of mutually disjoint sets, $f_{\mathrm{PMBM}}(\cdot)$ is the PMBM density, $f_{\mathrm{PPP}}(\cdot)$ is the PPP density and $f_{\mathrm{MBM}}(\cdot)$ is the MBM density.
The PPP density has intensity ${\lambda}(\boldsymbol{x})=\mu {f}_{\mathrm{u}}(\boldsymbol{x})$: the cardinality of $\mathcal{X}_{\mathrm{U}}$ (the number of undetected landmarks) follows a Poisson distribution with mean $\mu$, and the undetected landmarks are independently and identically distributed with spatial density ${f}_{\mathrm{u}}(\boldsymbol{x})$.
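As a quick illustration of this model, one can draw a map realization from the PPP; the sketch below assumes a uniform $f_{\mathrm{u}}$ on a made-up interval (both the region and the mean $\mu$ are illustrative):

```python
import math
import random

def sample_poisson(mu):
    # inversion sampling for a Poisson(mu) draw (the stdlib has no Poisson sampler)
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_ppp(mu, lo=0.0, hi=10.0):
    """One realization of the undetected-landmark PPP: Poisson(mu) cardinality,
    i.i.d. positions with a uniform spatial density f_u on [lo, hi]."""
    return [random.uniform(lo, hi) for _ in range(sample_poisson(mu))]

landmarks = sample_ppp(3.0)
```

Averaged over many realizations, the number of sampled landmarks concentrates around $\mu$, as the Poisson cardinality model prescribes.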
The MBM density is a weighted mixture over multiple global hypotheses (introduced below), such that
\begin{equation}
f_{\mathrm{MBM}}(\mathcal{X}_{\mathrm{D}})= \sum_{j\in\mathbb{I}}\omega^{j}\sum_{\mathcal{X}^{1}\uplus \dots \uplus
\mathcal{X}^{n}=\mathcal{X}_{\mathrm{D}}}\prod_{i=1}^{n}f^{j,i}_{\mathrm{B}}(\mathcal{X}^{i}),\label{MBM}
\end{equation}
where $j$ is an index in the set of global hypotheses $\mathbb{I}$ [47]; $\omega^{j}$ is the weight for global hypothesis $j$, satisfying $\sum_{j\in\mathbb{I}}\omega^{j}=1$;
$n$ is the number of potentially detected landmarks; and $f_{\mathrm{B}}^{j,i}(\cdot)$ is the Bernoulli density of landmark $i$ under global hypothesis $j$, following
\begin{equation}
f^{j,i}_{\mathrm{B}}(\mathcal{X}^{i})=
\begin{cases}
1-r^{j,i} \quad& \mathcal{X}^{i}=\emptyset, \\ r^{j,i}f^{j,i}(\boldsymbol{x}) \quad & \mathcal{X}^{i}=\{\boldsymbol{x}\}, \\ 0 \quad & \mathrm{otherwise},
\end{cases}
\end{equation}
where $r^{j,i} \in [0,1]$ is the existence probability, representing how likely the landmark is to exist, and $f^{j,i}(\cdot)$ is its state density if it exists, which is assumed to be Gaussian. A higher $r^{j,i}$ means the corresponding landmark is more likely to exist. Please note that if $\mathcal{X}^{i}$ is empty (the corresponding landmark does not exist) under the $j$-th global hypothesis, $r^{j,i}=0$, resulting in $f^{j,i}_{\mathrm{B}}(\mathcal{X}^{i})=1$.
By plugging (<ref>) and (<ref>) into (<ref>), we can then rewrite (<ref>) as
\begin{equation}
f_{\mathrm{PMBM}}(\mathcal{X})=\sum_{\mathcal{X}_{\mathrm{U}}\uplus\mathcal{X}_{\mathrm{D}}=\mathcal{X}}e^{-\int {\lambda}(\boldsymbol{x}')d\boldsymbol{x}'} \prod_{\boldsymbol{x} \in \mathcal{X}_{\mathrm{U}}} {\lambda}(\boldsymbol{x})\sum_{j\in\mathbb{I}}\omega^{j}\sum_{\mathcal{X}^{1}\uplus \dots \uplus \mathcal{X}^{n}=\mathcal{X}_{\mathrm{D}}}\prod_{i=1}^n f^{j,i}_{\mathrm{B}}(\mathcal{X}^{i}).
\end{equation}
Then, (<ref>) can be parameterized by
$\lambda(\boldsymbol{x})$ and $\{\omega^{j},\{r^{j,i},f^{j,i}(\boldsymbol{x})\}_{i\in \mathbb{I}^{j}}\}_{j\in \mathbb{I}}$, where $\mathbb{I}^{j}$ is the index set of all considered landmarks under global hypothesis $j$.
Global and local hypotheses
Each detected landmark has been associated to at least one measurement. However, that measurement may have been a false alarm, so the landmark should be interpreted as a potentially existing landmark with a certain existence probability, which motivates the use of a Bernoulli density.
Given the measurements $\mathcal{Z}$, there are two cases for each previously detected landmark: (i) the landmark can be associated to a measurement; (ii) the landmark is not associated to any measurement (it is misdetected). Similarly, given the previously detected landmarks, there are two cases for each measurement: (a) it is associated to a previously detected landmark; (b) it is not associated to any of the previously detected landmarks, but comes from clutter or a newly detected landmark.
In other words, a single measurement can be associated either to a previously detected landmark, or a newly detected landmark or clutter, which are the local hypotheses.
The history of the local hypotheses of a potentially detected landmark, which is referred to as a `single target association history hypothesis', incorporates information on when it was first detected and by which measurement, when it was misdetected, and when it was detected again and with which measurement.
Single target association history hypotheses are generated per landmark, but may be globally inconsistent among landmarks (e.g., a measurement may be assigned to two landmarks).
A global hypothesis, also known as a valid DA, contains one single target association history hypothesis for each potentially detected landmark, with the constraint that each measurement is contained in only one of the single target association history hypotheses [47]. Fig. <ref> visualizes the local hypotheses, a single target association history hypothesis, and the global hypothesis of an exemplary association problem. The summation in (<ref>) is over the global hypotheses.
Example of the global hypotheses when associating the measurement sets at two time steps, $\mathcal{Z}_{1}=\{\boldsymbol{z}_{1}^{1},\boldsymbol{z}_{1}^{2}\}$ and $\mathcal{Z}_{2}=\{\boldsymbol{z}_{2}^{1}\}$, to two landmarks $\boldsymbol{x}^{1}$ and $\boldsymbol{x}^{2}$. Examples of the local hypotheses and a single target association history hypothesis are also shown. Each measurement can be associated to a landmark or to clutter, and each landmark can either be detected with one measurement at each time or misdetected (associated with $\emptyset$ in the figure).
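The combinatorics of valid DAs can be illustrated with a toy enumeration for a single time step (two landmarks, two measurements; this brute-force check is for intuition only and is not part of the referenced algorithms):

```python
from itertools import product

def valid_das(n_landmarks, n_meas):
    """All valid DAs at one time step: each measurement goes to exactly one
    previously detected landmark (1..n_landmarks) or to clutter/new (0), and
    no landmark receives more than one measurement."""
    out = []
    for assign in product(range(n_landmarks + 1), repeat=n_meas):
        claimed = [a for a in assign if a != 0]
        if len(claimed) == len(set(claimed)):  # landmark used at most once
            out.append(assign)
    return out

hyps = valid_das(2, 2)  # two landmarks x^1, x^2; two measurements z^1, z^2
```

Here 7 hypotheses result: the $3\times 3$ possible assignments minus the two that assign both measurements to the same landmark, which already hints at the combinatorial growth discussed later.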
Prediction step
Assume that the PMBM at time $k-1$ is of the form (<ref>) with parameters ${\lambda}_{k-1}(\boldsymbol{x})$ and $\{\omega^{j}_{k-1},\{r^{j,i}_{k-1},f^{j,i}_{k-1}(\boldsymbol{x})\}_{i\in \mathbb{I}_{k-1}^{j}}\}_{j\in \mathbb{I}_{k-1}}$.
As all landmarks are static, there is no prediction step, and the posterior map PMBM can be directly updated with $\mathcal{Z}_{k}$, whose elements are indexed by $p \in \{1,2,\cdots, |\mathcal{Z}_{k}|\}$. Please note that the PHD filter accounts for newly detected landmarks in the prediction step, while the PMBM filter accounts for them in the update step.
Update step
In the PMBM update step, measurements are used to correct the landmark states. Four different cases are considered [48], based on the local hypotheses (i)–(ii) and (a)–(b) discussed above:
a) Undetected landmarks that remain undetected: As a previously undetected landmark can remain undetected with probability $1 - p_\textrm{D}(\boldsymbol{x})$, the updated intensity of the landmarks that remain undetected is given by
\begin{equation}
{\lambda}_{k}(\boldsymbol{x}) = (1 - p_\textrm{D}(\boldsymbol{x})) {\lambda}_{k-1}(\boldsymbol{x}).
\end{equation}
b) Previously undetected landmark is detected for the first time with the measurement $\boldsymbol{z}_{k}^{p}$: A new Bernoulli is created from the measurement $\boldsymbol{z}_{k}^{p}$, with state density $f_{k}^{p}(\boldsymbol{x}\vert \boldsymbol{z}_{k}^{p})$, existence probability $r_{k}^{p}$ and association weight $l_{k}^{p}$, given by
\begin{align}
&r_{k}^{p}=\frac{\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x}) {\lambda}_{k-1}(\boldsymbol{x}) \text{d}\boldsymbol{x}}{c(\boldsymbol{z})+\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x}) {\lambda}_{k-1}(\boldsymbol{x}) \text{d}\boldsymbol{x}},\\
&f_{k}^{p}(\boldsymbol{x}\vert \boldsymbol{z}_{k}^{p})=\frac{p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x}) {\lambda}_{k-1}(\boldsymbol{x})}{\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x}) {\lambda}_{k-1}(\boldsymbol{x}) \text{d}\boldsymbol{x}},\\
&l_{k}^{p}=c(\boldsymbol{z})+\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x}) {\lambda}_{k-1}(\boldsymbol{x}) \text{d}\boldsymbol{x},\label{first_weight}%\\
\end{align}
where $r_{k}^{p}\le 1$, since the measurement can originate from a real landmark (the landmark exists) or from clutter (the landmark does not exist). Note that the indices $j$ and $i$ do not appear, since the landmark was undetected before and is therefore not contained in any previous global hypothesis.
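For intuition, case b) admits a closed form when the PPP intensity is a single 1-D Gaussian, $\lambda(x)=\mu\,\mathcal{N}(x;m_0,P_0)$, with an identity measurement model and constant $p_\mathrm{D}$ (all toy assumptions; the numeric values are made up):

```python
import math

def gauss(x, m, var):
    # scalar Gaussian density N(x; m, var)
    return math.exp(-0.5 * (x - m) ** 2 / var) / math.sqrt(2 * math.pi * var)

def new_bernoulli(z, mu=2.0, m0=0.0, P0=4.0, p_d=0.9, R=0.25, clutter=0.05):
    """First-time detection with measurement z for the 1-D PPP intensity
    lambda(x) = mu * N(x; m0, P0). Returns (r, mean, var, l): existence
    probability, Gaussian state density parameters, and association weight."""
    e = p_d * mu * gauss(z, m0, P0 + R)  # closed form of the integral term
    l = clutter + e                      # association weight l_k^p
    r = e / l                            # existence probability, always <= 1
    K = P0 / (P0 + R)                    # Kalman gain for the state density
    return r, m0 + K * (z - m0), (1 - K) * P0, l

r, m, P, l = new_bernoulli(z=0.3)
```

The existence probability is the detection term divided by the clutter-plus-detection term, exactly mirroring the structure of the expressions above.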
c) Previously detected landmark that is misdetected: A landmark not being associated to any measurement may be due to imperfect detection performance at the sensor or because the landmark in fact does not exist. The updated Bernoulli with association weight $l_{k}^{j,i,0}$ has a reduced existence probability and the same state density:
\begin{align}
&r_{k}^{j,i,0}=\frac{r_{k-1}^{j,i}\int \big(1-p_{\mathrm{D}}(\boldsymbol{x})\big)f_{k-1}^{j,i}(\boldsymbol{x})\text{d}\boldsymbol{x}}{1-r_{k-1}^{j,i}+r_{k-1}^{j,i}\int \big(1-p_{\mathrm{D}}(\boldsymbol{x})\big)f_{k-1}^{j,i}(\boldsymbol{x})\text{d}\boldsymbol{x}},\\
&f_{k}^{j,i,0}(\boldsymbol{x})=f_{k-1}^{j,i}(\boldsymbol{x}),\\
&l_{k}^{j,i,0}=1-r_{k-1}^{j,i}+r_{k-1}^{j,i}\int \big(1-p_{\mathrm{D}}(\boldsymbol{x})\big)f_{k-1}^{j,i}(\boldsymbol{x})\text{d}\boldsymbol{x},\label{misdetect_weight}
\end{align}
where the index $0$ represents no measurement being associated to the landmark. The numerator of (<ref>) is the probability that landmark $\boldsymbol{x}_{k-1}^{j,i}$ exists but is not detected, and $(1-r_{k-1}^{j,i})$ in the denominator of (<ref>) is the probability that landmark $\boldsymbol{x}_{k-1}^{j,i}$ does not exist.
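Under the additional assumption of a constant $p_\mathrm{D}$ (so the state density is exactly unchanged), the misdetection update reduces to two lines; the sketch below uses illustrative values:

```python
def misdetection_update(r_prev, p_d):
    """Bernoulli misdetection update with constant p_D: the state density is
    unchanged, the existence probability shrinks, and the association weight
    is the total probability of producing no detection."""
    l0 = 1 - r_prev + r_prev * (1 - p_d)  # association weight l^{j,i,0}
    r = r_prev * (1 - p_d) / l0           # reduced existence probability
    return r, l0

r, l0 = misdetection_update(r_prev=0.8, p_d=0.9)
```

A landmark that was likely to exist ($r=0.8$) but went undetected with a high $p_\mathrm{D}$ ends up with a much smaller existence probability, which matches the intuition in the text.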
d) Previously detected landmark that is detected again: If a previously detected landmark is detected again with any measurement $\boldsymbol{z}_{k}^{p}$,
then the existence probability, the state density, and the association weight of the corresponding updated Bernoulli are:
\begin{align}
&r_{k}^{j,i,p}=1,\\
&f_{k}^{j,i,p}(\boldsymbol{x}\vert \boldsymbol{z}_{k}^{p})= \frac{p_{\mathrm{D}}(\boldsymbol{x}) f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x})f_{k-1}^{j,i}(\boldsymbol{x})}{\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x})f_{k-1}^{j,i}(\boldsymbol{x})\text{d}\boldsymbol{x}},\\
&l_{k}^{j,i,p}= r_{k-1}^{j,i}\int p_{\mathrm{D}}(\boldsymbol{x})f(\boldsymbol{z}_{k}^{p}|\boldsymbol{x})f_{k-1}^{j,i}(\boldsymbol{x})\text{d}\boldsymbol{x}.\label{detect_again_weight}%\\
\end{align}
At this point, we have calculated all possible local associations of each landmark and each measurement, as well as the local association weights. However, we still need to form the global hypotheses. To form new global hypotheses, under each previous global hypothesis, we need to go through all possible DA, and each valid possibility gives rise to a new global hypothesis. This makes the number of global hypotheses increase combinatorially for each particle, which incurs a high computational cost.
To avoid a rapidly increasing number of global hypotheses, we can approximate this update using Murty's algorithm [54], which keeps the $\gamma\ge 1$ best global hypotheses with the highest likelihoods for each previous global hypothesis.
For previous global hypothesis $j$, measurements should be assigned to existing Bernoullis or newly created Bernoullis, with the constraints that one measurement can only be associated to one Bernoulli and each Bernoulli can be associated to at most one measurement. Therefore, we can construct a cost matrix using the association weights $l_{k}^{p}$, $l_{k}^{j,i,0}$ and $l_{k}^{j,i,p}$ calculated in (<ref>), (<ref>), (<ref>) as <cit.>
\begin{align}
&\mathbf{L}^{j}_{k} = -\ln \left[
\begin{matrix}
\tilde{l}_{k}^{j,1,1} & \ldots & \tilde{l}_{k}^{j,|\mathbb{I}^{j}_{k-1}|,1} \\
\vdots & \ddots & \vdots \\
\tilde{l}_{k}^{j,1,|\mathcal{Z}_{k}|} & \ldots & \tilde{l}_{k}^{j,|\mathbb{I}^{j}_{k-1}|,|\mathcal{Z}_{k}|}
\end{matrix}
\left|
\,
\begin{matrix}
l^{1}_{k} & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & l^{|\mathcal{Z}_{k}|}_{k}
\end{matrix}
\right.
\right],
\end{align}
where $\tilde{l}^{j,i,p}_{k}=l^{j,i,p}_{k}/l^{j,i,0}_{k}$. The left $|\mathcal{Z}_{k}| \times |\mathbb{I}^{j}_{k-1}|$ sub-matrix in $\mathbf{L}^{j}_{k}$ corresponds to previously detected landmarks, the right $|\mathcal{Z}_{k}| \times |\mathcal{Z}_{k}|$ diagonal sub-matrix corresponds to newly detected landmarks, and the off-diagonal elements of the right sub-matrix of $\mathbf{L}^{j}_{k}$ are $+\infty$, forbidding those assignments. The $\gamma$-best DAs and their weights
can be selected by solving the assignment problem
\begin{align}\label{optimization_problem}
\text{minimize} \quad & \text{tr} \left(\mathbf{A}^{\mathsf{T}} \mathbf{L}^{j}_{k} \right) \\
\text{s.t.} \quad & [\mathbf{A} ]_{\alpha,\beta} \in
\{ 0, 1 \} \quad \forall \; \alpha,\beta \nonumber \\ %, \quad \forall A^{i,j} \nonumber \\
%i,j \in \lbrace 1,\ldots,n_k \rbrace \times \lbrace 1,\ldots,n_k+m_k \rbrace \\
& \sum\nolimits_{\beta=1}^{|\mathbb{I}^{j}_{k-1}| + |\mathcal{Z}_{k}|} [ \mathbf{A} ]_{\alpha,\beta} = 1, \quad \forall \; \alpha \nonumber \\ %\quad i \in \lbrace 1,\ldots,n_k \rbrace \nonumber \\
& \sum\nolimits_{\alpha=1}^{|\mathcal{Z}_{k}|} [\mathbf{A} ]_{\alpha,\beta} \in \{ 0, 1 \}, \quad \forall \; \beta \nonumber
\end{align}
using Murty's algorithm [54], where $\mathbf{A}\in \{0,1\}^{|\mathcal{Z}_{k}| \times (|\mathcal{Z}_{k}|+|\mathbb{I}^{j}_{k-1}|)}$ is the assignment matrix. The solutions are denoted by $\mathbf{A}^{j,h}$, where $h$ is an index
in the index set of new DA under global hypothesis $j$, denoted as $\mathbb{H}^{j}_{k}$ with $|\mathbb{H}^{j}_{k}|\leq \gamma$. Each new DA has a weight $\omega^{j,h}_{k}$, given by
\begin{align}
\omega^{j,h}_{k} \propto \omega^{j}_{k-1} e^{-\text{tr} \left((\mathbf{A}^{j,h})^{\mathsf{T}}\mathbf{L}^{j}_{k}\right)}
\end{align}
with $\sum_{j\in \mathbb{I}_{k-1}}\sum_{h \in \mathbb{H}^{j}_{k}} \omega^{j,h}_{k}=1$. The index set of landmarks under the $j,h$-th “new DA” is denoted as $\mathbb{I}_{k}^{j,h}$, with $|\mathbb{I}_{k}^{j,h}|\leq |\mathbb{I}_{k-1}^{j}|+|\mathcal{Z}_{k}|$, as some new birth components may not exist under some DA.
After the update, the map follows the PMBM format, with PPP intensity ${\lambda}_{k}(\boldsymbol{x})$ and MBM components $\{\{\omega^{j,h}_{k},\{r^{j,h,i}_{k},f^{j,h,i}_{k}(\boldsymbol{x})\}_{i\in \mathbb{I}_{k}^{j,h}}\}_{h\in \mathbb{H}_{k}^{j}}\}_{j\in \mathbb{I}_{k-1}}$. For the MBM, all DA can be represented using a single index. Hence, we reorder all DA using the index set $\mathbb{I}_{k}=\{1,\cdots,\sum_{j\in \mathbb{I}_{k-1}}|\mathbb{H}^{j}_{k}|\}$. Then, the MBM components can also be written as $\{\omega^{j}_{k},\{r^{j,i}_{k},f^{j,i}_{k}(\boldsymbol{x})\}_{i\in \mathbb{I}_{k}^{j}}\}_{j\in \mathbb{I}_{k}}$.
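The selection of the best DAs can be illustrated with a brute-force stand-in for Murty's algorithm, which is only viable for toy problem sizes; the negative-log weights below are made up:

```python
from itertools import product

def best_assignments(L_prev, L_new, gamma=2):
    """Keep the gamma cheapest valid DAs by exhaustive search (a brute-force
    stand-in for Murty's algorithm, which scales much better).
    L_prev[p][i]: cost (negative-log weight) of assigning measurement p to
    previous landmark i; L_new[p]: cost of measurement p opening a new Bernoulli."""
    n_meas = len(L_new)
    n_lm = len(L_prev[0]) if L_prev else 0
    ranked = []
    for assign in product(range(n_lm + 1), repeat=n_meas):  # 0 => new landmark
        claimed = [a for a in assign if a != 0]
        if len(claimed) != len(set(claimed)):
            continue  # each previous landmark takes at most one measurement
        cost = sum(L_new[p] if a == 0 else L_prev[p][a - 1]
                   for p, a in enumerate(assign))
        ranked.append((cost, assign))
    ranked.sort()
    return ranked[:gamma]

hyps = best_assignments(L_prev=[[0.2, 1.5], [1.0, 0.4]], L_new=[2.0, 2.0])
```

Ranking by total cost ranks hypotheses by weight, since each new-DA weight is proportional to $e^{-\text{tr}((\mathbf{A}^{j,h})^{\mathsf{T}}\mathbf{L}^{j}_{k})}$.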
§.§.§ UE Trajectory Weight Computation
The posterior of the UE trajectory can be recursively approximated using the SIR PF for which the weight update is given by [50]
\begin{equation}\label{eq:pf_particle_weight}
w_k^{(n)} = w_{k-1}^{(n)} \frac{f(\mathcal{Z}_k \vert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k}^{(n)}) f(\boldsymbol{s}_k^{(n)} \vert \boldsymbol{s}_{k-1}^{(n)}, \boldsymbol{u}_k)}{q(\boldsymbol{s}^{(n)}_k \vert \boldsymbol{s}_{0:k-1}^{(n)}, \mathcal{Z}_{1:k}, \boldsymbol{u}_{1:k})}.
\end{equation}
A typical choice for the importance density $q(\cdot)$ is the motion model in (<ref>) [5, 6, 10], simplifying (<ref>) to
\begin{equation}\label{eq:pf_particle_weight_simplified}
w_k^{(n)} = w_{k-1}^{(n)} f(\mathcal{Z}_k \vert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k}^{(n)}).
\end{equation}
After updating the weight for each particle, the weights are normalized, $w_k^{(n)} = w_k^{(n)} / \sum_{n=1}^N w_k^{(n)}$, and resampling is performed every time step.
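The weight normalization and resampling step can be sketched as follows; systematic resampling is one common choice (an implementation detail not fixed by the text), and the particle values here are arbitrary:

```python
import random

def normalize_and_resample(particles, weights):
    """Normalize particle weights and perform systematic resampling, as done
    after every weight update (toy implementation for scalar particles)."""
    total = sum(weights)
    w = [wi / total for wi in weights]
    n = len(particles)
    u0 = random.random() / n      # single uniform offset
    out, i, cum = [], 0, w[0]
    for k in range(n):
        u = u0 + k / n            # n evenly spaced pointers into the CDF
        while u > cum:
            i += 1
            cum += w[i]
        out.append(particles[i])
    return out, [1.0 / n] * n     # resampled particles get uniform weights
```

With one particle carrying all the weight, resampling duplicates it across the whole set, which is exactly the degeneracy-avoidance behavior the SIR step is meant to provide.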
The computation of $f(\mathcal{Z}_k \vert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k}^{(n)})$ is different for the PHD and PMBM filters. With a PPP prior and a point object measurement model, the term for the PHD filter is given by [5]
\begin{equation}\label{eq:measurement_weight}
f(\mathcal{Z}_k \vert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k}^{(n)}) = \prod_{\boldsymbol{z} \in \mathcal{Z}_k} \left( c(\boldsymbol{z}) + \int \boldsymbol{\Lambda}(\boldsymbol{z}, \boldsymbol{x} \vert \boldsymbol{s}_{k}^{(n)}) \textrm{d}\boldsymbol{x} \right),
\end{equation}
and it can be evaluated during the PHD update step. Correspondingly, for the PMBM filter, the term can be acquired by
\begin{align}
f(\mathcal{Z}_k \vert \mathcal{Z}_{1:k-1}, \boldsymbol{s}_{0:k}^{(n)}) & \approx e^{-\int {\lambda}^{(n)}_{k-1}(\boldsymbol{x}')\text{d}\boldsymbol{x}'-\int c(\boldsymbol{z}')\text{d}\boldsymbol{z}'}\label{normalized_item} \\&\times \sum_{j\in \mathbb{I}^{(n)}_{k-1}} \omega^{(n),j}_{k-1} \prod_{i \in \mathbb{I}_{k-1}^{(n),j}} l^{(n),j,i,0}_{k} \sum_{h \in \mathbb{H}^{(n),j}_{k}} e^{-\text{tr} \left((\mathbf{A}^{(n),j,h})^{\mathsf{T}}\mathbf{L}^{(n),j}_{k}\right)}. \nonumber
\end{align}
In (<ref>), the approximation is due to the fact that not all possible DA are considered, and $e^{-\int {\lambda}^{(n)}_{k-1}(\boldsymbol{x}')\text{d}\boldsymbol{x}'}$ and $e^{-\int c(\boldsymbol{z}')\text{d}\boldsymbol{z}'}$ are the normalization terms of the PPP densities of the previously undetected landmarks and the clutter, respectively.
§.§.§ Complexity Reduction
It is worth noting that the two RFS-SLAM filters described above have a high complexity, due to the exponential growth in the number of particles with the dimension of the ue state, in the number of GM components in the updated PHD with time, and in the number of global hypotheses in the updated PMBM with time (if all global hypotheses are kept). Therefore, the complexity should be reduced in practical implementations.
For example, instead of following the RBP approach, we can approximate the joint posterior using sigma-points [40, 41] or relying on linearization [38, 42, 43]. In PHD-based SLAM filters, GM components with weights lower than a set threshold can be pruned, and similar GM components can be merged [52]. To reduce the complexity caused by the PMBM density, global hypotheses with low weights can be pruned and/or only a certain number of global hypotheses can be kept; if such methods are applied, the weights should be re-normalized. Bernoullis with low existence probabilities can be pruned or recycled, and the mixture intensity of the PPP part can also be pruned and/or merged [47, 48, 55]. Additionally, to further reduce the complexity, a PMB can be approximated from the resulting PMBM by applying algorithms such as TOMB, MOMB [47], or Kullback-Leibler divergence minimization [56], which reduce the number of global hypotheses to one after each update.
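The GM pruning and merging mentioned above can be sketched for a 1-D mixture; the thresholds are illustrative, and merging uses moment matching so the total weight of the surviving components is preserved:

```python
def prune_and_merge(components, prune_thresh=1e-3, merge_dist=1.0):
    """Reduce a 1-D Gaussian mixture given as (weight, mean, variance) tuples:
    drop low-weight components, then greedily merge components whose means lie
    within merge_dist of the heaviest remaining component (moment matching)."""
    comps = sorted((c for c in components if c[0] > prune_thresh),
                   key=lambda c: -c[0])
    merged = []
    while comps:
        m0 = comps[0][1]
        group = [c for c in comps if abs(c[1] - m0) < merge_dist]
        comps = [c for c in comps if abs(c[1] - m0) >= merge_dist]
        W = sum(w for w, _, _ in group)
        M = sum(w * m for w, m, _ in group) / W
        V = sum(w * (P + (m - M) ** 2) for w, m, P in group) / W
        merged.append((W, M, V))
    return merged

post = prune_and_merge([(0.6, 0.0, 1.0), (0.3, 0.2, 1.0),
                        (1e-5, 5.0, 1.0), (0.4, 5.0, 1.0)])
```

Four components collapse to two: the negligible component is pruned, the two nearby components merge, and the far-away component survives unchanged.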
§.§ BP-based SLAM with Factor Graphs
The principle of BP-SLAM is to consider only vector random variables and not set random variables, and condition the joint density (<ref>) (after casting in vector form) on the global association. Then, by marginalization, the approximate posterior of the UE state and landmark states can be recovered with low computational complexity.
§.§.§ Basics of Factor Graph and BP
Factor graph representation of $f(\boldsymbol{x}|\boldsymbol{z})\propto \psi_1(\boldsymbol{x}_1)\psi_2(\boldsymbol{x}_2)\psi_3(\boldsymbol{x}_3)\psi_4(\boldsymbol{x}_1,\boldsymbol{x}_2)\psi_5(\boldsymbol{x}_1,\boldsymbol{x}_3)$. The variables and factors are respectively represented by the circles and squares.
BP, also known as the sum-product algorithm, is used in marginalizing the posterior density $f(\boldsymbol{x}|\boldsymbol{z})$ that can be factorized as the product of factors, expressed as
\begin{align}
f(\boldsymbol{x}|\boldsymbol{z}) \propto \prod_{i=1}^I
\psi_i(\boldsymbol{x}^{(i)}).
\end{align}
For the sake of argument, let us consider
the posterior density that can be factorized as $f(\boldsymbol{x}|\boldsymbol{z})\propto \psi_1(\boldsymbol{x}_1)\psi_2(\boldsymbol{x}_2)\psi_3(\boldsymbol{x}_3)\psi_4(\boldsymbol{x}_1,\boldsymbol{x}_2)\psi_5(\boldsymbol{x}_1,\boldsymbol{x}_3)$, where
$I=5$, $\boldsymbol{x}^{(1)} = \boldsymbol{x}_1$, $\boldsymbol{x}^{(2)} = \boldsymbol{x}_2$, $\boldsymbol{x}^{(3)} = \boldsymbol{x}_3$, $\boldsymbol{x}^{(4)} = [\boldsymbol{x}_1^\top,\boldsymbol{x}_2^\top]^\top$, $\boldsymbol{x}^{(5)} = [\boldsymbol{x}_1^\top,\boldsymbol{x}_3^\top]^\top$.
The factor graph of the factorization is represented by the variable $\boldsymbol{x}_j$ and factor $\psi_i$, depicted in Fig. <ref>.
By running BP on the factor graph, we can compute two types of messages (from factor to variable and from variable to factor) and the belief at each variable. The message from factor $i$ to variable $j$ is denoted by $\mathsf{m}_{\psi_i \rightarrow j}(\boldsymbol{x}_j)$, the message from variable $j$ to factor $i$ is denoted by $\mathsf{m}_{j \rightarrow \psi_i}(\boldsymbol{x}_j)$, and the belief at variable $j$ is denoted by $\mathsf{b}(\boldsymbol{x}_j)$.
Examples for messages near variable $1$ and belief at variable $1$ (see, Fig. <ref>) are provided as follows.
Message 1 is given by
$\mathsf{m}_{\psi_1 \rightarrow 1}(\boldsymbol{x}_1) = \psi_1(\boldsymbol{x}_1)$,
the two messages 2 are respectively given by $\mathsf{m}_{\psi_4 \rightarrow 1}(\boldsymbol{x}_1) = \int \mathsf{m}_{2 \rightarrow \psi_4 }(\boldsymbol{x}_2) \psi_4(\boldsymbol{x}_1,\boldsymbol{x}_2) \mathrm{d} \boldsymbol{x}_2$ and $\mathsf{m}_{\psi_5 \rightarrow 1}(\boldsymbol{x}_1) = \int \mathsf{m}_{\psi_3 \rightarrow 3}(\boldsymbol{x}_3) \psi_5(\boldsymbol{x}_1,\boldsymbol{x}_3) \mathrm{d} \boldsymbol{x}_3$, and message 3 is given by $\mathsf{m}_{1 \rightarrow \psi_5}(\boldsymbol{x}_1)= \mathsf{m}_{\psi_1 \rightarrow 1}(\boldsymbol{x}_1)\mathsf{m}_{\psi_4 \rightarrow 1}(\boldsymbol{x}_1)$.
The belief $\mathsf{b}(\boldsymbol{x}_1)$ indicates update of BP at variable $1$, given by $\mathsf{b}(\boldsymbol{x}_1) \propto \mathsf{m}_{\psi_1 \rightarrow 1}(\boldsymbol{x}_1)\mathsf{m}_{\psi_4 \rightarrow 1}(\boldsymbol{x}_1)\mathsf{m}_{\psi_5 \rightarrow 1}(\boldsymbol{x}_1)$ [44].
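These messages and the belief can be checked numerically on the toy factor graph above with binary variables and made-up factor tables; since the graph has no cycles, BP is exact and the belief coincides with the brute-force marginal:

```python
from itertools import product

# toy binary factors for f(x|z) ∝ psi1 psi2 psi3 psi4 psi5 (arbitrary numbers)
psi1 = [0.6, 0.4]; psi2 = [0.7, 0.3]; psi3 = [0.2, 0.8]
psi4 = [[1.0, 0.5], [0.5, 1.0]]  # psi4(x1, x2)
psi5 = [[0.9, 0.1], [0.2, 0.8]]  # psi5(x1, x3)

# factor-to-variable messages into x1, then the normalized belief b(x1)
m4 = [sum(psi2[x2] * psi4[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
m5 = [sum(psi3[x3] * psi5[x1][x3] for x3 in (0, 1)) for x1 in (0, 1)]
b1 = [psi1[x1] * m4[x1] * m5[x1] for x1 in (0, 1)]
s = sum(b1)
b1 = [v / s for v in b1]

# brute-force marginal of x1 for comparison (exact on this tree)
Z = sum(psi1[a] * psi2[b] * psi3[c] * psi4[a][b] * psi5[a][c]
        for a, b, c in product((0, 1), repeat=3))
marg0 = sum(psi1[0] * psi2[b] * psi3[c] * psi4[0][b] * psi5[0][c]
            for b, c in product((0, 1), repeat=2)) / Z
```

The sums here play the role of the integrals in the continuous-variable messages above.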
§.§.§ Factorization and Factor Graph of Joint SLAM Density
Using the vector representation instead of the set representation, a landmark variable $\boldsymbol{y}=[\boldsymbol{x}^\top, \epsilon]^\top$ is introduced, where $\boldsymbol{x}$ denotes the landmark state, and $\epsilon\in\{0,1\}$ denotes the existence variable.
Then, the PDF of a landmark state is denoted by $f(\boldsymbol{x}, \epsilon)$.
The previously and newly detected landmarks are marked by $\tilde{\cdot}$ and $\breve{\cdot}$, respectively: for instance, $\tilde{\boldsymbol{x}}$ and $\breve{\boldsymbol{x}}$.
We denote by $I_{k-1}$ and $J_k$, respectively, the number of previously detected landmarks and the number of measurements.
We introduce two association variables $\boldsymbol{c}_k = [c_k^1,...,c_k^{I_{k-1}}]$, $\boldsymbol{d}_k = [d_k^1,...,d_k^{J_k}]$.
Here, $c_k^i \in \{0,1,\ldots,J_k\}$ denotes the measurement associated with landmark $i$ ($c_k^i=0$ if landmark $i$ is misdetected), and $d_k^j \in \{0,1,\ldots,I_{k-1}\}$ denotes the previously detected landmark associated with measurement $j$ ($d_k^j=0$ if measurement $j$ is not associated to any previously detected landmark).
We also introduce the association factor $\psi(c_k^i,d_k^j)$, modeled to be 0 if the association is not valid (i.e., $c_k^i = j$, $d_k^j \neq i$ or $c_k^i \neq j$, $d_k^j = i$), and 1 otherwise [47].
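A minimal sketch of this consistency factor (the function name is ours):

```python
def psi(c_i, d_j, i, j):
    """Association consistency factor psi(c_k^i, d_k^j): zero when landmark i
    and measurement j disagree about being paired with each other, one otherwise."""
    return 0 if (c_i == j) != (d_j == i) else 1
```

For example, `psi(c_i=3, d_j=5, i=1, j=3)` is 0, because landmark 1 claims measurement 3 while measurement 3 points to landmark 5.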
With the above association variables and the vector representation, we introduce two variant likelihood functions from [48]: the likelihood function for the newly detected landmarks, denoted by $\tilde{l}(\boldsymbol{z}_k^j | \boldsymbol{s}_k, \breve{\boldsymbol{x}}_k,\breve{\epsilon}_k, d_k^j)$, given by
\begin{align}
\tilde{l}(\boldsymbol{z}_k^j | \boldsymbol{s}_k, \breve{\boldsymbol{x}}_k,\breve{\epsilon}_k, d_k^j) =
\begin{cases}
p_\mathrm{D}(\breve{\boldsymbol{x}}_k,\boldsymbol{s}_k) f(\boldsymbol{z}_k^j | \breve{\boldsymbol{x}}_k,\boldsymbol{s}_k) & \breve{\epsilon}_k = 1,~d_k^j=0, \\
c(\boldsymbol{z}_k^j) & \breve{\epsilon}_k = 0,~d_k^j=0,\\
1 & \breve{\epsilon}_k = 0 ,~d_k^j \neq 0,\\
0 & \breve{\epsilon}_k=1 ,~d_k^j \neq 0,
\end{cases}
\end{align}
and the likelihood function for the detected landmarks, denoted by $t(\boldsymbol{z}_k^{c_k^i} | \boldsymbol{s}_k, \tilde{\boldsymbol{x}}_k^i,\tilde{\epsilon}_k^i, c_k^i)$, given by
\begin{align}
t(\boldsymbol{z}_k^{c_k^i} | \boldsymbol{s}_k, \tilde{\boldsymbol{x}}_k^i,\tilde{\epsilon}_k^i, c_k^i) =
\begin{cases}
p_\mathrm{D}(\tilde{\boldsymbol{x}}_k^i,\boldsymbol{s}_k) f(\boldsymbol{z}_k^j | \tilde{\boldsymbol{x}}_k^i,\boldsymbol{s}_k) & \tilde{\epsilon}_k^i=1,~c_k^i=j,\\
1-p_\mathrm{D}(\tilde{\boldsymbol{x}}_k^i,\boldsymbol{s}_k) & \tilde{\epsilon}_k^i=1,~c_k^i = 0,\\
1 & \tilde{\epsilon}_k^i=0,~c_k^i =0,\\
0 & \text{otherwise}.
\end{cases}
\end{align}
Now, we introduce the joint posterior density $f(\boldsymbol{s}_{k},\boldsymbol{y}_{k},\boldsymbol{c}_k,\boldsymbol{d}_k|\mathcal{Z}_{1:k})$, which can be factorized as
\begin{align}
f(\boldsymbol{s}_{k},\boldsymbol{y}_{k}, \boldsymbol{c}_k,\boldsymbol{d}_k|\mathcal{Z}_{1:k})
& \propto f_{k|k-1}(\boldsymbol{s}_{k}) \prod_{i=1}^{I_{k-1}}
f_{k|k-1}(\tilde{\boldsymbol{x}}_{k}^i,\tilde{\epsilon}_{k}^i)\,
t(\boldsymbol{z}_{k}^{c_{k}^i} | \boldsymbol{s}_{k},
\tilde{\boldsymbol{x}}_{k}^i,\tilde{\epsilon}_{k}^i, c_{k}^i)
\notag \\
&\times
\prod_{j=1}^{J_{k}} \tilde{l}(\boldsymbol{z}_{k}^j | \boldsymbol{s}_{k}, \breve{\boldsymbol{x}}_{k},\breve{\epsilon}_{k}, d_{k}^j) \prod_{i=1}^{I_{k-1}} \psi(c_k^i, d_k^j),
\label{eq:Ch3_BPSLAM_FactDen}
\end{align}
where $f_{k|k-1}(\boldsymbol{s}_{k})$ denotes the predicted density of ue, and $f_{k|k-1}(\tilde{\boldsymbol{x}}_{k}^i,\tilde{\epsilon}_{k}^i)$ denotes the predicted density of detected landmark $i$.
The factorized density can be depicted by a factor graph [44], which represents the relation among the random variables and real-valued functions for SLAM.
The SLAM process consists of two parts: i) prediction and ii) update.
For the sake of argument, let us consider a simple case as follows:
at time $k$, two previously detected landmarks $\tilde{\boldsymbol{y}}_{k-1}^1$ and $\tilde{\boldsymbol{y}}_{k-1}^2$ are given with the PDF $f(\tilde{\boldsymbol{x}}_{k-1}^1, \tilde{\epsilon}_{k-1}^1|\boldsymbol{z}_{1:k-1})$ and $f(\tilde{\boldsymbol{x}}_{k-1}^2, \tilde{\epsilon}_{k-1}^2|\boldsymbol{z}_{1:k-1})$; the ue observes one measurement $\boldsymbol{z}_k^1$ (so that at least one previously detected landmark is misdetected); and the ue prior $f(\boldsymbol{s}_{k-1}|\boldsymbol{z}_{1:k-1})$ is given.
Then, we depict a factor graph of the simple case in Fig. <ref>, and we define the MPD as the beliefs. For example: $f(\tilde{\boldsymbol{x}}_{k-1}^1, \tilde{\epsilon}_{k-1}^1|\boldsymbol{z}_{1:k-1}) \triangleq \mathsf{b}(\tilde{\boldsymbol{x}}_{k-1}^1, \tilde{\epsilon}_{k-1}^1)$ and $f(\boldsymbol{s}_{k-1}|\boldsymbol{z}_{1:k-1}) \triangleq \mathsf{b}(\boldsymbol{s}_{k-1})$.
We will detail the prediction and update steps with BP in the following section.
Factor graph representation of the joint posterior density for SLAM. Each red factor indicates the likelihood function, depending on all the measurements.
§.§.§ BP for SLAM
Here, we address the messages and beliefs, computed by BP running on the factor graph.
The messages and beliefs are interpreted with the message scheduling 1–7, as shown in Fig. <ref>.
Prediction (Message 1)
With the transition density, the messages for sensor and previously detected landmarks are computed (see green area in Fig. <ref>).
For example, the message for previously detected landmark 1 is
\begin{align}
% \mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1) = &\sum_{\tilde{\epsilon}_{k-1}^1 \in \{0,1\}}\int f(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1|\tilde{\boldsymbol{x}}_{k-1}^1,\tilde\epsilon_{k-1}^1)\mathsf{b}(\tilde{\boldsymbol{x}}_{k-1}^1,\tilde{\epsilon}_{k-1}^1) \text{d}{\tilde{\boldsymbol{x}}}_{k-1}^1,
\mathsf{m}(\tilde{\boldsymbol{y}}_k^1) = &\int f(\tilde{\boldsymbol{y}}_k^1|\tilde{\boldsymbol{y}}_{k-1}^1)\mathsf{b}(\tilde{\boldsymbol{y}}_{k-1}^1) \text{d}{\tilde{\boldsymbol{y}}}_{k-1}^1.
\end{align}
The messages for newly detected landmarks are generated and set to the intensity function $\lambda(\breve{\boldsymbol{y}})$ of (<ref>) (shown in the pink area in Fig. <ref>). The intensity function can be represented by a GM, and this modification outside the BP framework allows for handling the newly detected landmarks, resembling the treatment of undetected landmarks in the PMBM filter (see Sect. <ref>).
The ue message is $\mathsf{m}(\boldsymbol{s}_k) = \int f(\boldsymbol{s}_k|\boldsymbol{s}_{k-1})\mathsf{b}(\boldsymbol{s}_{k-1}) \text{d}\boldsymbol{s}_{k-1}$, where $f(\boldsymbol{s}_k|\boldsymbol{s}_{k-1})$ is the transition density of the sensor.
DA (Message 2–4)
The messages 1 are sent to the linked factor, and then the messages 2 for landmark-oriented associations and measurement-oriented associations are obtained.
For example, the message $\mathsf{m}(c_k^1 = 1)$ indicates detection of previously detected landmark 1 associated with the measurement $\boldsymbol{z}_k^1$, given by
\begin{align}
\mathsf{m}(c_k^1 = 1) = \iint \mathsf{m}(\boldsymbol{s}_k) \mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1) p_\mathrm{D}
f(\boldsymbol{z}_k^{1} | \boldsymbol{s}_k, \tilde{\boldsymbol{x}}_k^1)
\text{d}\boldsymbol{s}_k \text{d} \tilde{\boldsymbol{x}}_k^1,
\end{align}
and the message $\mathsf{m}(c_k^1 = 0)$ indicates a missed detection of previously detected landmark 1, given by
\begin{align}
\mathsf{m}(c_k^1 = 0) = \iint \mathsf{m}(\boldsymbol{s}_k)
\{ \mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1)
(1-p_\mathrm{D}) + \mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0) \}
\text{d}\boldsymbol{s}_k \text{d} \tilde{\boldsymbol{x}}_k^1.
\end{align}
The messages $\mathsf{m}(d_k^1 = 0)$ and $\mathsf{m}(d_k^1 \neq 0)$ are defined similarly.
To obtain the messages 3, loopy BP is iteratively performed, detailed in [47].
The messages 4 are obtained by the product of messages 3 from the factor $\psi(\cdot,\cdot)$ to the linked variables.
For example, $\overline{\mathsf{m}}(d_k^1) = \psi^1(d_k^1)\psi^2(d_k^1)$.
Measurement Update (Message 5–7)
The messages 4 are sent to the linked factors, and the messages 5 and 6 are computed with the messages 1.
For example, the message $\overline{\mathsf{m}}_{t^1}(\boldsymbol{s}_k)$ is computed as
\begin{align}
& \overline{\mathsf{m}}_{t^1}(\boldsymbol{s}_k)\nonumber \\
& =
\int \overline{\mathsf{m}}(c_k^1=0) (1-p_\mathrm{D})\mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1) +\overline{\mathsf{m}}(c_k^1 = 0) \mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0)\notag \\
&+ \overline{\mathsf{m}}(c_k^1=1) p_\mathrm{D}f(\boldsymbol{z}_k^1|\tilde{\boldsymbol{x}}_k^1,\boldsymbol{s}_k)\mathsf{m}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1) \text{d} \tilde{\boldsymbol{x}}_k^1.
\end{align}
For example, the message $\overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1)$ and $\overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0)$ are computed as
\begin{align}
\overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1)
& =
\int \overline{\mathsf{m}}(c_k^1=1) p_\textrm{D} f(\boldsymbol{z}_k^1 | \boldsymbol{s}_k, \tilde{\boldsymbol{x}}_k^1)
\mathsf{m}(\boldsymbol{s}_k) \text{d}\boldsymbol{s}_k+ \overline{\mathsf{m}}(c_k^1=0) (1-p_\textrm{D}),\\
\overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0)
& = \overline{\mathsf{m}}(c_k^1 = 0).
\end{align}
The messages 5 and 6 are sent to the linked variables.
Finally, the beliefs (i.e., messages 7) are computed as the product of the incoming messages from the linked factors.
For example, the beliefs of the previously detected landmark 1, $\mathsf{b}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1)$ and $\mathsf{b}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0)$, are computed as
\begin{align}
\mathsf{b}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1)
&= \overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1) {\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=1), \\
\mathsf{b}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0)
&= \overline{\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0) {\mathsf{m}}(\tilde{\boldsymbol{x}}_k^1,\tilde{\epsilon}_k^1=0).
\end{align}
The belief of the sensor state, i.e., $\mathsf{b}(\boldsymbol{s}_k)$, is computed by the product of incoming messages from linked factors $t(\cdot)$ and $f(\boldsymbol{s}_k|\boldsymbol{s}_{k-1})$, as $ \mathsf{b}(\boldsymbol{s}_k) \propto \mathsf{m}(\boldsymbol{s}_k) \overline{\mathsf{m}}_{t^1}(\boldsymbol{s}_k)\overline{\mathsf{m}}_{t^2}(\boldsymbol{s}_k).$
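As a minimal numerical illustration of this product-of-messages rule, consider a binary existence variable whose belief is the normalized product of two incoming discrete messages. The numbers below are hypothetical and serve only to show the mechanics, not any particular filter step.

```python
import numpy as np

# Toy illustration (hypothetical values): the belief of a binary existence
# variable epsilon is proportional to the product of its incoming messages,
# here one prediction message and one measurement message.
m_pred = np.array([0.3, 0.7])   # message values for epsilon = 0, 1
m_meas = np.array([0.1, 0.9])
belief = m_pred * m_meas        # elementwise product of incoming messages
belief /= belief.sum()          # normalize to obtain a probability
```

For continuous variables such as $\boldsymbol{s}_k$, the same product is carried out over densities (or particle weights) rather than discrete tables.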
§ RESULTS
In this section, the performance of the two RFS-SLAM algorithms is demonstrated in a small-cell vehicular scenario with one BS. First, the simulation scenario and performance metrics are introduced; thereafter, the results are presented.
§.§ Simulation Setup
[Figure: Scenario with the environment of a BS, four VAs, and four SPs. The UE moves counterclockwise along the trail centered at the BS.]
We consider the scenario illustrated in Fig. <ref>, in which a UE performs a $360^\circ$ cycle around a BS with known location. The UE state is composed of the 2D location $[x,\;y]^\top$, heading $\alpha$, and clock bias $B$; the motion is modeled using a coordinated turn model [57] and the bias using a Gaussian random walk model [58]. Thus, the evolution of the UE state can be modeled as[It is to be noted that even though the complete UE state consists of the 3D position, 3D orientation, and clock parameters, a lower-dimensional approximation may be sufficient. We have assumed that the terrain within a small cell is flat, the clock drift is small, and the orientation and position of the antenna are known and fixed with respect to the UE coordinate frame.]
\begin{equation}\label{dynamic_model}
\underbrace{\begin{bmatrix} x_{k} \\ y_{k} \\ \alpha_{k} \\ B_{k} \end{bmatrix}}_{\boldsymbol{s}_{k}} = \underbrace{\begin{bmatrix} x_{k-1} + \tfrac{2v}{\phi} \sin \! \left(\tfrac{\phi \Delta}{2}\right) \cos \! \left(\alpha_{k-1} + \tfrac{\phi \Delta}{2}\right) \\ y_{k-1} + \tfrac{2v}{\phi}\sin \! \left(\tfrac{\phi \Delta}{2}\right) \sin \! \left(\alpha_{k-1} + \tfrac{\phi \Delta}{2}\right) \\ \alpha_{k-1} + \phi \Delta \\ B_{k-1} \end{bmatrix}}_{\boldsymbol{v}\left(\boldsymbol{s}_{k-1},\boldsymbol{u}_{k} \right)},
\end{equation}
where $\Delta = 0.5 \text{ s}$ denotes the sampling time interval and $\boldsymbol{u}_k = [v \; \phi]^\top$ is the control input with speed $v = 22.22 \text{ m/s}$ and turn rate $\phi = \pi/10 \text{ rad/s}$. The process noise covariance is $\boldsymbol{Q} = \textrm{diag}(0.2^2 \text{ m}^2, \; 0.2^2 \text{ m}^2, \; 0.0035^2 \text{ rad}^2, \; (0.2/c)^2 \text{ ns}^2)$, where $c$ denotes the speed of light. The RFS-SLAM density is approximated using $2000$ particles, initialized as $\boldsymbol{s}_0^{(n)} \sim \mathcal{N}\left(\boldsymbol{s}_0, \boldsymbol{P}_0 \right)$, where
$\boldsymbol{s}_0 = [v/\phi \text{ m}, \, 0 \text{ m}, \, \pi/2 \text{ rad}, \, 300/c \text{ ns}]^\top$ and
$\boldsymbol{P}_0 = \textrm{diag}(0.3^2 \text{ m}^2, \, 0.3^2 \text{ m}^2, \, 0.0052^2 \text{ rad}^2, \, (0.3/c)^2 \text{ ns}^2)$. In practice, good initial estimates can be acquired using external sensors or snapshot-based localization algorithms such as [59, 60].
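The deterministic part $\boldsymbol{v}(\cdot,\cdot)$ of the motion model can be sketched as follows (process noise omitted for clarity). With the scenario parameters above, the turn per step is $\phi\Delta = \pi/20$ rad, so 40 steps close a full $360^\circ$ cycle of radius $v/\phi \approx 70.7$ m around the BS.

```python
import numpy as np

def ct_transition(s, u, delta=0.5):
    """Mean of the coordinated-turn motion model; clock bias is a random walk."""
    x, y, alpha, B = s
    v, phi = u
    x += (2 * v / phi) * np.sin(phi * delta / 2) * np.cos(alpha + phi * delta / 2)
    y += (2 * v / phi) * np.sin(phi * delta / 2) * np.sin(alpha + phi * delta / 2)
    return np.array([x, y, alpha + phi * delta, B])

# Scenario parameters: 40 steps of 0.5 s traverse one full counterclockwise
# circle, returning the UE to its initial position.
u = (22.22, np.pi / 10)
s = np.array([22.22 / (np.pi / 10), 0.0, np.pi / 2, 1e-9])
for _ in range(40):
    s = ct_transition(s, u)
```

This update is exact for constant-speed circular motion; in the filter, zero-mean Gaussian noise with covariance $\boldsymbol{Q}$ is added on top of it.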
The landmark states are unknown; there are four VAs and four SPs in the environment (see Fig. <ref>). The VAs represent large reflecting surfaces such as walls, whereas the SPs describe small scattering objects located near the walls, such as traffic signs or street lamps. The BS and VAs are always visible, whereas the SPs are only visible when inside the FoV of the UE; the FoV radius for the SPs is set to $50 \text{ m}$. The detection probability for the landmarks is set to $0.9$ if visible and $0$ otherwise. The received measurements are corrupted by additive zero-mean Gaussian noise with covariance $\boldsymbol{R} = \textrm{blkdiag}((0.1/c)^2 \text{ ns}^2, \, 10^{-4} \cdot \boldsymbol{I}_4 \text{ rad}^2)$. The initial birth intensity is ${\lambda}_{0}(\boldsymbol{x})= 1.5\times10^{-5}$. The clutter intensity is $c(\boldsymbol{z}) = 1/(4 \times 200 \pi^4)$, where the numerator represents the average number of clutter measurements in the environment and the $200$ in the denominator is the maximum sensing range.[The experimental setting and parameters are the same as, or very close to, the ones used in previous works [40, 41, 5, 38, 43, 42]. However, we note that our preliminary simulations indicate that the presented filters are able to cope with much harder scenarios, for example when the clutter intensity is $20$ times higher. For comparability, we have opted to utilize the simulation scenario used in previous works; in future research, more realistic and challenging scenarios will be considered.] The computational complexity of the filters is reduced by utilizing a Gaussian component reduction algorithm (see e.g., [52]), for which the pruning and merging thresholds are set to $10^{-4}$ and $50$, respectively.
The performance of the mapping algorithm is evaluated with the GOSPA metric, which captures the localization accuracy of the estimator and penalizes missed and false landmarks [61]. The UE state estimation accuracy is evaluated using the RMSE and benchmarked against the PCRB. Overall, $10$ MC simulations are performed, and the results are obtained by averaging over the different runs.
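For intuition, the GOSPA metric with $p=2$, $\alpha=2$, and cutoff $c=20$ m (matching the scenario above) can be sketched by brute force over assignments for small point sets; an efficient implementation would use an optimal assignment solver, as in [61].

```python
import itertools
import math

def gospa(truth, est, c=20.0, p=2):
    """Brute-force GOSPA with alpha = 2 for small 2D point sets (a sketch,
    not the efficient assignment-based implementation of [61])."""
    if len(truth) > len(est):
        truth, est = est, truth            # the metric is symmetric
    best = math.inf
    for perm in itertools.permutations(range(len(est)), len(truth)):
        # cutoff-capped localization cost for each assigned pair
        cost = sum(min(math.dist(truth[i], est[j]), c) ** p
                   for i, j in enumerate(perm))
        # penalty c^p / alpha for each missed or false landmark
        cost += (c ** p / 2) * (len(est) - len(truth))
        best = min(best, cost)
    return best ** (1 / p)
```

A single missed landmark then contributes $\sqrt{20^2/2} \approx 14.14$ m to the metric, which is exactly the drop size observed in the SP curves when an SP enters the FoV for the first time.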
§.§ Simulation Results
Mapping performance of the RFS-SLAM filters is illustrated in Fig. <ref> for the VAs and Fig. <ref> for the SPs. As shown, the mapping accuracy of both filters improves gradually over time as more measurements are received. The large drops in Fig. <ref> correspond to the time instances at which the SPs are inside the FoV for the first time. At these instances, the penalization term caused by missed detection reduces by a factor $\sqrt{20^2/2} \approx 14.14$. The RBPF-PMBM SLAM filter provides slightly better mapping performance than the RBPF-PHD SLAM filter, as the red solid lines lie slightly below the blue solid lines in both Fig. <ref> and Fig. <ref>. The main reason is that the PHD filter cannot enumerate all possible DAs explicitly, which introduces additional errors; e.g., some landmarks are not updated by the correct measurements due to wrong DA or clutter. The PMBM filter, in contrast, considers all DAs and is therefore more robust to missed detections and clutter.
[Figure: Comparison of mapping performance for the VAs between the RBPF-PHD ($N=2000$) and RBPF-PMBM ($N=2000$) algorithms; GOSPA distance [m] versus time step. The dashed lines show mapping with known poses.]
[Figure: Comparison of mapping performance for the SPs between the RBPF-PHD ($N=2000$) and RBPF-PMBM ($N=2000$) algorithms; GOSPA distance [m] versus time step. The dashed lines show mapping with known poses.]
Estimation accuracy of the UE pose and the PCRB [42] are summarized in Fig. <ref>.
The RBPF-PMBM SLAM filter slightly outperforms the RBPF-PHD SLAM filter, as its RMSEs are closer to the bounds. This is because the RBPF-PMBM SLAM filter explicitly considers all possible DAs, resulting in a larger ESS than that of the RBPF-PHD SLAM filter: 6.79% versus 4.65%. However, enumerating the DAs makes the RBPF-PMBM SLAM filter considerably more computationally complex than the RBPF-PHD SLAM filter, as described in Sect. <ref> and Sect. <ref>. The accuracy of both methods could be improved by using more particles, at the expense of added computational complexity.
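The quoted percentages are the effective sample size of the normalized particle weights relative to $N$; a minimal sketch of the standard estimator is given below (the percentage figures themselves come from the simulations).

```python
import numpy as np

def ess(weights):
    """Effective sample size of a set of (possibly unnormalized) weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                 # normalize
    return 1.0 / np.sum(w ** 2)    # uniform weights give N, degenerate gives 1
```

Resampling is typically triggered when `ess(w) / N` falls below a chosen threshold.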
[Figure: Comparison of state estimation accuracy. RMSE of position [m] / heading [deg] / clock bias [ns]: RBPF-PHD ($N=2000$) 0.2444 / 0.2255 / 0.3864; RBPF-PMBM ($N=2000$) 0.2305 / 0.2047 / 0.3695; PCRB 0.1720 / 0.1980 / 0.3011.]
It is worth mentioning that only mapping is required in some applications, for example when the sensor state is known or can be acquired externally and only the unknown environment needs to be estimated. This mapping problem is equivalent to the SLAM problem with known sensor states. Pure mapping performance of the PHD and PMBM filters is shown as dashed lines in Fig. <ref> and Fig. <ref>, exhibiting a tendency similar to the mapping performance of the SLAM filters.
§.§ Monostatic Sensing
ISAC allows monostatic radar-like mapping of objects in the environment [62, 63], where a UE with known dynamics exploits the single-bounce backscattered uplink communication signals to map the surrounding environment. This is demonstrated in the following using experimental RF measurements. The reader is referred to [63] for further details on the evaluation scenario, experimental setup, and channel estimator, and to [62] for details on the PHD filter based mapping algorithm.
The evaluation scenario is illustrated in Fig. <ref>: a $2$ m wide and $60$ m long office corridor at the Hervanta Campus of Tampere University, Finland. The mapping-related measurements are acquired over a $26$ m trajectory at half-meter intervals, as superimposed in Fig. <ref>. The path traverses from right to left, and the angular scanning range in each position is from $-90^\circ$ to $90^\circ$. The RF measurements are acquired using the state-of-the-art mmWave equipment shown in Fig. <ref>. In the experiment, the phased-array beam-steering operation is emulated using two directive horn antennas mounted on a mechanical steering system, enabling accurate beam steering in the whole azimuth plane. The antennas are assumed co-located, so the system can be characterized as operating in a monostatic radar-like fashion.
The OFDM uplink waveform follows the available 5G NR numerology at the $28$ GHz carrier frequency, and the entire $400$ MHz channel bandwidth is utilized to maximize the sensing resolution. Consecutive OFDM symbols are coherently combined to improve the SNR of the range-angle charts, and the sparse map recovery problem is solved using the ISTA. The obtained sparse range-angle charts are then subject to a target detection phase that provides the measurement input $\mathcal{Z}_k$ for the subsequent mapping algorithm. The environment is characterized by reflection and scattering landmarks, for which the transition densities are known under monostatic operation and deterministic UE movement. The measurements and landmarks are modeled using RFSs, and in the following, results for a dynamic PHD filter based mapping algorithm are presented.
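The ISTA iteration itself is simple: a gradient step on the data fit followed by soft thresholding. The following is a generic real-valued sketch of sparse recovery via ISTA under a toy dictionary, not the exact range-angle formulation of [63].

```python
import numpy as np

def ista(A, z, lam=0.01, n_iter=500):
    """Iterative shrinkage-thresholding for min_x 0.5||z - Ax||^2 + lam||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = x + A.T @ (z - A @ x) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Toy example: recover a 1-sparse vector from noiseless measurements
# under a random dictionary with unit-norm columns (hypothetical setup).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(10)
x_true[3] = 2.0
x_hat = ista(A, A @ x_true)
```

In the measurement pipeline, the dictionary columns would correspond to discretized range-angle bins and the data would be complex-valued, but the thresholded iteration is the same.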
[Figure: (a) Experimental environment and (c) measurement setup. (b) Measurement acquisition locations and ISTA range-angle estimates converted to 2D Euclidean coordinates. (d) Reflection and scattering point estimates obtained using the PHD filter. In (b) and (d), the thin black lines illustrate walls, whereas the thick black lines indicate wooden/metallic drawers (see (a)).]
An example mapping result is illustrated in Fig. <ref>. Converting the ISTA range-angle estimates to 2D Euclidean coordinates, which we refer to as the measurement map from now on, gives a rough outline of the corridor. However, the measurements do not align particularly well with the walls, and multi-bounce signals induce artefacts behind the walls, creating false landmarks in the map. The PHD filter improves the estimated map in three ways. First, the filtering density is conditioned on the whole sequence of measurements, as opposed to the measurement map, which relies solely on the individual measurements. Second, the mobility model acts as a spatial filter, and some of the multi-bounce propagation paths are removed since they are not in line with the mobility model. Lastly, the PHD filter successfully removes spurious measurements, which are modeled as clutter in the filter. It is expected that the PMBM filter would yield similar or slightly better accuracy than the PHD filter, but since we have not tested the PMBM filter in the context of monostatic mapping, this is left for future research.
An alternative map representation is a grid-based map [64, 65], which divides the area into cells; the range-angle measurements are projected onto the corresponding cells, which then represent the occupancy of those cells. Grid-based mapping approaches provide a straightforward and robust solution to mapping. However, the accuracy is limited by the cell size, which trades off against memory and processing requirements, and grid maps are only suitable for mapping static environments. Feature-based approaches combine high accuracy with low complexity and generalize easily to moving objects. As demonstrated above, a feature-based PHD filter is a viable option for mapping. Furthermore, the map could be enhanced even further using various post-processing techniques such as smoothing [63, 66], but we omit this for brevity.
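The grid-based alternative can be sketched in a few lines; the cell size and extent below are hypothetical, and raw point counts per cell serve as a crude occupancy indicator.

```python
import numpy as np

def grid_map(points, cell=0.5, extent=60.0):
    """Project 2D points into a count grid; the cell size trades mapping
    accuracy for memory and processing cost."""
    n = int(extent / cell)
    grid = np.zeros((n, n), dtype=int)
    for x, y in points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < n and 0 <= j < n:      # discard points outside the mapped area
            grid[i, j] += 1
    return grid
```

A practical occupancy grid would additionally apply a log-odds update per cell rather than raw counts.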
§ OUTLOOK
The objective of this chapter was to introduce the problem statements and state-of-the-art methods for localization, mapping, and SLAM for ISAC in 5G and beyond-5G communication systems. In particular, methods based on RFSs and Bayesian graphical models were highlighted. Performance evaluations of bistatic and monostatic sensing, with both simulation and experimental results, show the power of these methods, based only on standard communication signals.
With new technologies and new use cases in 6G, significant research challenges remain. One example is dealing with moving targets rather than static landmarks, which requires reasonable mobility models. The use of Doppler information, generally ignored in ISAC, could be highly beneficial. The extension to multistatic, distributed sensing scenarios is also challenging, considering both practical aspects (synchronization and calibration) and very difficult distributed DA problems. Distributed, cooperative processing and sensor fusion will provide a higher degree of situational awareness, but come with unique DA and timing issues. Synchronization is in general a challenging topic, especially when using cheap devices with low-quality oscillators.
Several radio enablers towards 6G will lead to specific problems, such as the extreme bandwidths at upper mmWave and THz spectrum, which will provide a very detailed view of the environment, where our simple point-object models no longer hold, and where AI-based post-processing can play an important role, e.g., for recognizing gestures, classifying objects, and performing non-radar-type sensing. While scattering will be less pronounced at higher frequencies, specular reflections can appear as single bounces (considered in this chapter), but also as double and triple bounces, which require careful handling in SLAM filters. Higher frequencies will also require more directional beamforming, which raises radio resource management questions: should one take a communication-centric view, where sensing is an add-on, or a service-centric view, where communication and sensing are fundamental services, each allocated appropriate slices (in time, frequency, and space)?
Another enabler is the use of reconfigurable intelligent surfaces (RISs), which can serve as location references and smart reflectors, and provide additional delay and angle measurements.
In summary, there is no shortage of research questions at the intersection of communication, signal processing, and artificial intelligence for the years to come.
This work was supported, in part, by the European Commission through the H2020 project Hexa-X (Grant Agreement no. 101015956) and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by Knut and Alice Wallenberg Foundation. The work was also supported by the Academy of Finland under the projects #319994, #338224, #323244, and #328214.
§ APPENDIX
Properties of Gaussian distributions are summarized below. The PDF of a Gaussian random variable $\mathbf{x} \in \mathbb{R}^n$ with mean $\mathbf{m} \in \mathbb{R}^n$ and covariance $\mathbf{P} \in \mathbb{R}^{n \times n}$ is defined as
\begin{equation}\label{eq:gaussian_pdf}
\mathcal{N}(\mathbf{x} \vert \mathbf{m}, \mathbf{P}) = (2 \pi)^{-n/2} \lvert \mathbf{P} \rvert^{-1/2} \exp{\left(-\tfrac{1}{2} \left(\mathbf{x} - \mathbf{m} \right)^T \mathbf{P}^{-1} \left(\mathbf{x} - \mathbf{m} \right) \right)},
\end{equation}
where $\lvert \mathbf{P} \rvert$ is the determinant of $\mathbf{P}$. The joint distribution of random variables $\mathbf{x} \in \mathbb{R}^n$ and $\mathbf{y} \in \mathbb{R}^m$ is given by
\begin{equation}\label{eq:joint_distribution}
\begin{pmatrix}
\mathbf{x} \\ \mathbf{y}
\end{pmatrix} \sim \mathcal{N} \left( \begin{bmatrix}
\mathbf{m} \\ \boldsymbol{\mu}
\end{bmatrix}, \begin{bmatrix}
\mathbf{P} & \mathbf{C} \\ \mathbf{C}^T & \mathbf{S}
\end{bmatrix} \right),
\end{equation}
for which the moments are
\begin{align}
\boldsymbol{\mu} &= \int \mathrm{E} \lbrace \mathbf{y} \vert \mathbf{x} \rbrace \pi(\mathbf{x}) \mathrm{d}\mathbf{x}, \\
\mathbf{S} &= \int (\mathrm{E} \lbrace \mathbf{y} \vert \mathbf{x} \rbrace - \boldsymbol{\mu}) (\mathrm{E} \lbrace \mathbf{y} \vert \mathbf{x} \rbrace - \boldsymbol{\mu})^T \pi(\mathbf{x}) \mathrm{d}\mathbf{x} + \int \mathrm{Cov} \lbrace \mathbf{y} \vert \mathbf{x} \rbrace \pi(\mathbf{x}) \mathrm{d}\mathbf{x}, \\
\mathbf{C} &= \int (\mathbf{x} - \mathbf{m}) (\mathrm{E} \lbrace \mathbf{y} \vert \mathbf{x} \rbrace - \boldsymbol{\mu})^T \pi(\mathbf{x}) \mathrm{d}\mathbf{x},
\end{align}
and where $\pi(\mathbf{x}) = \mathcal{N}(\mathbf{x} \vert \mathbf{m},\mathbf{P})$ denotes the linearization density. The marginal and conditional distributions of $\mathbf{x}$ and $\mathbf{y}$ are given by
\begin{align}
\mathbf{x} &\sim \mathcal{N}(\mathbf{m}, \mathbf{P}) \label{eq:marginal_distribution_of_x}, \\
\mathbf{y} &\sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{S}) \label{eq:marginal_distribution_of_y}, \\
\mathbf{x} \vert \mathbf{y} &\sim \mathcal{N} \left(\mathbf{m} + \mathbf{C} \mathbf{S}^{-1} \left( \mathbf{y} - \boldsymbol{\mu} \right), \mathbf{P} - \mathbf{C} \mathbf{S}^{-1} \mathbf{C}^T \right) \label{eq:conditional_distribution_of_x_given_y}, \\
\mathbf{y} \vert \mathbf{x} &\sim \mathcal{N} \left(\boldsymbol{\mu} + \mathbf{C}^T \mathbf{P}^{-1} \left( \mathbf{x} - \mathbf{m} \right), \mathbf{S} - \mathbf{C}^T \mathbf{P}^{-1} \mathbf{C} \right). \label{eq:conditional_distribution_of_y_given_x}
\end{align}
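The conditioning formulas above can be checked numerically; the sketch below, with arbitrary example moments, evaluates the mean and covariance of $\mathbf{x} \vert \mathbf{y}$.

```python
import numpy as np

# Example joint moments (arbitrary values, for illustration only).
m = np.array([1.0, 2.0])                    # mean of x
mu = np.array([0.5])                        # mean of y
P = np.array([[2.0, 0.3], [0.3, 1.0]])      # covariance of x
C = np.array([[0.4], [0.1]])                # cross-covariance of (x, y)
S = np.array([[0.5]])                       # covariance of y

y = np.array([1.0])
# x | y ~ N(m + C S^{-1} (y - mu), P - C S^{-1} C^T)
cond_mean = m + C @ np.linalg.solve(S, y - mu)
cond_cov = P - C @ np.linalg.solve(S, C.T)
```

Using `np.linalg.solve` rather than an explicit inverse is the numerically preferred way to apply $\mathbf{S}^{-1}$.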
§ SECTION HEADING
§ SECTION HEADING
Use the template chapter.tex together with the document class SVMono (monograph-type books) or SVMult (edited books) to style the various elements of your chapter content.
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations. And please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
§ SECTION HEADING
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations.
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
Use the standard environment to typeset your equations, e.g.
\begin{equation}
a \times b = c\;,
\end{equation}
however, for multiline equations we recommend to use the environment[In physics texts please activate the class option to depict your vectors in boldface-italic type - as is customary for a wide range of physical subjects].
\begin{eqnarray}
\left|\nabla U_{\alpha}^{\mu}(y)\right| &\le&\frac1{d-\alpha}\int
\left|\nabla\frac1{|\xi-y|^{d-\alpha}}\right|\,d\mu(\xi) =
\int \frac1{|\xi-y|^{d-\alpha+1}} \,d\mu(\xi) \\
&=&(d-\alpha+1) \int\limits_{d(y)}^\infty
\frac{\mu(B(y,r))}{r^{d-\alpha+2}}\,dr \le (d-\alpha+1)
\int\limits_{d(y)}^\infty \frac{r^{d-\alpha}}{r^{d-\alpha+2}}\,dr
\label{eq:01}
\end{eqnarray}
§.§ Subsection Heading
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-referencescross-references and citationscitations as has already been described in Sect. <ref>.
Please do not use quotation marks when quoting texts! Simply use the environment – it will automatically be rendered in line with the preferred layout.
§.§.§ Subsubsection Heading
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations as has already been described in Sect. <ref>, see also Fig. <ref>[If you copy text passages, figures, or tables from other works, you must obtain permission from the copyright holder (usually the original publisher). Please enclose the signed permission with the manuscript. The sourcespermission to print must be acknowledged either in the captions, as footnotes or in a separate section of the book.]
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
If the width of the figure is less than 7.8 cm use the command to flush the caption on the left side of the page. If the figure is positioned at the top of the page, align the sidecaption with the top of the figure – to achieve this you simply need to use the optional argument with the command
Paragraph Heading
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
For typesetting numbered lists we recommend to use the environment – it will automatically be rendered in line with the preferred layout.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
Subparagraph Heading
In order to avoid simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Use the automatism for all your cross-references and citations as has already been described in Sect. <ref>, see also Fig. <ref>.
For unnumbered lists we recommend to use the environment – it will automatically be rendered in line with the preferred layout.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development, cf. Table <ref>.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
* Livelihood and survival mobility are oftentimes outcomes of uneven socioeconomic development.
If the width of the figure is less than 7.8 cm use the command to flush the caption on the left side of the page. If the figure is positioned at the top of the page, align the sidecaption with the top of the figure – to achieve this you simply need to use the optional argument with the command
Run-in Heading Boldface Version Use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Run-in Heading Boldface and Italic Version Use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Run-in Heading Displayed Version Use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Please write your table caption here
Classes       Subclass         Length       Action Mechanism
Translation   mRNA$^a$         22 (19–25)   Translation repression, mRNA cleavage
Translation   mRNA cleavage    21           mRNA cleavage
Translation   mRNA             21–22        mRNA cleavage
Translation   mRNA             24–26        Histone and DNA modification
$^a$ Table foot note (with superscript)
§ SECTION HEADING
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
If you want to list definitions or the like we recommend to use the enhanced environment – it will automatically be rendered in line with the preferred layout.
Type 1 That addresses central themes pertaining to migration, health, and disease. In Sect. <ref>, Wilson discusses the role of human migration in infectious disease distributions and patterns.
Type 2 That addresses central themes pertaining to migration, health, and disease. In Sect. <ref>, Wilson discusses the role of human migration in infectious disease distributions and patterns.
§.§ Subsection Heading
In order to avoid simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
If you want to emphasize complete paragraphs of texts we recommend to use the newly defined class option and the newly defined environment . This will produce a 15 percent screened box 'behind' your text.
If you want to emphasize complete paragraphs of texts we recommend to use the newly defined class option and environment . This will produce a 15 percent screened box 'behind' your text.
§.§.§ Subsubsection Heading
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Please note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
Theorem text goes here.
Definition text goes here.
Proof text goes here.
Paragraph Heading
Instead of simply listing headings of different levels we recommend to let every heading be followed by at least a short passage of text. Further on please use the automatism for all your cross-references and citations as has already been described in Sect. <ref>.
Note that the first line of text that follows a heading is not indented, whereas the first lines of all subsequent paragraphs are.
Theorem text goes here.
Definition text goes here.
Proof text goes here.
Trailer Head
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to emphasize complete paragraphs of texts in an we recommend to
Program Code
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to emphasize complete paragraphs of texts in an we recommend to
Background Information
If you want to emphasize complete paragraphs of texts in an
we recommend to
Legal Text
If you want to emphasize complete paragraphs of texts in an we recommend to
If you want to include acknowledgments of assistance and the like at the end of an individual chapter please use the environment – it will automatically be rendered in line with the preferred layout.
§ APPENDIX
When placed at the end of a chapter or contribution (as opposed to at the end of the book), the numbering of tables, figures, and equations in the appendix section continues on from that in the main text. Hence please do not use the command when writing an appendix at the end of your chapter or contribution. If there is only one appendix, it is designated “Appendix”; if there is more than one, the appendices are designated “Appendix 1”, “Appendix 2”, etc.
\begin{equation}
a \times b = c
\end{equation}
References may be cited in the text either by number (preferred) or by author/year.[Make sure that all references from the list are cited in the text. Those not cited should be moved to a separate Further Reading section or chapter.] If the citation in the text is numbered, the reference list should be arranged in ascending order. If the citation in the text is author/year, the reference list should be sorted alphabetically and if there are several works by the same author, the following order should be used:
* all works by the author alone, ordered chronologically by year of publication
* all works by the author with a coauthor, ordered alphabetically by coauthor
* all works by the author with several coauthors, ordered chronologically by year of publication.
The styling of references[Always use the standard abbreviation of a journal's name according to the ISSN List of Title Word Abbreviations, see <http://www.issn.org/en/node/344>] depends on the subject of your book:
* The two recommended styles for references in books on mathematical, physical, statistical and computer sciences are depicted in [67, 68, 69, 70, 71] and [72, 73, 74, 75, 76].
* Examples of the most commonly used reference style in books on Psychology, Social Sciences are [77, 78, 79, 80, 81].
* Examples for references in books on Humanities, Linguistics, Philosophy are [82, 83, 84, 85, 86].
* Examples of the basic Springer Nature style used in publications on a wide range of subjects such as Computer Science, Economics, Engineering, Geosciences, Life Sciences, Medicine, Biomedicine are [87, 88, 90, 89, 91].
[67] Broy, M.: Software engineering — from auxiliary to key technologies. In: Broy, M., Dener, E. (eds.) Software Pioneers, pp. 10-13. Springer, Heidelberg (2002)
[68] Dod, J.: Effective substances. In: The Dictionary of Substances and Their Effects. Royal Society of Chemistry (1999) Available via DIALOG.
<http://www.rsc.org/dose/title of subordinate document. Cited 15 Jan 1999>
[69] Geddes, K.O., Czapor, S.R., Labahn, G.: Algorithms for Computer Algebra. Kluwer, Boston (1992)
[70] Hamburger, C.: Quasimonotonicity, regularity and duality for nonlinear systems of partial differential equations. Ann. Mat. Pura. Appl. 169, 321–354 (1995)
[71] Slifka, M.K., Whitton, J.L.: Clinical implications of dysregulated cytokine production. J. Mol. Med. (2000) doi: 10.1007/s001090000086
[72] J. Dod, in The Dictionary of Substances and Their Effects, Royal Society of Chemistry. (Available via DIALOG, 1999),
<http://www.rsc.org/dose/title of subordinate document. Cited 15 Jan 1999>
[73] H. Ibach, H. Lüth, Solid-State Physics, 2nd edn. (Springer, New York, 1996), pp. 45-56
[74] S. Preuss, A. Demchuk Jr., M. Stuke, Appl. Phys. A 61
[75] M.K. Slifka, J.L. Whitton, J. Mol. Med., doi: 10.1007/s001090000086
[76] S.E. Smith, in Neuromuscular Junction, ed. by E. Zaimis. Handbook of Experimental Pharmacology, vol 42 (Springer, Heidelberg, 1976), p. 593
[77] Calfee, R. C., & Valencia, R. R. (1991). APA guide to preparing manuscripts for journal publication. Washington, DC: American Psychological Association.
[78] Dod, J. (1999). Effective substances. In: The dictionary of substances and their effects. Royal Society of Chemistry. Available via DIALOG.
<http://www.rsc.org/dose/Effective substances.> Cited 15 Jan 1999.
[79] Harris, M., Karper, E., Stacks, G., Hoffman, D., DeNiro, R., Cruz, P., et al. (2001). Writing labs and the Hollywood connection. J Film Writing, 44(3), 213–245.
[80] O'Neil, J. M., & Egan, J. (1992). Men's and women's gender role journeys: Metaphor for healing, transition, and transformation. In B. R. Wainrig (Ed.), Gender issues across the life cycle (pp. 107–123). New York: Springer.
[81] Kreger, M., Brindis, C.D., Manuel, D.M., Sassoubre, L. (2007). Lessons learned in systems change initiatives: benchmarks and indicators. American Journal of Community Psychology, doi: 10.1007/s10464-007-9108-14.
[82] Alber John, Daniel C. O'Connell, and Sabine Kowal. 2002. Personal perspective in TV interviews. Pragmatics 12:257–271
[83] Cameron, Deborah. 1997. Theoretical debates in feminist linguistics: Questions of sex and gender. In Gender and discourse, ed. Ruth Wodak, 99–119. London: Sage Publications.
[84] Cameron, Deborah. 1985. Feminism and linguistic theory. New York: St. Martin's Press.
[85] Dod, Jake. 1999. Effective substances. In: The dictionary of substances and their effects. Royal Society of Chemistry. Available via DIALOG.
http://www.rsc.org/dose/title of subordinate document. Cited 15 Jan 1999
[86] Suleiman, Camelia, Daniel C. O'Connell, and Sabine Kowal. 2002. `If you and I, if we, in this later day, lose that sacred fire...': Perspective in political interviews. Journal of Psycholinguistic Research. doi: 10.1023/A:1015592129296.
[87] Brown B, Aaron M (2001) The politics of nature. In: Smith J (ed) The rise of modern genomics, 3rd edn. Wiley, New York
[88] Dod J (1999) Effective Substances. In: The dictionary of substances and their effects. Royal Society of Chemistry. Available via DIALOG.
<http://www.rsc.org/dose/title of subordinate document. Cited 15 Jan 1999>
[89] Slifka MK, Whitton JL (2000) Clinical implications of dysregulated cytokine production. J Mol Med, doi: 10.1007/s001090000086
[90] Smith J, Jones M Jr, Houghton L et al (1999) Future of health insurance. N Engl J Med 965:325–329
[91] South J, Blass B (2001) The future of modern genomics. Blackwell, London
# A variation of continuity in $n$-normed spaces
Sibel Ersan
Maltepe University
Turkey
<EMAIL_ADDRESS>Sibel Ersan
Faculty of Engineering and Natural Sciences
Maltepe University
Istanbul, Turkey<EMAIL_ADDRESS>
###### Abstract.
Inspired by the idea of the consecutive terms of a sequence approaching zero, this study examines sequences whose $s$-th forward differences tend to zero.
Functions that map sequences satisfying this condition to sequences
satisfying the same condition are called $s$-ward continuous. Inclusion theorems
relating this kind of continuity to uniform continuity and ordinary continuity are also
considered. Additionally, the concept of $s$-ward compactness of a subset of
$X$ via $s$-quasi-Cauchy sequences is investigated. One finds that the
uniform limit of any sequence of $s$-ward continuous functions is $s$-ward
continuous and that the set of $s$-ward continuous functions is a closed subset of
the set of continuous functions.
###### Key words and phrases:
compactness, continuity, $n$-normed space
###### 2020 Mathematics Subject Classification:
40A05, 40A25, 40A30, 54C35
## 1\. Introduction and Preliminaries
Although some early evaluations were made of the axioms of an abstract
$n$-dimensional metric, the main developments regarding the definition of the
2-metric, 2-normed spaces, and their topological properties were described by
Gähler [22]; these concepts were later extended to the most
generalized case of $n$-metric and $n$-normed spaces, where $n$ is a natural
number, by Gähler [23]. Shortly after the concept of $n$-normed space was
introduced, the concept of $2$-inner product space was also defined in [3].
Afterwards many authors made substantial improvements in
$n$-normed spaces and in $2$-inner product spaces ([2, 20, 19, 13, 15, 11,
16]).
First we recall the notion of an $n$-normed space:
###### Definition 1.1.
An $n$-norm on a real vector space $X$ of dimension $d$, where $2\leq n\leq d$,
is a real-valued function $\|.,...,.\|$ on $X^{n}$ which satisfies the
following properties:
1. (1)
$\|\zeta_{1},\zeta_{2},...,\zeta_{n}\|=0$ if and only if
$\zeta_{1},\zeta_{2},...,\zeta_{n}$ are linearly dependent,
2. (2)
$\|\zeta_{1},\zeta_{2},...,\zeta_{n}\|=||\zeta_{k_{1}},...,\zeta_{k_{n}}||$
for every permutation $(k_{1},...,k_{n})$ of $(1,...,n)$,
3. (3)
$\|\zeta_{1},\zeta_{2},...,\delta\zeta_{n}\|=|\delta|\|\zeta_{1},\zeta_{2},...,\zeta_{n}\|$
for any real number $\delta$,
4. (4)
$\|x+y,\zeta_{1},...,\zeta_{n-1}\|\leq\|x,\zeta_{1},...,\zeta_{n-1}\|+\|y,\zeta_{1},...,\zeta_{n-1}\|$ for all $x,y,\zeta_{1},...,\zeta_{n-1}\in X$.
A vector space $X$ equipped with an $n$-norm $\|.,...,.\|$ is called an $n$-normed space.
In [12],
(1)
$\|\zeta_{1},...,\zeta_{n}\|_{p}=\left[\frac{1}{n!}\sum_{t_{1}}...\sum_{t_{n}}|\det(\zeta_{it_{k}})|^{p}\right]^{1/p}$
is given as an example of an $n$-norm on the space $l^{p}\times...\times l^{p}$
for $1\leq p<\infty$. Also, for $p=\infty$, the $n$-norm on
$l^{\infty}\times...\times l^{\infty}$ is given in [1] as
(2)
$\|\zeta_{1},...,\zeta_{n}\|_{\infty}=\sup_{t_{1}}...\sup_{t_{n}}|\det(\zeta_{it_{k}})|.$
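As a quick numerical illustration (a sketch that is not part of the paper; the helper names `det` and `n_norm_p` are ours), the $p$-norm in (1) can be evaluated directly for finitely supported sequences, i.e. vectors in $\mathbb{R}^{d}$:

```python
# Numerical sketch of the p-norm n-norm in Eq. (1), restricted to finitely
# supported sequences. Helper names are illustrative, not from the paper.
import itertools
import math

def det(M):
    """Determinant by Laplace expansion; adequate for the small n used here."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def n_norm_p(vectors, p=2):
    """||z_1,...,z_n||_p = [ (1/n!) * sum over (t_1,...,t_n) of |det(z_i[t_k])|^p ]^(1/p)."""
    n = len(vectors)
    d = len(vectors[0])
    total = 0.0
    for ts in itertools.product(range(d), repeat=n):
        M = [[vectors[i][t] for t in ts] for i in range(n)]  # matrix (z_i[t_k])
        total += abs(det(M)) ** p
    return (total / math.factorial(n)) ** (1.0 / p)

# Orthonormal e1, e2 in R^3 give norm 1; linearly dependent vectors give 0,
# in line with property (1) of Definition 1.1.
e1, e2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
print(n_norm_p([e1, e2], p=2))          # 1.0
print(n_norm_p([e1, [2.0, 0.0, 0.0]]))  # 0.0
```

For $n=2$, $p=2$ this reduces to the Lagrange identity $\|x,y\|_{2}^{2}=\|x\|^{2}\|y\|^{2}-\langle x,y\rangle^{2}$, the area of the parallelogram spanned by $x$ and $y$.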
###### Definition 1.2.
A sequence $(x_{k})$ in an $n$-normed space $X$ converges to a point $\zeta\in X$
if for each $\epsilon>0$ there exists a positive integer $\tilde{k}$ such
that for every $k\geq\tilde{k}$
(3) $\|x_{k}-\zeta,\mu_{1},\ldots,\mu_{n-1}\|<\epsilon,\ \ \ \
\forall\mu_{1},\ldots,\mu_{n-1}\in X.$
###### Definition 1.3.
A sequence $(x_{k})$ is a Cauchy sequence if for each $\epsilon>0$, there
exists a positive integer $t_{0}$ such that for every $k,m\geq t_{0}$
(4) $\|x_{k}-x_{m},\mu_{1},\ldots,\mu_{n-1}\|<\epsilon,\ \ \ \
\forall\mu_{1},\ldots,\mu_{n-1}\in X.$
If each Cauchy sequence in $X$ converges to an element of $X$, we say that $X$ is
complete; a complete $n$-normed space is called an $n$-Banach space.
In recent times, the notion of quasi-Cauchy sequences was given in [4]: with
the idea of a quasi-Cauchy sequence, Burton and Coleman captured sequences in
which the distance between consecutive terms tends to zero. Using this
idea, different types of continuity were defined for real functions in [6,
7], such as ward continuity, statistically ward continuity, and lacunary ward
continuity. These notions were also studied in $2$-normed spaces in [24, 9, 10].
The aim of this research is to generalize the notions of a
quasi-Cauchy sequence and of ward continuity of a function to the notions of an
$s$-quasi-Cauchy sequence and $s$-ward continuity of a function in an
$n$-normed space, for any fixed positive integer $s$. Theorems
relating ordinary continuity, uniform continuity, compactness, and $s$-ward
continuity are also proved. This paper not only extends the results of
[24] to $n$-normed spaces, but also includes new results in $2$-normed
spaces as the special case $n=2$.
## 2\. Main results
In this paper $X$, $\mathbb{R}$ and $s$ will denote a first countable
$n$-normed space with an $n$-norm $\|.,...,.\|$, the set of all real numbers
and a fixed positive integer, respectively. Now we give the notion of
$s$-quasi-Cauchyness of a sequence in $X$:
###### Definition 2.1.
A sequence $(x_{k})$ of points in $X$ is said to be $s$-quasi-Cauchy if for
all $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$ it satisfies
(5)
$\lim_{k\rightarrow\infty}||\Delta_{s}x_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
where $\Delta_{s}x_{k}=x_{k+s}-x_{k}$ for each positive integer $k$.
If one chooses $s=1$, the sequence reduces to an ordinary quasi-Cauchy
sequence. Moreover, using the telescoping identity
$x_{k+s}-x_{k}=(x_{k+s}-x_{k+s-1})+(x_{k+s-1}-x_{k+s-2})+\cdots+(x_{k+2}-x_{k+1})+(x_{k+1}-x_{k}),$
we see that any quasi-Cauchy sequence is $s$-quasi-Cauchy; however, the
converse is not true.
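A concrete counterexample for the converse can be checked numerically (an illustrative sketch, not from the paper): in $\mathbb{R}^{2}$ with the standard $2$-norm $\|x,y\|=|\det(x\ y)|$, the sequence $x_{k}=(k\bmod 2,\,0)$ has vanishing $\Delta_{2}$ differences but $\Delta_{1}$ differences bounded away from zero, so it is $2$-quasi-Cauchy but not quasi-Cauchy.

```python
# Illustrative check (not from the paper): x_k = (k mod 2, 0) in R^2 with the
# standard 2-norm ||x, y|| = |det[x y]| is 2-quasi-Cauchy but not quasi-Cauchy.
def two_norm(x, y):
    """Standard 2-norm on R^2: area of the parallelogram spanned by x and y."""
    return abs(x[0] * y[1] - x[1] * y[0])

def delta(seq, s, k):
    """Forward difference Delta_s x_k = x_{k+s} - x_k."""
    return tuple(a - b for a, b in zip(seq(k + s), seq(k)))

x = lambda k: (k % 2, 0.0)
mu = (0.0, 1.0)  # one fixed direction mu_1 suffices to witness the failure

d2 = [two_norm(delta(x, 2, k), mu) for k in range(100)]
d1 = [two_norm(delta(x, 1, k), mu) for k in range(100)]
print(max(d2))  # 0.0 -> Delta_2 x_k vanishes: (x_k) is 2-quasi-Cauchy
print(min(d1))  # 1.0 -> Delta_1 differences stay away from 0: not quasi-Cauchy
```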
Any Cauchy sequence is $s$-quasi-Cauchy, and so is any convergent sequence. The
sequence of partial sums of a convergent series is $s$-quasi-Cauchy. One notes
that the set $\Delta_{s}(X)$ of $s$-quasi-Cauchy sequences in $X$ is
a vector space. Indeed, if $(x_{k})$ and $(y_{k})$ are $s$-quasi-Cauchy sequences in $X$, then
(6)
$\lim_{k\rightarrow\infty}||\Delta_{s}x_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||=0\ \ \textrm{and}$
(7)
$\lim_{k\rightarrow\infty}||\Delta_{s}y_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
for all $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$. Therefore
(8)
$\lim_{k\rightarrow\infty}||\Delta_{s}(x_{k}+y_{k}),\mu_{1},\mu_{2},...,\mu_{n-1}||\leq\lim_{k\rightarrow\infty}||\Delta_{s}x_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||+\lim_{k\rightarrow\infty}||\Delta_{s}y_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||=0.$
So the sum of two $s$-quasi-Cauchy sequences is again $s$-quasi-Cauchy, and it
is clear that $(ax_{k})$ is an $s$-quasi-Cauchy sequence in $X$ for any constant
$a\in\mathbb{R}$.
###### Definition 2.2.
A subset $A$ of $X$ is called $s$-ward compact if any sequence in the set $A$
has an $s$-quasi-Cauchy subsequence.
If $A$ is an $s$-ward compact subset of $X$, then every subset of $A$ is $s$-ward compact. Moreover, every ward compact subset of $X$ is $s$-ward compact, the union of finitely many $s$-ward compact subsets of $X$ is $s$-ward compact, and every sequentially compact subset of $X$ is $s$-ward compact.
For each real number $\alpha>0$, an $\alpha$-ball with center $a$ in $X$ is
defined as
(9) $B_{\alpha}(a,x_{1},...,x_{n-1})=\\{x\in
X:||a-x,x_{1}-x,...,x_{n-1}-x||<\alpha\\}$
for $x_{1},...,x_{n-1}\in X$. The family of all sets $W_{i}(a)=B_{\alpha_{i}}(a,x_{i_{1}},...,x_{i_{(n-1)}})$, $i=1,2,...$, is an open basis at $a$. Let $\beta_{n-1}$ be the collection of linearly independent sets $B$ with $n-1$ elements. For $B\in\beta_{n-1}$, the mapping
$p_{B}(x)=||x,x_{1},...,x_{n-1}||,\ \textrm{for}\ x\in X,\ \ x_{1},...,x_{n-1}\in B$
defines a seminorm on $X$, and the collection $\\{p_{B}:B\in\beta_{n-1}\\}$ of seminorms makes $X$ a locally convex topological vector space. For each nonzero $x\in X$ there exist $x_{1},...,x_{n-1}\in X$ such that $x,x_{1},x_{2},...,x_{n-1}$ are linearly independent, so $||x,x_{1},...,x_{n-1}||\neq 0$, which makes $X$ a Hausdorff space. A neighborhood of the origin for this topology has the form of a finite intersection
$\bigcap_{i=1}^{n}\\{x\in X:||x,x_{i_{1}}-x,...,x_{i_{(n-1)}}-x||<\epsilon\\}$
where $\epsilon>0$.
The following theorem characterizes total boundedness and is valid not only in $n$-normed spaces but also in $2$-normed spaces. It extends the results for quasi-Cauchy sequences of $2$-normed-space-valued sequences given in [24] to $s$-quasi-Cauchy sequences with values in an $n$-normed space; putting $s=1$ recovers the earlier results for $2$-normed spaces. It should be noted that Theorem 3 in [8] cannot be obtained simply by putting $n=1$ to pass from an $n$-normed space to a normed space, so the following theorem is of independent interest in this new setting.
###### Lemma 2.1.
If a subset $A$ of $X$ is totally bounded, then every sequence in $A$ contains an $s$-quasi-Cauchy subsequence.
###### Proof.
Let $A$ be totally bounded and let $(x_{n})$ be any sequence in $A$. Since $A$ is totally bounded, it is covered by a finite number of balls of $X$ of diameter less than $1$. One of these balls, which we denote by $A_{1}$, must contain $x_{n}$ for infinitely many values of $n$. Choose a positive integer $n_{1}$ such that $x_{n_{1}}\in A_{1}$. Since $A_{1}$ is totally bounded, it is covered by a finite number of balls of diameter less than $1/2$. One of these balls, which we denote by $A_{2}$, must contain $x_{n}$ for infinitely many $n$. Let $n_{2}$ be a positive integer such that $n_{2}>n_{1}$ and $x_{n_{2}}\in A_{2}$. Since $A_{2}\subset A_{1}$, it follows that $x_{n_{2}}\in A_{1}$. Continuing in this way, we obtain for each positive integer $k$ a ball $A_{k}\subset A_{k-1}$ of diameter less than $1/k$ and a term $x_{n_{k}}\in A_{k}$ of the sequence $(x_{n})$ with $n_{k}>n_{k-1}$. Since $x_{n_{k}},x_{n_{k+1}},...,x_{n_{k+s}},...$ all lie in $A_{k}$ and the diameter of $A_{k}$ is less than $1/k$, the sequence $(x_{n_{k}})$ is an $s$-quasi-Cauchy subsequence of $(x_{n})$. ∎
###### Theorem 2.2.
A subset of $X$ is totally bounded if and only if it is $s$-ward compact for
any positive integer $s$.
###### Proof.
If $A$ is totally bounded, then every sequence in $A$ has an $s$-quasi-Cauchy subsequence by Lemma 2.1, so the set $A$ is $s$-ward compact for any fixed positive integer $s$. For the converse, suppose that $A$ is not totally bounded. Choose any $x_{1}\in A$ and $\alpha>0$. Since $A$ is not totally bounded, the neighborhood of the point $x_{1}$ in $A$ defined by $B_{\alpha}(x_{1},\mu^{1}_{1},...,\mu^{1}_{n-1})=\\{y\in A;||x_{1}-y,\mu^{1}_{1}-y,...,\mu^{1}_{n-1}-y||<\alpha\\}$ is not equal to $A$. Hence there is an $x_{2}\in A$ such that $x_{2}\notin B_{\alpha}(x_{1},\mu^{1}_{1},...,\mu^{1}_{n-1})$, that is, $||x_{1}-x_{2},\mu^{1}_{1}-x_{2},...,\mu^{1}_{n-1}-x_{2}||\geq\alpha$. Since $A$ is not totally bounded, $B_{\alpha}(x_{1},\mu^{1}_{1},...,\mu^{1}_{n-1})\cup B_{\alpha}(x_{2},\mu^{2}_{1},...,\mu^{2}_{n-1})\neq A$, where $B_{\alpha}(x_{2},\mu^{2}_{1},...,\mu^{2}_{n-1})$ is the neighborhood of the point $x_{2}$ in $A$. Continuing this procedure, we obtain a sequence $(x_{k})$ of points in $A$ with
$x_{k+s}\not\in\bigcup_{i=1}^{k+s-1}B_{\alpha}(x_{i},\mu^{i}_{1},...,\mu^{i}_{n-1}).$
So $||x_{k+s}-x_{k},\mu^{i}_{1}-x_{k},...,\mu^{i}_{n-1}-x_{k}||\geq\alpha$ for all nonzero $\mu^{i}_{1},...,\mu^{i}_{n-1}\in A$, where $i=1,...,k+s-1$. Hence the sequence $(x_{k})$ has no $s$-quasi-Cauchy subsequence, and therefore $A$ is not $s$-ward compact. ∎
###### Definition 2.3.
A function $f$ is called $s$-ward continuous on a subset $A$ of $X$ if it preserves $s$-quasi-Cauchy sequences; that is, whenever a sequence $(x_{k})$ in $A$ satisfies
(10)
$\lim_{k\rightarrow\infty}||\Delta_{s}x_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
for all $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$, then
(11)
$\lim_{k\rightarrow\infty}||\Delta_{s}f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0.$
We now show that any $s$-ward continuous function is continuous.
###### Theorem 2.3.
Any $s$-ward continuous function on a subset $A$ of $X$ is continuous on $A$.
###### Proof.
Let the function $f$ be $s$-ward continuous on $A\subset X$ and let $(x_{k})$ be any sequence in $A$ converging to $\zeta$, that is,
(12) $\lim_{k\rightarrow\infty}||x_{k}-\zeta,\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
for all $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$. Using the terms of the sequence $(x_{k})$, define a new sequence
$(t_{m})=(x_{1},...,x_{1},\zeta,...,\zeta,x_{2},...,x_{2},\zeta,...,\zeta,...,x_{n},...,x_{n},\zeta,...,\zeta,...),$
in which each term is repeated $s$ times. Every convergent sequence is Cauchy, and every Cauchy sequence is $s$-quasi-Cauchy; since for each $m$ the difference $t_{m+s}-t_{m}$ equals either $t_{m+s}-\zeta$ or $\zeta-t_{m}$, it follows from (12) that
$\displaystyle\lim_{m\rightarrow\infty}||\Delta_{s}t_{m},\mu_{1},\mu_{2},...,\mu_{n-1}||=\lim_{m\rightarrow\infty}||t_{m+s}-t_{m},\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
for every $\mu_{1},\mu_{2},...,\mu_{n-1}$. Hence $(t_{m})$ is an $s$-quasi-Cauchy sequence. Since the function $f$ is assumed to be $s$-ward continuous, we get
$\displaystyle\lim_{m\rightarrow\infty}||\Delta_{s}f(t_{m}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
(13)
$\displaystyle=\lim_{m\rightarrow\infty}||f(t_{m+s})-f(t_{m}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0,$
in which either
$\lim_{m\rightarrow\infty}||f(t_{m+s})-f(\zeta),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0$
or
$\lim_{m\rightarrow\infty}||f(\zeta)-f(t_{m}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0.$
So $(f(x_{k}))$ converges to $f(\zeta)$. ∎
Since the sum of two $s$-ward continuous functions on $A$ is $s$-ward continuous and $cf$ is $s$-ward continuous for any real constant $c$, the set of $s$-ward continuous functions on $A$ is a vector subspace of the vector space of all continuous functions on $A$.
###### Theorem 2.4.
Every $s$-ward continuous function on $A\subset X$ is ward continuous on $A$.
###### Proof.
Assume that $(x_{k})$ is a quasi-Cauchy sequence in $A$ and that $f$ is an $s$-ward continuous function on $A$. If $s=1$, the result is obvious. Let $s>1$ and define the sequence
$(t_{m})=(\underbrace{x_{1},x_{1},...,x_{1}}_{s-times},\underbrace{x_{2},x_{2},...,x_{2}}_{s-times},...,\underbrace{x_{n},x_{n},...,x_{n}}_{s-times},...).$
Since $\Delta_{s}t_{m}=x_{j+1}-x_{j}$ whenever $t_{m}=x_{j}$, and $(x_{k})$ is quasi-Cauchy, the sequence $(t_{m})$ is $s$-quasi-Cauchy, i.e.
(14)
$\lim_{m\rightarrow\infty}||\Delta_{s}t_{m},\mu_{1},\mu_{2},...,\mu_{n-1}||=0.$
By the $s$-ward continuity of the function $f$ we have
(15)
$\lim_{m\rightarrow\infty}||\Delta_{s}f(t_{m}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0.$
Since $\Delta_{s}f(t_{m})=f(x_{j+1})-f(x_{j})=\Delta f(x_{j})$ whenever $t_{m}=x_{j}$, it follows that
(16) $\lim_{k\rightarrow\infty}||\Delta f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0.$
So the $s$-ward continuity of the function $f$ implies the ward continuity of $f$ on $A\subset X$. ∎
###### Theorem 2.5.
The image of an $s$-ward compact subset of $X$ by an $s$-ward continuous
function is $s$-ward compact.
###### Proof.
Assume that $f$ is an $s$-ward continuous function and that $A$ is an $s$-ward compact subset of $X$. Take any sequence $(t_{k})$ in $f(A)$ and write $t_{k}=f(x_{k})$ with $x_{k}\in A$. Since $A$ is $s$-ward compact, there is a subsequence $(x_{m})$ of $(x_{k})$ with
(17)
$\lim_{m\rightarrow\infty}||\Delta_{s}x_{m},\mu_{1},\mu_{2},...,\mu_{n-1}||=0$
for all $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$. Using the $s$-ward continuity of $f$ we have
(18)
$\lim_{m\rightarrow\infty}||\Delta_{s}f(x_{m}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0,$
so $(f(x_{m}))$ is an $s$-quasi-Cauchy subsequence of $(t_{k})$. Hence the subset $f(A)\subset X$ is $s$-ward compact. ∎
The image of a compact subset of $X$ under an $s$-ward continuous function is compact; this follows easily from Theorem 2.3.
###### Theorem 2.6.
If $f$ is uniformly continuous on $A\subset X$, then it is $s$-ward continuous
on $A$.
###### Proof.
Let $f$ be a uniformly continuous function on $A$, and let $(x_{k})$ be an $s$-quasi-Cauchy sequence in $A$. Our aim is to prove that the sequence $(f(x_{k}))$ is also an $s$-quasi-Cauchy sequence. Take any $\varepsilon>0$. There exists a $\delta>0$ such that if
(19) $||x-y,\mu_{1},\mu_{2},...,\mu_{n-1}||<\delta\ \ \textrm{then}\ \ ||f(x)-f(y),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\varepsilon.$
For this $\delta>0$ there exists a $\tilde{k}=\tilde{k}(\delta)$ such that
(20) $||\Delta_{s}x_{k},\mu_{1},\mu_{2},...,\mu_{n-1}||<\delta$
for every $\mu_{1},\mu_{2},...,\mu_{n-1}\in X$ whenever $k>\tilde{k}$. The uniform continuity of $f$ on $A$ then implies, for every $k>\tilde{k}$,
(21)
$||\Delta_{s}f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\varepsilon$
for every $f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})\in X$. Thus the sequence $(f(x_{k}))$ is $s$-quasi-Cauchy, so the function $f$ is $s$-ward continuous. ∎
###### Theorem 2.7.
The uniform limit of a sequence of $s$-ward continuous functions is $s$-ward continuous.
###### Proof.
Let $(f_{t})$ be a sequence of $s$-ward continuous functions converging uniformly to a function $f$. Pick an $s$-quasi-Cauchy sequence $(x_{k})$ in $A$ and choose any $\varepsilon>0$. There is an integer $N\in Z^{+}$ such that
(22)
$||f_{t}(x)-f(x),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}$
for every $x\in{A}$ and all $f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})\in X$ whenever $t\geq N$. Using the $s$-ward continuity of $f_{N}$, there is a positive integer $N_{1}(\varepsilon)$ such that
(23)
$||\Delta_{s}f_{N}(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}$
for every $k\geq N_{1}$. Now for $k\geq N_{1}$ we have
$\displaystyle||\Delta_{s}f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle=||f(x_{k+s})-f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle\leq||f(x_{k+s})-f_{N}(x_{k+s}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle+||\Delta_{s}f_{N}(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
(24)
$\displaystyle+||f_{N}(x_{k})-f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3}=\varepsilon.$
So the function $f$ is $s$-ward continuous on $A$. ∎
###### Theorem 2.8.
The collection of $s$-ward continuous functions on $A\subset X$ is a closed subset of the collection of all continuous functions on $A$.
###### Proof.
Let $E$ be the collection of all $s$-ward continuous functions on $A\subset X$ and let $\overline{E}$ be the closure of $E$, that is, the set of all limits of convergent sequences in $E$; $E$ is closed if $E=\overline{E}$. It is obvious that $E\subseteq\overline{E}$. Let $f$ be any closure point of $E$; then there exists a sequence $(f_{t})$ of $s$-ward continuous functions in $E$ such that
(25)
$\lim_{t\rightarrow\infty}||f_{t}-f,f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||=0$
for all $f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})\in X$. Choose any $s$-quasi-Cauchy sequence $(x_{k})$. Since $(f_{t})$ converges to $f$, for every $\varepsilon>0$ there is an $N_{0}$ such that for every $t\geq{N_{0}}$ and $x\in{A}$,
(26)
$||f(x)-f_{t}(x),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}.$
Fix $N\geq N_{0}$. As $f_{N}$ is $s$-ward continuous, there exists an $N_{1}$ such that for all $k\geq{N_{1}}$,
(27)
$||\Delta_{s}f_{N}(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}.$
Hence for all $k\geq{N_{1}}$,
$\displaystyle||\Delta_{s}f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle=||f(x_{k+s})-f(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle\leq||f(x_{k+s})-f_{N}(x_{k+s}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
$\displaystyle+||f(x_{k})-f_{N}(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||$
(28)
$\displaystyle+||\Delta_{s}f_{N}(x_{k}),f(\mu_{1}),f(\mu_{2}),...,f(\mu_{n-1})||<\frac{\varepsilon}{3}+\frac{\varepsilon}{3}+\frac{\varepsilon}{3}=\varepsilon.$
Thus the function $f$ is $s$-ward continuous, so $f\in E$ and $E=\overline{E}$. Together with Theorem 2.3, this ends the proof. ∎
## 3\. Conclusion
The notion of an $n$-normed space arose from asking whether there are problems for which the $n$-norm topology works while the ordinary norm topology does not. As an application of the notion of $n$-norm, if a term in the definition of the $n$-norm describes a change of shape, then the $n$-norm can be interpreted as the associated volume of this surface. Likewise, if a particular output requires $n$ inputs, with one main input and $(n-1)$ dummy inputs needed to accomplish the operation, this concept may find applications in many areas of science. In this paper we investigated the generalization of the notions of quasi-Cauchy sequences and ward continuous functions to the notions of $s$-quasi-Cauchy sequences and $s$-ward continuous functions in $n$-normed spaces. We also obtained inclusion theorems relating ordinary continuity, uniform continuity, $s$-ward continuity, and $s$-ward compactness. We proved that the uniform limit of a sequence of $s$-ward continuous functions is $s$-ward continuous and that the set of $s$-ward continuous functions is a closed subset of the set of continuous functions. As further study, we recommend investigating $s$-quasi-Cauchy sequences of points and fuzzy functions in an $n$-normed fuzzy space; due to the different structure, the methods of proof will not be similar to those in this study (see [14], [21]). We also recommend investigating $s$-quasi-Cauchy double sequences in $n$-normed spaces as another further study (see [17], [18]).
## Declarations
Ethical Approval Not applicable
Competing interests Not applicable
Authors’ contributions Not applicable
Funding Not applicable
Availability of data and materials Not applicable
## References
* [1] A. Malceski, $l^{\infty}$ as n-normed space, Mat. Bilten., 21, (1997), 103-110.
* [2] A. Misiak, n–inner product spaces, Mathematische Nachrichten, 140, (1989), 299–319.
* [3] C. Diminnie, S. Gähler and A. White, $2$-inner product spaces, Demonstratio Math., 6, (1973), 525-536.
* [4] D. Burton, J. Coleman, Quasi-Cauchy sequences, Amer. Math. Monthly, 117 4 (2010), 328-333.
* [5] F. Lael, K. Nourouzi, Compact Operators Defined on 2-Normed and 2-Probabilistic Normed Spaces, Mathematical Problems in Engineering, 2009, (2009), Article Number: 950234.
* [6] H. Cakalli, Forward continuity, J. Comput. Anal. Appl., 13, 2, (2011), 225-230. MR 2012c:26004
* [7] H. Cakalli, Variations on Quasi-Cauchy Sequences, Filomat, 29:1, (2015), 13-19.
* [8] H. Cakalli, Statistical quasi-Cauchy Sequences, Mathematical and Computer Modelling, 54, (2011), 1620-1624.
* [9] H. Cakalli, S. Ersan, Strongly lacunary ward continuity in 2-normed spaces, The Scientific World Journal, 2014 (2014), 5 pages, Article ID 479679.
* [10] H. Cakalli, S. Ersan, Lacunary ward continuity in 2-normed spaces, Filomat, 29 (10) (2015), 2257-2263.
* [11] H. Dutta, On some n-normed linear space valued difference sequences, Journal of the Franklin Institute, 348, (2011), 2876-2883.
* [12] H. Gunawan, The space of p-summable sequences and its natural n-norm, Bull. Austral. Math. Soc., 64, (2001), 137-147.
* [13] H. Gunawan, M. Mashadi, On n–normed spaces, International Journal of Mathematics and Mathematical Sciences, 27(10), (2001), 631–639.
* [14] Lj D. R., Kocinac, V. A., Khan, K. M. A. S., Alshlool and H., Altaf, On some topological properties of intuitionistic 2-fuzzy n-normed linear spaces Hacettepe Journal of Mathematics and Statistics, 49 1, (2020), 208-220.
* [15] M. Gurdal, A. Sahnier, Ideal convergence in n–normal spaces and some new sequence spaces via n–norm, Malaysian Journal of Fundamental and Applied Sciences, 4(1), (2014).
* [16] M. Gurdal, N. Sari and E. Savas, A-statistically localized sequences in n-normed spaces, Commun. Fac. Sci. Univ. Ank. Ser. A1 Math. Stat., 69, 2, (2020), 1484-1497.
* [17] M. Mursaleen and S. K. Sharma, Riesz lacunary almost convergent double sequence spaces defined by sequence of Orlicz functions over n-normed spaces, TWMS Journal of Pure and Applied Mathematics, 8 1, (2017), 43-63.
* [18] N. Khan, Classes of I-convergent double sequences over n-normed spaces, Journal of Function Spaces, (2016), Article Number 7594031.
* [19] R. Malceski, Strong $n$-convex $n$-normed spaces, Mat. Bilten, 21, (1997), 81-102.
* [20] S.S. Kim and Y. J. Cho, Strict convexity in linear $n$-normed spaces, Demonstratio Math., 29 4, (1996),739-744.
* [21] S. Altundag and E. Kamber, Lacunary Delta-statistical convergence in intuitionistic fuzzy n-normed space, Journal Of Inequalities And Applications, 40, (2014).
* [22] S. Gähler, $2$-metrische Räume und ihre topologische Struktur, Math. Nachr., 26, (1963), 115-148.
* [23] S. Gähler, Untersuchungen über verallgemeinerte $m$-metrische Räume I, Math. Nachr., 40, (1969), 165-189.
* [24] S. Ersan and H. Cakalli, Ward continuity in $2$-normed Spaces, Filomat, 29:7, (2015), 1507-1513.
# jaCappella Corpus: A Japanese a Cappella Vocal Ensemble Corpus
###### Abstract
We construct a corpus of Japanese _a cappella_ vocal ensembles (_jaCappella
corpus_) for vocal ensemble separation and synthesis. It consists of 35
copyright-cleared vocal ensemble songs and their audio recordings of
individual voice parts. These songs were arranged from out-of-copyright
Japanese children’s songs and have six voice parts (lead vocal, soprano, alto,
tenor, bass, and vocal percussion). They are divided into seven subsets, each
of which features typical characteristics of a music genre such as jazz and enka. The variety in genre and voice part matches the vocal ensembles that have recently become widespread in social media services such as YouTube, whereas conventional vocal ensemble datasets mainly target choral singing made up of soprano, alto, tenor, and bass. Experimental evaluation demonstrates that our corpus is a challenging resource for vocal ensemble separation. Our corpus is available on our project page.
Index Terms— Corpus, vocal ensemble, singing voice, audio source separation,
singing voice synthesis
## 1 Introduction
Vocal ensemble is a widespread form of group singing across cultures and languages and has been gathering attention in the field of music information retrieval (MIR) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Owing to the successes of deep learning in this field [11, 12], recent studies on vocal ensembles have adopted a data-driven approach: for example, multipitch estimation [3, 4], vocal ensemble separation [6, 7, 5], automatic transcription [8], synthesis of unison voices [9], and double-tracking [10]. For the development of such methods, the availability of suitable vocal ensemble datasets is essential.
There are a few publicly available vocal ensemble datasets [1, 5, 3, 2]. Table 1 shows their specifications. Most of them focus on choral singing, particularly four-part singing of soprano (S), alto (A), tenor (T), and bass (Bs). This SATB format is widespread in Western music, and audio recordings of traditional choral music have been the main target. Some of the datasets [3, 2] include recordings of vocal ensembles with multiple singers per voice part to analyze interactions between singers, while most vocal ensemble studies focus on vocal ensembles with one singer per voice part [8, 4, 3, 6, 7, 5]. The datasets of [3, 2] include audio recordings of singing voices during exercises, and only a few songs were often used. In [5], more than 40 songs were used to develop a vocal ensemble separation method; this is presumably because using the same songs for training and testing leads to data leakage.
Unlike choral singing, recent vocal ensemble songs include a wider variety of voice parts, such as lead vocal (Vo) and vocal percussion (VP). VP is the imitation of percussion sounds by mouth and can account for the rhythm. Thus, it fits the vocal ensemble arrangement of songs from various music genres such as jazz and reggae. Such vocal ensemble songs have been actively shared on social media services (e.g., YouTube and TikTok), forming a new vocal ensemble style. Despite their importance, to the best of our knowledge there are no datasets that feature this style and can be easily used for research. Since the availability of data is crucial for developing MIR methods for vocal ensembles, the lack of suitable datasets forces a costly and painstaking process of data collection.
In this paper, we construct a corpus of Japanese a cappella vocal ensembles, named the _jaCappella corpus_, by arranging out-of-copyright Japanese children’s songs. It includes 35 vocal ensemble songs composed of seven subsets (five songs per subset). The songs in each subset were arranged in a different music genre, which ensures variety in music genre and singing style. All songs have six voice parts: Vo, S, A, T, Bs, and VP. This format is popular in Japanese vocal ensemble arrangement, which we confirmed by collecting commercial vocal ensemble songs. To avoid copyright-related restrictions, we obtained all the necessary copyrights and neighboring rights for our songs.
Reserving these rights allows users of the jaCappella corpus to share
processed audio signals to the extent necessary for research and it can also
open the way for commercial use. This is one of the advantages of manually
creating music data because ensuring such a broad use is usually difficult to
achieve with a method that artificially mixes singing voices from existing
datasets [4]. To further enhance the usability of this corpus, we provide the
musical scores of the songs in the MusicXML format [13].
Our corpus includes audio signals of individual voice parts sung by 20 semi-
professional singers. A vocal ensemble was assembled per subset and each voice
part was sung by one singer. The recordings were performed per singer. We can
use the recorded signals for various tasks on vocal ensembles. For example, we
can use the jaCappella corpus for vocal ensemble separation, which we will
demonstrate in Section 3. In this paper, we focus on monaural mixtures but our
corpus can be used for multichannel situations such as bleeding sound
reduction (e.g., [14]). Another task of our interest is singing voice
synthesis for vocal ensemble, which we refer to as vocal ensemble synthesis.
This research direction aims to clarify how to artificially reproduce vocal
ensemble performed by humans. Since our corpus covers various music genres, it
would be useful to synthesize ensemble singing voices with singing styles
other than choral singing. The corpus is available at
https://tomohikonakamura.github.io/jaCappella_corpus/.
Table 1: Specifications of our jaCappella corpus and conventional vocal ensemble datasets

Corpus/Dataset | Voice parts | Duration [min] | # songs | Genre | Publicly avail.
---|---|---|---|---|---
Choral Singing [1] | S, A, T, Bs | 7 | 3 | Choral music | Yes
Dagstuhl ChoirSet [2] | S, A, T, Bs | 55 | 2 | Choral music | Yes
ESMUC Choir [3] | S, A, T, Bs | 31 | 3 | Choral music | Yes
Bach Chorales and Barbershop Quartets [5] | S, A, T, Bs | 104 | 48 | Choral and barbershop music | No
jaCappella (ours) | Vo, S, A, T, Bs, VP | 34 | 35 | Jazz, punk rock, bossa nova, popular, reggae, enka, neutral (children’s song) | Yes
## 2 Our jaCappella Corpus
We constructed the jaCappella corpus in three steps: corpus design,
arrangement into vocal ensemble songs, and recording of singing voices. In the
following, we describe the details of the three steps and analyze the
distributions of lyrics, comparing the created songs with commercial vocal
ensemble songs.
### 2.1 Corpus design
Table 2: Gender frequency of commercial music collection

Gender | Vo [%] | S [%] | A [%] | T [%] | Bs [%] | VP [%]
---|---|---|---|---|---|---
Male | 81.0 | 0.0 | 0.0 | 66.7 | 100.0 | 100.0
Female | 19.0 | 100.0 | 100.0 | 33.3 | 0.0 | 0.0
We design the jaCappella corpus based on an analysis of commercial vocal ensemble songs. We collected 60 musical pieces from a Japanese sheet music store [15] and examined them in terms of voice parts, gender of singers, and variety of music genres. In addition, we address copyright issues so that we can legally distribute our corpus.
Voice parts: 46 of the 60 musical pieces have six voice parts. 42 of the 46
pieces consist of Vo, S, A, T, Bs, and VP, while the other 4 pieces have
fourth chorus parts instead of Vo. Following the majority, we adopted the six-
part format of Vo, S, A, T, Bs, and VP. We call the 42 pieces _the commercial
music collection_ and determined other conditions on the basis of this
collection.
Gender of singers: Table 2 shows the gender frequencies of the singers in the commercial music collection. Female singers are dominant for S and A, and male singers are dominant for T, Bs, and VP. We followed the majority for these parts. Since the gender of a Vo singer strongly depends on the musical piece, we chose female Vo singers for convenience in singing voice synthesis.
Variety of music genres: The commercial music collection includes various
music genres, for example, Japanese popular music, rock, and enka (Japanese
traditional ballad). The variety in genre is essentially different from choral
songs and should be considered in corpus design. We divided songs into several
subsets and arranged the songs in each subset to different genres.
Copyrights of musical pieces: The distribution of a corpus that includes
commercial musical pieces is extremely restrictive because we must not violate
their copyrights and neighboring rights. To avoid this restriction, we decided
to arrange out-of-copyright songs into vocal ensemble songs and reserve their
copyrights and neighboring rights necessary for research purposes.
### 2.2 Arrangement into vocal ensemble songs
Fig. 1: Excerpt of musical score of “Poplar” in our jaCappella corpus.
In accordance with our design strategy (see Section 2.1), we arranged 35 out-
of-copyright Japanese children’s songs into vocal ensemble songs. The
arrangement was performed by five arrangers. The original children’s songs
were selected from two books of Japanese children’s songs [16, 17]111The used
songs were categorized into _doyo_ and _shoka_. Doyo are well-known Japanese
children’s songs and shoka are Japanese children’s songs taught in school.,
which include musical scores of melodies with lyrics. The Vo parts of our
songs were arranged in accordance with the musical scores and the other parts
were newly composed. The lyrics were given for all parts except for the VP
part.
The 35 songs are divided into seven subsets (five songs per subset). Six of the subsets are named _jazz, punk rock, bossa nova, popular, reggae,_ and _enka_, and the songs in each of these subsets were arranged in the corresponding music genre. For example, the Bs parts of the jazz subset have sequences of equal-duration notes whose pitches alternately go up and down (a.k.a. walking bass lines), one of the typical features of jazz music. The remaining subset, named _neutral_, consists of songs that were arranged to retain the mood of the original songs.
We created musical scores of the songs in the MusicXML format [13]. This
format can include various musical symbols and lyrics and is widely used in
MIR studies [18] and singing voice synthesis [19]. Fig. 1 shows an example of
the created musical scores. The created songs include sections where multiple singers synchronously sing the same lyrics (the region surrounded by red lines) and sections where some of the singers sing asynchronously with the other singers (the region surrounded by blue lines). Our songs thus capture characteristics that frequently appear in the commercial music collection.
### 2.3 Recording of singing voices
Table 3: Singer IDs and total durations of audio recordings in our jaCappella corpus. Singer IDs are comma-separated from left to right for Vo, S, A, T, Bs, and VP, respectively.

Subset | Singer IDs | Total dur. [s]
---|---|---
Jazz | Vo1, S1, A1, T1, Bs1, VP1 | 226.7
Punk rock | Vo2, S2, A2, T2, Bs1, VP2 | 310.7
Bossa nova | Vo3, S3, A2, T3, Bs2, VP3 | 334.5
Popular | Vo1, S1, A1, T1, Bs1, VP1 | 352.5
Reggae | Vo3, S3, A2, T3, Bs2, VP3 | 228.7
Enka | Vo2, S2, A2, T2, Bs1, VP2 | 361.1
Neutral | Vo1, S4, A3, T4, Bs1, VP4 | 260.1
We conducted recordings of singing voices for the vocal ensemble songs. The recordings were performed per singer to avoid COVID-19 infection. The
singing voices were recorded with Shure SM58 microphones in recording studios
at a sampling frequency of 48 kHz. We employed 20 Japanese semi-professional
singers: three singers for Vo, four singers for S, three singers for A, four
singers for T, two singers for Bs, and four singers for VP. The Vo, S, and A
singers were female and the T, Bs, and VP singers were male. The vocal
ensembles were assembled per subset. Table 3 shows the singer identifiers
(IDs) and total durations of the songs per subset.
After the recordings, we created the best take by combining several takes, as in a usual music production process. Pitch corrections were applied to these signals when the pitches of the singing voices differed from those in the musical scores. We did not use effects, such as reverb or an equalizer, that are often used in a commercial music production process. All voice parts in each song were time-aligned, which makes it easier to use the data for vocal ensemble separation and synthesis. We also provide the mixtures of all voice parts. The resultant audio signals were saved as monaural audio files in the RIFF WAVE format with 24-bit linear quantization.
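As a minimal illustration of this storage format (our own sketch, not part of the corpus tooling; the tone and file path below are illustrative), the following Python code writes and reads back a short 24-bit, 48 kHz monaural RIFF WAVE file using only the standard library:

```python
import math
import struct
import wave

SR = 48000      # corpus sampling frequency
SAMPWIDTH = 3   # 24-bit linear quantization -> 3 bytes per sample

def write_tone(path, freq=440.0, dur=0.01):
    """Write a short 24-bit monaural sine tone at 48 kHz."""
    n = int(SR * dur)
    frames = bytearray()
    for i in range(n):
        x = int(0.5 * (2 ** 23 - 1) * math.sin(2 * math.pi * freq * i / SR))
        frames += struct.pack("<i", x)[:3]  # keep the low 3 bytes (24-bit LE)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(SAMPWIDTH)
        w.setframerate(SR)
        w.writeframes(bytes(frames))

def read_24bit(path):
    """Read a 24-bit monaural WAV; return (rate, list of int samples)."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == SAMPWIDTH and w.getnchannels() == 1
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = []
    for i in range(0, len(raw), 3):
        b = raw[i:i + 3]
        pad = b"\xff" if b[2] & 0x80 else b"\x00"  # sign-extend to 32-bit
        samples.append(struct.unpack("<i", b + pad)[0])
    return rate, samples
```

The 3-byte little-endian packing and the sign extension on read are what 24-bit linear quantization implies; general-purpose audio libraries typically perform this decoding automatically.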
Fig. 2: Spectrogram examples of all voice parts: (a) Vo, (b) S, (c) A, (d) T, (e) Bs, (f) VP.
Fig. 2 shows examples of spectrograms of all voice parts. We can see that the energies of S, A, T, and Bs are distributed in partly overlapping pitch ranges and that they change synchronously in places. The spectrogram of VP has percussive characteristics, but time-frequency components similar to those of a voiced spectrogram appear in the range of around 4 to 5 s. This observation shows that VP has characteristics intermediate between percussive sounds and voices.
### 2.4 Analysis of lexical and non-lexical syllables
Table 4: Frequency of lexical and non-lexical syllables in our jaCappella corpus and a commercial music collection (Commercial)

Voice part | jaCappella Lexical [%] | jaCappella Non-lexical [%] | Commercial Lexical [%] | Commercial Non-lexical [%]
---|---|---|---|---
Vo | 85.2 | 14.8 | 91.1 | 8.9
S | 41.9 | 58.1 | 34.2 | 65.8
A | 32.9 | 67.1 | 35.1 | 64.9
T | 33.3 | 66.7 | 36.3 | 63.7
Bs | 0.4 | 99.6 | 2.9 | 97.1
As in the commercial music collection, the created songs include ordinary
Japanese syllables and nonsense syllables such as “ah” and “dm” (purple lyrics
in Fig. 1). The nonsense syllables appear particularly in S, A, T, and Bs. To
distinguish them, we call the former _the lexical syllables_ and the latter
_the non-lexical syllables_.
The frequencies of these syllables differ by voice part. Table 4 shows the syllable frequencies of the voice parts in the jaCappella corpus and the commercial music collection. We omitted VP since it does not have lyrics. The frequency of lexical syllables is high for Vo and decreases rapidly from S to Bs. Since most of the Bs syllables imitate bass instrument sounds, the frequency of non-lexical syllables (e.g., “dm”, “bon”, and “ban”) is quite high. These tendencies appear in both collections, showing that our corpus captures the biases of real vocal ensemble songs in terms of lexical and non-lexical syllables. For Vo and S, the jaCappella corpus has more non-lexical syllables than the commercial music collection. This is because some original songs have short lyrics: when we created vocal ensemble versions of these songs, we added non-lexical syllables (e.g., “fu” and “la”) before and after the original lyrics so that each resultant song was at least 30 s long.
## 3 Vocal Ensemble Separation Experiment
### 3.1 Experimental settings
As an example application, we used the jaCappella corpus for a vocal ensemble
separation task. This task aims at separating audio signals of the six voice
parts from their monaural mixture. The sampling frequency was set to 48 kHz.
We chose seven songs for testing, one from each subset. The total duration of the test songs was 350.2 s. The other songs were used for training. Note that the separation was done in a singer-closed setting. We computed average
improvements of scale-invariant source-to-distortion ratio (SI-SDR) [20] as an
evaluation metric. The input SI-SDRs of the test data ranged from -23 to -1.3
dB, which shows the difficulty of the separation task.
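The SI-SDR metric [20] can be sketched as follows; the function names are ours, and this is a minimal NumPy version of the scale-invariant definition rather than the exact evaluation code used in the experiments.

```python
import numpy as np

def si_sdr(reference, estimate, eps=1e-8):
    """Scale-invariant SDR in dB (Le Roux et al., 2019)."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    # Optimal scaling of the reference toward the estimate
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.dot(target, target) + eps)
                           / (np.dot(noise, noise) + eps))

def si_sdr_improvement(reference, mixture, estimate):
    """SI-SDR of the separated estimate minus SI-SDR of the raw mixture."""
    return si_sdr(reference, estimate) - si_sdr(reference, mixture)
```

Because the reference is rescaled before comparison, multiplying an estimate by any positive gain leaves its SI-SDR unchanged, which is why the metric is preferred over plain SNR for separation.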
To increase the amount of training data, we used an intersong mixing method
proposed in [21]. It synthesizes realistic mixtures by mixing audio signals of
voice parts from different songs, adjusting their tempi and pitches appropriately. The augmented data of the 28 training songs were divided into training and validation sets, with total durations of 7650.7 and 2811.7 s, respectively. We observed experimentally that this data
augmentation improved the separation performance. We also used on-the-fly data
augmentations [22]: random cropping of the training audio segments and random
amplification within the range of $[0.25,1.25]$.
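The two on-the-fly augmentations can be sketched in a few lines; `augment` is a hypothetical helper, not part of the released code, and assumes 1-D waveforms.

```python
import numpy as np

def augment(example, segment_len, rng, amp_range=(0.25, 1.25)):
    """On-the-fly augmentation: random crop plus random amplification.

    `example` is a 1-D waveform; the gain range follows the
    [0.25, 1.25] amplification described in the text.
    """
    # Random crop of a fixed-length training segment
    start = rng.integers(0, len(example) - segment_len + 1)
    segment = example[start:start + segment_len]
    # Random gain drawn uniformly from the amplification range
    gain = rng.uniform(*amp_range)
    return gain * segment
```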
### 3.2 Compared methods
We compared three methods: X-UMX [22], an adaptation of DPTNet [23] for vocal
ensemble separation (DPTNet) [5], and MRDLA [24]. X-UMX performs the separation in the spectrogram domain, while DPTNet and MRDLA separate mixtures in the waveform domain. All methods estimate monaural audio signals of the voice parts from monaural mixtures. The first and last layers of the waveform-based methods were modified to have one input channel and six output channels, respectively. We trained all methods on an NVIDIA A100 GPU and selected the trained models at the epoch with the lowest validation loss. The other
settings were as follows:
X-UMX: This method was a baseline in the Music Demixing Challenge 2021 [25]. It was originally proposed for music source separation, and we applied it to vocal ensemble separation. The number of epochs was 1000 and the other
parameters were the same as in the official
implementation222https://github.com/asteroid-
team/asteroid/tree/master/egs/musdb18/X-UMX.
DPTNet: This method is an adaptation of DPTNet for the separation of four-part
vocal ensembles. We set the kernel sizes and strides of the first
convolutional and last transposed convolutional layers to 32 and 16,
respectively. The number of epochs was 600 and the other settings were the
same as in the official
implementation333https://github.com/saurjya/asteroid/tree/4e00daa4c4da77bbee6c0109fa4e2c3611217e72.
Since this method uses permutation invariant training [26], the SI-SDRs of
this method were computed after the best permutations were obtained.
MRDLA [24]: This method is a wavelet-based extension of Wave-U-Net [27]. It
was originally proposed for music source separation and we adapted it to vocal
ensemble separation. We adopted the network architecture with the Haar wavelet, which achieved nearly the best separation performance in [24] and is easy to implement. We applied the following modifications to it. The number of channels in all layers was increased; more specifically, with the notation used in [24], $C^{\text{(e)}}$ was increased from 18 to 32. The kernel sizes of the convolutional layers were increased to 21, and parametric rectified linear units were replaced with Gaussian error linear units. We used a loss function
that combines time-domain and time-frequency-domain loss functions [28]. These
modifications greatly improved the separation performance. The number of
epochs was 1000 and the other settings were the same as X-UMX.
The codes used are publicly available at
https://github.com/TomohikoNakamura/asteroid_jaCappella.
### 3.3 Results
Table 5 shows average SI-SDR improvements per voice part. Compared with X-UMX,
the waveform-based methods (DPTNet and MRDLA) provided higher separation
performance. This result indicates that the waveform-based methods are
suitable for vocal ensemble separation. DPTNet provided the highest average SI-SDRs for Vo, T, and Bs, while MRDLA achieved the highest average SI-SDRs for the other voice parts. Their average performances were similar for most of the voice parts, but DPTNet clearly outperformed MRDLA for Bs. The VP sounds were well separated by all methods, showing that the percussive characteristics of VP
spectrograms make the separation easier. This result is reminiscent of the
fact that drum sounds tend to be well separated in music source separation of
vocals, bass, drums, and other instruments [25].
Some separated audio signals are available at
https://tomohikonakamura.github.io/Tomohiko-Nakamura/demo/jaCappella_sep. Listening to them, we find that the separated signals of X-UMX (particularly for Bs) tend to lack high-frequency components, while those of the waveform-based methods tend to include high-frequency noise. A similar tendency was observed in music source separation [29]. The separated signals of DPTNet included high-frequency noise in all voice parts. MRDLA provided clearer but sometimes choppy separation results. For all methods, the
separated signals of Vo, S, A, and T were often contaminated with the other
voice parts, sometimes with higher energies than the target sources. This
trend is common to all methods, showing the difficulty of this separation task. Compared with these voice parts, the separated signals of Bs and VP had less leakage from the other voice parts. This is likely because Bs lies in a much lower pitch range and the timbre of VP greatly differs from that of the other voice parts, as mentioned in Section 2.3. These results clearly show that the
jaCappella corpus is a challenging resource to develop and test vocal ensemble
separation methods.
Table 5: Average SI-SDR improvements [dB] of separation methods per voice part

Method | Vo | S | A | T | Bs | VP
---|---|---|---|---|---|---
X-UMX | 7.5 | 10.7 | 13.5 | 10.2 | 9.1 | 21.0
DPTNet | 8.9 | 8.5 | 11.9 | 14.9 | 19.7 | 21.9
MRDLA | 8.7 | 11.8 | 14.7 | 11.3 | 10.2 | 22.1
## 4 Conclusion
We constructed the jaCappella corpus that consists of 35 Japanese vocal
ensemble songs and separate audio recordings of individual voice parts. The
created songs are vocal ensemble arrangements of Japanese children’s songs.
Unlike conventional vocal ensemble datasets, the voice parts include Vo and VP, and the arrangements cover various music genres. We confirmed that our corpus has distributions of lexical and non-lexical syllables similar to those of commercial vocal ensemble songs. Experimental evaluation demonstrated that the jaCappella corpus is a challenging resource for developing and testing vocal ensemble separation algorithms. We believe that our corpus will accelerate MIR research on vocal ensembles.
## References
* [1] H. Cuesta, E. Gómez, A. Martorell, and F. Loáiciga, “Analysis of intonation in unison choir singing,” in Proc. Int. Conf. Music Perception Cognition, 2018.
* [2] S. Rosenzweig, H. Cuesta, C. Weiß, F. Scherbaum, E. Gómez, and M. Müller, “Dagstuhl ChoirSet: A multitrack dataset for mir research on choral singing,” Trans. Int. Soc. Music Inf. Retrieval, vol. 3, no. 1, pp. 98–110, 2020.
* [3] H. Cuesta, Data-driven Pitch Content Description of Choral Singing Recordings, Ph.D. thesis, Universitat Pompeu Fabra, 2022.
* [4] H. Cuesta, B. McFee, and E. Gómez, “Multiple f0 estimation in vocal ensembles using convolutional neural networks,” in Proc. Int. Soc. Music Inf. Retrieval Conf., 2020, pp. 302–309.
* [5] S. Sarkar, E. Benetos, and M. Sandler, “Vocal harmony separation using time-domain neural networks,” in Proc. INTERSPEECH, 2021, pp. 3515–3519.
* [6] M. Gover and P. Depalle, “Score-informed source separation of choral music,” in Proc. Int. Soc. Music Inf. Retrieval Conf., 2020.
* [7] D. Petermann, P. Chandna, H. Cuesta, J. Bonada, and E. Gómez, “Deep learning based source separation applied to choir ensembles,” in Proc. Int. Soc. Music Inf. Retrieval Conf., 2020, pp. 733–739.
* [8] A. McLeod, R. Schramm, M. Steedman, and E. Benetos, “Automatic transcription of polyphonic vocal music,” Appl. Sci., vol. 7, no. 12, 1285, 2017.
* [9] P. Chandna, H. Cuesta, D. Petermann, and E. Gómez, “A deep-learning based framework for source separation, analysis, and synthesis of choral ensembles,” Frontiers Signal Process., vol. 2, 2022.
* [10] H. Tamaru, Y. Saito, S. Takamichi, T. Koriyama, and H. Saruwatari, “Generative moment matching network-based neural double-tracking for synthesized and natural singing voices,” IEICE Trans. Inf. Systems, vol. E103.D, no. 3, pp. 639–647, 2020.
* [11] H. Purwins, B. Li, T. Virtanen, J. Schlüter, S.-Y. Chang, and T. Sainath, “Deep learning for audio signal processing,” IEEE J. Selected Topics Signal Process., vol. 13, no. 2, pp. 206–219, 2019.
* [12] J.-P. Briot, G. Hadjeres, and F.-D. Pachet, Deep Learning Techniques for Music Generation, Springer, 2020.
* [13] M. Good, “MusicXML for notation and analysis,” in The Virtual Score: Representation, Retrieval, Restoration, Walter B. H. and Eleanor S.-F., Eds., pp. 113–124. MIT Press, 2001.
* [14] Y. Mizobuchi, D. Kitamura, T. Nakamura, H. Saruwatari, Y. Takahashi, and K. Kondo, “Prior distribution design for music bleeding-sound reduction based on nonnegative matrix factorization,” in Proc. Asia-Pacific Signal Inf. Process. Assoc. Annu. Summit Conf., 2021, pp. 651–658.
* [15] “A cappella musical scores,” [Online]. Available: https://elevato-music.com/?mode=cate&cbid=1727017&csid=0, Accessed Oct. 18, 2022, (in Japanese).
* [16] Nobara-sya, Ed., Syoka: Meiji, Taisho, and Syowa, Nobara-sya, 2009, (in Japanese).
* [17] Nobara-sya and Shoji Kubo, Eds., Doyo, Nobara-sya, revised edition, 2010, (in Japanese).
* [18] M. Müller, Fundamentals of Music Processing: Audio, Analysis, Algorithms, Applications, Springer, first edition, 2015.
* [19] Y. Hono, K. Hashimoto, K. Oura, Y. Nankaku, and K. Tokuda, “Sinsy: A deep neural network-based singing voice synthesis system,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 29, pp. 2803–2815, 2021.
* [20] J. Le Roux, S. Wisdom, H. Erdogan, and J. R. Hershey, “SDR – half-baked or well-done?,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2019, pp. 626–630.
* [21] A. Défossez, “Hybrid spectrogram and waveform source separation,” in Proc. Music Demixing Challenge Workshop, 2021.
* [22] R. Sawata, S. Uhlich, S. Takahashi, and Y. Mitsufuji, “All for one and one for all: Improving music separation by bridging networks,” arXiv preprint arXiv:2010.04228, 2020.
* [23] J. Chen, Q. Mao, and D. Liu, “Dual-path transformer network: Direct context-aware modeling for end-to-end monaural speech separation,” in Proc. INTERSPEECH, Oct. 2020, pp. 2642–2646.
* [24] T. Nakamura, S. Kozuka, and H. Saruwatari, “Time-domain audio source separation with neural networks based on multiresolution analysis,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 29, pp. 1687–1701, 2021.
* [25] Y. Mitsufuji, G. Fabbro, S. Uhlich, F.-R. Stöter, A. Défossez, M. Kim, W. Choi, C.-Y. Yu, and K.-W. Cheuk, “Music demixing challenge 2021,” Frontiers Signal Process., vol. 1, 2022.
* [26] D. Yu, M. Kolbæk, Z.-H. Tan, and J. Jensen, “Permutation invariant training of deep models for speaker-independent multi-talker speech separation,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2017, pp. 241–245.
* [27] D. Stoller, S. Ewert, and S. Dixon, “Wave-U-Net: A multi-scale neural network for end-to-end audio source separation,” in Proc. Int. Soc. Music Inf. Retrieval Conf., Sept. 2018, pp. 334–340.
* [28] Z. Kong, W. Ping, A. Dantrey, and B. Catanzaro, “Speech denoising in the waveform domain with self-attention,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2022, pp. 7867–7871.
* [29] N. Schaffer, B. Cogan, E. Manilow, M. Morrison, P. Seetharaman, and B. Pardo, “Music separation enhancement with generative modeling,” arXiv preprint arXiv:2208.12387, 2022.
# Diverse Multi-Answer Retrieval with Determinantal Point Processes
Poojitha Nandigam∗, Nikhil Rayaprolu∗, Manish Shrivastava
Language Technologies Research Centre (LTRC)
International Institute of Information Technology, Hyderabad, India
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Often questions provided to open-domain question answering systems are
ambiguous. Traditional QA systems that provide a single answer are incapable
of answering ambiguous questions since the question may be interpreted in
several ways and may have multiple distinct answers. In this paper, we address
multi-answer retrieval, which entails retrieving passages that capture the majority of the diverse answers to a question. We propose a re-ranking-based approach using Determinantal Point Processes (DPPs) with BERT-based kernels. Our method jointly considers query-passage relevance and passage-passage
correlation to retrieve passages that are both query-relevant and diverse. Results demonstrate that our re-ranking technique outperforms the state-of-the-art method on the AmbigQA dataset.
∗ Equal contribution.
## 1 Introduction
The objective of open-domain question answering is to provide answers to
queries utilising a large collection of documents from the World Wide Web,
Wikipedia, etc. More than 50% of the questions in a widely used open-domain QA dataset (Natural Questions; Kwiatkowski et al. (2019)) are ambiguous (Min et al. (2020)). Ambiguous questions may be interpreted in a number of ways and, as a result, require multiple answers. In this paper, we concentrate on questions with multiple distinct answers.
Open domain question-answering systems are designed to generate answers from
several data sources. Since similar information can be present across multiple
data sources, this introduces a significant amount of redundancy. Traditional open-domain QA systems (Chen et al. (2017)) include a retriever, which retrieves passages relevant to the question. A passage retriever is primarily concerned with retrieving passages that are relevant to the query, and it does not address redundancy among the passages during retrieval. To be able to produce
diverse answers to the question, the passages retrieved must be both relevant
to the question and distinct from one another. After the retrieval stage, we
introduce a novel re-ranking approach to handle redundant passages. As a
result, the re-ranked passages would capture most of the diverse answers to
the question. In this paper, we investigate the multi-answer retrieval task,
which entails retrieving passages that can cover the distinct answers.
Re-ranking methods have previously been employed to improve question answering accuracy significantly (Wang et al. (2019); Nogueira and Cho (2019); Min et al. (2021); Clark and Gardner (2017)). Min et al. (2021) tackle diverse multi-answer retrieval by proposing a re-ranker based on an auto-regressive framework in which each passage selected depends on the passages chosen at previous time steps.
Determinantal Point Processes (DPPs) (Kulesza and Taskar (2012)) are probabilistic models that are effective at identifying diverse subsets of elements from a collection while preserving quality. DPP methods have proven effective in natural language processing tasks that require diversity. Cho et al. (2019), Li et al. (2019), Cho et al. (2020), and Sharghi et al. (2017) have used DPPs to perform summarization by choosing items that are salient but also diverse for inclusion in the summaries. In this paper, we propose an unsupervised re-ranking technique for multi-answer retrieval using Determinantal Point Processes with BERT-modeled kernels.
Our contributions can be summarized as follows: 1) We propose a re-ranking method based on determinantal point processes that focuses on diverse passage retrieval. 2) Since our approach is unsupervised, our method does not require a large amount of data, unlike prior re-ranking methods (Min et al. (2021)); instead, we rely on a DPP to identify the passages most relevant to the question that are also distinct from one another. 3) We demonstrate that our technique outperforms the state-of-the-art method on the AmbigQA dataset using $\mathrm{MRECALL}$ @ $k$ metrics.
## 2 Related Work
Many open-domain question answering systems (Chen et al. (2017); Yang et al. (2019); Izacard and Grave (2021); Guu et al. (2020); Lee et al. (2019)) adopt the retriever-reader approach: they retrieve the relevant documents and then apply neural techniques to predict the answer. The retriever-reader approach was first proposed by Chen et al. (2017). DrQA (Chen et al. (2017)) uses Wikipedia as the knowledge source and employs a sparse retrieval method using TF-IDF and a recurrent neural network to identify the answer spans. Yang et al. (2019) adopt the Anserini retriever (Yang et al. (2017)) with BM25 as the ranking function and a BERT model (Devlin et al. (2018)) as the reader. Sparse retrieval-based methods, such as TF-IDF and BM25, face challenges when retrieving relevant passages that do not match the question’s exact terms. Dense retrieval-based approaches, on the other hand, overcome this problem by mapping each word into a vector space in which words with similar meanings tend to be closer together. ORQA (Lee et al. (2019)) and DPR (Karpukhin et al. (2020)) employ BERT-based question and passage encoders and compute a relevance score. Using this relevance score, the retriever retrieves the most relevant documents from the corpus.
## 3 Determinantal Point Processes for Re-ranking
The re-ranker acts as a filter that selects a limited number of passages to be used as input for generating answers to the questions. We formulate the task of passage re-ranking as a subset selection problem. Our objective is to choose a subset of passages $Y$ of size $k$ from the ground set $\mathcal{Y}$ comprising $\mathsf{N}$ passages that covers all of the answers to a given question $q$. A DPP models a distribution over all subsets of the ground set $\mathcal{Y}$, jointly considering quality and diversity. A subset $Y$ is drawn according to the probability distribution $P$.
$P(Y;L)\varpropto\det(L_{Y})$ (1) $P(Y;L)=\frac{\det(L_{Y})}{\det(L+I)}$ (2)
where $I$ is the identity matrix, $L\in\mathbb{R}^{\mathsf{N}\times\mathsf{N}}$ is a positive semi-definite matrix referred to as the $L$-ensemble, $\det(\cdot)$ denotes the determinant of a matrix, and $L_{Y}$ is the submatrix of $L$ indexed by the items in $Y$. The $L$ matrix jointly considers query-passage relevance as well as passage-passage correlation through Eq. 3.
$L_{ij}=Q(i,q)\cdot S(i,j)\cdot Q(j,q)$ (3)
A DPP focuses on two measures, quality and similarity (Fig. 1). The quality score $Q(i,q)$ measures how salient passage $i$ is and whether it contains an answer to the question $q$. The similarity score $S(i,j)$ is computed between two passages $i$ and $j$ to incorporate diversity among the passages. A DPP assigns a probability to a set $Y$ proportional to the determinant of the $L$-ensemble, which may be interpreted geometrically as the volume of the parallelepiped spanned by the quality and similarity measures (Kulesza and Taskar (2012)). A diverse passage subset occupies more volume than a subset of similar passages, so a DPP assigns higher probability to diverse and relevant passages than to the most relevant but mutually similar passages. If the passages are relevant and diverse, they can cover multiple distinct answers to the question.
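A minimal NumPy sketch of Eqs. 1-3 may help make this geometry concrete; `build_L` and `dpp_prob` are illustrative names, and the toy scores below are made up.

```python
import numpy as np

def build_L(quality, similarity):
    """L_ij = Q(i,q) * S(i,j) * Q(j,q) (Eq. 3); `quality` is a length-N
    vector of query-passage scores, `similarity` an N x N passage matrix."""
    q = np.asarray(quality, dtype=np.float64)
    return q[:, None] * np.asarray(similarity, dtype=np.float64) * q[None, :]

def dpp_prob(L, subset):
    """P(Y; L) = det(L_Y) / det(L + I) (Eq. 2)."""
    idx = np.asarray(subset)
    L_Y = L[np.ix_(idx, idx)]
    return np.linalg.det(L_Y) / np.linalg.det(L + np.eye(len(L)))
```

With two near-duplicate passages, $\det(L_{Y})$ for that pair is close to zero, so the DPP assigns the redundant pair far lower probability than a diverse pair of equal quality.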
Figure 1: An overview of the proposed re-ranking method using DPP. A
similarity score between the passages and a quality score between the question
and passage are computed. These two scores are utilised to construct the DPP
kernel matrix.
### 3.1 BERT for Similarity matrix
To compute the similarity scores, we use a pretrained BERT model (Devlin et
al. (2018); Reimers and Gurevych (2019)) to generate embeddings for every
passage. The model takes the passage as input and produces a 768 dimensional
dense embedding. We use these embeddings to calculate the cosine similarity of
all passages and compute a similarity matrix
$S\in\mathbb{R}^{\mathsf{N}\times\mathsf{N}}$ for the whole passage set. All the values in the similarity matrix lie in the range $[0,1]$. If passages $i$ and $j$ are similar, the similarity value $S(i,j)$ lies closer to $1$; if they are distinct, the value lies closer to $0$; and if $i=j$, $S(i,j)$ equals $1$.
$S(i,j)=cosine\\_sim(BERT_{A}(i),BERT_{A}(j))$ (4)
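Given precomputed passage embeddings, the similarity matrix of Eq. 4 reduces to a normalized matrix product. This sketch assumes the embeddings have already been extracted (e.g., the 768-dimensional BERT vectors described above); the function name is ours.

```python
import numpy as np

def similarity_matrix(embeddings):
    """Pairwise cosine similarity of passage embeddings (Eq. 4).

    `embeddings` is an N x d array; normalizing rows to unit length
    turns the dot product X @ X.T into cosine similarity."""
    X = np.asarray(embeddings, dtype=np.float64)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return X @ X.T
```

Note that cosine similarity of arbitrary vectors can be negative; for the sentence embeddings used here the values empirically fall in $[0,1]$, as stated in the text.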
### 3.2 BERT for Quality matrix
We use a pretrained BERT model trained on MS MARCO (Nguyen et al. (2016)) to compute the quality matrix. The model takes in a query and a passage and generates a quality score. A higher quality score indicates that the passage is more relevant to the query and therefore more likely to answer it. Unlike for the similarity matrix, we do not compute cosine similarity over the model’s outputs to produce a score; instead, we use a BERT encoder that concatenates the query and passage and generates a score. The quality matrix $Q\in\mathbb{R}^{\mathsf{N}\times\mathsf{N}}$ is computed by multiplying the vector of scores ($N\times 1$) with its transpose, resulting in an $N\times N$ matrix. These quality scores are then normalized to lie in $[0,1]$.
$Q(i,j)=Norm(BERT_{B}([i;j]))$ (5)
### 3.3 Sampling
Traditional DPP sampling algorithms have high run-time complexity when the $L$ matrix is large. We apply an efficient sampling technique, BFGMInference (Li et al. (2019); Chen et al. (2018)). BFGMInference approximates a greedy approach that selects the passage maximizing $\det(L_{Y})$ and adds it to the passage subset.
$f(Y)=\log\det\left(L_{Y}\right),\qquad k=\underset{i\in\mathcal{Y}\backslash Y}{\arg\max}\,f(Y\cup\{i\})-f(Y)$ (6)
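The greedy rule of Eq. 6 can be written naively as follows. This is the textbook version, recomputing the log-determinant from scratch at each step for clarity; the fast greedy MAP algorithm of Chen et al. (2018) reaches the same selections with incremental updates. Function names are ours.

```python
import numpy as np

def _logdet(L, subset):
    """log det(L_Y); the determinant of the empty matrix is 1."""
    if not subset:
        return 0.0
    sign, logdet = np.linalg.slogdet(L[np.ix_(subset, subset)])
    return logdet if sign > 0 else -np.inf

def greedy_map(L, k):
    """Naive greedy MAP inference for a DPP (Eq. 6): at each step,
    add the passage that most increases f(Y) = log det(L_Y)."""
    N = len(L)
    selected = []
    for _ in range(k):
        current = _logdet(L, selected)
        best, best_gain = None, -np.inf
        for i in range(N):
            if i in selected:
                continue
            gain = _logdet(L, selected + [i]) - current
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

On a toy kernel with two near-duplicate passages, the second greedy pick skips the duplicate because adding it would make $\det(L_Y)$ collapse toward zero.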
Models | Top 5 (AmbigQA-Dev) | Top 10 (AmbigQA-Dev)
---|---|---
DPR+ (Min et al. (2021)) | 55.2/36.3 | 59.3/39.6
DPR+ + Nogueira and Cho (2019) | 63.4/43.1 | 65.8/46.4
JPR (Min et al. (2021)) | 64.8/45.2 | 67.1/48.2
QRR | 62.0/42.3 | 70.8/57.6
DPP-R | 66.9/53.5 | 72.8/58.8
Table 1: Performance of various models on the AmbigQA dataset. Each entry shows the $\mathrm{MRECALL}$ @ $k$ metrics for single-answer retrieval and multi-answer retrieval, respectively.
## 4 Experiments
In this section, we discuss the passage retrieval method, the dataset used in our experiments, the evaluation metric, and the results of our experiments.
### 4.1 Passage retrieval
Wikipedia is utilised as the corpus for retrieving passages for the questions.
Each Wikipedia article is broken into multiple passages containing the same
number of words. We retrieve query-relevant passages from Wikipedia using the
Dense Passage Retriever (DPR) (Karpukhin et al. (2020); Lin et al. (2021)).
DPR computes encodings for all passages extracted from the Wikipedia corpus
and builds an index. The inner product of the query and passage encodings is
used to determine the similarity scores between them. The passages with the highest scores are the most relevant to the query, and these passages are subsequently fed into the re-ranker as input.
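The DPR scoring rule described above amounts to a maximum inner product search. A toy version without the FAISS index, with made-up embeddings and our own function name:

```python
import numpy as np

def retrieve_top_k(query_emb, passage_embs, k):
    """DPR-style retrieval: rank passages by inner product with the query.

    `passage_embs` is the precomputed N x d index; real systems use an
    approximate-search index, but the scoring rule is just a dot product."""
    scores = passage_embs @ query_emb
    top = np.argsort(-scores)[:k]  # indices of the k highest scores
    return top, scores[top]
```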
### 4.2 Dataset
We evaluated our method on an open-domain question-answering dataset AmbigQA
(Min et al. (2020)), which contains multiple-answer questions. The dataset was
created from an anonymised collection of Google search queries submitted by
users seeking information on different subjects. It consists of 14,042
question-answer pairs derived from the Natural questions dataset (Kwiatkowski
et al. (2019)) and is split into train, validation, and test sets. The train set consists of 10,036 question-answer pairs, the validation set of 2,002 examples, and the test set of 4,042 examples.
### 4.3 Evaluation metric
$\mathrm{MRECALL}$ @ $k$ (Min et al. (2021)) is used to evaluate the re-ranking of passages for questions with diverse answers. Under this metric, if a query has $n$ answers, the $k$ retrieved passages must cover the answers: if $n\le k$, all $n$ answers must be covered; if $n>k$, the retrieved passages must contain at least $k$ answers. A retrieval is deemed successful if the retrieved passages include all, or at least $k$, of the answers to the query.
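The success criterion can be sketched as follows; matching here is naive substring containment in place of the answer normalization used by the official evaluation, and the function name is ours.

```python
def mrecall_at_k(retrieved_passages, answers, k):
    """MRECALL@k success check (Min et al., 2021), as described above:
    with n gold answers, the top-k passages must cover all n answers if
    n <= k, and at least k answers otherwise. Returns 1 or 0."""
    top_k = retrieved_passages[:k]
    covered = sum(
        any(ans.lower() in p.lower() for p in top_k) for ans in answers
    )
    return int(covered >= min(len(answers), k))
```

The reported metric is this indicator averaged over all questions in the evaluation set.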
### 4.4 Results
We compare our technique to a few additional baselines, all of which were
assessed using the $\mathrm{MRECALL}$ @ $k$ metric on the AmbigQA dataset.
* •
DPR+ Min et al. (2021) integrates REALM (Guu et al. (2020)) with DPR
(Karpukhin et al. (2020)). As described in Section 4.1, DPR is a dense
retrieval based technique that utilizes the FAISS library to retrieve the
relevant documents. Encoders for the query and passage are initialized using
REALM and the DPR training method is followed.
* •
DPR+ \+ Nogueira and Cho (2019) employs DPR+ for the first stage of retrieval
and the re-ranking method in Nogueira and Cho (2019) is applied on the
retrieved passages.
* •
JPR Min et al. (2021) employs DPR+ as the initial ranker and an auto-
regressive framework is adopted as a re-ranker to generate diverse passages.
* •
Query Relevance Re-ranking (QRR) first calculates the quality score of each passage (described in Section 3.2) and then sorts the passages by these scores to pick the top-$k$ passages. Similarity among the passages is not considered.
* •
DPP-R employs our method described in Section 3 to retrieve highly diverse and relevant passages.
We evaluate the performance of diverse multi-passage retrieval using the $\mathrm{MRECALL}$ @ $k$ measure described in Section 4.3. Evaluation on the AmbigQA dataset demonstrates that our approach outperforms existing re-ranking techniques. As shown in Table 1, our technique requires no human annotations for multi-passage re-ranking while outperforming existing methods. DPP is designed to select a subset of high-quality and diverse passages, which contributes to the success of our method on this task. The experiments demonstrate that the DPP-based technique achieves promising results for retrieving passages containing diverse answers.
## 5 Discussion
Impact on QA system performance: An open-domain question answering pipeline consists of three stages: 1) retrieval, 2) re-ranking, and 3) answer extraction. Improvements in any of these stages significantly improve the overall system’s ability to answer a question. Nogueira and Cho (2019) and Min et al. (2021) have shown that the use of a re-ranker leads to end-to-end QA improvements. Based on the results presented in Table 1, the DPP method enhances re-ranking for both single- and multi-answer questions. We believe that this improvement in re-ranking will also improve the overall performance of the end-to-end QA system.
Impact of diversity: DPP-R and JPR retrieve diverse passages using a DPP and an auto-regressive framework, respectively. Other approaches, such as QRR, retrieve only passages that are relevant to the query and do not tackle passage redundancy. We observe that our DPP-based approach performs better than the QRR method. To re-rank, QRR simply considers how relevant a passage is to the query and retrieves the top-$k$ passages with the highest relevance scores. In contrast, DPP-R takes into account both how relevant each passage is to the query and how similar the passages are to each other, eliminating redundant passages and thus increasing diversity among the retrieved passages. DPP-R and JPR outperform the other methods that do not emphasise diversity in multi-answer retrieval. For single-answer retrieval, DPP-R and JPR also fare better than the other methods, with the minor exception that QRR beats JPR in top-10 re-ranking. This demonstrates that diversity is an important aspect to consider during the re-ranking stage.
## 6 Conclusion
In this paper, we propose a DPP-based approach to improve the diversity of the retrieved passages. We compare our method to the state-of-the-art method and outperform it by $3\%$ (top 5) and $8\%$ (top 10) for single-answer questions, and by $18\%$ (top 5) and $21\%$ (top 10) for multi-answer retrieval on the AmbigQA dataset.
## References
* Chen et al. (2017) Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In _ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)_ , volume 1, pages 1870–1879.
* Chen et al. (2018) Laming Chen, Guoxin Zhang, and Eric Zhou. 2018. Fast greedy map inference for determinantal point process to improve recommendation diversity. _Advances in Neural Information Processing Systems_ , 31.
* Cho et al. (2020) Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. 2020. Improving the similarity measure of determinantal point processes for extractive multi-document summarization. _ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference_ , pages 1027–1038.
* Cho et al. (2019) Sangwoo Cho, Chen Li, Dong Yu, Hassan Foroosh, and Fei Liu. 2019. Multi-document summarization with determinantal point processes and contextualized representations. _arXiv_.
* Clark and Gardner (2017) Christopher Clark and Matt Gardner. 2017. Simple and effective multi-paragraph reading comprehension. _arXiv preprint arXiv:1710.10723_.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming Wei Chang. 2020. REALM: Retrieval-Augmented language model pre-training. In _37th International Conference on Machine Learning, ICML 2020_ , volume PartF16814, pages 3887–3896.
* Izacard and Grave (2021) Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In _EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference_ , pages 874–880.
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
* Kulesza and Taskar (2012) Alex Kulesza and Ben Taskar. 2012. Determinantal point processes for machine learning. _Foundations and Trends in Machine Learning_.
* Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. _Transactions of the Association for Computational Linguistics_.
* Lee et al. (2019) Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. _arXiv preprint arXiv:1906.00300_.
* Li et al. (2019) Lei Li, Wei Liu, Marina Litvak, Natalia Vanetik, and Zuying Huang. 2019. In conclusion not repetition: Comprehensive abstractive summarization with diversified attention based on determinantal point processes. _CoNLL 2019 - 23rd Conference on Computational Natural Language Learning, Proceedings of the Conference_ , pages 822–832.
* Lin et al. (2021) Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: An easy-to-use python toolkit to support replicable ir research with sparse and dense representations. _arXiv preprint arXiv:2102.10073_.
* Min et al. (2021) Sewon Min, Kenton Lee, Ming-Wei Chang, Kristina Toutanova, and Hannaneh Hajishirzi. 2021. Joint passage ranking for diverse multi-answer retrieval. _arXiv preprint arXiv:2104.08445_.
* Min et al. (2020) Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
* Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. In _CoCo@ NIPS_.
* Nogueira and Cho (2019) Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. _arXiv preprint arXiv:1901.04085_.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
* Sharghi et al. (2017) Aidean Sharghi, Jacob S Laurel, and Boqing Gong. 2017. Query-focused video summarization: Dataset, evaluation, and a memory network based approach. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 4788–4797.
* Wang et al. (2019) Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage bert: A globally normalized bert model for open-domain question answering. _arXiv preprint arXiv:1908.08167_.
* Yang et al. (2017) Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In _Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pages 1253–1256.
* Yang et al. (2019) Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In _NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Demonstrations Session_ , pages 72–77.
# PERISTOLE: PackagE that geneRates tIme delay plotS caused by graviTatiOnaL
lEnsing
T. S. Sachin Venkatesh, Delhi Technological University, Delhi 110042, India
Gaurav Pundir, Indian Institute of Science Education and Research, Pune, Maharashtra 411008, India
###### Abstract
We present PERISTOLE to study the various time delays associated with the
pulsar rotation and other general relativistic aspects of binary pulsars. It
is made available as an open-source python package which takes some parameters
of the double pulsar system as input and outputs the rotational and
latitudinal lensing delays along with the geometric and Shapiro delays that
arise due to gravitational lensing. The package is intended to provide a way
to quickly analyse and compare variations of the same system and to quantify
the effect that different parameters have on it. In this research note, we
briefly describe the motivation behind PERISTOLE and showcase its capabilities
using the only double pulsar system ever found, J0737-3039.
Astronomy software (1855), Binary pulsars (153), Astroinformatics (78)
Software: numpy (Harris et al., 2020), matplotlib (Hunter, 2007)
## Introduction
### Binary Pulsars
Binary pulsars show periodic variations in their pulse arrival times, which
warrants the use of relativistic corrections in their timing models,
especially for close binaries, which include double pulsar systems,
Pulsar-White Dwarf systems and Pulsar-Neutron Star systems, generally noted as
binary Pulsar-body systems from now on. Such systems are excellent
astrophysical laboratories for testing extreme general relativistic effects
and are prime candidates for ushering in the next stage of gravitational wave
physics (Damour & Taylor, 1992). But these double pulsars and Pulsar-NS
systems are an elusive breed: their rarity comes from the condition that they
must not only have survived multiple mass transfer stages and the accretion
leading to two late-stage supernovae, but must also satisfy the nomenclature
condition that neutron stars are classified as pulsars only if their radio
beams point towards us. On top of this, they must have near edge-on orbit
configurations with respect to the observer for proper analysis, which further
shrinks the sample space available to us for a large-scale investigation.
The orbital motion of a pulsar gives rise to special relativistic aberration
affecting the emission direction with respect to its pulsar spin axis
(Komesaroff, 1970). The longitudinal change gives rise to the longitudinal
delay which shifts the arrival time of the pulse while the latitudinal change
results in the distortion of the pulse profile. The bending of the radio beam
also introduces rotational delay, accentuated at the point of superior
conjunction. Additionally, this light bending also splits the pulse profile
into two gravitationally lensed images of the source which are defined as
dominant and sub-dominant images.
PERISTOLE aims to act as a mock analysis and graphing tool with which the user
can simulate different binary Pulsar-body systems to study the aforementioned
delays and determine the role that the system parameters play in amplifying or
attenuating these delays.
### PSR J0737-3039
There have been various studies, supported by observational evidence, on the
different types of delays associated with Double Neutron Star (DNS) systems,
but until now only one study of similar standing exists for double pulsar
systems (Lorimer, 2005), simply because only one such system has ever been
found, PSR J0737-3039. We now list some of the system parameters relevant to
this study. The system’s primary pulsar, ‘A’, has a mass of 1.337$M_{\odot}$
and the companion, ‘B’, has a mass of 1.25$M_{\odot}$. ‘A’ is a millisecond
pulsar with a period of 22.7 ms, while the companion has a period of 2.773 s.
The orbital semi-major axis of the system is 8.784 $\times$ $10^{8}$ m, with
an eccentricity of 0.0878 and a longitude of periastron of 74°. This system
has a near edge-on configuration, which aids in the accurate analysis of the
different delays associated with the system.
## Functionality
Figure 1: Amplification factor, combined geometric and gravitational delay,
rotational lensing and latitudinal lensing delay plots for both dominant
(left) and subdominant (right) images of ‘A’ at superior conjunction
(McLaughlin et al., 2004)
PERISTOLE offers the user the ability to plot up to six different graphs
related to a binary pulsar-body system: the magnification factor of the pulse
due to gravitational lensing; the geometric, Shapiro, and combined geometric
and gravitational delays; and the rotational and latitudinal lensing delays.
* •
Amplification/Magnification factor
$A=\frac{\left(\frac{R}{R_{E}}\right)^{2}+2}{\left(\frac{R}{R_{E}}\right)\sqrt{\left(\frac{R}{R_{E}}\right)^{2}+4}}$
(1)
* •
Geometric delay
$(\Delta t)_{\mathrm{geometric}}=\frac{R_{g}}{c}\left(\frac{\Delta
R_{\pm}}{R_{E}}\right)^{2}$ (2)
* •
Shapiro delay
$(\Delta t)_{\mathrm{gravitational}}=-\frac{R_{g}}{c}\ln\left[\frac{\sqrt{\left(r\sin i\sin\psi\right)^{2}+R_{\pm}^{2}}-r\sin i\sin\psi}{a\left(1-e^{2}\right)}\right]$ (3)
* •
Rotational lensing delay
$(\Delta t)_{rotational}=-\frac{\Delta
R_{\pm}}{R}\frac{r}{a_{\|}}\frac{\sin\eta\cos\psi-\cos
i\cos\eta\sin\psi}{\Omega_{p}\sin\zeta}$ (4)
* •
Latitudinal lensing delay
$(\Delta t)_{latitudinal}=\frac{\Delta
R_{\pm}}{R}\frac{r}{a_{\|}}\frac{\cos\eta\cos\psi+\cos
i\sin\eta\sin\psi}{\Omega_{p}\sin\zeta\tan\chi_{0}}$ (5)
where $i$ is the orbital inclination, $\psi$ is the orbital phase, $a$ is the
orbital semimajor axis, $R_{E}$ is the Einstein radius and $e$ is the
eccentricity of the system. For an in-depth study of the formulae, please
refer to Lai & Rafikov (2005) and Rafikov & Lai (2006); for more information
about the user-defined parameters, please refer to the documentation. To
demonstrate the capabilities of PERISTOLE, we recreate the amplification,
combined geometric and Shapiro delay, rotational lensing delay and latitudinal
lensing delay plots for both the dominant and sub-dominant images of PSR
J0737-3039A as a function of the orbital phase from Rafikov & Lai (2006).
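As a quick sanity check on Eq. (1), the total magnification can be evaluated directly; the snippet below is a plain transcription of the formula, not the PERISTOLE API:

```python
import math

def amplification(x):
    """Total lensing amplification of Eq. (1), with x = R / R_E, the
    projected source offset in units of the Einstein radius."""
    return (x**2 + 2.0) / (x * math.sqrt(x**2 + 4.0))

# The amplification diverges as the source approaches perfect alignment
# (x -> 0) and tends to 1 far from superior conjunction (x >> 1).
for x in (0.1, 1.0, 10.0):
    print(f"R/R_E = {x:4}: A = {amplification(x):.4f}")
```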
## Installation, Documentation, and Future
PERISTOLE is made available to the open source community to encourage
functionalities, QoL improvements and better science. It can be found on
Github111https://github.com/centarsirius/peristole and is archived in Zenodo
Venkatesh & Pundir (2022). To aid seamless installation of the package, it is
available on PyPi222https://pypi.org/project/peristole/ and can be installed
via the pip command. The Github page also contains a jupyter notebook hosted
on google colab to allow users to perform low volume analysis or quickly check
out the package before installing. PERISTOLE can output publication-ready
plots and further documentation is available at
http://peristole.readthedocs.io. We hope to add more functionalities in the
future such as analyzing the effects of the system on its immediate
environment, reduce the number of parameters required to graph the time delays
and automatically import system parameters from online archives like MAST and
SIMBAD based on the system identifier.
The authors would like to thank the organizers of the Code/Astro 2022 workshop
(https://semaphorep.github.io/codeastro/) for imparting the knowledge required
to develop astronomy-related software, and the Code/Astro community for their
constant support throughout the development of this package.
## References
* Damour & Taylor (1992) Damour, T., & Taylor, J. H. 1992, Phys. Rev. D, 45, 1840, doi: 10.1103/PhysRevD.45.1840
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Komesaroff (1970) Komesaroff, M. M. 1970, Nature, 225, 612, doi: 10.1038/225612a0
* Lai & Rafikov (2005) Lai, D., & Rafikov, R. R. 2005, ApJ, 621, L41, doi: 10.1086/429146
* Lorimer (2005) Lorimer, D. R. 2005, Living Reviews in Relativity, 8, 7, doi: 10.12942/lrr-2005-7
* McLaughlin et al. (2004) McLaughlin, M. A., Lyne, A. G., Lorimer, D. R., et al. 2004, ApJ, 616, L131, doi: 10.1086/426813
* Rafikov & Lai (2006) Rafikov, R. R., & Lai, D. 2006, ApJ, 641, 438, doi: 10.1086/500346
* Venkatesh & Pundir (2022) Venkatesh, S., & Pundir, G. 2022, centarsirius/peristole: v1.1.0, Zenodo, doi: 10.5281/zenodo.6744000
# Towards clarifying the possibility of observation of the LHCb hidden-charm
pentaquarks
$P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$
in near-threshold charmonium photoproduction off protons and nuclei
E. Ya. Paryev
Institute for Nuclear Research, Russian Academy of Sciences,
Moscow 117312, Russia
###### Abstract
We study the near-threshold $J/\psi$ meson photoproduction from protons and
nuclei by considering incoherent direct non-resonant (${\gamma}p\to{J/\psi}p$,
${\gamma}n\to{J/\psi}n$) and two-step resonant (${\gamma}p\to
P_{ci}^{+}\to{J/\psi}p$, ${\gamma}n\to P_{ci}^{0}\to{J/\psi}n$, $i=1$, 2, 3,
4; $P_{c1}^{+,0}=P_{c}^{+,0}(4312)$, $P_{c2}^{+,0}=P_{c}^{+,0}(4337)$,
$P_{c3}^{+,0}=P_{c}^{+,0}(4440)$, $P_{c4}^{+,0}=P_{c}^{+,0}(4457)$) charmonium
production processes. We calculate the absolute excitation functions, energy
and momentum distributions for the non-resonant, resonant and for the combined
(non-resonant plus resonant) production of $J/\psi$ mesons on protons as well
as, using the nuclear spectral function approach, on carbon and tungsten
target nuclei at near-threshold incident photon energies by assuming the spin-
parity assignments of the hidden-charm resonances $P_{c}^{+,0}(4312)$,
$P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$ and $P_{c}^{+,0}(4457)$ as
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$
within three different realistic scenarios for the branching ratios of their
decays to the ${J/\psi}p$ and ${J/\psi}n$ modes (0.25, 0.5 and 1%). We show
that it will be very hard to measure the $P_{ci}^{+}$ pentaquark states
through the scan of the $J/\psi$ total photoproduction cross section on a
proton target in the near-threshold energy region around the resonant photon
energies of 9.44, 9.554, 10.04 and 10.12 GeV if these branching ratios are
$\sim$ 1% or less. We also demonstrate that at these photon beam energies the
combined $J/\psi$ energy and momentum distributions considered reveal distinct
sensitivity to the above scenarios at “low” $J/\psi$ total energies and
momenta, which implies that they may be an important tool to
provide further evidence for the existence of the pentaquark $P_{ci}^{+}$ and
$P_{ci}^{0}$ resonances and to get valuable information on their decay rates
to the ${J/\psi}p$ and ${J/\psi}n$ final states. The measurements of these
distributions could be performed in the future at the 12 GeV CEBAF facility.
## 1\. Introduction
The study of the exotic hadronic states – the hidden-charm pentaquark
resonances – has received considerable interest in recent years and is one of
the most exciting topics in nuclear and hadronic physics today, after the
discovery by the LHCb Collaboration of the pentaquark states $P_{c}^{+}(4380)$
and $P_{c}^{+}(4450)$ in the ${J/\psi}p$ invariant mass spectrum of the
$\Lambda^{0}_{b}\to K^{-}({J/\psi}p)$ decays [1] and, especially, after the
Collaboration’s observation of three new narrow resonances
$P_{c}^{+}(4312)$, $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$ in these decays
[2], based on additional collected data and on an improved selection strategy,
instead of initially claimed $P_{c}^{+}(4380)$ and $P_{c}^{+}(4450)$ states.
Very recently, the LHCb Collaboration discovered a new narrow hidden-charm
pentaquark denoted as $P_{c}^{+}(4337)$ in the invariant mass spectrum of the
${J/\psi}p$ in the $B_{s}^{0}\to{J/\psi}p{\bar{p}}$ decays [3]. On the other
hand, the search for the LHCb pentaquarks $P_{c}^{+}(4312)$, $P_{c}^{+}(4440)$
and $P_{c}^{+}(4457)$ by the GlueX Collaboration in Hall D at JLab, through a
scan of the cross section of the elastic reaction ${\gamma}p\to{J/\psi}p$ from
the threshold of 8.21 GeV up to an incident photon energy of
$E_{\gamma}=11.8$ GeV, gave no evidence for them with the present statistics
and set model-dependent upper limits of several percent on the branching
ratios of the $P_{c}^{+}(4312)\to{J/\psi}p$, $P_{c}^{+}(4440)\to{J/\psi}p$ and
$P_{c}^{+}(4457)\to{J/\psi}p$ decays [4]. The preliminary results from a factor of 10 more data on
the $J/\psi$ photoproduction on a hydrogen target, collected in the Hall C
JLab E12-16-007 experiment (the so-called $J/\psi$-007 experiment), also show
no $P_{c}^{+}$ signal [5]. In this experiment the $e^{+}e^{-}$ pairs from the
$J/\psi$ decays were detected in coincidence using the two high momentum
spectrometers of Hall C: the Super High Momentum Spectrometer (SHMS) and High
Momentum Spectrometer (HMS) for the electron and the positron, respectively.
In recent publications [6] and [7] the role, respectively, of the LHCb
pentaquarks $P_{c}^{+}(4450)$ and $P_{c}^{+}(4312)$, $P_{c}^{+}(4440)$,
$P_{c}^{+}(4457)$ in charmonium photoproduction on protons and nuclei has been
investigated at near-threshold initial photon energies $E_{\gamma}\leq 11$
GeV. Here, the description was based on the consideration of the incoherent
direct (${\gamma}N\to{J/\psi}N$) and two-step (${\gamma}p\to
P_{c}^{+}(4450)\to{J/\psi}p$ and ${\gamma}p\to P_{c}^{+}(4312)\to{J/\psi}p$,
${\gamma}p\to P_{c}^{+}(4440)\to{J/\psi}p$, ${\gamma}p\to
P_{c}^{+}(4457)\to{J/\psi}p$) $J/\psi$ production processes. As a measure for
this role, the incident photon energy dependence of $J/\psi$ production cross
sections on protons and nuclei (excitation functions) has been adopted. It was
found that this role is insignificant for the pentaquark resonances considered
if the branching ratios of their decays to the ${J/\psi}p$ mode are less than
a few percent, in line with the results of the JLab experiments [4, 5].
In view of the aforementioned, to obtain robust information for or against the
existence of the LHCb hidden-charm pentaquarks and to understand them better,
it is crucial to investigate the possibility of their
observation by measuring not only the excitation functions for $J/\psi$ meson
production from photon-induced reactions on protons and nuclei at near-
threshold photon energies, predicted in Refs. [6, 7], but also the $J/\psi$
energy and momentum distributions in these reactions, not predicted in the
previous papers [6, 7]. Their prediction is the main aim of the present study.
In it, we consider the contribution of the $P_{c}^{+,0}(4312)$,
$P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$ and $P_{c}^{+,0}(4457)$ resonances to
near-threshold $J/\psi$ photoproduction off protons and nuclei by adopting the
Breit-Wigner shape for this contribution and by employing the recent
experimental data [4] on the total and differential cross sections of the
${\gamma}p\to{J/\psi}p$ process to estimate the background contribution. The
consideration is mainly based on the model, developed in Refs. [6, 7]. We
present the predictions obtained within our present approach for the $J/\psi$
energy and momentum distributions in ${\gamma}p$ as well as in
${\gamma}^{12}$C and ${\gamma}^{184}$W reactions at near-threshold incident
photon energies. These
predictions may be useful in planning future $J/\psi$ photoproduction
experiments at the CEBAF facility.
## 2\. Theoretical framework
### 2.1. Direct non-resonant $J/\psi$ production processes
At the near-threshold photon beam energies below 11 GeV of interest here
(these energies are well within the present capabilities of the upgraded CEBAF
facility at JLab, which provides an opportunity to study the exotic
hidden-charm pentaquark states observed by the LHCb Collaboration [1–3] in
exclusive ${\gamma}p\to{J/\psi}p$ reactions in all experimental Halls A, B, C,
D [4, 5]), the following direct non-resonant elementary charmonium production
processes with the lowest free production threshold ($\approx$ 8.21 GeV)
contribute to the $J/\psi$ photoproduction on nuclei [6–8]:
$\gamma+p\to J/\psi+p,$ (1) $\gamma+n\to J/\psi+n.$ (2)
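The lowest free production threshold quoted above ($\approx$ 8.21 GeV) can be recovered by requiring $\sqrt{s}\geq m_{J/\psi}+m_{p}$ with the vacuum invariant energy of Eq. (8) below; a short numerical sketch (the PDG mass values in GeV are our assumption):

```python
# Threshold of gamma p -> J/psi p on a free proton at rest, from
# s = m_p^2 + 2 m_p E_gamma (Eq. (8)).  Assumed PDG masses in GeV.
M_P = 0.93827      # proton mass
M_JPSI = 3.09690   # J/psi mass

def egamma_threshold():
    """Minimal laboratory photon energy for gamma p -> J/psi p."""
    s_min = (M_JPSI + M_P) ** 2
    return (s_min - M_P**2) / (2.0 * M_P)

print(f"E_gamma threshold = {egamma_threshold():.2f} GeV")  # ~8.21 GeV
```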
The modification of the masses of the final high-momentum $J/\psi$ meson and
proton (see below) in the nuclear medium will be ignored in the present study.
Disregarding the absorption of incident photons in the energy range of
interest as well as the $J/\psi$ meson quasielastic rescatterings on target
nucleons [9], and describing the charmonium final-state absorption in the
nuclear matter by the absorption cross section $\sigma_{{J/\psi}N}$ (for which
we will use the value $\sigma_{{J/\psi}N}=3.5$ mb [6–10]), we represent
the inclusive differential cross section for the production of $J/\psi$ mesons
with the momentum ${\bf p}_{J/\psi}$ on nuclei in the direct non-resonant
processes (1), (2) in the form [6–10]:
$\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm dir})}({\bf p}_{\gamma},{\bf
p}_{J/\psi})}{d{\bf
p}_{J/\psi}}=I_{V}[A,\sigma_{{J/\psi}N}]\left<\frac{d\sigma_{{\gamma}p\to{J/\psi}p}({\bf
p}_{\gamma},{\bf p}_{J/\psi})}{d{\bf p}_{J/\psi}}\right>_{A},$ (3)
where
$I_{V}[A,\sigma]=2{\pi}A\int\limits_{0}^{R}r_{\bot}dr_{\bot}\int\limits_{-\sqrt{R^{2}-r_{\bot}^{2}}}^{\sqrt{R^{2}-r_{\bot}^{2}}}dz\rho(\sqrt{r_{\bot}^{2}+z^{2}})\exp{\left[-A{\sigma}\int\limits_{z}^{\sqrt{R^{2}-r_{\bot}^{2}}}\rho(\sqrt{r_{\bot}^{2}+x^{2}})dx\right]},$
(4) $\left<\frac{d\sigma_{{\gamma}p\to{J/\psi}p}({\bf p}_{\gamma},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}\right>_{A}=\int\int P_{A}({\bf p}_{t},E)d{\bf
p}_{t}dE\left[\frac{d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}\right]$ (5)
and
$s^{*}=(E_{\gamma}+E_{t})^{2}-({\bf p}_{\gamma}+{\bf p}_{t})^{2},$ (6)
$E_{t}=M_{A}-\sqrt{(-{\bf p}_{t})^{2}+(M_{A}-m_{p}+E)^{2}}.$ (7)
Here, $d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}},{\bf p}_{J/\psi})/d{\bf
p}_{J/\psi}$ is the off-shell differential cross section for the production of
$J/\psi$ meson in process (1) at the “in-medium” ${\gamma}p$ c.m. energy
$\sqrt{s^{*}}$ (in Eq. (3) it is assumed that the cross sections for $J/\psi$
meson production in ${\gamma}p$ and ${\gamma}n$ interactions are equal to each
other [6–8]); $\rho({\bf r})$ and $P_{A}({\bf p}_{t},E)$ are the nucleon
density and the nuclear spectral function (given in Refs. [11, 12]) of the
target nucleus with mass number $A$, mass $M_{A}$ and radius $R$, both
normalized to unity; ${\bf p}_{\gamma}$ and $E_{\gamma}$ are
the momentum and energy of the incident photon ($E_{\gamma}=|{\bf
p}_{\gamma}|=p_{\gamma}$) in the laboratory system; ${\bf p}_{t}$ and $E$ are
the momentum and binding energy of the intranuclear target proton,
participating in the reaction channel (1); $m_{p}$ is the free space proton
mass.
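As an illustration of the attenuation integral $I_{V}[A,\sigma]$ of Eq. (4), the sketch below evaluates it numerically for a uniform ("hard sphere") nucleon density; this simplification replaces the realistic density of Refs. [11, 12] and is not the authors' code. For $\sigma=0$ the integral reduces exactly to $A$, which provides a check of the quadrature:

```python
import math

def effective_number(A, sigma, R, n=200):
    """Numerical sketch of I_V[A, sigma] of Eq. (4) for a uniform nucleon
    density rho = 3/(4 pi R^3), normalized to unity.
    A: mass number, sigma: J/psi-N absorption cross section (fm^2),
    R: nuclear radius (fm).  Midpoint quadrature on an n x n grid."""
    rho0 = 3.0 / (4.0 * math.pi * R**3)
    total = 0.0
    for i in range(n):
        r_perp = (i + 0.5) * R / n                       # transverse radius
        z_max = math.sqrt(R * R - r_perp * r_perp)
        for j in range(n):
            z = -z_max + (j + 0.5) * 2.0 * z_max / n
            # inner attenuation integral: analytic for constant density
            att = math.exp(-A * sigma * rho0 * (z_max - z))
            total += (2.0 * math.pi * A * r_perp * (R / n)
                      * rho0 * att * (2.0 * z_max / n))
    return total

# sigma = 0 recovers I_V = A (no absorption); sigma = 3.5 mb = 0.35 fm^2
R_C = 1.2 * 12 ** (1.0 / 3.0)              # ~2.75 fm for carbon (assumed)
print(effective_number(12, 0.0, R_C))      # ~12
print(effective_number(12, 0.35, R_C))     # < 12 due to absorption
```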
Also, as previously in [6–8], we will suppose that the off-shell differential
cross section
$d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}},{\bf p}_{J/\psi})/d{\bf
p}_{J/\psi}$ for $J/\psi$ production in process (1) is the same as the
corresponding on-shell cross section
$d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s},{\bf p}_{J/\psi})/d{\bf p}_{J/\psi}$
determined for the off-shell kinematics of this process and in which the
vacuum c.m. energy squared $s$, given by the formula
$s=W^{2}=(E_{\gamma}+m_{p})^{2}-{\bf
p}_{\gamma}^{2}=m_{p}^{2}+2m_{p}E_{\gamma},$ (8)
is replaced by the in-medium expression (6). The above off-shell differential
cross section is then (cf. [8, 10]):
$\frac{d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}},{\bf p}_{J/\psi})}{d{\bf
p}_{J/\psi}}=\frac{\pi}{I_{2}(s^{*},m_{J/\psi},m_{p})E_{J/\psi}}$ (9)
$\times\frac{d\sigma_{{\gamma}p\to{J/\psi}{p}}(\sqrt{s^{*}},\theta_{J/\psi}^{*})}{d{\bf\Omega}_{J/\psi}^{*}}\frac{1}{(\omega+E_{t})}\delta\left[\omega+E_{t}-\sqrt{m_{p}^{2}+({\bf
Q}+{\bf p}_{t})^{2}}\right],$
where
$I_{2}(s^{*},m_{J/\psi},m_{p})=\frac{\pi}{2}\frac{\lambda(s^{*},m_{J/\psi}^{2},m_{p}^{2})}{s^{*}},$
(10)
$\lambda(x,y,z)=\sqrt{{\left[x-({\sqrt{y}}+{\sqrt{z}})^{2}\right]}{\left[x-({\sqrt{y}}-{\sqrt{z}})^{2}\right]}},$
(11) $\omega=E_{\gamma}-E_{J/\psi},\,\,\,\,{\bf Q}={\bf p}_{\gamma}-{\bf
p}_{J/\psi},\,\,\,\,E_{J/\psi}=\sqrt{m^{2}_{J/\psi}+{\bf p}_{J/\psi}^{2}}.$
(12)
Here, the off-shell c.m. charmonium angular distribution in reaction (1)
$d\sigma_{{\gamma}p\to{J/\psi}{p}}(\sqrt{s^{*}},\theta_{J/\psi}^{*})/d{\bf\Omega}_{J/\psi}^{*}$
as a function of the $J/\psi$ production c.m. polar angle
$\theta_{J/\psi}^{*}$ is given by [10]:
$\frac{d\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}},\theta^{*}_{J/\psi})}{d{\bf\Omega}_{J/\psi}^{*}}=a{\rm
e}^{b_{J/\psi}(t-t^{+})}\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}}),$ (13)
where $\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}})$ is the off-shell total
cross section for $J/\psi$ meson production in this reaction and
$t=m_{J/\psi}^{2}-2E^{*}_{\gamma}E^{*}_{J/\psi}+2p^{*}_{\gamma}p^{*}_{J/\psi}\cos{\theta^{*}_{J/\psi}},$
(14)
$E_{\gamma}^{*}=p^{*}_{\gamma},\,\,\,\,E_{J/\psi}^{*}=\sqrt{m^{2}_{J/\psi}+p^{*2}_{J/\psi}},$
(15)
$p_{\gamma}^{*}=\frac{1}{2\sqrt{s^{*}}}\lambda(s^{*},0,E_{t}^{2}-p_{t}^{2}),$
(16)
$p_{J/\psi}^{*}=\frac{1}{2\sqrt{s^{*}}}\lambda(s^{*},m_{J/\psi}^{2},m_{p}^{2}),$
(17)
$t^{+}=t(\cos{\theta^{*}_{J/\psi}}=1)=m_{J/\psi}^{2}-2E^{*}_{\gamma}E^{*}_{J/\psi}+2p^{*}_{\gamma}p^{*}_{J/\psi},$
(18) $t-t^{+}=2p^{*}_{\gamma}p^{*}_{J/\psi}(\cos{\theta^{*}_{J/\psi}}-1).$
(19)
The angle of charmonium production in the ${\gamma}p$ c.m. system,
$\theta^{*}_{J/\psi}$, is expressed through its production angle,
$\theta_{J/\psi}$, in the laboratory system ($\cos{\theta_{J/\psi}}={\bf
p}_{\gamma}{\bf p}_{J/\psi}/p_{\gamma}p_{J/\psi}$) by means of equation [8,
10]:
$\cos{\theta_{J/\psi}^{*}}=\frac{p_{\gamma}p_{J/\psi}\cos{\theta_{J/\psi}}+(E_{\gamma}^{*}E_{J/\psi}^{*}-E_{\gamma}E_{J/\psi})}{p_{\gamma}^{*}p_{J/\psi}^{*}}.$
(20)
The condition of normalization
$\int\limits_{4\pi}a{\rm e}^{b_{J/\psi}(t-t^{+})}d{\bf\Omega}_{J/\psi}^{*}=1$
(21)
gives for the parameter $a$ in Eq. (13) the following expression:
$a=\frac{p^{*}_{\gamma}p^{*}_{J/\psi}b_{J/\psi}}{\pi}\left[1-{\rm
e}^{-4p^{*}_{\gamma}p^{*}_{J/\psi}b_{J/\psi}}\right]^{-1}.$ (22)
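The normalization condition (21) can be checked numerically against the closed form (22); the c.m. momenta used below ($p^{*}_{\gamma}=4.3$ GeV, $p^{*}_{J/\psi}=0.66$ GeV, roughly those near $E_{\gamma}=9.44$ GeV) are our illustrative choice:

```python
import math

def a_coeff(p_gam, p_psi, b):
    """Normalization constant a of Eq. (22)."""
    return ((p_gam * p_psi * b / math.pi)
            / (1.0 - math.exp(-4.0 * p_gam * p_psi * b)))

def solid_angle_integral(p_gam, p_psi, b, n=20000):
    """Midpoint quadrature of Eq. (21): integral over 4 pi of
    a * exp(b (t - t+)), with t - t+ = 2 p*_gam p*_psi (cos(theta*) - 1)."""
    a = a_coeff(p_gam, p_psi, b)
    du = 2.0 / n                       # u = cos(theta*), phi integrates to 2 pi
    total = 0.0
    for k in range(n):
        u = -1.0 + (k + 0.5) * du
        total += 2.0 * math.pi * a * math.exp(
            2.0 * b * p_gam * p_psi * (u - 1.0)) * du
    return total

# With the GlueX slope b ~ 1.67 GeV^-2 the integral should return 1
print(solid_angle_integral(4.3, 0.66, 1.67))   # ~1
```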
Parameter $b_{J/\psi}$ in Eqs. (13), (21), (22) is an exponential $t$-slope of
the differential cross section of the reaction ${\gamma}p\to{J/\psi}p$ in the
near-threshold energy region. It should be pointed out that the differential
cross section of this reaction was also very recently measured in the
$J/\psi$-007 experiment [13] as a function of the photon energy in the range
of $9.1~{\rm GeV}\leq E_{\gamma}\leq 10.6~{\rm GeV}$, with the aim of
exploring the impact of the collected data on the determination of the
proton’s gravitational form factors, the proton-mass radius, and the
contribution of the trace anomaly to the proton mass. Since the $t$-slope
$b_{J/\psi}$ was not determined in [13], we will adopt for it the GlueX result
[4], namely $b_{J/\psi}\approx 1.67$ GeV$^{-2}$, and use this value in our
calculations.
Now consider the off-shell total cross section
$\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s^{*}})$ for $J/\psi$ production in
process (1). In line with the above-mentioned, it is the same as the vacuum
cross section $\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s})$, in which the vacuum
c.m. energy squared $s$, defined by formula (8), is replaced by the in-medium
expression (6). For the vacuum total cross section
$\sigma_{{\gamma}p\to{J/\psi}p}(\sqrt{s})$ at near-threshold photon energies
we have used the following parametrization [7] of the experimental information
on it available from the GlueX experiment [4], based on the near-threshold
predictions of the two gluon and three gluon exchange model [14]:
$\sigma_{{\gamma}p\to{J/\psi}p}({\sqrt{s}})=\sigma_{2g}({\sqrt{s}})+\sigma_{3g}({\sqrt{s}}),$
(23)
where the 2$g$ and 3$g$ exchange cross sections $\sigma_{2g}({\sqrt{s}})$ and
$\sigma_{3g}({\sqrt{s}})$ are given by formulas (7) and (8) of Ref. [7],
respectively.
Figure 1: (Color online) The total cross section for the background reaction
${\gamma}p\to{J/\psi}p$ as a function of the photon energy $E_{\gamma}$.
Dashed and dotted-dashed curves are, respectively, calculations on the basis
of the two gluon and three gluon exchange model [14]. Solid curve is the
incoherent sum of the above two calculations. The GlueX experimental data are
from Ref. [4]. The arrow indicates the threshold energy of 8.21 GeV for direct
non-resonant charmonium photoproduction off a free target proton at rest.
Figure 2: (Color online) Plot of the allowed final $J/\psi$ meson and proton
momenta in the direct non-resonant ${\gamma}p\to{J/\psi}p$ reaction, occurring
in the laboratory system in the free space at initial photon energy of 9.44
GeV, as functions of their production angles with respect to the photon beam
direction in this system. Figure 3: (Color online) The same as in Fig. 2, but
for the initial photon energy of 10.12 GeV.
Fig. 1 shows that the GlueX near-threshold data are well fitted only by the
combination (23) of the two gluon and three gluon exchange cross sections, and
that in the resonance incident photon energy range $\sim$ 9.5–10.0 GeV the
main contribution to elastic $J/\psi$ production comes from the three gluon
exchanges.
At the initial photon energies of interest the $J/\psi$ mesons are produced at
small laboratory polar angles (see below). Therefore, we will calculate the
$J/\psi$ momentum distributions from considered target nuclei for the
laboratory solid angle
${\Delta}{\bf\Omega}_{J/\psi}$=$0^{\circ}\leq\theta_{J/\psi}\leq 20^{\circ}$,
and $0\leq\varphi_{J/\psi}\leq 2{\pi}$ (cf. [10]). Then, integrating the
differential cross section (3) over this solid angle, we can represent the
differential cross section for charmonium production from the direct non-
resonant processes (1) and (2) into this solid angle as follows:
$\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm
dir})}(p_{\gamma},p_{J/\psi})}{dp_{J/\psi}}=\int\limits_{{\Delta}{\bf\Omega}_{J/\psi}}d{\bf\Omega}_{J/\psi}\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm
dir})}({\bf p}_{\gamma},{\bf p}_{J/\psi})}{d{\bf p}_{J/\psi}}p_{J/\psi}^{2}$
(24) $=2{\pi}I_{V}[A,\sigma_{{J/\psi}N}]\int\limits_{\cos
20^{\circ}}^{1}d\cos{{\theta_{J/\psi}}}\left<\frac{d\sigma_{{\gamma}p\to{J/\psi}{p}}(p_{\gamma},p_{J/\psi},\theta_{J/\psi})}{dp_{J/\psi}d{\bf\Omega}_{J/\psi}}\right>_{A}.$
Before going further, we now consider, adopting relativistic kinematics, the
simpler case of the free-space production of $J/\psi$ mesons and protons in
the process ${\gamma}p\to{J/\psi}p$, proceeding on a free target proton at
rest, to get an idea of the kinematic characteristics allowed in this process
at the incident photon energies considered. The kinematics of a two-body
reaction with a threshold (as in our present case) tells us that the
laboratory polar $J/\psi$ and final proton production angles $\theta_{J/\psi}$
and $\theta_{p}$ vary from 0 to maximal values $\theta^{\rm max}_{J/\psi}$
and $\theta^{\rm max}_{p}$, correspondingly, i.e.:
$0\leq\theta_{J/\psi}\leq\theta^{\rm max}_{J/\psi},$ (25)
$0\leq\theta_{p}\leq\theta^{\rm max}_{p};$ (26)
where
$\theta^{\rm max}_{J/\psi}={\rm
arcsin}[(\sqrt{s}p^{*}_{J/\psi})/(m_{J/\psi}p_{\gamma})],$ (27) $\theta^{\rm
max}_{p}={\rm arcsin}[(\sqrt{s}p^{*}_{p})/(m_{p}p_{\gamma})].$ (28)
Here, the $J/\psi$ c.m. momentum $p^{*}_{J/\psi}$ is determined by Eq. (17),
in which the in-medium c.m. energy squared $s^{*}$ should be replaced by the
vacuum collision energy squared $s$, defined by the formula (8), and
$p^{*}_{p}$ is the final proton c.m. momentum. It is equal to the $J/\psi$
c.m. momentum $p^{*}_{J/\psi}$. From Eqs. (27), (28) one can get, for example,
that
$\theta^{\rm max}_{J/\psi}=5.570^{\circ},\,\,\,\,\theta^{\rm
max}_{p}=18.686^{\circ}$ (29)
at initial photon beam energy of $E_{\gamma}=9.44$ GeV and
$\theta^{\rm max}_{J/\psi}=6.768^{\circ},\,\,\,\,\theta^{\rm
max}_{p}=22.892^{\circ}$ (30)
at photon energy of $E_{\gamma}=10.12$ GeV. Energy-momentum conservation in
the reaction (1), taking place in a vacuum, leads to two different solutions
for the laboratory $J/\psi$ meson and final proton momenta $p_{J/\psi}$ and
$p_{p}$ at given laboratory polar production angles $\theta_{J/\psi}$ and
$\theta_{p}$, belonging, correspondingly, to the angular intervals (25) and
(26):
$p^{(1,2)}_{J/\psi}(\theta_{J/\psi})=\frac{p_{\gamma}\sqrt{s}E^{*}_{J/\psi}\cos{\theta_{J/\psi}}\pm(E_{\gamma}+m_{p})\sqrt{s}\sqrt{p^{*2}_{J/\psi}-{\gamma^{2}_{\rm
cm}}{v^{2}_{\rm
cm}}m^{2}_{J/\psi}\sin^{2}{\theta_{J/\psi}}}}{(E_{\gamma}+m_{p})^{2}-p^{2}_{\gamma}\cos^{2}{\theta_{J/\psi}}},$
(31)
$p^{(1,2)}_{p}(\theta_{p})=\frac{p_{\gamma}\sqrt{s}E^{*}_{p}\cos{\theta_{p}}\pm(E_{\gamma}+m_{p})\sqrt{s}\sqrt{p^{*2}_{p}-{\gamma^{2}_{\rm
cm}}{v^{2}_{\rm
cm}}m^{2}_{p}\sin^{2}{\theta_{p}}}}{(E_{\gamma}+m_{p})^{2}-p^{2}_{\gamma}\cos^{2}{\theta_{p}}}.$
(32)
Here, ${\gamma_{\rm cm}}=(E_{\gamma}+m_{p})/\sqrt{s}$, $v_{\rm
cm}=p_{\gamma}/(E_{\gamma}+m_{p})$, the $J/\psi$ total c.m. energy
$E^{*}_{J/\psi}$ is defined above by Eq. (15),
$E^{*}_{p}=\sqrt{m^{2}_{p}+p^{*2}_{p}}$, and the sign "+" in the numerators of Eqs.
(31), (32) corresponds to the first solutions $p^{(1)}_{J/\psi}$,
$p^{(1)}_{p}$, while the sign "$-$" corresponds to the second ones $p^{(2)}_{J/\psi}$,
$p^{(2)}_{p}$. Inspection of the expressions (31) and (32) shows that
the first solutions $p^{(1)}_{J/\psi}$, $p^{(1)}_{p}$ and the second ones
$p^{(2)}_{J/\psi}$, $p^{(2)}_{p}$ depend differently on the production angles
$\theta_{J/\psi}$ and $\theta_{p}$ within the angular intervals
[0, $\theta_{J/\psi}^{\rm max}$] and [0, $\theta_{p}^{\rm max}$]:
the former decrease and the latter increase as the production angles grow
in these intervals (cf. Figs. 2 and 3), and
$p^{(1)}_{J/\psi}(\theta_{J/\psi}^{\rm
max})=p^{(2)}_{J/\psi}(\theta_{J/\psi}^{\rm
max})=p_{J/\psi}(\theta_{J/\psi}^{\rm max}),$ (33)
$p^{(1)}_{p}(\theta_{p}^{\rm max})=p^{(2)}_{p}(\theta_{p}^{\rm
max})=p_{p}(\theta_{p}^{\rm max}),$ (34)
where
$p_{J/\psi}(\theta_{J/\psi}^{\rm
max})=(p_{\gamma}m_{J/\psi}^{2}\cos{\theta_{J/\psi}^{\rm
max}})/(\sqrt{s}E^{*}_{J/\psi}),$ (35) $p_{p}(\theta_{p}^{\rm
max})=(p_{\gamma}m_{p}^{2}\cos{\theta_{p}^{\rm max}})/(\sqrt{s}E^{*}_{p}).$
(36)
According to Eqs. (35), (36), for $E_{\gamma}=9.44$ GeV we then obtain
$p_{J/\psi}(\theta_{J/\psi}^{\rm max})=6.600$ GeV/c and $p_{p}(\theta_{p}^{\rm
max})=1.593$ GeV/c. For $E_{\gamma}=10.12$ GeV we obtain
$p_{J/\psi}(\theta_{J/\psi}^{\rm max})=6.745$ GeV/c and $p_{p}(\theta_{p}^{\rm
max})=1.471$ GeV/c (cf. Figs. 2 and 3). These figures show that the
kinematically allowed charmonium laboratory momenta and total energies in the
direct non-resonant ${\gamma}p\to{J/\psi}p$ process, taking place on the free
target proton at rest, at given initial photon energy vary within the
following momentum and energy ranges:
$p^{(2)}_{J/\psi}(0^{\circ})\leq p_{J/\psi}\leq p^{(1)}_{J/\psi}(0^{\circ}),$
(37) $E^{(2)}_{J/\psi}(0^{\circ})\leq E_{J/\psi}\leq
E^{(1)}_{J/\psi}(0^{\circ}),$ (38)
where the quantities $p^{(1,2)}_{J/\psi}(0^{\circ})$ are defined above by Eq.
(31) and
$E^{(1,2)}_{J/\psi}(0^{\circ})=\sqrt{m_{J/\psi}^{2}+[p^{(1,2)}_{J/\psi}(0^{\circ})]^{2}}$.
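The two-body kinematics above can be cross-checked numerically. The following Python sketch (an illustration added here, using assumed PDG-like mass values $m_{p}=0.938$ GeV and $m_{J/\psi}=3.097$ GeV) reproduces the maximal angles of Eqs. (29), (30) and the momenta at the maximal angles obtained from Eqs. (35), (36):

```python
import math

m_p, m_jpsi = 0.938272, 3.0969  # proton and J/psi masses in GeV (assumed)

def two_body_kinematics(e_gamma):
    """Vacuum gamma p -> J/psi p kinematics for a free target proton at rest."""
    s = m_p**2 + 2.0 * m_p * e_gamma  # Eq. (8) with E_t -> m_p, p_t -> 0
    w = math.sqrt(s)
    # c.m. momentum of the final J/psi (equal to that of the final proton)
    p_star = math.sqrt((s - (m_jpsi + m_p)**2) * (s - (m_jpsi - m_p)**2)) / (2.0 * w)
    # maximal laboratory polar angles, Eqs. (27), (28); p_gamma = E_gamma
    th_jpsi = math.degrees(math.asin(w * p_star / (m_jpsi * e_gamma)))
    th_p = math.degrees(math.asin(w * p_star / (m_p * e_gamma)))
    # laboratory momenta at the maximal angles, Eqs. (35), (36)
    e_star_jpsi = math.sqrt(m_jpsi**2 + p_star**2)
    e_star_p = math.sqrt(m_p**2 + p_star**2)
    p_jpsi_max = e_gamma * m_jpsi**2 * math.cos(math.radians(th_jpsi)) / (w * e_star_jpsi)
    p_p_max = e_gamma * m_p**2 * math.cos(math.radians(th_p)) / (w * e_star_p)
    return th_jpsi, th_p, p_jpsi_max, p_p_max

# ~ (5.57 deg, 18.69 deg, 6.600 GeV/c, 1.593 GeV/c), cf. Eqs. (29), (35), (36)
print(two_body_kinematics(9.44))
```

Running the same function at $E_{\gamma}=10.12$ GeV reproduces the values of Eq. (30) and the momenta 6.745 and 1.471 GeV/c quoted in the text to within rounding.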
Finally, we calculate the $J/\psi$ energy distribution
$d\sigma_{{\gamma}p\to{J/\psi}{p}}[\sqrt{s},p_{J/\psi}]/dE_{J/\psi}$ from the
reaction ${\gamma}p\to{J/\psi}p$ within the kinematically allowed interval
(38). Integration of the more general differential cross section (9) over the
angle $\theta_{J/\psi}$, when this angle varies in the allowed angular region
(25), in the limits: ${\bf p}_{t}\to 0$, $E_{t}\to m_{p}$ and $s^{*}\to s$
yields:
$\frac{d\sigma_{{\gamma}p\to{J/\psi}{p}}[\sqrt{s},p_{J/\psi}]}{dE_{J/\psi}}=2\pi\int\limits_{\cos{\theta_{J/\psi}^{\rm
max}}}^{1}d\cos{\theta_{J/\psi}}p_{J/\psi}E_{J/\psi}\frac{d\sigma_{{\gamma}p\to{J/\psi}p}[\sqrt{s},{\bf
p}_{J/\psi}]}{d{\bf p}_{J/\psi}}=$ (39)
$=\left(\frac{2{\pi}\sqrt{s}}{p_{\gamma}p^{*}_{J/\psi}}\right)\frac{d\sigma_{{\gamma}p\to{J/\psi}{p}}[\sqrt{s},\theta_{J/\psi}^{*}(x_{0})]}{d{\bf\Omega}_{J/\psi}^{*}}~{}{\rm
for}~{}E^{(2)}_{J/\psi}(0^{\circ})\leq E_{J/\psi}\leq
E^{(1)}_{J/\psi}(0^{\circ}),$
where
$x_{0}=\frac{[p^{2}_{\gamma}+p^{2}_{J/\psi}+m^{2}_{p}-(\omega+m_{p})^{2}]}{2p_{\gamma}p_{J/\psi}},\,\,\,\,\,p_{J/\psi}=\sqrt{E^{2}_{J/\psi}-m^{2}_{J/\psi}}$
(40)
and the quantity $\cos{\theta_{J/\psi}^{*}(x_{0})}$ is defined by Eq. (20), in
which one has to perform the replacement: $\cos{\theta_{J/\psi}}\to x_{0}$,
and the photon and $J/\psi$ c.m. momenta $p^{*}_{\gamma}$ and $p^{*}_{J/\psi}$
are defined by formulas (16) and (17), correspondingly, in which one needs
also to make the substitutions: $E_{t}\to m_{p}$, $p_{t}\to 0$ and $s^{*}\to
s$. We will adopt the expression (39) for evaluating the free space $J/\psi$
energy distribution from the direct process (1), proceeding on a proton at
rest, for incident photon beam resonant energies of 9.44, 9.554, 10.04 and
10.12 GeV (see below).
Figure 4: (Color online) The non-resonant total cross section $\sigma_{1}$ for
the reaction ${\gamma}p\to{J/\psi}p$ (solid curve) and the incoherent sum
(dotted curve) of it and the total cross section $\sigma_{2}$ (short-dashed
curve) for the resonant $J/\psi$ production in the processes ${\gamma}p\to
P^{+}_{c}(4312)\to{J/\psi}p$, ${\gamma}p\to P^{+}_{c}(4337)\to{J/\psi}p$,
${\gamma}p\to P^{+}_{c}(4440)\to{J/\psi}p$ and ${\gamma}p\to
P^{+}_{c}(4457)\to{J/\psi}p$, calculated assuming that the resonances
$P^{+}_{c}(4312)$, $P^{+}_{c}(4337)$, $P^{+}_{c}(4440)$ and $P^{+}_{c}(4457)$
with the spin-parity quantum numbers $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$,
$J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$ decay to ${J/\psi}p$ with the lower
allowed relative orbital angular momentum $L=0$ with all four branching
fractions $Br[P^{+}_{ci}\to{J/\psi}p]=$1%, as functions of photon energy. The
left and four right arrows indicate, correspondingly, the threshold energy
$E^{\rm th}_{\gamma}=8.21$ GeV for the reaction ${\gamma}p\to{J/\psi}p$
proceeding on a free target proton being at rest and the resonant energies
$E^{\rm R1}_{\gamma}=9.44$ GeV, $E^{\rm R2}_{\gamma}=9.554$ GeV, $E^{\rm
R3}_{\gamma}=10.04$ GeV and $E^{\rm R4}_{\gamma}=10.12$ GeV.
### 2.2. Two-step resonant $J/\psi$ production processes
At photon energies below 11 GeV, incident photons can produce the experimentally
observed [1–3] non-strange charged $P^{+}_{c}(4312)$, $P^{+}_{c}(4337)$,
$P^{+}_{c}(4440)$, $P^{+}_{c}(4457)$ pentaquark resonances with quark
structure $|P^{+}_{c}>=|uudc{\bar{c}}>$, as well as their predicted [15] but not yet
observed neutral isospin partners $P^{0}_{c}(4312)$, $P^{0}_{c}(4337)$,
$P^{0}_{c}(4440)$, $P^{0}_{c}(4457)$ 4)The minimal quark content of the
$P_{c}^{0}$ states is $|P^{0}_{c}>=|uddc{\bar{c}}>$. Following the observation
of the narrow pentaquarks $P^{+}_{c}(4312)$, $P^{+}_{c}(4440)$ and
$P^{+}_{c}(4457)$ by the LHCb Collaboration [1, 2], it was proposed to search
for the $P_{c}^{0}$ states in the ${\pi^{-}}p\to{J/\psi}n$ reaction [16].),
directly in first inelastic collisions with intranuclear protons and
neutrons 5)We recall that, for example, the threshold (resonant) energies
$E^{\rm R1}_{\gamma}$, $E^{\rm R2}_{\gamma}$, $E^{\rm R3}_{\gamma}$ and
$E^{\rm R4}_{\gamma}$ for the photoproduction of the $P_{c}^{+}$ resonances
with pole masses $M_{c1}^{+}=4311.9$ MeV, $M_{c2}^{+}=4337.0$ MeV,
$M_{c3}^{+}=4440.3$ MeV and $M_{c4}^{+}=4457.3$ MeV [2, 3] on a free target
proton at rest are $E^{\rm R1}_{\gamma}=9.44$ GeV, $E^{\rm
R2}_{\gamma}=9.554$ GeV, $E^{\rm R3}_{\gamma}=10.04$ GeV and $E^{\rm
R4}_{\gamma}=10.12$ GeV, respectively.):
$\displaystyle{\gamma}+p\to P^{+}_{c}(4312),$ $\displaystyle{\gamma}+p\to
P^{+}_{c}(4337),$ $\displaystyle{\gamma}+p\to P^{+}_{c}(4440),$
$\displaystyle{\gamma}+p\to P^{+}_{c}(4457);$ (41) $\displaystyle{\gamma}+n\to
P^{0}_{c}(4312),$ $\displaystyle{\gamma}+n\to P^{0}_{c}(4337),$
$\displaystyle{\gamma}+n\to P^{0}_{c}(4440),$ $\displaystyle{\gamma}+n\to
P^{0}_{c}(4457).$ (42)
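The resonant photon energies quoted above follow from the pole masses through two-body threshold kinematics, $s=m_{p}^{2}+2m_{p}E_{\gamma}=M^{2}$. A short Python check (added here for illustration; the proton mass value is an assumption of the sketch):

```python
m_p = 0.938272  # proton mass in GeV (assumed)
pole_masses = {"Pc(4312)": 4.3119, "Pc(4337)": 4.3370,
               "Pc(4440)": 4.4403, "Pc(4457)": 4.4573}  # GeV [2, 3]

def resonant_photon_energy(m_pc):
    """Photon energy at which sqrt(s) equals the pole mass for a proton at rest:
    s = m_p^2 + 2 m_p E_gamma = M^2  =>  E_gamma = (M^2 - m_p^2) / (2 m_p)."""
    return (m_pc**2 - m_p**2) / (2.0 * m_p)

for name, m in pole_masses.items():
    print(name, round(resonant_photon_energy(m), 3), "GeV")
```

This reproduces the resonant energies of 9.44, 9.554, 10.04 and 10.12 GeV to within rounding.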
Furthermore, the produced pentaquark resonances can decay into the final
states ${J/\psi}p$ and ${J/\psi}n$, which will additionally contribute to the
$J/\psi$ yield in the ($\gamma$,$J/\psi$) reactions on protons and nuclei:
$\displaystyle P^{+}_{c}(4312)\to J/\psi+p,$ $\displaystyle P^{+}_{c}(4337)\to
J/\psi+p,$ $\displaystyle P^{+}_{c}(4440)\to J/\psi+p,$ $\displaystyle
P^{+}_{c}(4457)\to J/\psi+p;$ (43) $\displaystyle P^{0}_{c}(4312)\to
J/\psi+n,$ $\displaystyle P^{0}_{c}(4337)\to J/\psi+n,$ $\displaystyle
P^{0}_{c}(4440)\to J/\psi+n,$ $\displaystyle P^{0}_{c}(4457)\to J/\psi+n.$
(44)
The branching ratios $Br[P^{+}_{ci}\to{J/\psi}p]$ 6)Here, $i=$1, 2, 3, 4 and
$P^{+}_{c1}$, $P^{+}_{c2}$, $P^{+}_{c3}$ and $P^{+}_{c4}$ stand for
$P^{+}_{c}(4312)$, $P^{+}_{c}(4337)$, $P^{+}_{c}(4440)$ and $P^{+}_{c}(4457)$,
respectively. Analogously, $P^{0}_{c1}$, $P^{0}_{c2}$, $P^{0}_{c3}$ and
$P^{0}_{c4}$ will denote below the $P^{0}_{c}(4312)$, $P^{0}_{c}(4337)$,
$P^{0}_{c}(4440)$ and $P^{0}_{c}(4457)$ states.) of the decays (43) have not
yet been determined. Model-dependent upper limits of several percent on the
branching fractions $Br[P^{+}_{c}(4312)\to{J/\psi}p]$,
$Br[P^{+}_{c}(4440)\to{J/\psi}p]$ and $Br[P^{+}_{c}(4457)\to{J/\psi}p]$ were
set by the GlueX Hall-D experiment [4] at JLab with moderate statistics (about
470 $J/\psi$ events). Preliminary results based on a factor of 10 more data
(about 4000 $J/\psi$ events), collected in the $J/\psi$-007 Hall-C experiment
[17] at JLab and focused on the large-$t$ region 7)In which the rather flat
resonant production of $J/\psi$ through the $P_{c}^{+}$ is expected to be
enhanced relative to the mostly forward diffractive production, which is
suppressed here.) in the search for the LHCb hidden-charm pentaquarks [1, 2],
also show no signals for them and will set more stringent upper limits on the
above branching fractions and on the pentaquark-$J/\psi$ couplings. Based on
the branching ratios and fractions measured by the LHCb and GlueX
Collaborations, the authors of Ref. [18] obtain a lower limit on
$Br[P^{+}_{c}\to{J/\psi}p]$ of the order of 0.05%–0.5%. Taking these findings
into account, we will adopt in our study the following three conservative
options for the four branching ratios $Br[P^{+}_{ci}\to{J/\psi}p]$ of the
decays (43): $Br[P^{+}_{ci}\to{J/\psi}p]=0.25$, 0.5 and 1% ($i=1,2,3,4$),
and, in line with Ref. [15], will assume that
$Br[P^{0}_{ci}\to{J/\psi}n]=Br[P^{+}_{ci}\to{J/\psi}p]$. This will allow us to
get a better impression of the size of the effect of the branching fractions
$Br[P^{+}_{ci}\to{J/\psi}p]$ and $Br[P^{0}_{ci}\to{J/\psi}n]$ on the resonant
$J/\psi$ yield in the ${\gamma}p\to{J/\psi}p$ as well as in the
${\gamma}{\rm{}^{12}C}\to{J/\psi}X$ and ${\gamma}{\rm{}^{184}W}\to{J/\psi}X$
reactions. Moreover, analogously to [15], we will assume for the $P_{ci}^{0}$
states the same pole masses $M_{ci}^{0}$ and total decay widths
$\Gamma_{ci}^{0}$ as the $M_{ci}^{+}$ and $\Gamma_{ci}^{+}$ of their charged
hidden-charm counterparts $P_{ci}^{+}$, i.e.: $M_{ci}^{0}=M_{ci}^{+}$ and
$\Gamma_{c1}^{0}=\Gamma_{c1}^{+}=9.8$ MeV,
$\Gamma_{c2}^{0}=\Gamma_{c2}^{+}=29.0$ MeV,
$\Gamma_{c3}^{0}=\Gamma_{c3}^{+}=20.6$ MeV,
$\Gamma_{c4}^{0}=\Gamma_{c4}^{+}=6.4$ MeV [2, 3].
In line with Refs. [6, 7, 19], we suppose that the in-medium spectral
functions $S_{ci}^{+}(\sqrt{s^{*}},\Gamma_{ci}^{+})$ and
$S_{ci}^{0}(\sqrt{s^{*}},\Gamma_{ci}^{0})$ of the intermediate $P_{ci}^{+}$
and $P_{ci}^{0}$ resonances are described by the non-relativistic Breit-Wigner
distributions 8)For simplicity of the calculations, we ignore the modification
of the $P_{ci}^{+}$ and $P_{ci}^{0}$ masses and total decay widths in nuclear
matter in the present study.):
$S_{ci}^{+}(\sqrt{s^{*}},\Gamma_{ci}^{+})=\frac{1}{2\pi}\frac{\Gamma_{ci}^{+}}{(\sqrt{s^{*}}-M_{ci}^{+})^{2}+({\Gamma}_{ci}^{+})^{2}/4}$
(45)
and
$S_{ci}^{0}(\sqrt{s^{*}},\Gamma_{ci}^{0})=\frac{1}{2\pi}\frac{\Gamma_{ci}^{0}}{(\sqrt{s^{*}}-M_{ci}^{0})^{2}+({\Gamma}_{ci}^{0})^{2}/4}.$
(46)
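As a quick numerical sanity check (added here; not part of the original derivation), the distributions (45), (46) integrate to unity over $\sqrt{s^{*}}$. A minimal Python sketch using the $P_{c}^{+}(4312)$ parameters:

```python
from math import pi

def breit_wigner(w, mass, gamma):
    """Non-relativistic Breit-Wigner spectral function of Eqs. (45), (46)."""
    return (1.0 / (2.0 * pi)) * gamma / ((w - mass)**2 + gamma**2 / 4.0)

# Numerical check that the spectral function is normalized to unity:
mass, gamma = 4.3119, 0.0098  # Pc(4312) pole mass and total width in GeV [2, 3]
n, half_range = 200_000, 500.0 * gamma
dw = 2.0 * half_range / n
norm = sum(breit_wigner(mass - half_range + (k + 0.5) * dw, mass, gamma) * dw
           for k in range(n))
print(norm)  # close to 1 (clipping the tails at +/- 500 widths loses < 0.1%)
```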
The in-medium total cross sections for the production of these resonances with
the possible spin-parity quantum numbers $J^{P}=(1/2)^{-}$ for $P_{c1}^{+}$ and
$P_{c1}^{0}$, $J^{P}=(1/2)^{-}$ for $P_{c2}^{+}$ and $P_{c2}^{0}$,
$J^{P}=(1/2)^{-}$ for $P_{c3}^{+}$ and $P_{c3}^{0}$, and $J^{P}=(3/2)^{-}$ for
$P_{c4}^{+}$ and $P_{c4}^{0}$ 9)Which might be assigned to them within the
hadronic molecular scenario for their internal structure (cf. [7, 15,
20–22]).) in the reactions (41), (42) can be determined, using the spectral
functions (45), (46) and the known branching fractions
$Br[P_{ci}^{+}\to{\gamma}p]$ and $Br[P_{ci}^{0}\to{\gamma}n]$ ($i=1$, 2, 3,
4), as follows [6, 7, 19]:
$\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s^{*}},\Gamma_{ci}^{+})=f_{ci}\left(\frac{\pi}{p^{*}_{\gamma}}\right)^{2}Br[P_{ci}^{+}\to{\gamma}p]S_{ci}^{+}(\sqrt{s^{*}},\Gamma_{ci}^{+})\Gamma_{ci}^{+},\,\,i=1,2,3,4$
(47)
and
$\sigma_{{\gamma}n\to
P_{ci}^{0}}(\sqrt{s^{*}},\Gamma_{ci}^{0})=f_{ci}\left(\frac{\pi}{p^{*}_{\gamma}}\right)^{2}Br[P_{ci}^{0}\to{\gamma}n]S_{ci}^{0}(\sqrt{s^{*}},\Gamma_{ci}^{0})\Gamma_{ci}^{0},\,\,i=1,2,3,4.$
(48)
Here, the c.m. 3-momentum in the incoming ${\gamma}N$ channel,
$p^{*}_{\gamma}$, is defined above by Eq. (16) 10)For simplicity, we
assume that the neutron mass $m_{n}$ is equal to the proton mass $m_{p}$.), and
the ratios of the spin factors are $f_{c1}=1$, $f_{c2}=1$, $f_{c3}=1$, $f_{c4}=2$.
Since we are mainly interested in the resonance $P_{c}$ region, which is not
far from the ${J/\psi}N$ production threshold, we suppose, in line with [7, 19,
23], that the decays of the hidden-charm pentaquarks $P_{ci}^{+}$ and $P_{ci}^{0}$ to the
${J/\psi}p$ and ${J/\psi}n$ modes are dominated by the lowest partial waves
with zero relative orbital angular momentum $L$. In this case, adopting the
vector-meson dominance model, one finds that the branching ratios
$Br[P_{ci}^{0}\to{\gamma}n]$ and $Br[P_{ci}^{+}\to{\gamma}p]$ are equal to
each other (cf. [24])
$Br[P_{ci}^{0}\to{\gamma}n]=Br[P_{ci}^{+}\to{\gamma}p]$ (49)
and the latter for $P^{+}_{c}(4312)$, $P^{+}_{c}(4440)$ and $P^{+}_{c}(4457)$
are expressed in the framework of this model via the branching fractions
$Br[P^{+}_{c}(4312)\to{J/\psi}p]$, $Br[P^{+}_{c}(4440)\to{J/\psi}p]$ and
$Br[P^{+}_{c}(4457)\to{J/\psi}p]$ by formula (24) from Ref. [7] and within
this model we get that
$Br[P^{+}_{c}(4337)\to{\gamma}p]=1.48\cdot
10^{-3}Br[P^{+}_{c}(4337)\to{J/\psi}p].$ (50)
Using Eqs. (47)–(49), we have
$\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s^{*}},\Gamma_{ci}^{+})=\sigma_{{\gamma}n\to
P_{ci}^{0}}(\sqrt{s^{*}},\Gamma_{ci}^{0}).$ (51)
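As an illustrative numerical sketch (added here, not part of the original text), the peak value of the resonant production cross section (47) for $P^{+}_{c}(4337)$ on a free proton can be estimated by combining the Breit-Wigner maximum, where $S_{ci}^{+}\Gamma_{ci}^{+}=2/\pi$, with the vector-meson-dominance relation (50). The mass values and the conversion factor $\hbar^{2}c^{2}=0.3894$ GeV$^{2}\cdot$mb are assumptions of the sketch:

```python
from math import pi, sqrt

m_p = 0.938272        # proton mass in GeV (assumed)
GEV2_TO_MB = 0.3894   # hbar^2 c^2 in GeV^2 mb (assumed conversion factor)

def sigma_peak_pc4337(br_jpsi_p=0.01):
    """Peak of sigma_{gamma p -> Pc+(4337)}, Eq. (47), at sqrt(s) = M,
    where the Breit-Wigner factor satisfies S * Gamma = 2 / pi."""
    m_pc, f_spin = 4.3370, 1.0
    s = m_pc**2
    # photon c.m. momentum, Eq. (16) in vacuum: p*_gamma = (s - m_p^2)/(2 sqrt(s))
    p_star_gamma = (s - m_p**2) / (2.0 * sqrt(s))
    br_gamma_p = 1.48e-3 * br_jpsi_p  # Eq. (50), vector-meson dominance
    sigma = f_spin * (pi / p_star_gamma)**2 * br_gamma_p * (2.0 / pi)
    return sigma * GEV2_TO_MB * 1.0e6  # converted from GeV^-2 to nb

print(sigma_peak_pc4337())
```

With $Br[P^{+}_{c}(4337)\to{J/\psi}p]=1\%$ this gives a peak production cross section of roughly 8–9 nb; the chain cross section of Eq. (52) below is then obtained by multiplying by $Br[P_{ci}^{+}\to{J/\psi}p]$ once more, which makes the $Br^{2}$ scaling discussed in the text explicit.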
According to Eq. (47), for example, the free total cross sections
$\sigma_{{\gamma}p\to P_{ci}^{+}\to{J/\psi}p}(\sqrt{s},\Gamma_{ci}^{+})$ for
resonant charmonium production in the two-step processes (41)/(43), taking
place on the target proton at rest, can be represented as follows [6, 7]:
$\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}p}(\sqrt{s},\Gamma_{ci}^{+})=\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s},\Gamma_{ci}^{+})\theta[\sqrt{s}-(m_{J/\psi}+m_{p})]Br[P_{ci}^{+}\to{J/\psi}p].$
(52)
Here, $\theta(x)$ is the step function and the c.m. 3-momentum in the incoming
${\gamma}p$ channel, $p^{*}_{\gamma}$, entering into Eq. (47), is determined
above by the formula (16), in which one has to make the replacements
$E_{t}^{2}-p_{t}^{2}\to m^{2}_{p}$ and $s^{*}\to s$. In line with Eqs. (47)
and (50), we see that these cross sections are proportional to
$Br^{2}[P_{ci}^{+}\to{J/\psi}p]$. This fact enables us to evaluate upper
limits on the branching fractions $Br[P_{c}^{+}(4312)\to{J/\psi}p]$,
$Br[P_{c}^{+}(4440)\to{J/\psi}p]$ and $Br[P_{c}^{+}(4457)\to{J/\psi}p]$, which
are expected from preliminary results of the JLab E12-16-007 experiment [17].
According to them, the upper limits on the cross sections (52) for the
$P_{c}^{+}(4312)$, $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$ states are almost an
order of magnitude below the respective GlueX limits [4]. With this and within
the representation of Eq. (52), we readily obtain the following relation
between the upper limits on the above branching fractions expected from
the $J/\psi$-007 experiment and those already available from the GlueX
experiment:
$Br_{J/\psi-007}[P_{ci}^{+}\to{J/\psi}p]\approx(1/\sqrt{10})Br_{\rm
GlueX}[P_{ci}^{+}\to{J/\psi}p]$ ($i=1$, 3, 4). Model-dependent upper limits on
the latter ratios of 4.6%, 2.3% and 3.8% for $P_{c}^{+}(4312)$,
$P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$, assuming for each $P_{ci}^{+}$ the spin-
parity combination $J^{P}=(3/2)^{-}$, were set by the GlueX Collaboration [4].
Thus, following the above relation, we obtain
$Br_{J/\psi-007}[P_{c}^{+}(4312)\to{J/\psi}p]\approx$1.46%,
$Br_{J/\psi-007}[P_{c}^{+}(4440)\to{J/\psi}p]\approx$0.73% and
$Br_{J/\psi-007}[P_{c}^{+}(4457)\to{J/\psi}p]\approx$1.20%. This means that
our choice of 1% as the upper value of the branching ratios
$Br[P_{ci}^{+}\to{J/\psi}p]$ for all four states is quite reasonable and
justified.
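The $\sqrt{10}$ rescaling above is easy to reproduce numerically; since the cross sections (52) scale as $Br^{2}$, a factor-of-10 tighter cross-section limit tightens the branching-fraction limit by $\sqrt{10}$. A short Python sketch (the GlueX percentages are taken from the text; the dictionary keys are illustrative names):

```python
from math import sqrt

# GlueX model-dependent upper limits on Br[Pc+ -> J/psi p] in percent,
# assuming J^P = (3/2)^- for each state [4]
br_gluex = {"Pc(4312)": 4.6, "Pc(4440)": 2.3, "Pc(4457)": 3.8}

# Cross sections (52) scale as Br^2, so an order-of-magnitude tighter
# cross-section limit translates into a sqrt(10) tighter Br limit.
br_jpsi007 = {k: v / sqrt(10.0) for k, v in br_gluex.items()}
for name, limit in br_jpsi007.items():
    print(name, round(limit, 2), "%")
```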
Accounting for the fact that most of the narrow $P_{ci}^{+}$ and
$P_{ci}^{0}$ resonances ($i=1$, 2, 3, 4), having vacuum total decay widths in
their rest frames of 9.8, 29.0, 20.6 and 6.4 MeV [2, 3], respectively, decay
to ${J/\psi}p$ and ${J/\psi}n$ outside of the considered target nuclei [6], as
well as the results presented both in Refs. [6, 7, 10] and above by Eqs. (3),
(4), (51), (52), we can obtain the following expression for the $J/\psi$
inclusive differential cross section arising from the production and decay of
the intermediate resonances $P_{ci}^{+}$ and $P_{ci}^{0}$ in ${\gamma}A$
reactions:
$\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm sec})}({\bf p}_{\gamma},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}=\frac{d\sigma_{{\gamma}A\to
P_{ci}^{+}\to{J/\psi}p}^{({\rm sec})}({\bf p}_{\gamma},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}+\frac{d\sigma_{{\gamma}A\to
P_{ci}^{0}\to{J/\psi}n}^{({\rm sec})}({\bf p}_{\gamma},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}=$ (53) $=I_{V}[A,\sigma^{\rm
in}_{{P_{c}}N}]\left<\frac{d\sigma_{{\gamma}p\to P_{ci}^{+}\to{J/\psi}p}({\bf
p}_{\gamma},{\bf p}_{J/\psi})}{d{\bf p}_{J/\psi}}\right>_{A},i=1,2,3,4,$
where
$\left<\frac{d\sigma_{{\gamma}p\to P_{ci}^{+}\to{J/\psi}p}({\bf
p}_{\gamma},{\bf p}_{J/\psi})}{d{\bf p}_{J/\psi}}\right>_{A}=\int\int
P_{A}({\bf p}_{t},E)d{\bf p}_{t}dE\left[\frac{d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}p}(\sqrt{s^{*}},{\bf p}_{J/\psi})}{d{\bf
p}_{J/\psi}}\right],$ (54)
and
$\frac{d\sigma_{{\gamma}p\to P_{ci}^{+}\to{J/\psi}p}(\sqrt{s^{*}},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}=\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s^{*}},\Gamma_{ci}^{+})\theta[\sqrt{s^{*}}-(m_{J/\psi}+m_{p})]\times$
(55) $\times\frac{1}{\Gamma_{ci}^{+}(\sqrt{s^{*}},{\bf p}_{\gamma})}\int d{\bf
p}_{p}\frac{d\Gamma_{P_{ci}^{+}\to{J/\psi}p}(\sqrt{s^{*}},{\bf
p}_{J/\psi},{\bf p}_{p})}{d{\bf p}_{J/\psi}d{\bf p}_{p}},$
$\frac{d\Gamma_{P_{ci}^{+}\to{J/\psi}p}(\sqrt{s^{*}},{\bf p}_{J/\psi},{\bf
p}_{p})}{d{\bf p}_{J/\psi}d{\bf
p}_{p}}=\frac{1}{2E_{ci}^{+}}\frac{1}{2J+1}|M_{P_{ci}^{+}\to{J/\psi}p}|^{2}(2\pi)^{4}\delta(E_{ci}^{+}-E_{J/\psi}-E_{p})\times$
(56) $\times\delta({\bf p}_{ci}^{+}-{\bf p}_{J/\psi}-{\bf
p}_{p})\frac{1}{(2\pi)^{3}{2E_{J/\psi}}}\frac{1}{(2\pi)^{3}{2E_{p}}},$
$\Gamma_{ci}^{+}(\sqrt{s^{*}},{\bf
p}_{\gamma})=\Gamma_{ci}^{+}/\gamma_{ci}^{+},$ (57)
$E_{ci}^{+}=E_{\gamma}+E_{t},\,\,\,\,\,{\bf p}_{ci}^{+}={\bf p}_{\gamma}+{\bf
p}_{t},\,\,\,\,\,\gamma_{ci}^{+}=E_{ci}^{+}/\sqrt{s^{*}}.$ (58)
Here, $E_{p}$ is the final proton total energy ($E_{p}=\sqrt{m^{2}_{p}+{\bf
p}^{2}_{p}}$) and $|M_{P_{ci}^{+}\to{J/\psi}p}|^{2}$ is the matrix element
squared, summed over the spin states of the initial and final particles,
describing the decays (43) for given $i$. The quantity $I_{V}[A,\sigma^{\rm in}_{P_{c}N}]$ in
Eq. (53) is defined above by Eq. (4), in which one needs to make the
substitution $\sigma\to\sigma^{\rm in}_{P_{c}N}$. Here the quantity
$\sigma^{\rm in}_{P_{c}N}$ denotes the inelastic total cross section of the
free $P_{c}N$ interaction. Our estimates [6] 11)These estimates also show
that we can neglect quasielastic $P_{ci}^{+}{N}$ and $P_{ci}^{0}{N}$
rescatterings on their way out of the target nucleus.), based on the
${J/\psi}p$ molecular scenario for the $P_{c}^{+}$ pentaquarks, show that this
quantity can be evaluated as $\sigma^{\rm in}_{P_{c}N}\approx 33.5$ mb. We
will use this value throughout our calculations. As noted above, the
decays of the hidden-charm pentaquarks $P_{ci}^{+}$ (and $P_{ci}^{0}$) to ${J/\psi}p$
(and ${J/\psi}n$) are dominated by the lowest partial $s$-waves with zero
relative orbital angular momentum. This implies that the matrix elements
squared $|M_{P_{ci}^{+}\to{J/\psi}p}|^{2}$ (and
$|M_{P_{ci}^{0}\to{J/\psi}n}|^{2}$) lead to isotropic angular distributions
of the $P_{ci}^{+}\to{J/\psi}p$ (and $P_{ci}^{0}\to{J/\psi}n$) decays for the
considered spin-parity assignments of the $P_{ci}^{+}$ (and $P_{ci}^{0}$)
states. With this, we can readily obtain the following relation between
$|M_{P_{ci}^{+}\to{J/\psi}p}|^{2}$ and the partial width
$\Gamma_{P_{ci}^{+}\to{J/\psi}p}$ of the $P_{ci}^{+}\to{J/\psi}p$ decay (cf.
[10]):
$\frac{1}{2J+1}\frac{|M_{P_{ci}^{+}\to{J/\psi}p}|^{2}}{(2\pi)^{2}}=\frac{2s^{*}}{\pi{p^{*}_{J/\psi}}}\Gamma_{P_{ci}^{+}\to{J/\psi}p}.$
(59)
With it, we find for the expression (55) the simpler form (cf. Eq. (9)):
$\frac{d\sigma_{{\gamma}p\to P_{ci}^{+}\to{J/\psi}p}(\sqrt{s^{*}},{\bf
p}_{J/\psi})}{d{\bf p}_{J/\psi}}=\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s^{*}},\Gamma_{ci}^{+})\theta[\sqrt{s^{*}}-(m_{J/\psi}+m_{p})]\times$
(60)
$\times\frac{1}{I_{2}(s^{*},m_{J/\psi},m_{p})}Br[P_{ci}^{+}\to{J/\psi}p]\frac{1}{4E_{J/\psi}}\frac{1}{(\omega+E_{t})}\delta\left[\omega+E_{t}-\sqrt{m_{p}^{2}+({\bf
Q}+{\bf p}_{t})^{2}}\right],$
where the quantities $\omega$ and ${\bf Q}$ are defined above by Eq. (12). We
will employ this expression in our calculations of the $J/\psi$ momentum
distribution from the processes (41)–(44) in ${\gamma}A$ reactions.
Integrating the differential cross section (53) over the angular range of
${\Delta}{\bf\Omega}_{J/\psi}$=$0^{\circ}\leq\theta_{J/\psi}\leq 20^{\circ}$,
$0\leq\varphi_{J/\psi}\leq 2{\pi}$ of our interest, we represent this
distribution for given $i$ in this angular range in the following form (cf.
Eq. (24)):
$\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm
sec})}(p_{\gamma},p_{J/\psi})}{dp_{J/\psi}}=\int\limits_{{\Delta}{\bf\Omega}_{J/\psi}}d{\bf\Omega}_{J/\psi}\frac{d\sigma_{{\gamma}A\to{J/\psi}X}^{({\rm
sec})}({\bf p}_{\gamma},{\bf p}_{J/\psi})}{d{\bf p}_{J/\psi}}p_{J/\psi}^{2}$
(61) $=2{\pi}I_{V}[A,\sigma^{\rm in}_{{P_{c}}N}]\int\limits_{\cos
20^{\circ}}^{1}d\cos{{\theta_{J/\psi}}}\left<\frac{d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}{p}}(p_{\gamma},p_{J/\psi},\theta_{J/\psi})}{dp_{J/\psi}d{\bf\Omega}_{J/\psi}}\right>_{A},\,\,\,i=1,2,3,4.$
Before going to the next step, we calculate the free-space resonant $J/\psi$
energy distribution $d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}{p}}[\sqrt{s},p_{J/\psi}]/dE_{J/\psi}$ from the two-step
processes (41)/(43), proceeding on the free target proton at rest, in addition
to that from the background ${\gamma}p\to{J/\psi}p$ reaction (cf. Eq. (39)).
Energy-momentum conservation in these processes implies that
the kinematical characteristics of the $J/\psi$ mesons produced in them and
in this reaction are the same at a given incident photon energy. The full on-
shell differential cross section $d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}{p}}[\sqrt{s},{\bf p}_{J/\psi}]/d{\bf p}_{J/\psi}$ can be
obtained from the more general one (60) in the limits: ${\bf p}_{t}\to 0$,
$E_{t}\to m_{p}$ and $s^{*}\to s$. Its integration over the laboratory polar
angle $\theta_{J/\psi}$, when this angle belongs to the allowed angular
interval (25), gives:
$\frac{d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}{p}}[\sqrt{s},p_{J/\psi}]}{dE_{J/\psi}}=2\pi\int\limits_{\cos{\theta_{J/\psi}^{\rm
max}}}^{1}d\cos{\theta_{J/\psi}}p_{J/\psi}E_{J/\psi}\frac{d\sigma_{{\gamma}p\to
P_{ci}^{+}\to{J/\psi}p}[\sqrt{s},{\bf p}_{J/\psi}]}{d{\bf p}_{J/\psi}}=$ (62)
$=\sigma_{{\gamma}p\to
P_{ci}^{+}}(\sqrt{s},\Gamma_{ci}^{+})\theta[\sqrt{s}-(m_{J/\psi}+m_{p})]\times$
$\times\left(\frac{\sqrt{s}}{2p_{\gamma}p^{*}_{J/\psi}}\right)Br[P_{ci}^{+}\to{J/\psi}p]~{}{\rm
for}~{}E^{(2)}_{J/\psi}(0^{\circ})\leq E_{J/\psi}\leq
E^{(1)}_{J/\psi}(0^{\circ}).$
Eq. (62) shows that the free space $J/\psi$ energy distribution, which arises
from the production/decay chains (41)/(43), exhibits a completely flat
behavior within the allowed energy range (38).
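This flatness can be checked for consistency: the constant height factor $\sqrt{s}/(2p_{\gamma}p^{*}_{J/\psi})$ in Eq. (62), multiplied by the width of the allowed window (38), must equal unity, so that integrating the distribution over $E_{J/\psi}$ recovers the total cross section (52). A short Python verification (assumed mass values; an illustrative sketch added here):

```python
from math import sqrt

m_p, m_jpsi = 0.938272, 3.0969  # proton and J/psi masses in GeV (assumed)

def window_check(e_gamma):
    """Width of the allowed J/psi energy window at theta = 0, Eqs. (31), (38),
    multiplied by the height factor sqrt(s)/(2 p_gamma p*) of Eq. (62)."""
    s = m_p**2 + 2.0 * m_p * e_gamma
    w = sqrt(s)
    p_star = sqrt((s - (m_jpsi + m_p)**2) * (s - (m_jpsi - m_p)**2)) / (2.0 * w)
    e_star = sqrt(m_jpsi**2 + p_star**2)
    gamma_cm = (e_gamma + m_p) / w       # Lorentz factor of the c.m. frame
    v_cm = e_gamma / (e_gamma + m_p)     # c.m. velocity (p_gamma = E_gamma)
    e1 = gamma_cm * (e_star + v_cm * p_star)  # E^(1)(0 deg), forward solution
    e2 = gamma_cm * (e_star - v_cm * p_star)  # E^(2)(0 deg), backward solution
    return (e1 - e2) * w / (2.0 * e_gamma * p_star)

print(window_check(9.44))  # 1.0 up to floating-point rounding
```

Indeed, $E^{(1)}_{J/\psi}(0^{\circ})-E^{(2)}_{J/\psi}(0^{\circ})=2\gamma_{\rm cm}v_{\rm cm}p^{*}_{J/\psi}=2p_{\gamma}p^{*}_{J/\psi}/\sqrt{s}$, so the product is exactly unity at any photon energy above threshold.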
Figure 5: (Color online) The direct non-resonant $J/\psi$ energy distribution
in the free space elementary process ${\gamma}p\to{J/\psi}p$, calculated in
line with Eq. (39) at initial photon resonant energy of 9.44 GeV in the
laboratory system (solid curve). The resonant $J/\psi$ energy distributions in
the two-step processes ${\gamma}p\to P_{c}^{+}(4312)\to{J/\psi}p$,
${\gamma}p\to P_{c}^{+}(4337)\to{J/\psi}p$, ${\gamma}p\to
P_{c}^{+}(4440)\to{J/\psi}p$ and ${\gamma}p\to P_{c}^{+}(4457)\to{J/\psi}p$,
calculated in line with Eq. (62) at the same incident photon energy of 9.44
GeV assuming that the resonances $P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$,
$P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$ with the spin-parity assignments
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$,
correspondingly, all decay to the ${J/\psi}p$ with branching fractions 0.25%
(respectively, red dashed, blue dotted, dark cyan dashed-dotted and magenta
dashed-dotted-dotted curves). Incoherent sum of the direct non-resonant
$J/\psi$ energy distribution and resonant ones, calculated supposing that the
resonances $P_{c}^{+}(4312)$ and $P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$,
$P_{c}^{+}(4440)$, $P_{c}^{+}(4457)$ with the same spin-parity combinations
all decay to the ${J/\psi}p$ with branching fractions 0.25, 0.5 and 1%
(respectively, dark yellow short-dashed, wine short-dashed-dotted, olive
dashed-dotted and navy short-dotted, pink dotted, royal dashed-dotted-dotted
curves), all as functions of the total $J/\psi$ energy $E_{J/\psi}$ in the
laboratory system. The vertical dotted lines indicate the range of $J/\psi$
allowed energies in this system for the considered direct non-resonant and
resonant $J/\psi$ production off a free target proton at rest at given initial
photon resonant energy of 9.44 GeV. Figure 6: (Color online) The direct non-
resonant $J/\psi$ energy distribution in the free space elementary process
${\gamma}p\to{J/\psi}p$, calculated in line with Eq. (39) at initial photon
resonant energy of 9.554 GeV in the laboratory system (solid curve). The
resonant $J/\psi$ energy distributions in the two-step processes ${\gamma}p\to
P_{c}^{+}(4312)\to{J/\psi}p$, ${\gamma}p\to P_{c}^{+}(4337)\to{J/\psi}p$,
${\gamma}p\to P_{c}^{+}(4440)\to{J/\psi}p$ and ${\gamma}p\to
P_{c}^{+}(4457)\to{J/\psi}p$, calculated in line with Eq. (62) at the same
incident photon energy of 9.554 GeV assuming that the resonances
$P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$
with the spin-parity assignments $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$,
$J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$, correspondingly, all decay to the
${J/\psi}p$ with branching fractions 0.25% (respectively, red dashed, blue
dotted, dark cyan dashed-dotted and magenta dashed-dotted-dotted curves).
Incoherent sum of the direct non-resonant $J/\psi$ energy distribution and
resonant ones, calculated supposing that the resonances $P_{c}^{+}(4337)$ and
$P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$, $P_{c}^{+}(4457)$
with the same spin-parity combinations all decay to the ${J/\psi}p$ with
branching fractions 0.25, 0.5 and 1% (respectively, dark yellow short-dashed,
wine short-dashed-dotted, olive dashed-dotted and navy short-dotted, pink
dotted, royal dashed-dotted-dotted curves), all as functions of the total
$J/\psi$ energy $E_{J/\psi}$ in the laboratory system. The vertical dotted
lines indicate the range of $J/\psi$ allowed energies in this system for the
considered direct non-resonant and resonant $J/\psi$ production off a free
target proton at rest at given initial photon resonant energy of 9.554 GeV.
Figure 7: (Color online) The direct non-resonant $J/\psi$ energy distribution
in the free space elementary process ${\gamma}p\to{J/\psi}p$, calculated in
line with Eq. (39) at initial photon resonant energy of 10.04 GeV in the
laboratory system (solid curve). The resonant $J/\psi$ energy distributions in
the two-step processes ${\gamma}p\to P_{c}^{+}(4312)\to{J/\psi}p$,
${\gamma}p\to P_{c}^{+}(4337)\to{J/\psi}p$, ${\gamma}p\to
P_{c}^{+}(4440)\to{J/\psi}p$ and ${\gamma}p\to P_{c}^{+}(4457)\to{J/\psi}p$,
calculated in line with Eq. (62) at the same incident photon energy of 10.04
GeV assuming that the resonances $P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$,
$P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$ with the spin-parity assignments
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$,
correspondingly, all decay to the ${J/\psi}p$ with branching fractions 0.25%
(respectively, red dashed, blue dotted, dark cyan dashed-dotted and magenta
dashed-dotted-dotted curves). Incoherent sum of the direct non-resonant
$J/\psi$ energy distribution and resonant ones, calculated supposing that the
resonances $P_{c}^{+}(4440)$ and $P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$,
$P_{c}^{+}(4440)$, $P_{c}^{+}(4457)$ with the same spin-parity combinations
all decay to the ${J/\psi}p$ with branching fractions 0.25, 0.5 and 1%
(respectively, dark yellow short-dashed, wine short-dashed-dotted, olive
dashed-dotted and navy short-dotted, pink dotted, royal dashed-dotted-dotted
curves), all as functions of the total $J/\psi$ energy $E_{J/\psi}$ in the
laboratory system. The vertical dotted lines indicate the range of $J/\psi$
allowed energies in this system for the considered direct non-resonant and
resonant $J/\psi$ production off a free target proton at rest at given initial
photon resonant energy of 10.04 GeV. Figure 8: (Color online) The direct non-
resonant $J/\psi$ energy distribution in the free space elementary process
${\gamma}p\to{J/\psi}p$, calculated in line with Eq. (39) at initial photon
resonant energy of 10.12 GeV in the laboratory system (solid curve). The
resonant $J/\psi$ energy distributions in the two-step processes ${\gamma}p\to
P_{c}^{+}(4312)\to{J/\psi}p$, ${\gamma}p\to P_{c}^{+}(4337)\to{J/\psi}p$,
${\gamma}p\to P_{c}^{+}(4440)\to{J/\psi}p$ and ${\gamma}p\to
P_{c}^{+}(4457)\to{J/\psi}p$, calculated in line with Eq. (62) at the same
incident photon energy of 10.12 GeV assuming that the resonances
$P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$
with the spin-parity assignments $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$,
$J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$, correspondingly, all decay to the
${J/\psi}p$ with branching fractions 0.25% (respectively, red dashed, blue
dotted, dark cyan dashed-dotted and magenta dashed-dotted-dotted curves).
Incoherent sum of the direct non-resonant $J/\psi$ energy distribution and
resonant ones, calculated supposing that the resonances $P_{c}^{+}(4457)$ and
$P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$, $P_{c}^{+}(4457)$
with the same spin-parity combinations all decay to the ${J/\psi}p$ with
branching fractions 0.25, 0.5 and 1% (respectively, dark yellow short-dashed,
wine short-dashed-dotted, olive dashed-dotted and navy short-dotted, pink
dotted, royal dashed-dotted-dotted curves), all as functions of the total
$J/\psi$ energy $E_{J/\psi}$ in the laboratory system. The vertical dotted
lines indicate the range of $J/\psi$ allowed energies in this system for the
considered direct non-resonant and resonant $J/\psi$ production off a free
target proton at rest at given initial photon resonant energy of 10.12 GeV.
Figure 9: (Color online) The direct non-resonant momentum distribution of
$J/\psi$ mesons, produced in the reaction ${\gamma}{\rm{}^{12}C}\to{J/\psi}X$
in the laboratory polar angular range of $0^{\circ}$–$20^{\circ}$ and calculated in line with
Eq. (24) at initial photon resonant energy of 9.44 GeV in the laboratory
system (solid curve). The resonant momentum distributions of $J/\psi$ mesons,
produced in the two-step processes ${\gamma}p(n)\to
P_{c}^{+}(4312)(P_{c}^{0}(4312))\to{J/\psi}p(n)$, ${\gamma}p(n)\to
P_{c}^{+}(4337)(P_{c}^{0}(4337))\to{J/\psi}p(n)$, ${\gamma}p(n)\to
P_{c}^{+}(4440)(P_{c}^{0}(4440))\to{J/\psi}p(n)$ and ${\gamma}p(n)\to
P_{c}^{+}(4457)(P_{c}^{0}(4457))\to{J/\psi}p(n)$ and calculated in line with
Eq. (61) at the same incident photon energy of 9.44 GeV assuming that the
resonances $P_{c}^{+,0}(4312)$, $P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$ and
$P_{c}^{+,0}(4457)$ with the spin-parity assignments $J^{P}=(1/2)^{-}$,
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$, correspondingly,
all decay to the ${J/\psi}p(n)$ with branching fractions 0.25% (respectively,
red dashed, blue dotted, dark cyan dashed-dotted and magenta
dashed-dotted-dotted curves) and their incoherent sum (orange dotted curve). Incoherent sum
of the direct non-resonant $J/\psi$ momentum distribution and resonant ones,
calculated supposing that the resonances $P_{c}^{+,0}(4312)$,
$P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$, $P_{c}^{+,0}(4457)$ with the same
spin-parity combinations all decay to the ${J/\psi}p(n)$ with branching
fractions 0.25, 0.5 and 1% (respectively, green short-dashed, navy short-
dotted and pink short-dashed-dotted curves), all as functions of the $J/\psi$
momentum $p_{J/\psi}$ in the laboratory frame.
Figure 10: (Color online) The
direct non-resonant momentum distribution of $J/\psi$ mesons, produced in the
reaction ${\gamma}{\rm{}^{184}W}\to{J/\psi}X$ in the laboratory polar angular
range of 0∘–20∘ and calculated in line with Eq. (24) at initial photon
resonant energy of 9.44 GeV in the laboratory system (solid curve). The
resonant momentum distributions of $J/\psi$ mesons, produced in the two-step
processes ${\gamma}p(n)\to P_{c}^{+}(4312)(P_{c}^{0}(4312))\to{J/\psi}p(n)$,
${\gamma}p(n)\to P_{c}^{+}(4337)(P_{c}^{0}(4337))\to{J/\psi}p(n)$,
${\gamma}p(n)\to P_{c}^{+}(4440)(P_{c}^{0}(4440))\to{J/\psi}p(n)$ and
${\gamma}p(n)\to P_{c}^{+}(4457)(P_{c}^{0}(4457))\to{J/\psi}p(n)$ and
calculated in line with Eq. (61) at the same incident photon energy of 9.44
GeV assuming that the resonances $P_{c}^{+,0}(4312)$, $P_{c}^{+,0}(4337)$,
$P_{c}^{+,0}(4440)$ and $P_{c}^{+,0}(4457)$ with the spin-parity assignments
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$,
correspondingly, all decay to the ${J/\psi}p(n)$ with branching fractions
0.25% (respectively, red dashed, blue dotted, dark cyan dashed-dotted and
magenta dashed-dotted-dotted curves) and their incoherent sum (orange dotted
curve). Incoherent sum of the direct non-resonant $J/\psi$ momentum
distribution and resonant ones, calculated supposing that the resonances
$P_{c}^{+,0}(4312)$, $P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$,
$P_{c}^{+,0}(4457)$ with the same spin-parity combinations all decay to the
${J/\psi}p(n)$ with branching fractions 0.25, 0.5 and 1% (respectively, green
short-dashed, navy short-dotted and pink short-dashed-dotted curves), all as
functions of the $J/\psi$ momentum $p_{J/\psi}$ in the laboratory frame.
Figure 11: (Color online) The same as in Fig. 9, but for the initial photon
energy of 10.12 GeV.
Figure 12: (Color online) The same as in Fig. 10, but for
the initial photon energy of 10.12 GeV.
## 3\. Results
The free space direct non-resonant $J/\psi$ production total cross section
(23) (solid curve), the total cross section for the resonant $J/\psi$
production in the processes (41)/(43) determined on the basis of Eq. (52) for
the considered spin-parity assignments of the hidden-charm resonances
$P_{ci}^{+}$ ($i=1$, 2, 3, 4) and for branching ratios
$Br[P_{ci}^{+}\to{J/\psi}p]=1$% for all four $P_{ci}^{+}$ states (short-dashed
curve) and the combined (non-resonant plus resonant) $J/\psi$ production total
cross section (dotted curve) are presented in Fig. 4 as functions of photon
energy. It can be seen from this figure that the $P_{c}^{+}(4312)$ and
$P_{c}^{+}(4337)$ as well as the $P_{c}^{+}(4440)$ and $P_{c}^{+}(4457)$
resonances exhibit themselves as two pairs of narrow overlapping peaks, at
$E_{\gamma}=9.44$ and 9.554 GeV and at $E_{\gamma}=10.04$ and 10.12 GeV,
respectively. The strengths of these four peaks reach values of $\sim$
0.1–0.2 nb, whereas the non-resonant contribution in the resonance region is
about 1 nb. As a result, the combined total cross section of the reaction
${\gamma}p\to{J/\psi}p$ shows no distinct peak structures corresponding to
the $P_{ci}^{+}$ states and is practically indistinguishable from that of the
background reaction. If $Br[P_{ci}^{+}\to{J/\psi}p]=0.25$ or 0.5%, the
resonant $J/\psi$ yield will be even smaller relative to the non-resonant
one. This means that it will be very hard to observe the $P_{ci}^{+}$
pentaquark states in the $J/\psi$ total photoproduction cross section on a
proton target in the near-threshold energy region. Evidently, to see them
experimentally one needs an observable that is appreciably sensitive to the
$P_{c}^{+}$ signal in some region of the available phase space. One example
is the large-$|t|$ region of the differential cross section $d\sigma/dt$
probed in the $J/\psi$-007 experiment [17], where the $t$-dependence of the
background $J/\psi$ meson production is strongly suppressed while the
resonant production is rather flat. This is also
supported by the findings of Ref. [25], which considered the photoproduction
on a proton target of the hidden-charm pentaquark states $P_{c}^{+}(4380)$
and $P_{c}^{+}(4450)$ initially claimed by the LHCb Collaboration, with the
spin-parity assignments $(3/2^{-},5/2^{+})$ or $(3/2^{+},5/2^{-})$,
respectively, by including the $t$-channel diffractive Pomeron exchanges and
the $s$-channel pentaquark production. There, assuming that the pentaquark
states decay into the ${J/\psi}p$ mode with a branching fraction of 5%, it
was shown in particular that the contributions from the $P_{c}^{+}$ states,
calculated at the resonant c.m. energies $W=4.38$ and 4.45 GeV for the two
spin-parity combinations considered, cause the differential cross section of
the ${\gamma}p\to{J/\psi}p$ reaction to deviate strongly from the diffractive
one at off-forward angles in the c.m. frame. This cross section is indeed
rather flat at these angles and significantly exceeds the contribution from
the diffractive Pomeron exchanges there. Furthermore, the predictions for the
differential cross section of the ${\gamma}p\to{J/\psi}p$ reaction obtained
in Ref. [26] demonstrate that the $N^{*}_{c{\bar{c}}}$ states can be readily
identified in the near-threshold differential cross section of this process
at large angles, where the contribution from Pomeron exchanges becomes
insignificant. In that approach, a Pomeron-exchange model with parameters
determined from fitting the available total cross section data up to $W=300$
GeV is used to calculate the non-resonant amplitudes; the partial decay
widths of nucleon resonances with hidden charm, $N^{*}_{c{\bar{c}}}$,
predicted by the considered meson-baryon ($MB$) coupled-channel models are
used to estimate the $N^{*}_{c{\bar{c}}}\to MB$ transition matrix elements;
and the vector-meson dominance model is used to evaluate ${\gamma}p\to
N^{*}_{c{\bar{c}}}$ as ${\gamma}p\to Vp\to N^{*}_{c{\bar{c}}}$ with
$V=\rho,\omega,J/\psi$. It should also be noted that an
earlier prediction of the differential cross section of this process, made in
Ref. [27] at the resonant energy point $W=4.412$ GeV, shows as well that this
cross section depends only weakly on the c.m. $J/\psi$ production angle.
There, the non-resonant (${\gamma}p\to{J/\psi}p$) and resonant
(${\gamma}p\to N^{*}_{c{\bar{c}}}(4412)\to{J/\psi}p$) ${J/\psi}p$
photoproduction were described using, respectively, the two-gluon and
three-gluon exchange model [14] and the vector-meson dominance model, in
which the photon converts into the vector mesons $\rho,\omega,J/\psi$ that
rescatter off the target proton to form the intermediate hidden-charm nucleon
resonance $N^{*}_{c{\bar{c}}}(4412)$. It should be additionally pointed out
that the feasibility of detecting
the $P_{c}^{+}(4450)$ resonance with the spin-parity quantum numbers
$J^{P}=3/2^{-}$ and $J^{P}=5/2^{+}$ in near-threshold $J/\psi$ photoproduction
off protons in the CLAS12 experiment at JLab was also discussed in Ref. [23]
in the framework of a two-component model containing the directly produced
resonance and a diffractive background, and accounting for experimental
resolution effects. The contribution of the $P_{c}^{+}(4450)$ state, produced
through the vector-meson dominance mechanism, was parametrized using the
Breit-Wigner ansatz and the non-resonant contribution was described by the
Pomeron exchanges. A fit with this model to the data points available at
that time for the differential cross section of the ${\gamma}p\to{J/\psi}p$
reaction, with $|t|\leq 1.5$ GeV$^{2}$ and covering the energy range from
threshold to $E_{\gamma}\sim 120$ TeV in the lab frame, showed that the upper
limits on the branching ratio $Br[P_{c}^{+}(4450)\to{J/\psi}p]$ of the
$P_{c}^{+}(4450)$ pentaquark range from 23% to 30% for $J=3/2$, depending on
the experimental resolution, and from 8% to 17% for $J=5/2$. These limits are
substantially larger than those of a few percent set later by the GlueX
Collaboration [4]. Finally, it is worth noting that the photoproduction of the
$J/\psi$ off the proton near threshold was studied in Ref. [28] using a novel
final ${J/\psi}p$ production mechanism via the open charm
$\Lambda_{c}^{+}{\bar{D}}^{0}$ and $\Lambda_{c}^{+}{\bar{D}}^{*0}$
intermediate states. The authors found that the existing experimental data [4]
on ${\gamma}p\to{J/\psi}p$ can be well described within the suggested
mechanism. Moreover, they identified a clear experimental signature of this
mechanism: within it, pronounced cusps must appear at the
$\Lambda_{c}^{+}{\bar{D}}^{0}$ and $\Lambda_{c}^{+}{\bar{D}}^{*0}$ thresholds
in the energy dependence of the total cross section of the
${\gamma}p\to{J/\psi}p$ reaction, and the data [4] were found to be
consistent with this feature within their accuracy. One may hope that further measurements of
the $J/\psi$ photoproduction off the proton at JLab with higher statistics
than GlueX will provide a deeper understanding of the $J/\psi$ photoproduction
mechanism.
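As a numerical cross-check of the resonant energies quoted above: in fixed-target kinematics, the photon energy at which the c.m. energy $W$ equals a resonance mass $M_{R}$ is $E_{\gamma}^{\rm res}=(M_{R}^{2}-m_{p}^{2})/(2m_{p})$, and the allowed $J/\psi$ lab-energy window at a given $E_{\gamma}$ follows from two-body phase space. A short sketch (the masses are standard PDG-like values, assumed here rather than taken from the paper):

```python
import math

M_P, M_JPSI = 0.9383, 3.0969  # proton and J/psi masses in GeV (assumed)

# Resonant photon energies: s = m_p^2 + 2 m_p E_gamma = M_R^2
for M_R in (4.312, 4.337, 4.440, 4.457):
    E_res = (M_R**2 - M_P**2) / (2 * M_P)
    print(f"P_c({M_R}) -> E_gamma = {E_res:.2f} GeV")  # 9.44, 9.55, 10.04, 10.12

# Allowed J/psi lab energies in gamma p -> J/psi p at E_gamma = 10.12 GeV
E_g = 10.12
s = M_P**2 + 2 * M_P * E_g                              # squared c.m. energy
E_cm = (s + M_JPSI**2 - M_P**2) / (2 * math.sqrt(s))    # J/psi c.m. energy
p_cm = math.sqrt(E_cm**2 - M_JPSI**2)                   # J/psi c.m. momentum
gam, beta_gam = (E_g + M_P) / math.sqrt(s), E_g / math.sqrt(s)
E_lo, E_hi = gam * E_cm - beta_gam * p_cm, gam * E_cm + beta_gam * p_cm
print(f"E_J/psi window: [{E_lo:.2f}, {E_hi:.2f}] GeV")  # roughly [6.1, 9.8]
```

The window boundaries correspond to backward and forward c.m. emission of the $J/\psi$, and the "low"-energy region below about 7.25 GeV, where the resonant contributions dominate, sits well inside this window.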
With the above in mind, we now consider the $J/\psi$ energy distribution from
the ${\gamma}p\to{J/\psi}p$ elementary reaction. Our model allows us to
calculate the direct non-resonant $J/\psi$ energy distribution from this
reaction and the resonant ones from the production/decay chains (41)/(43),
proceeding on a free target proton at rest. They were calculated according to
Eqs. (39) and (62), respectively, for the incident photon resonant energies
of 9.44, 9.554, 10.04 and 10.12 GeV. The resonant $J/\psi$ energy
distributions were determined for the considered spin-parity assignments of
the $P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$ and
$P_{c}^{+}(4457)$ resonances with branching fractions
$Br[P_{ci}^{+}\to{J/\psi}p]=0.25$% for all four states. These dependencies
are shown in Figs. 5, 6, 7 and 8, respectively, as functions of the $J/\psi$
total energy $E_{J/\psi}$, together with the incoherent sums of the
non-resonant $J/\psi$ energy distribution and the resonant ones, calculated
assuming that either the single resonance located at the given photon energy
($P_{c}^{+}(4312)$, $P_{c}^{+}(4337)$, $P_{c}^{+}(4440)$ and
$P_{c}^{+}(4457)$ in Figs. 5, 6, 7 and 8, respectively) or all four
resonances $P_{ci}^{+}$ ($i=1$, 2, 3, 4) decay to the ${J/\psi}p$ mode, for
the three adopted options for the branching ratios
$Br[P_{ci}^{+}\to{J/\psi}p]$. It is seen from
these figures that the resonant $J/\psi$ production cross sections show a
flat behavior at all allowed energies $E_{J/\psi}$, whereas the non-resonant
cross section drops rapidly as $E_{J/\psi}$ decreases. At the incident photon
resonant energies of 9.44, 9.554, 10.04 and 10.12 GeV of interest, the
non-resonant strength is substantially larger than those of the resonant
$J/\psi$ production cross sections, calculated for the branching ratio
$Br[P_{ci}^{+}\to{J/\psi}p]=0.25$%, at "high" allowed $J/\psi$ total energies
above $\approx$ 7.25 GeV. At "low" $J/\psi$ total energies (below 7.25 GeV),
in contrast, for each considered photon energy the contribution from the
resonance with its centroid at that energy, decaying to the ${J/\psi}p$ with
a branching ratio of 0.25%, is much larger than the non-resonant one.
For instance, for $J/\psi$ mesons with a total energy of 6.5 GeV, the
resonant production cross section is enhanced relative to the non-resonant
one by sizeable factors of about 2.9, 3.6, 9.5 and 22.5 at initial photon
energies of 9.44, 9.554, 10.04 and 10.12 GeV, respectively. Moreover, this
contribution is also substantially larger than those arising from the decays
of the other three pentaquarks to the ${J/\psi}p$ channel with branching
ratios $Br[P_{ci}^{+}\to{J/\psi}p]=0.25$% at the above-mentioned "low"
$J/\psi$ total energies. As a result, at each considered photon energy the
combined $J/\psi$ energy distribution, arising from the direct $J/\psi$
production and from the decay to the ${J/\psi}p$ mode of the pentaquark
resonance located at this energy, reveals a clear sensitivity to the adopted
variations in the branching ratio of this decay.
For example, for $J/\psi$ mesons with a total energy of 6.5 GeV and for the
lowest considered incident photon energy of 9.44 GeV, this combined $J/\psi$
distribution is enhanced, for values of this ratio of 0.25, 0.5 and 1%, by
notable factors of about 4.0, 12.5 and 46.8, respectively, as compared to
that from the directly produced $J/\psi$ mesons. For the highest initial
photon energy of interest, 10.12 GeV, at which the resonance
$P_{c}^{+}(4457)$ appears as a peak structure in the total cross section of
the exclusive reaction ${\gamma}p\to{J/\psi}p$, the analogous factors become
much larger, about 23.5, 90.8 and 360.3, respectively. Furthermore, this
"partial" combined energy distribution of the $J/\psi$ mesons is practically
indistinguishable from their "total" combined differential energy
distribution, arising from the direct $J/\psi$ production and the resonant
production via the production/decay chains (41)/(43). This means, on the one
hand, that the differences between the combined results obtained with a
conservative branching fraction of 0.25% for the decays
$P_{ci}^{+}\to{J/\psi}p$ and the non-resonant background, as well as the
differences between the combined results obtained with branching fractions of
0.25 and 0.5%, or of 0.5 and 1%, are quite sizeable and experimentally
measurable at "low" charmonium total energies. On the other hand, at each
incident photon resonant energy considered, the observation of the
corresponding hidden-charm LHCb pentaquark will be practically unaffected by
the presence of the other three hidden-charm pentaquark states and by the
background reaction. Since the $J/\psi$ production differential cross
sections have small absolute values of $\sim$ 0.01–0.1 nb/GeV at "low"
$J/\psi$ total energies $E_{J/\psi}$, their measurement requires both high
luminosities and large-acceptance detectors. Such a measurement might be
performed in the near future at JLab in Hall A within the planned
high-statistics ($\sim$ 800k $J/\psi$ events in photoproduction) and
high-precision E12-12-006 experiment using the SoLID detector [5, 17].
The momentum dependencies of the absolute non-resonant, resonant and combined
$J/\psi$ differential cross sections, arising, respectively, from the direct
(1), (2), two-step (41)/(43), (42)/(44), and direct plus two-step $J/\psi$
production processes in $\gamma^{12}$C and $\gamma^{184}$W interactions,
calculated on the basis of Eqs. (24) and (61) for laboratory polar angles of
0∘–20∘ and for the lowest incident photon resonant energy of 9.44 GeV, are
shown, respectively, in Figs. 9 and 10. The same quantities, but for the
highest initial photon resonant energy of 10.12 GeV, are presented in
Figs. 11 and 12. The resonant
momentum differential cross sections for the production of $J/\psi$ mesons in
the two-step processes ${\gamma}p\to P_{ci}^{+}\to{J/\psi}p$ and ${\gamma}n\to
P_{ci}^{0}\to{J/\psi}n$ ($i=1$, 2, 3, 4), proceeding on the intranuclear
nucleons of carbon and tungsten target nuclei, were obtained for three
employed values of the branching ratios $Br[P_{ci}^{+}\to{J/\psi}p]$ and
$Br[P_{ci}^{0}\to{J/\psi}n]$. It can be seen from these figures that the
total contribution to the $J/\psi$ production on both of these nuclei, coming
from the intermediate $P_{ci}^{+}$ and $P_{ci}^{0}$ states decaying to the
${J/\psi}p$ and ${J/\psi}n$ modes with branching fractions of 0.25%, shows a
practically flat behavior and is significantly larger than that from the
background processes (1), (2) in the "low"-momentum regions of 4.5–5.5 GeV/c
and 4.5–6 GeV/c for the considered photon beam energies of 9.44 and 10.12 GeV, respectively.
As a result, in these regions the combined charmonium yield is completely
governed by the presence of the $P_{ci}^{+}$ and $P_{ci}^{0}$ states in its
production. Its strength is almost entirely determined by the branching
ratios $Br[P_{ci}^{+}\to{J/\psi}p]$ and $Br[P_{ci}^{0}\to{J/\psi}n]$ used in
the calculations; it is still large enough to be measured, as one may hope,
at CEBAF (cf. [13]), and it increases by a factor of about ten for both
photon beam energies considered when going from the carbon target nucleus to
the tungsten one. (It is interesting to note that the photoproduction of a
$J/\psi$-$^{3}$He bound state ($[^{3}{\rm He}]_{J/\psi}$) on a $^{4}$He
target has been investigated in Ref. [29] using the impulse approximation,
several $\gamma+N\to J/\psi+N$ models based on Pomeron exchange and
accounting for the pion-exchange mechanism at low energies, and various
$J/\psi$-nucleus potentials. The upper boundary of the predicted total cross
sections was found to be very small, about 0.1–0.3 pb. The possibility of
photoproduction of a six-quark-$J/\psi$ bound state ($[q^{6}]_{J/\psi}$) on a
$^{3}$He target has been studied in Ref. [29] as well. The upper boundary of
the predicted total cross sections of $\gamma+^{3}{\rm
He}\to[q^{6}]_{J/\psi}+N$ was obtained to be slightly larger than in the
preceding case, about 2–4 pb, depending on the model of $\gamma+N\to
J/\psi+N$ used in the calculations. These predictions may facilitate the
planning of possible measurements of the $[^{3}{\rm He}]_{J/\psi}$ and
$[q^{6}]_{J/\psi}$ bound states at JLab.) This leads to the well separated and
experimentally distinguishable differences between all combined calculations
corresponding to the adopted options for these ratios, for both target nuclei
and for both photon energies considered. Since the $J/\psi$ production
differential cross sections at a photon beam energy of 9.44 GeV are larger
than those at 10.12 GeV by a factor of about 20 in the above "low"-momentum
regions, their measurement on light and especially on heavy nuclear targets
at photon energies in the "low"-energy resonance region will open an
opportunity to determine the above branching ratios accurately, or at least
to distinguish between their realistic options of 0.25, 0.5 and 1%. Such
measurements could also be performed in the future at JLab in the framework
of the proposed E12-12-006 experiment [5, 17].
In view of the above, we conclude that near-threshold measurements of the
$J/\psi$ energy and momentum distributions in photon-induced reactions on
both protons and nuclear targets will provide further evidence for the
existence of the pentaquark $P_{ci}^{+}$ and $P_{ci}^{0}$ resonances and will
shed light on their decay rates to the ${J/\psi}p$ and ${J/\psi}n$ channels.
## 4\. Epilogue
In this paper we studied the near-threshold $J/\psi$ meson photoproduction
from protons and nuclei by considering incoherent direct non-resonant
(${\gamma}p\to{J/\psi}p$, ${\gamma}n\to{J/\psi}n$) and two-step resonant
(${\gamma}p\to P_{ci}^{+}\to{J/\psi}p$, ${\gamma}n\to P_{ci}^{0}\to{J/\psi}n$,
$i=1$, 2, 3, 4; $P_{c1}^{+,0}=P_{c}^{+,0}(4312)$,
$P_{c2}^{+,0}=P_{c}^{+,0}(4337)$, $P_{c3}^{+,0}=P_{c}^{+,0}(4440)$,
$P_{c4}^{+,0}=P_{c}^{+,0}(4457)$) charmonium production processes. We have
calculated the absolute excitation functions, energy and momentum
distributions for the non-resonant, resonant and for the combined (non-
resonant plus resonant) production of $J/\psi$ mesons on protons as well as,
using the nuclear spectral function approach, on carbon and tungsten target
nuclei at near-threshold incident photon energies by assuming the spin-parity
assignments of the hidden-charm resonances $P_{c}^{+,0}(4312)$,
$P_{c}^{+,0}(4337)$, $P_{c}^{+,0}(4440)$ and $P_{c}^{+,0}(4457)$ as
$J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$, $J^{P}=(1/2)^{-}$ and $J^{P}=(3/2)^{-}$
within three different realistic scenarios for the branching ratios of their
decays to the ${J/\psi}p$ and ${J/\psi}n$ modes (0.25, 0.5 and 1%). It was
shown that it will be very hard to measure the $P_{ci}^{+}$ pentaquark states
through a scan of the $J/\psi$ total photoproduction cross section on a
proton target in the near-threshold energy region around the resonant photon
energies of 9.44, 9.554, 10.04 and 10.12 GeV if these branching ratios are
$\sim$ 1% or less. It was also demonstrated that at these photon beam
energies the combined $J/\psi$ energy and momentum distributions reveal
distinct sensitivity to the above scenarios at "low" $J/\psi$ total energies
and momenta, respectively, which implies that they may be an important tool
for providing further evidence for the existence of the pentaquark
$P_{ci}^{+}$ and $P_{ci}^{0}$ resonances and for obtaining valuable
information on their decay rates to the ${J/\psi}p$ and ${J/\psi}n$ final
states. Measurements of these distributions could be performed in the near
future at JLab in Hall A within the planned high-statistics ($\sim$ 800k
$J/\psi$ events in photoproduction) and high-precision E12-12-006 experiment
using the SoLID detector.
## References
* [1] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 115, 072001 (2015);
arXiv:1507.03414 [hep-ex].
* [2] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 122, 222001 (2019);
arXiv:1904.03947 [hep-ex].
* [3] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 128, 062001 (2022);
arXiv:2108.04720 [hep-ex].
* [4] A. Ali et al. (The GlueX Collaboration), Phys. Rev. Lett. 123, 072001 (2019);
arXiv:1905.10811 [nucl-ex].
* [5] J. Arrington et al., arXiv:2112.00060 [nucl-ex].
* [6] E. Ya. Paryev and Yu.T. Kiselev, Nucl. Phys. A 978, 201 (2018);
arXiv:1810.01715 [nucl-th].
* [7] E. Ya. Paryev, Nucl. Phys. A 996, 121711 (2020);
arXiv:2003.00788 [nucl-th].
* [8] E. Ya. Paryev and Yu.T. Kiselev, Phys. At. Nucl. 81 (5), 566 (2018).
* [9] E. Ya. Paryev, Yu. T. Kiselev and Yu. M. Zaitsev, Nucl. Phys. A 968, 1 (2017).
* [10] E. Ya. Paryev, Nucl. Phys. A 1023, 122452 (2022);
arXiv:2205.00728 [hep-ph].
* [11] S. V. Efremov and E. Ya. Paryev, Eur. Phys. J. A 1, 99 (1998).
* [12] E. Ya. Paryev, Eur. Phys. J. A 7, 127 (2000).
* [13] B. Duran et al., arXiv:2207.05212 [nucl-ex].
* [14] S. J. Brodsky, E. Chudakov, P. Hoyer and J. M. Laget, Phys. Lett. B 498, 23 (2001).
* [15] T. Gutsche and V. E. Lyubovitskij, Phys. Rev. D 100, 094031 (2019);
arXiv:1910.03984 [hep-ph].
* [16] X.-Y. Wang, J. He, X.-R. Chen, Q. Wang, and X. Zhu, Phys. Lett. B 797, 134862 (2019);
arXiv:1906.04044 [hep-ph].
* [17] S. Joosten. Quarkonium production near threshold at JLab and EIC,
9th Workshop of the APS Topical Group on hadron physics (2021).
URL
https://indico.jlab.org/event/412/contributions/8266/attachments/6888/9385/20210416-GHP-
Jpsi-Threshold.pdf
* [18] X. Cao, J.-P. Dai, Phys. Rev. D 100, 054033 (2019);
arXiv:1904.06015 [hep-ph].
* [19] M. Karliner and J. L. Rosner, Phys. Lett. B 752, 329 (2016);
arXiv:1508.01496 [hep-ph].
* [20] X.-Y. Wang, X.-R. Chen, and J. He, Phys. Rev. D 99, 114007 (2019).
* [21] C.-J. Xiao et al., Phys. Rev. D 100, 014022 (2019).
* [22] H. X. Chen, W. Chen and S.-L. Zhu, Phys. Rev. D 100, 051501 (2019);
arXiv:1903.11001 [hep-ph].
* [23] A. N. Hiller Blin et al., Phys. Rev. D 94, 034002 (2016);
arXiv:1606.08912 [hep-ph].
* [24] E. Ya. Paryev, Chin. Phys. C 44, 104101 (2020);
arXiv:2007.01172 [nucl-th].
* [25] Q. Wang, X.-H. Liu, and Q. Zhao, Phys. Rev. D 92, 034022 (2015);
arXiv:1508.00339 [hep-ph].
* [26] J. J. Wu, T.-S. H. Lee, and B. S. Zou, Phys. Rev. C 100, 035206 (2019);
arXiv:1906.05375 [nucl-th].
* [27] Y. Huang, J. He, H.-F. Zhang, and X.-R. Chen, J. Phys. G: Nucl. Part. Phys. 41, 115004 (2014);
arXiv:1305.4434 [nucl-th].
* [28] M.-L. Du, V. Baru, F.-K. Guo, C. Hanhart, U.-G. Meissner, A. Nefediev, I. Strakovsky, Eur. Phys. J. C 80, 1053 (2020);
arXiv:2009.08345 [hep-ph].
* [29] J. J. Wu and T.-S. H. Lee, Phys. Rev. C 86, 065203 (2012);
arXiv:1210.6009 [nucl-th].
Data-efficient Modeling of Optical Matrix Multipliers Using Transfer Learning
A. Cem1,*, O. Jovanovic1, S. Yan2, Y. Ding1, D. Zibar1, F. Da Ros1
1DTU Electro, Technical University of Denmark, DK-2800, Kongens Lyngby,
Denmark
2School of Optical & Electrical Information, Huazhong Univ. of Science and
Technology, 430074, Wuhan, China
<EMAIL_ADDRESS>
###### Abstract
We demonstrate transfer learning-assisted neural network models for optical
matrix multipliers with scarce measurement data. Our approach uses $<10\%$ of
the experimental data needed for best performance and outperforms analytical
models for a Mach-Zehnder interferometer mesh.
## 1 Introduction
Optical neural networks (NNs) have emerged as a high-speed and energy-
efficient solution for accelerating machine learning tasks [1]. Various
photonic integrated circuit (PIC) architectures have been proposed for
implementing linear layers through optical matrix multiplication (OMM).
Specifically, performing unitary transformations using Mach-Zehnder
interferometer (MZI) meshes has attracted considerable attention in recent years [1, 2].
For OMM, MZI meshes are programmed by tuning the phase shifters associated
with the individual MZIs such that a desired weight matrix is realized. A
common choice is to use thermo-optic phase shifters, for which analytical
models relating the heater voltages to the phase shifts exist [3]. Programming
a chip accurately using such models may be challenging due to fabrication
errors and thermal crosstalk, which has led to the emergence of a variety of
offline and online calibration techniques [4, 5]. One such offline technique
proposes to model the MZI mesh using a NN that can be trained using
experimental measurements. However, this approach requires significantly more
measurements compared to analytical models to outperform them [6].
In this work, we propose the use of a simple analytical model fitted to the
PIC to numerically generate a synthetic dataset, which is used to pre-train a
NN model. Then, we apply transfer learning (TL) by re-training a part of the
NN model with few experimental measurements to reduce the impact of the
discrepancies between the experimental and numerical training data, similar to
the proposal of [7] for Raman amplifiers. The TL-assisted model achieves a
root-mean-square modeling error of 1.6 dB for a fabricated PIC, 0.5 dB lower
than the analytical model, and it approaches the performance of the NN over
the full dataset with only $<10\%$ of experimental data.
## 2 Data-efficient Modeling of MZI Meshes with Transfer Learning
The model for a MZI mesh implementing OMM relates the tunable heater voltages
$\mathbf{V}$ to the matrix describing the implemented linear transformation
$\mathbf{W}$. An analytical model (AM) that can be trained to model
fabrication errors and also account for thermal crosstalk is given below [6]:
$W_{i,j}=\alpha_{i,j}\prod_{m\in M_{i,j}}\frac{1}{4}\left|\frac{\sqrt{ER}-1}{\sqrt{ER}+1}\pm e^{i(\phi^{(0)}_{m}+\sum_{n=1}^{N_{MZI}}\phi^{(2)}_{m,n}V_{n}^{2})}\right|^{2},$
(1)
where $\alpha$ is the optical loss, $M_{i,j}$ is the set of MZIs from input
$j$ to output $i$, $ER$ is the MZI extinction ratio, $N_{MZI}$ is the number
of MZIs, and $\phi^{(0)}$ and $\phi^{(2)}$ are the phase parameters to be trained.
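To make Eq. (1) concrete, here is a minimal numerical sketch of the analytical model for a single matrix element, assuming a one-MZI light path ($M_{i,j}$ containing a single MZI) and illustrative values for the loss, extinction ratio, and phase parameters (all numbers below are assumptions for illustration, not fitted values from this experiment):

```python
import numpy as np

def mzi_power(V, phi0, phi2, ER_dB=30.0, sign=+1):
    """One factor of Eq. (1): a quadratic heater-phase model (the phi2 vector
    carries the thermal-crosstalk terms) plus a finite-extinction-ratio leak."""
    ER = 10 ** (ER_dB / 10)                          # extinction ratio, linear
    leak = (np.sqrt(ER) - 1) / (np.sqrt(ER) + 1)
    phase = phi0 + np.sum(np.asarray(phi2) * np.asarray(V) ** 2)
    return 0.25 * np.abs(leak + sign * np.exp(1j * phase)) ** 2

alpha = 0.8                                          # optical loss (assumed)
V = np.array([1.0, 0.3, 0.0])                        # heater voltages, 0-2 V range
phi2 = np.array([0.9, 0.05, 0.0])                    # rad/V^2; small entries = crosstalk
W_ij = alpha * mzi_power(V, phi0=0.2, phi2=phi2)
print(f"W_ij = {W_ij:.3f} ({10 * np.log10(W_ij):.1f} dB)")
```

Note how a finite extinction ratio bounds the achievable suppression: even at a phase of $\pi$ the leakage term keeps $W_{i,j}$ above zero, which is why weights far below the extinction floor appear only in idealized synthetic data.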
Alternatively, a NN can model a PIC with higher accuracy compared to AM.
However, training a NN model requires a high number of experimental
measurements, and AM outperforms the NN model when only a few are available [6].
When working with a limited number of measurements, we propose to combine the
two modeling approaches: (i) train AM with the available experimental data,
(ii) generate synthetic data numerically using AM, (iii) pre-train NN model
using synthetic data, (iv) re-train NN model using experimental data to
improve accuracy. The NN weights and biases in the first layer are kept
constant after pre-training to preserve the knowledge gained from AM, as shown
in Fig. 1a).
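Steps (i)-(iv) can be sketched with a toy two-layer network in plain NumPy; the data-generating function, layer sizes, and learning rate below are illustrative stand-ins rather than the paper's actual AM or PIC measurements, and the essential point is step (iv), where the first-layer parameters stay frozen:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    """Toy two-layer network y = tanh(X @ W1) @ W2 (biases omitted for brevity)."""
    return np.tanh(X @ W1) @ W2

def train(W1, W2, X, Y, lr=0.05, steps=300, freeze_first=False):
    """Full-batch gradient descent on the mean-squared error."""
    for _ in range(steps):
        H = np.tanh(X @ W1)
        E = H @ W2 - Y                                   # residuals
        gW2 = 2 * H.T @ E / len(X)
        if not freeze_first:                             # step (iv) skips this update
            W1 = W1 - lr * 2 * X.T @ ((E @ W2.T) * (1 - H**2)) / len(X)
        W2 = W2 - lr * gW2
    return W1, W2

# (i)-(ii): an assumed "analytical model" supplies plentiful synthetic data
X_syn = rng.uniform(0, 2, (1000, 3))                     # stand-in voltage settings
Y_syn = np.sin(X_syn).sum(axis=1, keepdims=True)         # stand-in AM response

# (iii): pre-train the network on the synthetic data
W1 = rng.normal(0.0, 0.5, (3, 16))
W2 = rng.normal(0.0, 0.5, (16, 1))
W1, W2 = train(W1, W2, X_syn, Y_syn)

# (iv): re-train on scarce "experimental" data with the first layer frozen
X_exp = rng.uniform(0, 2, (40, 3))
Y_exp = np.sin(X_exp).sum(axis=1, keepdims=True) + 0.05  # AM + model mismatch
W1_pre = W1.copy()
W1, W2 = train(W1, W2, X_exp, Y_exp, freeze_first=True)

assert np.array_equal(W1, W1_pre)                        # first-layer knowledge preserved
print("re-trained MSE:", float(np.mean((forward(W1, W2, X_exp) - Y_exp) ** 2)))
```

In the actual model of Fig. 1a), freezing the first layer preserves what was learned from the AM-generated synthetic data, while the re-trained layers absorb the residual mismatch between the AM and the fabricated device.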
## 3 Experimental Setup and Results
The experimental setup for OMM by a $3\times 3$ matrix is shown in Fig. 1b).
Details regarding the PIC and the measurement procedure are discussed in [8]
and [6], respectively. Values for the $9$ applied voltages were sampled from
$9$ i.i.d. uniform distributions from $0$ to $2$ V, which corresponds to one
half-period of the MZIs. $400$ sets of $\{\mathbf{V},\mathbf{W}\}$ were used
to train AM and two NN models with and without TL, while $700$ measurements
were reserved for testing. A third NN model without TL was trained using
$4400$ training measurements for comparison.
Fig. 1: (a) Architecture of the TL-assisted NN model for the PIC. Solid lines
represent the weights that are fixed during re-training. (b) Experimental
measurement setup for data acquisition from the PIC. ASE: amplified
spontaneous emission, MCF: multi-core fiber.
AM was trained in MATLAB and was used to generate $50,000$ new synthetic
measurements using random voltages as inputs. The histograms for the
experimental and synthetic datasets are shown in Fig. 2a). The distributions
of weights are very similar for the datasets except for the matrix weights
below $-60$ dB, which are only present in the synthetic data. Such datapoints
were discarded, resulting in a $<2\%$ reduction in synthetic dataset size. The
remaining training dataset was used to pre-train the NN shown in Fig. 1a) with
a hyperbolic tangent activation function and $L_{1}$ and $L_{2}$
regularization parameters $\lambda_{L1}=5\times 10^{-4}$ and
$\lambda_{L2}=9\times 10^{-9}$ using PyTorch with the L-BFGS optimizer. The
number of nodes in the hidden layers, $\lambda_{L1}$, and $\lambda_{L2}$ were
all optimized on a validation set separately for all 3 NN models to minimize
the root-mean-square error (RMSE) between the predicted and measured matrix
weights in dB.
The testing RMSEs for the models are shown in Fig. 2b). 20 different seeds
were used to randomly obtain 400 samples from the 4400 available experimental
measurements as well as initializing the NNs. The results show that while the
NN model is able to achieve RMSE $<1$ dB when trained using the entire
dataset, it cannot model the PIC with RMSE $<3$ dB when a limited amount of
data is available. In contrast, the median RMSEs are $2.1$ and $1.6$ dB for AM
and the TL-assisted model, respectively. The NN model obtained using TL
clearly outperforms AM for all 20 different training sets with 400
measurements and approaches the performance of the NN model without TL over
the full dataset, but requires only $<10\%$ of the experimentally measured
data.
Fig. 2: (a) Histograms of matrix weights for the synthetic and experimental
training datasets, normalized individually to match the estimated probability
density functions (PDFs). (b) Testing RMSEs for the models, number of
experimental measurements used is given in parentheses. Boxes show $25^{th}$
and $75^{th}$ percentiles while the whiskers show $10^{th}$ and $90^{th}$
percentiles for 20 random seeds.
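The mechanism behind this gain can be illustrated with a deliberately simplified sketch (not the PyTorch setup used above): a linear-in-features model is pre-trained on abundant samples from a biased "analytical" model and then fine-tuned on a few noisy "measurements" via ridge regression centered on the pre-trained weights (an L2-SP-style penalty). The target function, feature map, data sizes, and regularization strength are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x):
    # Illustrative feature map: polynomial terms plus the oscillatory component.
    return np.stack([x**k for k in range(6)] + [np.sin(3 * x)], axis=1)

f_true = lambda x: np.sin(3 * x) + 0.3 * x      # assumed "device" ground truth
f_approx = lambda x: np.sin(3 * x)              # biased "analytical" model

# Pre-train on abundant synthetic data generated by the analytical model.
x_syn = rng.uniform(-1, 1, 5000)
w_syn, *_ = np.linalg.lstsq(features(x_syn), f_approx(x_syn), rcond=None)

# Scarce noisy measurements of the true device.
x_meas = rng.uniform(-1, 1, 12)
y_meas = f_true(x_meas) + 0.05 * rng.normal(size=12)
Phi = features(x_meas)
lam = 3.0
A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])

# Fine-tuning: ridge regression centered on the pre-trained weights.
w_tl = np.linalg.solve(A, Phi.T @ y_meas + lam * w_syn)
# Baseline: the same ridge fit from scratch (centered on zero).
w_scratch = np.linalg.solve(A, Phi.T @ y_meas)

x_test = np.linspace(-1, 1, 400)
rmse = lambda w: np.sqrt(np.mean((features(x_test) @ w - f_true(x_test))**2))
rmse_tl, rmse_scratch = rmse(w_tl), rmse(w_scratch)
print(rmse_tl, rmse_scratch)  # fine-tuned model should have the lower test error
```

The pre-trained weights already encode the dominant (oscillatory) behavior, so the scarce measurements only need to correct the analytical model's bias, mirroring the TL-assisted result above.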
## 4 Conclusion
We describe and experimentally evaluate the use of transfer learning to fine-tune an NN model for an optical matrix multiplier by pre-training the NN model on synthetic data generated with a less accurate analytical model. Our
proposed approach results in less prediction error compared to using an
analytical model or a NN model individually when measurement data is scarce.
Transfer learning-assisted NNs can be used to alleviate the practical
limitations of data-driven PIC models regarding experimental data acquisition,
which is especially critical for larger and more complex MZI mesh
architectures.
## Acknowledgment
This work was supported by the Villum Foundations, Villum YI, OPTIC-AI (no. 29344), ERC CoG FRECOM (no. 771878), the National Natural Science Foundation of China (no. 62205114), and the Key R&D Program of Hubei Province (no. 2022BAA001).
## References
* [1] B. Shastri et al., “Photonics for artificial intelligence and neuromorphic computing,” Nat. Phot. 15, 102-114 (2021).
* [2] Y. Shen et al., “Deep learning with coherent nanophotonic circuits,” Nat. Phot. 11, 441-446 (2017).
* [3] M. Milanizadeh et al., “Canceling thermal cross-talk effects in photonic integrated circuits,” JLT 37, 1325-1332, (2019).
* [4] M. Milanizadeh et al., “Control and Calibration Recipes for Photonic Integrated Circuits,” JSTQE 26, 1-10 (2020).
* [5] H. Zhang et al., “Efficient On-Chip Training of Optical NNs Using Genetic Algorithm,” ACS Photonics 8, (2021).
* [6] A. Cem et al., “Data-driven Modeling of MZI-based Optical Matrix Multipliers,” arXiv:2210.09171, (2022).
* [7] U. C. de Moura et al., “Fiber-agnostic machine learning-based Raman amplifier models,” JLT, (2022).
* [8] Y. Ding et al., “Reconfigurable SDM Switching Using Novel Silicon Photonic Integrated Circuit,” Sci. Rep. 6, (2016).
# Thermalization and disentanglement with a nonlinear Schrödinger equation
Eyal Buks<EMAIL_ADDRESS>Andrew and Erna Viterbi Department of
Electrical Engineering, Technion, Haifa 32000, Israel
###### Abstract
We study a recently proposed modified Schrödinger equation having an added
nonlinear term. For the case where a stochastic term is added to the
Hamiltonian, the fluctuating response is found to resemble the process of
thermalization. Disentanglement induced by the added nonlinear term is
explored for a system made of two coupled spins. A butterfly-like effect is
found near fully entangled states of the spin-spin system. A limit cycle
solution is found when one of the spins is externally driven.
## I Introduction
In 1935, Schrödinger identified a self-inconsistency in the quantum-to-classical transition process Schrodinger_807 ; Penrose_4864 ; Leggett_939 ; Leggett_022001 , which became known as the problem of quantum measurement.
This problem is closely related to the phenomenon of quantum entanglement. Exploring possible mechanisms of disentanglement may help resolve this long-standing problem.
Processes such as disentanglement require nonlinear time evolution. A variety
of nonlinear terms Geller_2111_05977 that can be added to the Schrödinger
equation have been explored before Weinberg_336 ; Weinberg_61 ; Doebner_397 ;
Doebner_3764 ; Gisin_5677 ; Kaplan_055002 ; Munoz_110503 . In most previous
proposals, the purpose of the added nonlinear terms is to generate a
spontaneous collapse Bassi_471 .
Here we explore a recently proposed modified Schrödinger equation having an
added nonlinear term Buks_355303 (see section II). The proposed equation can
be constructed for any physical system having Hilbert space of finite
dimensionality, and it does not violate unitarity of the time evolution.
The effect of the added term on the dynamics of a single spin 1/2 is studied
in section III. The spin’s response to an applied fluctuating magnetic field
is found to mimic the process of thermalization Diosi_451 ; Molmer_524 ;
Dalibard_580 ; Semin_063313 (see section IV). Disentanglement induced by the
nonlinear term is explored with two coupled spins (see section V). A
butterfly-like effect is found near fully entangled spin-spin states.
The system can become unstable when one spin is externally driven (see section
VI). Limit cycle solutions for the modified Schrödinger equation are found in
the instability region. The instability of the modified Schrödinger equation
is closely related to an instability found with a similar spin-spin system
Levi_053516 , when the equations of motion generated by the standard
Schrödinger equation are analyzed using the mean field approximation
breuer2002theory ; Drossel_217 ; Hicke_024401 ; Klobus_034201 .
## II The modified Schrödinger equation
Let $\mathcal{H}$ be a time-independent Hermitian Hamiltonian of a given
physical system. Consider a modified Schrödinger equation for the state vector
$\left|\psi\right\rangle$ given by Buks_355303
$\frac{\mathrm{d}}{\mathrm{d}t}\left|\psi\right\rangle=\left(-i\hbar^{-1}\mathcal{H}+\gamma_{\mathrm{D}}M_{\mathrm{D}}\right)\left|\psi\right\rangle\;,$
(1)
where $\mathrm{d}/\mathrm{d}t$ is a time derivative. In the added (to the
standard Schrödinger equation) term $\gamma_{\mathrm{D}}M_{\mathrm{D}}$, the
rate $\gamma_{\mathrm{D}}$ is a positive coefficient, and the operator
$M_{\mathrm{D}}$ is derived from a given non-zero state vector
$\left|\Psi\right\rangle$ according to (the state vector
$\left|\Psi\right\rangle$ is not required to be normalized)
$M_{\mathrm{D}}=-\sqrt{\frac{\left\langle\Psi\right.\left|\Psi\right\rangle}{1-\left\langle\mathcal{P}\right\rangle}}\left(\mathcal{P}-\left\langle\mathcal{P}\right\rangle\right)\;,$
(2)
where the projection operator $\mathcal{P}$ is given by
$\mathcal{P}=\frac{\left|\Psi\right\rangle\left\langle\Psi\right|}{\left\langle\Psi\right.\left|\Psi\right\rangle}\;,$
(3)
and the expectation value $\left\langle\mathcal{P}\right\rangle$ is given by
$\left\langle\mathcal{P}\right\rangle=\frac{\left\langle\psi\right|\mathcal{P}\left|\psi\right\rangle}{\left\langle\psi\right.\left|\psi\right\rangle}=\frac{\left|\left\langle\Psi\right.\left|\psi\right\rangle\right|^{2}}{\left\langle\Psi\right.\left|\Psi\right\rangle\left\langle\psi\right.\left|\psi\right\rangle}\;.$
(4)
The modified Schrödinger equation yields a modified master equation for the
pure state density operator
$\rho=\left|\psi\right\rangle\left\langle\psi\right|$ given by (note that
$M_{\mathrm{D}}^{{\dagger}}=M_{\mathrm{D}}$ and
$\mathcal{H}^{{\dagger}}=\mathcal{H}$)
$\frac{\mathrm{d}\rho}{\mathrm{d}t}=\frac{\left[\mathcal{H},\rho\right]}{i\hbar}+\gamma_{\mathrm{D}}\left(\rho
M_{\mathrm{D}}+M_{\mathrm{D}}\rho\right)\;.$ (5)
Note that $\left(\mathrm{d/d}t\right)\operatorname{Tr}\rho=0$ provided that
$\operatorname{Tr}\rho=1$ (i.e. $\left|\psi\right\rangle$ is normalized) [see
Eq. (2), and note that $\left\langle
O\right\rangle\equiv\operatorname{Tr}\left(\rho O\right)$ for an arbitrary
observable $O=O^{{\dagger}}$], and that
$\left(\mathrm{d/d}t\right)\operatorname{Tr}\rho^{2}=0$ provided that
$\rho^{2}=\rho$ [note that $\left\langle M_{\mathrm{D}}\right\rangle=0$, see
Eq. (2)]. Henceforth it is assumed that $\left|\psi\right\rangle$ is
normalized, and that $\rho^{2}=\rho$. The modified master equation (5) yields
a modified Heisenberg equation given by
$\frac{\mathrm{d}\left\langle
O\right\rangle}{\mathrm{d}t}=\frac{\left\langle\left[O,\mathcal{H}\right]\right\rangle}{i\hbar}+\gamma_{\mathrm{D}}\left\langle
M_{\mathrm{D}}O+OM_{\mathrm{D}}\right\rangle\;,$ (6)
where $O=O^{{\dagger}}$ is a given observable that does not explicitly depend
on time.
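As a consistency check, the conservation properties noted above can be verified numerically. The sketch below (illustrative: a random four-dimensional Hilbert space, $\gamma_{\mathrm{D}}=1$ and $\mathcal{H}=0$) constructs $M_{\mathrm{D}}$ from Eqs. (2)-(4) and confirms that $\left\langle M_{\mathrm{D}}\right\rangle=0$, so that both $\operatorname{Tr}\rho$ and $\operatorname{Tr}\rho^{2}$ are conserved for a pure state:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def rand_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

psi, Psi = rand_state(dim), rand_state(dim)

# Projection operator P = |Psi><Psi| / <Psi|Psi>, Eq. (3); |Psi> normalized here.
P = np.outer(Psi, Psi.conj())
expP = np.real(psi.conj() @ P @ psi)                 # <P>, Eq. (4)

# M_D = -sqrt(<Psi|Psi> / (1 - <P>)) (P - <P>), Eq. (2), with <Psi|Psi> = 1.
M_D = -np.sqrt(1.0 / (1.0 - expP)) * (P - expP * np.eye(dim))

rho = np.outer(psi, psi.conj())                      # pure-state density operator
exp_MD = np.real(np.trace(rho @ M_D))                # <M_D>, expected to vanish

# Nonlinear part of the master equation (5), with H = 0 and gamma_D = 1.
drho = rho @ M_D + M_D @ rho
dtrace = np.trace(drho)                              # d/dt Tr(rho)
dpurity = np.trace(drho @ rho + rho @ drho)          # d/dt Tr(rho^2) for rho^2 = rho
print(abs(exp_MD), abs(dtrace), abs(dpurity))        # all numerically zero
```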
## III One spin
As an example, consider a spin 1/2 particle. The $2\times 2$ density matrix
$\rho$ is expressed as
$\rho=\frac{1+\bm{k}\cdot\bm{\sigma}}{2}\;,$ (7)
where $\bm{k}=\left(k_{x},k_{y},k_{z}\right)$ is a real vector, and
$\bm{\sigma}=\left(\sigma_{x},\sigma_{y},\sigma_{z}\right)$ is the Pauli
matrix vector
$\sigma_{x}=\left(\begin{array}[c]{cc}0&1\\\
1&0\end{array}\right),\;\sigma_{y}=\left(\begin{array}[c]{cc}0&-i\\\
i&0\end{array}\right),\;\sigma_{z}=\left(\begin{array}[c]{cc}1&0\\\
0&-1\end{array}\right)\;.$ (8)
The Hamiltonian $\mathcal{H}$ is assumed to be given by
$\hbar^{-1}\mathcal{H}=\bm{\omega\cdot\sigma}$, where
$\bm{\omega}=\left(\omega_{x},\omega_{y},\omega_{z}\right)$ is a constant real
vector. With the help of the identity
$\left(\bm{\sigma}\cdot\bm{a}\right)\left(\bm{\sigma}\cdot\bm{b}\right)=\bm{a}\cdot\bm{b}+i\bm{\sigma}\cdot\left(\bm{a}\times\bm{b}\right)$,
where $\bm{a}$ and $\bm{b}$ are three-dimensional vectors, one finds that [see
Eq. (6), and note that
$\operatorname{Tr}\sigma_{x}=\operatorname{Tr}\sigma_{y}=\operatorname{Tr}\sigma_{z}=0$]
$\frac{\mathrm{d}\bm{k}}{\mathrm{d}t}=2\left(\bm{\omega}\times\bm{k}\right)+\gamma_{\mathrm{D}}\left\langle
M_{\mathrm{D}}\bm{\sigma}+\bm{\sigma}M_{\mathrm{D}}\right\rangle\;.$ (9)
Consider the case where $\left|\Psi\right\rangle$ is taken to be a normalized
eigenvector of $\bm{\hat{s}\cdot\sigma}$, where
$\bm{\hat{s}}=\left(s_{x},s_{y},s_{z}\right)$ is a constant real unit vector,
and the corresponding eigenvalue is $+1$ (i.e.
$\left\langle\Psi\right.\left|\Psi\right\rangle=1$,
$\bm{\hat{s}\cdot\hat{s}}=1$ and
$\bm{\hat{s}\cdot\sigma}\left|\Psi\right\rangle=\left|\Psi\right\rangle$). For
this case $\sqrt{\left(1-\bm{\hat{s}}\cdot\bm{k}\right)/2}\left\langle
M_{\mathrm{D}}\bm{\sigma}+\bm{\sigma}M_{\mathrm{D}}\right\rangle=\left(\bm{\hat{s}}\cdot\bm{k}\right)\bm{k}-\bm{\hat{s}}$
[see Eq. (2)], and thus (compare with Refs. Kowalski_1 ; Fernengel_385701 ;
Kowalski_167955 )
$\frac{\mathrm{d}\bm{k}}{\mathrm{d}t}=2\bm{\omega}\times\bm{k}+\gamma_{\mathrm{D}}\frac{\left(\bm{\hat{s}}\cdot\bm{k}\right)\bm{k-\hat{s}}}{\sqrt{\frac{1-\bm{\hat{s}}\cdot\bm{k}}{2}}}\;.$
(10)
The following holds
$\bm{\hat{s}}=\bm{\hat{s}}_{\parallel}+\bm{\hat{s}}_{\perp}$, where the
parallel $\bm{\hat{s}}_{\parallel}$ and perpendicular $\bm{\hat{s}}_{\perp}$
(with respect to $\bm{k}$) components are given by
$\bm{\hat{s}}_{\parallel}=\left(\bm{k}\cdot\bm{k}\right)^{-1}\left(\bm{k}\cdot\bm{\hat{s}}\right)\bm{k}$
and
$\bm{\hat{s}}_{\perp}=-\left(\bm{k}\cdot\bm{k}\right)^{-1}\bm{k}\times\left(\bm{k}\times\bm{\hat{s}}\right)$
[recall the vector identity
$\bm{A}\times\left(\bm{B}\times\bm{C}\right)=\left(\bm{A}\cdot\bm{C}\right)\bm{B}-\left(\bm{A}\cdot\bm{B}\right)\bm{C}$].
Thus, for the case where $\left|\bm{k}\right|=1$ (i.e. $\bm{k}\cdot\bm{k}=1$)
Eq. (10) becomes
$\displaystyle\frac{\mathrm{d}\bm{k}}{\mathrm{d}t}$
$\displaystyle=2\bm{\omega}\times\bm{k}-\gamma_{\mathrm{D}}\frac{\bm{\hat{s}}_{\perp}}{\sqrt{\frac{1-\bm{\hat{s}}_{\parallel}\cdot\bm{k}}{2}}}$
$\displaystyle=\left(2\bm{\omega}+\gamma_{\mathrm{D}}\frac{\bm{\hat{s}}\times\bm{k}}{\sqrt{\frac{1-\bm{\hat{s}}\cdot\bm{k}}{2}}}\right)\times\bm{k}\;.$
(11)
The above result (11) indicates that the radial component of
$\mathrm{d}\bm{k/}\mathrm{d}t$ vanishes [i.e.
$\left(\mathrm{d}\bm{k/}\mathrm{d}t\right)\cdot\bm{k}=0$] on the surface of
the Bloch sphere (i.e. when $\left|\bm{k}\right|=1$).
By multiplying Eq. (10) by
$\bm{\hat{\omega}}=\bm{\omega}/\left|\bm{\omega}\right|$ one finds that
$\frac{\mathrm{d}k_{\parallel}}{\mathrm{d}t}=\gamma_{\mathrm{D}}\frac{\bm{\hat{s}}\cdot\left(\left(\bm{k}\cdot\bm{\hat{\omega}}\right)\bm{k-\hat{\omega}}\right)}{\sqrt{\frac{1-\bm{\hat{s}}\cdot\bm{k}}{2}}}\;,$
(12)
where $k_{\parallel}=\bm{k}\cdot\bm{\hat{\omega}}$, hence
$\mathrm{d}k_{\parallel}/\mathrm{d}t=0$ for $\bm{k}=\pm\bm{\hat{\omega}}$. For
the case where $\bm{\hat{\omega}}=\bm{\hat{z}}$ Eq. (12) becomes
$\frac{\mathrm{d}k_{\parallel}}{\mathrm{d}t}=\gamma_{\mathrm{D}}\frac{\left(s_{x}k_{x}+s_{y}k_{y}\right)k_{z}+s_{z}\left(k_{z}^{2}-1\right)}{\sqrt{\frac{1-\bm{\hat{s}}\cdot\bm{k}}{2}}}\;.$
(13)
When $\gamma_{\mathrm{D}}\ll\left|\bm{\omega}\right|$ the dynamics is
dominated by the term $2\bm{\omega}\times\bm{k}$ in Eq. (10), which gives rise
to spin precession. For this case the averaged value of the term
$s_{x}k_{x}+s_{y}k_{y}$ in Eq. (13) is nearly zero. Note that $k_{z}^{2}-1\leq
0$, hence for this case $\bm{k}\rightarrow+\bm{\hat{\omega}}$
($\bm{k}\rightarrow-\bm{\hat{\omega}}$) in the limit $t\rightarrow\infty$ when
$s_{z}<0$ ($s_{z}>0$). The plot shown in Fig. 1(a) is obtained by numerically
integrating the modified Schrödinger equation (10) for the case where
$\bm{\omega}$ is parallel to the $\bm{\hat{z}}$ direction, and
$\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|=0.25$. As can be seen from Eq.
(11), to first order in $\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|$ the
$\bm{k}$ fixed point is located at
$\pm\left(\bm{\hat{\omega}}+\left(\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|\right)\left(2\left(1-\bm{\hat{s}}\cdot\bm{\hat{\omega}}\right)\right)^{-1/2}\bm{\hat{s}}\times\bm{\hat{\omega}}\right)$
[see the red star symbol in Fig. 1(a)].
Figure 1: One spin. The black solid line is obtained by numerically integrating the one-spin modified Schrödinger equation (10). The blue solid
(dashed) line connects the origin and the point $\bm{\hat{s}}$
($-\bm{\hat{s}}$), and $\bm{\omega}$ is parallel to the $\bm{\hat{z}}$
direction. (a) The ratio $\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|=0.25$.
The red star symbol represents the analytical approximation for the $\bm{k}$
fixed point given by
$\pm\left(\bm{\hat{\omega}}+\left(\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|\right)\left(2\left(1-\bm{\hat{s}}\cdot\bm{\hat{\omega}}\right)\right)^{-1/2}\bm{\hat{s}}\times\bm{\hat{\omega}}\right)$,
which is valid when $\gamma_{\mathrm{D}}\ll\left|\bm{\omega}\right|$. (b) The
ratio $\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|=25$. The red star symbol
represents the analytical approximation for the $\bm{k}$ fixed point given by
$-\bm{\hat{s}}+2\left(\left|\bm{\omega}\right|/\gamma_{\mathrm{D}}\right)\bm{\hat{s}}\times\bm{\hat{\omega}}$,
which is valid when $\gamma_{\mathrm{D}}\gg\left|\bm{\omega}\right|$.
In the opposite extreme case of
$\gamma_{\mathrm{D}}\gg\left|\bm{\omega}\right|$, the dynamics is dominated by
the term proportional to $\gamma_{\mathrm{D}}$ in Eq. (10). Note that
$\left(\bm{\hat{s}}\cdot\bm{k}\right)\bm{k-\hat{s}}=0$ for
$\bm{k}=\pm\bm{\hat{s}}$, and that the term proportional to
$\gamma_{\mathrm{D}}$ in Eq. (10) gives rise to attraction (repulsion) of
$\bm{k}$ to the point $-\bm{\hat{s}}$ ($+\bm{\hat{s}}$). Consider the case
where $\bm{k}=-\bm{\hat{s}}+\bm{\delta}$, and where
$\bm{\hat{s}}\cdot\bm{\delta}=0$ ($\bm{\delta}$ is considered as
infinitesimally small). To first order in $\left|\bm{\delta}\right|$ Eq. (10)
yields
$\frac{\mathrm{d}\bm{\delta}}{\mathrm{d}t}=2\bm{\omega}\times\left(-\bm{\hat{s}}+\bm{\delta}\right)-\gamma_{\mathrm{D}}\bm{\delta}+O\left(\left|\bm{\delta}\right|^{2}\right)\;,$
(14)
hence the $\bm{k}$ point
$-\bm{\hat{s}}+2\left(\left|\bm{\omega}\right|/\gamma_{\mathrm{D}}\right)\bm{\hat{s}}\times\bm{\hat{\omega}}$
is nearly a stable fixed point when
$\gamma_{\mathrm{D}}\gg\left|\bm{\omega}\right|$ [see the red star symbol in
Fig. 1(b)].
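The limiting behavior described above can be checked by direct numerical integration of Eq. (10). The sketch below (forward Euler with renormalization; parameters illustrative) uses $\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|=0.25$ and $s_{z}>0$, for which $\bm{k}\rightarrow-\bm{\hat{\omega}}$ up to a transverse shift of order $\gamma_{\mathrm{D}}/\left|\bm{\omega}\right|$:

```python
import numpy as np

# Illustrative parameters: omega along z, gamma_D << |omega|, s_z > 0.
omega = np.array([0.0, 0.0, 1.0])
gamma_D = 0.25
s = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)   # unit vector with s_z > 0

k = np.array([1.0, 0.0, 0.0])                  # initial Bloch vector on the equator
dt, n_steps = 1e-3, 60_000

for _ in range(n_steps):
    denom = np.sqrt(max(1e-12, (1.0 - s @ k) / 2.0))   # guard the square root
    dk = 2.0 * np.cross(omega, k) + gamma_D * ((s @ k) * k - s) / denom
    k = k + dt * dk
    k /= np.linalg.norm(k)                     # correct Euler drift: keep |k| = 1

print(k[2])   # k_z -> -1 (up to O(gamma_D / |omega|)), since s_z > 0
```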
## IV Spin in thermal equilibrium
The effect of coupling between the spin and its environment can be accounted
for using the modified Schrödinger equation (1) provided that a fluctuating
magnetic field is added Slichter_Principles . Consider the case where the spin
Hamiltonian $\mathcal{H}$ is given by
$\hbar^{-1}\mathcal{H}=\bm{\omega\cdot\sigma}$, where
$\bm{\omega}=\omega_{0}\bm{\hat{z}}+\left(\omega_{x},\omega_{y},\omega_{z}\right)$,
where $\omega_{0}$ is a constant, and where $\omega_{x}\left(t\right)$,
$\omega_{y}\left(t\right)$ and $\omega_{z}\left(t\right)$ represent the effect
of a fluctuating magnetic field. The following is assumed to hold
$\left\langle\omega_{x}\right\rangle=\left\langle\omega_{y}\right\rangle=\left\langle\omega_{z}\right\rangle=0$,
where $\left\langle{}\right\rangle$ denotes time averaging (i.e. the
fluctuating field has a vanishing averaged value), and the correlation
function
$\left\langle\omega_{i}\left(t\right)\omega_{j}\left(t^{\prime}\right)\right\rangle$
is given by
$\left\langle\omega_{i}\left(t\right)\omega_{j}\left(t^{\prime}\right)\right\rangle=\delta_{ij}\omega_{\mathrm{s}}^{2}\exp\left(-\frac{\left|t-t^{\prime}\right|}{\tau_{\mathrm{s}}}\right)\;,$
(15)
where both the variance $\omega_{\mathrm{s}}^{2}$ and the correlation time
$\tau_{\mathrm{s}}$ are positive constants, and where
$i,j\in\left\\{x,y,z\right\\}$. The added fluctuating magnetic field gives
rise to longitudinal $T_{\mathrm{s}1}^{-1}$ and transverse
$T_{\mathrm{s}2}^{-1}$ relaxation rates given by [see Eqs. (17.274) and
(17.275) of Ref. Buks_QMLN ]
$\frac{1}{T_{\mathrm{s}1}}=\frac{2\omega_{\mathrm{s}}^{2}\tau_{\mathrm{s}}}{1+\omega_{0}^{2}\tau_{\mathrm{s}}^{2}}\;,$
(16)
and
$\frac{1}{T_{\mathrm{s}2}}=\frac{1}{2T_{\mathrm{s}1}}+\omega_{\mathrm{s}}^{2}\tau_{\mathrm{s}}\;.$
(17)
The effect of a fluctuating magnetic field is demonstrated by the plot shown
in Fig. 2. The time evolution of $\bm{k}$ is evaluated by numerically
integrating the modified Schrödinger equation (1) with added fluctuating
magnetic field. The parameters used for the calculation are listed in the
figure caption. The Wiener-Khinchine theorem is employed to relate the given
correlation function (15) to the power spectrum, which, in turn, is used to
derive the variance of Fourier coefficients of $\omega_{x}$, $\omega_{y}$ and
$\omega_{z}$. The variance values are employed for generating random Fourier
coefficients, which in turn, allow generating the functions
$\omega_{x}\left(t\right)$, $\omega_{y}\left(t\right)$ and
$\omega_{z}\left(t\right)$ in a way consistent with Eq. (15).
Figure 2: Thermal equilibrium. The red line represents the static magnetic
field direction, and the black line the Bloch vector $\bm{k}$. In this
calculation $\omega_{0}=10$, $\gamma_{\mathrm{D}}=5$,
$\bm{\hat{s}}=\bm{\hat{z}}$, $\omega_{\mathrm{s}}^{2}=10$ and
$\tau_{\mathrm{s}}=5$.
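An alternative to the Fourier-coefficient construction described above is to generate each field component as an Ornstein-Uhlenbeck process, whose stationary autocorrelation is exactly the exponential form of Eq. (15). The exact discrete-time update used below is standard; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target correlation (15): <w(t) w(t')> = var * exp(-|t - t'| / tau).
var, tau = 1.0, 1.0
dt, n = 0.01, 500_000

# Exact stationary update of an Ornstein-Uhlenbeck process.
a = np.exp(-dt / tau)
b = np.sqrt(var * (1.0 - a * a))
w = np.empty(n)
w[0] = np.sqrt(var) * rng.normal()
for i in range(1, n):
    w[i] = a * w[i - 1] + b * rng.normal()

lag = int(round(tau / dt))                      # one correlation time
c0 = np.mean(w * w)                             # sample variance, ~var
c_tau = np.mean(w[:-lag] * w[lag:])             # ~var / e
print(c0, c_tau / c0)                           # ~1.0 and ~exp(-1) = 0.368
```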
To account for the effect of the fluctuating field, a longitudinal relaxation
term proportional to $T_{\mathrm{s}1}^{-1}$ is added to Eq. (13). Consider the
case where
$T_{\mathrm{s}1}^{-1}\ll\gamma_{\mathrm{D}}\ll\left|\bm{\omega}\right|$ and
$\bm{\hat{s}}=\bm{\hat{z}}$. For this case Eq. (13) has a steady state
solution given by
$k_{\parallel}=-1+1/\left(1+2\gamma_{\mathrm{D}}T_{\mathrm{s}1}\right)$ [the
term proportional to $s_{x}k_{x}+s_{y}k_{y}$ in Eq. (13) has been disregarded,
since it is assumed that $\gamma_{\mathrm{D}}\ll\left|\bm{\omega}\right|$].
The corresponding effective temperature $T_{\mathrm{eff}}$ is given by
$T_{\mathrm{eff}}=\frac{\hbar\omega_{0}}{2k_{\mathrm{B}}\tanh^{-1}\left(1-\frac{1}{1+2\gamma_{\mathrm{D}}T_{\mathrm{s}1}}\right)}\;,$
(18)
where $k_{\mathrm{B}}$ is Boltzmann's constant.
## V Two spins
Consider a two spin 1/2 system in a pure state $\left|\psi\right\rangle$ given
by
$\left|\psi\right\rangle=a\left|++\right\rangle+b\left|+-\right\rangle+c\left|-+\right\rangle+d\left|--\right\rangle$.
### V.1 Single spin angular momentum
The single-spin angular momentum (in units of $\hbar/2$) vector operators are
denoted by $\bm{S}_{1}=\left(S_{1x},S_{1y},S_{1z}\right)$ and
$\bm{S}_{2}=\left(S_{2x},S_{2y},S_{2z}\right)$, and the total spin angular
momentum is $\bm{S}=\bm{S}_{1}+\bm{S}_{2}=\left(S_{x},S_{y},S_{z}\right)$. A
given single-spin linear operator of spin 1 (2) is represented by the $4\times 4$ matrix $K\otimes\sigma_{0}$ ($\sigma_{0}\otimes K$), where $\otimes$ denotes
the Kronecker tensor product, $\sigma_{0}$ is the $2\times 2$ identity matrix,
and where $K$ is the $2\times 2$ matrix representation of the given single-
spin operator. The matrix representations of $\bm{S}_{1}\cdot\bm{\hat{u}}_{1}$
and $\bm{S}_{2}\cdot\bm{\hat{u}}_{2}$, where
$\bm{\hat{u}}_{1}=\left(\sin\theta_{1}\cos\varphi_{1},\sin\theta_{1}\sin\varphi_{1},\cos\theta_{1}\right)$
and
$\bm{\hat{u}}_{2}=\left(\sin\theta_{2}\cos\varphi_{2},\sin\theta_{2}\sin\varphi_{2},\cos\theta_{2}\right)$
are unit vectors, are thus given by [see Eq. (8)]
$\displaystyle\bm{S}_{1}\cdot\bm{\hat{u}}_{1}$
$\displaystyle\dot{=}\left(\begin{array}[c]{cccc}\cos\theta_{1}&0&\sin\theta_{1}e^{-i\varphi_{1}}&0\\\
0&\cos\theta_{1}&0&\sin\theta_{1}e^{-i\varphi_{1}}\\\
\sin\theta_{1}e^{i\varphi_{1}}&0&-\cos\theta_{1}&0\\\
0&\sin\theta_{1}e^{i\varphi_{1}}&0&-\cos\theta_{1}\end{array}\right)\;,$ (23)
and
$\displaystyle\bm{S}_{2}\cdot\bm{\hat{u}}_{2}$
$\displaystyle\dot{=}\left(\begin{array}[c]{cccc}\cos\theta_{2}&\sin\theta_{2}e^{-i\varphi_{2}}&0&0\\\
\sin\theta_{2}e^{i\varphi_{2}}&-\cos\theta_{2}&0&0\\\
0&0&\cos\theta_{2}&\sin\theta_{2}e^{-i\varphi_{2}}\\\
0&0&\sin\theta_{2}e^{i\varphi_{2}}&-\cos\theta_{2}\end{array}\right)\;.$ (29)
With the help of Eqs. (23) and (29) and the normalization
condition $aa^{\ast}+bb^{\ast}+cc^{\ast}+dd^{\ast}=1$ one finds that
$\left|\left\langle\bm{S}_{1}\right\rangle\right|^{2}=\left|\left\langle\bm{S}_{2}\right\rangle\right|^{2}=1-4\left|ad-
bc\right|^{2}\;.$ (31)
Note that the normalization condition implies that $\left|ad-bc\right|^{2}\leq
1/4$. In standard quantum mechanics the term $ad-bc$ is time-independent,
provided that the spins are decoupled [see Eq. (8.121) of Ref. Buks_QMLN ].
The term $\left|ad-bc\right|$ can be extracted from the partial transpose
$\rho^{\mathrm{T}1}$ ($\rho^{\mathrm{T}2}$) of the spin-spin density operator
with respect to spin 1 (2) using the relation
$\det\rho^{\mathrm{T}1}=\det\rho^{\mathrm{T}2}=-\left|ad-bc\right|^{4}$
Peres_1413 .
Consider the case where
$\left\langle\bm{S}_{1}\right\rangle=\left\langle\bm{S}_{2}\right\rangle=0$.
For this case the following holds $\left\langle
S_{1z}\right\rangle=a^{\ast}a+b^{\ast}b-c^{\ast}c-d^{\ast}d=0$, $\left\langle
S_{2z}\right\rangle=a^{\ast}a-b^{\ast}b+c^{\ast}c-d^{\ast}d=0$, $\left\langle
S_{1+}\right\rangle=2\left(a^{\ast}c+b^{\ast}d\right)=0$ and $\left\langle
S_{2+}\right\rangle=2\left(a^{\ast}b+c^{\ast}d\right)=0$, where
$S_{n\pm}=S_{nx}\pm iS_{ny}$ and $n\in\left\\{1,2\right\\}$. The conditions
$\left\langle S_{1z}\right\rangle=\left\langle S_{2z}\right\rangle=0$ imply
that $a^{\ast}a=d^{\ast}d$ and $b^{\ast}b=c^{\ast}c$, whereas the conditions
$\left\langle S_{1+}\right\rangle=\left\langle S_{2+}\right\rangle=0$ yield
$a^{\ast}/d=-b^{\ast}/c=-c^{\ast}/b$. Hence the state vector
$\left|\psi\right\rangle$ for this case has the form
$\displaystyle\left|\psi\right\rangle$
$\displaystyle=\frac{\cos\frac{\theta_{\psi}}{2}e^{-\frac{i\phi_{\alpha}}{2}}}{\sqrt{2}}\left|++\right\rangle+\frac{\sin\frac{\theta_{\psi}}{2}ie^{-\frac{i\phi_{\beta}}{2}}}{\sqrt{2}}\left|+-\right\rangle$
$\displaystyle+\frac{\sin\frac{\theta_{\psi}}{2}ie^{\frac{i\phi_{\beta}}{2}}}{\sqrt{2}}\left|-+\right\rangle+\frac{\cos\frac{\theta_{\psi}}{2}e^{\frac{i\phi_{\alpha}}{2}}}{\sqrt{2}}\left|--\right\rangle\;,$
(32)
where $\theta_{\psi}$, $\phi_{\alpha}$ and $\phi_{\beta}$ are real. Note
that $ad-bc=1/2$ for the state (32).
The operator $R$ is defined by [note that
$\bm{S}_{1}\cdot\bm{S}_{2}=\bm{S}_{2}\cdot\bm{S}_{1}$, see Eqs. (23) and (29)]
$R=\bm{S}_{1}\cdot\bm{S}_{2}-\left\langle\bm{S}_{1}\right\rangle\cdot\left\langle\bm{S}_{2}\right\rangle\;,$
(33)
and its expectation value $\left\langle R\right\rangle$ is given by
$\left\langle
R\right\rangle=4\left(\left|ad\right|^{2}-\left|bc\right|^{2}\right)-4\operatorname{Re}\left(\left(b^{\ast}{}^{2}+c^{\ast}{}^{2}\right)\left(ad-
bc\right)\right)\;.$ (34)
Note that $\left\langle R\right\rangle=0$ when $ad-bc=0$.
### V.2 Single spin purity
The single-spin purity $P$ is given by $P=1-2\left|ad-bc\right|^{2}$ [see Eq.
(8.642) of Ref. Buks_QMLN ]. It is bounded by $1/2\leq P\leq 1$ [recall that
$\left|ad-bc\right|^{2}\leq 1/4$]. In terms of the purity $P$, Eq. (31) reads
$\left|\left\langle\bm{S}_{1}\right\rangle\right|^{2}=\left|\left\langle\bm{S}_{2}\right\rangle\right|^{2}=2\left(P-1/2\right)$.
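The relations above can be verified numerically for a random two-qubit pure state, taking spin 1 as the first Kronecker factor [consistent with the matrices (23) and (29)]:

```python
import numpy as np

rng = np.random.default_rng(3)

# Random normalized two-qubit state a|++> + b|+-> + c|-+> + d|-->.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
a, b, c, d = psi
q = abs(a * d - b * c) ** 2                    # |ad - bc|^2

# Reduced density matrix of spin 1 (first tensor factor) and its purity.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho1 = np.einsum('ijkj->ik', rho)              # partial trace over spin 2
P = np.real(np.trace(rho1 @ rho1))

# <S_1> in units of hbar/2, i.e. expectation values of the Pauli matrices.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S1 = np.array([np.real(np.trace(rho1 @ s)) for s in (sx, sy, sz)])

print(abs(P - (1 - 2 * q)))                    # purity P = 1 - 2|ad - bc|^2
print(abs(S1 @ S1 - (1 - 4 * q)))              # Eq. (31)
```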
### V.3 Spin-spin disentanglement
Spin-spin disentanglement is generated by the term proportional to
$\gamma_{\mathrm{D}}$ in the modified Schrödinger equation (1) provided that
the bra vector $\left\langle\Psi\right|$ is taken to be given by Buks_355303 ;
Wootters_2245
$\left\langle\Psi\right|=d\left\langle++\right|-c\left\langle+-\right|-b\left\langle-+\right|+a\left\langle--\right|\;.$
(35)
Note that $\left\langle\Psi\right|$ (35) is normalized provided that
$\left|\psi\right\rangle$ is normalized, and that
$\left\langle\Psi\right.\left|\psi\right\rangle=2\left(ad-bc\right)$ [compare
with Eq. (31)].
The plots shown in Fig. 3 are based on numerical integration of the spin-spin
modified Schrödinger equation (1) with $\mathcal{H}=0$, and with
$\left\langle\Psi\right|$ given by Eq. (35). In the panels labelled by the
number ’1’ (’2’), the single-spin purity is initially low, $P\simeq 1/2$ (high, $P\simeq 1$), i.e. initially $\left|ad-bc\right|^{2}\simeq 1/4$ ($\left|ad-bc\right|^{2}\simeq 0$). The three-dimensional plots labelled by the letter
’a’ (’b’) display the time evolution of $\left\langle\bm{S}_{1}\right\rangle$
($\left\langle\bm{S}_{2}\right\rangle$). In these plots, red straight lines
are drawn between the origin and the initial value of
$\left\langle\bm{S}_{n}\right\rangle$, whereas green lines represent time
evolution of $\left\langle\bm{S}_{n}\right\rangle$, where
$n\in\left\\{1,2\right\\}$. The plots labelled by the letters ’c’ and ’d’
display the time evolution of $P$ and $\left\langle R\right\rangle$ [see Eq.
(33)], respectively. For both cases ’1’ and ’2’,
$\left|\left\langle\bm{S}_{1}\right\rangle\right|^{2}=\left|\left\langle\bm{S}_{2}\right\rangle\right|^{2}\rightarrow
1$ [i.e. $\left|ad-bc\right|^{2}\rightarrow 0$ and $P\rightarrow 1$, see Eq.
(31)] in the long time limit $t\rightarrow\infty$, i.e. entanglement vanishes
in this limit.
Figure 3: Spin-spin disentanglement. In the panels labelled by the numbers ’1’
(’2’), the single-spin purity is initially $P\simeq 1/2$ ($P\simeq 1$). The
(initial value) time evolution of $\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ is indicated by red (green) lines in the
plots labelled by the letter ’a’ and ’b’, respectively. The single spin purity
$P$ and $\left\langle R\right\rangle$ are shown as a function of time $t$ in
the plots labelled by the letters ’c’ and ’d’, respectively.
The case where initially $P\simeq 1/2$ [see Fig. 3 (a1), (b1), (c1) and (d1)]
is explored by numerically integrating the modified Schrödinger equation (1)
with the initial condition (before normalization) of
$\left|\psi\right\rangle=\left|\psi_{0}\right\rangle+\epsilon\left|\psi_{\mathrm{p}}\right\rangle$,
where
$\left|\psi_{0}\right\rangle=\left|\mathrm{B}_{0,0}\right\rangle=2^{-1/2}\left(\left|+-\right\rangle-\left|-+\right\rangle\right)$
is the singlet Bell state (which is invariant under single-spin basis
transformation, and which is fully entangled), $\epsilon$ is a small positive
number, and
$\left|\psi_{\mathrm{p}}\right\rangle=\alpha\left|++\right\rangle+\beta\left|+-\right\rangle+\gamma\left|-+\right\rangle+\delta\left|--\right\rangle$
is normalized. For this case, to first order in $\epsilon$ initially
$\left\langle S_{1+}\right\rangle=-\left\langle
S_{2+}\right\rangle=2^{1/2}\left(\delta-\alpha^{\ast}\right)\epsilon$ and
$\left\langle S_{1z}\right\rangle=-\left\langle
S_{2z}\right\rangle=2^{-1/2}\left(\beta+\beta^{\ast}+\gamma+\gamma^{\ast}\right)\epsilon$,
i.e. initially
$\left\langle\bm{S}_{1}\right\rangle=-\left\langle\bm{S}_{2}\right\rangle$.
The plots in Fig. 3 (a1), (b1), (c1) and (d1) demonstrate that both
$\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ increase in magnitude with time while
remaining nearly anti-parallel to each other as they both approach the Bloch
sphere surfaces. The red star symbols in Fig. 3(a1) and (b1) indicate the
initial values of
$\left\langle\bm{S}_{1}\right\rangle/\left|\left\langle\bm{S}_{1}\right\rangle\right|$
and
$\left\langle\bm{S}_{2}\right\rangle/\left|\left\langle\bm{S}_{2}\right\rangle\right|$.
As the plots in Fig. 3(a1) and (b1) demonstrate, time evolution leaves these
normalized values nearly unchanged, provided that $\epsilon\ll 1$ (the value
of $\epsilon=0.1$ has been used for generating the plots).
The case shown in Fig. 3 (a1), (b1), (c1) and (d1) demonstrates a strong dependence of the long-time value of $\left|\psi\right\rangle$ on its initial value. This dependence, which becomes extreme when $\left|\psi\right\rangle$
is initially fully entangled, resembles the butterfly effect. In the limit
$\epsilon\rightarrow 0$, angular momentum is conserved by the modified
Schrödinger equation, provided that
$\left|\psi_{0}\right\rangle=\left|\mathrm{B}_{0,0}\right\rangle$. This can be
attributed to the fact that
$S_{x}\left|\mathrm{B}_{0,0}\right\rangle=S_{y}\left|\mathrm{B}_{0,0}\right\rangle=S_{z}\left|\mathrm{B}_{0,0}\right\rangle=0$.
Conservation of the angular momentum $\bm{\hat{z}}$ component is obtained when
the initial state is the Bell triplet state
$\left|\mathrm{B}_{1,0}\right\rangle=2^{-1/2}\left(\left|+-\right\rangle+\left|-+\right\rangle\right)$,
which is also fully entangled, and for which
$S_{z}\left|\mathrm{B}_{1,0}\right\rangle=0$.
The case where initially $P\simeq 1$ is demonstrated by Fig. 3 (a2), (b2),
(c2) and (d2). For this case both $\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ are initially close to the Bloch sphere
surfaces. Consequently, the time evolution of $\left|\psi\right\rangle$
towards a fully product state does not significantly change
$\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ (time evolution is represented by the
green lines).
## VI Instability
Consider a system composed of two spins 1/2. The first spin, which is labelled
as ’$\mathrm{a}$’, has a relatively low angular frequency
$\omega_{\mathrm{a}}$ in comparison with the angular frequency
$\omega_{\mathrm{b}}$ of the second spin, which is labelled as ’$\mathrm{b}$’,
and which is externally driven. The angular momentum vector operator of
particle $\mathrm{a}$ ($\mathrm{b}$) is denoted by $\bm{S}_{\mathrm{a}}$
($\bm{S}_{\mathrm{b}}$). The Hamiltonian $\mathcal{H}$ of the closed system is
given by
$\mathcal{H}=\omega_{\mathrm{a}}S_{\mathrm{az}}+\omega_{\mathrm{b}}S_{\mathrm{bz}}+\frac{\omega_{1}\left(S_{\mathrm{b+}}+S_{\mathrm{b-}}\right)}{2}+V\;,$
(36)
where the driving amplitude and angular frequency are denoted by $\omega_{1}$
and $-\omega_{\mathrm{p}}=\omega_{\mathrm{b}}-\Delta$, respectively ($-\Delta$
is the driving detuning), the operators $S_{\mathrm{a\pm}}$ are given by
$S_{\mathrm{a\pm}}=S_{\mathrm{ax}}\pm iS_{\mathrm{ay}}$, and the rotated
operators $S_{\mathrm{b\pm}}$ are given by
$S_{\mathrm{b\pm}}=\left(S_{\mathrm{bx}}\pm iS_{\mathrm{by}}\right)e^{\mp
i\omega_{\mathrm{p}}t}$. The coupling term is given by
$V=g\hbar^{-1}\left(S_{\mathrm{a+}}+S_{\mathrm{a-}}\right)S_{\mathrm{bz}}$,
where $g$ is a coupling rate. In a rotating frame, the matrix representation
of the transformed Hamiltonian $\mathcal{H}^{\prime}$ is given by
$\mathcal{H}^{\prime}\dot{=}\hbar\Omega$, where the $4\times 4$ matrix
$\Omega$ is given by
$\Omega=\left(\begin{array}[c]{cccc}\frac{\omega_{\mathrm{a}}+\Delta}{2}&\frac{\omega_{1}}{2}&\frac{g}{2}&0\\\
\frac{\omega_{1}}{2}&\frac{\omega_{\mathrm{a}}-\Delta}{2}&0&-\frac{g}{2}\\\
\frac{g}{2}&0&\frac{-\omega_{\mathrm{a}}+\Delta}{2}&\frac{\omega_{1}}{2}\\\
0&-\frac{g}{2}&\frac{\omega_{1}}{2}&\frac{-\omega_{\mathrm{a}}-\Delta}{2}\end{array}\right)\;.$
(37)
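As a quick check of Eq. (37), the sketch below (illustrative) builds $\Omega$ with the Fig. 4 parameters and verifies that it is Hermitian and that the Hartmann-Hahn condition $\omega_{\mathrm{a}}=\omega_{\mathrm{R}}$ holds for $\omega_{1}=-\Delta=2^{-1/2}\omega_{\mathrm{a}}$:

```python
import numpy as np

# Fig. 4 parameters: omega_a = 1e2, omega_1 = -Delta = omega_a / sqrt(2), g = 0.2.
omega_a = 1.0e2
omega_1 = omega_a / np.sqrt(2.0)
Delta = -omega_1
g = 0.2

# Rotating-frame matrix Omega of Eq. (37).
Omega = np.array([
    [(omega_a + Delta) / 2,  omega_1 / 2,            g / 2,                   0],
    [ omega_1 / 2,          (omega_a - Delta) / 2,   0,                      -g / 2],
    [ g / 2,                 0,                     (-omega_a + Delta) / 2,   omega_1 / 2],
    [ 0,                    -g / 2,                  omega_1 / 2,            (-omega_a - Delta) / 2],
])

omega_R = np.sqrt(omega_1**2 + Delta**2)       # Rabi angular frequency
print(np.allclose(Omega, Omega.T))             # H' is Hermitian (real symmetric here)
print(np.isclose(omega_R, omega_a))            # Hartmann-Hahn matching condition
```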
Disentanglement is generated by the modified Schrödinger equation (1) by
choosing the bra vector $\left\langle\Psi\right|$ to be given by Eq. (35). The
expectation values $\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$, which are shown in Fig. 4(a) and
Fig. 4(b), respectively, are calculated by numerically integrating the
modified Schrödinger equation (1). For the plot shown in Fig. 4, for which the
Hartmann–Hahn matching condition $\omega_{\mathrm{a}}=\omega_{\mathrm{R}}$
[31, 32, 19] is assumed to be satisfied, where
$\omega_{\mathrm{R}}=\sqrt{\omega_{1}^{2}+\Delta^{2}}$ is the Rabi angular
frequency, both $\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ undergo a limit cycle (LC). The
instability responsible for the LC was studied in Ref. [19], in which the
equations of motion generated by the Hamiltonian (36) were treated using the
mean-field approximation. This example demonstrates the connection between
the mean-field approximation and disentanglement.
Figure 4: Dipolar LC. The expectation values of
$\left\langle\bm{S}_{1}\right\rangle$ and
$\left\langle\bm{S}_{2}\right\rangle$ are shown in (a) and (b), respectively.
The modified Schrödinger equation (1), with the Hamiltonian (36) and the bra
vector (35) is numerically integrated. The parameters used for the calculation
are $\gamma_{\mathrm{D}}=10^{3}$, $\omega_{\mathrm{a}}=10^{2}$,
$\omega_{1}=-\Delta=2^{-1/2}\omega_{\mathrm{a}}$ and $g=0.2$. A fluctuating
magnetic field with parameters $\omega_{\mathrm{s}}^{2}=1$ and
$\tau_{\mathrm{s}}=0.05$ is added [see Eq. (15)].
## VII Summary
In summary, both processes of thermalization and disentanglement can be
modeled using a recently proposed modified Schrödinger equation. The added
nonlinear term can give rise to instabilities and LC solutions. On the other
hand, it remains unclear whether quantum mechanics can be self-consistently
reformulated based on the proposed modified Schrödinger equation. Future study
will be devoted to exploring candidate formalisms.
## VIII Acknowledgments
This work was supported by the Israel Science Foundation, the Israeli
Ministry of Science, and the Technion security research foundation. We thank
Michael R. Geller for helpful discussions.
## References
* (1) E. Schrödinger, “Die gegenwärtige Situation in der Quantenmechanik”, Naturwissenschaften, vol. 23, pp. 807, 1935.
* (2) Roger Penrose, “Uncertainty in quantum mechanics: faith or fantasy?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 369, no. 1956, pp. 4864–4890, 2011.
* (3) A. J. Leggett, “Experimental approaches to the quantum measurement paradox”, Found. Phys., vol. 18, pp. 939–952, 1988.
* (4) A. J. Leggett, “Realism and the physical world”, Rep. Prog. Phys., vol. 71, pp. 022001, 2008.
* (5) Michael R Geller, “Fast quantum state discrimination with nonlinear ptp channels”, arXiv:2111.05977, 2021.
* (6) Steven Weinberg, “Testing quantum mechanics”, Annals of Physics, vol. 194, no. 2, pp. 336–386, 1989.
* (7) Steven Weinberg, “Precision tests of quantum mechanics”, in THE OSKAR KLEIN MEMORIAL LECTURES 1988–1999, pp. 61–68. World Scientific, 2014.
* (8) H-D Doebner and Gerald A Goldin, “On a general nonlinear schrödinger equation admitting diffusion currents”, Physics Letters A, vol. 162, no. 5, pp. 397–401, 1992.
* (9) H-D Doebner and Gerald A Goldin, “Introducing nonlinear gauge transformations in a family of nonlinear schrödinger equations”, Physical Review A, vol. 54, no. 5, pp. 3764, 1996.
* (10) Nicolas Gisin and Ian C Percival, “The quantum-state diffusion model applied to open systems”, Journal of Physics A: Mathematical and General, vol. 25, no. 21, pp. 5677, 1992.
* (11) David E Kaplan and Surjeet Rajendran, “Causal framework for nonlinear quantum mechanics”, Physical Review D, vol. 105, no. 5, pp. 055002, 2022.
* (12) Manuel H Muñoz-Arias, Pablo M Poggi, Poul S Jessen, and Ivan H Deutsch, “Simulating nonlinear dynamics of collective spins via quantum measurement and feedback”, Physical review letters, vol. 124, no. 11, pp. 110503, 2020.
* (13) Angelo Bassi, Kinjalk Lochan, Seema Satin, Tejinder P Singh, and Hendrik Ulbricht, “Models of wave-function collapse, underlying theories, and experimental tests”, Reviews of Modern Physics, vol. 85, no. 2, pp. 471, 2013.
* (14) Eyal Buks, “Disentanglement and a nonlinear schrödinger equation”, Journal of Physics A: Mathematical and Theoretical, vol. 55, no. 35, pp. 355303, 2022.
* (15) Lajos Diósi, “Stochastic pure state representation for open quantum systems”, Physics Letters A, vol. 114, no. 8-9, pp. 451–454, 1986.
* (16) Klaus Mølmer, Yvan Castin, and Jean Dalibard, “Monte carlo wave-function method in quantum optics”, JOSA B, vol. 10, no. 3, pp. 524–538, 1993.
* (17) Jean Dalibard, Yvan Castin, and Klaus Mølmer, “Wave-function approach to dissipative processes in quantum optics”, Physical review letters, vol. 68, no. 5, pp. 580, 1992.
* (18) V Semin, I Semina, and F Petruccione, “Stochastic wave-function unravelling of the generalized lindblad equation”, Physical Review E, vol. 96, no. 6, pp. 063313, 2017.
* (19) Roei Levi, Sergei Masis, and Eyal Buks, “Instability in the hartmann-hahn double resonance”, Phys. Rev. A, vol. 102, pp. 053516, Nov 2020.
* (20) Heinz-Peter Breuer, Francesco Petruccione, et al., The theory of open quantum systems, Oxford University Press on Demand, 2002.
* (21) Barbara Drossel, “What condensed matter physics and statistical physics teach us about the limits of unitary time evolution”, Quantum Studies: Mathematics and Foundations, vol. 7, no. 2, pp. 217–231, 2020.
* (22) C Hicke and MI Dykman, “Classical dynamics of resonantly modulated large-spin systems”, Physical Review B, vol. 78, no. 2, pp. 024401, 2008.
* (23) Waldemar Kłobus, Paweł Kurzyński, Marek Kuś, Wiesław Laskowski, Robert Przybycień, and Karol Życzkowski, “Transition from order to chaos in reduced quantum dynamics”, Physical Review E, vol. 105, no. 3, pp. 034201, 2022.
* (24) Krzysztof Kowalski, “Linear and integrable nonlinear evolution of the qutrit”, Quantum Information Processing, vol. 19, no. 5, pp. 1–31, 2020.
* (25) Bernd Fernengel and Barbara Drossel, “Bifurcations and chaos in nonlinear lindblad equations”, Journal of Physics A: Mathematical and Theoretical, vol. 53, no. 38, pp. 385701, 2020.
* (26) K Kowalski and J Rembieliński, “Integrable nonlinear evolution of the qubit”, Annals of Physics, vol. 411, pp. 167955, 2019.
* (27) Charles P Slichter, Principles of magnetic resonance, vol. 1, Springer Science & Business Media, 2013.
* (28) Eyal Buks, Quantum mechanics - Lecture Notes, http://buks.net.technion.ac.il/teaching/, 2020.
* (29) Asher Peres, “Separability criterion for density matrices”, Physical Review Letters, vol. 77, no. 8, pp. 1413, 1996.
* (30) William K Wootters, “Entanglement of formation of an arbitrary state of two qubits”, Physical Review Letters, vol. 80, no. 10, pp. 2245, 1998.
* (31) SR Hartmann and EL Hahn, “Nuclear double resonance in the rotating frame”, Physical Review, vol. 128, no. 5, pp. 2042, 1962.
* (32) Pengcheng Yang, Martin B Plenio, and Jianming Cai, “Dynamical nuclear polarization using multi-colour control of color centers in diamond”, EPJ Quantum Technology, vol. 3, pp. 1–9, 2016.
# Model Extraction Attack against Self-supervised Speech Models
Tsu-Yuan Hsu1∗, Chen-An Li2∗, Tung-Yu Wu3, Hung-yi Lee4
###### Abstract
Self-supervised learning (SSL) speech models generate meaningful
representations of given clips and achieve impressive performance across
various downstream tasks. Companies can offer these models as services by
building APIs. However, each of these APIs may suffer a model extraction
attack (MEA), in which an adversary steals the model functionality with
limited query access. In this work, we propose an MEA framework against SSL
speech models. Our MEA framework learns multiple output representations of
given clips to extract the target SSL speech models. We demonstrate various
selection methods on speech corpora to construct limited query access. We
also study the MEA on different speech corpora. We evaluate the effectiveness
of our MEA framework on four diverse downstream tasks. To our knowledge, this
is the first attempt to steal a large-scale speech model.
Index Terms: Self-supervised learning, speech representation learning, model
extraction attack
∗Equal contributions
## 1 Introduction
Recent advances in self-supervised learning (SSL) speech models [1, 2, 3]
build meaningful representations of speech and achieve incredible performance
in many tasks [4]. Given current SSL-based natural language processing APIs,
such as the official GPT-3 [5] APIs, which provide services for generating
data embeddings, SSL speech-processing APIs can be expected to appear in the
future. Such APIs take text or speech provided by users as input and generate
corresponding feature representations for the training of downstream models.
However, each of these APIs may suffer a model extraction attack (MEA), which
refers to an adversary stealing the model functionality with limited query
access. MEA has posed a non-negligible threat to online deep learning
applications. Previous work [6] has shown that the adversary may extract
models used in remote APIs simply by querying them. Since the training of
models and the collection of datasets may have cost a tremendous amount of
time and money, this kind of attack causes sizeable financial losses to
victimized companies. Hence, it is an urgent task for researchers to study how
the adversary may perform the attack.
MEAs against different classes of machine learning and deep learning models
are broadly studied. Through some direct queries to the remote API, simple
regression models and multilayer perceptrons (MLPs) can be easily stolen [6].
Convolutional neural networks (CNNs) are also vulnerable to the attack [7, 8].
[7] randomly selects images on hand, queries the API with selected images to
fetch fake labels, and utilizes the image-label pairs to train the local
surrogate model. Knockoff Nets [8] adopts reinforcement learning to actively
sample images, which surpasses the random-selection approach. Recurrent
neural networks (RNNs) have also been attacked: [9] studies attacks that
exploit features of RNNs and LSTMs for classification and regression tasks.
Advanced models such as graph neural networks (GNNs) have also been examined
[10, 11]. For instance, [10] demonstrates that, after collecting query graphs,
a surrogate model can be effectively learned by minimizing the RMSE loss
between the remote GNN-based API's graph responses and its own. Besides
model-type-specific extraction attacks, there are also some works [12, 13]
devoted to stealing certain information about remote APIs. Model
hyperparameters have been pointed out as a potential target, and a framework
has been proposed to verify the feasibility of hyperparameter extraction
[12]. Metamodel methods [13] learn a classifier to predict model attributes,
such as model architectures, adopted optimizers, and types of training
datasets.
Large-scale SSL models [1, 2, 3, 14, 15, 16] are more critical potential
targets of MEA since the SSL-model-based APIs can serve as a powerful feature
extractor to generate representations of input data that help users implement
various applications such as text question answering (QA) with BERT [14] and
automatic speech recognition (ASR) with wav2vec 2.0 [1]. The training and
fine-tuning of an SSL model require much time and effort. However, the MEAs
against SSL models are still underexplored. Though there have been some works
[17, 18, 19, 20] investigating MEA against text models, to our knowledge,
approaches for speech models have not been discussed. Moreover, current
text-model-based methods still have some restrictions. [17, 18, 19] apply a
random sampling strategy to select query data, while [20] presents a hybrid
strategy that integrates algebraic-based and learning-based methods.
Nonetheless, they all assume that attackers have access to the output logits
of downstream tasks and that each attack focuses on stealing only one task.
††Note that this work aims to capture the research community's attention to
potential issues in terms of speech-based SSL APIs, rather than simply being
an attempt to conduct MEA. With this work, we anticipate more discussions in
this field to help build a more robust and comprehensive ecosystem of
speech-based Machine Learning as a Service (MLaaS).
In this work, we propose and implement the MEA against SSL speech models. In
particular, we design several clip-selection methods that identify informative
data. The selected clips are further used as queries to the victim model to
get corresponding representations for the local surrogate model's supervised
training. This enables our surrogate model to approximate the victim model's
performances with only a small number of clips, i.e., queries. We demonstrate
MEA on various selection methods and different speech corpora. Specifically,
four diverse downstream tasks are conducted in Section 3 to evaluate the
effectiveness of our proposed clip selection methods and the whole extraction
pipeline. To our knowledge, this is the first attempt to steal a large-scale
speech model. Furthermore, since our framework's presented active-sampling
methods only need access to data embeddings instead of logits, we can extract
the remote SSL speech model directly rather than just a downstream task.
Figure 1: Illustration of our model extraction attack framework.
## 2 Methods
In this paper, the victim model refers to the model being queried, such as the
APIs mentioned in Section 1. The pre-trained surrogate model refers to the
model pre-trained on an unlabeled corpus, and the extracted surrogate model
refers to the model after performing model extraction on the victim model. We
assume that requests to the victim model are limited and that the adversary
lacks knowledge of the victim model's architecture and training corpus. The
query limitation is defined as the total length of waveforms. The information
returned by the victim model includes representations of the input data.
As shown in Figure 1, our model extraction attack process includes several
steps:
1. Pre-train a model on the unlabeled corpus $\mathcal{X}$ to get the Pre-trained Surrogate.
2. Apply an active selection method to sample a small portion of clips from dataset $\mathcal{X}$ to form a subset $\mathcal{X_{S}}$. The sampling process continues until the total length of the waveforms in $\mathcal{X_{S}}$ is no less than the preset length limitation $H$.
3. Train the Pre-trained Surrogate with $\mathcal{X_{S}}$ and the obtained Victim's representations to perform the model extraction, i.e., steal the functionality of the Victim to obtain the Extracted Surrogate.
### 2.1 Selection Methods
In this section, we elaborate on several proposed clip selection methods used
to construct $\mathcal{X_{S}}$ for model extraction.
#### 2.1.1 SSL Pre-training Loss Selection
We argue that SSL pre-training loss can serve as a good metric for sampling
waveforms. Clips with high pre-training loss are regarded as hard samples
that baffle the current surrogate model and are thus worth including in the
subsequent teacher-student supervised training. As a result, we evaluate the
dataset $\mathcal{X}$ on the pre-trained surrogate model and calculate each
waveform's pre-training loss. We then iteratively add the waveform with the
highest loss in $\mathcal{X}$ to $\mathcal{X_{S}}$ until its total clip
length reaches $H$.
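The selection above reduces to a greedy loop; a minimal sketch follows, where the function name and the clip-tuple layout are illustrative assumptions rather than the paper's actual implementation (length units are arbitrary):

```python
def select_by_loss(clips, budget):
    """Greedily pick the highest-loss clips until their total length reaches the budget.

    `clips` holds (clip_id, length, pretrain_loss) tuples; this layout is
    illustrative, not taken from the paper's code.
    """
    chosen, total = [], 0.0
    for clip_id, length, loss in sorted(clips, key=lambda c: c[2], reverse=True):
        if total >= budget:
            break
        chosen.append(clip_id)
        total += length
    return chosen

clips = [("a", 3.0, 0.9), ("b", 5.0, 0.2), ("c", 4.0, 0.7), ("d", 2.0, 0.5)]
picked = select_by_loss(clips, budget=6.0)
```

On the toy list, the two highest-loss clips `a` and `c` are taken, after which the budget is met.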
#### 2.1.2 Content-based Selection
A selection approach based on the acoustic content of speech clips is also
proposed. We argue that, with the knowledge of each recording's acoustic
information, it is possible to sample a small portion of clips that represents
the overall distribution of the whole corpus. To achieve this, we first
leverage the pre-trained surrogate model to generate all the waveform
representations. Secondly, each waveform timestamp's token is determined by
feeding its representation into a clustering model fit on 10% of the
representations, with consecutive repetitions of the same class removed. We
then take these tokens as the content of a speech clip. Finally, we
iteratively sample the corpus with farthest point sampling (FPS) [21]. To be
specific, after randomly picking the first clip, we calculate its token-based
trigram Jaccard distance to all other clips:
$\displaystyle D_{x,y}=1-J(X,Y)=1-\frac{|X\cap Y|}{|X\cup Y|},$ (1)
where $x$ and $y$ are two distinct clips (a sampled clip and an unsampled one
in our case), while $X$ and $Y$ are their token trigram sets. Specifically,
each element of a trigram set is a trigram of three consecutive tokens
generated by the clustering model, and $J(X,Y)$ is the Jaccard similarity.
Among all unselected clips, we choose the one with the farthest distance
(lowest similarity) to the previously selected clip as the second sampled
clip. Once multiple clips have been sampled, an unsampled clip's distance to
the sampled set is defined as the Jaccard distance to its nearest sampled
clip. The FPS process is repeated until the total length of the sampled clips
is no less than $H$.
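The procedure (trigram sets, Eq. (1), and FPS) can be sketched as follows; the helper names are illustrative, and clips are assumed to already be tokenized by the clustering model:

```python
def trigram_set(tokens):
    """Trigrams of a clip's cluster-label sequence (consecutive repeats assumed removed)."""
    return {tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)}

def jaccard_distance(X, Y):
    """Eq. (1): D = 1 - |X ∩ Y| / |X ∪ Y|."""
    union = X | Y
    return 1.0 - (len(X & Y) / len(union) if union else 1.0)

def farthest_point_sampling(token_seqs, lengths, budget, first=0):
    """FPS over clips: repeatedly take the clip farthest from the sampled set,
    where a clip's distance to the set is the distance to its nearest sampled
    clip, until the total sampled length reaches the budget."""
    grams = [trigram_set(seq) for seq in token_seqs]
    sampled, total = [first], lengths[first]
    dist = [jaccard_distance(g, grams[first]) for g in grams]
    while total < budget:
        remaining = [i for i in range(len(grams)) if i not in sampled]
        if not remaining:
            break
        nxt = max(remaining, key=lambda i: dist[i])
        sampled.append(nxt)
        total += lengths[nxt]
        # keep each clip's distance to its nearest sampled clip
        dist = [min(d, jaccard_distance(g, grams[nxt]))
                for d, g in zip(dist, grams)]
    return sampled

# Two identical clips and one with disjoint content: FPS picks the outlier second.
seqs = [[1, 2, 3, 4], [1, 2, 3, 4], [5, 6, 7, 8]]
order = farthest_point_sampling(seqs, [1.0, 1.0, 1.0], budget=2.0)
```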
#### 2.1.3 Transcription-based Selection
In this method, we select clips with diverse content. We assume that every
speech clip in the dataset has a corresponding transcription. We utilize a
pre-trained language model to generate [CLS] token embeddings for the speech
transcriptions. Next, a clustering model is fit on the [CLS] token embeddings
to obtain clustering labels. Finally, we evenly select the corresponding
clips in $\mathcal{X}$ from each cluster to increase the data diversity of
$\mathcal{X_{S}}$.
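The even per-cluster selection can be sketched as a round-robin over clusters; the function below assumes the clustering labels have already been computed (in the paper they come from k-means over [CLS] embeddings), and the names are illustrative:

```python
from collections import defaultdict
from itertools import zip_longest

def even_cluster_sample(clip_ids, cluster_labels):
    """Round-robin over clusters: take one clip from each cluster in turn,
    so truncating the returned order at any budget stays evenly spread."""
    buckets = defaultdict(list)
    for cid, lab in zip(clip_ids, cluster_labels):
        buckets[lab].append(cid)
    order = []
    for group in zip_longest(*buckets.values()):
        order.extend(c for c in group if c is not None)
    return order

# Three clusters of sizes 2, 2, 1: one clip per cluster is taken per round.
order = even_cluster_sample(["a", "b", "c", "d", "e"], [0, 0, 1, 1, 2])
```

Truncating `order` at the length budget $H$ then yields the evenly spread subset.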
Table 1: Evaluation results with querying dataset LibriSpeech 960-hour for KS,
IC, and ER in accuracy and SD in diarization error rate. The first three rows
are w/o any query. BaselineR refers to performing model extraction on Random
Surrogate w/ random selection. BaselineP, Transcription, SSL, and Content
refer to the Extracted Surrogate w/ random selection and the selection methods
elaborated in Section 2. ToplineH and ToplineW refer to the Extracted
Surrogate w/ unlimited queries to the HuBERT Base and WavLM Base+,
respectively. Best result among BaselineR, BaselineP, Transcription (if any),
SSL, and Content of each experiment is marked in bold.
| | KS (Acc $\uparrow$) | IC (Acc $\uparrow$) | ER (Acc $\uparrow$) | SD (DER $\downarrow$)
---|---|---|---|---|---
Pre-trained Surrogate | 92.41 | 77.91 | 57.7 | 7.86
HuBERT Base | 96.30 | 98.34 | 64.92 | 5.88
WavLM Base+ | 97.37 | 99.00 | 68.65 | 3.50
ToplineH | 95.85 | 94.83 | 62.74 | 6.76
ToplineW | 96.33 | 95.68 | 64.02 | 6.72
Method | Victim Model | 0.1h | 1h | 10h | 0.1h | 1h | 10h | 0.1h | 1h | 10h | 0.1h | 1h | 10h
BaselineR | HuBERT Base | 83.47 | 89.65 | 94.26 | 63.67 | 79.73 | 87.29 | 56.41 | 57.72 | 60.39 | 11.02 | 10.05 | 7.56
BaselineP | HuBERT Base | 92.05 | 94.16 | 94.35 | 75.43 | 87.43 | 88.98 | 56.91 | 60.28 | 60.06 | 9.15 | 7.64 | 7.18
Transcription | HuBERT Base | 92.11 | 93.57 | 94.22 | 76.19 | 84.29 | 90.17 | 58.68 | 59.78 | 61.48 | 9.61 | 7.63 | 7.61
SSL | HuBERT Base | 93.12 | 95.30 | 95.07 | 79.09 | 88.74 | 90.43 | 59.34 | 60.03 | 62.11 | 9.14 | 8.32 | 7.16
Content | HuBERT Base | 92.08 | 94.45 | 95.29 | 77.54 | 88.72 | 91.27 | 57.99 | 60.36 | 61.49 | 9.28 | 7.80 | 6.93
BaselineR | WavLM Base+ | 81.50 | 88.93 | 93.02 | 60.77 | 78.01 | 86.45 | 55.18 | 58.00 | 61.55 | 12.46 | 10.48 | 7.56
BaselineP | WavLM Base+ | 93.22 | 93.51 | 94.35 | 78.30 | 87.77 | 91.48 | 59.41 | 61.56 | 61.95 | 9.62 | 7.31 | 7.52
SSL | WavLM Base+ | 93.67 | 94.68 | 95.46 | 82.92 | 90.11 | 92.33 | 59.87 | 61.78 | 63.71 | 8.96 | 7.74 | 7.23
Content | WavLM Base+ | 93.18 | 94.55 | 95.13 | 79.41 | 90.4 | 92.17 | 58.84 | 62.29 | 62.70 | 8.74 | 7.77 | 6.84
### 2.2 Model Extraction
In this paper, we adopt the objective function of DistilHuBERT [22] to perform
model extraction. We assume that the victim model returns multiple
representations per query because a weighted sum of multiple hidden states of
the pre-trained model can significantly improve downstream performance [4].
The surrogate model is followed by multiple separate prediction heads that
learn the victim model's representations from different layers.
Given the victim model's $n$-th output representation $h^{(n)}$ and the
prediction head output $\hat{h}^{(n)}$ that learns from it, the objective
function $\mathcal{L}_{\text{extract}}$ can be written as follows:
$\displaystyle\mathcal{L}_{\text{extract}}^{(n)}$
$\displaystyle=\mathcal{L}_{cos}^{(n)}+\mathcal{L}_{l1}^{(n)}$
$\displaystyle=\sum_{t=1}^{T}\left[-\log\sigma\left(\cos\left(h_{t}^{(n)},\hat{h}_{t}^{(n)}\right)\right)+\frac{1}{D}\lVert
h_{t}^{(n)}-\hat{h}_{t}^{(n)}\rVert_{1}\right]$ (2)
$\displaystyle\mathcal{L}_{\text{extract}}$ $\displaystyle=\sum_{n\in
N}\mathcal{L}_{\text{extract}}^{(n)}=\sum_{n\in
N}\left[\mathcal{L}_{cos}^{(n)}+\mathcal{L}_{l1}^{(n)}\right]$ (3)
where $t\in[T]$, $T$ is the number of timestamps, $N$ is the set of the
layers' indices, $\sigma(\cdot)$ denotes the sigmoid function,
$\cos(\cdot,\cdot)$ denotes the cosine similarity function, and $D$ is the
feature dimension of the representation.
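Eqs. (2)–(3) can be mirrored term by term in NumPy as a sketch (the actual extraction presumably uses DistilHuBERT's PyTorch objective); here `h` and `h_hat` are $(T, D)$ arrays:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_loss_layer(h, h_hat):
    """Per-layer loss of Eq. (2): cosine term plus scaled L1 term, summed over T."""
    T, D = h.shape
    cos = np.sum(h * h_hat, axis=1) / (
        np.linalg.norm(h, axis=1) * np.linalg.norm(h_hat, axis=1))
    l_cos = -np.log(sigmoid(cos))                 # -log sigma(cos(h_t, h_hat_t))
    l_l1 = np.sum(np.abs(h - h_hat), axis=1) / D  # (1/D) * ||h_t - h_hat_t||_1
    return float(np.sum(l_cos + l_l1))

def extract_loss(hs, h_hats):
    """Eq. (3): sum the per-layer losses over the chosen layers (e.g. 4th, 8th, 12th)."""
    return sum(extract_loss_layer(h, hh) for h, hh in zip(hs, h_hats))

rng = np.random.default_rng(0)
h = rng.normal(size=(2, 3))                # T = 2 timestamps, D = 3 features
perfect = extract_loss_layer(h, h.copy())  # identical outputs: only the cosine term remains
```

Note that even a perfect match leaves a floor of $-T\log\sigma(1)$ from the cosine term, since $\sigma(1)<1$.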
## 3 Experiments
### 3.1 Experimental Setup
Experiments are implemented with s3prl v0.3.4 and fairseq [23] 0.12.2. We use
fairseq to pre-train our surrogate model and s3prl to perform model
extraction and evaluation.
#### 3.1.1 Data and Preprocessing
We use LibriSpeech 960-hour [24] to pre-train the surrogate model and as the
querying dataset to the victim model in our main results. Extensive
experiments are conducted using Wall Street Journal [25] and Aishell-1 [26] as
querying datasets. Considering that the victim model should have an
instance-wise querying-length constraint, the maximum input sequence length
is set to 15.6 seconds, and each clip longer than 15.6 s is split. We also
set the minimum waveform length to 2 s, meaning clips shorter than 2 seconds
are dropped.
#### 3.1.2 Victim Model
We consider HuBERT Base and WavLM Base+ as the victim models. Each of them has
a 12-layer transformer encoder. Features from different layers contain various
information [3, 22], such as speaker-related, semantic, and content
information. Therefore, we assume the victim model returns the representations
of each transformer layer, and the 4${}^{\text{th}}$, 8${}^{\text{th}}$, and
12${}^{\text{th}}$-layer representations are used to perform the model
extraction.
The pre-training dataset of HuBERT Base is exactly the same as our querying
dataset, i.e., LibriSpeech 960-hour, while that of WavLM Base+ is a 94k-hour
dataset, implying that the overlap with our querying dataset is less than
1.1%. We consider the latter to be more realistic: the API (victim model) is
pre-trained on a large-scale dataset and should cope well with speech clips
from various domains, whereas we only have a small piece of the dataset and
intend to query the victim model as efficiently as possible.
#### 3.1.3 Surrogate Model
Our surrogate model is initialized as a 7-layer CNN extractor followed by a
2-layer transformer encoder, called Random Surrogate. HuBERT pre-training is
applied on Random Surrogate with LibriSpeech 960-hour to obtain Pre-trained
Surrogate. The first iteration is trained for 250k steps by the MFCC clustered
labels. The second iteration is trained for 400k steps by the clustered labels
generated from the $1^{\text{st}}$ transformer encoder layer features of
first-iteration Pre-trained Surrogate.
In the selection stage, the corresponding features of sampled clips are
retrieved by querying the victim model, where the clips are obtained by either
random, pre-training loss, content-based, or transcription-based selection. We
adopt the k-means model [27] as the clustering model used for content- and
transcription-based selection where the number of clusters is 250. For SSL
pre-training loss selection, we use the self-supervised cluster-prediction
loss of HuBERT. For transcription-based selection, we use RoBERTa [16] as our
language model.
In the model extraction stage, our surrogate model (Random Surrogate or Pre-
trained Surrogate) is trained on the clip-feature pairs for $10k\times H$
steps with batch size 24, where $H$ is the number of hours of sampled clips
and $H=0.1,1,10$ are considered in this paper. The learning rate linearly
increases to 0.0002 over the first 7% of the total steps and then linearly
decreases to zero over the remaining steps.
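The learning-rate schedule can be sketched as a piecewise-linear function (the function name and defaults are illustrative; only the 0.0002 peak and 7% warmup come from the setup above):

```python
def lr_at(step, total_steps, peak=2e-4, warmup_frac=0.07):
    """Linear warmup to `peak` over the first 7% of steps, then linear decay to zero."""
    warmup = max(1, int(total_steps * warmup_frac))
    if step < warmup:
        return peak * step / warmup
    return peak * (total_steps - step) / (total_steps - warmup)

# For H = 10 hours the schedule runs 10k x 10 = 100k steps.
schedule = [lr_at(s, 100_000) for s in (0, 7_000, 50_000, 100_000)]
```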
### 3.2 Evaluation
We use SUPERB [4] benchmark to evaluate our surrogate model. In SUPERB, the
surrogate model is frozen, and a trainable linear layer is adopted to weight
each layer's output features of our surrogate model, including one CNN
extractor's, two transformer encoder layers', and three prediction heads'
outputs. The downstream models then use the weighted feature as the input. In
this paper, four downstream tasks, including keyword spotting (KS) [28],
intent classification (IC) [29], emotion recognition (ER) [30], and speaker
diarization (SD) [31], are selected to evaluate the surrogate model.
Table 2: Performances of SID and SD with victim model WavLM Base+. Results
better than Pre-trained Surrogate (65.80 for SID and 7.86 for SD) are marked
in bold.
| SID | SD
---|---|---
Method | 0.1h | 1h | 10h | 0.1h | 1h | 10h
BaselineP | 55.17 | 65.90 | 71.42 | 9.62 | 7.31 | 7.52
Most Speakers | 56.55 | 64.76 | 71.55 | 9.83 | 8.35 | 7.74
(a) Accuracy
(b) Agreement
Figure 2: Performances of BaselineP, Content, SSL, and Topline on KS with
victim model WavLM Base+. Agreement refers to the prediction consistency
between the victim model and the surrogate model.
### 3.3 Main Results
In Table 1, BaselineP shows an overall better performance compared to
BaselineR, especially in the very low-resource setting, which indicates the
effectiveness of pre-training the surrogate model. Note that ``low-resource''
in this paper refers to the limitation on the number of queries.
For the proposed selection methods discussed in Section 2.1, Transcription
has a performance similar to BaselineP, which indicates that selecting
queries based on speech transcriptions is not effective. From Table 1 and
Figure 2, though there is still room for improvement, our methods have
generally outperformed the random-selection baseline on KS, IC, and ER,
making a step forward in efficient data selection and MEA. Note that
agreement results similar to Figure 2(b) can be obtained on IC and ER.
From Table 1, we observe that all methods and baselines suffer performance
degradation in low-resource situations on SD. Therefore, we conducted
extensive experiments on another speaker task, speaker identification (SID)
[32], and provide Most Speakers, a most-speaker selection method that samples
as many speakers as possible, to tackle speaker tasks. The results are shown
in Table 2 (only part of the results are shown due to page limitations).
However, the performance still does not improve under the low-resource
setting.
### 3.4 Mismatched Querying Datasets
Table 3: Performances of SD with the victim model HuBERT Base. The querying
datasets are denoted in parentheses.
| SD (Aishell-1) | SD (WSJ)
---|---|---
Method | 0.1h | 1h | 10h | 0.1h | 1h | 10h
BaselineP | 9.09 | 8.25 | 7.21 | 8.55 | 8.44 | 7.80
SSL | 8.31 | 7.46 | 6.90 | 9.45 | 7.73 | 7.88
Content | 9.21 | 8.20 | 6.88 | 9.50 | 7.82 | 7.93
(a) Aishell-1
(b) WSJ
Figure 3: IC performances of Content, SSL, and BaselineP with querying dataset
Aishell-1 and WSJ.
The checkpoints of the pre-trained models are often released while the pre-
training datasets are not publicly available due to commercial use, privacy
issue, etc. Therefore, we examine WSJ and AISHELL-1 as querying datasets to
simulate the situation that the pre-training dataset of Pre-trained Surrogate
is unknown. It is worth mentioning that AISHELL-1 is a Chinese corpus, which
means the surrogate model's pre-training dataset and our querying dataset are
in different languages.
As shown in Figure 3, there is no obvious performance difference between our
proposed methods and BaselineP (the same conclusion can be drawn on KS, ER,
and SD). From Table 1 and Table 3, we observe that extracting the models with
the AISHELL-1 corpus achieves slightly better performance on SD than with the
WSJ and LibriSpeech corpora. On the other hand, extracting the victim model
with the LibriSpeech corpus usually outperforms the other two corpora on the
other tasks, which suggests that a mismatch between the surrogate model's
pre-training dataset and the querying dataset may significantly affect the
extraction performance.
## 4 Conclusion and Future Works
This work makes the first attempt to conduct the model extraction attack
against SSL speech models. Experimental results on four diverse tasks in
SUPERB show that our proposed selection methods outperform the naive random
data selection. In the future, we expect to explore a more effective data
selection method and find a way to avoid ineffective data selection resulting
from the mismatch between pre-training and querying dataset as mentioned in
Section 3.4.
## References
* [1] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, ``wav2vec 2.0: A framework for self-supervised learning of speech representations,'' _Advances in neural information processing systems_ , vol. 33, pp. 12 449–12 460, 2020.
* [2] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, ``Hubert: Self-supervised speech representation learning by masked prediction of hidden units,'' _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 29, pp. 3451–3460, 2021.
* [3] S. Chen, C. Wang, Z. Chen, Y. Wu, S. Liu, Z. Chen, J. Li, N. Kanda, T. Yoshioka, X. Xiao _et al._ , ``Wavlm: Large-scale self-supervised pre-training for full stack speech processing,'' _IEEE Journal of Selected Topics in Signal Processing_ , 2022.
* [4] S. wen Yang, P.-H. Chi, Y.-S. Chuang, C.-I. J. Lai, K. Lakhotia, Y. Y. Lin, A. T. Liu, J. Shi, X. Chang, G.-T. Lin, T.-H. Huang, W.-C. Tseng, K. tik Lee, D.-R. Liu, Z. Huang, S. Dong, S.-W. Li, S. Watanabe, A. Mohamed, and H. yi Lee, ``SUPERB: Speech Processing Universal PERformance Benchmark,'' in _Proc. Interspeech 2021_ , 2021, pp. 1194–1198.
* [5] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell _et al._ , ``Language models are few-shot learners,'' _Advances in neural information processing systems_ , vol. 33, pp. 1877–1901, 2020.
* [6] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, ``Stealing machine learning models via prediction APIs,'' in _25th USENIX security symposium (USENIX Security 16)_ , 2016, pp. 601–618.
* [7] J. R. Correia-Silva, R. F. Berriel, C. Badue, A. F. de Souza, and T. Oliveira-Santos, ``Copycat cnn: Stealing knowledge by persuading confession with random non-labeled data,'' in _2018 International Joint Conference on Neural Networks (IJCNN)_. IEEE, 2018, pp. 1–8.
* [8] T. Orekondy, B. Schiele, and M. Fritz, ``Knockoff nets: Stealing functionality of black-box models,'' in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2019, pp. 4954–4963.
* [9] T. Takemura, N. Yanai, and T. Fujiwara, ``Model extraction attacks on recurrent neural networks,'' _Journal of Information Processing_ , vol. 28, pp. 1010–1024, 2020.
* [10] Y. Shen, X. He, Y. Han, and Y. Zhang, ``Model stealing attacks against inductive graph neural networks,'' in _2022 IEEE Symposium on Security and Privacy (SP)_. IEEE, 2022, pp. 1175–1192.
* [11] B. Wu, X. Yang, S. Pan, and X. Yuan, ``Model extraction attacks on graph neural networks: Taxonomy and realisation,'' in _Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security_ , 2022, pp. 337–350.
* [12] B. Wang and N. Z. Gong, ``Stealing hyperparameters in machine learning,'' in _2018 IEEE symposium on security and privacy (SP)_. IEEE, 2018, pp. 36–52.
* [13] S. J. Oh, B. Schiele, and M. Fritz, ``Towards reverse-engineering black-box neural networks,'' in _Explainable AI: Interpreting, Explaining and Visualizing Deep Learning_. Springer, 2019, pp. 121–144.
* [14] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, ``BERT: Pre-training of deep bidirectional transformers for language understanding,'' in _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Minneapolis, Minnesota: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186. [Online]. Available: https://aclanthology.org/N19-1423
* [15] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, ``Xlnet: Generalized autoregressive pretraining for language understanding,'' _Advances in neural information processing systems_ , vol. 32, 2019.
* [16] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, ``Roberta: A robustly optimized bert pretraining approach,'' _arXiv preprint arXiv:1907.11692_ , 2019.
* [17] K. Krishna, G. S. Tomar, A. P. Parikh, N. Papernot, and M. Iyyer, ``Thieves on sesame street! model extraction of bert-based apis,'' in _International Conference on Learning Representations_ , 2020. [Online]. Available: https://openreview.net/forum?id=Byl5NREFDr
* [18] C. Chen, X. He, L. Lyu, and F. Wu, ``Killing one bird with two stones: Model extraction and attribute inference attacks against bert-based apis,'' _arXiv e-prints_ , pp. arXiv–2105, 2021.
* [19] X. He, L. Lyu, L. Sun, and Q. Xu, ``Model extraction and adversarial transferability, your BERT is vulnerable!'' in _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_. Online: Association for Computational Linguistics, Jun. 2021, pp. 2006–2012. [Online]. Available: https://aclanthology.org/2021.naacl-main.161
* [20] S. Zanella-Beguelin, S. Tople, A. Paverd, and B. Köpf, ``Grey-box extraction of natural language models,'' in _International Conference on Machine Learning_. PMLR, 2021, pp. 12 278–12 286.
* [21] Y. Eldar, M. Lindenbaum, M. Porat, and Y. Y. Zeevi, ``The farthest point strategy for progressive image sampling,'' _IEEE Transactions on Image Processing_ , vol. 6, no. 9, pp. 1305–1315, 1997.
* [22] H.-J. Chang, S.-w. Yang, and H.-y. Lee, ``Distilhubert: Speech representation learning by layer-wise distillation of hidden-unit bert,'' in _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2022, pp. 7087–7091.
* [23] M. Ott, S. Edunov, A. Baevski, A. Fan, S. Gross, N. Ng, D. Grangier, and M. Auli, ``fairseq: A fast, extensible toolkit for sequence modeling,'' in _Proceedings of NAACL-HLT 2019: Demonstrations_ , 2019.
* [24] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, ``Librispeech: an asr corpus based on public domain audio books,'' in _2015 IEEE international conference on acoustics, speech and signal processing (ICASSP)_. IEEE, 2015, pp. 5206–5210.
* [25] D. B. Paul and J. Baker, ``The design for the wall street journal-based csr corpus,'' in _Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992_ , 1992.
* [26] H. Bu, J. Du, X. Na, B. Wu, and H. Zheng, ``Aishell-1: An open-source mandarin speech corpus and a speech recognition baseline,'' in _2017 20th conference of the oriental chapter of the international coordinating committee on speech databases and speech I/O systems and assessment (O-COCOSDA)_. IEEE, 2017, pp. 1–5.
* [27] S. Lloyd, ``Least squares quantization in pcm,'' _IEEE transactions on information theory_ , vol. 28, no. 2, pp. 129–137, 1982.
* [28] P. Warden, ``Speech commands: A public dataset for single-word speech recognition.'' _Dataset available from http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz_ , 2017.
* [29] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, ``Speech Model Pre-Training for End-to-End Spoken Language Understanding,'' in _Proc. Interspeech 2019_ , 2019, pp. 814–818. [Online]. Available: http://dx.doi.org/10.21437/Interspeech.2019-2396
* [30] C. Busso, M. Bulut, C.-C. Lee, A. Kazemzadeh, E. Mower, S. Kim, J. N. Chang, S. Lee, and S. S. Narayanan, ``Iemocap: Interactive emotional dyadic motion capture database,'' _Language resources and evaluation_ , vol. 42, pp. 335–359, 2008.
* [31] J. Cosentino, M. Pariente, S. Cornell, A. Deleforge, and E. Vincent, ``Librimix: An open-source dataset for generalizable speech separation,'' _arXiv preprint arXiv:2005.11262_ , 2020.
* [32] A. Nagrani, J. S. Chung, W. Xie, and A. Zisserman, ``Voxceleb: Large-scale speaker verification in the wild,'' _Computer Speech & Language_, vol. 60, p. 101027, 2020.
|
# Neural Feature-Adaptation for Symbolic Predictions Using Pre-Training and
Semantic Loss
Vedant Shah*,1,2, Aditya Agrawal *,1, Lovekesh Vig 3, Ashwin Srinivasan1,
Gautam Shroff3, Tanmay Verlekar1
###### Abstract
We are interested in neuro-symbolic systems consisting of a high-level
symbolic layer for explainable prediction in terms of human-intelligible
concepts; and a low-level neural layer for extracting symbols required to
generate the symbolic explanation. Unfortunately real data is often imperfect.
This means that even if the symbolic theory remains unchanged, we may still
need to address the problem of mapping raw data to high-level symbols, each
time there is a change in the data acquisition environment or equipment (for
example, the clinical explanation of a heart arrhythmia could be unchanged,
but the raw data could vary from one hospital to another). Manual
(re-)annotation of the raw data each time this happens is laborious and
expensive; and automated labelling methods are often imperfect, especially for
complex problems. Recently, the NEUROLOG system proposed the use of a semantic
loss function that allows an existing feature-based symbolic model to guide
the extraction of feature-values from raw data, using a mechanism called
‘abduction’. However, the experiments demonstrating the use of semantic loss
through abduction appear to rely heavily on a domain-specific pre-processing
step that enables a prior delineation of feature locations in the raw data. In
this paper, we examine the use of semantic loss in domains where such pre-
processing is not possible, or is not obvious. Using controlled experiments on
two simulated datasets, we show that without any prior information about the
features, the NEUROLOG approach can continue to predict accurately even with
substantially incorrect feature predictions (that is, predictions are correct,
but explanations are wrong). We show also that prior information about the
features in the form of (even imperfect) pre-training can help correct this
situation. These findings are replicated on the original problem considered by
NEUROLOG, without the use of feature-delineation. This suggests that symbolic
explanations constructed for data in a domain could be re-used in a related
domain, by ‘feature-adaptation’ of pre-trained neural extractors using the
semantic loss function constrained by abductive feedback.
## Introduction
In “Machine Intelligibility and the Duality Principle” (Muggleton and Michie
1997), the authors propose that a two-way interaction between human and
machine necessarily brings up a ‘duality principle’ in the construction of
software, defined as follows:
> Software involved in human/computer interaction should be designed at two
> interconnected levels: a) a declarative, or self-aware level, supporting
> ease of adaptation and human interaction, and b) a procedural, or skill
> level, supporting efficient and accurate computation.
Recent work in Machine Learning (ML) with highly accurate deep neural networks
(DNNs) has given rise to a similar kind of duality principle for constructing
DNN-based models with humans-in-the-loop. Results communicated from a DNN
clearly need to be ‘human-friendly’, referring to abstract concepts (entities,
features, and relations) already known to the human. These may not
necessarily have any direct counterpart among the concepts actually used by
the DNN to arrive at its prediction. Interest is also growing in communicating
human knowledge to a DNN (Dash et al. 2022) in a way that is both precise and
reasonably natural. In (Muggleton and Michie 1997), symbolic logic is
suggested as a choice of representation for the declarative level of software
concerned with human interaction. The main reasons for this are that it is
expressive enough to capture complex concepts precisely, it supports
reasoning, and can be converted reasonably easily to controlled forms of
natural language (see for example (Schwitter 2010)). The utility of a symbolic
model for explanations is reflected in substantial interest in techniques such
as LIME (Ribeiro, Singh, and Guestrin 2016), which results in an embodiment of
the duality principle by constructing human-intelligible models for black-box
predictors. In this paper, we are interested in the converse aspect of the
duality principle, namely: suppose we have a human-intelligible symbolic
theory, a priori, describing concepts and relationships that are expected in
intelligible explanations. Some situations where this can occur are: (a) the
theory may encode accepted knowledge in the domain, deliberately leaving out
data-specific aspects; (b) the theory may have developed elsewhere, and we
want to see if it can be adopted for local use; or (c) the theory may have been
developed locally, but equipment changes may require it to be ‘re-
calibrated’.111Of course, there will be cases where the symbolic theory itself
may have to be revised, or even completely re-constructed to suit local needs.
We do not consider that here. Such a high-level declarative theory usually
contains no mechanisms for linking abstract concepts to actual raw data. How
should the lower (procedural) layer adapt to the provision of this high-level
specification? The authors in (Muggleton and Michie 1997) do not offer any
solutions.
In this paper, we adopt the position that the duality principle entails a
hierarchical system design, specified as a form of function-decomposition. We
examine a recent implementation (Tsamoura and Michael 2021) in which a DNN
implements the low-level procedures for linking high-level concepts in a
domain-theory to raw data. In itself, a hierarchical design for AI systems is
not new, and has been adopted at least from the early robotic applications
like Shakey and Freddy, although not for reasons of communicating with humans.
But the increasingly widespread use of DNNs as the preferred choice of ML
models, and the increasing need for human interaction requires us to confront
issues of machine-intelligibility arising from the use of neuro-symbolic
systems with humans-in-the-loop.
## Hierarchical Neuro-Symbolic Modelling
We assume the ML model implements a function $h:{\cal X}\mapsto{\cal Y}$.
Here, ${\cal X}$ denotes the set of (raw) data instances, and ${\cal Y}$
denotes the set of values of a dependent variable. Then one formulation of $h$
as a hierarchical neuro-symbolic model is to specify $h$ as the composition of
symbolic and neural functions. A specification for identifying models of this
kind is shown in Fig. 1.
Given:
(a) Data-instances $\\{(x_{i},y_{i})\\}_{i=1}^{N}$ of some task $T$, where
$x_{i}\in{\cal X}$ and $y_{i}\in{\cal Y}$; (b) A set of symbolic
representations of the data ${\cal J}$; and (c) a loss-function $L$;
Find:
$n:{\cal X}\mapsto{\cal J}$ and $s:{\cal J}\mapsto{\cal Y}$ such that
$\sum_{i}E[L(y_{i},s(n(x_{i})))]$ is minimised.
Figure 1: A partial specification for hierarchical neuro-symbolic learning
from data.
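The decomposition in Fig. 1 can be sketched as follows. This is a minimal illustration of the composition $h=s\circ n$, not the paper's implementation; the extractor `n`, predictor `s`, and loss `L` below are placeholder functions:

```python
from typing import Callable

# h = s(n(x)): a neural extractor n maps raw data X to a symbolic
# representation in J = {0,1}^k; a fixed symbolic function s maps that
# representation to a label in Y.

def make_hierarchical_model(n: Callable, s: Callable) -> Callable:
    """Compose a neural extractor n with a symbolic predictor s."""
    return lambda x: s(n(x))

def empirical_loss(model: Callable, data, L: Callable) -> float:
    """Total loss over data-instances {(x_i, y_i)}, as in Fig. 1."""
    return sum(L(y, model(x)) for x, y in data)

# Placeholder example: n thresholds raw values into Boolean features;
# s labels an instance 'A' when both features hold, else bottom.
n = lambda x: tuple(int(v > 0.5) for v in x)
s = lambda f: "A" if f == (1, 1) else "bot"
L = lambda y, y_hat: int(y != y_hat)  # 0-1 loss

h = make_hierarchical_model(n, s)
data = [([0.9, 0.8], "A"), ([0.1, 0.7], "bot")]
print(empirical_loss(h, data, L))  # prints 0: both instances classified correctly
```

The learning problem is then to choose the parameters of `n` (with `s` fixed, as in NEUROLOG) so that this empirical loss is minimised.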
Recently, an implementation for meeting the specification in Fig. 1 with some
restrictions has been proposed. We describe this next.
### $NEUROLOG$
NEUROLOG (Tsamoura and Michael 2021) is an implementation for learning
hierarchical neuro-symbolic models, with the following characteristics:
1. (a)
The task $T$ is a classification task, with the dependent variable $Y$ taking
values from the set ${\cal Y}$, which has some distinguished class-label
$\bot$;
2. (b)
${\cal J}$ is a set representing the data in a propositional logic. In ML
terms, this entails representing the data by a finite set of features
${\cal F}$ = $\\{f_{1},f_{2},\ldots,f_{k}\\}$. For simplicity of exposition,
we will take these features to be Boolean-valued, and ${\cal
J}=\\{0,1\\}^{k}$;
3. (c)
$s$ is known a priori. For each value $y\in{\cal Y}$, $s$ is assumed to be of
the form:222 Here, ${\mathbf{f}}$ is a $k$-tuple of values assigned to
$f_{1},\ldots,f_{k}$ ($f=0$ denotes $f$ is $false$, and $f=1$ denotes $f$ is
$true$); and $s_{y}({\mathbf{f}})$ = ($c_{1,y}|_{\mathbf{f}}\vee\allowbreak
c_{2,y}|_{\mathbf{f}}\vee\allowbreak\cdots\vee\allowbreak
c_{n_{y},y}|_{\mathbf{f}}$). The $c_{i,y}$ are conjunctive formulae defined
over values of features in ${\cal F}$.
$s({\mathbf{f}})=\begin{cases}y_{1}&{\mathrm{if}}~{}s_{y_{1}}({\mathbf{f}})=true\\\ y_{2}&{\mathrm{if}}~{}s_{y_{2}}({\mathbf{f}})=true\\\ \cdots&\\\ \bot&{\mathrm{otherwise}}\end{cases}$
4. (d)
For each feature $f\in{\cal F}$, there are task-specific pre-processing
functions $p_{T,f}:{\cal X}\mapsto X_{f}$. These are used to identify
subsets of the raw data where the feature $f$ takes specific values (like $0$
or $1$). Additionally, an overall task-specific function $p_{T}:{\cal
X}\mapsto X_{f_{1}}\times X_{f_{2}}\times\cdots\times X_{f_{k}}$ is defined as
$p_{T}(x)=(p_{T,f_{1}}(x),p_{T,f_{2}}(x),\cdots,p_{T,f_{k}}(x))$. The example below
will make this clearer;
5. (e)
The neural function $n$ to be identified is now a composition of $p_{T}$ with
$n^{\prime}:X_{f_{1}}\times X_{f_{2}}\times\cdots\times
X_{f_{k}}\mapsto\\{0,1\\}^{k}$. That is, for $x\in{\cal X}$, the prediction
$Y=NEUROLOG(x)$, where $NEUROLOG(x)=s(n^{\prime}(p_{T}(x)))$;
6. (f)
Given a training data-instance $(x_{i},y_{i})$ the implementation
progressively updates parameters of the neural network implementing
$n^{\prime}$ by computing a ‘semantic loss’ $L(y_{i},NEUROLOG(x_{i}))$
that uses abductive feedback from the conjuncts in $s_{y_{i}}$.
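To make (c) and (f) concrete, here is a minimal sketch, not the authors' implementation: class definitions are disjunctions of conjuncts over Boolean features, and the semantic loss is taken as the negative log-probability, under the extractor's per-feature probabilities, of producing any feature assignment that yields the target label (the quantity NEUROLOG obtains via abduction; the two-feature theory below is hypothetical):

```python
import math
from itertools import product

K = 2  # number of Boolean features

# A conjunct maps feature indices to required values; a class definition
# s_y is a disjunction of such conjuncts (hypothetical toy theory).
THEORY = {
    "A": [{0: 1, 1: 1}],                # s_A: f0 and f1
    "B": [{0: 1, 1: 0}, {0: 0, 1: 1}],  # s_B: (f0 and not f1) or (not f0 and f1)
}

def s(f):
    """Symbolic layer: first class whose definition f satisfies, else bottom."""
    for label, conjuncts in THEORY.items():
        if any(all(f[i] == v for i, v in c.items()) for c in conjuncts):
            return label
    return "bot"

def semantic_loss(probs, label):
    """-log P(extracted features yield `label`), assuming independent
    per-feature probabilities; enumerating the satisfying assignments
    plays the role of the abductive feedback."""
    sat = sum(
        math.prod(p if b else 1 - p for p, b in zip(probs, f))
        for f in product([0, 1], repeat=K)
        if s(f) == label
    )
    return -math.log(sat)

print(s((1, 1)))                                 # prints A
print(round(semantic_loss([0.9, 0.9], "A"), 3))  # prints 0.211
```

Minimising this loss pushes the extractor's feature probabilities toward some assignment consistent with the observed class label, without ever observing feature labels directly.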
###### Example 1 ($NEUROLOG$)
In (Tsamoura and Michael 2021), the authors describe the implementation of
$NEUROLOG$ using an example of a 3 $\times$ 3 chess-board with 1 black and 2
distinct white pieces. The authors define a set of atomic predicates of the
form $at(P,(X,Y))$, with $X,Y\in\\{1,2,3\\}$. Each such grounded predicate indicates
the presence of a chess piece at position (X,Y) where
$P\in\\{b(k),w(k),w(q),w(r),w(b),w(n),w(p)\\}$. b(.) refers to a black piece,
w(.) refers to a white piece and k,q,r,b,n,p refer to king, queen, rook,
bishop, knight and pawn respectively. Additionally, the authors provide a
logical theory $T$ that captures domain knowledge about the problem. The
authors use a neural network to embed raw data into symbolic representations
that can be used by the theory. The task specific preprocessing function
$p_{T}$ involves segmenting each of the 9 blocks (features) on the 3 $\times$
3 chess board into separate inputs.
Figure 2: (a) $NEUROLOG$: the image input is pre-processed by segmenting out
individual squares, thereby making the downstream learning with semantic loss
easier. The diagram is for the chess problem described in (Tsamoura and
Michael 2021). (b) ${NEUROLOG}^{-}$: the image input is no longer pre-
processed but the feature extraction network is potentially imperfect.
The task-specific pre-processing can be seen as a form of prior information
about the features; for realistic applications, however, it may not always be
obvious what these functions should be. We consider pre-training as a simple
alternative for introducing prior information about the features.
### ${NEUROLOG}$ without Pre-Processing
What is to be done if it is not possible to specify or implement the task-
specific pre-processor $p_{T}$? Since the pre-processing step essentially
constitutes prior information about the task, we can distinguish two kinds of
implementations of $n$: one that is obtained with no prior knowledge
(more precisely, an uninformed prior); and one that is obtained without task-
specific pre-processing but with task-independent prior information. For
simplicity, we will denote these two models as implementing the functions
$n_{0}$ (no prior) and $n_{1}$ (task-independent prior). The corresponding
neuro-symbolic model are $s(n_{0}(\cdot))$ and $s(n_{1}(\cdot))$, which we
denote by ${NEUROLOG}_{0}^{-}$ and ${NEUROLOG}_{1}^{-}$ (the minus superscript
signifying that it denotes ${NEUROLOG}$ without pre-processing).
One option for $n_{1}$ is to start with prior values for parameters obtained
from some form of pre-training. That is, suppose there exists an
implementation $\hat{n}_{1}:\hat{{\cal X}}\mapsto\\{0,1\\}^{k}$, and the
parameters of this model are modified using the data drawn from ${\cal X}$ to
yield $n_{1}:{\cal X}\mapsto\\{0,1\\}^{k}$ that is reasonably consistent
(defined using the loss function) with the constraints imposed by $s$. This
may require ${\cal X}\subseteq\hat{{\cal X}}$, or at least a sufficiently large
overlap between ${\cal X}$ and $\hat{{\cal X}}$. Further, we would expect the
neuro-symbolic model obtained in this way not to be as effective as one with
task-specific pre-processing. But it allows the practically useful prospect
of implementing a form of ‘feature-adaptation’ for $NEUROLOG$ (see Fig. 2). The
extent to which this can help when pre-processing is not possible is the focus
of the experimental investigation below.
## Experimental Evaluation
### Aims
For brevity, we use $N(\cdot)$ to denote $NEUROLOG$ with pre-processing;
$N_{0}^{-}$ to denote ${NEUROLOG}_{0}^{-}$ (that is, $NEUROLOG$ without pre-
processing and without a pre-trained feature-extractor); and $N_{1}^{-}$ to
denote ${NEUROLOG}_{1}^{-}$ (that is, $NEUROLOG$ without pre-processing and
with a pre-trained feature-extractor). We aim to investigate the following
conjectures:
Pre-Processing.
If pre-processing is possible, then models constructed by $N$ are better than
those constructed by $N_{0}^{-}$ and $N_{1}^{-}$;
Pre-Training.
If pre-processing is not possible, then models constructed by $N_{1}^{-}$ are
better than models constructed by $N_{0}^{-}$.
The following clarifications are necessary: (a) By “better”, we will mean
higher predictive accuracy and higher explanatory fidelity (a definition for
this is in the Methods section); and (b) Using synthetic problems and
simulated data, we are able to obtain $N_{1}^{-}$ models starting from
progressively poorer pre-trained models. This allows us to examine
qualifications to the conjectures, based on the corresponding $N_{1}^{-}$
models.
### Materials
#### Problems
We report results from experiments conducted on two different problem domains:
the Chess problem reported in (Tsamoura and Michael 2021) and synthetic time-
series data obtained by controlled simulation.
Chess. We refer the reader to (Tsamoura and Michael 2021) for details of this
problem and simply summarise the main aspects here. The raw data consists of a
3 $\times$ 3 chess board with 3 chess pieces – a black king and two distinct
white pieces. Task-specific pre-processing involves a ‘segmentation’ step that
separates each data-instance into the 9 squares in a pre-specified order. Each
square corresponds to a (multi-valued) feature, with values indicating whether
it is empty, or which of 7 pieces it has (black king, white king, white queen,
white rook, white bishop, white knight, white pawn). The logical
theory encodes the conditions for class-labels denoting: safe, draw, and mate
according to the usual rules of chess (or ‘illegal’ otherwise). In the
terminology of this paper, this corresponds to $m$ definitions for each
class-label (the number $m$ is different for each class) in terms of a
disjunction of conjunctions of the 9 features (with $\bot$ = ‘illegal’).
We use the data provided with, and code adapted from, the authors’
implementation333https://bitbucket.org/tsamoura/neurolog/src/master/ of
$NEUROLOG$ for our experiments.
Time Series. We generate synthetic examples of simple time-series data, to
resemble aspects of the Chess problem. Each time-series consists of three
features in the form of different shapes, sampled from a pool of nine shapes:
{Blank, SemiCircle, SquareWave, Triangle, Gaussian, Quadrant, Trapezium,
Quatrefoil, W-wave}. Each shape spans 50 time steps, and three of these
shapes are sampled and concatenated end-to-end to form one cycle. One time
series is a periodic repetition of approximately 1.5 such cycles, extending
over 256 time-steps. An example of some randomly generated time-series is
shown in Fig. 3.
Figure 3: (a, b) Examples of randomly generated train and test time-series
data for the conjunct (Gaussian $\land$ Quatrefoil $\land$ Quadrant). (c, d)
Examples of randomly generated train and test time-series data for the
conjunct (SquareWave $\land$ W_wave $\land$ Gaussian). Note that the order in
which the shapes occur can be different between a train and test data point,
as in (c, d).
As with the Chess data, it is possible to devise a ‘segmented’ input for
$NEUROLOG$ that converts a time-series into 3 features in a pre-defined order
(left-to-right; $f_{1}$, $f_{2}$, $f_{3}$). A symbolic ‘theory’ about the data
is constructed as follows, by assuming a set of 4 class-labels (A,B,C, and D).
A definition of a class-label essentially consists of a random number of
conjuncts, each with 3 randomly chosen shapes (the details are in “Methods”).
Data for each class label are then generated to be consistent with the target
theory for that class label.
#### Machines and Environments
All the experiments were run end-to-end on machines with Intel(R) Xeon(R)
E5-2698 v4 CPUs with an allocation of 8GB of memory. We use PyTorch to program
the neural networks. $NEUROLOG$ uses A-System (Nuffelen and Kakas 2001) running
over SICStus Prolog 4.5.1 for the abduction process. However, for our
experiments, we cache the abductive feedback for all three classes in the
chess domain in text files and use those for calculating the semantic loss.
### Methods
We first describe some procedures required for conducting the experiments.
#### Generation of Time-Series Problems
Recall the time-series problems consist of a periodically repeating pattern of
3 shapes drawn from a possible set of 9 shapes. For the experiments, we need
a symbolic theory and data instances (time-series) consistent with the
symbolic theory. For our experiments, the symbolic theory $s$ will assign one
of 4 class-labels to data instances. The definition used by $s$ for each
class-label is obtained as follows:
1. 1.
We randomly partition the set of nine shapes into three groups of three shapes
each, and assign each group to one of the three features. Each feature can
only take a shape from its assigned group as a possible value.
2. 2.
Next, we randomly sample one shape from each of the three groups. The
combination of these three shapes forms a randomly sampled conjunct.
3. 3.
Next, we decide the number of conjuncts to be assigned to each class. We
define the parameters lower bound and upper bound. For each class in the
theory, the number of conjuncts to be assigned is determined by randomly
sampling a number from [lower bound, upper bound). Note that different classes
can have different numbers of conjuncts.
4. 4.
For each class, we randomly sample the conjuncts required as discussed in Step
2. We take care that conjuncts do not overlap between classes.
An example of a randomly sampled $s$ is given below:
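One illustrative draw, produced by a sketch of Steps 1 to 4 (the grouping and conjuncts are whatever the seed yields, not a theory used in our experiments; the sketch uses smaller conjunct counts than the experiments' lower bound = 5 and upper bound = 8, so the toy rejection loop cannot run out of distinct conjuncts):

```python
import random

SHAPES = ["Blank", "SemiCircle", "SquareWave", "Triangle", "Gaussian",
          "Quadrant", "Trapezium", "Quatrefoil", "W-wave"]

def sample_theory(classes=("A", "B", "C", "D"), lower=2, upper=4, seed=0):
    rng = random.Random(seed)
    # Step 1: randomly partition the nine shapes into three groups of three,
    # one group per feature.
    pool = SHAPES[:]
    rng.shuffle(pool)
    groups = [pool[0:3], pool[3:6], pool[6:9]]
    # Steps 2-4: per class, draw a number of conjuncts from [lower, upper),
    # each conjunct taking one shape per group; no conjunct repeats anywhere.
    used, theory = set(), {}
    for c in classes:
        n_conj = rng.randrange(lower, upper)
        conjuncts = []
        while len(conjuncts) < n_conj:
            conj = tuple(rng.choice(g) for g in groups)
            if conj not in used:
                used.add(conj)
                conjuncts.append(conj)
        theory[c] = conjuncts
    return groups, theory

groups, theory = sample_theory()
for label, conjuncts in theory.items():
    print(label, "<-", " | ".join(" & ".join(c) for c in conjuncts))
```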
The result of these steps is a randomly drawn symbolic theory about the data-
instances. Using this theory, data-instances consistent with the theory are
generated as follows:
1. 1.
For each conjunct assigned to a particular class, we generate a cycle by
concatenating the shapes in the conjunct. Let’s denote these three shapes as
$\psi_{1}$, $\psi_{2}$ and $\psi_{3}$.
2. 2.
To differentiate the test data from the train data, we concatenate the shapes
in the order $\psi_{1},\psi_{2},\psi_{3}$ for the training data, whereas for
test data, 30% of the time series are constructed by concatenating them in the
same order and the rest are constructed by concatenating them in the
order $\psi_{1},\psi_{3},\psi_{2}$ (see Fig. 3).
3. 3.
The cycle is repeated to get a total of 10000 time steps for the training data
and 750 time steps for testing data.
4. 4.
We add Gaussian noise to these time series to add randomness.
5. 5.
These time series are then cut into non-overlapping sequences of length 256
each giving us a total of 39 samples per conjunct for training data and 9
samples for testing data. Note that the total amount of training and test data
for the time-series experiments depends on the total number of conjuncts in
the theory, which varies across random runs.
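The generation steps above can be sketched as follows (a simplified, pure-Python reconstruction; the three waveforms are illustrative stand-ins for the paper's shape pool, and the test-time re-ordering of shapes is omitted):

```python
import math
import random

STEPS = 50  # each shape spans 50 time steps

def gaussian(i):  return math.exp(-((i / (STEPS - 1) - 0.5) ** 2) / 0.02)
def square(i):    return 1.0 if i < STEPS // 2 else 0.0
def triangle(i):  return 1.0 - abs(2 * i / (STEPS - 1) - 1)

# Illustrative waveforms standing in for three of the nine shapes.
WAVEFORMS = {
    "Gaussian":   [gaussian(i) for i in range(STEPS)],
    "SquareWave": [square(i)   for i in range(STEPS)],
    "Triangle":   [triangle(i) for i in range(STEPS)],
}

def series_for_conjunct(conjunct, total_steps, noise=0.05, seed=0):
    """One cycle per conjunct (shapes end-to-end), repeated to total_steps,
    with Gaussian noise added (generation steps 1-4)."""
    rng = random.Random(seed)
    cycle = [v for shape in conjunct for v in WAVEFORMS[shape]]
    series = [cycle[i % len(cycle)] for i in range(total_steps)]
    return [v + rng.gauss(0.0, noise) for v in series]

def cut_windows(series, length=256):
    """Cut into non-overlapping windows of the given length (step 5)."""
    n = len(series) // length
    return [series[i * length:(i + 1) * length] for i in range(n)]

train = series_for_conjunct(("Gaussian", "SquareWave", "Triangle"), 10000)
windows = cut_windows(train)
print(len(windows), len(windows[0]))  # prints: 39 256
```

With 10000 training time steps, the windowing yields the 39 training samples per conjunct mentioned above.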
#### Task-independent Pre-training
The steps followed for testing the conjectures are straightforward:
* Repeat $R$ times:
1. 1.
For $T=$ $Chess$, $Time-Series$
1. (a)
Randomly generate training and test data as described above (applicable only
for time series data)
2. (b)
Let $s_{T}$ be the symbolic theory for task $T$
3. (c)
Let $p_{T}$ be the task-specific pre-processor for task $T$ and $N$ denote the
$NEUROLOG$ model obtained using the training data, along with $s_{T}$ and the
pre-processor $p_{T}$
4. (d)
Let $\hat{n}_{\alpha}$ be an approximate pre-trained model for the features
($\alpha$ denotes an approximation parameter: see details below) and
$N_{1,\alpha}^{-}$ be the ${NEUROLOG}^{-}$ model obtained using the training
data, along with $s_{T}$ and with parameters of the neural feature-extractor
initialised using the parameters of $\hat{n}_{\alpha}$
5. (e)
Let $N_{0}^{-}$ be the ${NEUROLOG}^{-}$ model obtained using the training
data, along with $s_{T}$ and with parameters of a completely untrained neural
feature-extractor;
6. (f)
Record estimates of predictive accuracy and explanatory fidelity of $N$,
$N_{1,\alpha}^{-}$, and $N_{0}^{-}$ on the test data (see details below)
2. 2.
Test the “Pre-Processing” conjecture by comparing the (mean) estimates of
predictive accuracy and explanatory fidelity for $N$ against those obtained
for $N_{1,\cdot}^{-}$ and $N_{0}^{-}$
3. 3.
Test the “Pre-Training” conjecture by comparing the (mean) estimates of
predictive accuracy and explanatory fidelity for $N_{1,\cdot}^{-}$ against
those obtained for $N_{0}^{-}$.
The following details are relevant:
* •
For all experiments $R=5$;
* •
Training and test data sizes for Chess are 9000 and 2000 respectively. For
Time Series the corresponding numbers are about 1000 and 200 (numbers vary due
to the theory being sampled randomly). We use lower bound = 5 and upper bound
= 8.
* •
We restrict $\alpha$ to $0.1$, $0.2$ and $0.3$, denoting low, medium and high
levels of difference from the true feature detector.
* •
Predictive accuracy is the usual ratio of correct predictions to total
instances predicted. The estimates are from predictions on the test data
instances.
* •
Explanatory fidelity refers to the ratio of correctly explained instances to
the total number of instances. An instance is correctly explained if the
conjunct used to generate the class label is the (only) conjunct that is
determined to be true given the features extracted by the neural layer;
* •
We report the metrics after taking a running average over a window of 100.
* •
Model performance is represented by the pair $(P,E)$, where $P$ denotes
predictive accuracy and $E$ denotes explanatory fidelity. Let model $M1$ have
performance $(P1,E1)$ and model $M2$ have performance $(P2,E2)$. If $P1>P2$
and $E1>E2$ then we will say model $M1$ is better than model $M2$.
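The two metrics can be computed as in this sketch (the instance layout and helper names are hypothetical, not from the authors' code; an instance counts as correctly explained only when its generating conjunct is the unique conjunct satisfied by the extracted features):

```python
def satisfied_conjuncts(conjuncts, features):
    """Indices of conjuncts made true by the extracted feature values."""
    return [i for i, c in enumerate(conjuncts)
            if all(features[f] == v for f, v in c.items())]

def evaluate(instances):
    """Each instance: (true_label, predicted_label, generating_conjunct_index,
    class_conjuncts, extracted_features). Returns (P, E)."""
    correct = explained = 0
    for y, y_hat, gen_idx, conjuncts, feats in instances:
        correct += (y == y_hat)
        explained += (satisfied_conjuncts(conjuncts, feats) == [gen_idx])
    n = len(instances)
    return correct / n, explained / n

# Two toy instances over Boolean features f0, f1:
conjs = [{0: 1, 1: 1}, {0: 0, 1: 1}]
data = [
    ("A", "A", 0, conjs, {0: 1, 1: 1}),  # right label, right conjunct
    ("A", "A", 0, conjs, {0: 0, 1: 1}),  # right label, wrong conjunct
]
P, E = evaluate(data)
print(P, E)  # prints: 1.0 0.5
```

The second instance illustrates the "correct prediction, wrong explanation" gap discussed in the Results.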
| | Pred Acc.(%) | Expl Fid.(%)
---|---|---|---
$N$ | | 95.41 (0.66) | 94.67 (0.69)
$N_{1,\alpha}^{-}$ | $\alpha=0.1$ | 84.71 (1.34) | 62.71 (3.80)
$\alpha=0.2$ | 80.31 (1.67) | 48.12 (5.07)
$\alpha=0.3$ | 78.79 (2.68) | 43.32 (4.17)
$N_{0}^{-}$ | | 79.88 (1.57) | 0.10 (0.13)
(a) Chess
| | Pred Acc.(%) | Expl Fid.(%)
---|---|---|---
$N$ | | 100 (0.00) | 100 (0.00)
$N_{1,\alpha}^{-}$ | $\alpha=0.1$ | 82.07 (6.59) | 76.05 (10.23)
$\alpha=0.2$ | 81.48 (4.44) | 76.08 (7.78)
$\alpha=0.3$ | 81.56 (4.61) | 75.37 (10.28)
$N_{0}^{-}$ | | 75.79 (11.86) | 36.81 (16.85)
(b) Time Series
Table 1: Performance of models with pre-processing ($N$); without pre-
processing but with pre-training ($N_{1,\alpha}^{-}$); without pre-processing
and without pre-training ($N_{0}^{-}$). “Pred. Acc.” refers to (mean)
predictive accuracy; “Expl. Fid.” refers to (mean) explanatory fidelity. The
number in brackets is the standard deviation. Lower values of $\alpha$
indicate that the pre-trained approximation $\hat{n}_{\alpha}$ is closer to
the correct model for feature-subset identification (the values of $\alpha$
correspond to low, medium, and high levels of difference between predictions
from $\hat{n}$ and the correct prediction).
#### Approximate Pre-Training ($\alpha$)
Approximate models for pre-training are generated by deliberately altering
feature-labels. The value of $\alpha$ defines the fraction of feature labels
that are corrupted during pre-training.
* •
For the chess experiment, the position and identity of each of the three
pieces has a probability of $\alpha$ of being changed to a randomly chosen
alternative value.
* •
For the time series experiment, the label for each of the three shapes has a
probability of $\alpha$ of being changed to one of the other two possible
values (i.e. the remaining two values from the same ‘group’).
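The time-series corruption scheme can be sketched as follows (the shape groups shown are hypothetical):

```python
import random

def corrupt_labels(labels, groups, alpha, seed=0):
    """With probability alpha, replace each feature label by one of the other
    values from that feature's group (the time-series corruption scheme)."""
    rng = random.Random(seed)
    out = []
    for label, group in zip(labels, groups):
        if rng.random() < alpha:
            out.append(rng.choice([g for g in group if g != label]))
        else:
            out.append(label)
    return out

groups = [["Gaussian", "SquareWave", "Triangle"],
          ["Blank", "SemiCircle", "Quadrant"],
          ["Trapezium", "Quatrefoil", "W-wave"]]
clean = ["Gaussian", "Blank", "W-wave"]
noisy = corrupt_labels(clean, groups, alpha=0.3)
# A corrupted label always stays within its feature's group.
assert all(n in g for n, g in zip(noisy, groups))
```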
### Results
Figure 4: Increase in prediction accuracy and explanatory fidelity of the
approximate models after fine-tuning with the semantic loss.
The main results of the empirical study are tabulated in Tables 1(a), (b). The
tabulations suggest the following: (a) If pre-processing is possible, then
clearly better models (that is, with higher predictive accuracy and higher
explanatory fidelity) result from using task-specific pre-processing than from
using task-independent pre-training. (b) If pre-processing is not possible,
then it is usually better to employ a pre-trained model than to start with an
uninformed model; (c) As expected, as the pre-trained approximation becomes
progressively uninformative (medium to high values of $\alpha$), performance
on predictive and explanatory fronts decreases. However, even with a fairly
poor initial approximation, explanatory fidelity remains substantially better
than starting with an uninformed model; and (d) The loss in performance due to
lack of pre-processing can be offset to some extent through the use of pre-
training, especially with a good initial approximation (low value of
$\alpha$). Taken together, these results provide empirical support for the
Pre-Processing and Pre-Training conjectures.
We now turn to a more detailed finding from the experiments for cases when
pre-processing is not an option. It is evident from Table 1 that there is a
substantial ‘gap’ between the predictive accuracy and explanatory fidelity
estimates. This is more pronounced as the pre-trained approximator gets worse,
and especially obvious for the model $N_{0}^{-}$ that does not employ a pre-
trained approximator. That is, there are more predictions that are correct,
but for the wrong reasons. We find that this is because, without sufficient
prior information about the features, the neural network’s training process
allows convergence to arbitrary conjuncts that are associated with the
correct class label. This tends to keep predictive accuracy from falling, but
results in substantially poorer accuracy of feature-identification.
This is shown for Chess in Table 2, where the effect is especially pronounced
(the results from the model with pre-processing are shown purely for
reference). We also show the comparison between the approximate models
$n_{\alpha}$ and $N_{1,\alpha}^{-}$ in Fig. 4. The increase in explanatory
fidelity is clearly visible and is more pronounced in the chess
domain.
Model | | $f_{1}$ | $f_{2}$ | $f_{3}$ | $f_{4}$ | $f_{5}$ | $f_{6}$ | $f_{7}$ | $f_{8}$ | $f_{9}$
---|---|---|---|---|---|---|---|---|---|---
$N_{1,\alpha}^{-}$ | $\alpha=0.1$ | 0.94 | 0.95 | 0.91 | 0.96 | 0.98 | 0.95 | 0.95 | 0.95 | 0.94
 | $\alpha=0.2$ | 0.94 | 0.93 | 0.89 | 0.94 | 0.97 | 0.91 | 0.91 | 0.93 | 0.92
 | $\alpha=0.3$ | 0.91 | 0.94 | 0.85 | 0.92 | 0.89 | 0.91 | 0.88 | 0.86 | 0.92
$N_{0}^{-}$ | | 0.22 | 0.13 | 0.29 | 0.50 | 0.58 | 0.30 | 0.30 | 0.25 | 0.36
$N$ | | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
Table 2: Comparing mean F1-scores for feature-prediction in the chess domain.
The models are as in Table 1.
The symbolic theory for Chess is significantly more complex than the ones used
for the Time Series data (the conjunct size in the Chess theory is 9, compared
to 3 for Time Series, and the number of conjuncts belonging to each class in
Chess is also significantly higher). We conjecture that if pre-processing is
not possible, then for complex theories pre-training to some extent may be
needed to attain reasonable levels of explanatory fidelity.
## Related Work
While the impressive performance of deep learning architectures underscores
the pattern recognition capabilities of neural networks, deep models still
struggle to imbibe explicit domain knowledge and perform logical reasoning.
Symbolic systems on the other hand are adept at reasoning over explicit
knowledge but can only ingest symbolic inputs. The community has been striving
to combine the pattern recognition capabilities of neural networks with the
reasoning and knowledge representation power of symbolic techniques. Towards
this end, one line of attack has been to feed knowledge rich symbolic features
explicitly as inputs to a neural network instead of relying on the network to
learn these features from low level data (Lodhi 2013; Dash et al. 2018; Dash,
Srinivasan, and Vig 2021). These techniques are useful when i) Knowledge
regarding high level symbolic features is easily accessible and ii) The high
level features are easily computed from low level data.
The second line of attack involves using a neural network to ingest noisy
unstructured real world data (Images, Time Series, Speech or Text) and predict
the symbolic features that can be ingested by a symbolic model (Sunder et al.
2019). In many situations, although knowledge about the appropriate symbolic
features is available and a symbolic theory for making predictions exists, the
raw input data is not easily transformed into the symbolic inputs necessary
for the symbolic theory. Doing so accurately via neural networks would require
a large volume of annotated data for each symbolic input which is often
infeasible to obtain for a new domain. Recent efforts towards end-to-end neuro-
symbolic training (Manhaeve et al. 2018; Tsamoura and Michael 2021) aim to
address this limitation by obviating the need to learn neural feature detectors
in isolation prior to integration with the symbolic theory. This paper is
concerned with this type of neuro-symbolic architecture.
Among systems that attempt to replace symbolic computations with
differentiable functions, Gaunt et al. (2017) develop a framework for the
creation of end-to-end trainable systems that learn to write interpretable
algorithms with perceptual components; however, the transformation of the
theory into differentiable functions works only for a subset of possible
theories. Logic Tensor Networks (LTNs) (Donadello, Serafini, and d’Avila Garcez
2017) integrate neural networks with first-order fuzzy logic to allow for
logically constrained learning from noisy data. Along similar lines, the
DeepProbLog (Manhaeve et al. 2018) system introduces the notion of neural
predicates to probabilistic logic programming (Raedt, Kimmig, and Toivonen
2007), which allows for backpropagation across both the neural and symbolic
modules. The ABL (Dai et al. 2019) system was the first to use abductive
reasoning to refine neural predictions via consistency checking between the
predicted neural features and the theory. This system was recently refined by
using a similarity based consistency optimization (Huang et al. 2021), that
relies on assumptions about the inter class and intra class neural features.
Recently Abductive Meta-Interpretive Learning(Dai and Muggleton 2021) was
proposed to jointly learn neural feature predictors and the logical theory via
induction and abduction. While the above abduction based approaches require
iterative retraining of the neural module, and only utilize some of the
abduced inputs for backpropagation, the NEUROLOG system (Tsamoura and Michael
2021) uses the complete set of abduced inputs and backpropagates errors using
semantic loss (Xu et al. 2018), which more fully captures the theory's semantics.
## Concluding Remarks
A limitation of prior abduction based neuro-symbolic approaches is that they
only experiment with cleanly delineated inputs and the neural network does not
have the additional burden of learning where to attend while predicting a
feature. However, in many situations such feature delineation is not possible.
Consider a dataset of images containing multiple objects at random locations
and a logical feature corresponding to whether the number of cars in an image
is greater than a threshold. In this case it is not obvious how one would
slice the image, and therefore the burden falls on the neural network to
extract this feature directly from the full image. We demonstrate with
NEUROLOG that for such cases where the network is forced to operate on the
full inputs, and is trained only with semantic loss, it fails to learn the
correct sub-symbolic features but can still yield high predictive accuracy. We
further show that the resulting model can be improved to a degree with even
partially pre-trained neural models with the extent of the improvement
dependent on the quality of the initial pre-training. This is very useful for
situations where the same theory applies to multiple domains with varying
distributions of sub-symbolic features. The imperfect feature detectors
trained on a different distribution can be significantly improved upon with
semantic loss and abductive feedback. The results from our experiments
reinforce the message: Pre-process. Otherwise pre-train.
## References
* Dai and Muggleton (2021) Dai, W.; and Muggleton, S. H. 2021. Abductive Knowledge Induction from Raw Data. In Zhou, Z., ed., _Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021_ , 1845–1851. ijcai.org.
* Dai et al. (2019) Dai, W.; Xu, Q.; Yu, Y.; and Zhou, Z. 2019. Bridging Machine Learning and Logical Reasoning by Abductive Learning. In Wallach, H. M.; Larochelle, H.; Beygelzimer, A.; d’Alché-Buc, F.; Fox, E. B.; and Garnett, R., eds., _Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada_ , 2811–2822.
* Dash et al. (2022) Dash, T.; Chitlangia, S.; Ahuja, A.; and Srinivasan, A. 2022. A review of some techniques for inclusion of domain-knowledge into deep neural networks. _Scientific Reports_ , 12.
* Dash, Srinivasan, and Vig (2021) Dash, T.; Srinivasan, A.; and Vig, L. 2021. Incorporating symbolic domain knowledge into graph neural networks. _Mach. Learn._ , 110(7): 1609–1636.
* Dash et al. (2018) Dash, T.; Srinivasan, A.; Vig, L.; Orhobor, O. I.; and King, R. D. 2018. Large-Scale Assessment of Deep Relational Machines. In Riguzzi, F.; Bellodi, E.; and Zese, R., eds., _Inductive Logic Programming - 28th International Conference, ILP 2018, Ferrara, Italy, September 2-4, 2018, Proceedings_ , volume 11105 of _Lecture Notes in Computer Science_ , 22–37. Springer.
* Donadello, Serafini, and d’Avila Garcez (2017) Donadello, I.; Serafini, L.; and d’Avila Garcez, A. S. 2017. Logic Tensor Networks for Semantic Image Interpretation. In Sierra, C., ed., _Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017_ , 1596–1602. ijcai.org.
* Gaunt et al. (2017) Gaunt, A. L.; Brockschmidt, M.; Kushman, N.; and Tarlow, D. 2017. Differentiable Programs with Neural Libraries. In Precup, D.; and Teh, Y. W., eds., _Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017_ , volume 70 of _Proceedings of Machine Learning Research_ , 1213–1222. PMLR.
* Huang et al. (2021) Huang, Y.-X.; Dai, W.-Z.; Cai, L.-W.; Muggleton, S.; and Jiang, Y. 2021. Fast Abductive Learning by Similarity-based Consistency Optimization. In Beygelzimer, A.; Dauphin, Y.; Liang, P.; and Vaughan, J. W., eds., _Advances in Neural Information Processing Systems_.
* Lodhi (2013) Lodhi, H. 2013. Deep Relational Machines. In _ICONIP_.
* Manhaeve et al. (2018) Manhaeve, R.; Dumancic, S.; Kimmig, A.; Demeester, T.; and De Raedt, L. 2018. DeepProbLog: Neural Probabilistic Logic Programming. In Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., _Advances in Neural Information Processing Systems_ , volume 31. Curran Associates, Inc.
* Muggleton and Michie (1997) Muggleton, S.; and Michie, D. 1997. Machine Intelligibility and the Duality Principle. In _Software Agents and Soft Computing: Towards Enhancing Machine Intelligence, Concepts and Applications_ , 276–292. Berlin, Heidelberg: Springer-Verlag. ISBN 3540625607.
* Nuffelen and Kakas (2001) Nuffelen, B. V.; and Kakas, A. C. 2001. A-system: Declarative Programming with Abduction. In _LPNMR_.
* Raedt, Kimmig, and Toivonen (2007) Raedt, L. D.; Kimmig, A.; and Toivonen, H. 2007. ProbLog: A Probabilistic Prolog and Its Application in Link Discovery. In Veloso, M. M., ed., _IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007_ , 2462–2467.
* Ribeiro, Singh, and Guestrin (2016) Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2016. ”Why Should I Trust You?”: Explaining the Predictions of Any Classifier. _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_.
* Schwitter (2010) Schwitter, R. 2010. Controlled Natural Languages for Knowledge Representation. In _Coling 2010: Posters_ , 1113–1121. Beijing, China: Coling 2010 Organizing Committee.
* Sunder et al. (2019) Sunder, V.; Srinivasan, A.; Vig, L.; Shroff, G.; and Rahul, R. 2019. One-shot Information Extraction from Document Images using Neuro-Deductive Program Synthesis. _CoRR_ , abs/1906.02427.
* Tsamoura and Michael (2021) Tsamoura, E.; and Michael, L. 2021. Neural-Symbolic Integration: A Compositional Perspective. In _AAAI Conference on Artificial Intelligence_.
* Xu et al. (2018) Xu, J.; Zhang, Z.; Friedman, T.; Liang, Y.; and den Broeck, G. V. 2018. A Semantic Loss Function for Deep Learning with Symbolic Knowledge. In Dy, J. G.; and Krause, A., eds., _Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018_ , volume 80 of _Proceedings of Machine Learning Research_ , 5498–5507. PMLR.
## Appendix
## Data
### Using MNIST Digits for Chess Data
As in the original NEUROLOG experiments
(https://bitbucket.org/tsamoura/neurolog/src/master/), we use the MNIST
handwritten digits dataset (http://yann.lecun.com/exdb/mnist/) for the
experiments on the chess domain. Each of the eight possible values
for a position on the 3 $\times$ 3 chess board is mapped to a number from 0 to
7 as follows: Empty - 0, Black King - 1, White Rook - 2, White Bishop - 3,
White Knight - 4, White King - 5, White Pawn - 6, White Queen - 7. An
image of a chess board is made by placing an instance of an MNIST digit image
corresponding to each chess piece (or Empty) at its
respective position. Each MNIST image has dimensions 28 $\times$ 28;
hence, an image of a chess board has dimensions 84 $\times$ 84. Fig. 5
shows an example image of a chess board in a given configuration.
Figure 5: Image used for a chess board having a White Bishop at $(1,1)$, a
White Queen at $(1,2)$ and a Black King at $(3,1)$
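The board-composition step above can be sketched in Python as follows; the function name and digit-bank representation are illustrative, not from the released NEUROLOG code.

```python
import random

TILE = 28            # each MNIST digit image is 28 x 28
BOARD = 3            # the board is 3 x 3 squares
SIZE = TILE * BOARD  # composite board image is 84 x 84

# Piece-to-digit mapping from the paper: Empty - 0, Black King - 1, ...
PIECE_TO_DIGIT = {
    "empty": 0, "b(k)": 1, "w(r)": 2, "w(b)": 3,
    "w(n)": 4, "w(k)": 5, "w(p)": 6, "w(q)": 7,
}

def compose_board(config, digit_bank, rng=random):
    """config: {(row, col): piece}; digit_bank: digit -> list of 28x28
    images (lists of lists). Returns an 84x84 image as a list of lists."""
    board = [[0.0] * SIZE for _ in range(SIZE)]
    for row in range(BOARD):
        for col in range(BOARD):
            piece = config.get((row, col), "empty")
            # pick a random MNIST instance of the digit for this piece
            tile = rng.choice(digit_bank[PIECE_TO_DIGIT[piece]])
            for i in range(TILE):
                for j in range(TILE):
                    board[row * TILE + i][col * TILE + j] = tile[i][j]
    return board
```

Squares left out of `config` are treated as Empty and filled with a digit-0 tile.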
### Logical Theory Used for Time Series Experiments
The Methods section in the main paper describes how we randomly sample logical
theories for experiments on time-series data. Table 3 shows one such example
theory, where the Class column contains the four classes that we use for the
time-series domain experiments and the Conjuncts column contains the
disjunction of conjuncts assigned to each class. This theory was sampled using
lower bound = 2 and upper bound = 5. For brevity, we use numbers to represent
shapes: 0 - Blank, 1 - SemiCircle, 2 - Triangle, 3 - Gaussian, 4 - SquareWave,
5 - Quadrant, 6 - Trapezium, 7 - Quatrefoil, 8 - W-wave.
Class | Conjuncts
---|---
A | $(2\land 0\land 7)\lor(4\land 3\land 6)\lor(1\land 5\land 8)$
B | $(4\land 5\land 8)\lor(1\land 0\land 7)$
C | $(2\land 5\land 6)\lor(1\land 3\land 7)\lor(4\land 0\land 8)$
D | $(1\land 5\land 7)\lor(4\land 1\land 6)$
Table 3: Example Theory for an experiment on Time-Series Data
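Class deduction under such a theory amounts to a membership test over the conjuncts; a minimal sketch using the Table 3 theory (the function name is ours):

```python
# Theory from Table 3: each class is a disjunction of conjuncts over the
# three feature shapes (v1, v2, v3), encoded by the paper's shape numbers.
THEORY = {
    "A": [(2, 0, 7), (4, 3, 6), (1, 5, 8)],
    "B": [(4, 5, 8), (1, 0, 7)],
    "C": [(2, 5, 6), (1, 3, 7), (4, 0, 8)],
    "D": [(1, 5, 7), (4, 1, 6)],
}

def deduce_class(features, theory=THEORY):
    """Return the class whose disjunction contains the feature triple,
    or None if no conjunct matches (an invalid prediction)."""
    for label, conjuncts in theory.items():
        if tuple(features) in conjuncts:
            return label
    return None
```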
### Shapes used for Time-Series Experiments
The data used for the time-series experiments are made by composing shapes
drawn from a set of 9 procedurally generated shapes. Each shape spans 50
time-steps. Fig. 6 shows all 9 shapes and the equations used to generate them.
Three such shapes are joined together, and Gaussian noise with mean 0 and
standard deviation 0.05 is added to form the time series.
Figure 6: Plots and generating equations of the 9 shapes used for the
synthetic time-series data: (a) Blank, (b) SemiCircle, (c) Triangle, (d)
Gaussian, (e) SquareWave, (f) Quadrant, (g) Trapezium, (h) Quatrefoil, (i)
W-wave.
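A minimal sketch of the synthesis step, assuming illustrative generators for two of the shapes (the actual equations are those shown in Fig. 6) and the stated noise parameters:

```python
import math
import random

STEPS = 50  # each shape spans 50 time-steps

# Illustrative generators for two of the nine shapes (t in [0, 1));
# the real equations are given in Fig. 6.
def semicircle(t):
    return math.sqrt(max(0.0, 0.25 - (t - 0.5) ** 2))

def triangle(t):
    return 1.0 - abs(2.0 * t - 1.0)

def render(shape_fn):
    """Sample a shape function at 50 evenly spaced time-steps."""
    return [shape_fn(i / STEPS) for i in range(STEPS)]

def make_series(shape_fns, noise_sd=0.05, seed=None):
    """Concatenate three 50-step shapes and add Gaussian noise (mean 0)."""
    rng = random.Random(seed)
    series = []
    for fn in shape_fns:
        series.extend(render(fn))
    return [v + rng.gauss(0.0, noise_sd) for v in series]
```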
### Segmenting Features for NEUROLOG Experiments on Time-Series Data
Running NEUROLOG on the time-series domain requires a clean segmentation of
the three features (shapes) present in a time series. Each shape spans 50
time-steps, and we use this knowledge to perform the segmentation. For NEUROLOG
experiments, we split the training and test series into lengths of 150 instead
of 256. This ensures that the start of each time-series data point coincides
with the start of a shape. We then break the 150 time-steps into 3 consecutive
segments of 50 time-steps each, corresponding to the three features.
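The segmentation itself is simple slicing; a sketch:

```python
SHAPE_LEN = 50  # each shape spans 50 time-steps

def segment(series):
    """Split a 150-step series into its three 50-step feature segments."""
    assert len(series) == 3 * SHAPE_LEN
    return [series[i * SHAPE_LEN:(i + 1) * SHAPE_LEN] for i in range(3)]
```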
## Corrupting Feature Labels
### Corrupting Piece Information in Chess Data
Let $\mathcal{P}$ be the set of all chess pieces in the chess domain, let
$\mathcal{C}$ be the set of board coordinates defined below, and let $\alpha$
be the corruption factor:
$\mathcal{P}=\\{b(k),w(r),w(b),w(n),w(k),w(p),w(q)\\}$
$\mathcal{C}=\\{1,2,3\\}$
In the chess domain, a feature label for a particular instance of a chess
board can be represented as below, where $P_{i}\in\mathcal{P}$ are
the identities of the chess pieces and $X_{i},Y_{i}\in\mathcal{C}$ are their
coordinates on the chess board, with the constraint that one of the pieces is
a black king and the other two pieces are distinct white pieces.
$\\{at(P_{1},(X_{1},Y_{1})),at(P_{2},(X_{2},Y_{2})),at(P_{3},(X_{3},Y_{3}))\\}$
To corrupt the feature label for a particular instance of chess data, the
steps given below are executed, each with a probability of $\alpha$.
$P_{i}\leftarrow p;p\sim\mathcal{P}/\\{P_{i}\\}$ $\forall i\in\\{1,2,3\\}$
$X_{i}\leftarrow x;x\sim\mathcal{C}/\\{X_{i}\\}$ $\forall i\in\\{1,2,3\\}$
$Y_{i}\leftarrow y;y\sim\mathcal{C}/\\{Y_{i}\\}$ $\forall i\in\\{1,2,3\\}$
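The corruption steps above can be sketched as follows (helper names are illustrative): each field of each atom is independently resampled, with probability $\alpha$, from the remaining candidate values.

```python
import random

PIECES = ["b(k)", "w(r)", "w(b)", "w(n)", "w(k)", "w(p)", "w(q)"]
COORDS = [1, 2, 3]

def corrupt_atom(piece, x, y, alpha, rng=random):
    """With probability alpha each, replace the piece identity and each
    coordinate with a value drawn from the remaining candidates."""
    if rng.random() < alpha:
        piece = rng.choice([p for p in PIECES if p != piece])
    if rng.random() < alpha:
        x = rng.choice([c for c in COORDS if c != x])
    if rng.random() < alpha:
        y = rng.choice([c for c in COORDS if c != y])
    return piece, x, y

def corrupt_label(label, alpha, rng=random):
    """label: list of three at(P, (X, Y)) atoms, here (piece, x, y) tuples."""
    return [corrupt_atom(p, x, y, alpha, rng) for (p, x, y) in label]
```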
### Corrupting Shape labels for Time-Series Data
As described in the main paper, the set of 9 shapes is randomly partitioned
into 3 sets of 3 shapes, and each of these sets is assigned to one of the
three features of the time series. Let $\mathcal{S}_{1}$, $\mathcal{S}_{2}$
and $\mathcal{S}_{3}$ be these three sets.
Each of the features can take values from the respective set of shapes. For a
particular experiment, let $v_{1},v_{2},v_{3}$ represent the values taken by
Feature 1, Feature 2 and Feature 3 respectively. Then
$v_{1}\in\mathcal{S}_{1}$, $v_{2}\in\mathcal{S}_{2}$ and
$v_{3}\in\mathcal{S}_{3}$. Further, a feature label for an instance of the
time-series data can be represented as $\\{v_{1},v_{2},v_{3}\\}$.
Let the training data be represented by $\mathcal{D}$. For corruption, the
training dataset $\mathcal{D}$ is divided in a ratio of $\alpha:(1-\alpha)$.
Let the two divisions be represented as $\mathcal{D}_{\alpha}$ and
$\mathcal{D}_{1-\alpha}$. The feature label
$\\{v_{1\alpha},v_{2\alpha},v_{3\alpha}\\}$ of every data point in
$\mathcal{D}_{\alpha}$ is corrupted as:
$v_{1\alpha}\leftarrow v;v\sim\mathcal{S}_{1}/\\{v_{1\alpha}\\}$
$v_{2\alpha}\leftarrow v;v\sim\mathcal{S}_{2}/\\{v_{2\alpha}\\}$
$v_{3\alpha}\leftarrow v;v\sim\mathcal{S}_{3}/\\{v_{3\alpha}\\}$
$\mathcal{D}_{\alpha}$ with corrupted labels, denoted by
$\hat{\mathcal{D}_{\alpha}}$, is then rejoined with $\mathcal{D}_{1-\alpha}$
and shuffled to form the corrupted training dataset $\hat{\mathcal{D}}$:
$\hat{\mathcal{D}}=\\{\hat{\mathcal{D}_{\alpha}},\mathcal{D}_{1-\alpha}\\}$
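A sketch of this dataset-level corruption, assuming labels are feature triples and the groups are the sets $\mathcal{S}_{1}$, $\mathcal{S}_{2}$, $\mathcal{S}_{3}$ (names are ours):

```python
import random

def corrupt_dataset(labels, groups, alpha, seed=0):
    """labels: list of (v1, v2, v3) triples; groups: the three shape sets
    S1, S2, S3 as lists. Corrupts a fraction alpha of the data points
    (every feature of a selected point is replaced within its own group),
    then rejoins and shuffles."""
    rng = random.Random(seed)
    n_corrupt = int(alpha * len(labels))
    out = []
    for i, triple in enumerate(labels):
        if i < n_corrupt:
            # resample each feature from its group, excluding the old value
            triple = tuple(
                rng.choice([s for s in grp if s != v])
                for v, grp in zip(triple, groups)
            )
        out.append(triple)
    rng.shuffle(out)
    return out
```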
## Network Architectures
### Chess Experiments
The feature extractors use a 2D Convolutional Neural Network with ReLU
activations for embedding the images in the case of both $NEUROLOG$ and
$NEUROLOG^{-}$.
$NEUROLOG$: We use the same model as used in the original implementation. The
input to the network is a 28$\times$28 image and the output is a vector of
length 8 for classifying among the 8 classes (7 chess pieces + empty class).
The architecture is as given below:
* •
Embedding Layers
* –
Conv2D (1, 6), 5$\times$5 filters, stride 1, padding 0
* –
MaxPool2D, 2$\times$2 filters
* –
ReLU
* –
Conv2D (6,16), 5$\times$5 filters, stride 1, padding 0
* –
MaxPool2D, 2$\times$2 filters
* –
ReLU
* •
Classification MLP
* –
FC (256, 120)
* –
ReLU
* –
FC (120, 84)
* –
ReLU
* –
FC (84, 8)
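As a sanity check, the FC input size of 256 follows from the convolution arithmetic of the layers above:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output length of a convolution along one spatial dimension."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel):
    """Output length of a non-overlapping max-pool."""
    return size // kernel

# 28x28 input tile through the embedding layers:
s = pool_out(conv_out(28, 5), 2)  # Conv2D 5x5 -> 24, MaxPool 2x2 -> 12
s = pool_out(conv_out(s, 5), 2)   # Conv2D 5x5 -> 8,  MaxPool 2x2 -> 4
flat = 16 * s * s                 # 16 channels x 4 x 4 = 256
```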
$NEUROLOG^{-}$: We use a network similar to the one above, with the input
and output layers slightly modified to support larger input and output sizes.
The input to the network is an 84$\times$84 image, and the output is a
vector of size 72 (9$\times$8). The first 8 outputs are used for classifying
the first block of the chess board, the next 8 outputs for the second block,
and so on.
* •
Embedding Layers
* –
Conv2D (1, 6), 5$\times$5 filters, stride 2, padding 0
* –
MaxPool2D, 2$\times$2 filters
* –
ReLU
* –
Conv2D (6, 16), 5$\times$5 filters, stride 1, padding 0
* –
MaxPool2D, 2$\times$2 filters
* –
ReLU
* •
Classification MLP
* –
FC (1024, 512)
* –
ReLU
* –
FC (512, 256)
* –
ReLU
* –
FC (256, 72)
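The 72-dimensional output is decoded block by block by taking an argmax over each group of 8 scores; a sketch (the function name is ours):

```python
def decode_board(logits):
    """logits: flat list of 72 scores (9 squares x 8 classes, in board
    order). Returns the argmax class index (0-7) for each of the 9 squares."""
    assert len(logits) == 72
    preds = []
    for sq in range(9):
        scores = logits[sq * 8:(sq + 1) * 8]
        preds.append(max(range(8), key=scores.__getitem__))
    return preds
```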
### Time Series Experiments
The feature extractors use a 1D Convolutional Neural Network with ReLU
activations for embedding the signals in the case of both $NEUROLOG$ and
$NEUROLOG^{-}$.
$NEUROLOG$: The input to the network is a signal of length 50, obtained by
separating the shapes. The output is a vector of size 9 for classifying among
the 9 shapes. The architecture is as given below:
* •
Embedding Layers
* –
Conv1D (1, 32), kernel size 8, stride 1, padding 0
* –
ReLU
* –
Conv1D (32, 32), kernel size 4, stride 1, padding 0
* –
ReLU
* –
MaxPool1D, kernel size 2
* •
Classification MLP
* –
FC (640, 256)
* –
ReLU
* –
Dropout 0.5 probability
* –
FC (256, 9)
$NEUROLOG^{-}$: We use a network similar to the one above, with the input
layer slightly modified to support a larger input size. The input to the
network is a 256-length signal. The output is a vector of size 9
(3$\times$3). The first 3 outputs are used for determining the shape taken by
Feature 1 from $\mathcal{S}_{1}$, the next 3 for determining the shape taken
by Feature 2 from $\mathcal{S}_{2}$, and the last three for determining the
shape taken by Feature 3 from $\mathcal{S}_{3}$.
* •
Embedding Layers
* –
Conv1D (1, 32), kernel size 8, stride 1, padding 0
* –
ReLU
* –
Conv1D (32, 32), kernel size 4, stride 1, padding 0
* –
ReLU
* –
MaxPool1D, kernel size 2
* •
Classification MLP
* –
FC (3936, 256)
* –
ReLU
* –
Dropout 0.5 probability
* –
FC (256, 9)
## Deduction through Logical Theories
In this section, we describe how the outputs of the neural feature extractor
are used for predicting the features and the final class.
### Determining the Predicted Pieces in Chess Domain
1. 1.
From the outputs of the network, each of the 9 squares is classified as one of
the 8 states.
2. 2.
The abductive feedback is collected for all three classes: safe, mate, and
draw.
3. 3.
The predicted set of features is compared to all the abductive proofs in the
abductive feedback in an attempt to find an exact match.
4. 4.
If a match is found, the corresponding class is considered to be the predicted
class. If no match is found, the prediction is considered invalid.
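The exact-match step can be sketched as a lookup over the abduced proofs, with atoms represented as tuples (the representation and function name are ours):

```python
def predict_chess(predicted_features, abductive_feedback):
    """predicted_features: frozenset of (piece, x, y) atoms predicted by the
    network; abductive_feedback: class -> list of proofs, each proof a
    frozenset of atoms. Returns the matched class, or None (invalid)."""
    for label in ("safe", "mate", "draw"):
        for proof in abductive_feedback.get(label, []):
            if predicted_features == proof:
                return label
    return None
```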
### Determining the Predicted Conjunct in Time-Series Domain
1. 1.
Categorical probability distributions are calculated (using softmax) over the
three sets $\mathcal{S}_{1}$, $\mathcal{S}_{2}$ and $\mathcal{S}_{3}$. Let us
denote these distributions by $P_{\mathcal{S}_{1}}$, $P_{\mathcal{S}_{2}}$ and
$P_{\mathcal{S}_{3}}$.
2. 2.
The probability of each of the conjuncts present in the theory is calculated
by multiplying the probabilities of the component shapes. For example, the
probability of the conjunct $2\land 0\land 7$ in the theory given in Table 3
is calculated as $P_{\mathcal{S}_{1}}(2)\times P_{\mathcal{S}_{2}}(0)\times
P_{\mathcal{S}_{3}}(7)$.
3. 3.
The conjunct with the maximum probability is considered to be the predicted
conjunct and the corresponding class is considered to be the predicted class.
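A sketch of this conjunct-probability computation, with the three softmax distributions represented as dictionaries (names are ours):

```python
def conjunct_probs(theory, dists):
    """theory: class -> list of conjuncts (each a triple of shape numbers);
    dists: [P_S1, P_S2, P_S3], each a dict shape -> softmax probability.
    Returns {(class, conjunct): probability}."""
    out = {}
    for label, conjuncts in theory.items():
        for conj in conjuncts:
            p = 1.0
            for shape, dist in zip(conj, dists):
                p *= dist.get(shape, 0.0)
            out[(label, conj)] = p
    return out

def predict(theory, dists):
    """Pick the maximum-probability conjunct; its class is the prediction."""
    (label, conj), p = max(conjunct_probs(theory, dists).items(),
                           key=lambda kv: kv[1])
    return label, conj, p
```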
## Additional Training and Evaluation Details
In this section we provide additional details about the experiments.
### Miscellaneous Details
* •
Cross Entropy Loss is used for pretraining the neural feature extractors in
experiments on both chess and time-series domains.
* •
For the chess domain, we perform experiments only on the ‘BSV’ scenario as
described in NEUROLOG (Tsamoura and Michael 2021). We do not investigate the
‘NGA’ and ‘ISK’ scenarios.
* •
For chess experiments we use all 9000 data points provided in the train data
for training and 2000 randomly sampled data points from the test data for
evaluation.
### Hyperparameters
Table 4 lists further details and hyperparameters used in the experiments.
 | $NEUROLOG$ | $NEUROLOG^{-}$ (Pretraining) | $NEUROLOG^{-}$ (Finetuning)
---|---|---|---
Chess Experiments
Learning Rate | 0.001 | 0.0001 | 0.00001
Optimizer | Adam | Adam | Adam
Train Epochs | 15 | 40 | 15
Data Transforms | MinMax Scaling | MinMax Scaling | MinMax Scaling
Batch Size | 1 | 64 | 1
Time Series Experiments
Learning Rate | 0.0001 | 0.0001 | 0.0001
Optimizer | Adam | Adam | Adam
Train Epochs | 300 | 200 | 300
Data Transforms | Z Normalization | Z Normalization | Z Normalization
Batch Size | 64 | 64 | 64
Upper Bound | 8 | 8 | 8
Lower Bound | 5 | 5 | 5
Table 4: Experimental Details for the Experiments. MinMax Scaling is done
between [-0.5,0.5]
## Additional Experiments and Results
We further probe the effectiveness of $NEUROLOG^{-}$ on feature extractors
pretrained with higher levels of noise in the time-series domain. We increase
the level of corruption from 0.1 to 0.6 in steps of 0.1. Table 5 shows the
results.
Model | | Class acc. | Expl fid.
---|---|---|---
$N$ | | 100 (0.00) | 100 (0.00)
$N_{1,\alpha}^{-}$ | $\alpha=0.1$ | 82.07 (6.59) | 76.05 (10.23)
$\alpha=0.2$ | 81.48 (4.44) | 76.08 (7.78)
$\alpha=0.3$ | 81.56 (4.61) | 75.37 (10.28)
$\alpha=0.4$ | 76.80 (7.73) | 67.70 (12.57)
$\alpha=0.5$ | 69.12 (7.24) | 52.09 (11.44)
$\alpha=0.6$ | 62.71 (2.77) | 29.88 (3.04)
$N_{0}^{-}$ | | 75.79 (11.86) | 36.81 (16.85)
Table 5: Class accuracy and explanatory fidelity for decreasing quality of the
feature extractor (by increasing corruption $\alpha$).
Figure 7: Pretrain and finetune class accuracy and explanatory fidelity for
different levels of corruption
Fig. 7 shows that the relative improvement using the semantic loss increases
as the quality of the feature extractor worsens, up to a point. For the
time-series data, this maximum lies between $\alpha=0.4$ and $\alpha=0.5$,
with a gain of 28-33% in both class accuracy and explanatory fidelity.
Current address: Department of Condensed Matter Physics, Weizmann Institute
of Science, Rehovot, Israel
# Magnetic interactions of 4$f$ electrons in the topological insulator
chalcogenide Bi2Se3
J. C. Souza<EMAIL_ADDRESS>Instituto de Física “Gleb Wataghin”, UNICAMP,
13083-859, Campinas, SP, Brazil
M. Carlone: POSMAT, Faculdade de Ciências, UNESP, C.P. 473, 17033-360, Bauru,
SP, Brazil
G. G. Lesseux: 1. Physikalisches Institut, Universität Stuttgart, D-70569,
Germany
H. B. Pizzi: Instituto de Física “Gleb Wataghin”, UNICAMP, 13083-859,
Campinas, SP, Brazil
G. S. Freitas: Instituto de Física “Gleb Wataghin”, UNICAMP, 13083-859,
Campinas, SP, Brazil; Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA
R. R. Urbano: Instituto de Física “Gleb Wataghin”, UNICAMP, 13083-859,
Campinas, SP, Brazil
P. A. Venegas: Departamento de Física, Faculdade de Ciências, UNESP, C.P. 473,
17033-360, Bauru, SP, Brazil
P. G. Pagliuso: Instituto de Física “Gleb Wataghin”, UNICAMP, 13083-859,
Campinas, SP, Brazil; Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA
###### Abstract
The gap opening mechanism of a topological insulator, the quantum anomalous
Hall effect and the axion physics are still pressing open questions and a
microscopic viewpoint to further understand the role of magnetism in topology
is highly desirable. In this work we have performed a microscopic
investigation, by means of electron spin resonance (ESR) along with
complementary bulk measurements, on the chalcogenide (Bi1-xGdx)2Se3 ($x$ = 0,
0.001, 0.002 and 0.006). Our analysis of the Gd3+ spin dynamics reveals no
significant change of the Fermi surface as a function of Gd3+ concentration,
which indicates that the 4$f$ magnetism is different from the non-local
effects induced by transition-metal ($d$-electron) substitutions.
Additionally, we observe an unusual evolution of the Gd3+ ESR spectra as a
function of the applied magnetic field, which we discuss considering the
magnetic interaction between Gd3+ 4$f$ electrons and impurity centers such as
Se vacancies. This interaction would give rise to a local weak
antilocalization effect surrounding the Gd3+ ions. Such a mechanism is
observable due to particular details of the Gd3+ 4$f$-electron magnetism in
this system compared to $d$ electrons. Our work points out that rare-earth
substitutions in this model topological insulator are a promising path to
explore axion insulating systems.
###### pacs:
76.30.-v, 71.20.Lp
## I I. Introduction
The application of topology in condensed matter physics has been responsible
for unveiling new quantum phases of matter, with topological insulators (TIs)
in two [1, 2, 3] and three dimensions [4, 3] being the most prominent and heavily
explored systems. The insulating bulk with protected spin-polarized gapless
surface states is a highly attractive characteristic of TIs [4, 5]. Another
interesting property rises from the interplay between topology and magnetism,
where the time reversal symmetry breaking could result in axion physics and
the quantum anomalous hall effect (QAHE) [6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
16, 17].
The (Bi,Sb)2(Te,Se)3 chalcogenides have been established as the model systems
of three dimensional TIs due to the single Dirac cone near the Fermi level
[18, 19, 20]. Naturally, such systems have been able to host the interplay
between magnetism and topology [7, 8, 9]. The first realization of the QAHE
was achieved in (V,Cr):(Bi,Sb)2Te3 thin films, however it is possible to
obtain a fully quantized Hall conductivity only at millikelvin temperatures [7,
8, 9, 10]. This limitation may be due to the presence of thermally activated
bulk carriers, which cause a breakdown of the QAHE [21]. One remaining
question concerns the role of the underlying magnetic coupling mechanism.
More specifically, it is imperative to understand how the QAHE can be affected
by more complex magnetic interactions that go beyond the so-called van
Vleck mechanism. This question has recently been explored by electron spin
resonance (ESR), x-ray absorption and resonant photoelectron spectroscopy [22,
23, 24, 25, 26, 27, 28, 29]. Indeed, the $p-d$ hybridization near the Fermi
level, along with the $d$-state occupation and the consequent Sb and Te
polarization, seems to play an important role in the magnetism of substituted
chalcogenides, going beyond the van Vleck mechanism [25, 30, 26, 29].
Exploring heterostructures is also a promising, yet challenging, path to
achieve higher temperatures in the QAHE [17, 31, 32, 11, 33, 34, 35].
One little explored but promising route to understand the role of the
magnetism and its influence in the bulk is to investigate the magnetism of
4$f$ electrons. For instance, Sm-substituted chalcogenides could display
higher-order topological insulator phases with chiral hinge states [36, 12].
Europium-substituted chalcogenides show antiferromagnetic correlations;
however, Eu has a 2+ valence, while most of the rare earths show a 3+ state
[37]. Angle-resolved photoemission spectroscopy studies show the robustness of
the surface states to a Gd3+ concentration of $\sim$ 0.1 in Bi2Te3 and TlBiSe2
[38, 39]. Therefore, further exploring 4$f$-substituted systems can be highly
interesting to understand the gap opening mechanism and the axion insulating
phase in these model systems [40, 41, 42, 39, 43].
In this context, Gd3+, which also induces antiferromagnetic correlations [44,
45, 42], is an ideal testbed due to the stable valence and the weak
crystalline electrical field (CEF) effects. Although Gd3+-substituted
chalcogenides have been explored in macroscopic [44, 45] and previous ESR
studies [44, 46, 47, 48], a detailed investigation of the 4$f$ local magnetic
effects and the spin dynamics induced by these substitutions, as well as a
comparison to more traditional $d$ systems is missing [27, 23].
In this work we locally explore Gd3+-substituted Bi2Se3 using ESR at different
frequencies ($\nu$ = 9.5 and 34 GHz). We show that the introduction of Gd3+
ions does not alter the carriers near the Fermi surface, which are mainly $p$
states. This mechanism differs from substitution with ions whose
magnetism comes from $d$ states. This difference is also manifested in
the Gd3+ spin dynamics and in the macroscopic properties of the system.
Additionally, the Gd3+ ESR data show an unusual evolution as a function of the
applied magnetic field. While at lower fields we obtain a Gd3+ response with
a resolved fine structure, which is more consistent with Gd3+ ions in an
insulating environment, at higher fields the system tends to behave as a
conventional metal, revealing an additional single line with collapsed fine
structure. We discuss the evolution of the Gd3+ local environment in the
light of a possible local weak antilocalization (WAL) effect [49, 50, 51, 52],
which is a product of the interplay between strong spin-orbit coupling and the
interaction between 4$f$ local moments and impurity centers such as Se
vacancies. Our results shed light on the microscopic mechanism involving the
introduction of 4$f$ electrons into the chalcogenides.
## II II. Methods
The chalcogenides have a rhombohedral crystal structure. Single crystalline
samples of (Bi1-xGdx)2Se3 ($x$ = 0, 0.001, 0.002 and 0.006) were grown by the
stoichiometric melting technique. High-purity Bi, Gd and Se elements were
placed inside an alumina crucible in the ratio (2 - $2x$):$2x$:3. The
crucible was vacuum sealed in a quartz tube, heated to 800 °C for 72 hours and
cooled down to room temperature at 2 °C per hour. We cut the crystals into a
rectangular-like shape; typical sample sizes were 0.5 x 2 x 0.3 mm3. The
structure and phase purity were checked by x-ray powder diffraction using a
commercial diffractometer (Cu-K$\alpha$), which confirmed the single-phase
nature of our samples. We performed elemental analysis using energy and
wavelength dispersive spectroscopy, which revealed a small, but measurable,
amount of Se vacancies ($\sim$ 0.1). Magnetic susceptibility measurements were
performed in a commercial SQUID magnetometer. Specific-heat measurements were
done in a commercial small-mass calorimeter system. Electrical resistivity
data were acquired using a four-probe configuration with a $dc$ resistance
bridge. ESR measurements were performed on single crystals in X- and Q-band
($\nu$ = 9.5 and 34 GHz, respectively) spectrometers equipped with a
goniometer and a He-flow cryostat, in the temperature range 4 K $\leq$ $T$
$\leq$ 300 K at low power $P$ $\leq$ 2 mW. The ESR spectra were analyzed using
the software SPEKTROLYST.
## III. Results
Figure 1: (Top panel) Magnetic susceptibility for $H$ applied parallel to the
[001] direction, (middle panel) longitudinal resistivity and (lower panel)
specific heat as a function of temperature for (Bi1-xGdx)2Se3. The insets show
the (top panel) susceptibility at high temperatures, (middle panel) Hall
resistivity as a function of applied magnetic field for different temperatures
for $x$ = 0.002, and (bottom panel) the specific heat divided by temperature
as a function of $T^{2}$ at low temperatures.
Figure 1 summarizes the macroscopic properties of (Bi1-xGdx)2Se3. The top
panel shows the magnetic susceptibility as a function of temperature. From a
Curie-Weiss fit $\chi=\chi_{0}+C/(T-\theta)$ at high temperatures (150 K
$\leq$ $T$ $\leq$ 300 K), where $\chi_{0}$ is the $T$-independent term, $C$ is
the Curie constant and $\theta$ the Curie-Weiss temperature, we can estimate
the concentration of Gd3+ in (Bi1-xGdx)2Se3. Assuming 7.94 $\mu_{B}$/Gd, we
obtained $x$ $\approx$ 0.001, 0.002 and 0.006, with $\theta$ = -1(5) K for all
samples. Taking into account the core diamagnetism, we can also estimate the
Pauli magnetic susceptibility $\chi_{p}$ = 20(10) $\mu$emu/mol Oe for all
samples.
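As a numerical sketch (our illustration, not part of the original analysis), the Curie-Weiss estimate of the Gd3+ concentration can be reproduced with synthetic data. The model is linear in ($\chi_{0}$, $C$) at fixed $\theta$, so a simple grid over $\theta$ with a linear least-squares solve at each step suffices:

```python
import numpy as np

MU_B = 9.274e-21   # Bohr magneton, erg/G
K_B = 1.381e-16    # Boltzmann constant, erg/K
N_A = 6.022e23     # Avogadro's number

# Synthetic high-T susceptibility (emu/mol Oe) for an assumed x = 0.002
# sample: 2x Gd per formula unit, p_eff = 7.94 mu_B, chi0 ~ 20 uemu/mol Oe.
x_true, theta_true, chi0_true = 0.002, -1.0, 20e-6
C_true = N_A * 2 * x_true * (7.94 * MU_B) ** 2 / (3 * K_B)  # Curie constant
T = np.linspace(150, 300, 50)
chi = chi0_true + C_true / (T - theta_true)

# Fit chi = chi0 + C/(T - theta): scan theta, solve linearly for (chi0, C).
best = None
for theta in np.linspace(-20, 20, 401):
    A = np.column_stack([np.ones_like(T), 1.0 / (T - theta)])
    coef, *_ = np.linalg.lstsq(A, chi, rcond=None)
    sse = float(np.sum((chi - A @ coef) ** 2))
    if best is None or sse < best[0]:
        best = (sse, theta, coef)
_, theta_fit, (chi0_fit, C_fit) = best

# Gd concentration back from the Curie constant, assuming 7.94 mu_B/Gd:
x_est = C_fit * 3 * K_B / (N_A * 2 * (7.94 * MU_B) ** 2)
print(f"x ~ {x_est:.4f}, theta ~ {theta_fit:.1f} K")
```

With real data, the same scan applied to the measured $\chi(T)$ in the 150-300 K window yields the $x$ values quoted above.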
The middle panel shows the longitudinal resistivity $\rho_{xx}$ as a function
of temperature for (Bi1-xGdx)2Se3. We obtain a metallic-like behavior, which
is expected due to the presence of Se vacancies in single crystals [50, 53,
54, 55]. Nonetheless, we observe only small and non-systematic differences
between samples, which can be attributed to a small variation of Se vacancies
from crystal to crystal. As a result, the residual resistivity $\rho_{0}$,
which can be associated with disorder in the system, can differ between
samples from different batches. The lack of a systematic change indicates that
Gd3+ substitution at the Bi site does not introduce carriers into the system,
and presumably its 4$f$ electrons have no relevant role in the Fermi surface.
This is in contrast to substitutions using transition metal ions where the
magnetism originates from $d$ orbitals [56, 23, 29, 57]. The absence of
Gd3+-introduced carriers as a function of Gd3+ concentration is also supported
by the Hall resistivity ($\rho_{xy}$), which is independent of both Gd3+
concentration and temperature, as shown in the inset of the middle panel. The
Hall response to the applied magnetic field is linear and positive, consistent
with the transport properties being dominated by a single band of holes
($p$-type). From a linear fit of the slope, $d\rho_{xy}/dH=1/(n_{h}e)$, where
$e$ is the elementary charge, we extract a carrier density of $n_{h}$ =
5(3)$\times$10$^{18}$ h/cm3.
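The single-band extraction of $n_{h}$ from the Hall slope can be sketched as follows (synthetic, noise-free data generated for the quoted carrier density, purely for illustration):

```python
import numpy as np

E = 1.602e-19  # elementary charge, C

# Single-band Hall model in SI units: rho_xy = B / (n_h e),
# generated here for n_h = 5e18 cm^-3 = 5e24 m^-3 (value from the text).
n_true = 5e24                      # m^-3
B = np.linspace(0.0, 9.0, 10)      # applied field, T
rho_xy = B / (n_true * E)          # ohm * m

slope = np.polyfit(B, rho_xy, 1)[0]   # d(rho_xy)/dB = 1/(n_h e)
n_h = 1.0 / (slope * E)               # carrier density, m^-3
print(f"n_h ~ {n_h / 1e6:.1e} cm^-3")
```

The positive slope corresponds to hole-like carriers; an electron band would give the opposite sign.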
The bottom panel shows the specific heat divided by temperature, $c_{p}$/$T$,
as a function of temperature for (Bi1-xGdx)2Se3. There is only a slight
difference between different Gd3+ concentrations, which reinforces the lack of
change in the Fermi level due to the Gd3+ substitution. Performing linear fits
of $c_{p}$/$T$ as a function of $T^{2}$ (top inset of the bottom panel), it is
possible to extract the Sommerfeld coefficient $\gamma$ = 0.8(3) mJ/mol K2 for
all samples. For a free conduction electron gas,
$\gamma=(2/3)\pi^{2}k_{B}^{2}\eta(E_{F})$, where $k_{B}$ is the Boltzmann
constant and $\eta(E_{F})$ the density of states (DOS) at the Fermi level per
spin. We extract $\eta(E_{F})$ = 0.16(6) states/eV mol spin for all samples.
We can
further analyze the role of electron-electron ($ee$) interactions in the bulk
by comparing the estimated Pauli susceptibility ($\chi_{p}^{theor}$ =
2$\mu_{B}^{2}\eta(E_{F})$, where $\mu_{B}$ is the Bohr magneton) with the
experimental value. We obtain $\chi_{p}^{theor}$ = 15 $\mu$emu/mol Oe. These
results indicate that the DOS at the Fermi level is not affected as a function
of Gd3+ concentration and, if any, the role of $ee$ interactions is negligible
in the bulk. It is noteworthy that recent results point out that $ee$
correlations do play a role in the surface states of chalcogenides [58, 59,
60].
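As a cross-check (our own arithmetic, not from the paper), the free-electron relations above can be evaluated with the quoted $\gamma$; the resulting $\eta(E_{F})$ and $\chi_{p}^{theor}$ agree with the quoted values within their uncertainties:

```python
import numpy as np

K_B = 1.381e-23        # Boltzmann constant, J/K
N_A = 6.022e23         # Avogadro's number
EV = 1.602e-19         # J per eV
MU_B_CGS = 9.274e-21   # Bohr magneton, erg/G
ERG_PER_EV = 1.602e-12

gamma = 0.8e-3         # Sommerfeld coefficient, J/(mol K^2), from the text

# gamma = (2/3) pi^2 k_B^2 eta(E_F), eta per spin and per formula unit.
eta_J_mol = 3 * gamma / (2 * np.pi**2 * K_B**2)   # states/(J mol)
eta = eta_J_mol * EV / N_A                        # states/(eV f.u. spin)
print(f"eta(E_F) ~ {eta:.2f} states/eV f.u. spin")   # ~0.17, cf. 0.16(6)

# Pauli susceptibility chi_p = 2 mu_B^2 eta(E_F), in CGS (emu/mol Oe):
eta_cgs = eta / ERG_PER_EV * N_A                  # states/(erg mol)
chi_p = 2 * MU_B_CGS**2 * eta_cgs
# ~11 uemu/mol Oe: same order as the quoted 15, within the eta uncertainty.
print(f"chi_p ~ {chi_p * 1e6:.0f} uemu/mol Oe")
```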
Figure 2: (a) X-band ($\nu$ = 9.5 GHz) Gd3+ ESR spectra for
(Bi0.994Gd0.006)2Se3 and (b) its resonance field angle dependence at $T$ = 4 K
with the applied magnetic field $H$ parallel to the [001] direction. In (b)
the field is rotated towards the $ab$ plane. (c) Q-band ($\nu$ = 34 GHz) Gd3+
ESR spectra. The blue solid lines are simulations considering the relative
intensities of each of the seven Gd3+ fine structure lines. The magenta solid
lines are fits considering the hexagonal CEF Hamiltonian described in the
text.
As mentioned earlier, Gd3+ is an $S$-ion ($L$ = 0, $S$ = 7/2). As such, CEF
effects appear only through a small admixture of excited states into the
ground state, resulting in an intermediate coupling [61, 62]. Because this
correction is small, the CEF splitting is of the order of the Zeeman energy
and we are able to observe the Gd3+ fine structure, with a selection rule of
$\Delta m_{s}$ = $\pm$ 1 [61, 62]. Another consequence of the $S$-ion nature
is that the $g$-value of an isolated Gd3+ ion is independent of the symmetry
of the matrix, which makes the analysis of the Gd3+ spin dynamics more
straightforward [61, 62]. With that in mind, in order to gain microscopic
insight into the local effects of Gd3+ ions, we performed ESR at two different
frequencies. Figure 2 shows, as an example, the Gd3+ ESR spectra at $T$ = 4 K
for $x$ = 0.006 with the applied field $H$ parallel to the [001] direction,
together with their angle dependence. Fig. 2(a) shows the Gd3+ X-band ESR
spectrum, where we observe the resolved Gd3+ fine structure displaying seven
transition lines [61, 62, 63], with their expected relative intensities.
This observation excludes the possibility of Gd3+ ions occupying distinct
sites and/or being interstitial in Bi2Se3. It also corroborates that we have
only one phase in our crystals. Each individual resonance line can be
described by a Dysonian line shape, which is typical when the skin depth
$\delta$ is smaller than the sample size $d$ [64, 65]. Indeed, from the
resistivity measurements shown in Fig. 1(b), we estimate for X-band $\delta$
$\sim$ 11 $\mu$m ($\ll$ $d$ $\sim$ 300 $\mu$m for our samples) at $T$ = 4 K,
consistent with the Dysonian line shape. Each individual line, represented by
the power absorption derivative d$P$/d$H$ as a function of $H$, can be
described as an admixture of absorption and dispersion derivatives
$\frac{dP}{dH}\propto(1-\lambda)\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{1}{1+x^{2}}\right)+\lambda\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{x}{1+x^{2}}\right),$
(1)
where $\lambda$ is the asymmetric parameter of the line shape and $x$ =
$2(H-H_{r})/\Delta H$ [64], wherein $H_{r}$ is the Gd3+ resonance field and
$\Delta H$ the linewidth.
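The admixed line shape of Eq. 1 and the skin-depth criterion can be sketched numerically. The residual resistivity used below ($\sim$ 0.45 m$\Omega$ cm) is a hypothetical input chosen only to illustrate the order of magnitude of $\delta$, not a value reported in the text:

```python
import numpy as np

def dysonian(H, H_r, dH, lam, amp=1.0):
    """dP/dH of Eq. 1: admixture of absorption and dispersion Lorentzian
    derivatives, with x = 2(H - H_r)/dH and asymmetry parameter lam."""
    x = 2.0 * (H - H_r) / dH
    d_abs = -2.0 * x / (1.0 + x**2) ** 2          # d/dx [1/(1+x^2)]
    d_disp = (1.0 - x**2) / (1.0 + x**2) ** 2     # d/dx [x/(1+x^2)]
    return amp * ((1.0 - lam) * d_abs + lam * d_disp)

# Classical skin depth delta = sqrt(2 rho / (mu0 omega)) at X band:
MU0 = 4e-7 * np.pi                 # vacuum permeability, H/m
rho = 0.45e-5                      # ohm*m (~0.45 mOhm cm, assumed)
omega = 2 * np.pi * 9.5e9          # angular frequency, rad/s
delta = np.sqrt(2 * rho / (MU0 * omega))
print(f"delta ~ {delta * 1e6:.0f} um (<< d ~ 300 um)")
```

In the symmetric limit ($\lambda$ = 0) the line reduces to a pure Lorentzian derivative; increasing $\lambda$ mixes in the dispersion component that produces the characteristic Dysonian asymmetry.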
As already mentioned, the position of the Gd3+ fine structure is dependent on
the local symmetry of Gd3+ ions. In the case of the Bi2Se3, the local symmetry
is hexagonal, where the spin Hamiltonian is given by
$\begin{split}\mathcal{H}=&\frac{1}{3}b_{2}^{0}O_{2}^{0}+\frac{1}{60}\left(b_{4}^{0}O_{4}^{0}+b_{4}^{3}O_{4}^{3}\right)+\\
&\frac{1}{1260}\left(b_{6}^{0}O_{6}^{0}+b_{6}^{3}O_{6}^{3}+b_{6}^{6}O_{6}^{6}\right)+g\mu_{B}\mathbf{H}\cdot\mathbf{S},\end{split}$
(2)
where $b_{n}^{m}$ are the $n$th-order CEF parameters and $O_{n}^{m}$ the
Stevens operators. The Gd3+ fine structure resonance fields are highly angle
dependent [61, 62]. Therefore, we fitted the angular dependence of the Gd3+
fine structure resonance fields with Eq. 2, as shown by the solid magenta
lines in Fig. 2(b). The best parameters obtained were $b_{2}^{0}$ = 37.5(5)
Oe, $b_{4}^{3}$ = 0.03(2) Oe and $b_{6}^{6}$ = - 0.3(2) Oe. The values of
$b_{4}^{0}$, $b_{6}^{0}$ and $b_{6}^{3}$ were negligible. The dominant term,
the axial $b_{2}^{0}$, is consistent with previous CEF studies of Gd3+ in
Bi2Se3 [47]. Although the contributions of the smaller terms differ between
that previous study and our result, it is important to note that the signs
$b_{2}$ $>$ 0, $b_{4}$ $>$ 0 and $b_{6}$ $<$ 0 agree between both studies
[47]. The change in which terms are negligible from one study to the other may
arise because those terms are much smaller than the dominant one, so the fit
can settle into a different local minimum for each rotation. Nonetheless, our
CEF parameters are in accordance with Ref. [47]. The blue solid line is a
simulation considering the value of $b_{2}^{0}$ of this hexagonal CEF
Hamiltonian.
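A minimal sketch (our illustration, not the authors' fitting code) of the first-order fine-structure pattern for $H$ $\parallel$ [001], keeping only the dominant axial term $b_{2}^{0}$ and expressing all energies in field units:

```python
import numpy as np

S = 3.5                                          # Gd3+, S = 7/2
m = np.arange(S, -S - 1, -1)                     # 7/2, 5/2, ..., -7/2
Sz = np.diag(m)
O20 = 3 * Sz @ Sz - S * (S + 1) * np.eye(8)      # axial Stevens operator O_2^0

G, MU_B, H_PLANCK = 1.993, 9.274e-21, 6.626e-27  # CGS units
b2 = 37.5                                        # b_2^0 in Oe (field units)
H0 = H_PLANCK * 9.5e9 / (G * MU_B)               # X-band center, ~3.4 kOe

def levels(H):
    """Energy levels in field units (Oe) for H || z (diagonal Hamiltonian)."""
    return np.diag((b2 / 3.0) * O20 + H * Sz)

# First-order resonance fields of the seven Delta m = -1 transitions:
ms = np.arange(S, -S, -1)                        # upper level m of each line
Hr = H0 - b2 * (2 * ms - 1)

# Consistency check: at H = Hr(m) the m -> m-1 gap equals hv/(g mu_B).
for mu, hr in zip(ms, Hr):
    e = levels(hr)
    i = int(S - mu)
    assert abs((e[i] - e[i + 1]) - H0) < 1e-9
print(np.round(Hr), f"-> spread = {Hr.max() - Hr.min():.0f} Oe (= 12 b2)")
```

For this axial-only, first-order case the seven lines are equally spaced by $2b_{2}^{0}$ around the central field; the full angular fit of course requires the complete Hamiltonian of Eq. 2.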
At this point it is interesting to compare our X-band Gd3+ spectrum in Bi2Se3
with previous reports of Gd3+ and Mn2+ in Bi2Te3 [23, 46, 44]. While both (X-
and Q-band) Gd3+ spectra show resolved fine structure, the same does not
happen for Mn2+, for which only one resonance is observed even at the smallest
Mn2+ concentration ($\sim$ 0.005) [23]. Mn2+ is an $S$ = 5/2 ion; its fine
structure is exchange narrowed, reflecting the different substitution effects
of $d$ and $f$ states. The exchange narrowing results from the large exchange
interaction of the $d$ states with the polarized $p$ states in these systems
[56, 23, 29, 66, 67].
Fig. 2(c) shows the Gd3+ Q-band ESR spectrum at $T$ = 4 K. The blue solid line
is a simulation with the same CEF scheme as for the X-band data, normalized by
the intensity of the Gd3+ fine structure. There is a clear difference when
compared to the X-band data: a stronger - 1/2 $\leftrightarrow$ 1/2
transition, which we will hereafter call the Gd3+ central line, on top of the
seven fine structure lines. This coexistence can be linked to two different
environments for the Gd3+ ions distributed along the sample. Although the
distribution of the Gd3+ ions appears to be structurally homogeneous, as
inferred from the X-band data, the electronic environment around the localized
moments seems to show an interesting evolution as a function of the applied
magnetic field. This effect may be connected with the distribution of Se
vacancies, as we discuss below by analyzing the Gd3+ spin dynamics in more
detail.
Figure 3: (a) Gd3+ spectrum splitting and the axial term $b_{2}^{0}$, (b)
$\Delta H$ and (c) $g$-shift as a function of temperature for
(Bi0.994Gd0.006)2Se3 for the applied magnetic field parallel to the [001]
direction. $\Delta H$ and $g$-shift were extracted from the ESR central line.
The red dashed lines in (b) represent the linear fits used to extract the
Korringa rates. We used $g_{theor}$ = 1.993(1) in order to calculate the
$g$-shift for all samples.
The temperature evolution of the Gd3+ ESR spectrum and the spin dynamics can
provide valuable information about the introduction of 4$f$ states in Bi2Se3
and their field dependence. Figure 3 summarizes this temperature evolution.
The ESR fine structure splitting, defined as the difference between the
resonance fields $H_{r}$ of the $\mp$ 7/2 $\leftrightarrow$ $\mp$ 5/2
transitions ($H_{r7}-H_{r1}$), is shown in Fig. 3(a). Since the axial term is
the dominant CEF parameter, $b_{2}^{0}$ is directly proportional to the CEF
splitting. We also show the value of $b_{2}^{0}$ for each CEF splitting in
Fig. 3(a). The Gd3+ splitting is nearly constant
($\Delta_{H_{r7}-H_{r1}}^{4-50K}$/$T$ $\leq$ 0.4 Oe/K) up to $\sim$ 50 K for
both bands. At higher temperatures we observe a more significant
($\Delta_{H_{r7}-H_{r1}}^{50-300K}$/$T$ $\sim$ 1 Oe/K) and systematic
reduction of the fine structure splitting. This narrowing can be related to
the interaction between carriers or, more likely, to changes in the CEF
effects caused by the thermal expansion of the compound [47]. Indeed, previous
neutron diffraction and pair density function analyses have shown a local
anharmonic thermal expansion in Bi2Se3 [68]. In particular, the two Se sites
have distinct thermal expansions with respect to the Bi site [68]. Therefore,
this anharmonic expansion should affect the CEF at the Bi site and,
consequently, should influence the Gd3+ CEF splitting. Our results are in
agreement with previous reports [47, 48] and show the similarity of the fine
structure between the X- and Q-band spectra.
From fits with the admixture of absorption and dispersion derivatives of
Eq. 1, one can extract the Gd3+ $\Delta H$ and $H_{rc}$ of the central line.
As a result, we can also obtain the experimental Gd3+ $g$-value, $g_{exp}$ =
$h\nu/\mu_{B}H_{rc}$, where $h$ is the Planck constant and $\mu_{B}$ the Bohr
magneton. Figures 3(b) and 3(c) show the Gd3+ linewidth $\Delta H$ of the
central line and the $g$-shift $\Delta g$ = $g_{exp}$ - $g_{theor}$ as a
function of temperature. Here $g_{theor}$ = 1.993(1), which is the Gd3+
$g$-value in an insulating matrix. The Gd3+ central line is a reliable
resonance for analyzing the trend of the Gd3+ spin dynamics due to the smaller
influence of CEF effects on its $g$-value [62].
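A quick consistency check (our own arithmetic) of the resonance condition $H_{rc}$ = $h\nu/g\mu_{B}$ for both bands:

```python
H_PLANCK = 6.626e-27   # Planck constant, erg s
MU_B = 9.274e-21       # Bohr magneton, erg/G
g = 1.993              # g_theor from the text

fields = {}
for nu in (9.5e9, 34e9):                        # X and Q band, Hz
    fields[nu] = H_PLANCK * nu / (MU_B * g)     # resonance field, Oe
    print(f"{nu / 1e9:4.1f} GHz: H_r ~ {fields[nu] / 1e3:.1f} kOe")
```

These central fields ($\sim$ 3.4 kOe at X band and $\sim$ 12.2 kOe at Q band) are the "lower" and "higher" applied fields referred to throughout the discussion.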
Regarding the $\Delta H$ of the Gd3+ central line, all the obtained data show
a linear increase at high temperatures ($T$ $\geq$ 80 K). We focused our
analysis on the high-$T$ region to avoid the contribution of possible
Gd3+-Gd3+ interactions at low $T$, especially for samples with higher
concentrations and at higher fields. Such a linear increase can be attributed
to a relaxation process through the exchange interaction between the Gd3+
4$f$ local moments and the carriers, which eventually results in spin-flip
scattering of the latter. This spin-spin relaxation mechanism is known as
Korringa relaxation [61, 62, 63]. From a linear fit $\Delta H$ = $\Delta
H_{0}$ + $bT$ we extract the Gd3+ residual linewidth $\Delta H_{0}$ and the
Korringa rate $b$. The results are summarized in Table 1. While $b$ is
related to the spin-spin relaxation, $\Delta H_{0}$ can be associated with,
e.g., disorder and sample homogeneity [61, 62, 63].
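The Korringa fit $\Delta H$ = $\Delta H_{0}$ + $bT$ can be sketched on synthetic data generated with the $x$ = 0.006 X-band parameters of Table 1 (illustrative only; the real fits use the measured high-$T$ linewidths):

```python
import numpy as np

# Synthetic high-T linewidths: dH0 = 37 Oe, b = 0.015 Oe/K (Table 1,
# x = 0.006, X band), with small Gaussian noise added.
rng = np.random.default_rng(0)
T = np.linspace(80, 300, 12)                 # K, high-T region only
dH = 37.0 + 0.015 * T + rng.normal(0, 0.3, T.size)   # Oe

b, dH0 = np.polyfit(T, dH, 1)                # slope = Korringa rate
print(f"dH0 = {dH0:.1f} Oe, b = {b:.4f} Oe/K")
```

Restricting the fit window to $T$ $\geq$ 80 K, as in the text, avoids the low-$T$ deviation attributed to Gd3+-Gd3+ and impurity-center interactions.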
Table 1: ESR parameters extracted from the Gd3+ spin dynamics analysis for (Bi1-xGdx)2Se3. The first four data columns are X-band results; the last four are Q-band results.

| | $\Delta H_{0}$ (Oe) | $b$ (Oe/K) | $\langle J_{fp}^{2}(\textbf{q})\rangle^{1/2}$ (meV) | $J_{fp}(0)$ (meV) | $\Delta H_{0}$ (Oe) | $b$ (Oe/K) | $\langle J_{fp}^{2}(\textbf{q})\rangle^{1/2}$ (meV) | $J_{fp}(0)$ (meV) |
|---|---|---|---|---|---|---|---|---|
| $x$ = 0.006 | 37(2) | 0.015(5) | $\sim$ 5 | $\sim$ 80 | 38(2) | 0.020(5) | $\sim$ 5 | $\sim$ 20 |
| $x$ = 0.002 | 29(2) | 0.016(5) | $\sim$ 5 | $\sim$ 80 | 36(2) | 0.017(5) | $\sim$ 5 | $\sim$ 20 |
| $x$ = 0.001 | 28(2) | 0.013(5) | $\sim$ 5 | $\sim$ 80 | 31(2) | 0.015(5) | $\sim$ 5 | $\sim$ 20 |
Turning our attention to the Gd3+ X-band results, $b$ is, within experimental
uncertainty, the same for all concentrations, which indicates the absence of
exchange bottleneck effects [61, 62, 63]. $\Delta H_{0}$ increases
systematically as a function of Gd3+ concentration, which is expected due to
the increase of disorder in the system. At low temperatures we see a deviation
from the Korringa rate, which might be associated with the interaction between
local moments and impurity centers. As such, as mentioned earlier, the proper
analysis is to focus on the high-temperature Gd3+ spin dynamics data. Looking
now at the field dependence, we still obtain the same Korringa rate for all
concentrations [Fig. 3(b)]; however, $\Delta H_{0}$ increases systematically
when comparing X- and Q-band results for the same concentration. This is an
indication of inhomogeneous broadening, which most likely arises from slightly
different CEF states around the Gd3+ sites [61, 62, 63]. Bi2Se3 is well known
to intrinsically host Se vacancies, which can show a small inhomogeneity
across the sample [50, 69, 70]. Moreover, previous nuclear magnetic resonance
measurements have shown that, in polycrystalline Bi2Se3, defect regions
segregate into domains [71].
Before analyzing the spin relaxation of our Gd3+ probe in more detail, it is
instructive to first look at the $g$-shift as a function of temperature,
applied magnetic field and Gd3+ concentration [Figure 3(c)]. On one hand, $s$
and/or $d$ carriers have a ferromagnetic (atomic) interaction with 4$f$ local
moments, which produces a positive $g$-shift. On the other hand, the magnetic
interaction of $p$ and/or $f$ carriers with 4$f$ local moments occurs through
the so-called virtual bound states [72], which results in an
antiferromagnetic (covalent) interaction between them. In this second case,
the result is a negative $g$-shift. Therefore, the sign of the $g$-shift is
crucial for obtaining information about the nature of the wave function of the
carriers in the system.
As such, an important result reported here is the negative $g$-shift,
confirming $p$ states as the main carriers near the Fermi level [18]. This
result is consistent with previous angle-resolved spectroscopy studies [73,
74]. Looking more specifically at the X-band data, at low temperatures we see
a systematic decrease of the $g$-shift for all concentrations. This is due to
an antiferromagnetic interaction, in accordance with the AFM ground state
observed in Eu2+ and Gd3+-substituted chalcogenides [75, 37]. However, this
decrease is observed even for samples with a concentration as low as $x$ =
0.001, further suggesting that such a contribution comes from a possible
interaction between 4$f$ local moments and impurity centers, such as Se
vacancies, which also leaves a signature in the $T$-dependence of the Gd3+
$\Delta H$ [Fig. 3(b)]. Similar contributions recently observed in organic
salts were also interpreted as originating from impurity centers [76, 77, 78,
79, 80]. At high temperatures, the Gd3+ $\Delta g$ values are $T$-independent
within experimental uncertainty, which shows that dynamic effects are
negligible in the system [81]. While the Gd3+ $g$-values for $x$ = 0.001 and
0.006 are virtually identical, as expected, for $x$ = 0.002 we observe a
subtle, but systematic, increase of the $g$-value. This small difference might
also be related to the $x$ = 0.002 samples having a higher density of impurity
centers, such as Se vacancies. This interpretation is corroborated by a few
points. First, the increase of $\Delta H_{0}$ for $x$ = 0.002 going from X- to
Q-band is relatively larger when compared to the other Gd3+ concentrations.
Second, the low-temperature deviation from the linear increase of the Gd3+
$\Delta H$ is more pronounced for $x$ = 0.002 than for the other
concentrations. Finally, the $g$-shift is the same for all three
concentrations in the Q-band measurements, which indicates that there is no
change in $\eta(E_{F})$ as a function of Gd3+ concentration, consistent with
the specific heat measurements.
Interestingly, there is also a decrease of the negative Gd3+ $g$-shift as a
function of the applied magnetic field for all concentrations of
(Bi1-xGdx)2Se3. In the absence of dynamic, bottleneck and multiple band
effects, $b$ and $\Delta g$ can be described as
$b=\frac{\pi k_{B}}{g\mu_{B}}\langle
J_{fp}^{2}(\textbf{q})\rangle\eta^{2}(E_{F})\frac{K(\alpha)}{(1-\alpha)^{2}},$
(3)
$\Delta g=J_{fp}(0)\,\frac{\eta(E_{F})}{(1-\alpha)},$ (4)
where $k_{B}$ is the Boltzmann constant, $J_{fp}(0)$ is the effective exchange
interaction between the Gd3+ local moments and the carriers for the momentum
transfer $q$ = 0, and $\langle J_{fp}^{2}(\textbf{q})\rangle^{1/2}$ is the
average of the exchange interaction with momentum transfer $q$ at the residual
Fermi surface [61, 62, 63]. Any possible relevant $ee$ correlations are taken
into account in the Stoner enhancement factor $(1-\alpha)^{-1}$ [82, 83] and
in the Korringa exchange factor $K(\alpha)$ [84, 85]. As already shown, the
$ee$ correlations in the bulk do not seem to play an important role; therefore
we assume $\alpha$ = 0 and $K(\alpha)$ = 1. The estimated exchange
interactions for each Gd3+ concentration and applied magnetic field are
summarized in Table 1. Although we may underestimate $J_{fp}(0)$ and
$\langle J_{fp}^{2}(\textbf{q})\rangle^{1/2}$ due to remaining CEF effects and
a local reduction of the DOS, the trend observed in our analysis is
qualitatively trustworthy.
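Inverting Eqs. 3 and 4 with $\alpha$ = 0 and $K(\alpha)$ = 1 reproduces the order of magnitude of the Table 1 entries. The $g$-shift value used below ($\Delta g$ $\approx$ -0.013) is an assumed illustrative number consistent with Fig. 3(c), not a value quoted in the text:

```python
import numpy as np

K_B = 1.381e-16        # Boltzmann constant, erg/K
MU_B = 9.274e-21       # Bohr magneton, erg/G
ERG_PER_EV = 1.602e-12
g = 1.993
eta = 0.16             # states/(eV f.u. spin), from the specific heat
eta_erg = eta / ERG_PER_EV           # states/(erg f.u. spin)

# Eq. 3 (alpha = 0, K = 1):  b = (pi k_B / g mu_B) <J^2> eta^2
b = 0.015                            # Oe/K (X band, Table 1)
J2 = b * g * MU_B / (np.pi * K_B * eta_erg**2)      # erg^2
J_q = np.sqrt(J2) / ERG_PER_EV * 1e3                # meV
print(f"<J_fp^2(q)>^1/2 ~ {J_q:.1f} meV")           # ~5 meV

# Eq. 4:  Delta g = J_fp(0) eta(E_F)  ->  J_fp(0) = Delta g / eta
dg = -0.013                          # assumed X-band g-shift (illustrative)
J0 = abs(dg) / eta * 1e3             # meV
print(f"|J_fp(0)| ~ {J0:.0f} meV")                  # ~80 meV
```

The same inversion with a Q-band $\Delta g$ roughly four times smaller yields the $\sim$ 20 meV entries, while $b$, and hence $\langle J_{fp}^{2}(\textbf{q})\rangle^{1/2}$, stays essentially unchanged.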
## IV. Discussion
As clearly shown in Table 1, in the X-band measurements we do have a clear
$q$-dependence of the exchange interaction for (Bi1-xGdx)2Se3, which appears
to be reduced upon increasing the applied magnetic field (Q-band). A notion of
this effect can be obtained by looking at real space. Our results indicate
that, at lower fields, there is a stronger exchange interaction surrounding
the Gd3+ ions. In other words, a localization of the carriers around the Gd3+
ions appears to occur. Increasing the magnetic field appears to induce a
magnetic breakdown of this effect, and the system starts to behave more like a
regular metal with a simple Fermi surface and no $q$-dependence. During this
whole process, we expect the average interaction between conduction electrons
and 4$f$ local moments to remain almost constant, which explains why there is
an evolution of $\Delta g$ while $b$ remains unchanged. The last important
piece of information pinpointing our interpretation comes from the evolution
of the Gd3+ spectra as a function of the applied magnetic field. The
transition from an insulating-like Gd3+ spectrum at X-band to a collapse of
the CEF splitting in the Q-band data indicates a destructive-like interference
of the carriers surrounding the Gd3+ ions, which is the so-called local WAL
effect.
As a direct consequence of the local WAL effect, we may also understand the
collapse of the CEF splitting of the Gd3+ ions from the viewpoint of CEF
contributions from the lattice and from the carriers. The CEF parameters may
be affected by the surrounding charges at the Gd3+ site and can eventually be
strongly reduced. In other words, it appears that, due to the $S$-ion
character of the Gd3+ ions, the influence of the conduction electrons could
become relevant to the CEF effects compared to the lattice charges, as already
shown in half-Heusler systems [86, 87]. As such, a change in the local charge
distribution could cause a collapse of the Gd3+ fine structure due to a strong
reduction of the crystal field parameters, associated with two contributions
of different signs to the CEF parameters (lattice and carrier contributions).
Since the second-order crystal field parameter $b_{2}^{0}$ is much larger than
the fourth-order ones in (Bi1-xGdx)2Se3 [47], we can estimate $|b_{2}^{0}|$
$\leq$ 0.9 Oe for the Q-band measurements [88].
One important concern about this proposed scenario is the comparison with
previous transport results. Although previous reports show that even bulk
Bi2Se3 samples display a WAL effect at low fields, with thinner samples having
a more significant critical field ($H$ $\leq$ 1 T) [50], we must note that all
those results report WAL effects at low temperatures. In order to understand
the origin of this discrepancy, it is important to recall that transport and
ESR are different techniques. While the response in transport is a
macroscopic, global property, ESR provides a microscopic viewpoint. In ESR
measurements we have two distinct relaxation channels: relaxation through the
spin-phonon process and the spin-spin relaxation, which is faster and involves
the carriers of the system [61, 62, 63]. From our Gd3+ spin dynamics, Fig. 3,
we clearly obtain signatures that the coupling between the 4$f$ local moments
and the carriers is relevant, and it should dominate the relaxation of the
system. Concomitantly, Gd3+ has $L$ = 0, which means that the spin-phonon
coupling is rather weak. Thus any potential scattering involving phonons,
which would mask the WAL signatures, will be heavily suppressed. This locality
also helps us gain insight into the difference in critical fields. For thicker
transport samples, the cusp has a critical field of only $\approx$ 1 kOe,
while our X-band measurements occur at fields of 3.5 kOe. Again, the phonon
contribution will be heavily suppressed, so it is understandable that the
critical field for the local WAL effect might be comparable to the fields of
thinner samples. Nonetheless, it is worth noting that Q-band measurements
have fields applied
on the order of $\approx$ 12 kOe. An alternative scenario could also rely on
strain effects, originating from surface effects, playing a bigger role in the
Q-band measurements. In this scenario, the central line could have a smaller
linewidth and we would be able to describe the data without an additional
collapsed spectrum. Two different observations indicate that this is an
unlikely explanation. First of all, the data are better described with two
resonances of different linewidths at the same $g$-value. Even if we do not
take this fact into account, the intensity of the central line would have to
be at least double the expected value to fairly describe the data, which is
inconsistent with the crystal field Hamiltonian. Another point is that the
skin depth is $\sim$ 11 $\mu$m and 8 $\mu$m for X- and Q-band measurements,
respectively. With those skin depths, as expected, we do not see any
signatures of surface effects in our data. Therefore, as far as our
experimental data show, the bulk dominates the ESR signatures and strain
effects should not play a role.
It is also worth pointing out the evolution of the sign of the $g$-shift. As
already mentioned, our results indicate that $p$-type carriers are the main
contributors near the Fermi level [73, 74]. However, earlier reports show
positive $g$-shifts (typical of $s$-type carriers) for Gd3+-substituted
Bi2(Te,Se)3 [46, 47]. Perhaps the higher temperature reached in their
synthesis increases the number of defects, affecting the $g$-value, which
would be expected given the Se and Te boiling points. Future systematic
studies of Gd3+-substituted Bi2Se3 synthesized at different temperatures could
help to clarify this open question.
The defects, mainly Se vacancies, also appear to play an important role in the
interpretation of our experiments. First of all, we still observe a weak, but
visible, Gd3+ fine structure in the Q-band measurements. Due to the different
CEF states, Gd3+ local moments closer to impurity centers may need higher
fields to completely suppress the local WAL effect. Although it should be
taken with care, it is possible to estimate the ratio between Gd3+ ions in
insulating and conducting environments by performing a double integration of
our Gd3+ ESR spectrum for the Q-band measurements. We roughly estimate that
70 % of the Gd3+ ions have a conducting-like environment. As mentioned before,
previous results indicate that vacancies tend to be distributed
inhomogeneously in the Bi2Se3 matrix [71]. Although the distribution of Gd3+
ions appears to be homogeneous, those defect domains may tend to accumulate
near the magnetic impurities, which would be consistent with a large number of
local-moment centers still showing an insulating character. The second hint of
the role of vacancies comes from the Gd3+ spin dynamics, especially the
changes observed in the ESR data at low temperatures when comparing X-band and
Q-band data. In this temperature range, presumably, the interaction between
impurity centers and Gd3+ local moments is more relevant. We obtained an
evolution of the negative increase of the $g$-shift at low temperatures as a
function of field: the Gd3+ $g$-shift decrease due to spin-spin interactions
is much more pronounced in the X-band measurements than in the Q-band ones
[Fig. 3(c)]. The change in the charge distribution would change the
interaction between defects and 4$f$ local moments and, naturally, we obtain a
change in the low-$T$ Gd3+ spin dynamics.
Regarding the topology of the system, although the strong spin-orbit coupling
of the bulk is essential to the WAL mechanism, it is not clear whether the
topology of the system plays any role. Another potential signature of surface
excitations could appear as a diffusive-like contribution to the Gd3+ ESR line
shape [89, 90, 91]. However, the presence of carriers will naturally suppress
any contribution from the surface excitations to the relaxation. This,
together with the weak CEF effects of the Gd3+ ions, helps to explain the lack
of any diffusive-like effects in the Gd3+ line shape [89, 90, 91].
Going back to the magnetism of 4$f$ electrons in chalcogenides, the exchange
interactions obtained in Table 1, which are of the
Ruderman-Kittel-Kasuya-Yosida (RKKY) type, are clearly smaller than those
obtained for Bi2-xMnxTe3 [23]. Additionally, a change of the Korringa rate as
a function of Mn2+ concentration has been observed, which may be linked to
changes in the $p-d$ hybridization. In the Gd3+ case, we observed the same
Korringa relaxation for all explored concentrations, indicating that $f-p$
hybridization may play only a small role, highlighting the clear difference
between the effects of $d$ and $f$ substitutions.
## V. Conclusion
In summary, we performed electron spin resonance and complementary macroscopic
measurements on Gd3+-substituted Bi2Se3 ($x$ = 0, 0.001, 0.002 and 0.006).
The Gd3+ ESR spectra at 4 K for different concentrations show seven Dysonian
lines in the X-band measurements, which evolve into an apparent contribution
from spectra of distinct Gd3+ sites in the Q-band. We conjecture that this
evolution might be due to a breakdown of the local WAL effect and a change in
the crystal field parameters in the vicinity of the vacancies. This
interpretation is consistent with the Gd3+ spin dynamics response.
Additionally, we show that the 4$f$ substitution neither increases the DOS at
the Fermi level nor introduces a relevant $f-p$ hybridization. This is in
contrast to the more traditional substitutions with transition-metal $d$-state
magnetic ions and indicates different magnetic mechanisms for $d$ and $f$
states in this system. Our work points out that 4$f$ substitution in
chalcogenides is an interesting path to explore even further the role of
magnetic impurities in this model system.
###### Acknowledgements.
We would like to thank Dr. S. K. Misra for providing Ref. [88]. This work was
supported by FAPESP (SP-Brazil) Grants No 2022/09240-3, 2020/12283-0,
2019/26247-9, 2018/11364-7, 2017/10581-1, 2012/04870-7, CNPq Grants No
311783/2021-0, 309483/2018-2, 442230/2014-1 and 304649/2013-9, CAPES and
FINEP-Brazil. Work at Los Alamos was supported by the Los Alamos Laboratory
Directed Research and Development program through project 20210064DR.
## References
* Bernevig _et al._ [2006] B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Science 314, 1757 (2006).
* König _et al._ [2007] M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science 318, 766 (2007).
* Culcer _et al._ [2020] D. Culcer, A. C. Keser, Y. Li, and G. Tkachov, 2D Mater. 7, 022007 (2020).
* Hasan and Kane [2010] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
* Tokura _et al._ [2017] Y. Tokura, M. Kawasaki, and N. Nagaosa, Nat. Phys. 13, 1056 (2017).
* Zhang _et al._ [2012] D. Zhang, A. Richardella, D. W. Rench, S.-Y. Xu, A. Kandala, T. C. Flanagan, H. Beidenkopf, A. L. Yeats, B. B. Buckley, P. V. Klimov, _et al._ , Phys. Rev. B 86, 205127 (2012).
* Chang _et al._ [2013] C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo, K. Li, Y. Ou, P. Wei, L.-L. Wang, _et al._ , Science 340, 167 (2013).
* He _et al._ [2014] K. He, Y. Wang, and Q.-K. Xue, Natl. Sci. Rev. 1, 38 (2014).
* Chang _et al._ [2015] C.-Z. Chang, W. Zhao, D. Y. Kim, H. Zhang, B. A. Assaf, D. Heiman, S.-C. Zhang, C. Liu, M. H. Chan, and J. S. Moodera, Nat. Mater. 14, 473 (2015).
* Grauer _et al._ [2015] S. Grauer, S. Schreyeck, M. Winnerlein, K. Brunner, C. Gould, and L. Molenkamp, Phys. Rev. B 92, 201304 (2015).
* Xiao _et al._ [2018] D. Xiao, J. Jiang, J.-H. Shin, W. Wang, F. Wang, Y.-F. Zhao, C. Liu, W. Wu, M. H. Chan, N. Samarth, _et al._ , Phys. Rev. Lett. 120, 056801 (2018).
* Yue _et al._ [2019] C. Yue, Y. Xu, Z. Song, H. Weng, Y.-M. Lu, C. Fang, and X. Dai, Nat. Phys. 15, 577 (2019).
* Zhang _et al._ [2019] D. Zhang, M. Shi, T. Zhu, D. Xing, H. Zhang, and J. Wang, Phys. Rev. Lett. 122, 206401 (2019).
* Deng _et al._ [2020] Y. Deng, Y. Yu, M. Z. Shi, Z. Guo, Z. Xu, J. Wang, X. H. Chen, and Y. Zhang, Science 367, 895 (2020).
* Liu _et al._ [2020] C. Liu, Y. Wang, H. Li, Y. Wu, Y. Li, J. Li, K. He, Y. Xu, J. Zhang, and Y. Wang, Nat. Mater. 19, 522 (2020).
* Nenno _et al._ [2020] D. M. Nenno, C. A. Garcia, J. Gooth, C. Felser, and P. Narang, Nat. Rev. Phys. 2, 682 (2020).
* Fijalkowski _et al._ [2021a] K. Fijalkowski, N. Liu, M. Hartl, M. Winnerlein, P. Mandal, A. Coschizza, A. Fothergill, S. Grauer, S. Schreyeck, K. Brunner, _et al._ , Phys. Rev. B 103, 235111 (2021a).
* Zhang _et al._ [2009] H. Zhang, C.-X. Liu, X.-L. Qi, X. Dai, Z. Fang, and S.-C. Zhang, Nat. Phys. 5, 438 (2009).
* Analytis _et al._ [2010] J. G. Analytis, J.-H. Chu, Y. Chen, F. Corredor, R. D. McDonald, Z. Shen, and I. R. Fisher, Phys. Rev. B 81, 205407 (2010).
* Pan _et al._ [2011] Z.-H. Pan, E. Vescovo, A. Fedorov, D. Gardner, Y. Lee, S. Chu, G. Gu, and T. Valla, Phys. Rev. Lett. 106, 257004 (2011).
* Fijalkowski _et al._ [2021b] K. M. Fijalkowski, N. Liu, P. Mandal, S. Schreyeck, K. Brunner, C. Gould, and L. W. Molenkamp, Nat. Commun. 12, 1 (2021b).
* Mahani _et al._ [2014] M. R. Mahani, A. Pertsova, M. F. Islam, and C. M. Canali, Phys. Rev. B 90, 195441 (2014).
* Zimmermann _et al._ [2016] S. Zimmermann, F. Steckel, C. Hess, H. Ji, Y. S. Hor, R. J. Cava, B. Büchner, and V. Kataev, Phys. Rev. B 94, 125205 (2016).
* Peixoto _et al._ [2016] T. R. Peixoto, H. Bentmann, S. Schreyeck, M. Winnerlein, C. Seibel, H. Maaß, M. Al-Baidhani, K. Treiber, S. Schatz, S. Grauer, _et al._ , Phys. Rev. B 94, 195140 (2016).
* Zhang _et al._ [2018] W. Zhang, D. West, S. H. Lee, Y. Qiu, C.-Z. Chang, J. S. Moodera, Y. San Hor, S. Zhang, and W. Wu, Phys. Rev. B 98, 115165 (2018).
* Ye _et al._ [2019] M. Ye, T. Xu, G. Li, S. Qiao, Y. Takeda, Y. Saitoh, S.-Y. Zhu, M. Nurmamat, K. Sumida, Y. Ishida, _et al._ , Phys. Rev. B 99, 144413 (2019).
* Bouaziz _et al._ [2019] J. Bouaziz, M. dos Santos Dias, F. S. M. Guimarães, and S. Lounis, Phys. Rev. Mater. 3, 054201 (2019).
* Tcakaev _et al._ [2020a] A. Tcakaev, V. Zabolotnyy, R. Green, T. Peixoto, F. Stier, M. Dettbarn, S. Schreyeck, M. Winnerlein, R. C. Vidal, S. Schatz, _et al._ , Phys. Rev. B 101, 045127 (2020a).
* Peixoto _et al._ [2020] T. R. Peixoto, H. Bentmann, P. Rüßmann, A.-V. Tcakaev, M. Winnerlein, S. Schreyeck, S. Schatz, R. C. Vidal, F. Stier, V. Zabolotnyy, _et al._ , npj Quantum Mater. 5, 1 (2020).
* Islam _et al._ [2018] M. Islam, C. M. Canali, A. Pertsova, A. Balatsky, S. Mahatha, C. Carbone, A. Barla, K. Kokh, O. Tereshchenko, E. Jiménez, _et al._ , Phys. Rev. B 97, 155429 (2018).
* Mogi _et al._ [2017a] M. Mogi, M. Kawamura, R. Yoshimi, A. Tsukazaki, Y. Kozuka, N. Shirakawa, K. Takahashi, M. Kawasaki, and Y. Tokura, Nat. Mater. 16, 516 (2017a).
* Mogi _et al._ [2017b] M. Mogi, M. Kawamura, A. Tsukazaki, R. Yoshimi, K. S. Takahashi, M. Kawasaki, and Y. Tokura, Sci. Adv. 3, eaao1669 (2017b).
* Mathimalar _et al._ [2020] S. Mathimalar, S. Sasmal, A. Bhardwaj, S. Abhaya, R. Pothala, S. Chaudhary, B. Satpati, and K. V. Raman, npj Quantum Mater. 5, 1 (2020).
* Pereira _et al._ [2020a] V. Pereira, S. Altendorf, C. Liu, S. Liao, A. Komarek, M. Guo, H.-J. Lin, C. Chen, M. Hong, J. Kwo, _et al._ , Phys. Rev. Mater. 4, 064202 (2020a).
* Pereira _et al._ [2020b] V. Pereira, C.-N. Wu, C.-A. Knight, A. Choa, L. Tjeng, and S. Altendorf, APL Mater. 8, 071114 (2020b).
* Chen _et al._ [2015] T. Chen, W. Liu, F. Zheng, M. Gao, X. Pan, G. Van Der Laan, X. Wang, Q. Zhang, F. Song, B. Wang, _et al._ , Adv. Mater. 27, 4823 (2015).
* Tcakaev _et al._ [2020b] A. Tcakaev, V. B. Zabolotnyy, C. I. Fornari, P. Rüßmann, T. R. Peixoto, F. Stier, M. Dettbarn, P. Kagerer, E. Weschke, E. Schierle, _et al._ , Phys. Rev. B 102, 184401 (2020b).
* Li _et al._ [2013] S. Li, S. Harrison, Y. Huo, A. Pushp, H. Yuan, B. Zhou, A. Kellock, S. Parkin, Y.-L. Chen, T. Hesjedal, _et al._ , Appl. Phys. Lett. 102, 242412 (2013).
* Filnov _et al._ [2020] S. Filnov, I. I. Klimovskikh, D. Estyunin, A. Fedorov, V. Y. Voroshnin, A. Koroleva, A. G. Rybkin, E. Shevchenko, Z. S. Aliev, M. Babanly, _et al._ , Phys. Rev. B 102, 085149 (2020).
* Lee _et al._ [2019] E. Lee, S. Seong, M. Y. Yang, J. Kim, M.-H. Jung, B.-G. Park, Y. Kim, S. W. Han, and J.-S. Kang, Appl. Phys. Lett. 115, 072404 (2019).
* Lee _et al._ [2015] H. Lee, J. Kim, K. Lee, A. Jelen, S. Vrtnik, Z. Jagličić, J. Dolinšek, and M. Jung, Appl. Phys. Lett. 107, 182409 (2015).
* Kim and Jung [2018] S.-W. Kim and M.-H. Jung, AIP Adv. 8, 101319 (2018).
* Filnov _et al._ [2019] S. Filnov, Y. A. Surnin, A. Koroleva, I. Klimovskikh, D. Estyunin, A. Y. Varykhalov, K. Bokai, K. Kokh, O. Tereshchenko, V. Golyashov, _et al._ , J. Exp. Theor. Phys. 129, 404 (2019).
* Kholdi _et al._ [1994] M. E. Kholdi, M. Averous, S. Charar, C. Fau, G. Brun, H. Ghoumari-Bouanani, and J. Deportes, Phys. Rev. B 49, 1711 (1994).
* Song _et al._ [2012] Y. Song, F. Yang, M.-Y. Yao, F. Zhu, L. Miao, J.-P. Xu, M.-X. Wang, H. Li, X. Yao, F. Ji, _et al._ , Appl. Phys. Lett. 100, 242403 (2012).
* Isber _et al._ [1995] S. Isber, S. Charar, V. Mathet, C. Fau, and M. Averous, Phys. Rev. B 51, 15578 (1995).
* Gratens _et al._ [1997] X. Gratens, S. Isber, S. Charar, C. Fau, M. Averous, S. K. Misra, Z. Golacki, M. Ferhat, and J. C. Tedenac, Phys. Rev. B 55, 8075 (1997).
* Garitezi _et al._ [2015] T. Garitezi, G. Lesseux, C. Jesus, T. Grant, Z. Fisk, R. Urbano, C. Rettori, and P. Pagliuso, in _Journal of Physics: Conference Series_ , Vol. 592 (IOP Publishing, 2015) p. 012125.
* Hikami _et al._ [1980] S. Hikami, A. I. Larkin, and Y. Nagaoka, Prog. Theor. Phys. 63, 707 (1980).
* Kim _et al._ [2011] Y. S. Kim, M. Brahlek, N. Bansal, E. Edrey, G. A. Kapilevich, K. Iida, M. Tanimura, Y. Horibe, S.-W. Cheong, and S. Oh, Phys. Rev. B 84, 073109 (2011).
* Lu and Shen [2011] H.-Z. Lu and S.-Q. Shen, Phys. Rev. B 84, 125138 (2011).
* Lu _et al._ [2011] H.-Z. Lu, J. Shi, and S.-Q. Shen, Phys. Rev. Lett. 107, 076801 (2011).
* Huang _et al._ [2012] F.-T. Huang, M.-W. Chu, H. Kung, W. Lee, R. Sankar, S.-C. Liou, K. Wu, Y. Kuo, and F. Chou, Phys. Rev. B 86, 081104 (2012).
* Devidas _et al._ [2015] T. Devidas, E. Amaladass, S. Sharma, R. Rajaraman, D. Sornadurai, N. Subramanian, A. Mani, C. Sundar, and A. Bharathi, EPL (Europhys. Lett.) 108, 67008 (2015).
* Adroguer _et al._ [2015] P. Adroguer, W. E. Liu, D. Culcer, and E. Hankiewicz, Phys. Rev. B 92, 241402 (2015).
* Teng _et al._ [2019] J. Teng, N. Liu, and Y. Li, J. Semicond. 40, 081507 (2019).
* Janíček _et al._ [2008] P. Janíček, Č. Drašar, P. Lošták, J. Vejpravová, and V. Sechovskỳ, Phys. B: Condens. Matter 403, 3553 (2008).
* Chen _et al._ [2011] J. Chen, X. He, K. Wu, Z. Ji, L. Lu, J. Shi, J. Smet, and Y. Li, Phys. Rev. B 83, 241304 (2011).
* Wang _et al._ [2011] J. Wang, A. M. DaSilva, C.-Z. Chang, K. He, J. Jain, N. Samarth, X.-C. Ma, Q.-K. Xue, and M. H. Chan, Phys. Rev. B 83, 245438 (2011).
* Pal _et al._ [2012] H. Pal, V. Yudson, and D. Maslov, Phys. Rev. B 85, 085439 (2012).
* Abragam and Bleaney [2012] A. Abragam and B. Bleaney, _Electron paramagnetic resonance of transition ions_ (OUP Oxford, 2012).
* Barnes [1981] S. Barnes, Adv. Phys. 30, 801 (1981).
* Poole and Farach [1971] C. P. Poole and H. A. Farach, _Relaxation in magnetic resonance_ , Vol. 19 (Elsevier, 1971).
* Feher and Kip [1955] G. Feher and A. Kip, Phys. Rev. 98, 337 (1955).
* Dyson [1955] F. J. Dyson, Phys. Rev. 98, 349 (1955).
* Anderson and Weiss [1953] P.-W. Anderson and P. Weiss, Rev. Mod. Phys. 25, 269 (1953).
* Urban _et al._ [1975] P. Urban, D. Davidov, B. Elschner, T. Plefka, and G. Sperlich, Phys. Rev. B 12, 72 (1975).
* Park _et al._ [2013] K. Park, Y. Nomura, R. Arita, A. Llobet, and D. Louca, Phys. Rev. B 88, 224108 (2013).
* Xu _et al._ [2014] C. Xu, A. Hewitt, J. Wang, T. Guan, J. Boltersdorf, P. A. Maggard, D. B. Dougherty, and K. Gundogdu, J. Appl. Phys. 116, 043519 (2014).
* Tayal _et al._ [2017] A. Tayal, D. Kumar, and A. Lakhani, J. Phys.: Condens. Matter 29, 445704 (2017).
* Taylor _et al._ [2012] R. E. Taylor, B. Leung, M. P. Lake, and L.-S. Bouchard, J. Phys. Chem. C 116, 17300 (2012).
* Davidov _et al._ [1973] D. Davidov, C. Rettori, A. Dixon, K. Baberschke, E. Chock, and R. Orbach, Phys. Rev. B 8, 3563 (1973).
* Cao _et al._ [2013] Y. Cao, J. Waugh, X. Zhang, J.-W. Luo, Q. Wang, T. Reber, S. Mo, Z. Xu, A. Yang, J. Schneeloch, _et al._ , Nat. Phys. 9, 499 (2013).
* Vidal _et al._ [2013] F. Vidal, M. Eddrief, B. R. Salles, I. Vobornik, E. Velez-Fort, G. Panaccione, and M. Marangolo, Phys. Rev. B 88, 241410 (2013).
* Kim _et al._ [2015] S. Kim, S. Vrtnik, J. Dolinšek, and M. Jung, Appl. Phys. Lett. 106, 252401 (2015).
* Shimizu _et al._ [2006] Y. Shimizu, K. Miyagawa, K. Kanoda, M. Maesato, and G. Saito, Phys. Rev. B 73, 140407 (2006).
* Riedl _et al._ [2019] K. Riedl, R. Valentí, and S. M. Winter, Nat. Commun. 10, 1 (2019).
* Kawamura and Uematsu [2019] H. Kawamura and K. Uematsu, J. Phys. Condens. Matter 31, 504003 (2019).
* Pustogow _et al._ [2020] A. Pustogow, T. Le, H.-H. Wang, Y. Luo, E. Gati, H. Schubert, M. Lang, and S. Brown, Phys. Rev. B 101, 140401 (2020).
* Miksch _et al._ [2021] B. Miksch, A. Pustogow, M. J. Rahim, A. A. Bardin, K. Kanoda, J. A. Schlueter, R. Hübner, M. Scheffler, and M. Dressel, Science 372, 276 (2021).
* Rettori _et al._ [1974] C. Rettori, H. Kim, E. Chock, and D. Davidov, Phys. Rev. B 10, 1826 (1974).
* Moriya [1963] T. Moriya, J. Phys. Soc. Jpn. 18, 516 (1963).
* Narath [1967] A. Narath, Phys. Rev. 163, 232 (1967).
* Narath and Weaver [1968] A. Narath and H. Weaver, Phys. Rev. 175, 373 (1968).
* Shaw Jr and Warren Jr [1971] R. W. Shaw Jr and W. W. Warren Jr, Phys. Rev. B 3, 1562 (1971).
* Pagliuso _et al._ [1999] P. Pagliuso, C. Rettori, M. Torelli, G. Martins, Z. Fisk, J. Sarrao, M. Hundley, and S. Oseroff, Phys. Rev. B 60, 4176 (1999).
* Souza _et al._ [2019] J. Souza, C. Jesus, G. Lesseux, P. Rosa, R. Urbano, and P. Pagliuso, J. Phys. Condens. Matter 31, 465701 (2019).
* Misra [1986] S. K. Misra, Magn. Reson. Rev. 10, 285 (1986).
* Lesseux _et al._ [2016] G. Lesseux, T. Garitezi, P. Rosa, C. Jesus, S. Oseroff, J. Sarrao, Z. Fisk, R. Urbano, P. Pagliuso, and C. Rettori, J. Phys. Condens. Matter 28, 125601 (2016).
* Souza _et al._ [2018] J. Souza, G. Lesseux, R. Urbano, C. Rettori, and P. Pagliuso, AIP Adv. 8, 055713 (2018).
* Souza _et al._ [2021] J. Souza, M. König, M. Ale Crivillero, M. Malcolms, R. Urbano, Z. Fisk, P. Rosa, P. Pagliuso, S. Wirth, and J. Sichelschmidt, Phys. Rev. Research 3, 033016 (2021).
Regret-Optimal Online Caching for Adversarial and Stochastic Arrivals

This work is supported by a SERB grant on Leveraging Edge Resources for Service Hosting.
Fathima Zarin Faizal (ORCID 0000-0002-5663-8308), Priya Singh (ORCID 0000-0002-1658-1116), Nikhil Karamchandani (ORCID 0000-0002-7233-0717), Sharayu Moharir (ORCID 0000-0001-9393-9276)
Indian Institute of Technology Bombay, Mumbai, Maharashtra, India - 400076
We consider the online caching problem for a cache of limited size. In a time-slotted system, a user requests one file from a large catalog in each slot. If the requested file is cached, the policy receives a unit reward and zero reward otherwise. We show that a Follow the Perturbed Leader (FTPL)-based anytime caching policy is simultaneously regret-optimal for both adversarial and i.i.d. stochastic arrivals. Further, in the setting where there is a cost associated with switching the cached contents, we propose a variant of FTPL that is order-optimal with respect to time for both adversarial and stochastic arrivals, and that has significantly better performance than FTPL with respect to the switching cost for stochastic arrivals. We also show that these results can be generalized to the setting where there are constraints on the frequency with which cache contents can be changed. Finally, we validate the results obtained on various synthetic as well as real-world traces.
§ INTRODUCTION
The caching problem has been studied since the 1960s, initially motivated by memory management in computers [21]. More recently, there has been renewed interest motivated by Content Delivery Networks [3] used for applications such as Video-on-Demand services. Such applications rely on low latency to provide a good customer experience. The framework of this problem involves a library of $L$ files and a cache located near the end-users that is capable of storing at most $C$ files at any given time, the algorithmic challenge being to determine the most popular files to be stored in the cache.
Two types of arrival patterns have been considered in the existing literature and in our work. The first is known as the Independent Reference Model, where request arrivals are generated by an i.i.d. stochastic process and the distribution of the request process is unknown to the policy. The second arrival model is the adversarial arrival model where we make no structural assumptions on the arrival process. Here, the arrival pattern is generated by an oblivious adversary who knows which caching policy is being used but is not aware of the sample path of decisions made by the policy. In both models, as we are focused on the online caching problem, requests are revealed causally and therefore caching decisions have to be made based on past arrival patterns without any explicit knowledge of future arrivals.
Various metrics have been used to characterize the performance of caching policies, including hit rate and competitive ratio. Regret is a popular metric for online learning algorithms [9] and is defined as the difference between the reward accrued by the optimal stationary policy and that of the policy under consideration. Our broad goal is to determine whether there exist caching policies with order-optimal regret with respect to time for both i.i.d. stochastic and adversarial arrivals, and which are therefore robust to the nature of the arrival process. Policies that perform well in the adversarial setting are primarily designed not to perform terribly on any arrival sequence, which often leads to sub-optimal performance on specific arrival sequences. Conversely, policies designed for a specific arrival process, or under structural assumptions on the arrival process, can perform very poorly on other arrival sequences, which degrades their worst-case performance in the adversarial setting. For instance, policies designed for the independent reference model would not be ideal when requests are not stationary.
Prediction with expert advice [9, 16] and Online Convex Optimization [23] are well-known settings in online learning for which optimal algorithms have been found. Though the caching problem is equivalent to the prediction with expert advice setting with ${L \choose C}$ experts, ${L \choose C}$ is typically a very large number resulting in standard algorithms being computationally inefficient. Least Frequently Used (LFU), Least Recently Used (LRU) and First-in-First-Out (FIFO) are popular caching policies that have been shown to achieve optimal competitive ratio [4]. There are also results on the closed form stationary hit probabilities of these algorithms under the Independent Reference Model [6, 22].
Under stochastic arrivals, LFU achieves order-optimal regret [8] but under adversarial arrivals, LFU, LRU and FIFO have been shown to have suboptimal regret [20]. A sublinear regret upper bound was proved for a gradient-based coded caching policy (OGA) under adversarial arrivals [20] while the first uncoded caching policy to be shown to achieve sublinear regret is the Follow The Perturbed Leader (FTPL) policy [7]. Proposed in [11], FTPL has also been shown to achieve order-optimal regret under adversarial arrivals by proving a lower bound on the regret using a balls-into-bins argument in [7]. An FTPL-based policy has also been shown to be regret-optimal for bipartite caching networks [19].
These policies do not consider the overhead of fetching files into the cache from the library each time the cache updates, called the switching cost [18]. An $\tilde{O}(C \sqrt{T})$ upper bound on the regret including the switching cost was shown for a variant of the Multiplicative-Weight (MW) policy under adversarial arrivals, which is also more computationally efficient than the original MW algorithm naively applied to the caching problem. An upper bound of $\tilde{O}(\sqrt{CT})$ was shown for an FTPL-based policy that is also simpler to implement [18], improving upon the earlier bound by a factor of $\Theta(\sqrt{C})$.
§.§ Our contributions
We consider the following two settings: unrestricted switching, where the objective is to minimize the regret including the switching cost, and restricted switching, where the cache is allowed to update only at certain fixed points and the objective is to minimize regret. In Section <ref>, we consider the unrestricted switching setting and show that FTPL with an adaptive learning rate achieves order-optimal regret under stochastic arrivals even after including the switching cost, while FTPL with a constant learning rate cannot have order-optimal regret for both stochastic and adversarial arrivals. We also propose the Wait then FTPL (W-FTPL) policy, which improves the bound on the switching cost from $\mathcal{O}(D)$ to $\mathcal{O}(\log D)$, where $D$ is the per-file switching cost. In Section <ref>, we consider the restricted switching setting and prove a lower bound on the regret of any policy and an upper bound on the regret of FTPL. We show that FTPL achieves order-optimal regret under stochastic file requests and, in a special case of this setting, under adversarial file requests. Finally, in Section <ref>, we present the results of numerical experiments on synthetic as well as real-world traces that validate the results obtained. Due to a lack of space, the proofs of the theorems stated in this paper can be found in [1].
We thus show that FTPL with an adaptive learning rate, applied to the online caching problem, has order-optimal regret under both stochastic and adversarial arrivals in the unrestricted switching setting and in a special case of the restricted switching setting.
§ PROBLEM FORMULATION
We consider the classical content caching problem where a user requests files from a library that is stored in a back-end server. There is a cache that is capable of serving user requests at a lower cost but has a storage size that is typically considerably smaller than the library size. Time is slotted and in each time slot, the user requests at most one file. The sequence of events in a time slot is as follows. The cache may first update its contents, after which it receives a request for a file from the user. If the requested file is available in the cache, the cache is said to have a hit and the request is fulfilled locally by the cache; otherwise, it is a miss, and the file request is fulfilled by the back-end server.
Cache configuration. We consider a cache of size $C$ that stores files from a library $\mathcal{L}$ of size $L$. Usually, the cache size is much smaller than the library size, i.e., $C \ll L$. The file requested by the user at time $t$ is denoted by $x_t$ and is also represented as the one-hot encoded vector $\mathbf{x}_t \in \{0,1\}^{L}$. For $\tau \geq 2$, we denote by $\boldsymbol{X}_{\tau}=\sum_{t=1}^{\tau-1} \boldsymbol{x}_t$ the $L$-length vector storing the cumulative number of requests for each file up to time slot $\tau-1$; $\boldsymbol{X}_1$ is initialized to the zero vector. Let $C(t)$ denote the set of files cached in round $t$ and let $\boldsymbol{y}_t \in \{0,1\}^{L}$ be a binary vector denoting the state of the cache at time $t$, i.e., $\boldsymbol{y}_t = (y_t^1, y_t^2, \ldots, y_t^L)$ with $y_t^i = 1$ for $i \in C(t)$ and $y_t^i = 0$ otherwise.
File requests. We consider two types of file requests: adversarial and stochastic. The file requests are said to be adversarial if no assumptions are made regarding the statistical properties of the file requests. We assume that the adversary is oblivious, i.e., the entire file request sequence is fixed before the first request is sent. The file requests are said to be stochastic if in each slot, the request is generated independently according to a popularity distribution $\boldsymbol{\mu}=(\mu_1, \ldots, \mu_L)$, where $\mathbb{P}(x_t=i)=\mu_i$ and $\sum_{i} \mu_i =1$. Without loss of generality, we assume that $\mu_1 \geq \ldots \geq \mu_L$. As is the case in most real-world applications, the popularity distribution is assumed to be unknown to the caching policy beforehand.
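As a concrete illustration of the stochastic model, each request can be drawn independently from $\boldsymbol{\mu}$. The sketch below uses a Zipf-like popularity profile purely as an example: the paper leaves $\boldsymbol{\mu}$ arbitrary and unknown to the policy, and the helper name is ours.

```python
import numpy as np

def sample_requests(mu, T, seed=None):
    """Draw T i.i.d. file requests x_1, ..., x_T with P(x_t = i) = mu[i]."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(mu), size=T, p=mu)

# Example: a 5-file library with a skewed (Zipf-like) popularity profile,
# already sorted so that mu_1 >= ... >= mu_L as assumed in the text.
weights = 1.0 / np.arange(1, 6)
mu = weights / weights.sum()
requests = sample_requests(mu, T=10_000, seed=0)
```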
Caching Policy. At the beginning of any given time slot $t \geq 2$, a caching policy $\pi(\cdot)$ maps the history of observations it has seen so far (denoted by $h(t)$) to a valid cache configuration $C(t)$, i.e., $C(t)=\pi(h(t))$. In the first time slot, we assume that the cache stores $C$ files randomly chosen from the library and that this does not incur any switch cost. We define $T$ to be the time horizon of interest. When the file requests are adversarial, the optimal stationary policy is defined to be the caching policy that stores the top $C$ files in hindsight, i.e., stores the $C$ files that received the maximum number of requests till time $T$. When the file requests are stochastic, the optimal stationary policy is defined to be the caching policy that stores the files with the top $C$ popularities in the cache, i.e., $C(t)=\mathcal{C} \, \forall t$, where $\mathcal{C}=\{1,\ldots,C\}$.
Reward and Switch Cost. At each time step, the policy obtains a reward of 1 unit when the requested file is available in the cache, i.e., a hit occurs, and a reward of 0 units otherwise. A caching policy that fetches a large number of files into the cache each time the cache updates is not ideal as fetching files into the cache causes latency and consumes bandwidth. Thus, we also consider the switch cost, i.e., the cost of fetching files from the back-end server into the cache. We assume that fetching a file into the cache from the server incurs a cost of $D$ units.
Performance metric. Policies are evaluated on the basis of the regret that they incur. Informally, the regret of a policy till time $T$ is the difference between the net utility of the optimal stationary policy and the net utility of the policy under consideration. The net utility of a policy till time $T$ is the difference between the net reward accrued till $T$ and the overall switch cost incurred till then. Stationary policies do not incur any switch cost and hence their net utility is determined by the overall number of hits they have till time $T$. As discussed in the next section, in some cases, we omit the switch cost.
Problem settings. We consider the following two variations of the classical content caching problem:
* Setting 1: Unrestricted switching with switching cost.
In this setting, the system incurs a cost of $D$ units every time a file is fetched from the back-end server to be stored in the cache. When following a policy $\pi$ on a request sequence $\{x_t\}_{t=1}^{T}$, the regret till time $T$ for $\{x_t\}_{t=1}^{T}$ including the switching cost when the file requests are adversarial is defined as:
\begin{align}
R^{\pi}_{A}(\{x_t\}_{t=1}^{T},T,D) &=\sup _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1}\right\rangle-\sum_{t=1}^{T} \mathbb{E}\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle +\frac{D}{2} \sum_{t=1}^{T-1} \mathbb{E}\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1},
\end{align}
where the expectation is with respect to any randomness introduced by the policy. The regret of a policy $\pi$ till time $T$ is defined as the worst-case regret over all possible request sequences, i.e.,
\begin{align*}
R^{\pi}_{A}(T,D) = \underset{\{x_t\}_{t=1}^{T}}{\sup} R^{\pi}_{A}(\{x_t\}_{t=1}^{T},T,D).
\end{align*}
When the file requests are stochastic, the regret including the switching cost after $T$ time steps is defined as:
\begin{align}
R^{\pi}_{S}(T,D) =\mathbb{E} \left [ \sum_{t=1}^{T} \mathbbm{1}\{x_t \in \mathcal{C}\}-\mathbbm{1}\{x_t \in C(t)\} \right ] +\frac{D}{2} \sum_{t=1}^{T-1} \mathbb{E}\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}, \label{eqn: switching_regret_stochastic}
\end{align}
where $\mathcal{C}$ denotes the set of files having the top $C$ popularities. In the above expression, the expectation is taken with respect to the randomness in the file requests as well as any randomness introduced by the policy.
* Setting 2: Restricted switching without switching cost.
Here, the cache is allowed to change its contents only at $s+1$ fixed time slots for some $1 \leq s \leq T$. To be precise, the cache is allowed to change its contents only at the beginning of the following time slots: $1, r_1+1, r_1+r_2+1, \ldots, \sum_{i=1}^{s}r_i+1$, where $1 \leq r_i \leq T, 1 \leq i \leq s$ denotes the $i^{\text{th}}$ inter-switching period such that $\sum_{i=1}^{s}r_i = T$. Thus, within the time horizon $T$, the cache is allowed to update only at $s$ fixed time slots. Note that the setting where the cache is allowed to change its contents only after every $r$ requests ($1 \leq r \leq T$), i.e., at time slots $1,r+1,\ldots,T+1$ with $s=\frac{T}{r}$, is a special case of this setting. For simplicity, we restrict our attention to the case where there is no switch cost, i.e., $D=0$. When following a policy $\pi$, the regret after $T$ time steps when the file requests are adversarial is:
\begin{align}
R^{\pi}_{A}(T) &=\sup _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1}\right\rangle-\sum_{t=1}^{T} \mathbb{E}\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle,
\end{align}
where the expectation is with respect to the randomness introduced by the policy. When the file requests are stochastic, the regret after $T$ time steps is:
\begin{align}
R^{\pi}_{S}(T) =\mathbb{E} \left [ \sum_{t=1}^{T} \mathbbm{1}\{x_t \in \mathcal{C}\}-\mathbbm{1}\{x_t \in C(t)\} \right ], \label{eqn: restricted_regret_stochastic}
\end{align}
where $\mathcal{C}$ denotes the set of files having the top $C$ popularities. In the above expression, the expectation is taken with respect to the randomness introduced by the policy and the file requests.
To distinguish between results including switching cost and those without switching cost, we use the notation $R_{(\cdot)}^{\pi}(T,D)$ for results involving the switch cost and $R_{(\cdot)}^{\pi}(T)$ for results without the switch cost.
The overall goal of this work is to characterize the optimal regret in the two settings mentioned above, for both adversarial and stochastic file requests. This entails proving scheme-agnostic lower bounds on the regret as well as designing policies whose regret is of the same order as these lower bounds. As we will see, these results will also highlight the impact of switching cost and intermittent switching on the optimal achievable regret.
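To make the bookkeeping in the Setting 1 adversarial regret definition concrete: the best-static term is the total count of the top-$C$ files in hindsight, and each cache change contributes its symmetric difference to the movement term $\sum_t \|\boldsymbol{y}_{t+1}-\boldsymbol{y}_t\|_1$. The helper below is our own sketch, not code from the paper.

```python
import numpy as np

def empirical_regret(requests, cache_sets, L, C, D=0.0):
    """Regret of a cache trajectory vs. the best fixed cache in hindsight,
    including the switching cost (D/2) * sum_t ||y_{t+1} - y_t||_1."""
    counts = np.bincount(requests, minlength=L)
    best_static = np.sort(counts)[-C:].sum()          # sup_y <y, X_{T+1}>
    hits = sum(x in S for x, S in zip(requests, cache_sets))
    # symmetric difference size equals the l1 distance between cache vectors
    movement = sum(len(a ^ b) for a, b in zip(cache_sets, cache_sets[1:]))
    return best_static - hits + 0.5 * D * movement

# Round-robin requests with a static cache {0}: the trajectory and the best
# fixed cache both score T/2 hits and no switches occur, so regret is 0.
reqs = [0, 1] * 4
print(empirical_regret(reqs, [{0}] * 8, L=2, C=1, D=1.0))  # prints 0.0
```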
§ POLICIES
In this section, we introduce and formalize policies whose optimality (or suboptimality) will be discussed in later sections.
§.§ Least Frequently Used (LFU)
The LFU algorithm (formally defined in Algorithm <ref>) keeps track of the number of times each file has been requested so far. At each time step $t$, the files with the $C$ highest number of requests are cached. This policy is deterministic and thus performs poorly when faced with certain adversarial request sequences [20]. For the simplified case of $L=2, C=1$, one example is a round-robin request sequence of the form $1,2,1,2,\ldots$ which would result in LFU obtaining essentially zero reward while the optimal stationary policy obtains a reward of $T/2$. For stochastic requests, it has been shown to achieve $\mathcal{O}(1)$ regret when switching is allowed at all time slots and when the algorithm incurs no switch cost [8].
LFU algorithm:
    $\boldsymbol{c}_1 \gets \mathbf{0}$
    for $t = 1, \ldots, T$:
        $C(t) \gets \underset{C}{\argmax} \, (c_t(1), \ldots, c_t(L))$
        Receive file request $x_t$
        $c_t(x_t) \gets c_t(x_t) + 1$
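The LFU loop above can be sketched in Python as follows. The initial cache contents and the tie-breaking rule are implementation choices the pseudocode leaves open; here we use the first $C$ files and first-come tie-breaking.

```python
from collections import Counter

def lfu_hits(requests, L, C):
    """Run LFU: before each request, cache the C most-requested files so far.
    Returns the total number of hits."""
    counts = Counter()
    hits = 0
    for x in requests:
        if counts:
            cache = {f for f, _ in counts.most_common(C)}
        else:
            cache = set(range(C))     # arbitrary initial cache contents
        hits += x in cache
        counts[x] += 1
    return hits
```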
§.§ Follow The Perturbed Leader (FTPL)
The FTPL algorithm (formally defined in Algorithm <ref>) is a variation of the LFU algorithm: it also keeps track of the number of times each file has been requested so far, but adds an independent Gaussian perturbation with mean 0 and standard deviation $\eta_t$ (referred to as the learning rate) to the count of each file in each time slot. At each time step $t$, the files with the $C$ highest perturbed counts are cached. Special cases of this policy are known to achieve order-optimal regret under adversarial requests (with and without switch cost) [7, 18]. We will henceforth refer to the FTPL algorithm with learning rate $\eta_t$ as FTPL($\eta_t$).
FTPL algorithm:
    $\boldsymbol{c}_1 \gets \mathbf{0}$
    Sample $\boldsymbol{\gamma} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I}_{L \times L})$
    for $t = 1, \ldots, T$:
        $C(t) \gets \underset{C}{\argmax} \, (c_t(1) + \eta_t\gamma(1), \ldots, c_t(L) + \eta_t\gamma(L))$
        Receive file request $x_t$
        $c_t(x_t) \gets c_t(x_t) + 1$
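A minimal Python sketch of the FTPL loop above, with the one-shot Gaussian perturbation scaled by $\eta_t$ each slot; passing `eta` as a function of $t$ covers both the constant and the $\alpha\sqrt{t}$ learning rates discussed later.

```python
import numpy as np

def ftpl_hits(requests, L, C, eta, seed=None):
    """FTPL: cache the C files with the largest perturbed request counts.
    `eta` maps the slot index t to the learning rate eta_t."""
    rng = np.random.default_rng(seed)
    gamma = rng.standard_normal(L)      # sampled once, reused in every slot
    counts = np.zeros(L)
    hits = 0
    for t, x in enumerate(requests, start=1):
        scores = counts + eta(t) * gamma
        cache = set(np.argpartition(scores, -C)[-C:].tolist())
        hits += x in cache
        counts[x] += 1
    return hits

# FTPL(alpha * sqrt(t)) quickly locks onto a heavily requested file.
hits = ftpl_hits([2] * 100, L=3, C=1, eta=lambda t: 0.01 * np.sqrt(t), seed=0)
```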
§.§ Wait then FTPL (W-FTPL)
The algorithm that we propose, Wait then FTPL (formally defined in Algorithm <ref>), is a variant of the FTPL algorithm in which the policy remains idle for an initial deterministic waiting period and then follows the standard FTPL algorithm. The motivation for this algorithm is to avoid the high switch cost incurred initially by the FTPL algorithm under stochastic file requests, until the policy has seen enough requests to form a good estimate of the underlying popularity distribution, while still ensuring order-optimal regret in the adversarial setting. We will henceforth refer to the W-FTPL algorithm with learning rate $\eta_t$ as W-FTPL($\eta_t$).
W-FTPL algorithm:
    $\boldsymbol{c}_1 \gets \mathbf{0}$
    Sample $\boldsymbol{\gamma} \sim \mathcal{N}(\boldsymbol{0}, \boldsymbol{I}_{L \times L})$
    for $t = 1, \ldots, T$:
        if $t > t'$ (the waiting period is over):
            $C(t) \gets \underset{C}{\argmax} \, (c_t(1) + \eta_t\gamma(1), \ldots, c_t(L) + \eta_t\gamma(L))$
        Receive file request $x_t$
        $c_t(x_t) \gets c_t(x_t) + 1$
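W-FTPL differs from the FTPL sketch only in the initial wait: the cache is frozen for the first $t'$ slots, which is what suppresses the early switching cost. The helper below, which also tracks the number of file fetches, is our own sketch rather than code from the paper.

```python
import numpy as np

def wftpl_run(requests, L, C, eta, wait, seed=None):
    """W-FTPL: hold an arbitrary initial cache for `wait` slots, then FTPL.
    Returns (hits, fetches), where fetches counts files brought into the cache."""
    rng = np.random.default_rng(seed)
    gamma = rng.standard_normal(L)
    counts = np.zeros(L)
    cache = set(range(C))               # initial cache, fetched at no cost
    hits = fetches = 0
    for t, x in enumerate(requests, start=1):
        if t > wait:                    # waiting period is over
            scores = counts + eta(t) * gamma
            new_cache = set(np.argpartition(scores, -C)[-C:].tolist())
            fetches += len(new_cache - cache)
            cache = new_cache
        hits += x in cache
        counts[x] += 1
    return hits, fetches
```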
§ SETTING 1: UNRESTRICTED SWITCHING WITH SWITCHING COST
In this section, we consider the setting where there is no limitation on the switching frequency of the cache and the objective is to minimize the regret including the switching cost, i.e., minimize regret as well as the number of fetches into the cache. We consider both stochastic and adversarial file request sequences and show that FTPL($\alpha \sqrt{t}$) and W-FTPL($\alpha \sqrt{t}$) are order-optimal under both types of file requests. While the FTPL($\eta$) algorithm is order-optimal under adversarial requests for a particular value of $\eta$ [7], we prove that the same does not hold true for stochastic file requests.
§.§ Adversarial requests
In this section, we discuss the performance of the policies introduced in Section <ref> under adversarial file requests. The key results of this section are summarized in the following theorem:
Under adversarial requests, we have
* [7] For any policy $\pi$ and $L \geq 2C$,
\begin{align*}
R_{A}^{\pi}(T,D=0) \geq \sqrt{\frac{C T}{2 \pi}}-\Theta\left(\frac{1}{\sqrt{T}}\right).
\end{align*}
* The regret of the LFU policy can be characterized as:
\begin{align*}
R_{A}^{\textrm{LFU}}(T,0) = \Omega(T).
\end{align*}
* [18] The regret of FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R_{A}^{\textrm{FTPL}(\alpha \sqrt{t})}(T,D) \leq c_{1} \sqrt{T}+c_{2} \ln T+c_{3},
\end{align*}
where $c_{1}=\mathcal{O}(\sqrt{\ln (L e / C)})$, and $c_{2}, c_{3}$ are small constants depending on $L, C,D$ and $\alpha$.
* The regret of W-FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T,D) \leq \mathcal{O}(\sqrt{T}).
\end{align*}
Part (a) has been proved in [7] and provides a lower bound on the regret of any policy under adversarial requests.
As argued before, LFU performs poorly under adversarial requests. The same is true of many popular classical caching algorithms such as LRU and FIFO (see [20]).
Part (c) has been proved in [18] and provides an $\mathcal{O}(\sqrt{T})$ upper bound on the regret including the switching cost of the FTPL($\alpha \sqrt{t}$) policy under adversarial requests, thus showing that this algorithm is order-optimal under adversarial requests. FTPL($\alpha \sqrt{T}$) has also been shown to be order-optimal under adversarial requests [7, 18].
Part (d) provides an upper bound on the regret including the switching cost of W-FTPL($\alpha \sqrt{t}$) under adversarial requests. This result shows that W-FTPL($\alpha \sqrt{t}$) is order-optimal under adversarial requests. The proof of this result can be found in Appendix <ref>.
§.§ Stochastic requests
To find a policy that achieves order-optimal regret under both stochastic and adversarial arrivals, we were motivated by [7, 18], which characterized the regret of FTPL with the learning rates $\alpha \sqrt{t}$ and $\alpha \sqrt{T}$ ($\alpha$ being some positive constant) under adversarial arrivals. In this section, we discuss the performance of these policies under stochastic file requests. The key results of this section are summarized in the following theorem:
The file requests are stochastic i.i.d. with the popularity distribution $\boldsymbol{\mu}$.
* [8] When $D=0$, the regret of the LFU policy can be upper bounded as:
\begin{align}
R^{LFU}_{S}(T,0)<\min \left(\frac{16}{\Delta_{\min }^{2}}, \frac{4 C(L-C)}{\Delta_{\min }}\right), \label{eqn: lfu_ub}
\end{align}
where $\Delta_{\min}=\mu_C-\mu_{C+1}$.
* For $L=2, C=1$ and $D=0$, the regret of FTPL($\eta$) can be lower bounded as:
\begin{align*}
R^{\textrm{FTPL}(\eta)}_{S}(T,0) \geq \frac{\eta e^{-\left ( \frac{1+\eta}{\eta} \right )^2}}{4 }.
\end{align*}
* The regret of FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R^{\textrm{FTPL}(\alpha \sqrt{t})}_{S}(T,D) \leq \left (1+ DC \right ) t_0 + \left (1+ \frac{D}{\Delta_{\min}} \right ) \left ( \frac{8}{\Delta_{\min}} + \frac{32 \alpha^2}{\Delta_{\min}} \right ),
\end{align*}
where $t_0=\max \left \{\frac{8}{\Delta_{\min}^2} \log \left ( {L^3}\right ) , \frac{32 \alpha^2}{\Delta_{\min}^2 }\log \left ( {L^3}\right ) \right \}$.
* The regret of W-FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R^{\textrm{W-FTPL}(\alpha \sqrt{t})}_{S}(T,D) &\leq t' + \frac{16}{\Delta_{\min }} + \frac{64 \alpha^2}{\Delta_{\min }}+ 2L^3D \biggl ( e^{- u (\log D)^{1+\beta} \Delta_{\min}^{2} / 8} \frac{8}{\Delta_{\min}^2} \\&+ e^{-u (\log D)^{1+\beta} \Delta_{\min}^{2} / 32 \alpha^{2}} \frac{32 \alpha^2}{\Delta_{\min}^2} \biggr ),
\end{align*}
where $t'=\max \left \{\frac{8}{\Delta_{\min}^2} \log \left ( \frac{L^3}{2}\right ) , \frac{32 \alpha^2}{\Delta_{\min}^2 }\log \left ( \frac{L^3}{2}\right ), u (\log D)^{1+\beta} \right \}$.
Part (a) of the above theorem has been proved in [8] and shows that the regret of LFU is $\mathcal{O}(1)$ when the file requests are stochastic. Thus, any policy that is order-optimal under stochastic file requests must have $\mathcal{O}(1)$ regret.
Part (b) gives a lower bound on the regret of the FTPL($\eta$) algorithm under stochastic file requests. Note that for all $\eta \geq 1$, we have
\begin{align*}
R^{\textrm{FTPL}(\eta)}_{S}(T)&\geq \frac{\eta e^{-\left ( \frac{1+\eta}{\eta} \right )^2}}{4} \geq \frac{\eta}{4 e^4}.
\end{align*}
This shows that the regret of the FTPL($\eta$) algorithm is $\Omega(\eta)$ when the file requests are stochastic. Setting $\eta=\alpha \sqrt{T}$ for a positive constant $\alpha$, for which [7] proved an $\mathcal{O}(\sqrt{T})$ upper bound, we conclude that the regret of FTPL($\alpha \sqrt{T}$) is $\Theta(\sqrt{T})$ under stochastic requests. Thus, while FTPL($\eta$) is order-optimal under adversarial requests (see Section <ref>), it cannot simultaneously be order-optimal under adversarial and stochastic file requests. The proof of this result can be found in Appendix <ref>.
Part (c) shows that FTPL($\alpha \sqrt{t}$), $\alpha >0$ is order-optimal when the file requests are stochastic. While FTPL($\eta$) with $\eta=\mathcal{O}(\sqrt{T})$ achieves $\Omega(\sqrt{T})$ regret, this result shows an $\mathcal{O}(1)$ upper bound on the regret of FTPL($\alpha \sqrt{t}$) including the switching cost, thus showing that this algorithm is order-optimal. Note that the upper bound grows linearly with the per-file switch cost $D$. The proof of this result can be found in Appendix <ref>.
Recall that the Wait then FTPL($\alpha \sqrt{t}$) (W-FTPL($\alpha \sqrt{t}$)) algorithm is a variant of the FTPL($\alpha \sqrt{t}$) algorithm. The algorithm remains idle till time $t'=u (\log D)^{1+\beta}, u>0$, and then normal FTPL($\alpha \sqrt{t}$) is followed. Part (d) proves an $\mathcal{O}(1)$ upper bound on the regret including the switching cost of this algorithm with respect to the horizon $T$ under stochastic file requests, thus showing that this algorithm is order-optimal. The main improvement over the FTPL($\alpha \sqrt{t}$) algorithm is the $\mathcal{O}( (\log D)^{1+\beta} )$ upper bound on the regret including the switching cost for a large enough value of $D$ under stochastic file requests, as compared to the upper bound $\mathcal{O}(D)$ for FTPL($\alpha \sqrt{t}$). The key idea behind remaining idle for an initial period that depends logarithmically on $D$ is to avoid the higher switch cost incurred at the beginning by the FTPL($\alpha \sqrt{t}$) algorithm (refer to Section <ref>). The proof of this result can be found in Appendix <ref>.
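To illustrate the gap between the constant and time-varying learning rates empirically, the following Monte Carlo sketch estimates the stochastic regret (with $D=0$) of FTPL with the constant rate $\eta=\sqrt{T}$ against FTPL($\sqrt{t}$); the popularity vector, horizon, and run count are illustrative choices, not values from the paper:

```python
import numpy as np

def regret_ftpl(T, mu, C, eta_fn, n_runs=20, seed=0):
    """Monte Carlo estimate of the stochastic regret R_S(T, D=0) of FTPL (sketch)."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu)
    L = len(mu)
    best = np.argsort(-mu)[:C]                       # static optimal cache
    regrets = []
    for _ in range(n_runs):
        counts = np.zeros(L)
        gamma = rng.standard_normal(L)               # one perturbation per run
        reqs = rng.choice(L, size=T, p=mu)
        hits_opt = hits_alg = 0
        for t, x in enumerate(reqs, start=1):
            cache = np.argsort(-(counts + eta_fn(t) * gamma))[:C]
            hits_alg += x in cache
            hits_opt += x in best
            counts[x] += 1
        regrets.append(hits_opt - hits_alg)
    return float(np.mean(regrets))

mu = [0.5, 0.25, 0.125, 0.125]                       # illustrative popularity, L = 4
T = 2000
r_const = regret_ftpl(T, mu, C=1, eta_fn=lambda t: np.sqrt(T))   # constant eta = sqrt(T)
r_sqrt_t = regret_ftpl(T, mu, C=1, eta_fn=lambda t: np.sqrt(t))  # eta_t = sqrt(t)
# Expect r_const to be noticeably larger: FTPL(sqrt(T)) pays Omega(eta) regret
# under stochastic requests, while FTPL(sqrt(t)) has O(1) regret.
```

Both estimates use the same random seed, so the comparison is over identical request streams and perturbations.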
§ RESTRICTED SWITCHING
In this section, we consider the setting where the cache is allowed to update its contents only at $s+1$ fixed time slots, where $s \in \mathbb{Z}, s \leq T$. The first such point is at time slot $1$, the second at time slot $r_1+1$, the third at time slot $r_1+r_2+1$, and so on till the ${s+1}^{\mathrm{th}}$ point, which is at time slot $\sum_{i=1}^{s}r_i+1$, where $r_i \in \mathbb{Z}, r_i \leq T, 1 \leq i \leq s$ and $\sum_{i=1}^{s}r_i = T$. Note that the cache is allowed to update its contents only $s$ times till the time horizon $T$. Refer to Figure <ref> for an illustration of this setting. As a special case, we also consider the homogeneous case where the cache is allowed to update only after every $r \in \mathbb{Z}$ requests, i.e., $s=\frac{T}{r}$. We study the regret performance of FTPL and also provide lower bounds on the regret incurred by any online scheme. In the homogeneous case, we also show that FTPL($\sqrt{rt}$) achieves order-optimal regret.
[Figure: A user issues file requests that are served by the cache, which fetches from the backend server. The request timeline $t = 1, 2, \ldots, r_1, r_1+1, \ldots, r_1+r_2, \ldots, T$ is shown, and the time slots where the cache is allowed to update its contents (slots $1$, $r_1+1$, $r_1+r_2+1$, and so on) are marked in yellow.]
§.§ Stochastic requests
The file requests are stochastic i.i.d. with the popularity distribution $\boldsymbol{\mu}$. When cache updates are restricted to $s+1$ fixed points defined by the inter-switching periods $\{r_i\}_{i=1}^{s}$ as outlined above,
* When $L=2, C=1$, for any online caching policy $\pi$, there exists a popularity distribution such that the popularities of the two files are greater than $1>a>0$ and the difference in the popularities is $\Delta$, such that
\begin{align*}
R^{\pi}_S(T) \geq \frac{r_1 \Delta}{2} + \sum_{i=2}^{s} r_i \frac{\Delta}{4} \exp \left(- t_i \frac{\Delta^2}{a^2} \right).
\end{align*}
* The regret of FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R_{S}^{FTPL(\alpha \sqrt{t})}(T) &\leq r_1+ 2\sum_{j=1}^{C} \sum_{k=C+1}^{L} \sum_{i=2}^{s} r_i \, \Delta_{j, k} \left ( e^{-t_i \Delta_{j, k}^{2} / 8} + e^{-t_i \Delta_{j,k}^2/32 \alpha^2} \right ).
\end{align*}
In part (a), we prove a fundamental lower bound on the regret of any policy $\pi$ under stochastic file requests when cache updates are restricted to $s+1$ fixed time slots. The proof of this result can be found in Appendix <ref>. In part (b), we prove an upper bound on the regret of the FTPL($\alpha \sqrt{t}$) policy when cache updates are restricted to $s+1$ fixed time slots. The proof of this result can be found in Appendix <ref>. We thus have that the FTPL($\alpha \sqrt{t}$) policy has order-optimal regret in this setting under stochastic file requests. Next, we consider the special case where all $r_i$ are equal to $r$, i.e., $s=T/r$.
The file requests are stochastic i.i.d. with the popularity distribution $\boldsymbol{\mu}$. When the cache is allowed to update only after every $r$ requests,
\begin{align*}
R_{S}^{\textrm{FTPL}(\alpha \sqrt{t}) }(T) \leq 1+t_0' + 2 \left ( \frac{8}{\Delta_{\min}} + \frac{32 \alpha^2}{\Delta_{\min}} \right ),
\end{align*}
where $t_0'=\max \left \{r,\frac{8}{\Delta_{\min}^2} \log \left ( {L^2}\right ) , \frac{32 \alpha^2}{\Delta_{\min}^2 }\log \left ( {L^2}\right ) \right \}$.
In Theorem <ref>, we prove an $\mathcal{O}(\max \{r,\log L\})$ upper bound on the regret of the FTPL($\alpha \sqrt{t}$) algorithm under stochastic file requests. While the order-optimality of this policy with respect to $r$ follows from Theorem <ref>, we also note that the bound proved here improves upon the worst-case $\mathcal{O}(L^2)$ dependency in the upper bound proved in part (b) of Theorem <ref> to $\mathcal{O}(\log L)$. The proof of this result can be found in Appendix <ref>.
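A sketch of FTPL($\alpha\sqrt{t}$) in the homogeneous restricted-switching setting, where the cache may recompute the perturbed leader only at slots $1, r+1, 2r+1, \ldots$; details such as the initial cache contents are our own illustrative choices:

```python
import numpy as np

def ftpl_restricted(requests, L, C, alpha, r, rng=None):
    """FTPL(alpha * sqrt(t)) under homogeneous restricted switching (sketch):
    the cache may recompute the perturbed leader only at slots 1, r+1, 2r+1, ...
    and keeps its contents fixed in between. Returns the number of cache hits."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(L)
    gamma = rng.standard_normal(L)
    cache = set(range(C))                        # arbitrary placeholder before slot 1
    hits = 0
    for t, x in enumerate(requests, start=1):
        if r == 1 or t % r == 1:                 # a switching point
            eta = alpha * np.sqrt(t)
            cache = set(np.argsort(-(counts + eta * gamma))[:C].tolist())
        hits += x in cache
        counts[x] += 1
    return hits
```

Requests arriving between switching points still update the counts, so the next switching decision uses all information seen so far.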
§.§ Adversarial requests
Files are requested by an oblivious adversary. When cache updates are restricted to $s+1$ fixed points defined by $\{r_i\}_{i=1}^{s}$ as outlined above,
* For any online caching policy $\pi$ and $L \geq 2C$,
\begin{align*}
R^{\pi}_{A}(T) \geq \frac{1}{2} \left(0.15 \sqrt{ C\sum_{i=1}^{s}r_i^2 } \left(1-\frac{(C-1)\left(\sum_{i=1}^{s}r_i^4 \right)}{2\left(\sum_{i=1}^{s}r_i^2 \right)^{2}}\right) -0.6 \, C \underset{1 \leq i \leq s}{\max} r_i \right).
\end{align*}
* The regret of FTPL($\alpha \sqrt{t}$) is upper bounded as:
\begin{align*}
R_{A}^{\textrm{FTPL}(\alpha \sqrt{t})}(T) &\leq \mathcal{O}(\alpha \sqrt{T})+ \sqrt{\frac{2}{\pi}} \sum_{i=1}^{s} \frac{r_i^2}{\alpha \sqrt{\sum_{j=0}^{i-1}r_j }}.
\end{align*}
The proofs of parts (a) and (b) of this theorem can be found in Appendix <ref> and Appendix <ref>, respectively. In part (a), we prove a lower bound on the regret of any policy $\pi$ under adversarial file requests when cache updates are restricted to $s+1$ fixed time slots. Note that a necessary condition for this bound to be meaningful is
\begin{align}
4 \underset{1 \leq i \leq s}{\max} r_i \leq \sqrt{\sum_{i=1}^{s} r_{i}^{2} }. \label{eqn: necessary_condition_restricted_ad_lb}
\end{align}
When this bound is meaningful, we have that $R_A^{\pi}(T) = \Omega \left (\sqrt{C \sum_{i=1}^{s} r_i^2 } \right )$ for any online caching policy $\pi$. When all the $r_i$'s are equal, this condition translates to $r \leq T/16$. This condition does not hold if any of the $r_i$'s is too large. For instance, when $s=3$ and $r_1=T/2, r_2=r_3=T/4$, this condition does not hold. When $\underset{1 \leq i \leq s}{\max} r_i$ and $\underset{1 \leq i \leq s}{\min} r_i$ are known, a sufficient condition for (<ref>) to hold is:
\begin{align*}
\frac{\underset{1 \leq i \leq s}{\max} r_i}{\underset{1 \leq i \leq s}{\min} r_i} \leq \frac{\sqrt{s}}{4}.
\end{align*}
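Condition (<ref>) is easy to check numerically; the horizon and switching periods below are the illustrative ones from the text:

```python
import numpy as np

def condition_holds(periods):
    """Check the necessary condition 4 * max_i r_i <= sqrt(sum_i r_i^2)."""
    r = np.asarray(periods, dtype=float)
    return bool(4 * r.max() <= np.sqrt((r ** 2).sum()))

T = 1600
# The example from the text: s = 3 with r = (T/2, T/4, T/4) violates the condition.
bad = condition_holds([T / 2, T / 4, T / 4])
# Equal periods with r <= T/16 satisfy it (here r = 50, so s = T/r = 32).
good = condition_holds([50] * 32)
```

In the equal-period case the condition $4r \leq \sqrt{s\,r^2} = \sqrt{rT}$ reduces to $r \leq T/16$, matching the text.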
We now discuss the special case where all $r_i$ are equal to $r$, i.e., $s=T/r$. It follows from part (b) of Theorem <ref> that FTPL($\sqrt{rt}$) achieves a regret of $\mathcal{O}(\sqrt{rT})$. The following theorem provides a matching lower bound for this setting that proves that FTPL($\sqrt{rt}$) is order-optimal, the proof of which can be found in Appendix <ref>.
Files are requested by an oblivious adversary. When the cache is allowed to update only after every $r$ requests, for any online caching policy $\pi$ and $L \geq 2C$,
\begin{align*}
R^{\pi}_{A}(T) \geq \begin{cases}
\sqrt{\frac{CrT}{2\pi }}-\Theta\left({\frac{r\sqrt{r}}{\sqrt{T}}}\right), & \text{when } r=o(T), \\
\Omega(T), & \text{when } r=\Omega(T).
\end{cases}
\end{align*}
§ NUMERICAL EXPERIMENTS
In this section, we present the results of numerical simulations for the various policies discussed in Section <ref>.
The plots compare, for various caching policies under stochastic file requests: (a) the regret including the switching cost for $D=100$ as a function of $T$; (b) the switching cost in each time slot for $D=30, u=5, \beta=0.6$ as a function of $T$; (c) the regret as a function of $T$ with $D=0$; and (d) the regret including the switching cost for $T=2000$ as a function of $D$. We use $L=10, C=4$ for each of the plots, and the popularity distribution is dyadic. Parts (a) and (c) show that the regret incurred by FTPL($\mathcal{O}(\sqrt{T})$) increases with $T$ while FTPL($\mathcal{O}(\sqrt{t})$), W-FTPL($\mathcal{O}(\sqrt{t})$) and LFU have essentially constant regret. Part (b) shows that FTPL($\mathcal{O}(\sqrt{t})$) makes more switches at the beginning, motivating the W-FTPL($\mathcal{O}(\sqrt{t})$) algorithm. Part (d) shows that while the regret including the switching cost of LFU and FTPL($\mathcal{O}(\sqrt{t})$) increases linearly in $D$, it increases sublinearly for W-FTPL($\mathcal{O}(\sqrt{t})$).
The plots compare, for various caching policies: (a) the regret including the switching cost for $D=100$ as a function of $T$ on a round-robin request sequence; (b) the regret without the switching cost, i.e., $D=0$, as a function of $T$ on the MovieLens dataset; (c) the regret as a function of the constant switching frequency $r$ under stochastic file requests from a dyadic distribution; and (d) the regret as a function of the switching frequency $r$ on the MovieLens dataset. We used $L=2, C=1$ for Part (a) and $L=10, C=4$ for Part (c). In Part (a), note that the regret including the switching cost scales linearly with $T$ for LFU, while W-FTPL($\mathcal{O}(\sqrt{t})$) and FTPL($\mathcal{O}(\sqrt{t})$) perform better. In Part (c), the regret scales linearly with $r$ for all three algorithms, as expected. The regret scales sublinearly with $T$ in Part (b) and linearly with $r$ in Part (d).
§.§ Setting 1 with stochastic file requests
Setup. We use $L=10, C=4$ throughout this section. The popularity distribution used is a dyadic distribution, i.e., $\mu(i)=\frac{1}{2^i}$ for $1 \leq i \leq L-1$ and $\mu(L)=\frac{1}{2^{L-1}}$.
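The dyadic distribution can be built and sampled as a direct transcription of the definition above:

```python
import numpy as np

def dyadic_popularity(L):
    """Dyadic popularity: mu(i) = 2^{-i} for i = 1, ..., L-1 and mu(L) = 2^{-(L-1)},
    so the probabilities sum to exactly 1."""
    return np.array([2.0 ** -i for i in range(1, L)] + [2.0 ** -(L - 1)])

mu = dyadic_popularity(10)
# Draw a stochastic i.i.d. request sequence from it:
requests = np.random.default_rng(0).choice(len(mu), size=1000, p=mu)
```

The final file repeats the probability $2^{-(L-1)}$ of the second-to-last file, which is what makes the masses telescope to one.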
Results. Figure <ref> shows that the regret of FTPL($\mathcal{O}(\sqrt{T})$) including the switching cost increases with $T$, while it is essentially constant for FTPL($\mathcal{O}(\sqrt{t})$), W-FTPL($\mathcal{O}(\sqrt{t})$) and LFU. One can also observe that W-FTPL($\mathcal{O}(\sqrt{t})$) performs the best of the four algorithms. In Figure <ref>, we plot only the regret as a function of $T$; the same trend is observed there as well. We omit W-FTPL($\mathcal{O}(\sqrt{t})$) from that plot, as its regret would be the same as that of FTPL($\mathcal{O}(\sqrt{t})$) in this case. Figure <ref> shows that FTPL($\mathcal{O}(\sqrt{t})$) indeed makes more switches at the beginning, which is the motivation for the W-FTPL($\mathcal{O}(\sqrt{t})$) policy, where no switches are made for an initial period. Figure <ref> shows that the regret of LFU and FTPL($\mathcal{O}(\sqrt{t})$) grows linearly with $D$, while that of W-FTPL($\mathcal{O}(\sqrt{t})$) grows sublinearly with $D$.
§.§ Setting 1 with adversarial file requests
Setup. We consider a synthetic adversarial request sequence in Figure <ref> and a real-world trace in Figure <ref>. The synthetic adversarial request sequence used is $1,2,1,2,\ldots$ for $L=2,C=1$, i.e., a round-robin request sequence. The real-world trace used is the first 20,000 rows of the MovieLens 1M dataset [2, 12], which contain timestamped ratings for 2569 movies; we model these as requests to a CDN server with library size 2569 and a cache size of 25.
Results. Figure <ref> shows that under the round-robin request sequence, the regret including the switching cost of LFU scales linearly with $T$, while that of W-FTPL($\mathcal{O}(\sqrt{t})$) and FTPL($\mathcal{O}(\sqrt{t})$) scales sublinearly with $T$. Figure <ref> shows that on the MovieLens dataset, the regret scales sublinearly with $T$ for all four algorithms.
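The trace construction described in the setup can be sketched as follows; the `UserID::MovieID::Rating::Timestamp` layout is the MovieLens 1M `ratings.dat` format, and the tiny inline sample here stands in for the real file:

```python
# Sketch: turn MovieLens-1M-style rating lines ("UserID::MovieID::Rating::Timestamp")
# into a request sequence; the inline sample stands in for the real ratings.dat file.
sample = """1::1193::5::978300760
1::661::3::978302109
2::1193::4::978298413
2::3408::4::978300275"""

rows = [line.split("::") for line in sample.splitlines()]
rows.sort(key=lambda row: int(row[3]))              # order requests by timestamp
movies = sorted({row[1] for row in rows})           # unique MovieIDs define the library
file_id = {m: i for i, m in enumerate(movies)}      # MovieID -> file index
requests = [file_id[row[1]] for row in rows]
library_size = len(movies)                          # cache size is chosen much smaller
```

Applied to the first 20,000 rows of the real dataset, this yields the library of 2569 movies described above.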
§.§ Setting 2
Setup. We consider stochastic file requests drawn from a dyadic distribution for $L=10, C=4$ in Figure <ref> and file requests from the MovieLens dataset in Figure <ref>. We also used $T=18000$ and chose $r$ to be factors of $T$. There were 2518 unique movies in the first 18000 rows of the MovieLens dataset and we set the cache size to be 25.
Results. Figure <ref> shows that when file requests are drawn from a dyadic popularity distribution, the regret of all three policies varies linearly with $r$. Figure <ref> shows the same linear scaling in $r$ on the MovieLens dataset.
§ CONCLUSION
We have shown that FTPL($\mathcal{O}( \sqrt{t})$) achieves order-optimal regret even after including the switching cost under stochastic requests. Combining with prior results on the performance of FTPL, it is simultaneously order-optimal for both stochastic and adversarial requests. We also showed that FTPL($\eta$) cannot achieve order-optimal regret simultaneously under both stochastic and adversarial requests, while variants of this policy can individually be order-optimal under each type of request. We proposed the W-FTPL($\mathcal{O}( \sqrt{t})$) policy as a way of preventing the high switching cost incurred by FTPL at the beginning under stochastic file requests. We also considered the restricted switching setting, where the cache is allowed to update its contents only at specific pre-determined time slots, and obtained a lower bound on the regret incurred by any policy. We proved an upper bound on the regret of FTPL($\mathcal{O}( \sqrt{t})$) and showed that it is order-optimal under stochastic file requests and in the homogeneous restricted switching case under adversarial file requests.
This work motivates several directions for future work: (1) bringing the upper and lower bounds closer in the general restricted switching setting would settle whether FTPL($\mathcal{O}( \sqrt{t})$) is order-optimal under adversarial file requests in that case as well; (2) for the restricted switching setting, our results consider only the regret, and incorporating the switching cost into these bounds would complete the analysis.
[2]
MovieLens 1M dataset. <https://grouplens.org/datasets/movielens/>,
[accessed 8-August-2022]
[3]
Aggarwal, C., Wolf, J.L., Yu, P.S.: Caching on the World Wide Web. IEEE
Transactions on Knowledge and Data Engineering 11(1), 94–107 (1999).
10.1109/69.755618
[4]
Albers, S.: Competitive online algorithms. BRICS (1996)
[5]
Alon, N., Spencer, J.H.: The Probabilistic Method, Appendix B. John Wiley and
Sons, Ltd (2008). 10.1002/9780470277331.app2
[6]
Aven, O.I., Coffman, E.G., Kogan, Y.A.: Stochastic Analysis of Computer
Storage. Kluwer Academic Publishers, USA (1987)
[7]
Bhattacharjee, R., Banerjee, S., Sinha, A.: Fundamental limits of online
network-caching. CoRR abs/2003.14085 (2020).
[8]
Bura, A., Rengarajan, D., Kalathil, D., Shakkottai, S., Chamberland, J.F.:
Learning to cache and caching to learn: Regret analysis of caching
algorithms. IEEE/ACM Transactions on Networking 30(1), 18–31
(2022). 10.48550/arXiv.2004.00472
[9]
Cesa-Bianchi, N., Lugosi, G.: Prediction, learning, and games. Cambridge
university press (2006). 10.1017/CBO9780511546921
[10]
Cohen, A., Hazan, T.: Following the perturbed leader for online structured
learning. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International
Conference on Machine Learning. Proceedings of Machine Learning Research,
vol. 37, pp. 1034–1042. PMLR, Lille, France (07–09 Jul 2015)
[11]
Hannan, J.: Approximation to bayes risk in repeated play. Contributions to the
Theory of Games 3(2), 97–139 (1957). 10.1515/9781400882151
[12]
Harper, F.M., Konstan, J.A.: The movielens datasets: History and context. ACM
Trans. Interact. Intell. Syst. 5(4) (dec 2015).
[13]
Hoeffding, W.: Probability inequalities for sums of bounded random variables.
Journal of the American Statistical Association 58(301), 13–30
(1963). 10.1080/01621459.1963.10500830
[14]
Kaas, R., Buhrman, J.M.: Mean, median and mode in binomial distributions.
Statistica Neerlandica 34(1), 13–18 (1980).
[15]
Lattimore, T., Szepesvári, C.: Bandit Algorithms. Cambridge University Press
(2020). 10.1017/9781108571401
[16]
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information
and computation 108(2), 212–261 (1994).
[17]
Mourtada, J., Gaïffas, S.: On the optimality of the hedge algorithm in the
stochastic regime. Journal of Machine Learning Research 20, 1–28
(2019). 10.48550/arXiv.1809.01382
[18]
Mukhopadhyay, S., Sinha, A.: Online caching with optimal switching regret. CoRR
abs/2101.07043 (2021). 10.1109/ISIT45174.2021.9517925
[19]
Paria, D., Sinha, A.: Leadcache: Regret-optimal caching in networks. In:
Thirty-Fifth Conference on Neural Information Processing Systems. vol. 34,
pp. 4435–4447 (2021). 10.48550/arXiv.2009.08228
[20]
Paschos, G.S., Destounis, A., Vigneri, L., Iosifidis, G.: Learning to cache
with no regrets. In: IEEE INFOCOM 2019 - IEEE Conference on Computer
Communications. p. 235–243. IEEE Press (2019).
[21]
Silberschatz, A., Galvin, P., Gagne, G.: Operating System Principles. John
Wiley & Sons (2006)
[22]
Starobinski, D., Tse, D.: Probabilistic methods for web caching. Perform. Eval.
46(2–3), 125–137 (oct 2001).
[23]
Zinkevich, M.: Online convex programming and generalized infinitesimal gradient
ascent. In: Proceedings of the 20th international conference on machine
learning (icml-03). pp. 928–936 (2003)
§ PROOF OF PART (D) OF THEOREM <REF>
In this section, we prove an upper bound on the regret of W-FTPL($\alpha \sqrt{t}$) under adversarial requests. The proof mostly follows that of Theorem 4.1 of [18] and is given here for completeness. We bound the regret till time $t'$ by $t'$ and bound the regret incurred from time $t'+1$ in a manner similar to [18]. Thus,
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T) &= \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1}\right\rangle - \sum_{t=1}^{T} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle\right] \\
&\leq t' + \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1} - \boldsymbol{X}_{t'+1} \right\rangle - \sum_{t=t'+1}^{T} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle\right]. \numberthis \label{eqn: wftpl_adversarial_regret}
\end{align*}
We define the potential function $\Phi_{t} : \mathbb{R}^{L} \rightarrow \mathbb{R}$ for all time instants $t$ in the following way:
\begin{align*}
\Phi_{t}(\boldsymbol{x})=\mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{x}+\eta_{t} \boldsymbol{\gamma}\right\rangle\right],
\end{align*}
where $\mathcal{Y}$ is the set of possible cache configurations, i.e., the set $\{y \in \{0,1\}^{L}: \|y\|_1 \leq C \}$. As shown in the proof of Proposition 4.1 in [18], we have
\begin{align*}
\mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle\right] =\Phi_{t}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{t}\right)-\frac{1}{2}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t} \right\rangle,
\end{align*}
where $\widetilde{\boldsymbol{X}}_{t}=\boldsymbol{X}_{t}+\theta_{t} \boldsymbol{x}_{t}$, for some $\theta_{t} \in[0,1]$. Summing this from time $t'+1$ to $T$ gives us:
\begin{align*}
&\sum_{t=t'+1}^{T} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle\right] \\
&=\sum_{t=t'+1}^{T}\left[\Phi_{t}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{\boldsymbol{t}}\right)\right]-\frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle \\
&=\Phi_{T}\left(\boldsymbol{X}_{T+1}\right)-\Phi_{t'+1}\left(\boldsymbol{X}_{t'+1}\right)+\sum_{t=t'+2}^{T}\left[\Phi_{t-1}\left(\boldsymbol{X}_{t}\right)-\Phi_{t}\left(\boldsymbol{X}_{t}\right)\right] \\
&- \frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle.
\end{align*}
Substituting this in (<ref>) gives us:
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T) &\leq t' + \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1} - X_{t'+1} \right\rangle - \Phi_{T}\left(\boldsymbol{X}_{T+1}\right) +\Phi_{t'+1}\left(\boldsymbol{X}_{t'+1}\right) \\ &+\sum_{t=t'+1}^{T-1}\left[\Phi_{t+1}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{t+1}\right)\right] + \frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle. \numberthis \label{eqn: wftpl_ad_regret_ub}
\end{align*}
Using Jensen's inequality, we have that
\begin{align*}
\Phi_{T}\left(\boldsymbol{X}_{T+1}\right ) &= \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X_{T+1}}+\eta_{T} \boldsymbol{\gamma}\right\rangle\right] \\
&\geq \max _{\boldsymbol{y} \in \mathcal{Y}} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}, \boldsymbol{X_{T+1}}+\eta_{T} \boldsymbol{\gamma}\right\rangle\right] \\
&= \max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle\boldsymbol{y}, \boldsymbol{X_{T+1}} \right\rangle.
\end{align*}
Substituting the above in (<ref>) gives us:
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T) &\leq t' + \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1} - \boldsymbol{X}_{t'+1} \right\rangle - \max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle\boldsymbol{y}, \boldsymbol{X_{T+1}} \right\rangle +\Phi_{t'+1}\left(\boldsymbol{X}_{t'+1}\right) \\ &-\sum_{t=t'+2}^{T}\left[\Phi_{t-1}\left(\boldsymbol{X}_{t}\right)-\Phi_{t}\left(\boldsymbol{X}_{t}\right)\right] + \frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle \\
&\leq t' + \Phi_{t'+1}\left(\boldsymbol{X}_{t'+1}\right) \\
&+\sum_{t=t'+1}^{T-1}\left[\Phi_{t+1}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{t+1}\right)\right]+ \frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle, \numberthis \label{eqn: wftpl_ad_regret_ub_maxvalue}
\end{align*}
as $\displaystyle \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1} - \boldsymbol{X}_{t'+1} \right\rangle \leq \max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle\boldsymbol{y}, \boldsymbol{X_{T+1}} \right\rangle$.
We also have:
\begin{align*}
\Phi_{t'+1}\left(\boldsymbol{X}_{t'+1}\right) &= \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{t'+1}+\eta_{t'+1} \boldsymbol{\gamma}\right\rangle\right] \\
&\leq \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{t'+1} \right\rangle\right] + \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle \boldsymbol{y}, \eta_{t'+1} \boldsymbol{\gamma}\right\rangle\right] \\
&\leq t' + \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle \boldsymbol{y}, \eta_{t'+1} \boldsymbol{\gamma}\right\rangle\right],
\end{align*}
as $\displaystyle\left\langle\boldsymbol{y}, \boldsymbol{X}_{t'+1} \right\rangle \leq t'$ for $\boldsymbol{y} \in \mathcal{Y}$. Substituting this back in (<ref>), we get
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T) &\leq 2t' + \eta_{t'+1} \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle \boldsymbol{y}, \boldsymbol{\gamma}\right\rangle\right] \\
&+\sum_{t=t'+1}^{T-1}\left[\Phi_{t+1}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{t+1}\right)\right] + \frac{1}{2} \sum_{t=t'+1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle.
\end{align*}
As shown in [18],
\begin{align*}
\Phi_{t+1}\left(\boldsymbol{X}_{t+1}\right)-\Phi_{t}\left(\boldsymbol{X}_{t+1}\right) \leq \left|\eta_{t+1}-\eta_{t}\right| \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\langle\boldsymbol{y}, \gamma\rangle\right].
\end{align*}
Also, [10] proves that:
\begin{align*}
\mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle \boldsymbol{y}, \boldsymbol{\gamma}\right\rangle\right ] \leq C \sqrt{2 \log \left ( \frac{L}{C}\right )}.
\end{align*}
It has also been proved in [7] that
\begin{align*}
\sum_{t=1}^{T}\left\langle\boldsymbol{x}_{t}, \nabla^{2} \Phi_{t}\left(\widetilde{\boldsymbol{X}}_{t}\right) \boldsymbol{x}_{t}\right\rangle \leq \sqrt{\frac{2}{\pi}} \sum_{t=1}^{T} \frac{1}{\eta_{t}}.
\end{align*}
Combining all these results, we get
\begin{align*}
R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T) &\leq 2t' + \eta_{t'+1}C \sqrt{2 \log \left ( \frac{L}{C}\right )}+ \eta_{T} C \sqrt{2 \ln (L e / C)} + \sqrt{\frac{2}{\pi}} \sum_{t=1}^{T} \frac{1}{\eta_{t}}.
\end{align*}
As W-FTPL$(\alpha \sqrt{t})$ does not incur any switch cost till $t'$ and then incurs the same switch cost as FTPL$(\alpha \sqrt{t})$, the total switch cost till time $T$ incurred by W-FTPL$(\alpha \sqrt{t})$ can be bounded from above by the switch cost of FTPL$(\alpha \sqrt{t})$ proved in Proposition 4.2 of [18], which has been reproduced below:
\begin{align*}
\sum_{t=2}^{T} \mathbb{E}\left[\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}\right] &\leq \frac{3 \sqrt{2}}{\alpha \sqrt{\pi}}(\sqrt{T}-1)
+(N-1) \frac{2+\sqrt{2 e \ln (2 N)}}{\sqrt{e}} \ln T \\
&+\frac{3(N-1)(2+\sqrt{2 e \ln (2 N)})}{\sqrt{2 \pi e} \alpha}\left(1-T^{-1 / 2}\right) .
\end{align*}
Thus, we get an $\mathcal{O}(\sqrt{T})$ upper bound for $R_{A}^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T,D)$.
§ PROOF OF PART (B) OF THEOREM <REF>
In this section, we prove a lower bound on the regret of FTPL with a constant learning rate under stochastic file requests. For any file $k$, let $c_k(t)$ denote the cumulative number of requests received by file $k$ up to time $t$, let $\alpha_k(t) = c_k(t)/t$ denote the empirical average number of requests received by file $k$, and let $\hat{\mu}_k(t)=\frac{c_k(t) + \eta_t \gamma(k)}{t}$ denote the corresponding perturbed average. Then,
\begin{align*}
\mathbb{E}[R(T)]&=\mathbb{E}\left[\sum_{t=1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\} \right ] \\
& \stackrel{(a)} {=} \mathbb{E}\left[\sum_{t=1}^{T}\left(\sum_{j \in \mathcal{C}} \mu_{j}-\sum_{k \in C(t)} \mu_{k}\right)\right] \\
& \geq \mathbb{E}\left[\sum_{t=1}^{T} \sum_{k \in C(t) \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right)\right] \\
&=\mathbb{E}\left[\sum_{t=1}^{T} \sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \mathbbm{1}\{k \in C(t)\}\right] \\
&=\sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \mathbb{E}\left[\sum_{t=1}^{T} \mathbbm{1}\{k \in C(t)\}\right] \\
&=\sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \sum_{t=1}^{T} \mathbb{P}(k \in C(t)) \\
&\geq (\mu_C-\mu_{C+1}) \sum_{t=1}^{T} \sum_{k \in \mathcal{L} \backslash \mathcal{C}} \mathbb{P}(k \in C(t)) \\
&\geq (\mu_C-\mu_{C+1}) \sum_{t=1}^{T} \mathbb{P} \left ( \bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right ), \numberthis\label{here}
\end{align*}
where $(a)$ follows from the tower property of conditional expectation (i.e., condition on $C(t)$ inside the outer expectation).
The event $\left \{\bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right \}$ is the event that file 2 belongs to $C(t)$ (here $L=2$ and $C=1$), which is equivalent to file 2 having a perturbed count greater than that of file 1 at time $t$. Thus,
\begin{align*}
\mathbb{P} \left ( \bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right ) &= \mathbb{P}(\hat{\mu}_{2}(t)>\hat{\mu}_1(t))\\
&= \mathbb{P} \biggl ( (\alpha_{C+1}(t)-\alpha_{C}(t))-(\mu_{C+1}-\mu_C) \\&+\frac{\eta}{t}(\gamma_{C+1}(t)-\gamma_{C}(t))>\Delta_{\min} \biggr ).
\end{align*}
Thus, we get
\begin{multline}\label{lb}
\mathbb{P} \left ( \bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right ) \geq \mathbb{P}\left(\alpha_{2}(t)-\alpha_{1}(t)-(\mu_{2}-\mu_1)>-\frac{2}{t} \right ) \\ \times \mathbb{P} \left (\frac{\eta}{t}\left (\gamma_{2}(t)-\gamma_{1}(t) \right )>\Delta_{\min} + \frac{2}{t} \right),
\end{multline}
as the perturbation is independent of the file requests seen so far. We also have,
\begin{align*}
\mathbb{P}\left (\frac{\eta}{t}\left(\gamma_{2}(t)-\gamma_{1}(t)\right)>\Delta_{\min} + \frac{2}{t} \right) &=\mathbb{P} \left ( \mathcal{N}\left (0,2\eta^2 \right ) > t\Delta_{\min} +2 \right ) \\
&\geq \frac{1}{4} e^{-(t\Delta_{\min}+2 )^2/4\eta^2}. \numberthis \label{eqn: ftpl_lb_noise}
\end{align*}
Since $L=2$, every request is for either file 1 or file 2, and hence
\begin{align*}
1-\alpha_{2}(t) &= \alpha_1(t). \numberthis \label{eqn: ftpl_lb_estimator}
\end{align*}
Using (<ref>) and (<ref>) in (<ref>),
\begin{align*}
\mathbb{P} \left ( \bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right ) &\geq \frac{1}{4}\mathbb{P}\left( \alpha_{2}(t) > \frac{1-\Delta_{\min}}{2} -\frac{1}{t} \right) e^{-(t\Delta_{\min}+2 )^2/4\eta^2}, \numberthis \label{eqn: ftpl_lb_binomial}
\end{align*}
where the last step follows from $1-\mu_{1} = \mu_{2}$, so that $\frac{1-\Delta_{\min}}{2} = \mu_2$. The paper [14] shows that any median $m$ of a Binomial$(n,p)$ distribution lies in the interval $[\lfloor np \rfloor, \lceil np \rceil]$. The first term in (<ref>) is the probability that a Binomial$(t,\mu_2)$ random variable exceeds $t\mu_2-1 \leq \lfloor t \mu_2 \rfloor$, which is at least the probability of exceeding a median and hence at least $1/2$. Thus, we get that
\begin{align*}
\mathbb{P} \left ( \bigcup_{k \in \mathcal{L}\setminus \mathcal{C} }k \in C(t) \right ) &\geq \frac{1}{8} e^{-(t\Delta_{\min}+2 )^2/4\eta^2}.
\end{align*}
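The median fact from [14] used above can be checked exactly for small cases; the snippet below (an illustrative sanity check, not part of the proof) verifies that $\mathbb{P}(X \geq \lfloor np \rfloor) \geq 1/2$ for $X \sim$ Binomial$(n,p)$.

```python
import math

def binom_tail(n, p, k):
    """Exact P(X >= k) for X ~ Binomial(n, p), computed from the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A median of Binomial(n, p) lies in [floor(np), ceil(np)], so the upper
# tail starting at floor(np) carries probability mass at least 1/2.
for n in (10, 37, 100):
    for p in (0.2, 0.35, 0.5):
        k = math.floor(n * p)
        assert binom_tail(n, p, k) >= 0.5
print("ok")
```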
Substituting this bound into (<ref>) and keeping only the terms with $t \leq \frac{2\eta}{\Delta_{\min}}$ gives:
\begin{align*}
\mathbb{E}[R(T)] &\geq \frac{\Delta_{\min}}{8} \sum_{t=1}^{\frac{2\eta}{\Delta_{\min}}} e^{-(t\Delta_{\min}+2 )^2/4\eta^2} \\
&\geq \frac{ \eta e^{-\left ( \frac{1+\eta}{\eta} \right )^2}}{4}.
\end{align*}
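As an illustrative numerical check of this last step (with arbitrary values $\eta=5$, $\Delta_{\min}=0.1$; not part of the proof), each summand with $t \leq 2\eta/\Delta_{\min}$ is at least $e^{-((1+\eta)/\eta)^2}$, which yields the stated $\frac{\eta}{4} e^{-((1+\eta)/\eta)^2}$ bound:

```python
import math

# For t <= 2*eta/delta_min we have t*delta_min + 2 <= 2*eta + 2, so each
# summand exp(-(t*delta_min + 2)^2 / (4 eta^2)) is at least
# exp(-((1 + eta)/eta)^2); summing 2*eta/delta_min such terms and
# multiplying by delta_min/8 yields the (eta/4) * exp(-((1+eta)/eta)^2) bound.
eta, delta_min = 5.0, 0.1  # illustrative parameter values
n_terms = int(2 * eta / delta_min)
lower = (delta_min / 8) * sum(
    math.exp(-((t * delta_min + 2) ** 2) / (4 * eta**2)) for t in range(1, n_terms + 1)
)
target = (eta / 4) * math.exp(-((1 + eta) / eta) ** 2)
print(f"sum = {lower:.4f}, target = {target:.4f}")
```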
§ PROOF OF PART (C) OF THEOREM <REF>
In this section, we prove an upper bound on the regret (including switching cost) of FTPL($\alpha \sqrt{t}$) under stochastic arrivals. We assume $L \geq 3$. To prevent the regret from scaling polynomially in $L$, we use the following idea from [17]: we bound the regret by $t_0$ and the switching cost by $DCt_0$ over the first $t_0$ rounds; in the remaining $T-t_0$ rounds, both quantities are bounded using standard concentration inequalities. Recall that $t_0=\max \left \{\frac{8}{\Delta_{\min}^2} \log \left ( {L^3}\right ) , \frac{32 \alpha^2}{\Delta_{\min}^2 }\log \left ( {L^3}\right ) \right \}$.
For any file $k$, at time $t$, let $\alpha_k(t)$ denote the empirical average number of requests received by file $k$. We also define, for any file $k$ after $t$ requests have arrived,
\begin{align*}
\hat{\mu}_k(t) &:=\frac{c_k(t) + \eta_{t+1} \gamma(k)}{t}, \numberthis \label{eqn: perturbed_average}
\end{align*}
where $c_k(t)$ denotes the cumulative number of requests received by file $k$ up to time $t$. We first consider the switching cost incurred by this policy. Since the learning rate varies with time, any file can be fetched into the cache. As the number of switches is twice the number of fetches, the following equation from [18] holds:
\begin{align*}
\mathbb{E}\left[\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}\right] =2 \sum_{f=1}^{L} \mathbb{P}(\text { The file index } f \text { is fetched at time } t+1).
\end{align*}
The probability of a file $f$ being fetched at time $t+1$ is evaluated by considering the following cases:
* $f \in \mathcal{C}$:
Here, if file $f$ is fetched at time $t+1$, then $f \notin C(t)$ which implies that at time $t$, $\exists f' \in \mathcal{L} \setminus \mathcal{C}$ such that $f' \in C(t)$. This event can be upper bounded using a union bound over files in $\mathcal{L} \setminus \mathcal{C}$ for $f'$.
* $f \notin \mathcal{C}$:
Here, if file $f$ is fetched at time $t+1$, then $f \in C(t+1)$ which implies that at time $t+1$, $\exists f' \in \mathcal{C}$ such that $f' \notin C(t+1)$. This event can be upper bounded using a union bound over files in $\mathcal{C}$ for $f'$.
\begin{multline*}
\sum_{f=1}^{L} \mathbb{P}(\text { The file index } f \text { is fetched at time } t+1) \leq \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}(\hat{\mu}_k(t-1)>\hat{\mu}_j(t-1)) \\+ \sum_{k=C+1}^{L} \sum_{j=1}^{C} \mathbb{P}(\hat{\mu}_k(t)>\hat{\mu}_j(t)).
\end{multline*}
Thus the expected number of switches till $T$ can be bounded as:
\begin{align*}
\frac{D}{2} \sum_{t=2}^T \mathbb{E}\left[\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}\right] &\leq DCt_0+\frac{D}{2} \sum_{t=t_0+1}^{T-1} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}(\hat{\mu}_k(t-1)>\hat{\mu}_j(t-1)) \\&+\frac{D}{2} \sum_{t=t_0+1}^{T-1} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}(\hat{\mu}_k(t)>\hat{\mu}_j(t)) \\
&\leq DCt_0+ D\sum_{t=t_0+1}^{T-1} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}(\hat{\mu}_k(t)>\hat{\mu}_j(t)) \\
&\leq DCt_0 + D \sum_{t=t_0+1}^{T-1} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}\left( \hat{\mu}_{j}(t)-\mu_{j} \leq-\Delta_{j, k} / 2\right )\\
&+\mathbb{P}\left(\hat{\mu}_{k}(t)-\mu_{k}>\Delta_{j, k} / 2\right) \\
&\leq DCt_0+D \sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \mathbb{P}\left( \alpha_{j}(t)-\mu_{j} \leq-\Delta_{j, k} / 4\right )\\
&+\mathbb{P}\left(\alpha_{k}(t)-\mu_{k}>\Delta_{j, k} / 4\right) \\&+\mathbb{P} \left( \eta_t \gamma(j) \leq-t\Delta_{j, k} / 4 \right )
+\mathbb{P} \left( \eta_t \gamma(k) >t\Delta_{j, k} / 4 \right ),
\end{align*}
where the last two steps follow from a union bounding argument. Now, using the Hoeffding inequality [13],
\begin{align*}
\mathbb{P}\left(\alpha_{j}(t)-\mu_{j} \leq-\Delta_{j, k} / 4\right) &\leq e^{-t \Delta_{j, k}^{2} / 8}, \\
\mathbb{P}\left(\alpha_{k}(t)-\mu_{k}>\Delta_{j, k} / 4\right) &\leq e^{-t \Delta_{j, k}^{2} / 8}.
\end{align*}
We also have:
\begin{align*}
\mathbb{P}\left(\eta_t \gamma(j) \leq-t \Delta_{j, k} / 4\right)&=\mathbb{P}\left(\eta_t \gamma(k)>t \Delta_{j, k} / 4\right) \\&\leq e^{-t^{2} \Delta_{j, k}^{2} / 32 \eta_t^{2}} = e^{-t\Delta_{j, k}^{2} / 32 \alpha^{2}}.
\end{align*}
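Both tail bounds can be sanity-checked numerically; the snippet below (illustrative parameter values only, not part of the proof) compares an exact binomial tail against the Hoeffding bound $e^{-t\Delta_{j,k}^2/8}$ and the standard normal tail against $e^{-z^2/2}$.

```python
import math

def binom_cdf(n, p, x):
    """Exact P(X <= x) for X ~ Binomial(n, p)."""
    k = min(math.floor(x), n)
    if k < 0:
        return 0.0
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

mu, delta = 0.6, 0.3  # illustrative popularity and gap
for t in (20, 50, 100):
    # Hoeffding: P(alpha(t) - mu <= -delta/4) <= exp(-t delta^2 / 8),
    # with t * alpha(t) ~ Binomial(t, mu).
    assert binom_cdf(t, mu, t * (mu - delta / 4)) <= math.exp(-t * delta**2 / 8)

# Gaussian tail: P(N(0,1) > z) <= exp(-z^2 / 2).
for z in (0.5, 1.0, 2.0, 3.0):
    assert 0.5 * math.erfc(z / math.sqrt(2)) <= math.exp(-z * z / 2)
print("ok")
```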
Thus, we get the following bound on the switching cost:
\begin{align}
\frac{D}{2}\sum_{t=2}^{T} \mathbb{E}\left[\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}\right] \leq DCt_0+ 2D \sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \left(e^{-t \Delta_{j, k}^{2} / 8} + e^{-t\Delta_{j, k}^{2} / 32 \alpha^{2}}\right). \label{eqn: stochastic_eta_t_switchcost}
\end{align}
The regret for the first $t_0$ rounds is bounded by $t_0$. For $t>t_0$, we use ideas from [8] and [17] to upper bound the regret.
\begin{align*}
R_S^{\textrm{FTPL}(\alpha \sqrt{t})}(T) &=\mathbb{E}\left[\sum_{t=1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\}\right] \\
&=\mathbb{E}\left[\sum_{t=1}^{T}\left(\sum_{j \in \mathcal{C}} \mu_{j}-\sum_{k \in C(t)} \mu_{k}\right)\right] \\
&\leq t_0+ \mathbb{E}\left[\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \mathbbm{1}\{j \notin C(t), k \in C(t)\}\right] \\
&\leq t_0+ \mathbb{E}\left[\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \mathbbm{1}\left(\hat{\mu}_{k}(t)>\hat{\mu}_{j}(t)\right)\right] \\
&\leq t_0+ \mathbb{E}\biggl[\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k}\biggl(\mathbbm{1}\left\{\hat{\mu}_{j}(t)-\mu_{j} \leq-\Delta_{j, k} / 2\right\}
\\&+\mathbbm{1}\left\{\hat{\mu}_{k}(t)-\mu_{k}>\Delta_{j, k} / 2\right\}\biggr)\biggr] \\
&\leq t_0+ \mathbb{E}\biggl[\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \biggl ( \mathbbm{1}\left\{\alpha_j(t)-\mu_{j} \leq-\Delta_{j, k} / 4\right\} \\&+ \mathbbm{1}\left\{\eta_t\gamma(j)\leq-t\Delta_{j, k} / 4\right\} \biggr ) \biggr] \\
&+\mathbb{E}\biggl[\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \biggl ( \mathbbm{1}\left\{\alpha_k(t)-\mu_{k}>\Delta_{j, k} / 4\right\}\ \\&+ \mathbbm{1} \left \{ \eta_t\gamma(k) > t\Delta_{j, k} / 4 \right \} \biggr) \biggr]. \numberthis \label{eqn: eta_t_stochastic_regret_bound}
\end{align*}
By taking expectation inside the summation, we obtain the following upper bound for (<ref>):
\begin{align*}
R_S^{\textrm{FTPL}(\alpha \sqrt{t})}(T) & \leq t_0+ \sum_{j=1}^{C} \sum_{k=C+1}^{L} \sum_{t=t_0+1}^{T} 2 \Delta_{j, k} \left ( e^{-t \Delta_{j, k}^{2} / 8} + e^{-t \Delta_{j,k}^2/32 \alpha^2} \right ). \numberthis \label{eqn: eta_t_regret}
\end{align*}
Thus, combining (<ref>) and (<ref>), we have:
\begin{align*}
R_S^{\textrm{FTPL}(\alpha \sqrt{t})}(T,D) &\leq (1+DC)t_0 \\&+ 2D\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \biggl(e^{-t \Delta_{j, k}^{2} / 8} \\&+e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}}\biggr) \\&+ 2\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k} \biggl(e^{-t \Delta_{j, k}^{2} / 8} \\&+e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}}\biggr). \numberthis \label{all}
\end{align*}
Each of these terms is bounded separately as follows:
\begin{align*}
\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} e^{-t \Delta_{j, k}^{2} / 8} &\leq \sum_{t=t_0+1}^{T} C(L-C) e^{-t \Delta_{\min}^2/8} \\& \leq C(L-C) e^{-t_0 \Delta_{\min}^2/8} \sum_{t=t_0+1}^{T} e^{-(t-t_0) \Delta_{\min}^2/8}\\
&\leq \sum_{t=1}^{T-t_0} e^{-t \Delta_{\min}^2/8} \leq \frac{8}{\Delta_{\min}^2 },
\end{align*}
where the second-to-last inequality uses $C(L-C) e^{-t_0 \Delta_{\min}^{2}/8} \leq L^{2} L^{-3} \leq 1$ (by the definition of $t_0$) and the last inequality uses $\sum_{t=1}^{\infty} e^{-tc} = \frac{1}{e^{c}-1} \leq \frac{1}{c}$.
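The last inequality above is the geometric-series estimate $\sum_{t=1}^{\infty} e^{-tc} = \frac{1}{e^{c}-1} \leq \frac{1}{c}$ with $c = \Delta_{\min}^2/8$. A quick numerical confirmation (illustrative values of $c$; not part of the proof):

```python
import math

def truncated_geom_sum(c, T=200000):
    """Partial sum of exp(-t*c) for t = 1..T; for the values of c used
    here the tail beyond T is negligible."""
    return sum(math.exp(-t * c) for t in range(1, T + 1))

for c in (0.01, 0.1, 0.5, 2.0):
    s = truncated_geom_sum(c)
    # Closed form of the full series is 1/(exp(c) - 1), which is <= 1/c.
    assert s <= 1.0 / (math.exp(c) - 1.0) + 1e-9
    assert s <= 1.0 / c
print("ok")
```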
\begin{align*}
\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}} &\leq \sum_{t=t_0+1}^{T} C(L-C) e^{-t \Delta_{\min}^{2} / 32 \alpha^{2}} \\
&\leq L^2 e^{-t_0 \Delta_{\min}^{2} / 32 \alpha^{2}}\sum_{t=t_0+1}^{T} e^{-(t-t_0)\Delta_{\min}^{2} / 32 \alpha^{2}}\\
&\leq \sum_{t=1}^{T-t_0} e^{-t \Delta_{\min}^{2} / 32 \alpha^{2}} \leq \frac{32 \alpha^2}{\Delta_{\min}^2 }.
\end{align*}
The function $f(u)=u e^{-u^{2} / 2}$ is decreasing on $[1,+\infty)$. Since $\Delta_{j,k} \geq \Delta_{\min}$ for $j \in \mathcal{C}, k \notin \mathcal{C}$, we get
\begin{align*}
\Delta_{j,k} e^{-t \Delta_{j,k}^{2} / 8} &=\frac{2}{\sqrt{t}} f\left(\frac{\sqrt{t} \Delta_{j,k}}{2}\right) \\&\leq \frac{2}{\sqrt{t}} f\left(\frac{\sqrt{t} \Delta_{\min}}{2}\right)=\Delta_{\min} e^{-t \Delta_{\min}^{2} / 8} .
\end{align*}
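The monotonicity claim and its consequence can be verified numerically (illustrative values; not part of the proof):

```python
import math

def f(u):
    return u * math.exp(-u * u / 2.0)

# f'(u) = (1 - u^2) exp(-u^2/2) <= 0 for u >= 1, so f decreases on [1, inf).
grid = [1.0 + 0.05 * i for i in range(200)]
assert all(f(a) >= f(b) for a, b in zip(grid, grid[1:]))

# Consequence: for t >= 4 / delta_min^2 and delta >= delta_min,
# delta * exp(-t delta^2 / 8) <= delta_min * exp(-t delta_min^2 / 8).
delta_min, t = 0.2, 150  # t > 4 / delta_min^2 = 100
for delta in (0.2, 0.3, 0.5, 0.9):
    assert delta * math.exp(-t * delta**2 / 8) <= delta_min * math.exp(-t * delta_min**2 / 8) + 1e-12
print("ok")
```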
Note that for $t \geq t_0+1$, $t> \frac{4}{\Delta_{\min}^2}$ holds as $t_0 \geq \frac{8}{\Delta_{\min}^2}$. Using this, we have that:
\begin{align*}
\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k} e^{-t \Delta_{j, k}^{2} / 8} &\leq \sum_{t=t_0+1}^{T} C(L-C) \Delta_{\min} e^{-t \Delta_{\min}^{2} / 8} \leq \frac{8}{\Delta_{\min} }.
\end{align*}
\begin{align*}
\sum_{t=t_0+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k} e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}} &\leq \frac{32 \alpha^2}{\Delta_{\min} }.
\end{align*}
Substituting these bounds in (<ref>), we have the following upper bound on the regret of FTPL($\alpha \sqrt{t}$):
\begin{align*}
R_S^{\textrm{FTPL}(\alpha \sqrt{t})}(T,D) \leq \left (1+ DC \right ) t_0 + 2\left (1+ \frac{D}{\Delta_{\min}} \right ) \left ( \frac{8}{\Delta_{\min}} + \frac{32 \alpha^2}{\Delta_{\min}} \right ) .
\end{align*}
§ PROOF OF PART (D) OF THEOREM <REF>
In this section, we prove an upper bound on the regret (including the switching cost) of the W-FTPL algorithm under stochastic file requests. Using (<ref>),
\begin{align*}
\frac{D}{2} \sum_{t=t'+1}^{T} \mathbb{E}\left[\left\|\boldsymbol{y}_{t+1}-\boldsymbol{y}_{t}\right\|_{1}\right] &\leq 2D\sum_{t=t'+1}^{T} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \left(e^{-t \Delta_{j, k}^{2} / 8}+e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}}\right) \\
& \leq 2D C(L-C) \sum_{t=t'+1}^{T} \left(e^{-t \Delta_{\min}^{2} / 8}+e^{-t \Delta_{\min}^{2} / 32 \alpha^{2}}\right) \\
& \leq 2D C(L-C) \sum_{t=t'+1}^{T} \biggl( e^{-t' \Delta_{\min}^{2} / 8} e^{-(t-t') \Delta_{\min}^{2} / 8} \\&+e^{-t' \Delta_{\min}^{2} / 32 \alpha^{2}} e^{-(t-t') \Delta_{\min}^{2} / 32 \alpha^{2}} \biggr) \\
&\leq 2DC(L-C) \left ( e^{-t' \frac{\Delta_{\min}^{2}}{8} } \frac{8}{\Delta_{\min}^2} + e^{-t' \frac{\Delta_{\min}^{2}}{32 \alpha^{2}}} \frac{32 \alpha^2}{\Delta_{\min}^2} \right ) \\
&\leq 2DC(L-C) \biggl ( e^{- u (\log D)^{1+\beta} \Delta_{\min}^{2} / 8} \frac{8}{\Delta_{\min}^2} \\&+ e^{-u (\log D)^{1+\beta} \Delta_{\min}^{2} / 32 \alpha^{2}} \frac{32 \alpha^2}{\Delta_{\min}^2} \biggr). \numberthis \label{eqn: wftpl_switchcost}
\end{align*}
Up to time $t'$, the regret is bounded by $t'$. Thus, the regret can be bounded in the following way using (<ref>):
\begin{align*}
\mathbb{E}\left[\sum_{t=1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\} \right ]
&\leq t' + 2\sum_{t=t'+1}^T \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k}\biggl(e^{-t \Delta_{j, k}^{2} / 8} \\&+e^{-t \Delta_{j, k}^{2} / 32 \alpha^{2}}\biggr) \\
&\leq t' + 2 \left ( \frac{8}{\Delta_{\min }} + \frac{32 \alpha^2}{\Delta_{\min }} \right ). \numberthis \label{eqn: wftpl_regret}
\end{align*}
Combining (<ref>) and (<ref>) gives the following upper bound on the regret of W-FTPL($\alpha \sqrt{t}$):
\begin{align*}
R_S^{\textrm{W-FTPL}(\alpha \sqrt{t})}(T,D) &\leq t' + \frac{16}{\Delta_{\min }} + \frac{64 \alpha^2}{\Delta_{\min } } + 2DC(L-C) \biggl ( e^{- u (\log D)^{1+\beta} \Delta_{\min}^{2} / 8} \frac{8}{\Delta_{\min}^2} \\&+ e^{-u (\log D)^{1+\beta} \Delta_{\min}^{2} / 32 \alpha^{2}} \frac{32 \alpha^2}{\Delta_{\min}^2} \biggr ).
\end{align*}
§ PROOF OF PART (A) OF THEOREM <REF>
In this section, we prove a lower bound on the regret of any policy $\pi$ under stochastic file requests when cache updates are restricted to $s+1$ fixed points. To provide a lower bound on the regret, we first prove a lower bound on the regret for the initial set of $r_1$ requests.
\begin{align*}
R_{S}^{\pi}(r_1)&\geq \mathbb{E}\left[\sum_{t=1}^{r_1} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\} \right ] \\
& \stackrel{(a)} {=} \mathbb{E}\left[\sum_{t=1}^{r_1}\left(\sum_{j \in \mathcal{C}} \mu_{j}-\sum_{k \in C(t)} \mu_{k}\right)\right] \\
& \geq \mathbb{E}\left[\sum_{t=1}^{r_1} \sum_{k \in C(t) \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right)\right] \\
&=\mathbb{E}\left[\sum_{t=1}^{r_1} \sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \mathbbm{1}\{k \in C(t)\}\right] \\
&=\sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \mathbb{E}\left[\sum_{t=1}^{r_1} \mathbbm{1}\{k \in C(t)\}\right] \\
&=\sum_{k \in \mathcal{L} \backslash \mathcal{C}}\left(\mu_{C}-\mu_{k}\right) \sum_{t=1}^{r_1} \mathbb{P}(k \in C(t)) \\
&\geq (\mu_C-\mu_{C+1}) \sum_{t=1}^{r_1} \sum_{k \in \mathcal{L} \backslash \mathcal{C}} \mathbb{P}(k \in C(t)),\numberthis\label{eqn: restricted_stochastic_lb}
\end{align*}
where $(a)$ follows from the tower property of conditional expectation (i.e., condition on $C(t)$ inside the outer expectation). Note that $C(t)$ remains constant during the first $r_1$ requests and consists of $C$ files chosen randomly from the library of $L$ files. Thus, for any $k \in \mathcal{L} \setminus \mathcal{C}$ and $1 \leq t \leq r_1$,
\begin{align*}
\mathbb{P}(k \in C(t)) = \frac{{L-1 \choose C-1}}{{L \choose C}} = \frac{C}{L}.
\end{align*}
Using this in (<ref>), we get that
\begin{align*}
R_{S}^{\pi}(r_1) &\geq r_1(\mu_C-\mu_{C+1}) (L-C) C/L.
\end{align*}
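The counting identity used for the randomly initialised cache is elementary; a quick check (illustrative sizes, not part of the proof):

```python
import math

# A uniformly random cache of C files out of a library of L files contains
# any fixed file with probability C(L-1, C-1) / C(L, C) = C / L.
for L in (5, 10, 50):
    for C in (1, 2, L // 2):
        prob = math.comb(L - 1, C - 1) / math.comb(L, C)
        assert math.isclose(prob, C / L)
print("ok")
```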
For proving a lower bound on the regret for larger values of $T$, we consider $L=2, C=1$. Given a popularity distribution $\mu=(\mu_1, \mu_2)$ such that $\mu_1 > \mu_2$ and a policy $\pi$,
\begin{align*}
R_{S, \mu}^{\pi} (T) &= \mathbb{E}_{\pi \mu} \left[ \sum_{t=1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\} \right] \\
&= \sum_{t=1}^{T} \left(\mu_1 - \mu_2 \right) \mathbb{E}_{\pi \mu} \left[\mathbbm{1} \left\{C(t) = \{2\} \right\} \right] \\
&= \left(\mu_1 - \mu_2 \right) \sum_{t=1}^{T} \mathbb{P}_{\pi \mu} \left ( C(t) = \{2\} \right ).
\end{align*}
Now, consider the popularity distribution $\mu'=(\mu_2, \mu_1)$. Following the same steps for this popularity distribution, we have that
\begin{align*}
R_{S, \mu'}^{\pi} (T) &= \left(\mu_1 - \mu_2 \right) \sum_{t=1}^{T} \mathbb{P}_{\pi \mu'} \left ( C(t) = \{1\} \right ).
\end{align*}
Defining $\Delta = \mu_1 - \mu_2$, we have that
\begin{multline*}
R_{S, \mu'}^{\pi} (T) + R_{S, \mu}^{\pi} (T) = \Delta \sum_{t=1}^{T} \mathbb{P}_{\pi \mu} \left (C(t) = \{2\} \right ) + \Delta \sum_{t=1}^{T} \mathbb{P}_{\pi \mu'} \left ( C(t) = \{1\} \right ).
\end{multline*}
Using the fact that the cache configuration remains constant except at the places where the cache is allowed to change its contents, we have that
\begin{multline}
R_{S, \mu'}^{\pi} (T) + R_{S, \mu}^{\pi} (T) \geq R_{S, \mu'}^{\pi} (r_1) + R_{S, \mu}^{\pi} (r_1) \\+ \Delta \sum_{i=2}^{s} r_i \Bigl ( \mathbb{P}_{\pi \mu} \left ( C(t_i+1) = \{2\} \right ) + \mathbb{P}_{\pi \mu'} \left ( C(t_i+1) = \{1\} \right ) \Bigr ), \label{eqn: restricted_lb_main}
\end{multline}
where $t_i$ has been defined in (<ref>).
Let $P$ and $Q$ be probability distributions on the same measurable space $(\Omega, \mathcal{F})$ and let $A \in \mathcal{F}$ be an arbitrary event. Then, by the Bretagnolle–Huber inequality,
\begin{align*}
P(A) + Q(A^{\mathsf{c}}) \geq \frac{1}{2} \exp \left(-D(P, Q)\right).
\end{align*}
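This inequality can be sanity-checked on Bernoulli distributions with the event $A = \{1\}$ (illustrative parameters; not part of the proof):

```python
import math

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# With P = Bern(p), Q = Bern(q) and the event A = {1}:
# P(A) + Q(A^c) = p + (1 - q) >= (1/2) exp(-D(P, Q)).
for p in (0.1, 0.3, 0.5, 0.7):
    for q in (0.2, 0.4, 0.6, 0.9):
        assert p + (1 - q) >= 0.5 * math.exp(-kl_bern(p, q))
print("ok")
```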
Consider a fixed $i$ such that $2 \leq i \leq s$. Setting $A$ to be the event $\{C(t_i+1) = \{2\} \}$ in Lemma <ref>, we have that
\begin{align*}
\mathbb{P}_{\pi \mu} \left ( C(t_i+1) = \{2\} \right )+ \mathbb{P}_{\pi \mu'} \left ( C(t_i+1) = \{1\} \right ) \geq \frac{1}{2} \exp \left(- D(\mathbb{P}_{\pi \mu}, \mathbb{P}_{\pi \mu'})\right).
\end{align*}
Let $\mathbb{C}$ be the set of valid caching configurations. If $\mathbb{P}_{\pi \mu}$ and $\mathbb{P}_{\pi \mu'}$ are probability measures on the set $\mathcal{G}_i := \{[2]^{t_i} \times \mathbb{C}^{t_i +1} \}$, then
\begin{align*}
D(\mathbb{P}_{\pi \mu}, \mathbb{P}_{\pi \mu'}) = t_i D(\mathbb{P}_{\mu}, \mathbb{P}_{\mu'}),
\end{align*}
where $\mathbb{P}_{\mu}, \mathbb{P}_{\mu'}$ are the corresponding marginal distributions.
Using the above lemma, we have that
\begin{align*}
\mathbb{P}_{\pi \mu} \left ( C(t_i+1) = \{2\} \right ) + \mathbb{P}_{\pi \mu'} \left (C(t_i+1) = \{1\} \right ) &\geq \frac{1}{2} \exp \left(- t_i D(\mathbb{P}_{\mu}, \mathbb{P}_{\mu'})\right) \\
&\geq \frac{1}{2} \exp \left(- t_i \frac{\Delta^2}{\mu_1 \mu_2 } \right),
\end{align*}
where the last step follows from an upper bound on the KL Divergence between two Bernoulli distributions. Substituting this back in (<ref>), we have that
\begin{align*}
R_{S, \mu'}^{\pi} (T) + R_{S, \mu}^{\pi} (T) \geq r_1 \Delta + \sum_{i=2}^{s} r_i \frac{\Delta}{2} \exp \left(- t_i \frac{\Delta^2}{\mu_1 \mu_2 } \right).
\end{align*}
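The Bernoulli KL bound used in the last step can be checked directly: with $\mu' = (\mu_2, \mu_1)$ and $\mu_1 + \mu_2 = 1$, one has $D(\mathbb{P}_{\mu}, \mathbb{P}_{\mu'}) = D(\mathrm{Bern}(\mu_1), \mathrm{Bern}(\mu_2)) \leq \Delta^2/(\mu_1 \mu_2)$, which is the chi-squared upper bound on the KL divergence. An illustrative check (not part of the proof):

```python
import math

def kl_bern(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# chi^2 upper bound on KL: D(Bern(mu1), Bern(mu2)) <= Delta^2 / (mu1 * mu2)
# when mu2 = 1 - mu1 and Delta = mu1 - mu2.
for mu1 in (0.55, 0.6, 0.75, 0.9):
    mu2 = 1.0 - mu1
    delta = mu1 - mu2
    assert kl_bern(mu1, mu2) <= delta**2 / (mu1 * mu2)
print("ok")
```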
Since the maximum of two quantities is at least their average,
\begin{align*}
\max \left\{ R_{S, \mu'}^{\pi} (T), R_{S, \mu}^{\pi} (T) \right\} \geq \frac{r_1 \Delta}{2} + \sum_{i=2}^{s} r_i \frac{\Delta}{4} \exp \left(- t_i \frac{\Delta^2}{\mu_1 \mu_2 } \right),
\end{align*}
which gives us the result.
Let $p_\mu(x) = \mathbb{P}_\mu (x(t) = x)$ be the density function associated with $\mathbb{P}_{\mu}$ and let $p_{\pi \mu}$ be the density function associated with $\mathbb{P}_{\pi \mu}$. We denote by $\pi( \mathcal{C}_1 | h)$ to be the probability of the caching policy choosing cache configuration $\mathcal{C}_1$ given the history $h$. Then,
\begin{multline*}
p_{\pi \mu} (C(1), x_1, C(2), x_2, \ldots, C(t_i+1)) \\= \pi(C(1))\, p_{\mu}(x_1)\, \pi(C(2) | C(1), x_1)\, p_{\mu}(x_2) \cdots p_{\mu}(x_{t_i})\, \pi(C(t_i+1) | C(1), x_1, \ldots, x_{t_i}).
\end{multline*}
\begin{align*}
D(\mathbb{P}_{\pi \mu}, \mathbb{P}_{\pi \mu'}) &= \sum_{g \in \mathcal{G}_i} p_{\pi \mu} (g) \log \left( \frac{p_{\pi \mu}(g)}{p_{\pi \mu'}(g)} \right) \\
&= \sum_{X^i = (x(1), \ldots, x(t_i))} p_{\mu }(X^i) \log \left( \frac{p_{\mu}(X^i)}{p_{\mu'}(X^i)} \right) \\
&= t_i D(\mathbb{P}_{\mu}, \mathbb{P}_{\mu'}),
\end{align*}
where the second-to-last step follows from the fact that the log-likelihood ratio does not depend on the caching policy and the last step follows from the independence of the file requests.
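The tensorisation identity proved above, $D(\mathbb{P}_{\pi \mu}, \mathbb{P}_{\pi \mu'}) = t_i D(\mathbb{P}_{\mu}, \mathbb{P}_{\mu'})$, can be checked exactly for i.i.d. product measures on a small alphabet (illustrative distributions; not part of the proof):

```python
import math
from itertools import product

def kl_discrete(p, q):
    """KL divergence between two finite distributions given as tuples."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# D(P^{(n)}, Q^{(n)}) = n * D(P, Q): the per-request log-likelihood ratios add.
p, q, n = (0.7, 0.3), (0.4, 0.6), 5
p_n = [math.prod(p[x] for x in seq) for seq in product(range(2), repeat=n)]
q_n = [math.prod(q[x] for x in seq) for seq in product(range(2), repeat=n)]
assert math.isclose(kl_discrete(p_n, q_n), n * kl_discrete(p, q))
print("ok")
```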
§ PROOF OF PART (B) OF THEOREM <REF>
In this section, we prove an upper bound on the regret of FTPL($\alpha \sqrt{t}$) when cache updates are restricted to $s+1$ fixed time slots under stochastic requests. For any file $k$, at time $t$, let $\alpha_k(t)$ denote the empirical average number of requests received by file $k$. We also define, for any file $k$ after $t$ requests have arrived and for $2 \leq i \leq s$,
\begin{align*}
\hat{\mu}_k(t) &:=\frac{c_k(t) + \eta_t \gamma(k)}{t}, \numberthis \label{eqn: perturbed_average} \\
t_i &:=\sum_{j=1}^{i-1} r_j, \numberthis \label{eqn: cum_sums} \\
\eta^i &:= \eta_{t_i},
\end{align*}
where $c_k(t)$ denotes the cumulative number of requests received by file $k$ up to time $t$. Thus,
\begin{align*}
R_{S}^{FTPL(\alpha \sqrt{t})}(T) &=\mathbb{E}\left[\sum_{t=1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\}\right] \\
&\stackrel{(a)}{\leq} r_1 + \mathbb{E}\left[\sum_{t=r_1+1}^{T} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\}\right] \\
&=r_1+\mathbb{E}\left[\sum_{i=2}^{s} \sum_{t=t_i+1}^{t_{i+1}} \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\}\right] \\
&= r_1+\sum_{i=2}^{s} \sum_{t=t_i+1}^{t_{i+1}}\mathbb{E}\left[ \mathbbm{1}\{x(t) \in \mathcal{C}\}-\mathbbm{1}\{x(t) \in C(t)\}\right] \\
& \stackrel{(b)}{=}r_1+ \sum_{i=2}^{s} r_i \, \mathbb{E}\left[ \mathbbm{1}\{x(t_i+1) \in \mathcal{C}\}-\mathbbm{1}\{x(t_i+1) \in C(t_i+1)\}\right], \\
\end{align*}
where (a) follows from bounding the regret for the first $r_1$ time slots by $r_1$ and (b) follows from the fact that $C(t)$ remains the same from $t_i+1$ to $t_{i+1}$ for $1 \leq i \leq s$.
\begin{align*}
R_{S}^{FTPL(\alpha \sqrt{t})}(T) & \stackrel{(c)}{\leq} r_1 + \sum_{i=2}^{s} r_i \, \mathbb{E}\left[\sum_{j \in \mathcal{C}} \mu_{j}-\sum_{k \in C\left (t_i +1\right )} \mu_{k}\right]\\
&\leq r_1 + \sum_{i=2}^{s} r_i \, \mathbb{E}\left[ \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \mathbbm{1}\left \{j \notin C\left (t_i +1 \right ), k \in C\left (t_i +1\right ) \right \}\right] \\
&\leq r_1 + \sum_{i=2}^{s} r_i \, \mathbb{E}\left[ \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \mathbbm{1}\left(\hat{\mu}_{k} \left (t_i \right )>\hat{\mu}_{j} \left (t_i \right )\right)\right]\\
& \stackrel{(d)}{\leq} r_1 + \sum_{i=2}^{s} r_i \, \mathbb{E}\Biggl[ \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k}\biggl(\mathbbm{1}\left\{\hat{\mu}_{j}\left (t_i \right )-\mu_{j} \leq-\Delta_{j, k} / 2\right\}
\\&+\mathbbm{1}\left\{\hat{\mu}_{k}\left (t_i \right )-\mu_{k}>\Delta_{j, k} / 2\right\}\biggr)\Biggr],
\end{align*}
where $(c)$ follows from the tower property of conditional expectation (i.e., condition on $C(t_i+1)$ inside the outer expectation) and $(d)$ follows because $\hat{\mu}_{k}\left (t_i \right )>\hat{\mu}_{j}\left (t_i \right )$ requires that at least one of $(\hat{\mu}_{k}\left (t_i \right )-\mu_{k})$ and $(\mu_{j} - \hat{\mu}_{j}\left (t_i \right ))$ exceeds $\Delta_{j,k}/2$. Using (<ref>),
\begin{align*}
R_{S}^{FTPL(\alpha \sqrt{t})}(T) &\leq r_1 + \sum_{i=2}^{s} r_i \, \mathbb{E}\Biggl[\sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \Biggl ( \mathbbm{1}\left\{\alpha_j (t_i) -\mu_{j} \leq-\Delta_{j, k} / 4\right\} \\&+ \mathbbm{1}\left\{\eta_{t_i }\gamma(j)\leq-t_i \Delta_{j, k} / 4\right\} \Biggr ) \Biggr] \\
&+ \sum_{i=2}^{s} r_i \, \mathbb{E}\Biggl[ \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j, k} \Biggl ( \mathbbm{1}\left\{\alpha_k(t_i) -\mu_{k}>\Delta_{j, k} / 4\right\}\ \\&+ \mathbbm{1} \left \{ \eta_{t_i }\gamma(k) > t_i \Delta_{j, k} / 4 \right \} \Biggr) \Biggr], \numberthis \label{eqn: restricted_stoch_ftpl_regret}
\end{align*}
which follows from a union bounding argument similar to (d). Using Hoeffding's inequality ([13]), we obtain
\begin{align*}
\mathbb{P}\left(\alpha_{j}(t)-\mu_{j} \leq-\Delta_{j, k} / 4\right) &\leq e^{-t \Delta_{j, k}^{2} / 8}, \\
\mathbb{P}\left(\alpha_{k}(t)-\mu_{k}>\Delta_{j, k} / 4\right) &\leq e^{-t \Delta_{j, k}^{2} / 8}.
\end{align*}
\begin{align*}
\mathbb{P}\left(\frac{\eta_t}{t}\gamma(j) \leq -\Delta_{j, k} / 4\right) = \mathbb{P}\left(\frac{\eta_t}{t}\gamma(k) > \Delta_{j, k} / 4\right) \leq e^{-t^2 \Delta_{j,k}^2/32\eta_t^2}.
\end{align*}
By taking the expectation inside the summation in (<ref>), we obtain the following upper bound for the regret:
\begin{align*}
R_{S}^{FTPL(\alpha \sqrt{t})}(T) &\leq r_1+ 2\sum_{j=1}^{C} \sum_{k=C+1}^{L} \sum_{i=2}^{s} r_i \, \Delta_{j, k} \left ( e^{-t_i \Delta_{j, k}^{2} / 8} + e^{-t_i \Delta_{j,k}^2/32 \alpha^2} \right ) \\
& = r_1+ 2\sum_{j=1}^{C} \sum_{k=C+1}^{L} \sum_{i=2}^{s} r_i \, \Delta_{j, k} \left ( e^{-\left ( \sum_{l=1}^{i-1}r_l \right ) \Delta_{j, k}^{2} / 8} + e^{-\left ( \sum_{l=1}^{i-1}r_l \right ) \Delta_{j,k}^2/32 \alpha^2} \right ) \numberthis \label{eqn: restricted_stochastic_ub_r_i_general}\\
& \leq r_1+ 2 \underset{2 \leq a \leq s}{\max} r_a \sum_{j=1}^{C} \sum_{k=C+1}^{L} \sum_{i=2}^{s} \, \Delta_{j, k} \left ( e^{-\left ( (i-1) \underset{1 \leq a \leq s}{\min} r_a \right ) \Delta_{j, k}^{2} / 8} + e^{-(i-1) \underset{1 \leq a \leq s}{\min} r_a \Delta_{j,k}^2/32 \alpha^2} \right ) \\
& \leq r_1 + 2 \frac{\underset{2 \leq a \leq s}{\max} r_a}{\underset{1 \leq a \leq s}{\min} r_a}\sum_{j=1}^{C} \sum_{k=C+1}^{L} \left( \frac{8}{\Delta_{j,k}}+ \frac{32 \alpha^2}{\Delta_{j,k}} \right) \\
& \leq r_1 + 2C(L-C) \frac{\underset{2 \leq a \leq s}{\max} r_a}{\underset{1 \leq a \leq s}{\min} r_a} \left ( \frac{8}{\Delta_{\min}}+ \frac{32 \alpha^2}{\Delta_{\min}} \right ).
\end{align*}
§ PROOF OF THEOREM <REF>
In this section, we prove an upper bound on the regret of FTPL$(\alpha \sqrt{t})$ when cache updates are restricted to periodic time slots. Recall that $t_0'=\max \left \{r,\frac{8}{\Delta_{\min}^2} \log \left ( {L^2}\right ) , \frac{32 \alpha^2}{\Delta_{\min}^2 }\log \left ( {L^2}\right ) \right \}$ and we assume that $L \geq 3$. We bound the regret for the first $t_0'$ rounds by $t_0'$. This technique of bounding the regret of an initial period by its worst-case regret and then using normal methods to bound the regret for $t>t_0'$ is based on [17].
Thus, we get the following expression for the regret:
\begin{align*}
R_{S}^{\textrm{FTPL}(\alpha \sqrt{t}) } &\leq \lceil t_0' \rceil +r \mathbb{E}\left[\sum_{t= \left \lceil \frac{t_0'}{r} \right \rceil +1}^{T/r} \mathbbm{1}\{x(rt) \in \mathcal{C}\}-\mathbbm{1}\{x(rt) \in C(rt)\}\right] \\
&\leq 1+t_0' + 2r\sum_{t=\left \lceil \frac{t_0'}{r} \right \rceil+1}^{T/r} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k} \left(e^{-rt \Delta_{j, k}^{2} / 8}+e^{-rt \Delta_{j, k}^{2} / 32 \alpha^{2}}\right),
\end{align*}
which follows from (<ref>). The function $f(u)=u e^{-u^{2} / 2}$ is decreasing on $[1,+\infty)$. Since $\Delta_{j,k} \geq \Delta_{\min}$ for $j \in \mathcal{C}, k \notin \mathcal{C}$, we get
\begin{align*}
\Delta_{j,k} e^{-rt \Delta_{j,k}^{2} / 8} &=\frac{2}{\sqrt{rt}} f\left(\frac{\sqrt{rt} \Delta_{j,k}}{2}\right) \\&\leq \frac{2}{\sqrt{rt}} f\left(\frac{\sqrt{rt} \Delta_{\min}}{2}\right)=\Delta_{\min} e^{-rt \Delta_{\min}^{2} / 8} .
\end{align*}
Note that for $t \geq \left \lceil \frac{t_0'}{r} \right \rceil + 1$, $t> \frac{4}{r\Delta_{\min}^2}$ holds as $t_0' \geq \frac{8}{\Delta_{\min}^2}$. Thus,
\begin{multline*}
2r\sum_{t=\left \lceil \frac{t_0'}{r} \right \rceil+1}^{T/r} \sum_{j=1}^{C} \sum_{k=C+1}^{L} \Delta_{j,k} \left(e^{-rt \Delta_{j, k}^{2} / 8}+e^{-rt \Delta_{j, k}^{2} / 32 \alpha^{2}}\right) \\ \leq 2rL^2\Delta_{\min} \sum_{t=\left \lceil \frac{t_0'}{r} \right \rceil+1}^{T/r} \left( e^{-rt \Delta_{\min}^{2} / 8}+e^{-rt \Delta_{\min}^{2} / 32 \alpha^{2}} \right).
\end{multline*}
\begin{align*}
L^2\sum_{t=\left \lceil \frac{t_0'}{r} \right \rceil+1}^{T/r} \left( e^{-rt \Delta_{\min}^{2} / 8}+e^{-rt \Delta_{\min}^{2} / 32 \alpha^{2}} \right)
&\leq L^2 \sum_{t=\left \lceil \frac{t_0'}{r} \right \rceil+1}^{T/r} \biggl \{ e^{-r \left (\left \lceil \frac{t_0'}{r} \right \rceil+t-\left \lceil \frac{t_0'}{r} \right \rceil \right) \Delta_{\min}^2/8} \\ &+ e^{-r \left (\left \lceil \frac{t_0'}{r} \right \rceil+ t-\left \lceil \frac{t_0'}{r} \right \rceil \right )\Delta_{\min}^{2} / 32 \alpha^{2}} \biggr \}\\
&\leq \sum_{t=1}^{T/r-\left \lceil \frac{t_0'}{r} \right \rceil} \left( e^{- \frac{rt \Delta_{\min}^2}{8} } + e^{- \frac{rt \Delta_{\min}^{2} }{ 32 \alpha^{2}} } \right) \\&\leq \frac{1}{r} \left ( \frac{8}{\Delta_{\min}^2 } + \frac{32 \alpha^2}{\Delta_{\min}^2 } \right ).
\end{align*}
This gives the following upper bound on the regret:
\begin{align*}
R_{S}^{\textrm{FTPL}(\alpha \sqrt{t}) } \leq 1+t_0' + 2 \left ( \frac{8}{\Delta_{\min}} + \frac{32 \alpha^2}{\Delta_{\min}} \right ).
\end{align*}
§ PROOF OF PART (A) OF THEOREM <REF>
In this section, we prove a lower bound on the regret of any policy $\pi$ when cache updates are restricted under adversarial file requests. The proof uses the probabilistic method [7], pioneered by Erdős [5]. Consider a request sequence $\{x_t\}_{t=1}^{T}$ with a joint probability distribution defined on it. Then, we have the following lower bound on the regret incurred by any policy $\pi$:
\begin{equation}
R_{A}^{\pi}(T) \geq \mathbb{E}_{\{x_t\}_{t=1}^{T}} R_{A}^{\pi}(\{x_t\}_{t=1}^{T},T). \label{eqn: erdos_lb}
\end{equation}
Thus, the proof proceeds by defining a request sequence first and then using (<ref>) to lower bound the regret. Throughout this proof, if it is not explicitly mentioned with respect to what the expectation is being taken, it can be assumed that the expectation is being taken with respect to $\{x_t\}_{t=1}^{T}$.
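The inequality (<ref>) — the worst-case regret of a fixed policy is at least its average regret under any request distribution — can be checked directly on a toy instance. The sketch below uses $L=2$ files, $C=1$, and a hypothetical "cache the last requested file" policy of our own choosing, not a policy from this paper:

```python
from itertools import product

def regret(seq):
    # Hypothetical policy: cache the most recently requested file (L=2, C=1).
    # The cache for slot t is fixed before request x_t arrives.
    cache, hits = 0, 0
    for x in seq:
        hits += (x == cache)
        cache = x
    opt = max(seq.count(0), seq.count(1))  # best static file in hindsight
    return opt - hits

# Enumerate every request sequence of length T=8 over files {0, 1}.
seqs = list(product([0, 1], repeat=8))
avg = sum(regret(s) for s in seqs) / len(seqs)  # average regret under uniform i.i.d. requests
worst = max(regret(s) for s in seqs)            # adversarial (worst-case) regret
assert worst >= avg
```

Exhaustive enumeration plays the role of the expectation on the right-hand side of (<ref>); the worst sequence is always at least as bad as the average one.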
§.§ Lower Bound when $\mathbf{C=1}$
We consider an adversarial request sequence in which requests are drawn uniformly from the first $2$ files, with the same file requested $r_i$ times from $t_i=\sum_{j=0}^{i-1}r_j$ to $t_i+r_i-1$, where $r_0=1$ and $1 \leq i \leq s$. To be precise, for each $1 \leq i \leq s$, a file is drawn uniformly at random from the first two files at $t=t_i=\sum_{j=0}^{i-1}r_j$ and is repeatedly requested until $t=t_i+r_i-1$, i.e., a total of $r_i$ times. Throughout this proof, “phase $i$” refers to the time period from $t=t_i$ to $t=t_i+r_i-1$ (both endpoints included). Let $W_i$ be a Bernoulli random variable indicating whether file 1 was requested in the $i^{\textrm{th}}$ phase. Note that $\{W_i\}_{i=1}^{s}$ are i.i.d. Thus, the reward obtained by the optimal offline policy corresponds to:
\begin{align*}
\underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \boldsymbol{X}_{T+1} \right \rangle &= \underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \sum_{t=1}^{T} \boldsymbol{x}_{t} \right \rangle \\
&= \underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \sum_{i=1}^{s} \sum_{t=t_i}^{t_i+r_i-1} \boldsymbol{x}_{t} \right \rangle \\
&= \underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \sum_{i=1}^{s} r_i \, \boldsymbol{x}_{t_i} \right \rangle,
\end{align*}
as the specific request sequence chosen here is constant in each phase. As $C=1$, $\boldsymbol{y}$ is a one-hot encoded vector and thus, we have that
\begin{align*}
\underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \boldsymbol{X}_{T+1} \right \rangle &= \max \left \{ \sum_{i=1}^{s} r_i W_i, T-\sum_{i=1}^{s} r_i W_i \right \} = \frac{T}{2} + \left |\frac{T}{2} - \sum_{i=1}^{s} r_i W_i\right |.
\end{align*}
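The final equality above is the elementary identity $\max\{a,\,T-a\} = \tfrac{T}{2} + \left|\tfrac{T}{2}-a\right|$ with $a=\sum_i r_i W_i$; a quick numerical sanity check (the helper names are ours, purely illustrative):

```python
import random

def offline_reward(T, a):
    # Reward of the best of the two files: a hits on file 1, T - a on file 2.
    return max(a, T - a)

def closed_form(T, a):
    # Closed form used in the proof: T/2 + |T/2 - a|.
    return T / 2 + abs(T / 2 - a)

random.seed(0)
for _ in range(1000):
    T = random.randint(1, 100)
    a = random.randint(0, T)
    assert offline_reward(T, a) == closed_form(T, a)
```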
The reward of any algorithm can be upper bounded in the following way:
\begin{align*}
\sum_{t=1}^{T} \left \langle \boldsymbol{y}_t, \boldsymbol{x}_{t} \right \rangle &= \sum_{i=1}^{s} \sum_{t=t_i}^{t_i+r_i-1} \left \langle \boldsymbol{y}_t, \boldsymbol{x}_{t} \right \rangle \\
&= \sum_{i=1}^{s} r_i \left \langle \boldsymbol{y}_{t_i}, \boldsymbol{x}_{t_i} \right \rangle,
\end{align*}
as both the request sequence chosen here and the cache configuration are constant in each phase. Thus, the expected reward of any algorithm can be upper bounded in the following way, where the expectation is taken with respect to the request sequence only:
\begin{align*}
\mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \sum_{t=1}^{T} \left \langle \boldsymbol{y}_t, \boldsymbol{x}_{t} \right \rangle \right ] &= \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \sum_{i=1}^{s} r_i \left \langle \boldsymbol{y}_{t_i}, \boldsymbol{x}_{t_i} \right \rangle \right ]\\
&= \sum_{i=1}^{s} r_i \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [\left \langle \boldsymbol{y}_{t_i}, \boldsymbol{x}_{t_i} \right \rangle \right ].
\end{align*}
In each time slot, the cache update happens before the file request arrives and hence $\boldsymbol{y}_t$ is independent of $\boldsymbol{x}_t$, $1 \leq t \leq T$. Thus, we have
\begin{align*}
\mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \sum_{t=1}^{T} \left \langle \boldsymbol{y}_t, \boldsymbol{x}_{t} \right \rangle \right ] &= \sum_{i=1}^{s} r_i \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [\left \langle \boldsymbol{y}_{t_i}, \boldsymbol{x}_{t_i} \right \rangle \right ] \\
&= \sum_{i=1}^{s} r_i \left \langle \boldsymbol{y}_{t_i}, \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \boldsymbol{x}_{t_i} \right ] \right \rangle \\
&= \frac{1}{2C} \sum_{i=1}^{s} r_i \left (\boldsymbol{y}_{t_i}(1)+\boldsymbol{y}_{t_i}(2) \right ) \\
&\leq \frac{1}{2} \sum_{i=1}^{s} r_i = \frac{T}{2}. \numberthis \label{eqn: algo_reward_ub}
\end{align*}
Combining the two bounds above, we get the following lower bound on the regret:
\begin{align*}
R_A^{\pi}(T) \geq \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \left |\frac{T}{2} - \sum_{i=1}^{s} r_i W_i\right | \right ].
\end{align*}
Now, we lower bound $\mathbb{E}[|M|]$, where $M=\frac{T}{2} - \sum_{i=1}^{s} r_i W_i= \sum_{i=1}^{s} r_i\left ( W_i - \frac{1}{2}\right )$, as $\sum_{i=1}^{s}r_i=T$. We denote $m_i=r_i\left ( W_i - \frac{1}{2}\right )$. Thus, $M=\sum_{i=1}^{s} m_i$, where $\mathbb{E}[m_i]=0$ and $\sigma_{i}^2=\mathbb{E}[m_i^2]=\frac{r_i^2}{4}$. Note that $\sigma_M=\sqrt{\sum_{i=1}^{s} \sigma_i^2}=\frac{1}{2}\sqrt{\sum_{i=1}^{s}r_i^2}$. Now, using the Markov inequality,
\begin{align*}
\mathbb{E}[|M|] \geq \sigma_M \mathbb{P} (|M| \geq \sigma_M) &\geq \sigma_M \mathbb{P} \left(\frac{M}{\sigma_M} \geq 1\right) \\
&= \sigma_M \mathbb{P} \left(\frac{\sum_{i=1}^{s}m_i}{\sigma_M} \geq 1\right).
\end{align*}
Using the Berry-Esseen theorem, we have that
\begin{align*}
\left|\operatorname{Pr}\left(\frac{\sum_{i=1}^{s}m_i}{\sigma_M} \leq 1 \right)-\Phi(1)\right| \leq \frac{C_0}{\sigma_M} \max _{1 \leq i \leq s} \frac{\rho_{i}}{\sigma_{i}^{2}} ,
\end{align*}
where $\rho_i= \mathbb{E}[|m_i^3|]=\frac{r_i^3}{8}$, $C_0$ is a constant and $\Phi(\cdot)$ is the CDF of the standard Gaussian random variable. Thus, we get
\begin{align*}
\operatorname{Pr}\left(\frac{\sum_{i=1}^{s}m_i}{\sigma_M} \geq 1 \right) &\geq 1 - \Phi(1) - C_0 \frac{\underset{1 \leq i \leq s}{\max} r_i }{2\sigma_M} \\
&\geq 0.15 - C_0 \frac{\underset{1 \leq i \leq s}{\max} r_i }{\sqrt{\sum_{i=1}^{s}r_i^2}}.
\end{align*}
We thus have:
\begin{align*}
R_A^{\pi}(T) &\geq \mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \max \left \{ \sum_{i=1}^{s} r_i W_i, T-\sum_{i=1}^{s} r_i W_i \right \} \right ] - \frac{T}{2}\\
&\geq \frac{1}{2}\sqrt{\sum_{i=1}^{s}r_i^2} \left ( 0.15 - C_0 \frac{\underset{1 \leq i \leq s}{\max} r_i }{\sqrt{\sum_{i=1}^{s}r_i^2}} \right ).
\end{align*}
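The deviation term $\mathbb{E}|M|$ driving this bound can be estimated by simulation; the sketch below (the phase lengths are an illustrative choice of ours, not from the paper) confirms that the Monte Carlo estimate comfortably exceeds the guaranteed $0.15\,\sigma_M$:

```python
import math
import random

def expected_abs_M(r_list, trials=20000, seed=1):
    # Monte Carlo estimate of E|M| for M = sum_i r_i (W_i - 1/2), W_i ~ Bern(1/2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        M = sum(r * (rng.randint(0, 1) - 0.5) for r in r_list)
        total += abs(M)
    return total / trials

r_list = [10] * 100  # s = 100 equal-length phases (illustrative)
sigma_M = 0.5 * math.sqrt(sum(r * r for r in r_list))
est = expected_abs_M(r_list)
# For i.i.d. symmetric terms, E|M| is about sigma_M * sqrt(2/pi) ~ 0.8 sigma_M,
# well above the 0.15 * sigma_M guaranteed by the Berry-Esseen argument.
assert 0.15 * sigma_M <= est <= sigma_M
```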
§.§ Lower bound for general $L,C$
In this section, we extend the result of the previous section to general $L$ and $C$. We consider an analogous adversarial request sequence in which requests are drawn uniformly from the first $2C$ files, with the same file requested $r_i$ times from $t_i=\sum_{j=0}^{i-1}r_j$ to $t_i+r_i-1$, where $r_0=1$ and $1 \leq i \leq s$. To be precise, for each $1 \leq i \leq s$, a file is drawn uniformly at random from the first $2C$ files at $t=\sum_{j=0}^{i-1}r_j$ and is repeatedly requested until $t=t_i+r_i-1$, i.e., a total of $r_i$ times. We denote by $W_i$ the file requested in the $i^{\text{th}}$ phase.
We use the balls-into-bins technique from the proof of Lemma 1 of [7]. A bin is associated with each file from $1,\ldots,2C$ where a request for that file is equivalent to a ball being thrown into that bin. The bins are numbered as $1,2, \ldots, 2 C$ and every two consecutive bins $\{(2 i-1,2 i)\}, 1 \leq i \leq C$ are combined to form $C$ Super bins. Denote by $Z_{i}, i=1,2, \ldots, C$ the number of balls in the $i^{\text {th }}$ super bin. Conditioned on $Z_{i}$, the number of balls in the bins $2 i-1$ and $2 i$ are jointly distributed as $\left(V, Z_{i}-V\right)$, where $V$ is a binomial random variable with parameter $\left(Z_{i}, \frac{1}{2}\right)$. Let $H_{i}$ denote the number of balls in the bin containing the maximum number of balls among bins $2 i-1$ and $2 i$. Then, as shown in the previous section, when $\forall 1 \leq i \leq C, Z_{i}>0$ :
\begin{align*}
\mathbb{E}\left(H_{i} \mid Z_{i}\right) \geq \frac{Z_i}{2} + \frac{1}{2}\sqrt{\sum_{j=1}^{s}r_j^2 \mathbb{I}_{W_j \in \{(2 i-1,2 i)\} }} \left ( 0.15 - C_0 \frac{\underset{1 \leq j \leq s, W_j \in \{(2 i-1,2 i)\} }{\max} r_j }{\sqrt{\sum_{j=1}^{s}r_j^2 \mathbb{I}_{W_j \in \{(2 i-1,2 i)\} } } } \right ).
\end{align*}
The expected reward (number of cache hits) obtained by the best offline policy can be lower bounded as:
\begin{align*}
\mathbb{E}\left[\underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \boldsymbol{X}_{T+1} \right \rangle\right] &\geq \mathbb{E} \left [ \sum_{i=1}^{C} H_{i} \right ] \\
&= \sum_{i=1}^{C} \mathbb{E} \left [ H_{i} \right ] \\
&\stackrel{(a)}{=} C\, \mathbb{E}\left[ H_{1} \right] \\
&\stackrel{(b)}{=} C \mathbb{E} \left [ \mathbb{E}\left[H_{1} \mathbb{I}\left(Z_{1}>0\right) \mid Z_{1}\right] \right ],
\end{align*}
where (a) follows from the request sequence being symmetric across the $2C$ files and (b) follows from the tower property of conditional expectation as shown in [7]. Thus,
\begin{align*}
\mathbb{E}\left[\underset{y \in \mathcal{Y}}{\max} \left \langle \boldsymbol{y}, \boldsymbol{X}_{T+1} \right \rangle\right] &\geq \frac{C \mathbb{E}[Z_1]}{2} + C \mathbb{E} \left [ \frac{1}{2}\sqrt{\sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} }} \left ( 0.15 - C_0 \frac{\underset{1 \leq i \leq s, W_i \in \{(1,2)\} }{\max} r_i }{\sqrt{\sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} } } } \right ) \right ]\\
&\geq \frac{T}{2} + \frac{C}{2} \mathbb{E} \left [ 0.15 \sqrt{\sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} }} - C_0 \underset{1 \leq i \leq s }{\max} r_i \right ], \numberthis \label{eqn: res_ad_lb_final}
\end{align*}
as $\mathbb{E}[Z_1]=T/C$. Using (20) of [7],
\begin{align*}
\mathbb{E} \left [ \sqrt{ \sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} } }\right ] &\geq \sqrt{\mathbb{E}\left( \sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} } \right)}\left(1-\frac{\operatorname{Var}\left(\sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} } \right)}{2\left(\mathbb{E}\left(\sum_{i=1}^{s}r_i^2 \mathbb{I}_{W_i \in \{(1,2)\} }\right)\right)^{2}}\right) \\
&= \sqrt{ \frac{\sum_{i=1}^{s}r_i^2}{C} } \left(1-\frac{(C-1)\left(\sum_{i=1}^{s}r_i^4 \right)}{2\left(\sum_{i=1}^{s}r_i^2 \right)^{2}}\right),
\end{align*}
as $\operatorname{Var} \left (\mathbb{I}_{W_i \in \{(1,2)\} } \right )=\frac{1}{C}\left(1-\frac{1}{C}\right)$ and $\mathbb{E} \left (\mathbb{I}_{W_i \in \{(1,2)\} } \right )=\frac{1}{C}$. Similar to (<ref>),
\begin{align*}
\mathbb{E}_{\{x_t\}_{t=1}^{T}} \left [ \sum_{t=1}^{T} \left \langle \boldsymbol{y}_t, \boldsymbol{x}_{t} \right \rangle \right ] &= \frac{1}{2C} \sum_{i=1}^{s} r_i \left (\boldsymbol{y}_{t_i}(1)+\cdots+\boldsymbol{y}_{t_i}(2C) \right ) \\
&\leq \frac{1}{2} \sum_{i=1}^{s} r_i = \frac{T}{2}.
\end{align*}
Combining the above result and (<ref>) gives us the following lower bound on the regret:
\begin{align*}
R^{\pi}_{A}(T) \geq \frac{1}{2} \left(0.15 \sqrt{ C\sum_{i=1}^{s}r_i^2 } \left(1-\frac{(C-1)\left(\sum_{i=1}^{s}r_i^4 \right)}{2\left(\sum_{i=1}^{s}r_i^2 \right)^{2}}\right) -0.6 \, C \underset{1 \leq i \leq s}{\max} r_i \right).
\end{align*}
§ PROOF OF PART (B) OF THEOREM <REF>
In this section, we prove an upper bound on the regret of FTPL($\alpha \sqrt{t}$) when cache updates are restricted to after every $r$ slots only under adversarial file requests. The proof is based on the proof of Proposition 4.2 of [18]. Define the following potential function at each time slot $t$:
\begin{equation*}
\Phi_{t}(\boldsymbol{x})=\mathbb{E}_{\boldsymbol{\gamma} \sim \mathcal{N}(0, I)}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\langle\boldsymbol{y}, \boldsymbol{x}+\eta_{t} \boldsymbol{\gamma}\rangle\right].
\end{equation*}
The regret incurred in this setting can be expressed in the following way:
\begin{align*}
R_{A}^{\textrm{FTPL}(\alpha \sqrt{t})}(T) &= \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1}\right\rangle - \sum_{t=1}^{T} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{t}, \boldsymbol{x}_{t}\right\rangle\right] \\
&= \max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1}\right\rangle - \sum_{i=1}^{s} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}_{ \sum_{j=0}^{i-1}r_j } , \sum_{t=\sum^{{i-1}}_{k=0} r_k }^{\sum^{{i}}_{k=1} r_k} \boldsymbol{x}_{t }\right\rangle\right], \numberthis \label{eqn: ftpl_adversarial_regret_restricted}
\end{align*}
where we define $r_0=1$. This follows from the fact that the cache configuration can change only at the pre-defined $s$ fixed time slots. Thus, we essentially have a time horizon of $s$, but each time slot $i, 1 \leq i \leq s$ contains $r_i$ requests instead. For brevity, we define
\begin{align*}
\boldsymbol{y}^{i} := \boldsymbol{y}_{ \sum_{j=0}^{i-1}r_j },\quad \boldsymbol{x}^{i} := \sum_{t=\sum^{{i-1}}_{k=0} r_k }^{\sum^{{i}}_{k=1} r_k} \boldsymbol{x}_{t },\quad \eta^i := \eta_{\sum_{j=0}^{i-1}r_j },\quad \boldsymbol{X}^{i} := \sum_{j=1}^{i} \boldsymbol{x}^{j},\quad 1 \leq i \leq s.
\end{align*}
We define the potential function $\Phi_{i} : \mathbb{R}^{L} \rightarrow \mathbb{R}$ for $1 \leq i \leq s$ in the following way:
\begin{align*}
\Phi_{i}(\boldsymbol{x})=\mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{x}+\eta^{i} \boldsymbol{\gamma}\right\rangle\right],
\end{align*}
where $\mathcal{Y}$ is the set of possible cache configurations, i.e., the set $\{y \in \{0,1\}^{L}: \|y\|_1 \leq C \}$. As shown in the proof of Proposition 4.1 in [18],
\begin{align*}
\nabla \Phi_{i}\left(\boldsymbol{X}^{i-1}\right) &=\left.\nabla_{\boldsymbol{x}} \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{x}+\eta^{i} \boldsymbol{\gamma}\right\rangle\right]\right|_{\boldsymbol{x}=\boldsymbol{X}^{i-1}}=\mathbb{E}_{\boldsymbol{\gamma}}\left[\boldsymbol{y}^{i}\right]. \\
\mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}^{i}, \boldsymbol{x}^{i}\right\rangle\right] &= \left\langle\nabla \Phi_{i}\left(\boldsymbol{X}^{i-1}\right), \boldsymbol{X}^{i}-\boldsymbol{X}^{i-1}\right\rangle\\
&=\Phi_{i}\left(\boldsymbol{X}^{i}\right)-\Phi_{i}\left(\boldsymbol{X}^{i-1}\right)-\frac{1}{2}\left\langle\boldsymbol{x}^{i}, \nabla^{2} \Phi_{i}\left(\widetilde{\boldsymbol{X}}^{i}\right) \boldsymbol{x}^{i} \right\rangle,
\end{align*}
where $\widetilde{\boldsymbol{X}}^{i}=\boldsymbol{X}^{i-1}+\theta^{i} \boldsymbol{x}^{i}$, for some $\theta^{i} \in[0,1]$ using Taylor's theorem. Thus, we have
\begin{align*}
&\sum_{i=1}^{s} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}^{i}, \boldsymbol{x}^{i}\right\rangle\right] \\
&=\Phi_{s}\left(\boldsymbol{X}^{s}\right) -\Phi_{1}\left(\boldsymbol{X}^{0}\right)+\sum_{i=2}^{s}\left[\Phi_{i-1}\left(\boldsymbol{X}^{i-1}\right)- \Phi_{i}\left(\boldsymbol{X}^{i-1}\right)\right] \\
&-\frac{1}{2} \sum_{i=1}^{s}\left\langle\boldsymbol{x}^{i}, \nabla^{2} \Phi_{i}\left(\widetilde{\boldsymbol{X}}^{i}\right) \boldsymbol{x}^{i}\right\rangle.
\end{align*}
Using Jensen's inequality, we have that
\begin{align*}
\Phi_{s}\left(\boldsymbol{X}^{s}\right ) &= \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}^{s}+\eta^{s} \boldsymbol{\gamma}\right\rangle\right] \\
&\geq \max _{\boldsymbol{y} \in \mathcal{Y}} \mathbb{E}_{\boldsymbol{\gamma}}\left[\left\langle\boldsymbol{y}, \boldsymbol{X}^{s}+\eta^{s} \boldsymbol{\gamma}\right\rangle\right] \\
&= \max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle\boldsymbol{y}, \boldsymbol{X}^{s} \right\rangle = \max _{\boldsymbol{y} \in \mathcal{Y}} \left\langle\boldsymbol{y}, \boldsymbol{X}_{T+1} \right\rangle.
\end{align*}
Substituting the above results in (<ref>), we get
\begin{align*}
R_{A}^{\textrm{FTPL}(\alpha \sqrt{t})}(T) &\leq \Phi_{1}\left(\boldsymbol{X}^{0}\right)+\sum_{i=2}^{s}\left[ \Phi_{i}\left(\boldsymbol{X}^{i-1}\right) - \Phi_{i-1}\left(\boldsymbol{X}^{i-1}\right) \right] \\
&+ \frac{1}{2} \sum_{i=1}^{s}\left\langle\boldsymbol{x}^{i}, \nabla^{2} \Phi_{i}\left(\widetilde{\boldsymbol{X}}^{i}\right) \boldsymbol{x}^{i}\right\rangle.
\end{align*}
Since $\boldsymbol{x}^{i}$ contains $r_i$ file requests, the quadratic form above may be upper bounded in the following way:
\begin{align*}
\left\langle\boldsymbol{x}^{i}, \nabla^{2} \Phi_{i}\left(\widetilde{\boldsymbol{X}}^{i}\right) \boldsymbol{x}^{i}\right\rangle &\leq r_i \max _{k,j, \boldsymbol{x}}\left(\left|\nabla^{2} \Phi_{i}(\boldsymbol{x})\right|\right)_{k j} \left\langle\boldsymbol{x}^{i}, \mathbf{1} \right\rangle \\
&= r_i^2 \max _{k,j, \boldsymbol{x}}\left(\left|\nabla^{2} \Phi_{i}(\boldsymbol{x})\right|\right)_{k j},
\end{align*}
where $\mathbf{1}$ is the all ones vector. Moreover, from [7],
\begin{align*}
\left(\left|\nabla^{2} \Phi_{i}(\boldsymbol{x})\right|\right)_{kj} \leq \frac{1}{\eta^{i}} \sqrt{\frac{2}{\pi}}.
\end{align*}
\begin{align*}
\frac{1}{2} \sum_{i=1}^{s}\left\langle\boldsymbol{x}^{i}, \nabla^{2} \Phi_{i}\left(\widetilde{\boldsymbol{X}}^{i}\right) \boldsymbol{x}^{i}\right\rangle \leq \sqrt{\frac{2}{\pi}} \sum_{i=1}^{s} \frac{r_i^2}{\eta^{i}}.
\end{align*}
It can also be shown that:
\begin{align*}
\Phi_{i}\left(\boldsymbol{X}^{i-1}\right)-\Phi_{i-1}\left(\boldsymbol{X}^{i-1}\right) &= \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}^{i-1}+\eta^{i} \boldsymbol{\gamma}\right\rangle\right] - \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, \boldsymbol{X}^{i-1}+\eta^{i-1} \boldsymbol{\gamma}\right\rangle\right]\\
&\leq \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle\boldsymbol{y}, |\eta^{i} - \eta^{i-1}| \boldsymbol{\gamma}\right\rangle\right] \\
&= \left|\eta^{i}-\eta^{i-1}\right| \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\langle\boldsymbol{y}, \gamma\rangle\right].
\end{align*}
We also have from [10] that:
\begin{align*}
\Phi_{1}\left(\boldsymbol{X}^{0}\right)= \eta^1 \mathbb{E}_{\boldsymbol{\gamma}}\left[\max _{\boldsymbol{y} \in \mathcal{Y}}\left\langle \boldsymbol{y}, \boldsymbol{\gamma}\right\rangle\right ] \leq \eta^1 C \sqrt{2 \log \left ( \frac{L}{C}\right )}.
\end{align*}
Thus, combining the above bounds, we get
\begin{align*}
R_{A}^{\textrm{FTPL}(\alpha \sqrt{t})}(T) &\leq \eta^1 C \sqrt{2 \log \left ( \frac{L}{C}\right )} + \eta^s C \sqrt{2 \log \left ( \frac{L}{C}\right )}+ \sqrt{\frac{2}{\pi}} \sum_{i=1}^{s} \frac{r_i^2}{\eta^{i}} \\
&\leq \alpha C \sqrt{2 \log \left ( \frac{L}{C}\right )} + \alpha C \sqrt{T} \sqrt{2 \log \left ( \frac{L}{C}\right )}+ \sqrt{\frac{2}{\pi}} \sum_{i=1}^{s} \frac{r_i^2}{\alpha \sqrt{\sum_{j=0}^{i-1}r_j }}.
\end{align*}
For completeness, the Hessian bound used above follows from the identity
\begin{align*}
\left(\left|\nabla^{2} \Phi_{i}(\boldsymbol{x})\right|\right)_{jj}=\frac{1}{\eta^{i}} \mathbb{E}_{\boldsymbol{\gamma}}\left[\hat{y}\left(\boldsymbol{x}+\eta^{i} \boldsymbol{\gamma}\right)_{j} \gamma_{j}\right],
\end{align*}
where $\hat{y}(\boldsymbol{z}) \in \arg \max _{\boldsymbol{y} \in \mathcal{Y}}\langle\boldsymbol{y}, \boldsymbol{z}\rangle$. Hence, using Jensen's inequality, we have that
\begin{align*}
\left(\left|\nabla^{2} \Phi_{i}(\boldsymbol{x})\right|\right)_{jj} \leq \frac{1}{\eta^{i}} \mathbb{E}\left[\left|\hat{y}\left(\boldsymbol{x}+\eta^{i} \boldsymbol{\gamma}\right)_{j}\right| \left|\gamma_{j}\right|\right] \stackrel{(a)}{\leq} \frac{1}{\eta^{i}} \mathbb{E}\left[\left|\gamma_{j}\right|\right] \stackrel{(b)}{=} \frac{1}{\eta^{i}} \sqrt{\frac{2}{\pi}},
\end{align*}
where the inequality (a) follows from the fact that for all valid cache configurations $\boldsymbol{y}$, we have $y_{j} \in\{0,1\}$, and the equality (b) follows from the fact that $\gamma_{j} \sim \mathcal{N}(0,1)$.
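For concreteness, the FTPL($\alpha\sqrt{t}$) rule analyzed in this appendix can be sketched in a few lines of Python. This is our own illustrative implementation (perturb cumulative counts, cache the top $C$ files), not the authors' code, and the skewed request distribution below is made up:

```python
import numpy as np

def ftpl_cache(requests, L, C, alpha=1.0, seed=0):
    # FTPL(alpha*sqrt(t)) sketch: before each request, add Gaussian noise scaled
    # by eta_t = alpha*sqrt(t) to the cumulative counts and cache the top-C files.
    rng = np.random.default_rng(seed)
    counts = np.zeros(L)  # X_t: cumulative request counts so far
    hits = 0
    for t, x in enumerate(requests, start=1):
        eta = alpha * np.sqrt(t)
        perturbed = counts + eta * rng.standard_normal(L)
        cache = set(np.argpartition(perturbed, -C)[-C:])  # indices of top-C scores
        hits += int(x in cache)
        counts[x] += 1  # the request is revealed only after the cache is set
    return hits

# Usage: a mildly skewed request stream over L=20 files with cache size C=4.
rng = np.random.default_rng(1)
weights = np.arange(20, 0, -1.0)
reqs = rng.choice(20, size=2000, p=weights / weights.sum())
hits = ftpl_cache(reqs, L=20, C=4)
```

With a growing perturbation scale $\eta_t=\alpha\sqrt{t}$ the policy stabilizes on the frequently requested files while retaining enough randomness for the regret analysis above.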
§ PROOF OF PART (A) OF THEOREM <REF>
In this section, we prove a lower bound on the regret of any policy $\pi$ in the restricted switching case where the cache is allowed to update its contents only after every $r$ requests. The proof uses (<ref>) to bound the regret. We consider the adversarial request sequence where files are requested from the top $2C$ files, and the same file is requested $r$ times in each of the $T/r$ `phases' (using terminology defined in Appendix <ref>). To be precise, at the beginning of each phase $i$ where $1 \leq i \leq T/r$, i.e., at $t=1+(i-1)r$, a file is drawn uniformly at random from the first $2C$ files. This file is repeatedly requested $r$ times till $t=ir$. Throughout this proof, if it is not explicitly mentioned with respect to what the expectation is being taken, it can be assumed that the expectation is being taken with respect to $\{x_t\}_{t=1}^{T}$. The reward obtained by the optimal static configuration in hindsight can be bounded in the following way:
\begin{align*}
\mathbb{E}\left( \max_{y \in \mathcal{Y}}\left \langle \boldsymbol{y}, \sum_{t=1}^{T}\boldsymbol{x}_{t} \right \rangle \right) &= r\,\mathbb{E}\left( \max_{y \in \mathcal{Y}} \left \langle \boldsymbol{y}, \sum_{i=1}^{T/r}\boldsymbol{x}_{1+(i-1)r} \right \rangle \right)\\
& \geq r\left( \frac{T}{2r}+\sqrt{\frac{C T}{2r \pi}}-\frac{\sqrt{r}(\sqrt{2}+1) C^{3 / 2}}{2 \sqrt{2 \pi T}}-\sqrt{\frac{2}{\pi}} \frac{rC^{2}}{T}\right)\\
& = \frac{T}{2}+\sqrt{\frac{CrT}{2\pi }}-\Theta\left({\frac{r\sqrt{r}}{\sqrt{T}}}\right),
\end{align*}
where in the second last step, we used Theorem 2 of [7]. For $1 \leq i \leq T/r$, let
\begin{align*}
\boldsymbol{y}^{i} :=\sum_{t=1+(i-1)r}^{ir} \boldsymbol{y}_t,
\end{align*}
which is the sum of the caching configuration vectors in phase $i$. Note that the cache configuration at each time slot is independent of the file request that arrives in that slot. Now, to upper bound the reward obtained by any policy $\pi$,
\begin{align*}
\mathbb{E}\left( \sum_{t=1}^{T} \left \langle \boldsymbol{y}_{t}, \boldsymbol{x}_{t} \right \rangle \right) & = \sum_{i=1}^{T/r}\frac{1}{2C}\sum_{k=1}^{2C} y^{i}(k)\\
&\leq \sum_{i=1}^{T/r}\frac{1}{2C}\sum_{k=1}^{L} y^{i}(k) \\
&\leq \frac{T}{r} \cdot \frac{1}{2C}\cdot rC \\&= \frac{T}{2}.
\end{align*}
The second last step follows from the fact that the sum of the elements of $\boldsymbol{y}^i$ is exactly $rC$ as the cache configuration remains constant in each phase.
\begin{align*}
\therefore R_{A}^{\pi}(T) &\geq \mathbb{E}_{\{\boldsymbol{x}_{t}\}_{t=1}^{T}}\left( \left \langle \boldsymbol{y}^{*}, \sum_{t=1}^{T}\boldsymbol{x}_{t} \right \rangle -\sum_{t=1}^{T} \left \langle \boldsymbol{y}_{t}, \boldsymbol{x}_{t} \right \rangle \right)\\
&\geq \sqrt{\frac{CrT}{2\pi }}-\Theta\left({\frac{r\sqrt{r}}{\sqrt{T}}}\right).
\end{align*}
For this bound to be meaningful, the second term needs to be order-wise smaller than the first term:
\begin{align*}
\frac{r\sqrt{r}}{\sqrt{T}}<\sqrt{rT}
\implies r<T
\end{align*}
Hence this is an $\Omega(\sqrt{rT})$ lower bound for $r=o(T)$.
For $r=\Omega(T)$, the first block of $r$ requests already incurs $\Omega(r)$ regret, since in time slot $1$ a random $C$ of the $L$ files are stored without knowledge of the requests; hence we get an overall regret of $\Omega(T)$.
|
# Partial Frames, Their Free Frames
and Their Congruence Frames
Anneliese Schauerte John Frith Department of Mathematics and Applied
Mathematics
University of Cape Town
Cape Town, South Africa
###### Abstract
The context of this work is that of partial frames; these are meet-
semilattices where not all subsets need have joins. A selection function,
$\mathcal{S}$, specifies, for all meet-semilattices, certain subsets under
consideration, which we call the “designated” ones; an $\mathcal{S}$-frame
then must have joins of (at least) all such subsets and binary meet must
distribute over these. A small collection of axioms suffices to specify our
selection functions; these axioms are sufficiently general to include as
examples of partial frames, bounded distributive lattices, $\sigma$-frames,
$\kappa$-frames and frames.
We consider right and left adjoints of $\mathcal{S}$-frame maps, as a prelude
to the introduction of closed and open maps.
Then we look at what might be an appropriate notion of Booleanness for partial
frames. The obvious candidate is the condition that every element be
complemented; this concept is indeed of interest, but we pose three further
conditions which, in the frame setting, are all equivalent to it. However, in
the context of partial frames, the four conditions are distinct. In
investigating these, we make essential use of the free frame over a partial
frame and the congruence frame of a partial frame.
We compare congruences of a partial frame, technically called
$\mathcal{S}$-congruences, with the frame congruences of its free frame. We
provide a natural transformation for the situation and also consider right
adjoints of the frame maps in question. We characterize the case where the two
congruence frames are isomorphic and provide examples which illuminate the
possible different behaviour of the two.
We conclude with a characterization of closedness and openness for the
embedding of a partial frame into its free frame, and into its congruence
frame.
###### keywords:
frame, partial frame, $\mathcal{S}$-frame, $\kappa$-frame, $\sigma$-frame,
free frame over partial frame, congruence frame, Boolean algebra, closed map,
open map
Journal: Electronic Notes in Theoretical Informatics and Computer Science, volume 2.
## 1 Introduction
Partial frames are meet-semilattices where, in contrast with frames, not all
subsets need have joins. A selection function, $\mathcal{S}$, specifies, for
all meet-semilattices, certain subsets under consideration, which we call the
“designated” ones; an $\mathcal{S}$-frame then must have joins of (at least)
all such subsets and binary meet must distribute over these. A small
collection of axioms suffices to specify our selection functions; these axioms
are sufficiently general to include as examples of partial frames, bounded
distributive lattices, $\sigma$-frames, $\kappa$-frames and frames.
We consider the classical notions of right and left adjoints, for
$\mathcal{S}$-frame maps. Unlike the situation for full frames, such maps need
not have right adjoints. This is a prelude to the introduction of closed and
open maps, and a discussion of their properties.
What is an appropriate notion of Booleanness for partial frames? The obvious
answer is that the partial frame should have every element complemented; this
concept is indeed of interest, but we pose three further conditions which, in
the frame setting, are all equivalent to it. However, in the context of
partial frames, the four conditions are distinct. In investigating these, we
make essential use of the free frame over a partial frame and the congruence
frame of a partial frame.
We compare congruences of a partial frame, technically called
$\mathcal{S}$-congruences, with the frame congruences of its free frame. We
provide a natural transformation for the situation and also consider right
adjoints of the frame maps in question. We characterize the case where the two
congruence frames are isomorphic and provide examples which illuminate the
possible different behaviour of the two.
We conclude with a characterization of closedness and openness for the
embedding of a partial frame into its free frame, and into its congruence
frame.
Since this document is intended as an extended abstract, proofs are omitted.
## 2 Background
This background section is taken largely from [16]. See [22] and [17] as
references for frame theory; see [3] and [2] for $\sigma$-frames; see [19] and
[20] for $\kappa$-frames; see [18] and [1] for general category theory.
The basics of our approach to partial frames can be found in [4], [5] and [7].
Our papers with a more topological flavour are [6], [8], [10], [13] and [14].
Our papers with a more algebraic flavour are [9], [11] and [12]. Crucial for
this paper is [15]. We are indebted to earlier work by other authors in this
field: see [21], [24], [25] and [23]. For those interested in a comparison of
the various approaches, see [5].

A meet-semilattice is a partially ordered set
in which all finite subsets have a meet. In particular, we regard the empty
set as finite, so a meet-semilattice comes equipped with a top element, which
we denote by $1$. We do not insist that a meet-semilattice should have a
bottom element, which, if it exists, we denote by $0$. A function between
meet-semilattices $f:L\to M$ is a meet-semilattice map if it preserves finite
meets, as well as the top element. A sub meet-semilattice is a subset for
which the inclusion map is a meet-semilattice map.
The essential idea for a partial frame is that it should be “frame-like” but
that not all joins need exist; only certain joins have guaranteed existence
and binary meets should distribute over these joins. The guaranteed joins are
specified in a global way on the category of meet-semilattices by specifying
what is called a selection function; the details are given below.
###### Definition 2.1.
A selection function is a rule, which we usually denote by $\mathcal{S}$,
which assigns to each meet-semilattice $A$ a collection $\mathcal{S}A$ of
subsets of $A$, such that the following conditions hold (for all meet-
semilattices $A$ and $B$):
1. (S1)
For all $x\in A$, $\\{x\\}\in\mathcal{S}A$.
2. (S2)
If $G,H\in\mathcal{S}A$ then $\\{x\wedge y:x\in G,y\in H\\}\in\mathcal{S}A$.
3. (S2)′
If $G,H\in\mathcal{S}A$ then $\\{x\vee y:x\in G,y\in H\\}\in\mathcal{S}A$.
4. (S3)
If $G\in\mathcal{S}A$ and, for all $x\in G$, $x=\bigvee H_{x}$ for some
$H_{x}\in\mathcal{S}A$, then
$\bigcup\limits_{x\in G}H_{x}\in\mathcal{S}A.$
5. (S4)
For any meet-semilattice map $f:A\to B$,
$\mathcal{S}(f[A])=\\{f[G]:G\in\mathcal{S}A\\}\subseteq\mathcal{S}B.$
6. (SSub)
For any sub meet-semilattice $B$ of meet-semilattice $A$, if $G\subseteq B$
and $G\in\mathcal{S}A$, then $G\in\mathcal{S}B$.
7. (SFin)
If $F$ is a finite subset of $A$, then $F\in\mathcal{S}A$.
8. (SCov)
If $G\subseteq H$ and $H\in\mathcal{S}A$ with $\bigvee H=1$ then
$G\in\mathcal{S}A$. (Such $H$ are called $\mathcal{S}$-covers.)
9. (SRef)
Let $X,Y\subseteq A$. If $X\leq Y$ with $X\in\mathcal{S}A$, there is a
$C\in\mathcal{S}A$ such that $X\leq C\subseteq Y$. (By $X\leq Y$ we mean, as
usual, that for each $x\in X$ there exists $y\in Y$ such that $x\leq y$.)
Of course (SFin) implies (S1) but there are situations where we do not impose
(SFin) but insist on (S1). Note that we always have
$\emptyset\in\mathcal{S}A$. Once a selection function, $\mathcal{S}$, has been
fixed, we speak informally of the members of $\mathcal{S}A$ as the designated
subsets of $A$.
###### Definition 2.2.
An $\mathcal{S}$-frame $L$ is a meet-semilattice in which every designated
subset has a join and for any such designated subset $B$ of $L$ and any $a\in
L$,
$a\wedge\bigvee B=\bigvee\limits_{b\in B}a\wedge b.$
Of course such an $\mathcal{S}$-frame has both a top and a bottom element
which we denote by $1$ and $0$ respectively.
A meet-semilattice map $f:L\to M$, where $L$ and $M$ are $\mathcal{S}$-frames,
is an $\mathcal{S}$-frame map if $f(\bigvee B)=\bigvee\limits_{b\in B}f(b)$
for any designated subset $B$ of $L$. In particular such an $f$ preserves the
top and bottom element.
A sub $\mathcal{S}$-frame $T$ of an $\mathcal{S}$-frame $L$ is a subset of $L$
such that the inclusion map $i:T\to L$ is an $\mathcal{S}$-frame map.
The category $\mathcal{S}$Frm has objects $\mathcal{S}$-frames and arrows
$\mathcal{S}$-frame maps.
We use the terms “partial frame” and “$\mathcal{S}$-frame” interchangeably,
especially if no confusion about the selection function is likely. We also use
the term full frame in situations where we wish to emphasize that all joins
exist.
###### Note 2.3.
Here are some examples of different selection functions and their
corresponding $\mathcal{S}$-frames.
1.
In the case that all joins are specified, we are of course considering the
notion of a frame.
2.
In the case that (at most) countable joins are specified, we have the notion
of a $\sigma$-frame.
3.
In the case that joins of subsets with cardinality less than some (regular)
cardinal $\kappa$ are specified, we have the notion of a $\kappa$-frame.
* 4.
In the case that only finite joins are specified, we have the notion of a
bounded distributive lattice.
The remainder of this section gives a lot of information about
$\mathcal{H}_{\mathcal{S}}L$, the free frame over the $\mathcal{S}$-frame $L$,
as well as $\mathcal{C}_{\mathcal{S}}L$, the frame of
$\mathcal{S}$-congruences of $L$, and the relationship between the two. These
results come from [7] on $\mathcal{H}_{\mathcal{S}}L$, [9] and [11] on
$\mathcal{C}_{\mathcal{S}}L$.
In the definition below, $L$ is an $\mathcal{S}$-frame.
###### Definition 2.4.
1. (a)
A subset $J$ of $L$ is an $\mathcal{S}$-ideal of $L$ if $J$ is a non-empty
downset closed under designated joins (the latter meaning that if $X\subseteq
J$, for $X$ a designated subset of $L$, then $\bigvee X\in J$).
2. (b)
The collection of all $\mathcal{S}$-ideals of $L$ will be denoted
$\mathcal{H}_{\mathcal{S}}L$, and called the $\mathcal{S}$-ideal frame of $L$.
It is in fact the free frame over $L$.
3. (c)
For $I\in\mathcal{H}_{\mathcal{S}}L$, $t\in(\operatorname{\downarrow}x)\vee
I\Longleftrightarrow t\leq x\vee s$, for some $s\in I$.
4. (d)
We call $\theta\subseteq L\times L$ an $\mathcal{S}$-congruence on $L$ if it
satisfies the following:
(C1) $\theta$ is an equivalence relation on $L$.
(C2) $(a,b),(c,d)\in\theta$ implies that $(a\wedge c,b\wedge d)\in\theta$.
(C3) If $\\{(a_{\alpha},b_{\alpha}):\alpha\in\mathcal{A}\\}\subseteq\theta$
and $\\{a_{\alpha}:\alpha\in\mathcal{A}\\}$ and
$\\{b_{\alpha}:\alpha\in\mathcal{A}\\}$ are designated subsets of $L$, then
$(\bigvee\limits_{\alpha\in\mathcal{A}}a_{\alpha},\bigvee\limits_{\alpha\in\mathcal{A}}b_{\alpha})\in\theta$.
5. (e)
The collection of all $\mathcal{S}$-congruences on $L$ is denoted by
$\mathcal{C}_{\mathcal{S}}L$; it is in fact a (full) frame with meet given by
intersection.
6. (f)
1. (i)
Let $A\subseteq L\times L$. We use the notation $\langle A\rangle$ to denote
the smallest $\mathcal{S}$-congruence containing $A$. This exists by
completeness of $\mathcal{C}_{\mathcal{S}}L$.
2. (ii)
We define $\nabla\hskip-2.0pt_{a}=\\{(x,y):x\vee a=y\vee a\\}$ and
$\Delta_{a}=\\{(x,y):x\wedge a=y\wedge a\\}$; these are
$\mathcal{S}$-congruences on $L$.
3. (iii)
It is easily seen that
$\nabla\hskip-2.0pt_{a}=\bigcap\\{\theta:\theta\in\mathcal{C}_{\mathcal{S}}L\textrm{
and }(0,a)\in\theta\\}=\langle(0,a)\rangle$ and that
$\Delta_{a}=\bigcap\\{\theta:\theta\in\mathcal{C}_{\mathcal{S}}L\textrm{ and
}(a,1)\in\theta\\}=\langle(a,1)\rangle$.
4. (iv)
For $a\leq b$, it follows that $\Delta_{a}\cap\nabla_{b}=\langle(a,b)\rangle$.
5. (v)
The congruence $\nabla_{1}=L\times L$ is the top element and
$\nabla_{0}=\\{(x,x):x\in L\\}$ (called the diagonal) is the bottom element of
$\mathcal{C}_{\mathcal{S}}L$.
7. (g)
The following hold in $\mathcal{C}_{\mathcal{S}}L$.
1. (i)
For any $\theta\in\mathcal{C}_{\mathcal{S}}L$,
$\theta=\bigvee\\{\nabla_{b}\wedge\Delta_{a}:(a,b)\in\theta,a\leq b\\}$.
2. (ii)
$\nabla\hskip-2.0pt_{a}\vee\theta=\\{(x,y):(x\vee a,y\vee a)\in\theta\\}$.
3. (iii)
$\Delta_{a}\vee\theta=\\{(x,y):(x\wedge a,y\wedge a)\in\theta\\}$.
4. (iv)
For any $I\in\mathcal{H}_{\mathcal{S}}L$, $\bigvee\limits_{x\in
I}\nabla_{x}=\bigcup\limits_{x\in I}\nabla_{x}$.
8. (h)
The function $\nabla:L\to\mathcal{C}_{\mathcal{S}}L$ given by
$\nabla(a)=\nabla_{a}$ is an $\mathcal{S}$-frame embedding. It has the
universal property that if $f:L\to M$ is an $\mathcal{S}$-frame map into a
frame $M$ with complemented image, then there exists a frame map
$\bar{f}:\mathcal{C}_{\mathcal{S}}L\to M$ such that $f=\bar{f}\circ\nabla$.
9. (i)
We also note that for frame maps $f$ and $g$ with domain
$\mathcal{C}_{\mathcal{S}}L$, if $f\circ\nabla=g\circ\nabla$ then $f=g$.
10. (j)
A useful congruence for our purposes is the Madden congruence, denoted
$\pi_{L}$ below:
1. (i)
For $x\in L$, set $P_{x}=\\{t\in L:t\wedge x=0\\}$.
2. (ii)
For $x\in L$, $P_{x}$ is an $\mathcal{S}$-ideal, and in
$\mathcal{H}_{\mathcal{S}}L$, $P_{x}=(\operatorname{\downarrow}x)^{*}$, the
pseudocomplement of $\operatorname{\downarrow}x$.
3. (iii)
Let $\pi_{L}=\\{(x,y):P_{x}=P_{y}\\}$; $\pi_{L}$ is an
$\mathcal{S}$-congruence.
4. (iv)
The quotient map induced by the Madden congruence, $p:L\to L/{\pi_{L}}$, is
dense and onto, and is universal among such maps. We refer to this as the
Madden quotient of $L$. (See [11].)
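The congruences $\nabla_{a}$ and $\Delta_{a}$ of Definition 2.4(f)(ii) can be computed directly from their defining formulas in a finite lattice. The following sketch is our own illustration: it verifies (C1)-(C3) by brute force on the four-element powerset of a two-element set. With $\mathcal{S}$ selecting finite subsets, closure under binary joins suffices for (C3).

```python
from itertools import combinations

# Finite test lattice: the powerset of {1, 2} under inclusion.
L = [frozenset(c) for r in range(3) for c in combinations([1, 2], r)]

def nabla(a):  # ∇_a = {(x, y) : x ∨ a = y ∨ a}
    return {(x, y) for x in L for y in L if (x | a) == (y | a)}

def delta(a):  # Δ_a = {(x, y) : x ∧ a = y ∧ a}
    return {(x, y) for x in L for y in L if (x & a) == (y & a)}

def is_congruence(theta):
    # (C1) equivalence relation
    refl = all((x, x) in theta for x in L)
    symm = all((y, x) in theta for (x, y) in theta)
    trans = all((x, z) in theta
                for (x, y) in theta for (y2, z) in theta if y == y2)
    # (C2) compatibility with binary meet
    meets = all((a & c, b & d) in theta
                for (a, b) in theta for (c, d) in theta)
    # (C3) with finite joins designated, binary-join compatibility suffices
    joins = all((a | c, b | d) in theta
                for (a, b) in theta for (c, d) in theta)
    return refl and symm and trans and meets and joins
```

One can also confirm (f)(v) here: $\nabla$ of the bottom element is the diagonal, and $\nabla$ of the top is all of $L\times L$.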
###### Definition 2.5.
For any $\mathcal{S}$-frame $L$, define
$e_{L}:\mathcal{H}_{\mathcal{S}}L\to\mathcal{C}_{\mathcal{S}}L$ to be the
unique frame map such that $e_{L}(\operatorname{\downarrow}a)=\nabla_{a}$ for
all $a\in L$; that is, $e_{L}\circ\operatorname{\downarrow}=\nabla$.
That this map $e_{L}$ exists follows from the freeness of
$\mathcal{H}_{\mathcal{S}}L$ as a frame over $L$. See [7].
###### Note 2.6.
For any $\mathcal{S}$-frame $L$, $\mathcal{H}_{\mathcal{S}}L$ is isomorphic to
a subframe of $\mathcal{C}_{\mathcal{S}}L$; that is, the free frame over $L$
is isomorphic to a subframe of the frame of $\mathcal{S}$-congruences of $L$.
## 3 Right and left adjoints
We use the following standard terminology:
###### Definition 3.1.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
A function $r:M\to L$ is a right adjoint of $h$ if
$h(x)\leq m\Longleftrightarrow x\leq r(m)\textrm{ for all }x\in L,m\in M.$
A function $l:M\to L$ is a left adjoint of $h$ if
$l(m)\leq x\Longleftrightarrow m\leq h(x)\textrm{ for all }x\in L,m\in M.$
We make no claim that all $\mathcal{S}$-frame maps have right (or left)
adjoints; this is false (see Example 3.3). However, clearly if an
$\mathcal{S}$-frame map has a right or left adjoint, such is unique.
###### Lemma 3.2.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
1. (1)
If $h$ has a right adjoint $r$, then for all $m\in M$,
$r(m)=\bigvee\\{x\in L:h(x)\leq m\\}.$
2. (2)
If $h$ has a left adjoint $\ell$, then for all $m\in M$,
$\ell(m)=\bigwedge\\{x\in L:m\leq h(x)\\}.$
We note that the existence of the above joins and meets has to be established
since an $\mathcal{S}$-frame need not be complete.
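When $L$ is finite all joins and meets exist, so the formulas of Lemma 3.2 can be evaluated directly. In the sketch below, which is our own illustration and not from the text, $h$ is the frame map $X\mapsto X\cap\\{1,2\\}$ between finite powerset frames, and `right_adjoint` and `left_adjoint` implement the two formulas verbatim:

```python
from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(sorted(s), r)]

L = powerset({1, 2, 3})
M = powerset({1, 2})

def h(x):  # a frame map: intersect with the "subspace" {1, 2}
    return x & frozenset({1, 2})

# Lemma 3.2(1):  r(m) = ⋁{x ∈ L : h(x) ≤ m}
def right_adjoint(h, L, m):
    out = frozenset()
    for x in L:
        if h(x) <= m:
            out |= x
    return out

# Lemma 3.2(2):  l(m) = ⋀{x ∈ L : m ≤ h(x)}
def left_adjoint(h, L, m):
    out = frozenset({1, 2, 3})
    for x in L:
        if m <= h(x):
            out &= x
    return out
```

For this particular $h$ one can further check the identity $r(h(x)\vee m)=x\vee r(m)$ of Theorem 4.2(a) below, so this $h$ is a closed map in the sense of Section 4.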
###### Example 3.3.
This is an example of an $\mathcal{S}$-frame map which has neither a right nor
a left adjoint.
Let $L$ be the $\sigma$-frame consisting of all countable and cocountable
subsets of $\mathbb{R}$, and $\mathbf{2}$ denote the $2$-element chain. Define
$h:L\to\mathbf{2}$ by $h(C)=0$ if $C$ is countable and $h(D)=1$ if $D$ is
cocountable. Then $h$ is a $\sigma$-frame map. However it has no right adjoint
since there is no largest $A\in L$ with $h(A)=0$. Similarly it has no left
adjoint.
###### Proposition 3.4.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
1. (1)
Suppose that $h$ has a right adjoint, $r$. Then $h$ preserves all existing
joins and $r$ preserves all existing meets.
2. (2)
Suppose that $h$ has a left adjoint $\ell$. Then $h$ preserves all existing
meets and $\ell$ preserves all existing joins.
## 4 Closed and open maps
###### Definition 4.1.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
We call $h$ _closed_ if, for all $m\in M$, there exists $x\in L$ with
$(h\times h)^{-1}(\nabla_{m})=\nabla_{x}$.
We call $h$ _open_ if, for all $m\in M$, there exists $x\in L$ with $(h\times
h)^{-1}(\Delta_{m})=\Delta_{x}$.
We know (see [11]) that $\mathcal{C}_{\mathcal{S}}$ is a functor from
$\mathcal{S}$-frames to frames which is natural in the sense that for any
$\mathcal{S}$-frame map $h:L\to M$ we have a frame map
$\mathcal{C}_{\mathcal{S}}h:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}_{\mathcal{S}}M$
making the square commute:
$\mathcal{C}_{\mathcal{S}}h\circ\nabla_{L}=\nabla_{M}\circ h.$
Now $(h\times h)^{-1}$ is the right adjoint of $\mathcal{C}_{\mathcal{S}}h$,
because, for $\theta\in\mathcal{C}_{\mathcal{S}}L$,
$\mathcal{C}_{\mathcal{S}}h(\theta)$ is the $\mathcal{S}$-congruence of $M$
generated by $(h\times h)[\theta]$, so for all
$\theta\in\mathcal{C}_{\mathcal{S}}L,\phi\in\mathcal{C}_{\mathcal{S}}M$,
$\mathcal{C}_{\mathcal{S}}h(\theta)\subseteq\phi\Longleftrightarrow\theta\subseteq(h\times
h)^{-1}(\phi).$
###### Theorem 4.2.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
1. (a)
The map $h$ is closed iff $h$ has a right adjoint, $r$, and for all $x\in
L,m\in M$,
$r(h(x)\vee m)=x\vee r(m).$
2. (b)
The map $h$ is open iff $h$ has a left adjoint, $l$, and for all $x\in L,m\in
M$,
$l(h(x)\wedge m)=x\wedge l(m).$
###### Theorem 4.3.
Let $L$ be an $\mathcal{S}$-frame and $\theta$ an $\mathcal{S}$-congruence on
$L$.
(a) The quotient map $q:L\to L/\theta$ is closed if and only if $\theta$ is a
closed $\mathcal{S}$-congruence; i.e. $\theta=\nabla_{a}$ for some $a\in L$.
(b) The quotient map $q:L\to L/\theta$ is open if and only if $\theta$ is an
open $\mathcal{S}$-congruence; that is, $\theta=\Delta_{a}$ for some $a\in L$.
###### Definition 4.4.
Let $h:L\to M$ be an $\mathcal{S}$-frame map. We say that $h$ is _dense_
(resp., _codense_) if for all $a\in L,h(a)=0$ (resp., $h(a)=1$) implies that
$a=0$ (resp., $a=1$).
###### Lemma 4.5.
Let $h:L\to M$ be an $\mathcal{S}$-frame map.
1. (a)
If $h$ is dense and closed, then $h$ is one-one. If $h$ is dense, closed and
onto, then $h$ is an isomorphism.
2. (b)
If $h$ is codense and open, then $h$ is one-one. If $h$ is codense, open and
onto, then $h$ is an isomorphism.
###### Lemma 4.6.
Suppose that $f:L\to M$ and $g:M\to N$ are $\mathcal{S}$-frame maps.
1. (a)
1. (i)
If $f$ and $g$ are both closed, then $g\circ f$ is closed.
2. (ii)
If $g\circ f$ is closed and $g$ is one-one, then $f$ is closed.
3. (iii)
If $g\circ f$ is closed and $f$ is onto, then $g$ is closed.
2. (b)
As above but replace “closed” by “open”.
## 5 Boolean properties for partial frames
The material in this section comes from [16].
We begin by recalling how matters stand in the case of full frames. A Boolean
frame is simply a frame that is also a Boolean algebra, that is, every element
has a complement. However, Booleanness can also be characterized in a
different way. For any frame $M$, let $M_{**}=\\{x^{**}:x\in M\\}$ where
$x^{*}=\bigvee\\{z\in M:z\wedge x=0\\}$, the pseudocomplement of $x$. The
frame map $p:M\to M_{**}$ given by $p(x)=x^{**}$ is called the Booleanization
of $M$. It is the least dense quotient of $M$, but is also the unique dense
Boolean quotient of $M$. A frame is then Boolean if and only if it is
isomorphic to its Booleanization.
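The Booleanization described above can be computed in a finite example. The sketch below is our own illustration: `M` is the open-set frame of a (hypothetical, chosen for illustration) three-point topological space, and `pseudocomplement` and `booleanization` follow the definitions of $x^{*}$ and $M_{**}$ directly.

```python
from itertools import combinations

# M: the open-set frame of the topology {∅, {1}, {1,2}, {1,2,3}} on {1,2,3}.
M = [frozenset(), frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]

def pseudocomplement(x, M):
    # x* = ⋁{z ∈ M : z ∧ x = 0}; in an open-set frame the join is the
    # union, which is again open.
    out = frozenset()
    for z in M:
        if not (z & x):
            out |= z
    return out

def booleanization(M):
    # M_** = {x** : x ∈ M}, the least dense quotient of M.
    return sorted({pseudocomplement(pseudocomplement(x, M), M) for x in M},
                  key=lambda s: (len(s), sorted(s)))
```

For this chain every nonempty open is dense, so $M_{**}$ collapses to the two-element frame; for a powerset (a Boolean frame) the Booleanization returns the whole frame, illustrating the characterization above.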
Following Madden’s lead in [19], in [11] we constructed a least dense quotient
for partial frames. The codomain need not be Boolean, however, as Madden
already noted in the case of $\kappa$-frames. We use his terminology,
“d-reduced”, to refer to those partial frames isomorphic to their least dense
quotients. We refer the reader to Definition 2.4(j) for our notation and
terminology in this regard.
The next result characterizes those $\mathcal{S}$-frames, $L$, that are
Boolean algebras, in several ways. These involve the free frame over $L$, the
congruence frame of $L$ and the relationship between these two entities.
###### Proposition 5.1.
Let $L$ be an $\mathcal{S}$-frame. The following are equivalent:
1. (1)
$L$ is Boolean; that is, every element of $L$ is complemented.
2. (2)
All principal $\mathcal{S}$-ideals in $\mathcal{H}_{\mathcal{S}}L$ are
complemented.
3. (3)
The embedding $e:\mathcal{H}_{\mathcal{S}}L\to\mathcal{C}_{\mathcal{S}}L$ is
an isomorphism.
4. (4)
Every $\mathcal{S}$-congruence $\theta$ of $L$ is an arbitrary join of
$\mathcal{S}$-congruences of the form $\nabla_{a}$, for some $a\in L$.
In our experience with partial frames, it has often proved useful to compare
properties for a partial frame with the analogous properties for the
corresponding free frame. We do this now for Booleanness.
We recall that, if $M$ is a frame and $x\in M$, we call $x$ a dense element of
$M$ if $x^{*}=0$.
###### Proposition 5.2.
Let $L$ be an $\mathcal{S}$-frame. The following are equivalent:
1. (1)
The frame $\mathcal{H}_{\mathcal{S}}L$ is Boolean.
2. (2)
$\operatorname{\downarrow}1$ is the only dense element of
$\mathcal{H}_{\mathcal{S}}L$.
3. (3)
The $\mathcal{S}$-frame embedding $\nabla:L\to\mathcal{C}_{\mathcal{S}}L$ is
an isomorphism.
4. (4)
Every $\theta\in\mathcal{C}_{\mathcal{S}}L$ has the form $\theta=\nabla_{a}$,
for some $a\in L$.
We now provide four provably distinct conditions akin to Booleanness for
partial frames. In the setting of (full) frames they all amount to every
element being complemented.
###### Theorem 5.3.
Let $L$ be an $\mathcal{S}$-frame. In the following list of conditions, each
one implies the succeeding one, but not conversely.
1. (1)
$\mathcal{H}_{\mathcal{S}}L$ is a Boolean frame.
2. (2)
$L$ is a Boolean frame.
3. (3)
$L$ is a Boolean $\mathcal{S}$-frame.
4. (4)
$L$ is a d-reduced $\mathcal{S}$-frame.
(1)$\Rightarrow$(2): ($\not\Leftarrow$) See Example 5.5.
(2)$\Rightarrow$(3): ($\not\Leftarrow$) See Example 5.6.
(3)$\Rightarrow$(4): ($\not\Leftarrow$) See Example 5.4.
###### Example 5.4.
Let $\mathcal{S}$ designate countable subsets, and consider the $\sigma$-frame
$L=\mathcal{P}_{C}(\mathbb{R})$, which consists of all countable subsets of
$\mathbb{R}$ together with $\mathbb{R}$ as the top element. Countable join is
union, binary meet is intersection.
Here $(X,Y)\in\pi_{L}$ if and only if, for any countable
subset $U$, $U\cap X=\emptyset\Longleftrightarrow U\cap Y=\emptyset$, which
forces $X=Y$. So $\pi_{L}$ is the diagonal $\nabla_{0}$, which makes
$\mathcal{P}_{C}(\mathbb{R})$ d-reduced. However,
$\mathcal{P}_{C}(\mathbb{R})$ is clearly not Boolean: no nonempty countable
set has a complement in $L$.
###### Example 5.5.
Let $\mathcal{S}$ designate countable subsets, and let $\mathcal{L}$ consist
of all subsets of $\mathbb{R}$. Clearly $\mathcal{L}$ is a Boolean frame. We
show that $\mathcal{H}_{\mathcal{S}}\mathcal{L}$ is not Boolean, using
condition (4) of Proposition 5.2.
Let
$\mathcal{I}=\\{X\subseteq\mathbb{R}:X\cap(\mathbb{R}\backslash\mathbb{Q})\textrm{
is countable}\\}$. Then $\mathcal{I}$ is a $\sigma$-ideal of $\mathcal{L}$;
that is, a downset closed under countable unions. By Definition 2.4(g)(iv),
$\bigvee\limits_{X\in\mathcal{I}}\nabla_{X}=\bigcup\limits_{X\in\mathcal{I}}\nabla_{X}$
and this cannot have the form $\nabla_{Z}$ for any $Z\in\mathcal{L}$, since
that would require $Z\supseteq X$ for all $X\in\mathcal{I}$, and hence
$Z=\mathbb{R}$, a contradiction.
###### Example 5.6.
Let $L$ consist of all countable and co-countable subsets of the real line,
and let $\mathcal{S}$ designate countable subsets. Clearly $L$ is a Boolean
$\sigma$-frame, but not a complete lattice, so not a frame.
## 6 Comparing congruences on a partial frame and its free frame
The material in this section comes from [16].
In this section, for a partial frame $L$, we compare
$\mathcal{C}_{\mathcal{S}}L$, the frame of $\mathcal{S}$-congruences of $L$,
with $\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$, the frame of (frame)
congruences on $\mathcal{H}_{\mathcal{S}}L$, the free frame over $L$. The
universal property of the embedding $\nabla:L\to\mathcal{C}_{\mathcal{S}}L$
provides a frame map
$E_{L}:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$.
We give an explicit description of this map, and show that it provides a
natural transformation.
We then turn our attention to its right adjoint
$D_{L}:\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)\to\mathcal{C}_{\mathcal{S}}L$.
Again, we provide an explicit description of this function, including an
interesting and useful action on closed congruences (Lemma 6.8).
###### Definition 6.1.
Let $L$ be an $\mathcal{S}$-frame. Consider the square formed by the maps
$\operatorname{\downarrow}:L\to\mathcal{H}_{\mathcal{S}}L$,
$\nabla:L\to\mathcal{C}_{\mathcal{S}}L$,
$\nabla:\mathcal{H}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$
and $E_{L}:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$.
By the universal property of $\nabla:L\to\mathcal{C}_{\mathcal{S}}L$ there
exists a unique frame map
$E_{L}:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$
such that $E_{L}\circ\nabla=\nabla\circ\operatorname{\downarrow}$; that is,
for all $a\in L$
$E_{L}(\nabla_{a})=\nabla_{\operatorname{\downarrow}a}.$
###### Lemma 6.2.
Let $L$ be an $\mathcal{S}$-frame.
1. (1)
For $\theta\in\mathcal{C}_{\mathcal{S}}L$, $E(\theta)$ is the frame congruence
on $\mathcal{H}_{\mathcal{S}}L$ generated by
$\\{(\operatorname{\downarrow}x,\operatorname{\downarrow}y):(x,y)\in\theta\\}$;
this is denoted by
$E(\theta)=\langle(\operatorname{\downarrow}x,\operatorname{\downarrow}y):(x,y)\in\theta\rangle$.
2. (2)
The frame map
$E:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$ is
dense.
###### Corollary 6.3.
Let $L$ be an $\mathcal{S}$-frame and
$E:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$ given
as in Definition 6.1. For all $a\in L$:
1. (1)
$E(\nabla_{a})=\nabla_{\operatorname{\downarrow}a}$
2. (2)
$E(\Delta_{a})=\Delta_{\operatorname{\downarrow}a}$
###### Remark 6.4.
Let $L$ be an $\mathcal{S}$-frame. The embedding
$e:\mathcal{H}_{\mathcal{S}}L\to\mathcal{C}_{\mathcal{S}}L$ of Definition 2.5
can be incorporated into the diagram of Definition 6.1 as follows:
(the square of Definition 6.1, augmented with the map
$e:\mathcal{H}_{\mathcal{S}}L\to\mathcal{C}_{\mathcal{S}}L$)
Note that
* •
the upper triangle commutes, since $e\circ\operatorname{\downarrow}=\nabla$.
* •
the lower triangle commutes, since, for $I\in\mathcal{H}_{\mathcal{S}}L$,
$E\circ e(I)=E(\bigvee\limits_{i\in I}\nabla_{i})=\bigvee\limits_{i\in
I}E(\nabla_{i})=\bigvee\limits_{i\in
I}\nabla_{\operatorname{\downarrow}i}=\nabla_{I}$.
Alternatively, this can be seen because the outer diagram commutes and every
$\mathcal{S}$-ideal is generated by principal $\mathcal{S}$-ideals.
###### Proposition 6.5.
The function $E_{L}$ provides a natural transformation from the functor
$\mathcal{C}_{\mathcal{S}}$ to the functor
$\mathcal{C}\mathcal{H}_{\mathcal{S}}$.
We now define, for any $\mathcal{S}$-frame $L$, the function $D_{L}$. In a
subsequent lemma, $D_{L}$ is seen to be the right adjoint of the frame map
$E_{L}$.
###### Definition 6.6.
Let $L$ be an $\mathcal{S}$-frame, and $\Phi$ a frame congruence on the frame
$\mathcal{H}_{\mathcal{S}}L$. Define
$D_{L}(\Phi)=\\{(x,y)\in L\times
L:(\operatorname{\downarrow}x,\operatorname{\downarrow}y)\in\Phi\\}.$
###### Lemma 6.7.
Let $L$ be an $\mathcal{S}$-frame.
1. (1)
For any frame congruence $\Phi$ on $\mathcal{H}_{\mathcal{S}}L$, $D_{L}(\Phi)$
is an $\mathcal{S}$-congruence on $L$.
2. (2)
The function
$D_{L}:\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)\to\mathcal{C}_{\mathcal{S}}L$
is the right adjoint of the frame map
$E_{L}:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$
of Definition 6.1.
3. (3)
The function $D_{L}$ preserves bottom, top and arbitrary meets.
We now provide further properties of $D$, including its action on certain
special congruences. We note that the proof of Lemma 6.8(a) uses the fact
that, for $I$ an $\mathcal{S}$-ideal of an $\mathcal{S}$-frame $L$,
$\bigvee\limits_{i\in I}\nabla_{i}=\bigcup\limits_{i\in I}\nabla_{i}$. This is
not immediately obvious, but was proved in [15] Lemma 3.1.
###### Lemma 6.8.
Let $L$ be an $\mathcal{S}$-frame, and $D$ as in Definition 6.6.
1. (1)
For all $I\in\mathcal{H}_{\mathcal{S}}L$, $D(\nabla_{I})=\bigcup\limits_{i\in
I}\nabla_{i}.$
2. (2)
For all $a\in L$,
1. (a)
$D(\nabla_{\operatorname{\downarrow}a})=\nabla_{a}$
2. (b)
$D(\Delta_{\operatorname{\downarrow}a})=\Delta_{a}$
3. (3)
For $I\in\mathcal{H}_{\mathcal{S}}L$, $I$ is principal $\Longleftrightarrow
D(\nabla_{I})\vee D(\Delta_{I})=\nabla_{1}$.
###### Definition 6.9.
Let $M$ be a full frame. For any $a\in M$ we say $a$ is an
$\mathcal{S}$-Lindelöf element of $M$ if the following condition holds:
If $a=\bigvee B$ for some $B\subseteq M$, then $a=\bigvee D$ for some
designated subset $D$ of $M$ such that $D\subseteq B$.
See [7] for details about this notion. In particular, Lemma 4.3 of that paper
characterizes the $\mathcal{S}$-Lindelöf elements of
$\mathcal{H}_{\mathcal{S}}L$ as being the principal $\mathcal{S}$-ideals.
The next result characterizes those rather special $\mathcal{S}$-frames $L$
for which $E_{L}$ is an isomorphism.
###### Theorem 6.10.
Let $L$ be an $\mathcal{S}$-frame. The following are equivalent:
1. (1)
The embedding $\operatorname{\downarrow}:L\to\mathcal{H}_{\mathcal{S}}L$ is an
isomorphism.
2. (2)
Every $\mathcal{S}$-ideal of $L$ is principal.
3. (3)
$L$ is a frame and every element of $L$ is $\mathcal{S}$-Lindelöf.
4. (4)
The frame map
$E:\mathcal{C}_{\mathcal{S}}L\to\mathcal{C}(\mathcal{H}_{\mathcal{S}}L)$ is an
isomorphism.
The equivalent conditions of Theorem 6.10 might seem rather strong. Here are
some examples which show that these can obtain.
###### Example 6.11.
The conditions of Theorem 6.10 hold in the following examples:
* •
$\mathcal{S}$ selects finite subsets and $L$ is a finite frame.
* •
$\mathcal{S}$ selects countable subsets, and $L$ consists of the open subsets
of the real line.
* •
$\mathcal{S}$ selects finite subsets, or $\mathcal{S}$ selects countable
subsets, and $L$ consists of the cofinite subsets of the real line, together
with the empty set.
## 7 Closed and open embeddings into the free frame and the congruence frame
###### Theorem 7.1.
Let $L$ be an $\mathcal{S}$-frame and
$\mbox{$\downarrow$}:L\to\mathcal{H}_{\mathcal{S}}L$ the embedding into its
free frame.
1. (a)
The map $\downarrow$ has a right adjoint iff $\downarrow$ is an isomorphism.
2. (b)
The map $\downarrow$ is closed iff $\downarrow$ is an isomorphism.
3. (c)
The map $\downarrow$ has a left adjoint iff $L$ is a complete lattice.
4. (d)
The map $\downarrow$ is open iff $L$ is a frame.
###### Theorem 7.2.
Let $L$ be an $\mathcal{S}$-frame and $\nabla:L\to\mathcal{C}_{\mathcal{S}}L$
the embedding into its congruence frame.
1. (a)
The map $\nabla$ is closed iff $\nabla$ is an isomorphism.
2. (b)
The map $\nabla$ is open iff $L$ is a Boolean frame.
## References
* [1] Adámek, J., H. Herrlich and G. Strecker, “Abstract and Concrete Categories”, John Wiley & Sons Inc., New York, 1990. ISBN 13: 9780471609223 Available online at:
https://www.tac.mta.ca/tac/reprints/articles/17/tr17.pdf
* [2] Banaschewski, B., $\sigma$-frames, unpublished manuscript, 1980. Available online at:
https://math.chapman.edu/CECAT/members/Banaschewski%20Sigma-Frames.pdf
* [3] Banaschewski, B., and C.R.A. Gilmour, _Realcompactness and the Cozero Part of a Frame_ , Appl. Categ. Struct. 9 (2001) 395-417.
https://doi.org/10.1023/A:1011225712426
* [4] Frith, J., and A. Schauerte, _Uniformities and covering properties for partial frames (I)_ , Categ. General Alg. Struct. Appl. 2(1) (2014), 1-21. Available online at:
https://cgasa.sbu.ac.ir/article_6481_216dfcc250ed5622b17a8cd2139f700c.pdf
* [5] Frith, J., and A. Schauerte, _Uniformities and covering properties for partial frames (II)_ , Categ. General Alg. Struct. Appl.2(1) (2014), 23-35. Available online at:
https://cgasa.sbu.ac.ir/article_6798.html
* [6] Frith, J., and A. Schauerte, _Completions of uniform partial frames_ , Acta Mathematica Hungarica 147(1) (2015) 116-134.
https://doi.org/10.1007/s10474-015-0514-9
* [7] Frith, J., and A. Schauerte, _The Stone- $\check{\textrm{C}}$ech Compactification of a Partial Frame via Ideals and Cozero Elements_, Quaestiones Math. 39 (1) (2016) 115-134.
https://doi.org/10.2989/16073606.2015.1023866
* [8] Frith, J., and A. Schauerte, _Compactifications of partial frames via strongly regular ideals_ , Mathematica Slovaca, 68 (2) (2016) 285-298.
https://doi.org/10.1515/ms-2017-0100
* [9] Frith, J., and A. Schauerte, _Coverages Give Free Constructions for Partial Frames_ , Appl. Categ. Struct, 25(3) (2017) 303-321.
https://doi.org/10.1007/s10485-015-9417-8
* [10] Frith, J., and A. Schauerte, _One-point compactifications and continuity for partial frames_ , Categ. General Alg. Struct. Appl. 7 (2017) 57-88.
https://cgasa.sbu.ac.ir/article_43180_02e474fcbfa63e236d1fbd237390dba8.pdf
* [11] Frith, J., and A. Schauerte, _The congruence frame and the Madden quotient for partial frames_ , Algebra Univers. 79 (2018) Article 73.
https://doi.org/10.1007/s00012-018-0554-4
* [12] Frith, J., and A. Schauerte, _Meet-semilattice congruences on a frame_ , Appl. Categ. Struct 26(5) (2018) 997—1013.
https://doi.org/10.1007/s10485-018-9521-7
* [13] Frith, J., and A. Schauerte, _Partial frames and filter spaces_ , Topology and its Applications 263, (2019) 61-73.
https://doi.org/10.1016/j.topol.2019.05.021
* [14] Frith, J., and A. Schauerte, _Compactifications and reflections of partial spaces via partial frames_ , Topology and its Applications 273 (2020).
https://doi.org/10.1016/j.topol.2019.106982
* [15] Frith, J., and A. Schauerte, _A look at the structure of congruence frames by means of Heyting congruences_ , Quaest. Math. appeared online September 2021.
https://doi.org/10.2989/16073606.2021.1972052
* [16] Frith, J., and A. Schauerte, _Variants of Booleanness: Congruences of a partial frame versus those of its congruence frame_ , Math. Slovaca, accepted August 2021.
https://doi.org/10.2989/16073606.2021.1972052
* [17] Johnstone, P.T., “Stone Spaces”, Cambridge University Press, Cambridge, 1982. ISBN 978-0521337793
* [18] Mac Lane, S., “Categories for the Working Mathematician”, Springer-Verlag, Heidelberg, 1971. ISBN 978-1-4757-4721-8.
* [19] Madden, J.J., _$\kappa$ -frames_, J. Pure Appl Algebra 70 (1991) 107-127.
https://doi.org/10.1016/0022-4049(91)90011-P
* [20] Manuell, G., _A special class of congruences on $\kappa$-frames_, Algebra Univers. 78 (2017) 125–130.
https://doi.org/10.1007/s00012-017-0439-y
* [21] Paseka, J., _Covers in Generalized Frames_ , in: General Algebra and Ordered Sets (Horni Lipova 1994), Palacky Univ. Olomouc, Olomouc pp. 84-99. Available online at:
http://www.math.muni.cz/~paseka/ftp/papers/cigf.ps
* [22] Picado, J., and A. Pultr, “Frames and Locales”, Springer, Basel, 2012. ISBN 978-3-0348-0154-6.
* [23] Zenk, E.R., _Categories of Partial Frames_ , Algebra Univers. 54 (2005) 213-235.
https://doi.org/10.1007/s00012-005-1939-8
* [24] Zhao, D., _Nuclei on $Z$-Frames_, Soochow J. Math. 22 (1) (1996) 59-74.
* [25] Zhao, D., _On Projective $Z$-frames_, Canad. Math. Bull. 40(1) (1997) 39-46.
https://doi.org/10.4153/CMB-1997-004-4
# Invariant and Preserving Transforms for Cross Ratio of 4-Points in a line on
Desargues Affine Plane
Orgest ZAKA: Department of Mathematics-Informatics, Faculty of
Economy and Agribusiness, Agricultural University of Tirana, Tirana, Albania
<EMAIL_ADDRESS>
James F. PETERS: Department of Electrical & Computer Engineering, University
of Manitoba, WPG, MB, R3T 5V6, Canada and Department of Mathematics, Faculty
of Arts and Sciences, Adi̇yaman University, 02040 Adi̇yaman, Turkey
<EMAIL_ADDRESS>
Dedicated to Girard Desargues and Karl G. C. von Staudt
###### Abstract.
This paper presents advances in the geometry of transforms for the cross
ratio of four points in a line in the Desargues affine plane. The results
given here rest on the axiomatics of the Desargues affine plane, on the
definitions of addition and multiplication of points on a line in this plane,
and on skew-field properties. We study properties of and results on some
transforms of the cross ratio of four collinear points, which we divide into
two categories, _Invariant_ and _Preserving_ transforms for the cross ratio.
The results of this paper are: (1) the cross ratio of four points is
_invariant_ under the transforms inversion, natural translation, natural
dilation and the Möbius transform in a line of the Desargues affine plane;
(2) the cross ratio of four points is _preserved_ under parallel projections,
translations and dilations in the Desargues affine plane.
###### Key words and phrases:
Cross Ratio, Skew-Field, Desargues Affine Plane
###### 2010 Mathematics Subject Classification:
51-XX; 51Axx; 51A30; 51E15, 51N25, 30C20, 30F40
The research has been supported by the Natural Sciences & Engineering Research
Council of Canada (NSERC) discovery grant 185986, Instituto Nazionale di Alta
Matematica (INdAM) Francesco Severi, Gruppo Nazionale per le Strutture
Algebriche, Geometriche e Loro Applicazioni grant 9 920160 000362, n.prot U
2016/000036 and Scientific and Technological Research Council of Turkey
(TÜBİTAK) Scientific Human Resources Development (BIDEB) under grant no:
2221-1059B211301223.
## 1\. Introduction and Preliminaries
Influenced by recently achieved results on the ratio of 2 and 3 points (see
[17], [10]), but mainly by the results presented in [23] for the cross ratio
of four collinear points in a line $\ell^{OI}$ in Desargues affine planes, in
this paper we study some transforms regarding the cross ratio of four
collinear points (four points in a line $\ell^{OI}$ of the Desargues affine
plane). We divide these transforms into two categories,
_Invariant-Transforms_ and _Preserving-Transforms_.
Earlier, we defined addition and multiplication of points in a line of the
Desargues affine plane, and we proved that on each line of the Desargues
affine plane one can construct a skew-field with respect to these two
operations, so $(\ell^{OI},+,\cdot)$ is a skew-field. This construction was
achieved simply and constructively, using elementary geometry and only the
basic axioms of the Desargues affine plane (see [16], [4], [14], [22]). In
this paper, we consider dilations and translations entirely in the Desargues
affine plane (see [19], [18], [14], [22]).
The foundations for the study of the connections between axiomatic geometry
and algebraic structures were set forth by D. Hilbert [7]. Classic references
in this direction include E. Artin [1], D.R. Hughes and F.C. Piper [8],
H.S.M. Coxeter [3], Marcel Berger [2], Robin Hartshorne [5], etc. In our
earlier works [19, 16, 4, 18, 15, 14, 22, 24, 21, 20] we brought up quite a
few interesting facts about the association of algebraic structures with
affine planes and with Desargues affine planes, and vice versa.
In this paper, all results are based on geometric intuition, on the
axiomatics of the Desargues affine plane and on skew-field properties; we
utilize a method that is naive and direct, without requiring the concept of
coordinates.
### 1.1. Desargues Affine Plane
Let $\mathcal{P}$ be a nonempty set and $\mathcal{L}$ a family of subsets of
$\mathcal{P}$. The elements $P$ of $\mathcal{P}$ are called points and an
element $\ell$ of $\mathcal{L}$ is called a line.
###### Definition 1.
The incidence structure $\mathcal{A}=(\mathcal{P},\mathcal{L},\mathcal{I})$
is called an affine plane if it satisfies the following axioms:
1o:
For any two distinct points $P,Q\in\mathcal{P}$, there is exactly one line
$\ell\in\mathcal{L}$ such that $P,Q\in\ell$.
2o:
For each point $P\in\mathcal{P}$ and each line $\ell\in\mathcal{L}$ with
$P\not\in\ell$, there is exactly one line $\ell^{\prime}\in\mathcal{L}$ such
that $P\in\ell^{\prime}$ and $\ell\cap\ell^{\prime}=\emptyset$ (Playfair
Parallel Axiom [11]). Put another way, if the point $P\not\in\ell$, then
there is a unique line $\ell^{\prime}$ on $P$ missing $\ell$ [12].
3o:
There is a 3-subset of points $\left\\{P,Q,R\right\\}\subset\mathcal{P}$
which is not a subset of any line $\ell$ in the plane. Put another way, there
exist three non-collinear points in $\mathcal{P}$ [12].
_Desargues’ Axiom, circa 1630_ [9, §3.9, pp. 60-61] [13]. Let
$A,B,C,A^{\prime},B^{\prime},C^{\prime}\in\mathcal{P}$ and let pairwise
distinct lines
$\ell^{AA^{\prime}},\ell^{BB^{\prime}},\ell^{CC^{\prime}},\ell^{AC},\ell^{A^{\prime}C^{\prime}}\in\mathcal{L}$
be such that
$\displaystyle\ell^{AA^{\prime}}\parallel\ell^{BB^{\prime}}\parallel\ell^{CC^{\prime}}\ \mbox{(Fig.~1(a))}\quad\mbox{or}\quad\ell^{AA^{\prime}}\cap\ell^{BB^{\prime}}\cap\ell^{CC^{\prime}}=P\ \mbox{(Fig.~1(b))},$
$\displaystyle\ell^{AB}\parallel\ell^{A^{\prime}B^{\prime}}\quad\mbox{and}\quad\ell^{BC}\parallel\ell^{B^{\prime}C^{\prime}},$
$\displaystyle A,B\in\ell^{AB},\ A^{\prime},B^{\prime}\in\ell^{A^{\prime}B^{\prime}},\ A,C\in\ell^{AC},\ A^{\prime},C^{\prime}\in\ell^{A^{\prime}C^{\prime}},\ B,C\in\ell^{BC},\ B^{\prime},C^{\prime}\in\ell^{B^{\prime}C^{\prime}},$
$\displaystyle A\neq C,\ A^{\prime}\neq C^{\prime},\quad\ell^{AB}\neq\ell^{A^{\prime}B^{\prime}},\ \ell^{BC}\neq\ell^{B^{\prime}C^{\prime}}.$
Figure 1. Desargues’ Axiom: (a) for parallel lines
$\ell^{AA^{\prime}}\parallel\ell^{BB^{\prime}}\parallel\ell^{CC^{\prime}}$; (b) for
lines meeting in a single point $P$,
$\ell^{AA^{\prime}}\cap\ell^{BB^{\prime}}\cap\ell^{CC^{\prime}}=P$.
Then $\boldsymbol{\ell^{AC}\parallel\ell^{A^{\prime}C^{\prime}}}$.
$\blacksquare$
A Desargues affine plane is an affine plane that satisfies Desargues’ Axiom.
###### Notation 1.
Two triples of vertices $ABC$ and $A^{\prime}B^{\prime}C^{\prime}$ that fulfill
the conditions of Desargues’ Axiom are called _’Desarguesian’_.
### 1.2. Addition and Multiplication of points in a line of Desargues affine
plane
The construction of the point $C$ for the addition (Figure 2 (a)) and
multiplication (Figure 2 (b)) of points on the $\ell^{OI}$-line in an affine
plane is presented in the two algorithms below.
Addition Algorithm
Step.1:
$B_{1}\notin\ell^{OI}$
Step.2:
$\ell_{OI}^{B_{1}}\cap\ell_{OB_{1}}^{A}=P_{1}$
Step.3:
$\ell_{BB_{1}}^{P_{1}}\cap\ell^{OI}=C(=A+B)$
Multiplication Algorithm
Step.1:
$B_{1}\notin\ell^{OI}$
Step.2:
$\ell_{IB_{1}}^{A}\cap\ell^{OB_{1}}=P_{1}$
Step.3:
$\ell_{BB_{1}}^{P_{1}}\cap\ell^{OI}=C(=A\cdot B)$
Figure 2. (a) Addition of points in a line in affine plane, (b)
Multiplication of points in a line in affine plane
In [14] and [4] we proved that $(\ell^{OI},+,\cdot)$ is a skew field in the
Desargues affine plane, and a field (commutative skew field) in the Pappus
affine plane.
###### Definition 2.
A parallel projection between two lines in the Desargues affine plane is a
function
$P_{P}:\ell_{1}\to\ell_{2},\quad\forall A,B\in\ell_{1},\quad
AP_{P}(A)||BP_{P}(B)$
This function is clearly a bijection between any two lines in a Desargues
affine plane, and for this reason it can also be regarded as an isomorphism
between the two lines.
###### Definition 3.
[18] A dilatation of an affine plane
$\mathcal{A}=(\mathcal{P},\mathcal{L},\mathcal{I})$ is a collineation
$\delta$ such that $\delta{(PQ)}||PQ$ for all $P\neq Q\in\mathcal{P}$.
###### Definition 4.
[18] A translation of an affine plane
$\mathcal{A}=(\mathcal{P},\mathcal{L},\mathcal{I})$ is either the identity
dilatation $id_{\mathcal{P}}$ or a dilatation with no fixed points.
Some well-known results on translations and dilatations in Desargues affine
planes:
* •
The dilatation set $\textbf{Dil}_{\mathcal{A}}$ of an affine plane $\mathcal{A}$
forms a group with respect to composition $\circ$ ([18]).
* •
The translation set $\textbf{Tr}_{\mathcal{A}}$ of an affine plane $\mathcal{A}$
forms a group with respect to composition $\circ$, which is a subgroup of the
dilatation group $\left(\textbf{Dil}_{\mathcal{A}},\circ\right)$ ([18]).
* •
In an affine plane, the group $\left(\textbf{Tr}_{\mathcal{A}},\circ\right)$ of
translations is a normal subgroup of the group of dilatations
$\left(\textbf{Dil}_{\mathcal{A}},\circ\right)$ ([18]).
* •
Every dilatation in a Desargues affine plane
$\mathcal{A}_{\mathcal{D}}=(\mathcal{P},\mathcal{L},\mathcal{I})$ that maps
a line to itself is an automorphism of the skew field constructed on that line
$\ell\in\mathcal{L}$ of the plane $\mathcal{A}_{\mathcal{D}}$ ([19]).
* •
Every translation in a Desargues affine plane
$\mathcal{A}_{\mathcal{D}}=(\mathcal{P},\mathcal{L},\mathcal{I})$ that maps
a line to itself is an automorphism of the skew field constructed on that line
$\ell\in\mathcal{L}$ of the plane $\mathcal{A}_{\mathcal{D}}$ ([19]).
* •
Each dilatation in a Desargues affine plane,
$\mathcal{A}_{\mathcal{D}}=(\mathcal{P},\mathcal{L},\mathcal{I})$ is an
isomorphism between skew-fields constructed over isomorphic lines
$\ell_{1},\ell_{2}\in\mathcal{L}$ of that plane ([22]).
* •
Each translation in a Desargues affine plane,
$\mathcal{A}_{\mathcal{D}}=(\mathcal{P},\mathcal{L},\mathcal{I})$ is an
isomorphism between skew-fields constructed over isomorphic lines
$\ell_{1},\ell_{2}\in\mathcal{L}$ of that plane ([22]).
### 1.3. Some algebraic properties of Skew Fields
In this section $K$ will denote a skew field [6] and $z[K]$ its center, which
is the set
$z[K]=\left\\{k\in K:\quad ak=ka,\quad\forall a\in K\right\\}.$
###### Proposition 1.
$z[K]$ is a commutative subfield of a skew field $K$.
Now let $p\in K$ be a fixed element of the skew field $K$. We denote by
$z_{K}(p)$ the centralizer of $p$ in $K$, that is, the set
$z_{K}(p)=\\{k\in K:pk=kp\\}.$
The centralizer $z_{K}(p)$ is a sub-skew field of $K$, but in general it is not
commutative.
Let $K$ be a skew field, $p\in K$, and let us denote by $[p_{K}]$ the
conjugacy class of $p$:
$[p_{K}]=\left\\{q^{-1}pq\quad,\quad q\in K\setminus\\{0\\}\right\\}$
If $p\in z[K]$, then $q^{-1}pq=p$ for all $q\in K\setminus\\{0\\}$.
### 1.4. Ratio of two and three points
In the paper [17] we carried out a detailed study of the ratio of two and of
three points on a line of the Desargues affine plane. Below we list some of
the results for the ratios of two and three points.
###### Definition 5.
[17] Let $A,B\in\ell^{OI}$ be two distinct points with $B\neq O$ in the
Desargues affine plane. We define the ratio of these two points to be the
point $R\in\ell^{OI}$ such that
$R=B^{-1}A,\qquad\text{ which we denote by }\qquad R=r(A:B)=B^{-1}A$
For a given ratio-point $R\in\ell^{OI}$ and a point $B\neq O$ on the line
$\ell^{OI}$, there is a uniquely determined point $A\in\ell^{OI}$ such that
$R=B^{-1}A=r(A:B)$.
Figure 3. Illustration of the ratio-point of 2 points on a line of the
Desargues affine plane, $R=r(A:B)=B^{-1}A$.
Some results for the ratio of 2 points in the Desargues affine plane (see [17]).
* •
If $A,B\in\ell^{OI}$ are two distinct points with $B\neq O$ in the Desargues
affine plane, then $r^{-1}(A:B)=r(B:A)$.
* •
For three collinear points $A,B,C\in\ell^{OI}$ with $C\neq O$, we have
$r(A+B:C)=r(A:C)+r(B:C).$
* •
For three collinear points $A,B,C\in\ell^{OI}$ with $C\neq O$, we have
1. (1)
$r(A\cdot B:C)=r(A:C)\cdot B.$
2. (2)
$r(A:B\cdot C)=C^{-1}r(A:C).$
* •
Let $A,B$ be points on the line $\ell^{OI}$ with $B\neq O$. Then
$r(A:B)=r(B:A)\Leftrightarrow A=B.$
* •
The ratio-map $r_{B}:\ell^{OI}\to\ell^{OI}$ is a bijection of the
$\ell^{OI}$-line in the Desargues affine plane.
* •
The set of ratio-maps $\mathcal{R}_{2}=\\{r_{B}(X),\forall X\in\ell^{OI}\\}$, for
a fixed point $B$ on the line $\ell^{OI}$, forms a skew field under the
’addition and multiplication’ of points. This skew field $(\mathcal{R}_{2},+,\cdot)$ is
a sub-skew field of the skew field $(\ell^{OI},+,\cdot)$.
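Since the line $\ell^{OI}$ carries a skew-field structure, the 2-point ratio identities above can be sanity-checked in any concrete skew field. The following sketch is not part of the paper's coordinate-free development; it verifies the identities in the quaternions over $\mathbb{Q}$, where the class `Q` and the sample points are illustrative choices.

```python
from fractions import Fraction as F

class Q:
    """Quaternions with rational coefficients: a concrete (noncommutative) skew field."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = (F(a), F(b), F(c), F(d))
    def __add__(self, o): return Q(*(x + y for x, y in zip(self.v, o.v)))
    def __sub__(self, o): return Q(*(x - y for x, y in zip(self.v, o.v)))
    def __neg__(self): return Q(*(-x for x in self.v))
    def __mul__(self, o):  # Hamilton product: the order of factors matters
        a, b, c, d = self.v
        e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(self):  # multiplicative inverse: conjugate over squared norm
        a, b, c, d = self.v
        n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(self, o): return self.v == o.v

def r(A, B):
    """Ratio of two points, r(A:B) = B^{-1} A, as in Definition 5."""
    return B.inv() * A

A, B, C = Q(1, 2, 0, 1), Q(0, 1, 1, 0), Q(2, 0, 1, 3)
assert r(A, B).inv() == r(B, A)          # r^{-1}(A:B) = r(B:A)
assert r(A + B, C) == r(A, C) + r(B, C)  # r(A+B:C) = r(A:C) + r(B:C)
assert r(A * B, C) == r(A, C) * B        # r(A.B:C) = r(A:C).B
```

Because the quaternion product is noncommutative, these checks exercise exactly the order of factors stated in the identities.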
Ratio of three points on a line of the Desargues affine plane (see [17]).
###### Definition 6.
If $A,B,C$ are three points on a line $\ell^{OI}$ (collinear) in Desargues
affine plane, then we define their ratio to be a point $R\in\ell^{OI}$, such
that:
$(B-C)\cdot R=A-C,\quad\mbox{concisely}\quad R=(B-C)^{-1}(A-C),$
and we mark this with $r(A,B;C)=(B-C)^{-1}(A-C)$.
Figure 4. Ratio of 3-Points in a line of Desargues affine plane $R=r(A,B;C)$.
Some Results for Ratio of 3-points in Desargues affine plane ([17]).
* •
For 3-points $A,B,C$ in a line $\ell^{OI}$ of Desargues affine plane, we have
that,
$r(-A,-B;-C)=r(A,B;C).$
* •
For 3-points $A,B,C$ on the line $\ell^{OI}$ in the Desargues affine plane, we
have
$r^{-1}(A,B;C)=r(B,A;C).$
* •
If $A,B,C$ are three distinct points, all different from the point $O$, on a
line $\ell^{OI}$ of the Desargues affine plane, then
$r(A^{-1},B^{-1};C^{-1})=B[r(A,B;C)]A^{-1}.$
* •
In the Pappus affine plane, for three points different from the point $O$ on
the $\ell^{OI}$-line, we have $r(A^{-1},B^{-1};C^{-1})=r(A,B;C)\cdot r(B,A;O).$
* •
The ratio-map $r_{BC}:\ell^{OI}\to\ell^{OI}$ is a bijection of the
$\ell^{OI}$-line in the Desargues affine plane.
* •
The set of ratio-maps $\mathcal{R}_{3}=\\{r_{BC}(X),\forall X\in\ell^{OI}\\}$,
for distinct fixed points $B,C$ on the $\ell^{OI}$-line, forms a skew field
under the ’addition and multiplication’ of points on the $\ell^{OI}$-line. This
skew field $(\mathcal{R}_{3},+,\cdot)$ is a sub-skew field of the skew field
$(\ell^{OI},+,\cdot)$.
### 1.5. Cross-Ratio in a line of Desargues affine plane
Let us have the line $\ell^{OI}$ in the Desargues affine plane
$\mathcal{A_{D}}$, and four points $A,B,C,D\in\ell^{OI}$.
###### Definition 7.
If $A,B,C,D$ are four points on a line $\ell^{OI}$ in the Desargues affine
plane $\mathcal{A_{D}}$, no three of them equal, then we define their
cross-ratio to be the point:
$c_{r}(A,B;C,D)=\left[(A-D)^{-1}(B-D)\right]\left[(B-C)^{-1}(A-C)\right]$
###### Definition 8.
If the line $\ell^{OI}$ in the Desargues affine plane is an infinite line (the
number of points on the line is infinite), we define:
$\displaystyle c_{r}(\infty,B;C,D)$ $\displaystyle=(B-D)(B-C)^{-1}$
$\displaystyle c_{r}(A,\infty;C,D)$ $\displaystyle=(A-D)^{-1}(A-C)$
$\displaystyle c_{r}(A,B;\infty,D)$ $\displaystyle=(A-D)^{-1}(B-D)$
$\displaystyle c_{r}(A,B;C,\infty)$ $\displaystyle=(B-C)^{-1}(A-C)$
From this definition and from the ratio Definition 6 we have:
* •
$c_{r}(A,B;C,D)=\left[(A-D)^{-1}(B-D)\right]\left[(B-C)^{-1}(A-C)\right]=r(B,A;D)\cdot
r(A,B;C).$
* •
$c_{r}(\infty,B;C,D)=(B-D)(B-C)^{-1}=[(D-B)^{-1}(C-B)]^{-1}=r^{-1}(C,D;B).$
* •
$c_{r}(A,\infty;C,D)=(A-D)^{-1}(A-C)=(D-A)^{-1}(C-A)=r(C,D;A)$.
* •
$c_{r}(A,B;\infty,D)=(A-D)^{-1}(B-D)=r(A,B;D)$.
* •
$c_{r}(A,B;C,\infty)=(B-C)^{-1}(A-C)=r(A,B;C)$.
Some results for Cross-Ratio of 4-collinear points in Desargues affine plane
(see [23]).
* •
If $A,B,C,D$ are distinct points in a $\ell^{OI}-$line, in Desargues affine
plane, then
$c_{r}(-A,-B;-C,-D)=c_{r}(A,B;C,D)\quad\text{and}\quad
c_{r}^{-1}(A,B;C,D)=c_{r}(A,B;D,C).$
* •
If $A,B,C,D$ are distinct points on a line in the Desargues affine plane and
$I$ is the unit point for the multiplication of points on the same line, then,
(a):
$I-c_{r}(A,B;C,D)=c_{r}(A,C;B,D)$
(b):
$c_{r}(A,D;B,C)=I-c_{r}^{-1}(A,B;C,D)$
(c):
$c_{r}(A,C;D,B)=[I-c_{r}(A,B;C,D)]^{-1}$
(d):
$c_{r}(A,D;C,B)=[c_{r}(A,B;C,D)-I]^{-1}c_{r}(A,B;C,D)$
* •
If $A,B,C,D$ are distinct points on a line in the Desargues affine plane, all
different from the zero-point $O$, and $I$ is the unit point for the
multiplication of points on the same line, then
$c_{r}(A^{-1},B^{-1};C^{-1},D^{-1})=A\cdot c_{r}(A,B;C,D)\cdot A^{-1}$
* •
If the point $A\in z[K]$ (center of skew field $K=(\ell^{OI},+,\cdot)$), then,
$c_{r}(A,C;B,D)=c_{r}(A^{-1},B^{-1};C^{-1},D^{-1}).$
* •
If $A,B,C,D\in\ell^{OI}$ are distinct points on a line in the Desargues affine
plane and $I$ is the unit point for the multiplication of points on the same
line, then the equation
$c_{r}(A,B;C,D)=c_{r}(B,A;D,C)$
holds if
(a):
the points $A,B,C,D$ are in the ’center of the skew-field’ $z[K]$;
(b):
the ratio-point $r(A,B;C)$ is in the ’center of the skew-field’;
(c):
the ratio-point $r(B,A;D)$ is in the ’center of the skew-field’;
(d):
the ratio-point $r(A,B;D)$ is in the centralizer of the point $r(A,B;C)$, or vice versa.
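The cross-ratio identities listed in this subsection can likewise be sanity-checked in a concrete skew field. The sketch below is an illustration, not part of the paper's coordinate-free development: it verifies the inverse rule, the effect of negating all four points (in the form used in the proof of Theorem 2), and identity (a), in the quaternions over $\mathbb{Q}$; the points are arbitrary sample values.

```python
from fractions import Fraction as F

class Q:
    """Quaternions with rational coefficients: a concrete (noncommutative) skew field."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = (F(a), F(b), F(c), F(d))
    def __add__(self, o): return Q(*(x + y for x, y in zip(self.v, o.v)))
    def __sub__(self, o): return Q(*(x - y for x, y in zip(self.v, o.v)))
    def __neg__(self): return Q(*(-x for x in self.v))
    def __mul__(self, o):  # Hamilton product: the order of factors matters
        a, b, c, d = self.v
        e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(self):  # multiplicative inverse: conjugate over squared norm
        a, b, c, d = self.v
        n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(self, o): return self.v == o.v

def cr(A, B, C, D):
    """c_r(A,B;C,D) = [(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)], as in Definition 7."""
    return (A - D).inv() * (B - D) * (B - C).inv() * (A - C)

I = Q(1)
A, B, C, D = Q(1, 2, 0, 1), Q(0, 1, 1, 0), Q(2, 0, 1, 3), Q(1, 1, 1, 1)
assert cr(A, B, C, D).inv() == cr(A, B, D, C)  # inverse rule
assert cr(-A, -B, -C, -D) == cr(A, B, C, D)    # negating all four points
assert I - cr(A, B, C, D) == cr(A, C, B, D)    # identity (a)
```

Exact rational arithmetic makes the equality checks exact, so a failing identity could not slip through as a rounding artifact.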
## 2\. Some Invariant Transforms for Cross-Ratio of 4-points in a line of
Desargues affine plane
In this section we present some transformations under whose action the
cross-ratio is invariant.
First we define these transforms and illustrate them in the corresponding
figures: Inversion in Fig. 5, Reflection in Fig. 6, Natural Translation in
Fig. 7, and Natural Dilation in Fig. 8.
###### Definition 9.
The inversion of points on the $\ell^{OI}$-line by a fixed point
$P\in\ell^{OI}$ is the map
$j_{P}:\ell^{OI}\to\ell^{OI},$
which satisfies the condition
$\forall A\in\ell^{OI}\quad j_{P}(A)=P\cdot A.$
Figure 5. Illustration of the inversion of points on a line of the Desargues
affine plane, $j_{P}(A)=P\cdot A$.
###### Notation 2.
The inversion of points on the $\ell^{OI}$-line,
$j_{P}:\ell^{OI}\to\ell^{OI},$
with $P=-I\in\ell^{OI}$, is called the _Involution_ or _Reflection about the
point $O$ on the $\ell^{OI}$-line_, and we have
$\forall A\in\ell^{OI}\quad j_{P}(A)=-I\cdot A=-A,$
where $-A$ is the opposite point of $A$ with respect to the addition of points
on the $\ell^{OI}$-line.
Figure 6. Illustration of the reflection of points on a line of the Desargues
affine plane, $j_{-I}(A)=-I\cdot A=-A$.
###### Definition 10.
The natural translation by a point $P$ of points on the $\ell^{OI}$-line is
the map
$\varphi_{P}:\ell^{OI}\to\ell^{OI},$
for a fixed $P\in\ell^{OI}$, which satisfies the condition
$\forall A\in\ell^{OI}\quad\varphi_{P}(A)=P+A.$
Figure 7. Illustration of the natural translation of points on a line of the
Desargues affine plane, $\varphi_{P}(A)=P+A$.
###### Definition 11.
The natural dilation of points on the $\ell^{OI}$-line is the map
$\delta_{n}:\ell^{OI}\to\ell^{OI},$
for a fixed integer $n\in\mathbb{Z}$, which satisfies the condition:
if $n>0$,
$\forall
A\in\ell^{OI}\quad\delta_{n}(A)=nA=\underbrace{A+A+\cdots+A}_{n-times},$
and if $n<0$,
$\forall
A\in\ell^{OI}\quad\delta_{n}(A)=nA=\underbrace{[-A]+[-A]+\cdots+[-A]}_{(-n)-times},$
where $-A=(-I)\cdot A$ is the opposite point of $A$ with respect to the
addition of points on the $\ell^{OI}$-line.
Figure 8. Illustration of the natural dilation of points on a line of the
Desargues affine plane, $\delta_{n}(A)=nA$.
###### Definition 12.
Let $B,C,D\in\ell^{OI}$ be three fixed points. The Möbius transform for the
cross-ratio is the map
$\mu:\ell^{OI}\to\ell^{OI},$
which satisfies the condition
$\forall X\in\ell^{OI},\quad\mu(X)=c_{r}(X,B;C,D).$
###### Theorem 1.
The cross-ratios are invariant under the natural translation by a point $P$.
###### Proof.
From the cross-ratio Definition 7 we have
$c_{r}(A,B;C,D)=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
so for the cross-ratio of the translated points we have
$\displaystyle
c_{r}[\varphi_{P}(A),\varphi_{P}(B);\varphi_{P}(C),\varphi_{P}(D)]$
$\displaystyle=c_{r}(A+P,B+P;C+P,D+P)$
$\displaystyle=[((A+P)-(D+P))^{-1}((B+P)-(D+P))]$
$\displaystyle\cdot[((B+P)-(C+P))^{-1}((A+P)-(C+P))]$
$\displaystyle=[(A+P-D-P)^{-1}(B+P-D-P)]$
$\displaystyle\cdot[(B+P-C-P)^{-1}(A+P-C-P)]$
$\displaystyle=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
$\displaystyle=c_{r}(A,B;C,D)$
∎
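Theorem 1 can be checked numerically in a concrete skew field. The following minimal sketch (an illustration, not part of the paper's development) verifies the translation invariance in the quaternions over $\mathbb{Q}$; the translation point $P$ and the sample points are arbitrary choices.

```python
from fractions import Fraction as F

class Q:
    """Quaternions with rational coefficients: a concrete (noncommutative) skew field."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = (F(a), F(b), F(c), F(d))
    def __add__(self, o): return Q(*(x + y for x, y in zip(self.v, o.v)))
    def __sub__(self, o): return Q(*(x - y for x, y in zip(self.v, o.v)))
    def __neg__(self): return Q(*(-x for x in self.v))
    def __mul__(self, o):  # Hamilton product: the order of factors matters
        a, b, c, d = self.v
        e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(self):  # multiplicative inverse: conjugate over squared norm
        a, b, c, d = self.v
        n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(self, o): return self.v == o.v

def cr(A, B, C, D):
    """c_r(A,B;C,D) = [(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)], as in Definition 7."""
    return (A - D).inv() * (B - D) * (B - C).inv() * (A - C)

P = Q(3, 1, 2, 0)          # translation point (arbitrary)
phi = lambda X: P + X      # natural translation, Definition 10

A, B, C, D = Q(1, 2, 0, 1), Q(0, 1, 1, 0), Q(2, 0, 1, 3), Q(1, 1, 1, 1)
assert cr(phi(A), phi(B), phi(C), phi(D)) == cr(A, B, C, D)
```

The check succeeds because every difference $\varphi_{P}(X)-\varphi_{P}(Y)$ collapses to $X-Y$, exactly as in the proof above.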
###### Theorem 2.
The cross-ratios are invariant under the natural dilation by a fixed
$n\in\mathbb{Z}$.
###### Proof.
From cross-ratio definition 7, we have
$c_{r}(A,B;C,D)=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
so for cross-ratio of _natural dilation-points_ we have that,
Case 1. For $n>0$, we have,
$\displaystyle c_{r}[\delta_{n}(A),\delta_{n}(B);\delta_{n}(C),\delta_{n}(D)]$
$\displaystyle=c_{r}(nA,nB;nC,nD)$ $\displaystyle=[(nA-nD)^{-1}(nB-nD)][(nB-
nC)^{-1}(nA-nC)]$ $\displaystyle=[(n(A-D))^{-1}n(B-D)][(n(B-C))^{-1}n(A-C)]$
$\displaystyle=[(A-D)^{-1}n^{-1}n(B-D)][(B-C)^{-1}n^{-1}n(A-C)]$
$\displaystyle=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
$\displaystyle=c_{r}(A,B;C,D)$
Case 2. For $n<0$, write $m=-n>0$, so that $n=-m$ with $m>0$; then
$\displaystyle c_{r}[\delta_{n}(A),\delta_{n}(B);\delta_{n}(C),\delta_{n}(D)]$
$\displaystyle=c_{r}(nA,nB;nC,nD)$
$\displaystyle=c_{r}([-m]A,[-m]B;[-m]C,[-m]D)$
$\displaystyle=[([-m]A-[-m]D)^{-1}([-m]B-[-m]D)]$
$\displaystyle\qquad\cdot[([-m]B-[-m]C)^{-1}([-m]A-[-m]C)]$
$\displaystyle=[(m[-A]-m[-D])^{-1}(m[-B]-m[-D])]$
$\displaystyle\qquad\qquad\cdot[(m[-B]-m[-C])^{-1}(m[-A]-m[-C])]$
$\displaystyle=[(m([-A]-[-D]))^{-1}m([-B]-[-D])]$
$\displaystyle\qquad\qquad\cdot[(m([-B]-[-C]))^{-1}m([-A]-[-C])]$
$\displaystyle=[([-A]-[-D])^{-1}m^{-1}m([-B]-[-D])]$
$\displaystyle\qquad\qquad\cdot[([-B]-[-C])^{-1}m^{-1}m([-A]-[-C])]$
$\displaystyle=[([-A]-[-D])^{-1}([-B]-[-D])]$
$\displaystyle\qquad\qquad\cdot[([-B]-[-C])^{-1}([-A]-[-C])]$
$\displaystyle=[([-I][A-D])^{-1}([-I][B-D])]$
$\displaystyle\qquad\qquad\cdot[([-I][B-C])^{-1}([-I][A-C])]$
$\displaystyle=[(A-D)^{-1}[-I]^{-1}[-I](B-D)]$
$\displaystyle\qquad\qquad\cdot[(B-C)^{-1}[-I]^{-1}[-I](A-C)]$
$\displaystyle=[(A-D)^{-1}([-I][-I])(B-D)]$
$\displaystyle\qquad\qquad\cdot[(B-C)^{-1}([-I][-I])(A-C)]$
$\displaystyle=[(A-D)^{-1}(I)(B-D)][(B-C)^{-1}(I)(A-C)]$
$\displaystyle=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
$\displaystyle=c_{r}(A,B;C,D)$
Recall that $[-I]^{-1}=-I$ and $(-I)\cdot(-I)=I$.
An alternative proof of Case 2: for $n<0$, write $m=-n>0$, so that $n=-m$
with $m>0$; then
$c_{r}[\delta_{n}(A),\delta_{n}(B);\delta_{n}(C),\delta_{n}(D)]=c_{r}(nA,nB;nC,nD)=c_{r}([-m]A,[-m]B;[-m]C,[-m]D)$
so,
$c_{r}([-m]A,[-m]B;[-m]C,[-m]D)=c_{r}(-[mA],-[mB];-[mC],-[mD])$
but from the results for the cross-ratio listed in Section 1, we have
$c_{r}(-A,-B;-C,-D)=c_{r}(A,B;C,D),\quad\text{for all distinct points
}A,B,C,D\in\ell^{OI},$
therefore
$c_{r}(-[mA],-[mB];-[mC],-[mD])=c_{r}(mA,mB;mC,mD)$
and from Case 1 (since $m>0$), we have
$c_{r}(mA,mB;mC,mD)=c_{r}(A,B;C,D).$
Hence,
$c_{r}[\delta_{n}(A),\delta_{n}(B);\delta_{n}(C),\delta_{n}(D)]=c_{r}(A,B;C,D).$
∎
###### Theorem 3.
The Cross-Ratios are invariant under Inversion with a given point
$P\in\ell^{OI}$.
###### Proof.
From cross-ratio definition 7, we have
$c_{r}(A,B;C,D)=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
so, for the cross-ratio of the points
$j_{P}(A),j_{P}(B),j_{P}(C),j_{P}(D)\in\ell^{OI}$ (the cross-ratio of the
inversion-points) we have
$\displaystyle c_{r}[j_{P}(A),j_{P}(B);j_{P}(C),j_{P}(D)]$
$\displaystyle=c_{r}(PA,PB;PC,PD)$ $\displaystyle=[(PA-PD)^{-1}(PB-PD)][(PB-
PC)^{-1}(PA-PC)]$ $\displaystyle=[(P(A-D))^{-1}P(B-D)][(P(B-C))^{-1}P(A-C)]$
$\displaystyle=[(A-D)^{-1}P^{-1}P(B-D)][(B-C)^{-1}P^{-1}P(A-C)]$
$\displaystyle=[(A-D)^{-1}(P^{-1}P)(B-D)][(B-C)^{-1}(P^{-1}P)(A-C)]$
$\displaystyle=[(A-D)^{-1}I(B-D)][(B-C)^{-1}I(A-C)]$
$\displaystyle=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
$\displaystyle=c_{r}(A,B;C,D)$
∎
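Theorem 3 admits the same kind of numerical sanity check in the quaternions over $\mathbb{Q}$. This is an illustration outside the paper's coordinate-free development; the inversion point $P$ and the sample points are arbitrary nonzero choices, and the special case $P=-I$ anticipates the reflection of Corollary 1.

```python
from fractions import Fraction as F

class Q:
    """Quaternions with rational coefficients: a concrete (noncommutative) skew field."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = (F(a), F(b), F(c), F(d))
    def __add__(self, o): return Q(*(x + y for x, y in zip(self.v, o.v)))
    def __sub__(self, o): return Q(*(x - y for x, y in zip(self.v, o.v)))
    def __neg__(self): return Q(*(-x for x in self.v))
    def __mul__(self, o):  # Hamilton product: the order of factors matters
        a, b, c, d = self.v
        e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(self):  # multiplicative inverse: conjugate over squared norm
        a, b, c, d = self.v
        n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(self, o): return self.v == o.v

def cr(A, B, C, D):
    """c_r(A,B;C,D) = [(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)], as in Definition 7."""
    return (A - D).inv() * (B - D) * (B - C).inv() * (A - C)

P = Q(1, 1, 0, 2)          # inversion point (arbitrary, nonzero)
j = lambda X: P * X        # inversion, Definition 9

A, B, C, D = Q(1, 2, 0, 1), Q(0, 1, 1, 0), Q(2, 0, 1, 3), Q(1, 1, 1, 1)
assert cr(j(A), j(B), j(C), j(D)) == cr(A, B, C, D)
# the special case P = -I is the reflection about O
assert cr(-A, -B, -C, -D) == cr(A, B, C, D)
```

As in the proof, the common left factor $P$ cancels in each product $[P(X-Y)]^{-1}\,P(Z-W)$, so noncommutativity does no harm.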
###### Corollary 1.
The Cross-Ratios are invariant under _reflection about the point $O$ in
$\ell^{OI}-$line_ in Desargues affine plane.
###### Proof.
This is the inversion by the point $(-I)\in\ell^{OI}$, and from the
cross-ratio Definition 7 we have
$c_{r}(A,B;C,D)=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
so, for the cross-ratio of the points
$j_{[-I]}(A),j_{[-I]}(B),j_{[-I]}(C),j_{[-I]}(D)\in\ell^{OI}$ (the cross-ratio
of the reflected points) we have
$\displaystyle c_{r}[j_{[-I]}(A),j_{[-I]}(B);j_{[-I]}(C),j_{[-I]}(D)]$
$\displaystyle=c_{r}([-I]A,[-I]B;[-I]C,[-I]D)$
$\displaystyle=[([-I]A-[-I]D)^{-1}([-I]B-[-I]D)]$
$\displaystyle\qquad\qquad\cdot[([-I]B-[-I]C)^{-1}([-I]A-[-I]C)]$
$\displaystyle=[([-I](A-D))^{-1}[-I](B-D)]$
$\displaystyle\qquad\qquad\cdot[([-I](B-C))^{-1}[-I](A-C)]$
$\displaystyle=[(A-D)^{-1}[-I]^{-1}[-I](B-D)]$
$\displaystyle\qquad\qquad\cdot[(B-C)^{-1}[-I]^{-1}[-I](A-C)]$
$\displaystyle=[(A-D)^{-1}([-I]^{-1}[-I])(B-D)]$
$\displaystyle\qquad\qquad\cdot[(B-C)^{-1}([-I]^{-1}[-I])(A-C)]$
$\displaystyle=[(A-D)^{-1}([-I][-I])(B-D)]$
$\displaystyle\qquad\qquad\cdot[(B-C)^{-1}([-I][-I])(A-C)]$
$\displaystyle=[(A-D)^{-1}I(B-D)][(B-C)^{-1}I(A-C)]$
$\displaystyle=[(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)]$
$\displaystyle=c_{r}(A,B;C,D)$
Here we used the fact that in the skew field $[-I]^{-1}=-I$ and $[-I][-I]=I$.
Hence
$c_{r}[j_{[-I]}(A),j_{[-I]}(B);j_{[-I]}(C),j_{[-I]}(D)]=c_{r}(A,B;C,D)$
∎
###### Theorem 4.
Cross-ratios are invariant under the Möbius transform.
###### Proof.
From the Möbius transform Definition 12 we have
$\mu(X)=c_{r}(X,B;C,D)=[(X-D)^{-1}(B-D)][(B-C)^{-1}(X-C)]$
To compute the cross-ratio of the points
$\mu(A),\mu(B),\mu(C),\mu(D)\in\ell^{OI}$, we first calculate these points
following the definition of the $\mu$-map:
* •
$\mu(A)=c_{r}(A,B;C,D)$,
* •
$\mu(B)=c_{r}(B,B;C,D)$, so
$\displaystyle\mu(B)$ $\displaystyle=[(B-D)^{-1}(B-D)][(B-C)^{-1}(B-C)]$
$\displaystyle=[I][I]$ $\displaystyle=I.$
Thus
$\mu(B)=I$
* •
$\mu(C)=c_{r}(C,B;C,D)$, so,
$\displaystyle\mu(C)$ $\displaystyle=[(C-D)^{-1}(B-D)][(B-C)^{-1}(C-C)]$
$\displaystyle=[(C-D)^{-1}(B-D)][(B-C)^{-1}O]$
$\displaystyle=[(C-D)^{-1}(B-D)][O]$ $\displaystyle=O$
Thus
$\mu(C)=O.$
* •
$\mu(D)=c_{r}(D,B;C,D)$, so,
$\displaystyle\mu(D)$ $\displaystyle=[(D-D)^{-1}(B-D)][(B-C)^{-1}(D-C)]$
$\displaystyle=[O^{-1}(B-D)][(B-C)^{-1}(D-C)]$ (where $O^{-1}=\infty$, the
point at infinity) $\displaystyle=[\infty][(B-C)^{-1}(D-C)]$ $\displaystyle=\infty$
Thus
$\mu(D)=\infty$
Now we calculate the cross-ratio of the points $\mu(A),\mu(B),\mu(C),\mu(D)$, and we have
$\displaystyle c_{r}[\mu(A),\mu(B);\mu(C),\mu(D)]$
$\displaystyle=c_{r}(\mu(A),I;O,\infty)$ $\displaystyle=r(\mu(A),I;O)$
$\displaystyle=(I-O)^{-1}(\mu(A)-O)$ $\displaystyle=(I)^{-1}(\mu(A))$
$\displaystyle=\mu(A)$ $\displaystyle=c_{r}(A,B;C,D)$
∎
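The special values $\mu(B)=I$ and $\mu(C)=O$ computed in the proof can be confirmed numerically in a concrete skew field. The sketch below (an illustration in the quaternions over $\mathbb{Q}$, not part of the paper's development; the fixed points $B,C,D$ are arbitrary sample values) also notes that evaluating $\mu(D)$ forces a division by zero, mirroring $\mu(D)=\infty$.

```python
from fractions import Fraction as F

class Q:
    """Quaternions with rational coefficients: a concrete (noncommutative) skew field."""
    def __init__(self, a, b=0, c=0, d=0):
        self.v = (F(a), F(b), F(c), F(d))
    def __add__(self, o): return Q(*(x + y for x, y in zip(self.v, o.v)))
    def __sub__(self, o): return Q(*(x - y for x, y in zip(self.v, o.v)))
    def __neg__(self): return Q(*(-x for x in self.v))
    def __mul__(self, o):  # Hamilton product: the order of factors matters
        a, b, c, d = self.v
        e, f, g, h = o.v
        return Q(a*e - b*f - c*g - d*h, a*f + b*e + c*h - d*g,
                 a*g - b*h + c*e + d*f, a*h + b*g - c*f + d*e)
    def inv(self):  # multiplicative inverse: conjugate over squared norm
        a, b, c, d = self.v
        n = a*a + b*b + c*c + d*d
        return Q(a/n, -b/n, -c/n, -d/n)
    def __eq__(self, o): return self.v == o.v

def cr(A, B, C, D):
    """c_r(A,B;C,D) = [(A-D)^{-1}(B-D)][(B-C)^{-1}(A-C)], as in Definition 7."""
    return (A - D).inv() * (B - D) * (B - C).inv() * (A - C)

O, I = Q(0), Q(1)
B, C, D = Q(0, 1, 1, 0), Q(2, 0, 1, 3), Q(1, 1, 1, 1)
mu = lambda X: cr(X, B, C, D)   # Moebius transform, Definition 12
assert mu(B) == I               # mu(B) = I, as in the proof
assert mu(C) == O               # mu(C) = O, as in the proof
# mu(D) is undefined here (division by zero), matching mu(D) = infinity
```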
## 3\. Transforms which Preserving Cross-Ratios of 4-points in a line of
Desargues affine plane
In this section we prove that parallel projections, translations, and
dilatations of a line onto itself or onto an isomorphic line in the Desargues
affine plane _preserve_ the cross-ratio of 4-points. The geometric
interpretations of the theorems below are quite beautiful, even from the
Euclidean viewpoint, despite the rather rough figures. For this reason we give
the proofs algebraically, always keeping in mind the close connection between
a skew field and a line in the Desargues affine plane, and the properties of
parallel projections, translations, and dilatations.
###### Theorem 5.
Translations $\varphi$ whose trace is the line $\ell^{OI}$ containing the
points $A,B,C,D$ preserve the cross-ratio of these points:
$\varphi(c_{r}(A,B;C,D))=c_{r}(\varphi(A),\varphi(B);\varphi(C),\varphi(D))$
###### Proof.
For the proof we use the properties of translations studied in the papers
[14], [18], [19], [22], and we argue algebraically. Whether the translation
$\varphi$ is an automorphism $\varphi:\ell^{OI}\rightarrow\ell^{OI}$ or an
isomorphism $\varphi:\ell^{OI}\rightarrow\ell^{O^{\prime}I^{\prime}}$, where
$\ell^{OI}\neq\ell^{O^{\prime}I^{\prime}}$ and
$\ell^{OI}||\ell^{O^{\prime}I^{\prime}}$, the following equalities hold:
$\displaystyle\varphi(c_{r}(A,B;C,D))$
$\displaystyle=\varphi\left\\{\left[(A-D)^{-1}(B-D)\right]\left[(B-C)^{-1}(A-C)\right]\right\\}$
(since $\varphi$ is a homomorphism)
$\displaystyle=\varphi\left[(A-D)^{-1}(B-D)\right]\cdot\varphi\left[(B-C)^{-1}(A-C)\right]$
(again, since $\varphi$ is a homomorphism)
$\displaystyle=\left\\{\varphi[(A-D)^{-1}]\cdot\varphi(B-D)\right\\}\cdot\left\\{\varphi[(B-C)^{-1}]\cdot\varphi(A-C)\right\\}$
(since the translation $\varphi$ is bijective)
$\displaystyle=\left\\{[\varphi(A-D)]^{-1}\cdot\varphi(B-D)\right\\}\cdot\left\\{[\varphi(B-C)]^{-1}\cdot\varphi(A-C)\right\\}$
(since $\varphi$ is a homomorphism)
$\displaystyle=\left\\{[\varphi(A)-\varphi(D)]^{-1}\cdot[\varphi(B)-\varphi(D)]\right\\}\cdot\left\\{[\varphi(B)-\varphi(C)]^{-1}\cdot[\varphi(A)-\varphi(C)]\right\\}$
(by the cross-ratio Definition 7)
$\displaystyle=c_{r}(\varphi(A),\varphi(B);\varphi(C),\varphi(D))$
∎
###### Theorem 6.
The parallel projection between two lines $\ell_{1}$ and $\ell_{2}$ in the
Desargues affine plane preserves the cross-ratio of $4$-points:
###### Proof.
If $\ell_{1}||\ell_{2}$, the parallel projection is a translation and the
theorem holds by Theorem 5.
If the lines $\ell_{1}$ and $\ell_{2}$ are not parallel (so they meet in a
single point), we have $A,B,C,D\in\ell_{1}$ and
$P_{P}(A),P_{P}(B),P_{P}(C),P_{P}(D)\in\ell_{2}$. Also, since $P_{P}$ is a
bijection, we have
$\displaystyle P_{P}(c_{r}(A,B;C,D))$
$\displaystyle=P_{P}\left\\{\left[(A-D)^{-1}(B-D)\right]\left[(B-C)^{-1}(A-C)\right]\right\\}$
$\displaystyle=P_{P}[(A-D)^{-1}(B-D)]\cdot P_{P}[(B-C)^{-1}(A-C)]$
$\displaystyle=\left\\{P_{P}[(A-D)^{-1}]\cdot
P_{P}(B-D)\right\\}\cdot\left\\{P_{P}[(B-C)^{-1}]\cdot P_{P}(A-C)\right\\}$
$\displaystyle=\left\\{[P_{P}(A-D)]^{-1}\cdot
P_{P}(B-D)\right\\}\cdot\left\\{[P_{P}(B-C)]^{-1}\cdot P_{P}(A-C)\right\\}$
$\displaystyle=\left\\{[P_{P}(A)-P_{P}(D)]^{-1}\cdot[P_{P}(B)-P_{P}(D)]\right\\}$
$\displaystyle\cdot\left\\{[P_{P}(B)-P_{P}(C)]^{-1}\cdot[P_{P}(A)-P_{P}(C)]\right\\}$
$\displaystyle=c_{r}(P_{P}(A),P_{P}(B);P_{P}(C),P_{P}(D))$
∎
###### Theorem 7.
A dilatation $\delta$ with fixed point on the line $\ell^{OI}$ containing the
points $A,B,C,D$ preserves the cross-ratio of these points:
$\delta(c_{r}(A,B;C,D))=c_{r}(\delta(A),\delta(B);\delta(C),\delta(D))$
###### Proof.
For the proof we use the properties of dilatations studied in the papers
[14], [18], [19], [22], and we argue algebraically. Whether the dilatation
$\delta$ is an automorphism $\delta:\ell^{OI}\rightarrow\ell^{OI}$ or an
isomorphism $\delta:\ell^{OI}\rightarrow\ell^{O^{\prime}I^{\prime}}$, where
$\ell^{OI}\neq\ell^{O^{\prime}I^{\prime}}$ and
$\ell^{OI}||\ell^{O^{\prime}I^{\prime}}$, the following equalities hold:
$\displaystyle\delta(c_{r}(A,B;C,D))$
$\displaystyle=\delta\left\\{\left[(A-D)^{-1}(B-D)\right]\left[(B-C)^{-1}(A-C)\right]\right\\}$
$\displaystyle=\delta[(A-D)^{-1}(B-D)]\cdot\delta[(B-C)^{-1}(A-C)]$
$\displaystyle=\left\\{\delta[(A-D)^{-1}]\cdot\delta(B-D)\right\\}\cdot\left\\{\delta[(B-C)^{-1}]\cdot\delta(A-C)\right\\}$
$\displaystyle=\left\\{[\delta(A-D)]^{-1}\cdot\delta(B-D)\right\\}\cdot\left\\{[\delta(B-C)]^{-1}\cdot\delta(A-C)\right\\}$
$\displaystyle=\left\\{[\delta(A)-\delta(D)]^{-1}\cdot[\delta(B)-\delta(D)]\right\\}\cdot\left\\{[\delta(B)-\delta(C)]^{-1}\cdot[\delta(A)-\delta(C)]\right\\}$
$\displaystyle=c_{r}(\delta(A),\delta(B);\delta(C),\delta(D))$
∎
## 4\. Declarations
### Funding
The research by J.F. Peters was supported by the Natural Sciences &
Engineering Research Council of Canada (NSERC) discovery grant 185986 and
Instituto Nazionale di Alta Matematica (INdAM) Francesco Severi, Gruppo
Nazionale per le Strutture Algebriche, Geometriche e Loro Applicazioni grant 9
920160 000362, n.prot U 2016/000036 and Scientific and Technological Research
Council of Turkey (TÜBİTAK) Scientific Human Resources Development (BIDEB)
under grant no: 2221-1059B211301223.
### Conflict of Interest Statement
There is no conflict of interest with any funder.
### Authors’ contributions
The authors contribution is as follows: O. Zaka introduced the geometry and
its results in this paper. J.F. Peters refined and clarified various aspects
of the presented geometry and results in this paper.
## References
* [1] Emil Artin, _Geometric algebra_ , Intersci. Tracts Pure Appl. Math., vol. 3, Interscience Publishers, New York, NY, 1957 (English).
* [2] Marcel Berger, _Geometry. I, II. Transl. from the French by M. Cole and S. Levy_ , corrected 4th printing ed., Universitext, Berlin: Springer, 2009 (English).
* [3] H. S. M. Coxeter, _Introduction to geometry, 2nd ed._ , John Wiley & Sons, Inc., New York-London-Sydney, 1969, xvii+469 pp., MR0123930, MR0346644.
* [4] K. Filipi, O. Zaka, and A. Jusufi, _The construction of a corp in the set of points in a line of desargues affine plane_ , Matematicki Bilten 43 (2019), no. 01, 1–23, ISSN 0351-336X (print), ISSN 1857–9914 (online).
* [5] R. Hartshorne, _Foundations of projective geometry_ , New York: W.A. Benjamin, Inc. 1967. VII, 167 p. (1967)., 1967.
* [6] I.N. Herstein, _Topics in algebra, 2nd ed._ , Xerox College Publishing, Lexington, Mass., 1975, xi+388 pp., MR0356988; first edition in 1964, MR0171801 (detailed review).
* [7] D. Hilbert, _The foundations of geometry_ , The Open Court Publishing Co., La Salle, Ill., 1959, vii+143 pp., MR0116216.
* [8] D.R. Hughes and F.C. Piper, _Projective planes, graduate texts in mathematics, vol. 6_ , Springer-Verlag, Berlin, New York, 1973, x+291 pp., MR0333959.
* [9] A. Kryftis, _A constructive approach to affine and projective planes_ , Ph.D. thesis, University of Cambridge, Trinity College and Department of Pure Mathematics and Mathematical Statistics, 2015, supervisor: M. Hyland, v+170pp.,arXiv 1601.04998v1 19 Jan. 2016.
* [10] O.Zaka and J.F.Peters, _Progress in invariant and preserving transforms for the ratio of co-linear points in the desargues affine plane skew field_ , corr abs/2209.02636 (2022), arXiv:2209.02636.
* [11] G. Pickert, _Affine planes: An example of research on geometric structures_ , The Mathematical Gazette 57 (2004), no. 402, 278–291, MR0474017.
* [12] M. Prażmowska, _A proof of the projective Desargues axiom in the Desarguesian affine plane_ , Demonstratio Mathematica 37 (2004), no. 4, 921–924, MR2103894.
* [13] W. Szmielew, _Od geometrii afinicznej do euklidesowej (Polish) [From affine geometry to Euclidean geometry], rozważania nad aksjomatyką [an approach through axiomatics]_ , Biblioteka Matematyczna [Mathematics Library], Warsaw, 1981, 172 pp., ISBN: 83-01-01374-5, MR0664205.
* [14] O. Zaka, _Contribution to reports of some algebraic structures with affine plane geometry and applications_ , Ph.D. thesis, Polytechnic University of Tirana,Tirana, Albania, Department of Mathematical Engineering, 2016, supervisor: K. Filipi, vii+113pp.
* [15] O. Zaka, _Three vertex and parallelograms in the affine plane: Similarity and addition abelian groups of similarly $n$-vertexes in the Desargues affine plane_, Mathematical Modelling and Applications 3 (2018), no. 1, 9–15, http://doi:10.11648/j.mma.20180301.12.
* [16] O. Zaka and K. Filipi, _The transform of a line of Desargues affine plane in an additive group of its points_ , Int. J. of Current Research 8 (2016), no. 07, 34983–34990.
* [17] O. Zaka and J.F. Peters, _Advances in the geometry of the ratio of linear points in the Desargues affine plane skew field_ , corr abs/2208.12745 (2022), arXiv:2208.12745.
* [18] Orgest Zaka, _A description of collineations-groups of an affine plane_ , Libertas Mathematica (N.S.) 37 (2017), no. 2, 81–96, ISSN print: 0278 – 5307, ISSN online: 2182 – 567X, MR3828328.
* [19] Orgest Zaka, _Dilations of line in itself as the automorphism of the skew-field constructed over in the same line in desargues affine plane_ , Applied Mathematical Sciences 13 (2019), no. 5, 231–237.
* [20] Orgest Zaka and Mohanad A.Mohammed, _The endomorphisms algebra of translations group and associative unitary ring of trace-preserving endomorphisms in affine plane_ , Proyecciones 39 (2020), no. 4, 821–834 (English).
* [21] Orgest Zaka and Mohanad A. Mohammed, _Skew-field of trace-preserving endomorphisms, of translation group in affine plane_ , Proyecciones 39 (2020), no. 4, 835–850 (English).
* [22] Orgest Zaka and James F. Peters, _Isomorphic-dilations of the skew-fields constructed over parallel lines in the Desargues affine plane_ , Balkan J. Geom. Appl. 25 (2020), no. 1, 141–157 (English).
* [23] Orgest Zaka and James F. Peters, _Cross ratio geometry advances for four co-linear points in the desargues affine plane-skew field_ , corr abs/2209.14241 (2022), arXiv:2209.14241.
* [24] Orgest Zaka and James Francis Peters, _Ordered line and skew-fields in the Desargues affine plane_ , Balkan J. Geom. Appl. 26 (2021), no. 1, 141–156 (English).
# Functions tiling simultaneously with two arithmetic progressions
Mark Mordechai Etkind Department of Mathematics, Bar-Ilan University, Ramat-
Gan 5290002, Israel<EMAIL_ADDRESS>and Nir Lev Department of
Mathematics, Bar-Ilan University, Ramat-Gan 5290002, Israel
<EMAIL_ADDRESS>
(Date: September 21, 2023)
###### Abstract.
We consider measurable functions $f$ on $\mathbb{R}$ that tile simultaneously
by two arithmetic progressions $\alpha\mathbb{Z}$ and $\beta\mathbb{Z}$ at
respective tiling levels $p$ and $q$. We are interested in two main questions:
what are the possible values of the tiling levels $p,q$, and what is the least
possible measure of the support of $f$? We obtain sharp results which show
that the answers depend on arithmetic properties of $\alpha,\beta$ and $p,q$,
and in particular, on whether the numbers $\alpha,\beta$ are rationally
independent or not.
###### Key words and phrases:
Tiling, translates, doubly stochastic arrays, measure preserving graphs
###### 2010 Mathematics Subject Classification:
05B45, 15B51
Research supported by ISF Grant No. 1044/21 and ERC Starting Grant No. 713927.
## 1\. Introduction
### 1.1.
Let $f$ be a measurable function on $\mathbb{R}$, and
$\Lambda\subset\mathbb{R}$ be a countable set. We say that the function $f$
_tiles $\mathbb{R}$ at level $w$_ with the translation set $\Lambda$, or that
_$f+\Lambda$ is a tiling of $\mathbb{R}$ at level $w$_ (where $w$ is a
constant), if we have
$\sum_{\lambda\in\Lambda}f(x-\lambda)=w\quad\text{a.e.}$ (1.1)
and the series in (1.1) converges absolutely a.e.
In the same way one can define tiling by translates of a measurable function
$f$ on $\mathbb{R}^{d}$, or more generally, on any locally compact abelian
group.
If $f=\mathds{1}_{\Omega}$ is the indicator function of a set $\Omega$, then
$f+\Lambda$ is a tiling at level one if and only if the translated copies
$\Omega+\lambda$, $\lambda\in\Lambda$, fill the whole space without overlaps
up to measure zero. In contrast, for tilings by a general real- or complex-valued function $f$, the translated copies may have overlapping supports.
Tilings by translates of a function have been studied by various authors, see
e.g. [LM91], [KL96], [KW99], [Kol04], [KL16], [KW19], [Liu21], [KL21],
[Lev22], [KP22].
### 1.2.
By the _support_ of a function $f$ we shall mean the set
$\operatorname{supp}f:=\\{x:f(x)\neq 0\\}.$ (1.2)
In [KP22], inspired by the Steinhaus tiling problem, the authors studied the following question: how “small” can the support of a function $f$ be, if $f$ tiles $\mathbb{R}^{d}$ simultaneously by a finite number of lattices $\Lambda_{1},\dots,\Lambda_{N}$? In particular, they asked for the least possible measure of the support of such a function $f$.
The problem is nontrivial even in dimension one and for two lattices only.
This case will be studied in the present paper. We thus consider a measurable
function $f$ on $\mathbb{R}$ that simultaneously tiles by two arithmetic
progressions $\alpha\mathbb{Z}$ and $\beta\mathbb{Z}$, that is,
$\sum_{k\in\mathbb{Z}}f(x-k\alpha)=p,\quad\sum_{k\in\mathbb{Z}}f(x-k\beta)=q\quad\text{a.e.}$
(1.3)
where $\alpha,\beta$ are positive real numbers, the tiling levels $p,q$ are
complex numbers, and both series in (1.3) converge absolutely a.e.
It is obvious that if $p,q$ are both nonzero, then the simultaneous tiling
condition (1.3) implies that $\operatorname{mes}(\operatorname{supp}f)$ can be
no smaller than $\max\\{\alpha,\beta\\}$. This estimate was improved for
_nonnegative_ functions $f$ in [KP22, Theorem 2.6], where the authors proved
that if $0<\alpha<\beta$ then the tiling condition (1.3) implies that
$\operatorname{mes}(\operatorname{supp}f)\geqslant\lceil\beta/\alpha\rceil\alpha$.
The authors asked in [KP22, Question 4] what is the least possible measure of
the support of a function $f$ satisfying (1.3). In this paper we obtain sharp
results which improve on the lower bound from [KP22] and provide a complete
answer to this question.
### 1.3.
Notice that if $f$ is nonnegative, then integrating the first equality in
(1.3) over the interval $[0,\alpha)$ yields $\int_{\mathbb{R}}f=p\alpha$, so
$f$ must in fact be integrable. The same holds if $f$ is complex valued but
assumed a priori to be in $L^{1}(\mathbb{R})$. Moreover, in this case we can
also integrate the second equality in (1.3) over $[0,\beta)$ and get
$\int_{\mathbb{R}}f=q\beta$, hence $p\alpha=q\beta$. This proves the following
basic fact:
###### Proposition 1.1.
Let $f$ be a measurable function on $\mathbb{R}$ assumed to be either
nonnegative or in $L^{1}(\mathbb{R})$. If $f$ satisfies (1.3) then the vector
$(p,q)$ is proportional to $(\beta,\alpha)$.
The convolution $\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$ provides a
basic example of a nonnegative function $f$ satisfying (1.3) with
$(p,q)=(\beta,\alpha)$, and such that $\operatorname{supp}f$ is an interval of
length $\alpha+\beta$.
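This basic example is easy to check numerically. The sketch below (an illustration only, not part of the paper) evaluates $f=\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$ in closed form and verifies the two periodization sums in (1.3) as well as the identity $\int_{\mathbb{R}}f=p\alpha=q\beta$ from Proposition 1.1; the pair $\alpha=1$, $\beta=\sqrt2$ is an arbitrary rationally independent choice.

```python
import math

alpha, beta = 1.0, math.sqrt(2)  # arbitrary rationally independent pair

def f(x):
    """Closed form of 1_[0,alpha) * 1_[0,beta): a trapezoid on [0, alpha+beta]."""
    return max(0.0, min(x, alpha, beta, alpha + beta - x))

def periodize(func, step, x, K=50):
    """Partial sum of func(x - k*step); exact here since supp f is bounded."""
    return sum(func(x - k * step) for k in range(-K, K + 1))

# Tiling levels (p, q) = (beta, alpha), checked at a few sample points:
for x in [0.1, 0.7, 1.3, 2.0, -5.4]:
    assert abs(periodize(f, alpha, x) - beta) < 1e-9
    assert abs(periodize(f, beta, x) - alpha) < 1e-9

# Integral of f equals p*alpha = q*beta = alpha*beta (Proposition 1.1):
N = 200000
h = (alpha + beta) / N
integral = sum(f((i + 0.5) * h) for i in range(N)) * h
assert abs(integral - alpha * beta) < 1e-6
```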
We are interested in the following two main questions:
1. (i)
Do there exist tilings (1.3) such that the tiling level vector $(p,q)$ _is
not_ proportional to $(\beta,\alpha)$? (In such a case $f$ can be neither
nonnegative nor integrable.)
2. (ii)
What is the least possible value of $\operatorname{mes}(\operatorname{supp}f)$
for a function $f$ satisfying (1.3) with a given tiling level vector $(p,q)$?
In this paper we answer these questions in full generality. The answers turn
out to depend on arithmetic properties of $\alpha,\beta$ and $p,q$, and in
particular, on whether the numbers $\alpha,\beta$ are _rationally independent_
or not. Moreover, we will see that the results differ substantially between
these two cases.
## 2\. Results
### 2.1.
First we consider the case where $\alpha,\beta$ are rationally independent. In
this case our first result establishes the existence of tilings (1.3) such
that the levels $p,q$ are _arbitrary complex numbers_ , i.e. the vector
$(p,q)$ is not necessarily proportional to $(\beta,\alpha)$. Moreover, we can
construct such tilings with $\operatorname{mes}(\operatorname{supp}f)$ never
exceeding $\alpha+\beta$.
###### Theorem 2.1.
Let $\alpha,\beta$ be rationally independent. For any two complex numbers
$p,q$ there is a measurable function $f$ on $\mathbb{R}$ satisfying (1.3) with
$\operatorname{mes}(\operatorname{supp}f)\leqslant\alpha+\beta$.
We will also prove that while the function $f$ in Theorem 2.1 has support of
finite measure, $f$ cannot in general be supported on any _bounded_ subset of
$\mathbb{R}$.
###### Theorem 2.2.
Let $f$ be a measurable function on $\mathbb{R}$ satisfying (1.3) where
$\alpha,\beta$ are rationally independent. If the vector $(p,q)$ is not
proportional to $(\beta,\alpha)$, then $\operatorname{supp}f$ must be an
unbounded set.
It is obvious that the result does not hold if $(p,q)=\lambda(\beta,\alpha)$
where $\lambda$ is a scalar, since in this case the function
$f=\lambda\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$ satisfies (1.3)
and has bounded support.
The next result clarifies the role of the value $\alpha+\beta$ in Theorem 2.1.
It turns out that for most level vectors $(p,q)$ it is in fact the least
possible value of $\operatorname{mes}(\operatorname{supp}f)$.
###### Theorem 2.3.
Let $\alpha,\beta$ be rationally independent, and suppose that $(p,q)$ is not
proportional to any vector of the form $(n,m)$ where $n,m$ are nonnegative
integers. If a measurable function $f$ on $\mathbb{R}$ satisfies (1.3) then
$\operatorname{mes}(\operatorname{supp}f)\geqslant\alpha+\beta$.
In particular this result applies if $f$ is nonnegative, or is in
$L^{1}(\mathbb{R})$, or has bounded support. It follows from Proposition 1.1
and Theorem 2.2 that in any one of these cases the tiling level vector $(p,q)$
must be proportional to $(\beta,\alpha)$, and since $\alpha,\beta$ are
rationally independent, $(p,q)$ cannot therefore be proportional to any
integer vector $(n,m)$ unless $p,q$ are both zero. So we obtain:
###### Corollary 2.4.
Assume that a measurable function $f$ on $\mathbb{R}$ is nonnegative, or is in
$L^{1}(\mathbb{R})$, or has bounded support. If $\alpha,\beta$ are rationally
independent and (1.3) holds for some nonzero vector $(p,q)$, then $(p,q)$ is
proportional to $(\beta,\alpha)$ and
$\operatorname{mes}(\operatorname{supp}f)\geqslant\alpha+\beta$.
We thus obtain that for rationally independent $\alpha,\beta$, the convolution
$\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$ is a function minimizing
the value of $\operatorname{mes}(\operatorname{supp}f)$ among all nonnegative,
or all integrable, or all boundedly supported, functions $f$ satisfying (1.3)
for some nonzero tiling level vector $(p,q)$.
### 2.2.
We now consider the remaining case not covered by Theorem 2.3, namely, the
case where the tiling level vector $(p,q)$ is proportional to some vector
$(n,m)$ such that $n,m$ are nonnegative integers. By multiplying the vector $(p,q)$ by an appropriate scalar we may assume that $p,q$ are themselves nonnegative integers, and by factoring out their greatest common divisor we may further assume $p,q$ to be _coprime_.
Interestingly, it turns out that in this case the measure of $\operatorname{supp}f$ can drop below $\alpha+\beta$, by an amount that depends on the specific values of the tiling levels $p$ and $q$.
###### Theorem 2.5.
Let $\alpha,\beta$ be rationally independent, and let $p,q$ be two positive
coprime integers. For any $\varepsilon>0$ there is a measurable function $f$
on $\mathbb{R}$ satisfying (1.3) such that
$\operatorname{mes}(\operatorname{supp}f)<\alpha+\beta-\min\Big{\\{}\frac{\alpha}{q},\frac{\beta}{p}\Big{\\}}+\varepsilon.$
(2.1)
The next result shows that the upper estimate (2.1) is actually sharp.
###### Theorem 2.6.
Let $f$ be a measurable function on $\mathbb{R}$ satisfying (1.3) where
$\alpha,\beta$ are rationally independent and $p,q$ are positive, coprime
integers. Then
$\operatorname{mes}(\operatorname{supp}f)>\alpha+\beta-\min\Big{\\{}\frac{\alpha}{q},\frac{\beta}{p}\Big{\\}}.$
(2.2)
The last two results yield that if the tiling levels $p,q$ are positive,
coprime integers, then the right hand side of (2.2) is the infimum of the
values of $\operatorname{mes}(\operatorname{supp}f)$ over all measurable
functions $f$ satisfying (1.3), but this infimum cannot be attained.
In Theorems 2.5 and 2.6 the tiling levels $p,q$ are assumed to be both
nonzero, which does not cover the case where $(p,q)=(1,0)$ or $(0,1)$. The
following result provides the sharp answer in this last case. By symmetry, it
is enough to consider $(p,q)=(1,0)$.
###### Theorem 2.7.
Let $\alpha,\beta$ be rationally independent, and let $(p,q)=(1,0)$. For any
$\varepsilon>0$ there is a measurable function $f$ on $\mathbb{R}$ satisfying
(1.3) such that $\operatorname{mes}(\operatorname{supp}f)<\alpha+\varepsilon$.
Conversely, any measurable $f$ satisfying (1.3) must have
$\operatorname{mes}(\operatorname{supp}f)>\alpha$.
The results above thus fully resolve the problem for rationally independent
$\alpha,\beta$.
### 2.3.
We now move on to deal with the other case where $\alpha,\beta$ are linearly
dependent over the rationals. Then the vector $(\alpha,\beta)$ is proportional
to some vector $(n,m)$ such that $n,m$ are positive integers. By rescaling, it
is enough to consider the case $(\alpha,\beta)=(n,m)$ where $n,m$ are positive
integers.
The tiling condition (1.3) thus takes the form
$\sum_{k\in\mathbb{Z}}f(x-kn)=p,\quad\sum_{k\in\mathbb{Z}}f(x-km)=q\quad\text{a.e.}$
(2.3)
where $n,m$ are positive integers, $p,q$ are complex numbers, and both series
in (2.3) converge absolutely a.e.
In this case our first result shows that the tiling levels $p,q$ cannot be
arbitrary.
###### Theorem 2.8.
Let $n,m$ be positive integers, and let $f$ be a measurable function on
$\mathbb{R}$ satisfying (2.3). Then the vector $(p,q)$ must be proportional to
$(m,n)$.
This is not quite obvious since $f$ is neither assumed to be nonnegative nor
in $L^{1}(\mathbb{R})$, so the conclusion does not follow from Proposition
1.1. Moreover, Theorem 2.8 is in sharp contrast to Theorem 2.1 which states
that for rationally independent $\alpha,\beta$ there exist tilings (1.3) such
that the levels $p,q$ are arbitrary complex numbers.
The next result gives a lower bound for the support size of a function $f$
that satisfies the simultaneous tiling condition (2.3) with a nonzero tiling
level vector $(p,q)$.
###### Theorem 2.9.
Let $f$ be a measurable function on $\mathbb{R}$ satisfying (2.3) where $n,m$
are positive integers and the vector $(p,q)$ is nonzero. Then
$\operatorname{mes}(\operatorname{supp}f)\geqslant n+m-\gcd(n,m).$ (2.4)
We will also establish that in fact the lower bound in Theorem 2.9 is sharp.
Due to Theorem 2.8, it suffices to prove this for the tiling level vector
$(p,q)=(m,n)$.
###### Theorem 2.10.
Let $n,m$ be positive integers, and let $(p,q)=(m,n)$. Then there is a
nonnegative, measurable function $f$ on $\mathbb{R}$ satisfying (2.3) and such
that $\operatorname{supp}f$ is an interval of length $n+m-\gcd(n,m)$.
It follows that $n+m-\gcd(n,m)$ is the least possible value of
$\operatorname{mes}(\operatorname{supp}f)$ among all measurable functions $f$
satisfying (2.3) with a nonzero tiling level vector $(p,q)$. In particular,
the convolution $\mathds{1}_{[0,n)}\ast\mathds{1}_{[0,m)}$ (whose support is
an interval of length $n+m$) _does not_ attain the least possible value of
$\operatorname{mes}(\operatorname{supp}f)$.
The results obtained thus answer the questions above in full generality.
###### Remark 2.11.
We note that the case where the tiling levels $p,q$ are both zero is trivial,
since then the zero function $f$ satisfies (1.3). It is also easy to construct
examples where $\operatorname{supp}f$ has positive but arbitrarily small
measure. For example, let $h$ be any function with $\operatorname{supp}h=(0,\varepsilon)$; then the function $f(x)=h(x)-h(x+\alpha)-h(x+\beta)+h(x+\alpha+\beta)$ satisfies (1.3) with $p,q$ both zero, and $\operatorname{supp}f$ has positive measure not exceeding $4\varepsilon$.
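The zero-level construction in this remark can be checked numerically: both periodizations of $f$ telescope to $0$. In the hedged sketch below, $h$ is taken to be a triangular bump on $(0,\varepsilon)$ and $\alpha=1$, $\beta=\sqrt2$ are arbitrary choices of ours.

```python
import math

alpha, beta, eps = 1.0, math.sqrt(2), 0.05

def h(x):
    # any function supported on (0, eps); a triangular bump for concreteness
    return max(0.0, eps / 2 - abs(x - eps / 2))

def f(x):
    # the construction of Remark 2.11; supp f has measure at most 4*eps
    return h(x) - h(x + alpha) - h(x + beta) + h(x + alpha + beta)

def periodize(func, step, x, K=50):
    return sum(func(x - k * step) for k in range(-K, K + 1))

for x in [0.01, 0.37, -1.2, 2.718]:
    assert abs(periodize(f, alpha, x)) < 1e-12  # tiling level p = 0
    assert abs(periodize(f, beta, x)) < 1e-12   # tiling level q = 0
```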
### 2.4.
The rest of the paper is organized as follows.
In Section 3 we give a short preliminary background and fix notation that will
be used throughout the paper.
In Section 4 we prove Theorems 2.1, 2.5 and 2.7, that is, for any two
rationally independent $\alpha,\beta$ and for any tiling level vector $(p,q)$,
we construct a simultaneous tiling (1.3) such that
$\operatorname{mes}(\operatorname{supp}f)$ is minimal, or is arbitrarily close
to the infimum.
In Section 5 we prove that if a measurable function $f$ satisfies the
simultaneous tiling condition (1.3) with a tiling level vector $(p,q)$ that is
not proportional to $(\beta,\alpha)$, then $\operatorname{supp}f$ must be an
unbounded set (Theorem 2.2).
In Section 6 we solve a problem posed to us by Kolountzakis, asking whether
there exists a _bounded_ measurable function $f$ on $\mathbb{R}$ that tiles
simultaneously with rationally independent $\alpha,\beta$ and with arbitrary
tiling levels $p,q$. We prove that the answer is affirmative, and moreover,
$f$ can be chosen _continuous and vanishing at infinity_.
In Section 7 we prove Theorems 2.3 and 2.6 that give sharp lower bounds for
the measure of $\operatorname{supp}f$, where $f$ is any measurable function
satisfying the simultaneous tiling condition (1.3) with rationally independent
$\alpha,\beta$.
In the last Section 8, we consider the case where the two numbers
$\alpha,\beta$ are linearly dependent over the rationals. By rescaling we may
assume that $\alpha,\beta$ are two positive integers $n,m$. We prove Theorems
2.8, 2.9 and 2.10 using a reduction of the simultaneous tiling problem from
the real line $\mathbb{R}$ to the set of integers $\mathbb{Z}$.
## 3\. Preliminaries. Notation.
In this section we give a short preliminary background and fix notation that
will be used throughout the paper.
If $\alpha$ is a positive real number, then we use $\mathbb{T}_{\alpha}$ to
denote the circle group $\mathbb{R}/\alpha\mathbb{Z}$. We let $\pi_{\alpha}$
denote the canonical projection map $\mathbb{R}\to\mathbb{T}_{\alpha}$. The
Lebesgue measure on the group $\mathbb{T}_{\alpha}$ is normalized such that
$\operatorname{mes}(\mathbb{T}_{\alpha})=\alpha$.
We use $m(E)$, or $\operatorname{mes}(E)$, to denote the Lebesgue measure of a
set $E$ in either the real line $\mathbb{R}$ or the circle
$\mathbb{T}_{\alpha}$.
If $\alpha,\beta$ are two positive real numbers, then they are said to be
_rationally independent_ if the condition $n\alpha+m\beta=0$,
$n,m\in\mathbb{Z}$, implies that $n=m=0$. This is the case if and only if the
ratio $\alpha/\beta$ is an irrational number.
By Kronecker's classical theorem, if two positive real numbers $\alpha,\beta$ are rationally independent then the sequence $\\{\pi_{\alpha}(n\beta)\\}$, $n=1,2,3,\dots$, is dense in $\mathbb{T}_{\alpha}$.
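The density statement can be illustrated numerically: for $\alpha=1$ and $\beta=\sqrt2$ (an arbitrary rationally independent pair of our choosing), the maximal gap left on the circle by the points $\pi_{\alpha}(n\beta)$, $n=1,\dots,N$, shrinks as $N$ grows. A hedged sketch:

```python
import math

alpha, beta = 1.0, math.sqrt(2)  # rationally independent: beta/alpha irrational

def max_gap(N):
    """Largest gap (including wrap-around) left by pi_alpha(n*beta), n=1..N."""
    pts = sorted((n * beta) % alpha for n in range(1, N + 1))
    gaps = [pts[i + 1] - pts[i] for i in range(len(pts) - 1)]
    gaps.append(alpha - pts[-1] + pts[0])  # wrap-around gap on the circle
    return max(gaps)

# The maximal gap shrinks as N grows, consistent with density:
assert max_gap(100) > max_gap(10000)
assert max_gap(10000) < 0.01
```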
Let $f$ be a measurable function on $\mathbb{R}$, and suppose that the series
$\sum_{k\in\mathbb{Z}}f(x-k\alpha)$ (3.1)
converges absolutely for every $x\in\mathbb{R}$. Then the sum (3.1) is an
$\alpha$-periodic function of $x$, so it can be viewed as a function on
$\mathbb{T}_{\alpha}$. We denote this function by $\pi_{\alpha}(f)$. If the
sum (3.1) converges absolutely not everywhere but almost everywhere, then the
function $\pi_{\alpha}(f)$ is defined in a similar way on a full measure
subset of $\mathbb{T}_{\alpha}$.
We observe that the simultaneous tiling condition (1.3) can be equivalently
stated as the requirement that $\pi_{\alpha}(f)=p$ a.e. on
$\mathbb{T}_{\alpha}$, and that $\pi_{\beta}(f)=q$ a.e. on
$\mathbb{T}_{\beta}$.
If $f$ is in $L^{1}(\mathbb{R})$, then the sum (3.1) converges absolutely
almost everywhere, and moreover, the function $\pi_{\alpha}(f)$ is in
$L^{1}(\mathbb{T}_{\alpha})$ and satisfies
$\int_{\mathbb{T}_{\alpha}}\pi_{\alpha}(f)=\int_{\mathbb{R}}f$.
The set $\operatorname{supp}f:=\\{x:f(x)\neq 0\\}$ will be called the
_support_ of the function $f$. If we have $\operatorname{supp}f\subset\Omega$
then we will say that _$f$ is supported on $\Omega$_.
We observe that if $\operatorname{supp}f$ is a set of finite measure in
$\mathbb{R}$, then in the sum (3.1) there are only finitely many nonzero terms
for almost every $x\in\mathbb{R}$, which implies that the function
$\pi_{\alpha}(f)$ is well defined on a full measure subset of
$\mathbb{T}_{\alpha}$.
## 4\. Incommensurable arithmetic progressions: Constructing simultaneously
tiling functions with small support
In this section we prove Theorems 2.1, 2.5 and 2.7, that is, for any two
rationally independent $\alpha,\beta$ and for any tiling level vector $(p,q)$,
we construct a simultaneous tiling (1.3) such that
$\operatorname{mes}(\operatorname{supp}f)$ is minimal, or is arbitrarily close
to the infimum.
Throughout this section we shall suppose that $\alpha,\beta>0$ are two fixed,
rationally independent real numbers.
### 4.1.
It will be convenient to introduce the following terminology:
###### Definition 4.1.
By an _elementary set_ (in either $\mathbb{R}$, $\mathbb{T}_{\alpha}$ or
$\mathbb{T}_{\beta}$) we mean a set which can be represented as the union of
finitely many disjoint closed intervals of finite length.
We will use $\operatorname{int}(U)$ to denote the interior of an elementary
set $U$.
###### Lemma 4.2.
Let $A$ be an elementary set in $\mathbb{T}_{\alpha}$. Then given any nonempty
open interval $J\subset\mathbb{T}_{\beta}$, no matter how small, one can find
an elementary set $U\subset\mathbb{R}$ such that
1. (i)
$\pi_{\alpha}(U)=A$;
2. (ii)
$\pi_{\alpha}$ is one-to-one on $\operatorname{int}(U)$;
3. (iii)
$\pi_{\beta}(U)\subset J$.
Moreover, $U$ can be chosen inside the half-line $(r,+\infty)$ for any given
number $r$.
###### Proof.
We choose $\delta>0$ smaller than both the length of $J$ and $\alpha$, and we
decompose the elementary set $A$ as a union $A=A_{1}\cup\dots\cup A_{n}$,
where each $A_{j}$ is a closed interval in $\mathbb{T}_{\alpha}$ of length
smaller than $\delta$, and $A_{1},\dots,A_{n}$ have disjoint interiors. Let
$U_{j}$ be a closed interval in $\mathbb{R}$ such that $A_{j}$ is a one-to-one
image of $U_{j}$ under $\pi_{\alpha}$. By translating the sets $U_{j}$ by
appropriate integer multiples of $\alpha$ we can ensure that
$\pi_{\beta}(U_{j})\subset J$ (due to Kronecker’s theorem, since
$\alpha,\beta$ are rationally independent), and that the sets
$U_{1},\dots,U_{n}$ are pairwise disjoint and all of them are contained in a
given half-line $(r,+\infty)$. Then the set $U:=U_{1}\cup\dots\cup U_{n}$ is
an elementary set contained in $(r,+\infty)$ and satisfying the properties
(i), (ii) and (iii) above. ∎
###### Lemma 4.3.
Let $A\subset\mathbb{T}_{\alpha}$ be an elementary set, and $\varphi$ be a
measurable function on $A$. Given any nonempty open interval
$J\subset\mathbb{T}_{\beta}$, one can find an elementary set
$U\subset\mathbb{R}$ and a measurable function $f$ on $\mathbb{R}$, such that
1. (i)
$\pi_{\alpha}(U)=A$;
2. (ii)
$\pi_{\beta}(U)\subset J$;
3. (iii)
$m(U)=m(A)$;
4. (iv)
$f$ is supported on $U$;
5. (v)
$\pi_{\alpha}(f)=\varphi$ a.e. on $A$.
Moreover, $U$ can be chosen inside the half-line $(r,+\infty)$ for any given
number $r$.
###### Proof.
Use Lemma 4.2 to find an elementary set $U\subset\mathbb{R}$ such that
$\pi_{\alpha}(U)=A$, $\pi_{\alpha}$ is one-to-one on $\operatorname{int}(U)$,
and $\pi_{\beta}(U)\subset J$. Notice that the first two properties imply that
$m(U)=m(A)$. Recall also that Lemma 4.2 allows us to choose the set $U$ inside
any given half-line $(r,+\infty)$. We define a function $f$ on $\mathbb{R}$ by
$f(x):=\varphi(\pi_{\alpha}(x))$ for $x\in\operatorname{int}(U)$, and $f(x)=0$
outside of $\operatorname{int}(U)$. Then $f$ is a measurable function
supported on $U$. Since $\pi_{\alpha}$ is one-to-one on
$\operatorname{int}(U)$, we have $\pi_{\alpha}(f)=\varphi$ on the set
$\pi_{\alpha}(\operatorname{int}(U))$, a full measure subset of $A$. The
properties (i)–(v) are thus satisfied and the claim is proved. ∎
### 4.2.
The next lemma incorporates a central idea of our tiling construction
technique. Roughly speaking, the lemma asserts that one can find a function
$f$ on $\mathbb{R}$ with prescribed projections $\pi_{\alpha}(f)$ and
$\pi_{\beta}(f)$, and that, moreover,
$\operatorname{mes}(\operatorname{supp}f)$ need never exceed the total measure
of the supports of the projections.
###### Lemma 4.4.
Suppose that we are given two elementary sets $A\subset\mathbb{T}_{\alpha}$,
$B\subset\mathbb{T}_{\beta}$, both of positive measure, as well as two
measurable functions $\varphi$ on $A$, and $\psi$ on $B$. Then there is a
closed set $\Omega\subset\mathbb{R}$ (a union of countably many disjoint
closed intervals accumulating at $+\infty$) and a measurable function $f$
supported on $\Omega$, such that
1. (i)
$m(\Omega)=m(A)+m(B)$ (in particular, $\Omega$ has finite measure);
2. (ii)
$\pi_{\alpha}(\Omega)=A$, $\pi_{\alpha}(f)=\varphi$ a.e. on $A$;
3. (iii)
$\pi_{\beta}(\Omega)=B$, $\pi_{\beta}(f)=\psi$ a.e. on $B$.
Moreover, $\Omega$ can be chosen inside the half-line $(r,+\infty)$ for any
given number $r$.
###### Proof.
We choose an arbitrary decomposition of the set $A$ as a union
$A=\bigcup_{k=1}^{\infty}A_{k}$, where each $A_{k}\subset\mathbb{T}_{\alpha}$
is an elementary set and the sets $A_{1},A_{2},\dots$ have nonempty and
disjoint interiors. We do the same also for the set $B$, that is, we let
$B=\bigcup_{k=1}^{\infty}B_{k}$, where the $B_{k}$ are elementary sets in
$\mathbb{T}_{\beta}$ with nonempty, disjoint interiors.
Now, we apply Lemma 4.3 to the elementary set $A_{1}$, the function $\varphi$,
and an arbitrary nonempty open interval $J\subset B$. We obtain from the lemma
an elementary set $U_{1}\subset\mathbb{R}$ and a measurable function $g_{1}$
on $\mathbb{R}$, satisfying the conditions $\pi_{\alpha}(U_{1})=A_{1}$,
$\pi_{\beta}(U_{1})\subset B$, $m(U_{1})=m(A_{1})$, the function $g_{1}$ is
supported on $U_{1}$, and $\pi_{\alpha}(g_{1})=\varphi$ a.e. on $A_{1}$.
Next, we apply Lemma 4.3 again but with the roles of $\alpha,\beta$
interchanged, to the elementary set $B_{1}$, the function
$\psi-\pi_{\beta}(g_{1})$, and an arbitrary nonempty open interval $J\subset
A\setminus A_{1}$. The lemma yields an elementary set $V_{1}\subset\mathbb{R}$
and a measurable function $h_{1}$ on $\mathbb{R}$, such that
$\pi_{\beta}(V_{1})=B_{1}$, $\pi_{\alpha}(V_{1})\subset A\setminus A_{1}$,
$m(V_{1})=m(B_{1})$, the function $h_{1}$ is supported on $V_{1}$, and
$\pi_{\beta}(h_{1})=\psi-\pi_{\beta}(g_{1})$ a.e. on $B_{1}$.
We continue the construction in a similar fashion. Suppose that we have
already constructed the sets $U_{k},V_{k}$ and the functions $g_{k},h_{k}$ for
$1\leqslant k\leqslant n-1$. Using Lemma 4.3 we find an elementary set
$U_{n}\subset\mathbb{R}$ and a measurable function $g_{n}$ on $\mathbb{R}$,
such that $\pi_{\alpha}(U_{n})=A_{n}$, $\pi_{\beta}(U_{n})\subset
B\setminus\bigcup_{k=1}^{n-1}B_{k}$, $m(U_{n})=m(A_{n})$, $g_{n}$ is supported
on $U_{n}$, and
$\pi_{\alpha}(g_{n})=\varphi-\sum_{k=1}^{n-1}\pi_{\alpha}(h_{k})\quad\text{a.e.\
on $A_{n}$.}$ (4.1)
Then, we use again Lemma 4.3 to find an elementary set
$V_{n}\subset\mathbb{R}$ and a measurable function $h_{n}$ on $\mathbb{R}$,
such that $\pi_{\beta}(V_{n})=B_{n}$, $\pi_{\alpha}(V_{n})\subset
A\setminus\bigcup_{k=1}^{n}A_{k}$, $m(V_{n})=m(B_{n})$, $h_{n}$ is supported
on $V_{n}$, and
$\pi_{\beta}(h_{n})=\psi-\sum_{k=1}^{n}\pi_{\beta}(g_{k})\quad\text{a.e.\ on
$B_{n}$.}$ (4.2)
We may assume that the sets $U_{1},V_{1},U_{2},V_{2},\dots$ are pairwise
disjoint, that all of them lie inside a given half-line $(r,+\infty)$, and
that they accumulate at $+\infty$. Indeed, Lemma 4.3 allows us to choose the
sets such that they satisfy these properties. (In fact, one can check that by
their construction the sets necessarily have disjoint interiors.)
Finally, we define
$\Omega:=\bigcup_{n=1}^{\infty}(U_{n}\cup V_{n}),\quad
f:=\sum_{n=1}^{\infty}(g_{n}+h_{n}).$ (4.3)
The sum on the right hand side of (4.3) is well defined as the terms in the
series have disjoint supports. We will show that the properties (i), (ii) and
(iii) are satisfied.
We begin by verifying that (i) holds. Indeed, $m(U_{n})=m(A_{n})$,
$m(V_{n})=m(B_{n})$, and the sets $U_{1},V_{1},U_{2},V_{2},\dots$ are
disjoint. It follows that
$m(\Omega)=\sum_{n=1}^{\infty}(m(U_{n})+m(V_{n}))=\sum_{n=1}^{\infty}(m(A_{n})+m(B_{n}))=m(A)+m(B).$
(4.4)
Next we verify that (ii) is satisfied. Indeed, we have
$\pi_{\alpha}(U_{n})=A_{n}$ and $\pi_{\alpha}(V_{n})\subset A$ for every $n$,
hence $\pi_{\alpha}(\Omega)=A$. We must also show that $\pi_{\alpha}(f)=\varphi$ a.e. on $A$, and it suffices to verify this on each $A_{n}$. Notice that $\pi_{\alpha}(\operatorname{int}(U_{k}))$ is disjoint from $A_{n}$ for $k\neq n$, and $\pi_{\alpha}(V_{k})$ is disjoint from $A_{n}$ for $k\geqslant n$. Hence, using (4.1), we obtain
$\pi_{\alpha}(f)=\pi_{\alpha}(g_{n})+\sum_{k=1}^{n-1}\pi_{\alpha}(h_{k})=\varphi\quad\text{a.e.\
on $A_{n}$.}$ (4.5)
In a similar way we can show that (iii) holds as well. We have
$\pi_{\beta}(V_{n})=B_{n}$ and $\pi_{\beta}(U_{n})\subset B$ for every $n$,
hence $\pi_{\beta}(\Omega)=B$. To see that $\pi_{\beta}(f)=\psi$ a.e. on $B$,
we verify that this is the case on each $B_{n}$. But
$\pi_{\beta}(\operatorname{int}(V_{k}))$ is disjoint from $B_{n}$ for $k\neq
n$, and $\pi_{\beta}(U_{k})$ is disjoint from $B_{n}$ for $k\geqslant n+1$.
Hence (4.2) implies that
$\pi_{\beta}(f)=\pi_{\beta}(h_{n})+\sum_{k=1}^{n}\pi_{\beta}(g_{k})=\psi\quad\text{a.e.\
on $B_{n}$.}$ (4.6)
Thus all the properties (i), (ii) and (iii) are satisfied and Lemma 4.4 is
proved. ∎
### 4.3.
We can now use Lemma 4.4 in order to prove Theorem 2.1 and Theorem 2.7.
###### Proof of Theorem 2.1.
Let $p,q$ be any two complex numbers. Apply Lemma 4.4 to the sets
$A=\mathbb{T}_{\alpha}$, $B=\mathbb{T}_{\beta}$, and to the constant functions
$\varphi=p$, $\psi=q$. The lemma yields a measurable function $f$ on
$\mathbb{R}$, supported on a set $\Omega\subset\mathbb{R}$ of measure
$\alpha+\beta$, and such that $\pi_{\alpha}(f)=p$ a.e. on
$\mathbb{T}_{\alpha}$, while $\pi_{\beta}(f)=q$ a.e. on $\mathbb{T}_{\beta}$,
that is, $f$ satisfies the tiling condition (1.3). The theorem is thus proved.
∎
###### Proof of Theorem 2.7.
Let $(p,q)=(1,0)$. Given $\varepsilon>0$ we apply Lemma 4.4 to the sets
$A=\mathbb{T}_{\alpha}$ and $B=[0,\varepsilon]\subset\mathbb{T}_{\beta}$, and
to the functions $\varphi=1$, $\psi=0$. The lemma yields a set
$\Omega\subset\mathbb{R}$ satisfying $m(\Omega)=\alpha+\varepsilon$,
$\pi_{\beta}(\Omega)=B$, as well as a measurable function $f$ supported on
$\Omega$ and such that $\pi_{\alpha}(f)=1$ a.e. on $\mathbb{T}_{\alpha}$, and
$\pi_{\beta}(f)=0$ a.e. on $B$. Notice though that the condition
$\pi_{\beta}(\Omega)=B$ ensures that $\pi_{\beta}(f)=0$ a.e. also on
$\mathbb{T}_{\beta}\setminus B$. Hence the tiling condition (1.3) is
satisfied. This proves one part of the theorem.
To prove the converse part, we suppose that $f$ is a measurable function on
$\mathbb{R}$ satisfying (1.3) with $(p,q)=(1,0)$, that is, $\pi_{\alpha}(f)=1$
a.e. on $\mathbb{T}_{\alpha}$ and $\pi_{\beta}(f)=0$ a.e. on
$\mathbb{T}_{\beta}$. It follows from the first assumption that the set
$\Omega:=\operatorname{supp}f$ has measure at least $\alpha$, since
$\pi_{\alpha}(\Omega)$ is a set of full measure in $\mathbb{T}_{\alpha}$. We
must show that actually $m(\Omega)>\alpha$. Suppose to the contrary that
$m(\Omega)=\alpha$. Then $\pi_{\alpha}(\Omega)$ cannot be a set of full
measure in $\mathbb{T}_{\alpha}$ unless $\pi_{\alpha}$ is one-to-one on a full
measure subset of $\Omega$. But then the assumption that $\pi_{\alpha}(f)=1$
a.e. on $\mathbb{T}_{\alpha}$ implies that $f=1$ a.e. on its support $\Omega$,
which in turn contradicts our second assumption that $\pi_{\beta}(f)=0$ a.e.
on $\mathbb{T}_{\beta}$. Hence we must have $m(\Omega)>\alpha$, and so the
second part of the theorem is also proved. ∎
### 4.4.
Next we turn to prove Theorem 2.5. The proof will require the following
notion:
###### Definition 4.5.
An $n\times m$ matrix $M=(c_{ij})$ is called a _doubly stochastic array_ (with
uniform marginals) if the entries $c_{ij}$ are nonnegative, and
$\sum_{j=1}^{m}c_{ij}=m,\quad 1\leqslant i\leqslant n,$ (4.7)
$\sum_{i=1}^{n}c_{ij}=n,\quad 1\leqslant j\leqslant m,$ (4.8)
that is, the sum of the entries in each row is $m$ and in each column is $n$.
By the _support_ of the matrix $M=(c_{ij})$ we refer to the set
$\operatorname{supp}M=\\{(i,j):c_{ij}\neq 0\\}.$
In [KP22, Question 7] the authors posed the following question, which arose in
connection with the simultaneous tiling problem in finite abelian groups: what
is the least possible size of the support of a doubly stochastic $n\times m$
array?
This problem was solved recently in [Lou23] and independently in [EL22].
###### Theorem 4.6 ([Lou23], [EL22]).
For all $n,m$ the minimal size of the support of an $n\times m$ doubly
stochastic array is equal to $n+m-\gcd(n,m)$.
In particular, there exists an $n\times m$ doubly stochastic array whose
support size is as small as $n+m-\gcd(n,m)$. We will use this fact in the
proof of Lemma 4.8 below.
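One concrete way to realize this minimum — not necessarily the construction of [Lou23] or [EL22], but a standard transportation-problem device — is to split the array into $\gcd(n,m)$ diagonal blocks of coprime shape and fill each block by the northwest-corner rule, which for a coprime $a\times b$ block produces exactly $a+b-1$ nonzero entries:

```python
from math import gcd

def nw_corner(rows, cols, rowsum, colsum):
    """Northwest-corner rule: a nonnegative rows x cols array with the
    prescribed row and column sums (requires rows*rowsum == cols*colsum).
    When gcd(rows, cols) == 1 it fills exactly rows + cols - 1 cells."""
    M = [[0] * cols for _ in range(rows)]
    supply, demand = [rowsum] * rows, [colsum] * cols
    i = j = 0
    while i < rows and j < cols:
        c = min(supply[i], demand[j])
        M[i][j] = c
        supply[i] -= c
        demand[j] -= c
        if supply[i] == 0:
            i += 1
        if demand[j] == 0:
            j += 1
    return M

def min_support_array(n, m):
    """n x m doubly stochastic array with support size n + m - gcd(n, m)."""
    g = gcd(n, m)
    a, b = n // g, m // g                  # coprime block shape
    M = [[0] * m for _ in range(n)]
    for k in range(g):                     # g disjoint diagonal blocks
        block = nw_corner(a, b, m, n)      # row sums m, column sums n
        for i in range(a):
            for j in range(b):
                M[k * a + i][k * b + j] = block[i][j]
    return M
```

For example, `min_support_array(4, 6)` has row sums $6$, column sums $4$, and $4+6-\gcd(4,6)=8$ nonzero entries.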
### 4.5.
###### Lemma 4.7.
Let $p,q$ be two positive integers, and $0<\gamma<\min\\{\alpha q^{-1},\beta
p^{-1}\\}$. Then there is a system $\\{L_{ij}\\}$, $1\leqslant i\leqslant p$,
$1\leqslant j\leqslant q$, of disjoint closed intervals in $\mathbb{R}$, with
the following properties:
1. (i)
each interval $L_{ij}$ has length $\gamma$;
2. (ii)
$\pi_{\alpha}(L_{ij})$ is an interval $I_{j}\subset\mathbb{T}_{\alpha}$ that
does not depend on $i$;
3. (iii)
$\pi_{\beta}(L_{ij})$ is an interval $J_{i}\subset\mathbb{T}_{\beta}$ that
does not depend on $j$;
4. (iv)
$I_{1},\dots,I_{q}$ are disjoint closed intervals in $\mathbb{T}_{\alpha}$;
5. (v)
$J_{1},\dots,J_{p}$ are disjoint closed intervals in $\mathbb{T}_{\beta}$.
###### Proof.
We choose integers $m_{1},\dots,m_{q}$ such that the intervals
$I_{j}:=[m_{j}\beta,m_{j}\beta+\gamma]$, $1\leqslant j\leqslant q$, are
disjoint in $\mathbb{T}_{\alpha}$. This is possible due to Kronecker’s
theorem, since $\alpha,\beta$ are rationally independent and $q\gamma<\alpha$.
Since we also have $p\gamma<\beta$, we can find in a similar way integers
$n_{1},\dots,n_{p}$ such that the intervals
$J_{i}:=[n_{i}\alpha,n_{i}\alpha+\gamma]$, $1\leqslant i\leqslant p$, are
disjoint in $\mathbb{T}_{\beta}$. We then define the intervals
$L_{ij}\subset\mathbb{R}$ by
$L_{ij}:=[0,\gamma]+n_{i}\alpha+m_{j}\beta,\quad 1\leqslant i\leqslant
p,\;1\leqslant j\leqslant q.$ (4.9)
Then each interval $L_{ij}$ has length $\gamma$, and we have
$\pi_{\alpha}(L_{ij})=I_{j}$, $\pi_{\beta}(L_{ij})=J_{i}$, so all the required
properties (i)–(v) are satisfied.
Lastly, we show that the intervals $L_{ij}$ are pairwise disjoint. Indeed, suppose
that two intervals $L_{i_{1},j_{1}}$ and $L_{i_{2},j_{2}}$ have a point $x$ in
common. Then, on one hand, from (ii) we obtain $\pi_{\alpha}(x)\in
I_{j_{1}}\cap I_{j_{2}}$, which by (iv) implies that $j_{1}=j_{2}$.
On the other hand, by (iii) we have $\pi_{\beta}(x)\in J_{i_{1}}\cap
J_{i_{2}}$, which by (v) implies that $i_{1}=i_{2}$. Hence two
intervals $L_{i_{1},j_{1}}$ and $L_{i_{2},j_{2}}$ cannot intersect unless
$(i_{1},j_{1})=(i_{2},j_{2})$. ∎
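The construction in the proof can be checked numerically. The sketch below uses illustrative choices $\alpha=1$, $\beta=\sqrt{2}$, $p=2$, $q=3$ and $\gamma=0.2<\min\\{\alpha/q,\beta/p\\}$; a brute-force search over small integer shifts stands in for the appeal to Kronecker's theorem, and the final assertion verifies that the resulting intervals $L_{ij}$ of (4.9) are pairwise disjoint in $\mathbb{R}$.

```python
import itertools
import math

alpha, beta = 1.0, math.sqrt(2.0)   # rationally independent
p, q = 2, 3
gamma = 0.2                          # < min(alpha/q, beta/p) = 1/3

def disjoint_mod(starts, length, period):
    # Are the closed intervals [s, s+length] pairwise disjoint on the
    # circle R / (period Z)?  Check that all circular gaps between
    # consecutive start points exceed the interval length.
    pts = sorted(s % period for s in starts)
    gaps = [pts[k + 1] - pts[k] for k in range(len(pts) - 1)]
    gaps.append(pts[0] + period - pts[-1])
    return all(g > length for g in gaps)

# Brute-force search for the integer shifts (this replaces the appeal
# to Kronecker's theorem in the proof).
ms = next(c for c in itertools.combinations(range(1, 40), q)
          if disjoint_mod([m * beta for m in c], gamma, alpha))
ns = next(c for c in itertools.combinations(range(1, 40), p)
          if disjoint_mod([n * alpha for n in c], gamma, beta))

# The intervals L_ij = [0, gamma] + n_i*alpha + m_j*beta of (4.9):
# disjoint projections force the L_ij themselves to be disjoint.
starts = sorted(n * alpha + m * beta for n in ns for m in ms)
assert all(starts[k + 1] - starts[k] > gamma for k in range(p * q - 1))
```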
###### Lemma 4.8.
Let $p,q$ be two positive coprime integers, and $0<\gamma<\min\\{\alpha
q^{-1},\beta p^{-1}\\}$. Then there is an elementary set
$\Omega\subset\mathbb{R}$ and a measurable function $f$ supported on $\Omega$,
such that
1. (i)
$m(\Omega)=(p+q-1)\gamma$;
2. (ii)
the set $A=\pi_{\alpha}(\Omega)$ in $\mathbb{T}_{\alpha}$ has measure
$q\gamma$;
3. (iii)
the set $B=\pi_{\beta}(\Omega)$ in $\mathbb{T}_{\beta}$ has measure $p\gamma$;
4. (iv)
$\pi_{\alpha}(f)=p$ a.e. on $A$;
5. (v)
$\pi_{\beta}(f)=q$ a.e. on $B$.
It is instructive to compare this result with Lemma 4.4 above. Recall that the
sets $A,B$ in Lemma 4.4 can be any two elementary subsets of
$\mathbb{T}_{\alpha}$ and $\mathbb{T}_{\beta}$ respectively, and that the
projections $\pi_{\alpha}(f)$, $\pi_{\beta}(f)$ can be any two measurable
functions on $A$ and $B$ respectively, but the measure of the support $\Omega$
must in general be as large as the sum of $m(A)$ and $m(B)$. To the contrary,
in Lemma 4.8 we are able to reduce the measure of the support $\Omega$ to be
strictly smaller than the sum of $m(A)$ and $m(B)$.
###### Proof of Lemma 4.8.
Let $\\{L_{ij}\\}$, $1\leqslant i\leqslant p$, $1\leqslant j\leqslant q$, be
the system of disjoint closed intervals given by Lemma 4.7. We use Theorem 4.6
to find a $p\times q$ doubly stochastic array $M=(c_{ij})$, whose support is
of size $p+q-1$ (which is the smallest possible size as $p,q$ are coprime). We
define a function $f$ on $\mathbb{R}$ by $f(x):=c_{ij}$ for $x\in L_{ij}$,
$1\leqslant i\leqslant p$, $1\leqslant j\leqslant q$, and $f(x):=0$ if $x$
does not lie in any one of the intervals $L_{ij}$.
Let $\Omega$ be the support of the function $f$. Then $\Omega$ is the union of
those intervals $L_{ij}$ for which $(i,j)\in\operatorname{supp}M$. Since
$|\operatorname{supp}M|=p+q-1$, and since the intervals $L_{ij}$ are disjoint
and have length $\gamma$, it follows that $m(\Omega)=(p+q-1)\gamma$.
Recall that $\pi_{\alpha}(L_{ij})$ is a closed interval
$I_{j}\subset\mathbb{T}_{\alpha}$ not depending on $i$, and the intervals
$I_{1},\dots,I_{q}$ are disjoint. Let $A=I_{1}\cup\dots\cup I_{q}$; then $A$
has measure $q\gamma$. We show that $\pi_{\alpha}(f)=p$ a.e. on $A$. It
suffices to verify that this holds on each one of the intervals $I_{j}$.
Indeed, as the sum of the entries of the matrix $M$ at the $j$’th column is
$p$, we have
$\pi_{\alpha}(f)=\sum_{i=1}^{p}c_{ij}=p\quad\text{on $I_{j}$.}$ (4.10)
Next, in a similar way, we let $B=J_{1}\cup\dots\cup J_{p}$; then $B$ has
measure $p\gamma$. We show that $\pi_{\beta}(f)=q$ a.e. on $B$, by checking
that this holds on each $J_{i}$. Indeed, since this time the sum of the
entries of $M$ at the $i$’th row is $q$, we get
$\pi_{\beta}(f)=\sum_{j=1}^{q}c_{ij}=q\quad\text{on $J_{i}$.}$ (4.11)
Finally, since $p$ is nonzero, it follows from (4.10) that $\pi_{\alpha}(f)$
does not vanish on $A$, hence $\pi_{\alpha}(\Omega)$ must cover $A$. But
$\pi_{\alpha}(\Omega)$ is a subset of $A$, so we get $\pi_{\alpha}(\Omega)=A$.
Similarly, since $q$ is nonzero, (4.11) implies that $\pi_{\beta}(\Omega)$
covers $B$, and since $\pi_{\beta}(\Omega)$ is also a subset of $B$ we
conclude that $\pi_{\beta}(\Omega)=B$. The lemma is thus proved. ∎
### 4.6.
Now we are able to prove Theorem 2.5 based on the results above.
###### Proof of Theorem 2.5.
Let $p,q$ be two positive coprime integers, and denote
$\sigma:=\min\Big\\{\frac{\alpha}{q},\frac{\beta}{p}\Big\\}.$ (4.12)
Given $\varepsilon>0$, we choose a number $\gamma$ such that
$\sigma-\varepsilon<\gamma<\sigma$ (we can assume that $\varepsilon$ is
smaller than $\sigma$). We then use Lemma 4.8 to obtain an elementary set
$\Omega_{1}\subset\mathbb{R}$ of measure $(p+q-1)\gamma$, and a measurable
function $f_{1}$ supported on $\Omega_{1}$, such that the elementary set
$A_{1}:=\pi_{\alpha}(\Omega_{1})$ has measure $q\gamma$, the elementary set
$B_{1}:=\pi_{\beta}(\Omega_{1})$ has measure $p\gamma$, and such that
$\pi_{\alpha}(f_{1})=p$ a.e. on $A_{1}$, while $\pi_{\beta}(f_{1})=q$ a.e. on
$B_{1}$.
Next, we apply Lemma 4.4 to the two elementary sets
$A_{2}:=\mathbb{T}_{\alpha}\setminus\operatorname{int}(A_{1})$,
$B_{2}:=\mathbb{T}_{\beta}\setminus\operatorname{int}(B_{1})$, and to the
constant functions $\varphi=p$, $\psi=q$. The lemma allows us to find a set
$\Omega_{2}\subset\mathbb{R}$ and a measurable function $f_{2}$ supported on
$\Omega_{2}$, such that $\pi_{\alpha}(\Omega_{2})=A_{2}$,
$\pi_{\alpha}(f_{2})=p$ a.e. on $A_{2}$, $\pi_{\beta}(\Omega_{2})=B_{2}$,
$\pi_{\beta}(f_{2})=q$ a.e. on $B_{2}$, and
$m(\Omega_{2})=m(A_{2})+m(B_{2})=(\alpha-q\gamma)+(\beta-p\gamma)=\alpha+\beta-(p+q)\gamma.$
(4.13)
The lemma also allows us to choose $\Omega_{2}$ to be disjoint from
$\Omega_{1}$.
We now define
$\Omega:=\Omega_{1}\cup\Omega_{2},\quad f:=f_{1}+f_{2}.$ (4.14)
Then $f$ is supported by $\Omega$. Since $\Omega_{1}$ and $\Omega_{2}$ are
disjoint, we have
$m(\Omega)=m(\Omega_{1})+m(\Omega_{2})=\alpha+\beta-\gamma.$ (4.15)
But recall that we have chosen $\gamma$ such that $\gamma>\sigma-\varepsilon$,
hence (4.15) yields that
$\operatorname{mes}(\operatorname{supp}f)<\alpha+\beta-\sigma+\varepsilon$.
That is, the condition (2.1) is satisfied.
We must verify that $f$ satisfies the tiling condition (1.3). We first show
that $\pi_{\alpha}(f)=p$ a.e. on $\mathbb{T}_{\alpha}$. Indeed, we have
$\pi_{\alpha}(\Omega_{1})=A_{1}$, $\pi_{\alpha}(\Omega_{2})=A_{2}$, where
$A_{1},A_{2}$ have disjoint interiors and their union is the whole
$\mathbb{T}_{\alpha}$. Moreover, $\pi_{\alpha}(f)=\pi_{\alpha}(f_{1})=p$ a.e.
on $A_{1}$, and $\pi_{\alpha}(f)=\pi_{\alpha}(f_{2})=p$ a.e. on $A_{2}$, which
proves the claim.
Finally we show that also $\pi_{\beta}(f)=q$ a.e. on $\mathbb{T}_{\beta}$. In
a similar way, we have $\pi_{\beta}(\Omega_{1})=B_{1}$,
$\pi_{\beta}(\Omega_{2})=B_{2}$, and $B_{1},B_{2}$ have disjoint interiors and
their union is $\mathbb{T}_{\beta}$. As before, we have
$\pi_{\beta}(f)=\pi_{\beta}(f_{1})=q$ a.e. on $B_{1}$, and
$\pi_{\beta}(f)=\pi_{\beta}(f_{2})=q$ a.e. on $B_{2}$. This shows that the
tiling condition (1.3) indeed holds and thus the theorem is proved. ∎
### 4.7. Remarks
1\. Let us say that a measurable function $f$ on $\mathbb{R}$ is _piecewise
constant_ if there is a strictly increasing real sequence $\\{\lambda_{n}\\}$,
$n\in\mathbb{Z}$, with no finite accumulation points, such that $f$ is
constant a.e. on each one of the intervals $[\lambda_{n},\lambda_{n+1})$ (note
that these intervals constitute a partition of $\mathbb{R}$). One can verify
that our proof of Theorems 2.1, 2.5 and 2.7 yields a function $f$ which is not
only measurable, but in fact is piecewise constant on $\mathbb{R}$.
2\. Our construction method allows us to choose the function $f$ in Theorems
2.1, 2.5 and 2.7 to have “dispersed” support, that is, $f$ can be supported on
the union of (countably many) small intervals that are located far apart from
each other.
## 5\. Simultaneous tiling by functions of bounded support
### 5.1.
One can easily notice that our proof of Theorems 2.1, 2.5 and 2.7 above yields
a function $f$ whose support lies inside any given half-line $(r,+\infty)$, so
that $\operatorname{supp}f$ is bounded from below. One may ask whether the
function $f$ can be chosen such that the support is bounded from both above
and below at the same time.
The answer is obviously ‘yes’ if we have $(p,q)=\lambda(\beta,\alpha)$ where
$\lambda$ is a scalar, since in this case the function
$f=\lambda\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$ satisfies the
simultaneous tiling condition (1.3) and has bounded support.
To the contrary, if the tiling level vector $(p,q)$ is not proportional to
$(\beta,\alpha)$, then Theorem 2.2 provides the question above with a negative
answer: _$f$ cannot be supported on any bounded set_. This theorem will be
proved in the present section.
We note that our proof in fact does not use the assumption that $\alpha,\beta$
are rationally independent. However, if $\alpha,\beta$ are linearly dependent
over the rationals, then we know from Theorem 2.8 that there do not exist any
simultaneous tilings (1.3) with a level vector $(p,q)$ that is not
proportional to $(\beta,\alpha)$, so the result is vacuous in this case.
### 5.2.
We now turn to prove Theorem 2.2. To this end, we shall use a result due to
Anosov [Ano73, Theorem 1] that we state here as a lemma:
###### Lemma 5.1 ([Ano73]).
Let $\varphi\in L^{1}(\mathbb{T}_{\alpha})$. If the equation
$\psi(x)-\psi(x+\beta)=\varphi(x)\quad\text{a.e.}$ (5.1)
has a measurable solution $\psi:\mathbb{T}_{\alpha}\to\mathbb{C}$, then
$\int_{\mathbb{T}_{\alpha}}\varphi=0$.
In fact, in [Ano73, Theorem 1] a more general version of this result was
stated and proved, in the context of a measure-preserving transformation
acting on a finite measure space. Here we only state the result in the special
case where the transformation is a rotation of the circle
$\mathbb{T}_{\alpha}$.
###### Proof of Theorem 2.2.
Assume that $f$ is a measurable function on $\mathbb{R}$ satisfying (1.3). We
shall suppose that $f$ has bounded support, and prove that this implies that
the vector $(p,q)$ must be proportional to $(\beta,\alpha)$.
By translating $f$ we may assume that $\operatorname{supp}f\subset[0,n\beta)$,
where $n$ is a sufficiently large positive integer. We can then write
$f=\sum_{k=0}^{n-1}f_{k},\quad f_{k}:=f\cdot\mathds{1}_{[k\beta,(k+1)\beta)}.$
(5.2)
By the first condition in (1.3) we have
$\sum_{k=0}^{n-1}\pi_{\alpha}(f_{k})=\pi_{\alpha}(f)=p\quad\text{a.e.}$ (5.3)
The second condition in (1.3) can be equivalently stated as
$\sum_{k=0}^{n-1}f_{k}(x+k\beta)=q\cdot\mathds{1}_{[0,\beta)}(x)\quad\text{a.e.}$
(5.4)
It follows from (5.4) that
$\sum_{k=0}^{n-1}\pi_{\alpha}(f_{k})(x+k\beta)=q\cdot\pi_{\alpha}(\mathds{1}_{[0,\beta)})(x)\quad\text{a.e.}$
(5.5)
Let us define
$\varphi:=p-q\cdot\pi_{\alpha}(\mathds{1}_{[0,\beta)}),\quad\psi_{k}:=\pi_{\alpha}(f_{k}),\quad
0\leqslant k\leqslant n-1,$ (5.6)
then $\varphi\in L^{1}(\mathbb{T}_{\alpha})$, while the $\psi_{k}$ are
measurable functions on $\mathbb{T}_{\alpha}$. If we now subtract the equality
(5.5) from (5.3), this yields
$\sum_{k=1}^{n-1}(\psi_{k}(x)-\psi_{k}(x+k\beta))=\varphi(x)\quad\text{a.e.}$
(5.7)
Lastly, we introduce a measurable function $\psi$ on $\mathbb{T}_{\alpha}$
defined by
$\psi(x):=\sum_{k=1}^{n-1}\sum_{j=0}^{k-1}\psi_{k}(x+j\beta),$ (5.8)
and observe that (5.7) can be reformulated as
$\psi(x)-\psi(x+\beta)=\varphi(x)\quad\text{a.e.}$ (5.9)
Indeed, for each fixed $k$ the inner sum in (5.8) telescopes,
$\sum_{j=0}^{k-1}\big(\psi_{k}(x+j\beta)-\psi_{k}(x+(j+1)\beta)\big)=\psi_{k}(x)-\psi_{k}(x+k\beta),$
and summing over $1\leqslant k\leqslant n-1$ recovers the left-hand side of (5.7).
This allows us to apply Lemma 5.1, which yields
$\int_{\mathbb{T}_{\alpha}}\varphi=0$. But using (5.6) we have
$\int_{\mathbb{T}_{\alpha}}\varphi=\int_{\mathbb{T}_{\alpha}}(p-q\cdot\pi_{\alpha}(\mathds{1}_{[0,\beta)}))=p\alpha-q\int_{\mathbb{R}}\mathds{1}_{[0,\beta)}=p\alpha-q\beta.$
(5.10)
We conclude that $p\alpha-q\beta=0$, so the vector $(p,q)$ is proportional to
$(\beta,\alpha)$. ∎
## 6\. Simultaneous tiling by a bounded function
### 6.1.
The following question was posed to us by Kolountzakis: Let $\alpha,\beta$ be
rationally independent. Given two arbitrary complex numbers $p,q$, does there
exist a _bounded_ measurable function $f$ on $\mathbb{R}$ satisfying the
simultaneous tiling condition (1.3)?
The answer is once again ‘yes’ if we have $(p,q)=\lambda(\beta,\alpha)$,
$\lambda\in\mathbb{C}$, since in this case the continuous, compactly supported
function $f=\lambda\mathds{1}_{[0,\alpha)}\ast\mathds{1}_{[0,\beta)}$
satisfies (1.3).
On the other hand, the problem is nontrivial if the vector $(p,q)$ is not
proportional to $(\beta,\alpha)$. We note that in this case, a bounded
function $f$ satisfying (1.3) _cannot be supported on any set of finite
measure_. Indeed, if $\operatorname{mes}(\operatorname{supp}f)$ is finite then
$f$ must be in $L^{1}(\mathbb{R})$, which is not possible due to Proposition
1.1.
We will nevertheless prove that the question above admits an affirmative
answer. Moreover, one can always choose $f$ to be continuous and vanishing at
infinity:
###### Theorem 6.1.
Let $\alpha,\beta$ be rationally independent. For any two complex numbers
$p,q$ one can find a continuous function $f$ on $\mathbb{R}$ vanishing at
infinity, and satisfying (1.3).
### 6.2.
In what follows we assume $\alpha,\beta$ to be rationally independent. Our
proof of Theorem 6.1 is based on the technique used to prove Lemma 4.4, but
this time we will use the following variant of Lemma 4.3.
###### Lemma 6.2.
Let $A$ be an elementary set in $\mathbb{T}_{\alpha}$, and $\varphi$ be a
continuous function on $A$. Then given any $\delta>0$ and any nonempty open
interval $J\subset\mathbb{T}_{\beta}$, one can find an elementary set
$U\subset\mathbb{R}$ and a continuous function $f$ on $\mathbb{R}$ such that
1. (i)
$\pi_{\alpha}(U)=A$;
2. (ii)
$\pi_{\beta}(U)\subset J$;
3. (iii)
$f$ is supported on $U$, $|f(x)|\leqslant\delta$ for all $x\in U$;
4. (iv)
$\pi_{\alpha}(f)=\varphi$ on some elementary set $A^{\prime}\subset A$,
$m(A\setminus A^{\prime})<\delta$.
Moreover, $U$ can be chosen inside the half-line $(r,+\infty)$ for any given
number $r$.
###### Proof.
We apply Lemma 4.2 to the elementary set $A$ and to the open interval $J$. The
lemma yields an elementary set $U_{0}\subset\mathbb{R}$ such that
$\pi_{\alpha}(U_{0})=A$, $\pi_{\alpha}$ is one-to-one on
$\operatorname{int}(U_{0})$, and $\pi_{\beta}(U_{0})\subset J$. Let
$M:=\sup|\varphi(t)|$, $t\in A$, and choose an integer $N$ sufficiently large
so that $N\delta>M$. We then find integers $m_{1},\dots,m_{N}$ such that, if
we denote $U_{j}:=U_{0}+m_{j}\alpha$, $1\leqslant j\leqslant N$, then
$\pi_{\beta}(U_{j})\subset J$ for every $j$. This is possible due to
Kronecker’s theorem, since $\alpha,\beta$ are rationally independent and
$\pi_{\beta}(U_{0})$ is a compact subset of the open interval $J$. We can also
choose the integers $m_{1},\dots,m_{N}$ such that the sets $U_{1},\dots,U_{N}$
are disjoint and all of them are contained in a given half-line $(r,+\infty)$.
We now find an elementary set
$U^{\prime}_{0}\subset\operatorname{int}(U_{0})$, such that the (also
elementary) set $A^{\prime}:=\pi_{\alpha}(U^{\prime}_{0})$ satisfies
$m(A\setminus A^{\prime})<\delta$. Let $f_{0}$ be a continuous function on
$\mathbb{R}$, supported on $U_{0}$, and satisfying
$f_{0}(x)=\varphi(\pi_{\alpha}(x))$ for $x\in U^{\prime}_{0}$, and
$|f_{0}(x)|\leqslant M$ for every $x\in\mathbb{R}$. Since $\pi_{\alpha}$ is
one-to-one on $\operatorname{int}(U_{0})$, we have
$\pi_{\alpha}(f_{0})=\varphi$ on the set $A^{\prime}$.
Finally we define the continuous function
$f(x):=\frac{1}{N}\sum_{j=1}^{N}f_{j}(x),\quad
f_{j}(x):=f_{0}(x-m_{j}\alpha).$ (6.1)
Then $f_{j}$ is supported on $U_{j}$, $1\leqslant j\leqslant N$, and hence $f$
is supported on the union
$U:=U_{1}\cup U_{2}\cup\dots\cup U_{N}.$ (6.2)
Recall that $U_{1},\dots,U_{N}$ are disjoint sets, and that $|f_{j}|\leqslant
M$ for each $j$. It thus follows from (6.1) that $|f(x)|\leqslant
MN^{-1}\leqslant\delta$ for every $x\in\mathbb{R}$. So property (iii) is
satisfied.
Notice that $\pi_{\alpha}(U_{j})=\pi_{\alpha}(U_{0})=A$ for every $j$. In
particular, this implies (i).
Since $f_{j}$ is a translate of $f_{0}$ by an integer multiple of $\alpha$, we
have $\pi_{\alpha}(f_{j})=\pi_{\alpha}(f_{0})$ for each $1\leqslant j\leqslant
N$. It follows that $\pi_{\alpha}(f)=\pi_{\alpha}(f_{0})=\varphi$ on
$A^{\prime}$. So (iv) is established.
Lastly, $\pi_{\beta}(U_{j})\subset J$ for every $j$, hence by (6.2) we have
$\pi_{\beta}(U)\subset J$ as well. We conclude that also the condition (ii)
holds and the lemma is proved. ∎
###### Proof of Theorem 6.1.
The approach is similar to the proof of Lemma 4.4, so we shall be brief. We
construct by induction a sequence $A_{1},A_{2},\dots$ of pairwise disjoint
elementary sets in $\mathbb{T}_{\alpha}$, a sequence $B_{1},B_{2},\dots$ of
pairwise disjoint elementary sets in $\mathbb{T}_{\beta}$, a sequence
$U_{1},V_{1},U_{2},V_{2},\dots$ of pairwise disjoint elementary sets in
$\mathbb{R}$ accumulating at infinity, and a sequence
$g_{1},h_{1},g_{2},h_{2},\dots$ of continuous functions on $\mathbb{R}$, in
the following way.
Suppose that we have already constructed the sets $A_{k},B_{k},U_{k},V_{k}$
and the functions $g_{k},h_{k}$ for $1\leqslant k\leqslant n-1$. We use Lemma
6.2 to find an elementary set $U_{n}\subset\mathbb{R}$, and a continuous
function $g_{n}$ on $\mathbb{R}$, such that $\pi_{\alpha}(U_{n})$ is disjoint
from the sets $A_{1},\dots,A_{n-1}$, $\pi_{\beta}(U_{n})$ is disjoint from the
sets $B_{1},\dots,B_{n-1}$, $g_{n}$ is supported on $U_{n}$,
$|g_{n}(x)|\leqslant 2^{-n}$ for all $x\in U_{n}$, and
$\pi_{\alpha}(g_{n})=p-\sum_{k=1}^{n-1}\pi_{\alpha}(h_{k})$ on some elementary
set $A_{n}\subset\mathbb{T}_{\alpha}$, which is disjoint from
$A_{1},\dots,A_{n-1}$, and such that
$(1-2^{-n})\alpha<m(A_{1}\cup\dots\cup A_{n})<\alpha.$ (6.3)
Then, we use again Lemma 6.2 but with the roles of $\alpha,\beta$
interchanged, to find an elementary set $V_{n}\subset\mathbb{R}$, and a
continuous function $h_{n}$ on $\mathbb{R}$, such that $\pi_{\beta}(V_{n})$ is
disjoint from the sets $B_{1},\dots,B_{n-1}$, $\pi_{\alpha}(V_{n})$ is
disjoint from $A_{1},\dots,A_{n}$, $h_{n}$ is supported on $V_{n}$,
$|h_{n}(x)|\leqslant 2^{-n}$ for all $x\in V_{n}$, and
$\pi_{\beta}(h_{n})=q-\sum_{k=1}^{n}\pi_{\beta}(g_{k})$ on some elementary set
$B_{n}\subset\mathbb{T}_{\beta}$, which is disjoint from
$B_{1},\dots,B_{n-1}$, and such that
$(1-2^{-n})\beta<m(B_{1}\cup\dots\cup B_{n})<\beta.$ (6.4)
We observe that Lemma 6.2 allows us to choose the sets
$U_{1},V_{1},U_{2},V_{2},\dots$ to be pairwise disjoint and accumulating at
$+\infty$. So we may assume this to be the case.
Finally, we define $f:=\sum_{n=1}^{\infty}(g_{n}+h_{n})$, which is a
continuous function on $\mathbb{R}$ vanishing at infinity. Similarly to the
proof of Lemma 4.4, one can check that $\pi_{\alpha}(f)=p$ on the union
$\cup_{n=1}^{\infty}A_{n}$, a set of full measure in $\mathbb{T}_{\alpha}$,
while $\pi_{\beta}(f)=q$ on $\cup_{n=1}^{\infty}B_{n}$, a set of full measure
in $\mathbb{T}_{\beta}$. Thus $f$ satisfies the simultaneous tiling condition
(1.3). (We note that both sums in (1.3) have only finitely many nonzero terms
for almost every $x\in\mathbb{R}$.) ∎
### 6.3. Remarks
1\. One can choose the function $f$ in Theorem 6.1 to be not only continuous
but in fact _smooth_. To this end it suffices to replace Lemma 6.2 with a
similar version, where $\varphi$ and $f$ are smooth functions.
2\. If the tiling level vector $(p,q)$ is not proportional to
$(\beta,\alpha)$, then the function $f$ in Theorem 6.1 can only have slow
decay at infinity. In fact, $f$ cannot be in $L^{1}(\mathbb{R})$ due to
Proposition 1.1.
## 7\. Incommensurable arithmetic progressions: Lower bounds for the support
size of a simultaneously tiling function
In this section we prove Theorems 2.3 and 2.6. These theorems give a sharp
lower bound for the measure of $\operatorname{supp}f$, where $f$ is an
arbitrary measurable function satisfying the simultaneous tiling condition
(1.3).
Our proof is based on a graph-theoretic approach. We will show that any
simultaneously tiling function $f$ induces a weighted bipartite graph, whose
vertices and edges are also endowed with a measure space structure. The main
method of the proof is an _iterative leaves removal process_ that we apply to
this graph.
Throughout this section we again suppose that $\alpha,\beta>0$ are two fixed,
rationally independent real numbers.
### 7.1. Bipartite graphs and iterative leaves removal
We start by introducing some purely graph-theoretic concepts and notation.
A _bipartite graph_ is a triple $G=(A,B,E)$, consisting of two disjoint sets
$A,B$ of vertices, and a set $E\subset A\times B$ of edges. The sets $A,B$ and
$E$ may be infinite, and may even be uncountable. However, we will assume that
_each vertex in the graph $G$ has finite degree_.
For any set of vertices $S\subset A\cup B$, we denote by $E(S)$ the set of all
edges which are incident to a vertex from $S$.
For each $k\geqslant 0$ we let $A_{k}$ be the set of vertices of degree $k$ in
$A$, and $B_{k}$ be the set of vertices of degree $k$ in $B$. In particular,
$A_{1}$ and $B_{1}$ are the sets of leaves in $A$ and $B$, respectively. Note
that the sets $A_{k}$, $B_{k}$ form a partition of $A\cup B$.
A vertex $v\in A\cup B$ will be called a _star vertex_ if all the neighbors of
$v$ in the graph are leaves. We denote by $A_{*}$ the set of star vertices
which belong to $A$, and by $B_{*}$ the set of star vertices that belong to
$B$.
###### Definition 7.1 (leaves removal).
Given a bipartite graph $G=(A,B,E)$ _with no isolated vertices_ , we define
its _$A$ -leaves-removed-graph_ to be the graph
$G_{A}=(A\setminus A_{1},B\setminus B_{*},E\setminus E(A_{1})),$ (7.1)
that is, $G_{A}$ is the graph obtained from $G$ by removing all the leaves on
the $A$-side (including the edges incident to those leaves) and then dropping
the star vertices in $B$, which are the vertices on the $B$-side that became
isolated due to the removal of all their neighbors. Similarly we define the
_$B$ -leaves-removed graph_ to be
$G_{B}=(A\setminus A_{*},B\setminus B_{1},E\setminus E(B_{1})).$ (7.2)
###### Remark 7.2.
Notice that assuming $G$ to have no isolated vertices implies that the new
graph $G_{A}$ must have no isolated vertices either. Indeed, when we remove
the leaves from $A$, the only vertices which become isolated are those in
$B_{*}$, and we make sure to remove these vertices from $B$. Similarly, the
graph $G_{B}$ has no isolated vertices.
###### Definition 7.3 (iterative leaves removal).
Given a bipartite graph $G=(A,B,E)$ _with no isolated vertices_ , we define
its _leaves-removal-graph-sequence_ $G^{(n)}=(A^{(n)},B^{(n)},E^{(n)})$ as
follows. We let $G^{(0)}=G$, and for each $n$, if $n$ is even we let
$G^{(n+1)}=(G^{(n)})_{A}$, while if $n$ is odd then $G^{(n+1)}=(G^{(n)})_{B}$.
In other words, the sequence is obtained by consecutive removal of leaves
alternately from each side of the graph. First we remove all the leaves from
the $A$-side, as well as the star vertices on the $B$-side. By doing so we may
have created some new leaves on the $B$-side, as some vertices in $B$ may have
lost all their neighbors in $A$ but one. In the second step we remove all the
leaves from the $B$-side and the star vertices on the $A$-side. Then again
some vertices on the $A$-side may become leaves. The process continues in a
similar fashion.
Notice that if at the $n$’th step of the iterative process there are no leaves
to be removed on the relevant side of the graph, then we simply obtain
$G^{(n+1)}=G^{(n)}$.
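Definitions 7.1 and 7.3 are purely combinatorial, so for a finite bipartite graph the process can be sketched in a few lines of illustrative Python (a toy model only — the graphs $G(\Omega)$ of this section may be uncountable). Representing the graph by its edge set automatically drops the star vertices on the opposite side, since a vertex exists only through its incident edges.

```python
def remove_leaves(edges, side):
    # One leaves-removal step of Definition 7.1: remove every edge
    # incident to a leaf on the given side (0 = A-side, 1 = B-side).
    deg = {}
    for e in edges:
        deg[e[side]] = deg.get(e[side], 0) + 1
    leaves = {v for v, d in deg.items() if d == 1}
    return {e for e in edges if e[side] not in leaves}

def iterate_leaves_removal(edges):
    # The iterative process of Definition 7.3, run until it stabilizes.
    # Edges are only ever removed, so the loop terminates on finite input.
    while True:
        new = remove_leaves(remove_leaves(edges, 0), 1)
        if new == edges:
            return edges
        edges = new

# A path a1-b1-a2-b2-a3 is consumed entirely by the process,
path = {("a1", "b1"), ("a2", "b1"), ("a2", "b2"), ("a3", "b2")}
assert iterate_leaves_removal(path) == set()
# while a 4-cycle has no leaves and survives intact.
cycle = {("a1", "b1"), ("a1", "b2"), ("a2", "b1"), ("a2", "b2")}
assert iterate_leaves_removal(cycle) == cycle
```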
###### Definition 7.4 (weighted bipartite graph).
We say that a bipartite graph $G=(A,B,E)$ is _weighted_ if it is endowed with
an edge-weight function $w:E\rightarrow\mathbb{C}$ which assigns a complex-
valued weight to each edge of the graph.
For each vertex $v\in A\cup B$, the sum of the weights of all the (finitely
many) edges incident to $v$ will be called the _weight of the vertex $v$_.
### 7.2. The graph induced by a subset of the real line
We now turn our attention to a specific construction of a bipartite graph.
###### Definition 7.5 (the induced graph $G(\Omega)$).
Let $\Omega$ be an arbitrary subset of $\mathbb{R}$. We associate to $\Omega$
a bipartite graph $G(\Omega)$ defined as follows. The set of vertices of the
graph is the union of the two disjoint sets $A=\pi_{\alpha}(\Omega)$ and
$B=\pi_{\beta}(\Omega)$, which form the bipartition of the graph. The set of
edges $E$ of the graph consists of all edges of the form
$e(x):=(\pi_{\alpha}(x),\pi_{\beta}(x))$ where $x$ goes through the elements
of $\Omega$.
###### Remark 7.6.
We note that distinct points $x,y\in\Omega$ correspond to distinct edges
$e(x)$, $e(y)$ in $E$. Indeed, if $e(x)=e(y)$ then we must have
$x-y\in\alpha\mathbb{Z}\cap\beta\mathbb{Z}$, which in turn implies that $x=y$
since $\alpha,\beta$ are rationally independent. Thus, the representation of
the elements of $\Omega$ as edges in the graph is one-to-one. In the sequel,
we will often identify edges of the graph with elements of the set $\Omega$.
###### Definition 7.7 (finite degrees assumption).
We say that a set $\Omega\subset\mathbb{R}$ satisfies the _finite degrees
assumption_ if each vertex in the graph $G(\Omega)$ has finite degree. This is
the case if and only if for every $x\in\mathbb{R}$, the sets
$\Omega\cap(x+\alpha\mathbb{Z})$ and $\Omega\cap(x+\beta\mathbb{Z})$ have both
finitely many elements.
In what follows, we shall assume that the given set $\Omega\subset\mathbb{R}$
satisfies the finite degrees assumption.
###### Remark 7.8.
Notice that the graph $G(\Omega)=(A,B,E)$ has no isolated vertices. Indeed, if
$a$ is a vertex in $A$ then $a=\pi_{\alpha}(x)$ for some $x\in\Omega$, so $a$
is incident to the edge $e(x)=(\pi_{\alpha}(x),\pi_{\beta}(x))$. Similarly,
any vertex $b\in B$ is incident to at least one edge in $E$.
###### Remark 7.9.
Let $G_{A}(\Omega)$ be the $A$-leaves-removed-graph of $G(\Omega)$. Notice
that $G_{A}(\Omega)$ is the graph induced by the set
$\Omega_{A}=\Omega\setminus E(A_{1})$, where here we identify edges of the
graph with elements of the set $\Omega$ (see Remark 7.6). Thus we have
$G_{A}(\Omega)=G(\Omega_{A})$. Similarly, the $B$-leaves-removed-graph
$G_{B}(\Omega)$ of $G(\Omega)$ is the graph induced by the set
$\Omega_{B}=\Omega\setminus E(B_{1})$ (where again edges of the graph are
identified with elements of $\Omega$). Hence the iterative leaves removal
process applied to the graph $G(\Omega)$ induces a sequence of sets
$\Omega^{(n)}\subset\mathbb{R}$, satisfying
$\Omega^{(n+1)}\subset\Omega^{(n)}\subset\Omega$ for all $n$, and such that
the leaves-removal-graph-sequence $G^{(n)}(\Omega)$ is given by
$G^{(n)}(\Omega)=G(\Omega^{(n)})$.
### 7.3. Vertices and edges as measure spaces
Assume now that $\Omega$ is a _measurable_ subset of $\mathbb{R}$, satisfying
the finite degrees assumption. In this case the induced graph $G(\Omega)$ can
be endowed with an additional measure space structure, as follows.
Recall that we have endowed $\mathbb{T}_{\alpha}$ and $\mathbb{T}_{\beta}$
with the Lebesgue measure, normalized such that
$m(\mathbb{T}_{\alpha})=\alpha$ and $m(\mathbb{T}_{\beta})=\beta$. We notice
that the two vertex sets $A=\pi_{\alpha}(\Omega)$ and $B=\pi_{\beta}(\Omega)$
of the graph $G(\Omega)=(A,B,E)$ are measurable subsets of
$\mathbb{T}_{\alpha}$ and $\mathbb{T}_{\beta}$ respectively. We may therefore
consider $A$ and $B$ as measure spaces, with the measure space structure
induced from the embedding of $A$ and $B$ into $\mathbb{T}_{\alpha}$ and
$\mathbb{T}_{\beta}$ respectively.
We also endow the edge set $E$ with a measure space structure, induced from
the identification of $E$ with $\Omega$ as a (measurable) subset of
$\mathbb{R}$ as in Remark 7.6. (Notice that we _do not_ endow $E$ with the
measure space structure induced from the embedding of $E$ into the product
space $A\times B$.)
In the sequel, we will also consider the entire vertex set $V:=A\cup B$ as a
single measure space, formed by the disjoint union of the two measure spaces
$A$ and $B$.
###### Lemma 7.10 (measurability lemma).
1. (i)
For each $k$ the set $A_{k}$ of vertices of degree $k$ in $A$ is a measurable
subset of $A$;
2. (ii)
The set $A_{*}$ of star vertices in $A$ (that is, the vertices in $A$ all of
whose neighbors are leaves) is a measurable subset of $A$;
3. (iii)
If $S\subset A$ is a measurable set of vertices, then $E(S)$ (the set of edges
incident to a vertex in $S$) is a measurable subset of $E$.
Similar assertions hold for the sets $B_{k}$, $B_{*}$ and $S\subset B$.
###### Proof.
If $a$ is a vertex in $A$, then the degree of $a$ in the graph $G(\Omega)$ is
equal to $\pi_{\alpha}(\mathds{1}_{\Omega})(a)$. Hence
$\pi_{\alpha}(\mathds{1}_{\Omega})$ is an everywhere finite, measurable
function on $\mathbb{T}_{\alpha}$. Since for each $k$ the set $A_{k}$ is the
preimage of $\\{k\\}$ under this function, it follows that $A_{k}$ is
measurable.
By a similar argument, also the set $B_{k}$ is measurable for each $k$.
Next we observe that
$A_{*}=A\setminus\pi_{\alpha}(\Omega\cap\pi_{\beta}^{-1}(B\setminus
B_{1})),\quad
B_{*}=B\setminus\pi_{\beta}(\Omega\cap\pi_{\alpha}^{-1}(A\setminus A_{1})),$
(7.3)
hence both sets $A_{*}$ and $B_{*}$ are measurable.
Finally, let $S$ be a measurable subset of $A$. Identifying the edges of the
graph with elements of $\Omega$, we have
$E(S)=\pi_{\alpha}^{-1}(S)\cap\Omega$, hence $E(S)$ is measurable. Similarly,
for any measurable set $S\subset B$, the set
$E(S)=\pi_{\beta}^{-1}(S)\cap\Omega$ is measurable. ∎
###### Remark 7.11.
Recall from Remark 7.9 that the iterative leaves removal process induces a
sequence of sets $\Omega^{(n)}\subset\mathbb{R}$, satisfying
$\Omega^{(n+1)}\subset\Omega^{(n)}\subset\Omega$ for all $n$, and such that
the leaves-removal-graph-sequence $G^{(n)}(\Omega)$ is given by
$G^{(n)}(\Omega)=G(\Omega^{(n)})$. It follows from Lemma 7.10 that if $\Omega$
is a measurable subset of $\mathbb{R}$, then all the sets $\Omega^{(n)}$ are
measurable too, since the set of edges removed at each step of the iterative
process is measurable.
For a vertex $a\in A$ we denote by $\deg_{A}(a)$ the degree of $a$ in the
graph $G(\Omega)$. Similarly, we denote by $\deg_{B}(b)$ the degree of a
vertex $b\in B$. Then $\deg_{A}$ and $\deg_{B}$ are nonnegative, integer-
valued functions on $A$ and $B$ respectively.
###### Lemma 7.12 (edge counting lemma).
$\deg_{A}$ is a measurable function on $A$. Moreover, for any measurable set
of vertices $S\subset A$ we have
$m(E(S))=\int_{S}\deg_{A}.$ (7.4)
In particular,
$m(E(A_{k}))=k\cdot m(A_{k}),\quad k=1,2,3,\dots$ (7.5)
Similar assertions hold for $\deg_{B}$ and $B_{k}$.
Notice that the integral in (7.4) may be finite or infinite, but in any case
it has a well-defined value, since $\deg_{A}$ is a nonnegative function.
###### Proof of Lemma 7.12.
Let $S\subset A$ be a measurable set. By identifying the edges of the graph
$G(\Omega)$ with elements of $\Omega$, we have
$E(S)=\pi_{\alpha}^{-1}(S)\cap\Omega$. Then
$m(E(S))=\int_{\mathbb{R}}\mathds{1}_{E(S)}=\int_{\mathbb{T}_{\alpha}}\pi_{\alpha}(\mathds{1}_{E(S)})$
(7.6)
(these equalities hold whether $E(S)$ has finite or infinite measure). But
notice that for a vertex $a\in A$ we have
$\pi_{\alpha}(\mathds{1}_{E(S)})(a)=\begin{cases}\deg_{A}(a),&a\in
S,\\\\[4.0pt] 0,&a\notin S.\end{cases}$ (7.7)
Thus (7.6) and (7.7) imply (7.4). Finally (7.5) is a consequence of (7.4),
since the function $\deg_{A}$ attains the constant value $k$ on the set
$A_{k}$. ∎
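In the discrete setting — a finite graph with counting measure in place of $m$ — the identity (7.5) is just a handshake count: the edges incident to the degree-$k$ vertices of $A$ number exactly $k$ times those vertices. A quick illustrative check (the particular edge set is arbitrary):

```python
from collections import Counter

# A small bipartite graph with edges (a, b); a plays the role of
# pi_alpha(x), b of pi_beta(x), and counting measure replaces m.
edges = {(a, b) for a in range(5) for b in range(6) if (a * b) % 4 != 1}

deg_A = Counter(a for a, _ in edges)           # deg_A as in Lemma 7.12
for k in set(deg_A.values()):
    A_k = {a for a, d in deg_A.items() if d == k}
    E_A_k = {e for e in edges if e[0] in A_k}  # E(A_k)
    assert len(E_A_k) == k * len(A_k)          # discrete form of (7.5)
```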
###### Remark 7.13.
Let $\mu_{A}$ be the measure on $A$ obtained as the image under the map
$\pi_{\alpha}$ of the Lebesgue measure restricted to $\Omega$. The assertion
of Lemma 7.12 may be equivalently stated by saying that $\deg_{A}$ is the
Radon-Nikodym derivative of $\mu_{A}$ with respect to the Lebesgue measure on
$A$.
### 7.4. A brief digression: Measure preserving graphs (graphings)
The graph $G(\Omega)$ endowed with its measure space structure is closely
related to the notion of a _measure preserving graph_ , or a _graphing_ , so
we will discuss this relation briefly here. For a detailed exposition we refer
to the book by Lovász [Lov12, Chapter 18].
A _Borel graph_ is a graph $(V,E)$ where the vertex set $V$ is a standard
Borel space (i.e. the measurable space associated to a separable, complete
metric space), and the edge set $E$ is a Borel set in $V\times V$. One can
show that
if $\Omega\subset\mathbb{R}$ is a Borel set, then the induced graph
$G(\Omega)$ is a Borel graph.
A _measure preserving graph_ , or a _graphing_ , is a Borel graph $(V,E)$
whose vertex set $V$ is endowed with a probability measure $\lambda$, such
that for any two measurable sets $U,W\subset V$ we have
$\int_{U}n_{W}(x)d\lambda(x)=\int_{W}n_{U}(x)d\lambda(x),$ (7.8)
where $n_{U}(x)$ and $n_{W}(x)$ denote the number of neighbors of $x$ within
the sets $U$ and $W$ respectively. The last condition relates the graph
structure to the measure space structure by requiring that “counting” the
edges between $U$ and $W$ from $U$, yields the same result as counting them
from $W$. One can show based on Lemma 7.12 that the graph $G(\Omega)$
satisfies the condition (7.8).
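The condition (7.8) can likewise be observed on a finite toy graph, where both integrals become finite sums counting the edges between $U$ and $W$. The example below is hypothetical and serves only to illustrate the symmetry:

```python
# Discrete analogue of the measure preserving condition (7.8): counting
# the edges between U and W from the U-side gives the same total as
# counting them from the W-side, since both sums count each such edge once.
edges = {(0, 3), (0, 4), (1, 4), (2, 5), (1, 3)}
adj = {v: set() for v in range(6)}
for u, w in edges:
    adj[u].add(w)
    adj[w].add(u)

def n_in(x, T):
    # number of neighbors of the vertex x lying inside the set T
    return len(adj[x] & T)

U, W = {0, 1}, {3, 4, 5}
assert sum(n_in(x, W) for x in U) == sum(n_in(x, U) for x in W)
```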
We point out however that in [Lov12] the notion of a graphing includes the
additional assumption that the degrees of the vertices in the graph are
bounded by a certain constant. By contrast, for the graph $G(\Omega)$ we
only assume that each vertex has finite degree, allowing the existence of
vertices with arbitrarily large degrees.
### 7.5. The graph induced by a simultaneously tiling function
Let now $f$ be a measurable function on $\mathbb{R}$, and consider the graph
$G(\Omega)=(A,B,E)$ induced by the set $\Omega:=\operatorname{supp}(f)$. By
identifying the edges of the graph with elements of the set $\Omega$ (as in
Remark 7.6) we may view $f$ as a function on the set of edges of the graph.
Thus $G(\Omega)$ becomes a weighted graph, with the weight function $f$.
###### Lemma 7.14.
Let $f$ be a measurable function on $\mathbb{R}$,
$\operatorname{mes}(\operatorname{supp}f)<+\infty$, and assume that $f$
satisfies the simultaneous tiling condition (1.3). Then $f$ can be redefined
on a set of measure zero so as to satisfy also the following two additional
properties:
1. (i)
The set $\Omega:=\operatorname{supp}f$ satisfies the finite degrees
assumption;
2. (ii)
If the induced graph $G(\Omega)=(A,B,E)$ is weighted by the function $f$, then
each vertex from $A$ has weight $p$, while each vertex from $B$ has weight
$q$.
###### Proof.
Denote the given function by $f_{0}$, and let
$\Omega_{0}:=\operatorname{supp}f_{0}$. Let $X_{0}$ be the set of all points
$x\in\mathbb{R}$ satisfying the conditions
$\sum_{k\in\mathbb{Z}}\mathds{1}_{\Omega_{0}}(x-k\alpha)<+\infty,\quad\sum_{k\in\mathbb{Z}}f_{0}(x-k\alpha)=p,$
(7.9)
as well as the conditions
$\sum_{k\in\mathbb{Z}}\mathds{1}_{\Omega_{0}}(x-k\beta)<+\infty,\quad\sum_{k\in\mathbb{Z}}f_{0}(x-k\beta)=q.$
(7.10)
The assumptions imply that $X_{0}$ is a set of full measure in $\mathbb{R}$.
Then also the set
$X:=\bigcap_{(n,m)\in\mathbb{Z}^{2}}(X_{0}+n\alpha+m\beta)$ (7.11)
has full measure in $\mathbb{R}$. We define $f:=f_{0}\cdot\mathds{1}_{X}$,
then $f$ coincides with $f_{0}$ a.e. We will show that the new function $f$
satisfies the two additional conditions (i) and (ii).
Let $G(\Omega)=(A,B,E)$ be the graph induced by the set
$\Omega:=\operatorname{supp}f=\Omega_{0}\cap X$. We first verify the condition
(i), namely, we show that each vertex of $G(\Omega)$ has finite degree.
Indeed, let $a\in A$, then $a=\pi_{\alpha}(x)$ for some $x\in\Omega$, and the
degree of $a$ is the number of elements in the set
$\Omega\cap(x+\alpha\mathbb{Z})$. But this set has finitely many elements,
which follows from the first condition in (7.9) using the fact that
$\Omega\subset\Omega_{0}$ and $x\in\Omega\subset X_{0}$. Hence each vertex
$a\in A$ has finite degree in the graph $G(\Omega)$. Similarly, each vertex
$b\in B$ also has finite degree.
Now let the graph $G(\Omega)$ be weighted by the function $f$. We show that
condition (ii) holds. Indeed, let $a\in A$, then again $a=\pi_{\alpha}(x)$ for
some $x\in\Omega$. Since $\Omega\subset X$ and the set $X$ is invariant under
translations by elements from $\alpha\mathbb{Z}$, we have
$x+\alpha\mathbb{Z}\subset X$ and thus $f$ coincides with $f_{0}$ on the set
$x+\alpha\mathbb{Z}$. This implies that
$\pi_{\alpha}(f)(a)=\pi_{\alpha}(f_{0})(a)=p$, where the last equality follows
from the second condition in (7.9) using the fact that $x\in\Omega\subset
X_{0}$. But $\pi_{\alpha}(f)(a)$ is exactly the weight of the vertex $a$ in
the graph $G(\Omega)$, hence the vertex $a$ has weight $p$. The proof that
each vertex $b\in B$ has weight $q$ is similar. ∎
In what follows, we assume that $f$ is a measurable function on $\mathbb{R}$
satisfying the simultaneous tiling condition (1.3). Since our goal is to
obtain a lower bound for the measure of the support of $f$, we assume that
$\Omega:=\operatorname{supp}f$ is a set of finite measure.
We endow the graph $G(\Omega)=(A,B,E)$ with the weight function $f$. By
redefining the values of $f$ on a set of measure zero (using Lemma 7.14) we
can assume with no loss of generality that every vertex in the graph has
finite degree, and that the vertices from $A$ have weight $p$, while the
vertices from $B$ have weight $q$.
We will also assume that the tiling levels $p$ and $q$ in (1.3) are both
nonzero (the case where one of $p,q$ is zero is covered by Theorem 2.7). This
implies that the supports of the functions $\pi_{\alpha}(f)$ and
$\pi_{\beta}(f)$ coincide with $\mathbb{T}_{\alpha}$ and $\mathbb{T}_{\beta}$
respectively up to a set of measure zero. Hence
$m(A)=\alpha,\quad m(B)=\beta.$ (7.12)
### 7.6. The Euler characteristic
Recall that the set $E$ of edges of the graph $G(\Omega)$ is endowed with a
measure space structure, induced from the identification of $E$ with $\Omega$
as a measurable subset of $\mathbb{R}$ (Remark 7.6). In particular,
$m(E)=m(\Omega)<+\infty$.
###### Definition 7.15 (Euler characteristic).
The quantity
$\chi=m(A)+m(B)-m(E)$ (7.13)
will be called the _Euler characteristic_ of the graph $G(\Omega)=(A,B,E)$.
We call this quantity the “Euler characteristic” since it is the difference
between the total measure of the vertices in the graph and the total measure
of the edges.
Similarly, we let
$\chi^{(n)}=m(A^{(n)})+m(B^{(n)})-m(E^{(n)})$ (7.14)
denote the Euler characteristics of the leaves-removal-graph-sequence
$G^{(n)}(\Omega)$.
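For a finite bipartite graph, the analogue of (7.13) is $\chi=|A|+|B|-|E|$; when the graph is a forest, $\chi$ equals the number of connected components, since every finite tree has one more vertex than it has edges. A toy check on hypothetical data:

```python
# Discrete analogue of the Euler characteristic (7.13): chi = |A| + |B| - |E|.
# For a forest, chi counts the connected components, because each finite
# tree satisfies (number of vertices) = (number of edges) + 1.
A, B = {"a1", "a2", "a3"}, {"b1", "b2"}
edges = [("a1", "b1"), ("a2", "b1"), ("a3", "b2")]  # two tree components
chi = len(A) + len(B) - len(edges)
assert chi == 2   # the forest has exactly two connected components
```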
Let ${L}^{(n)}$ be the set of leaves removed at the $n$’th step of the
iterative leaves removal process, that is, if $n$ is even then
${L}^{(n)}=A^{(n)}_{1}$ (the set of leaves in $A^{(n)}$), and if $n$ is odd
then ${L}^{(n)}=B^{(n)}_{1}$ (the set of leaves in $B^{(n)}$). The next lemma
gives a lower bound for the measure of the set ${L}^{(n)}$ in terms of the
Euler characteristic $\chi^{(n)}$.
###### Lemma 7.16 (removed leaves estimate).
Assume that $\alpha>\beta$. Then
$m({L}^{(0)})>\chi^{(0)},$ (7.15)
and for all $n\geqslant 1$ we have
$m({L}^{(n)})\geqslant 2\chi^{(n)}.$ (7.16)
The assumption that $\alpha>\beta$ can be made with no loss of generality, for
otherwise we may simply interchange the roles of $\alpha$ and $\beta$. The
reason we need to make this assumption is that we have chosen to begin the
iterative leaves removal process by removing leaves from the $A$-side. (If we
had $\beta>\alpha$ then the process would have to begin by removing leaves
from the $B$-side.)
To prove Lemma 7.16 we will first establish two additional lemmas. The first
one gives a lower bound for the measures of the sets of leaves $A_{1}$ and
$B_{1}$.
###### Lemma 7.17.
We have
$m(A_{1})\geqslant 2m(A)-m(\Omega),$ (7.17)
and similarly,
$m(B_{1})\geqslant 2m(B)-m(\Omega).$ (7.18)
###### Proof.
Recall that we denote by $A_{k}$ the set of vertices in $A$ of degree $k$.
Since the sets $A_{k}$ form a partition of $A$, we have
$m(A)=\sum_{k=1}^{\infty}m(A_{k}).$ (7.19)
In turn, the sets $E(A_{k})=\pi_{\alpha}^{-1}(A_{k})\cap\Omega$ form a
partition of $\Omega$, and by Lemma 7.12 we have $m(E(A_{k}))=km(A_{k})$.
Hence
$m(\Omega)=\sum_{k=1}^{\infty}m(E(A_{k}))=\sum_{k=1}^{\infty}km(A_{k}).$
(7.20)
Using (7.19) and (7.20) we conclude that
$2m(A)-m(A_{1})=m(A_{1})+2\sum_{k=2}^{\infty}m(A_{k})\leqslant\sum_{k=1}^{\infty}km(A_{k})=m(\Omega),$
(7.21)
which proves (7.17). The inequality (7.18) can be proved in a similar way. ∎
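In the discrete analogue, inequality (7.17) reads $|A_{1}|\geqslant 2|A|-|E|$, by exactly the computation in (7.21). A quick numerical check on a hypothetical graph:

```python
# Counting analogue of Lemma 7.17: the number of leaves on the A-side
# satisfies |A_1| >= 2|A| - |E|, since
# |A_1| + 2*sum_{k>=2} |A_k| <= sum_k k*|A_k| = |E|.
from collections import Counter

edges = [("a1", "b1"), ("a2", "b1"), ("a2", "b2"), ("a3", "b2"),
         ("a3", "b3"), ("a3", "b4")]
deg = Counter(a for a, _ in edges)    # degrees of the A-vertices
A1 = [a for a in deg if deg[a] == 1]  # leaves on the A-side
assert len(A1) >= 2 * len(deg) - len(edges)
```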
The next lemma is a more symmetric version of the previous one.
###### Lemma 7.18.
We have
$m(A_{1})+m(B_{1})\geqslant 2\chi.$ (7.22)
In other words, the measure of the set of leaves in the graph $G(\Omega)$,
both from $A$ and from $B$, is at least $2\chi$. This is an immediate
consequence of Lemma 7.17. Indeed, taking the sum of (7.17) and (7.18) yields
$m(A_{1})+m(B_{1})\geqslant 2m(A)-m(\Omega)+2m(B)-m(\Omega)=2\chi.$ (7.23)
Now we can prove Lemma 7.16 based on the previous two lemmas.
###### Proof of Lemma 7.16.
Recall from (7.12) that we have $m(A)=\alpha$, $m(B)=\beta$, and that we have
assumed $\alpha>\beta$. Hence using Lemma 7.17 we obtain
$m({L}^{(0)})\geqslant 2\alpha-m(\Omega)>\alpha+\beta-m(\Omega)=\chi^{(0)},$
(7.24)
and so (7.15) is proved. Next, for $n\geqslant 1$ we apply Lemma 7.18 to the
graph $G^{(n)}(\Omega)$. The lemma gives
$m(A^{(n)}_{1})+m(B^{(n)}_{1})\geqslant 2\chi^{(n)}.$ (7.25)
However we observe that for $n\geqslant 1$, the set of leaves $B^{(n)}_{1}$ is
empty if $n$ is even, and $A^{(n)}_{1}$ is empty if $n$ is odd, due to the
removal of the leaves in the previous step of the iterative leaves removal
process. Hence (7.16) follows from (7.25). ∎
###### Lemma 7.19 (monotonicity).
For every $n$ we have
$\chi^{(n+1)}\leqslant\chi^{(n)}.$ (7.26)
###### Proof.
Suppose first that $n$ is even. By the definitions of $\chi^{(n+1)}$ and
$G^{(n+1)}(\Omega)$ we have
$\displaystyle\chi^{(n+1)}$
$\displaystyle=m(A^{(n+1)})+m(B^{(n+1)})-m(E^{(n+1)})$
$\displaystyle=(m(A^{(n)})-m(A^{(n)}_{1}))+(m(B^{(n)})-m(B^{(n)}_{*}))-(m(E^{(n)})-m(E^{(n)}(A^{(n)}_{1})))$
$\displaystyle=\chi^{(n)}-m(A^{(n)}_{1})-m(B^{(n)}_{*})+m(E^{(n)}(A^{(n)}_{1}))=\chi^{(n)}-m(B^{(n)}_{*}),$
(7.27)
where in the last equality we used $m(E^{(n)}(A^{(n)}_{1}))=m(A^{(n)}_{1})$
(Lemma 7.12). Hence for even $n$ we have
$\chi^{(n+1)}=\chi^{(n)}-m(B^{(n)}_{*}).$ (7.28)
Similarly, for odd $n$ we have
$\chi^{(n+1)}=\chi^{(n)}-m(A^{(n)}_{*}),$ (7.29)
and the inequality (7.26) follows. ∎
### 7.7. Jump sets and measure jumps
Let us denote by $J^{(n)}$ the set $B^{(n)}_{*}$ if $n$ is even, or the set
$A^{(n)}_{*}$ if $n$ is odd. The set $J^{(n)}$ will be called a _jump set_.
The equalities (7.28) and (7.29), established in the proof of Lemma 7.19, say
that for every $n$ we have
$m(J^{(n)})=\chi^{(n)}-\chi^{(n+1)}.$ (7.30)
###### Definition 7.20 (measure jump).
Whenever it happens that $\chi^{(n)}>\chi^{(n+1)}$, or equivalently, whenever
we have $m(J^{(n)})>0$, we will say that a _measure jump_ has occurred.
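The bookkeeping behind (7.28)–(7.30) can be simulated on a finite bipartite graph: at each step one removes the leaves on the appropriate side, their incident edges, and the star vertices on the other side that become isolated, and the Euler characteristic drops by exactly the size of the jump set. The sketch below is a hypothetical toy run (it assumes, as in the paper, that the input graph has no isolated vertices):

```python
# Toy simulation of the iterative leaves removal process, checking the
# identity (7.30): chi^{(n+1)} = chi^{(n)} - |J^{(n)}| at every step.
def chi(A, B, E):
    return len(A) + len(B) - len(E)

def step(A, B, E, n):
    side, other = (A, B) if n % 2 == 0 else (B, A)
    deg = {v: 0 for v in A | B}
    for a, b in E:
        deg[a] += 1
        deg[b] += 1
    leaves = {v for v in side if deg[v] == 1}
    E2 = {(a, b) for (a, b) in E if a not in leaves and b not in leaves}
    touched = {v for e in E2 for v in e}
    jump = {v for v in other if v not in touched}  # star vertices left isolated
    if n % 2 == 0:
        return side - leaves, other - jump, E2, jump
    return other - jump, side - leaves, E2, jump

A = {"a1", "a2", "a3"}
B = {"b1", "b2"}
E = {("a1", "b1"), ("a2", "b1"), ("a2", "b2"), ("a3", "b2")}  # a path
for n in range(6):
    before = chi(A, B, E)
    A, B, E, J = step(A, B, E, n)
    assert chi(A, B, E) == before - len(J)   # identity (7.30)
    assert chi(A, B, E) <= before            # monotonicity, Lemma 7.19
```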
###### Lemma 7.21 (finite subtree lemma).
Assume that for some $n$ the set $J^{(n)}$ is nonempty (in particular, this is
the case if $\chi^{(n)}>\chi^{(n+1)}$). Then for each vertex $v\in J^{(n)}$,
the connected component of $v$ in the graph $G(\Omega)$ is a finite tree.
Moreover, if $v,w$ are two distinct vertices in $J^{(n)}$ then their
respective connected components in $G(\Omega)$ are disjoint.
###### Proof.
By definition, $J^{(n)}$ is either $B^{(n)}_{*}$ or $A^{(n)}_{*}$ depending on
whether $n$ is even or odd. We consider the case where $n$ is even (the case
where $n$ is odd is similar). Then $J^{(n)}=B^{(n)}_{*}$ is the set of star
vertices in $B^{(n)}$, that is, the vertices in $B^{(n)}$ all of whose
neighbors in the graph $G^{(n)}$ are leaves. Recalling that all the degrees of
vertices in $G(\Omega)$ are finite, it follows that the connected component of
a vertex $v\in J^{(n)}$ in the graph $G^{(n)}$ is a finite tree (of height
one, if we view $v$ as the root of the tree). Since $G^{(n)}$ was obtained
from $G^{(n-1)}$ by leaves removal, the connected component of $v$ in the
graph $G^{(n-1)}$ is again a finite tree (of height at most two, when counted
from the root $v$). Continuing in the same way, we conclude that the connected
component of $v$ in the graph $G^{(n-j)}$ is a finite tree (of height at most
$j+1$ from the root $v$), for each $j=1,2,\dots,n$. In particular, for $j=n$
we obtain the first assertion of the lemma.
Next we turn to prove the second assertion of the lemma. Consider two distinct
vertices $v,w\in J^{(n)}$. Since all the neighbors of both $v$ and $w$ within
$G^{(n)}$ are leaves, $v$ and $w$ cannot have any common neighbor in
$G^{(n)}$. Hence the connected components of $v$ and $w$ in the graph
$G^{(n)}$ are disjoint. Since $G^{(n)}$ was obtained from $G^{(n-1)}$ by
leaves removal, the connected components of $v$ and $w$ in the graph
$G^{(n-1)}$ are also disjoint. Continuing in the same way, we conclude that
the connected components of $v$ and $w$ in the graph $G^{(n-j)}$ are disjoint
for each $j=1,2,\dots,n$. In particular this is the case for $j=n$ and so the
second assertion of the lemma follows. ∎
### 7.8. Proof of Theorem 2.3
We now turn to prove Theorem 2.3. Recall that the theorem asserts that if the
tiling level vector $(p,q)$ is not proportional to any vector of the form
$(n,m)$ where $n,m$ are nonnegative integers, then
$m(\Omega)\geqslant\alpha+\beta$. To prove this result, we will assume that
the tiling levels $p$ and $q$ are both nonzero and that
$m(\Omega)<\alpha+\beta,$ (7.31)
and we will show that this implies that $(p,q)$ must be proportional to some
vector of the form $(n,m)$ where $n,m$ are two positive integers.
Recall from (7.12) that we have $m(A)=\alpha$, $m(B)=\beta$, hence (7.31) is
equivalent to the assumption that
$m(E)<m(A)+m(B),$ (7.32)
that is, the total measure of the edges in the graph $G(\Omega)$ is strictly
smaller than the total measure of the vertices.
The following lemma shows that to prove Theorem 2.3 it would be enough to
establish the existence of a finite connected component in the graph
$G(\Omega)$.
###### Lemma 7.22 (total weight equality).
Assume that the graph $G(\Omega)$ has a finite connected component $H$, and
suppose that $H$ has $m$ vertices in $A$ and $n$ vertices in $B$. Then
$mp=nq.$ (7.33)
###### Proof.
Recall that we have assumed (using Lemma 7.14) that the weight of each vertex
in $G(\Omega)$ is either $p$ or $q$, depending on whether this vertex lies in
$A$ or in $B$. Consider the total weight of the connected component $H$, that
is, the sum of the weights of all the edges in $H$. On one hand, this sum is
the same as the sum of the weights of the vertices of $H$ that belong to $A$,
and therefore it is equal to $mp$. On the other hand, this sum is also the
same as the sum of the weights of the vertices of $H$ that belong to $B$, so
it must also be equal to $nq$. Hence the equality in (7.33) must hold. ∎
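The double counting in the proof of Lemma 7.22 can be seen numerically on a single finite tree: if each $A$-vertex carries total incident edge weight $p$ and each $B$-vertex carries $q$, then summing all edge weights by the $A$-side gives $mp$ and by the $B$-side gives $nq$. The weights below are a hypothetical choice with $p=2$, $q=3$, $m=3$, $n=2$:

```python
# Numeric check of Lemma 7.22 on a finite tree: every A-vertex has total
# incident weight p, every B-vertex has total incident weight q, so the
# total edge weight equals m*p and also n*q, forcing m*p == n*q.
p, q = 2, 3
edges = {("a1", "b1"): 2, ("a2", "b1"): 1, ("a2", "b2"): 1, ("a3", "b2"): 2}
for a in {"a1", "a2", "a3"}:
    assert sum(w for (x, _), w in edges.items() if x == a) == p
for b in {"b1", "b2"}:
    assert sum(w for (_, y), w in edges.items() if y == b) == q
total = sum(edges.values())
assert total == 3 * p == 2 * q   # m*p = n*q with m = 3, n = 2
```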
We can now complete the proof of Theorem 2.3.
###### Proof of Theorem 2.3.
We may assume with no loss of generality that $\alpha>\beta$. Consider the
graph $G(\Omega)$ and its leaves-removal-graph-sequence $G^{(n)}(\Omega)$.
Suppose that (7.31) holds, then equivalently we have (7.32) and thus
$\chi^{(0)}>0$. We claim that after at most
$r=\lceil\frac{\alpha+\beta}{\chi^{(0)}}\rceil$ steps of the iterative leaves
removal process, a measure jump must occur. Indeed, if not then
$\chi^{(n)}=\chi^{(0)}$ for each $0\leqslant n\leqslant r$. But then Lemma
7.16 implies that the measure of the set ${L}^{(n)}$ of the leaves removed at
the $n$’th step of the iterative process, is at least $\chi^{(0)}$ for each
$0\leqslant n\leqslant r$. Thus the total measure of the removed leaves must
be at least $(r+1)\chi^{(0)}$. But $(r+1)\chi^{(0)}$ is greater than
$m(A)+m(B)$ which is the total measure of the set of vertices in the entire
graph $G(\Omega)$, so we arrive at a contradiction. Hence a measure jump must
occur.
We thus conclude that there exists at least one jump set $J^{(n)}$ of positive
measure, that is, there is $n$ such that
$m(J^{(n)})=\chi^{(n)}-\chi^{(n+1)}>0$. In particular the jump set $J^{(n)}$
is nonempty. Then by Lemma 7.21, any vertex $v\in J^{(n)}$ belongs to a finite
connected component of the graph $G(\Omega)$. Thus $G(\Omega)$ has a finite
connected component. By Lemma 7.22, there exist two positive integers $n,m$
such that $mp=nq$. We conclude that the vector $(p,q)$ is proportional to
$(n,m)$. Theorem 2.3 is thus proved. ∎
### 7.9. The total jump set
We now move on to our next goal, which is to prove Theorem 2.6. This will require a more detailed analysis of the jump sets which occur in our iterative leaves removal process. We start with the following lemma.
###### Lemma 7.23 (Euler characteristics limit).
We have
$\lim_{n\rightarrow\infty}\chi^{(n)}\leqslant 0.$ (7.34)
Notice that the existence of the limit in (7.34) is guaranteed due to the
monotonicity of the sequence $\chi^{(n)}$ (Lemma 7.19).
###### Proof of Lemma 7.23.
Let $G^{(n)}(\Omega)=(A^{(n)},B^{(n)},E^{(n)})$ be the leaves-removal-graph-
sequence of $G(\Omega)=(A,B,E)$, and recall that
$A^{(n+1)}\subset A^{(n)},\quad B^{(n+1)}\subset B^{(n)},\quad
E^{(n+1)}\subset E^{(n)},$ (7.35)
since the graph $G^{(n+1)}$ is obtained from $G^{(n)}$ by the removal of
vertices and edges. We define the _graph limit_
$G^{(\omega)}(\Omega)=(A^{(\omega)},B^{(\omega)},E^{(\omega)})$ by
$A^{(\omega)}=\bigcap_{n}A^{(n)},\quad B^{(\omega)}=\bigcap_{n}B^{(n)},\quad
E^{(\omega)}=\bigcap_{n}E^{(n)}.$ (7.36)
Equivalently, $G^{(\omega)}(\Omega)$ is the graph induced by the set
$\Omega^{(\omega)}=\bigcap_{n}\Omega^{(n)}$ (the equivalence can be verified
in a straightforward way using the finite degrees assumption).
It follows from (7.35) and (7.36) that
$m(A^{(n)})\to m(A^{(\omega)}),\quad m(B^{(n)})\to m(B^{(\omega)}),\quad
m(E^{(n)})\to m(E^{(\omega)}),$ (7.37)
and consequently,
$\displaystyle\lim_{n\rightarrow\infty}\chi^{(n)}$
$\displaystyle=\lim_{n\rightarrow\infty}(m(A^{(n)})+m(B^{(n)})-m(E^{(n)}))$
$\displaystyle=m(A^{(\omega)})+m(B^{(\omega)})-m(E^{(\omega)})=\chi^{(\omega)},$
(7.38)
where $\chi^{(\omega)}$ is the Euler characteristic of the graph
$G^{(\omega)}(\Omega)$.
Now, suppose to the contrary that $\chi^{(\omega)}>0$. Then we may apply Lemma
7.18 to the graph limit $G^{(\omega)}(\Omega)$ and obtain that there must be a
set of leaves with positive measure in $G^{(\omega)}(\Omega)$. Let $v$ be any
leaf of $G^{(\omega)}(\Omega)$, then $v$ has exactly one neighbor $w_{0}$ in
$G^{(\omega)}(\Omega)$. Notice that the vertex $v$ must have at least one more
neighbor in the original graph $G(\Omega)$, for otherwise $v$ is a leaf in
$G(\Omega)$ and should have been removed in either the first or second step of
the leaves removal process. Let $w_{0},w_{1},\dots,w_{k}$ be all the neighbors
of $v$ in $G(\Omega)$ (there can be only finitely many neighbors due to the
finite degrees assumption). Since the vertices $w_{1},\dots,w_{k}$ are no
longer in the graph limit $G^{(\omega)}(\Omega)$, for each $1\leqslant
j\leqslant k$ there is $n_{j}$ such that $w_{j}$ is not in
$G^{(n_{j})}(\Omega)$. Hence if we let $N:=\max\{n_{1},\dots,n_{k}\}$ then
$G^{(N)}(\Omega)$ does not contain any one of the vertices
$w_{1},\dots,w_{k}$. Thus $v$ is a leaf already in the graph
$G^{(N)}(\Omega)$. But then $v$ should have been removed at the $N$’th step of
the leaves removal process, so $v$ cannot belong to the graph limit
$G^{(\omega)}(\Omega)$. We thus arrive at a contradiction. This shows that
$\chi^{(\omega)}$ cannot be positive and the lemma is proved. ∎
###### Definition 7.24 (the total jump set).
The set
$J=\bigcup_{n=0}^{\infty}J^{(n)}$ (7.39)
will be called the _total jump set_ of the graph $G(\Omega)$.
Recall that $J^{(n)}$ is a subset of $B$ if $n$ is even, and $J^{(n)}$ is a
subset of $A$ if $n$ is odd. Hence $J$ is a subset of the entire vertex set
$V=A\cup B$ of the graph $G(\Omega)=(A,B,E)$. Moreover, if we consider $V$ as
a measure space, formed by the disjoint union of the two measure spaces $A$
and $B$, then $J$ is a measurable subset of $V$ (Lemma 7.10).
We also notice that the sets $J^{(n)}$ are pairwise disjoint and form a partition of $J$, hence the measure $m(J)$ of the total jump set is equal to the sum
of all the measure jumps. By the proof of Theorem 2.3 we know that at least
one measure jump must occur, which implies that the set $J$ has positive
measure. Now we prove a stronger result:
###### Lemma 7.25 (lower bound for the total jump measure).
We have
$m(J)\geqslant m(A)+m(B)-m(\Omega).$ (7.40)
###### Proof.
Using (7.38) we have
$\displaystyle\chi^{(0)}-\chi^{(\omega)}=\lim_{n\to\infty}(\chi^{(0)}-\chi^{(n)})=\lim_{n\to\infty}\sum_{k=0}^{n-1}(\chi^{(k)}-\chi^{(k+1)})$ (7.41)
$\displaystyle=\lim_{n\to\infty}\sum_{k=0}^{n-1}m(J^{(k)})=m(J).$ (7.42)
But due to Lemma 7.23 we know that $\chi^{(\omega)}$ is non-positive, thus
$m(J)=\chi^{(0)}-\chi^{(\omega)}\geqslant\chi^{(0)}$ (7.43)
which establishes (7.40). ∎
###### Lemma 7.26 (total jump set as a set of representatives).
Every connected component of the graph $G(\Omega)$ which is a finite tree,
intersects the total jump set $J$ at exactly one vertex. Conversely, each
vertex $w\in J$ lies in a connected component of the graph $G(\Omega)$ which
is a finite tree.
Thus, we may consider the total jump set $J$ as a set of representatives,
containing a unique representative vertex for each connected component of the
graph $G(\Omega)$ which is a finite tree.
###### Proof of Lemma 7.26.
Recall that $G^{(n+1)}(\Omega)$ is obtained from $G^{(n)}(\Omega)$ by (i) the
removal of the leaves in $A^{(n)}$ if $n$ is even, or the leaves in $B^{(n)}$
if $n$ is odd; (ii) the removal of the edges incident to the leaves removed;
and (iii) the removal of the set $J^{(n)}$ of vertices that become isolated
(which is the set $B^{(n)}_{*}$ if $n$ is even, or the set $A^{(n)}_{*}$ if
$n$ is odd).
Now let $H$ be a connected component of the graph $G(\Omega)$, and assume that
$H$ is a finite tree. Then the iterative leaves removal process necessarily
exhausts the tree $H$ after a finite number of steps (this can be easily
proved by induction on the size of the tree). Moreover, the tree $H$ gets
exhausted at the unique step $n$ for which $J^{(n)}\cap H$ is nonempty, and
$J^{(n)}\cap H$ must then consist of exactly one vertex.
(It is worth mentioning that at the last step $n$ when the tree gets
exhausted, it may happen that there is only one edge of the tree left to be
removed. In this case, one of the vertices of this edge will be considered as
a leaf, while the other vertex will be considered as an element of the set
$J^{(n)}$.)
Thus, each connected component of the graph $G(\Omega)$ which is a finite
tree, contributes exactly one vertex to the total jump set $J$.
Conversely, consider a vertex $w\in J$. Then $w$ belongs to the set $J^{(n)}$
for some $n$, so by Lemma 7.21 the connected component of $w$ in the graph
$G(\Omega)$ is a finite tree. ∎
###### Lemma 7.27 (upper bound for the total jump measure).
Assume that the tiling levels $p,q$ are two positive coprime integers. Then
$m(J)\leqslant\min\Big\{\frac{\alpha}{q},\frac{\beta}{p}\Big\}.$ (7.44)
###### Remark 7.28.
Let us explain our intuition behind Lemma 7.27. Recall that each vertex $w\in
J$ is a representative of a connected component of $G(\Omega)$ which is a
finite tree (Lemma 7.26). Let $H$ be one of these connected components, and
suppose that $H$ has $m$ vertices in $A$ and $n$ vertices in $B$. Using Lemma
7.22 it follows that $mp=nq$. But since $p,q$ are now assumed to be positive
coprime integers, this implies that $q$ must divide $m$, and $p$ must divide
$n$. In particular, we have $m\geqslant q$ and $n\geqslant p$. Hence the
connected component of each vertex $w\in J$ contributes at least $q$ vertices
to $A$, and at least $p$ vertices to $B$. So intuitively we may expect to have
$m(A)\geqslant q\cdot m(J),$ (7.45)
and
$m(B)\geqslant p\cdot m(J).$ (7.46)
But notice that according to (7.12) we have $m(A)=\alpha$, $m(B)=\beta$, so
that the two inequalities (7.45) and (7.46) together imply (7.44). This
explains why intuitively one may expect that Lemma 7.27 should be true.
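The divisibility step used in this remark can be confirmed by a brute-force search over a small hypothetical range: whenever $mp=nq$ with $p,q$ positive coprime integers, $q$ divides $m$ and $p$ divides $n$, so in particular $m\geqslant q$ and $n\geqslant p$.

```python
# Brute-force check of the divisibility argument in Remark 7.28:
# if m*p == n*q with gcd(p, q) == 1, then q | m and p | n.
from math import gcd

for p in range(1, 8):
    for q in range(1, 8):
        if gcd(p, q) != 1:
            continue
        for m in range(1, 40):
            for n in range(1, 40):
                if m * p == n * q:
                    assert m % q == 0 and n % p == 0
                    assert m >= q and n >= p
```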
We now turn to the formal proof of Lemma 7.27. Let $V=A\cup B$ be the vertex
set of the graph $G(\Omega)=(A,B,E)$, and let $V^{\prime}\subset V$ be the set
of those vertices whose connected component in the graph $G(\Omega)$ is a
finite tree. Let
$A^{\prime}=V^{\prime}\cap A,\quad B^{\prime}=V^{\prime}\cap B.$ (7.47)
###### Lemma 7.29.
The sets $A^{\prime},B^{\prime}$ (and hence also $V^{\prime}=A^{\prime}\cup
B^{\prime}$) are measurable.
###### Proof.
Consider the sets $J_{A}:=J\cap A$ and $J_{B}:=J\cap B$. The sets $J_{A}$,
$J_{B}$ are measurable since $A$, $B$ and $J$ are measurable sets.
Recall that every connected component in the graph $G(\Omega)$ which is a
finite tree, has a representative vertex $v\in J$ (Lemma 7.26). Hence
$A^{\prime}$ is the set of all vertices $a\in A$ such that there is a finite
path connecting $a$ to some element $v\in J=J_{A}\cup J_{B}$.
We next observe that a vertex $a\in A$ is connected to some vertex $v\in
J_{A}$ if and only if $a$ belongs to the set
$(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(J_{A})$
for some $n$. This is because when moving from a vertex in $A$ to a neighbor
vertex in $B$, we first go from the vertex to some of its incident edges
(which corresponds to picking an element of $\Omega$ belonging to the preimage
of the vertex under the map $\pi_{\alpha}$), and then go from this edge to its
other endpoint vertex (which corresponds to taking the image of the edge under
$\pi_{\beta}$). Similarly, when moving from a vertex in $B$ to a neighbor
vertex in $A$, we first pick an edge in the preimage under the map
$\pi_{\beta}$ and then take the image of the edge under $\pi_{\alpha}$.
For a similar reason, a vertex $a\in A$ is connected to some vertex $v\in
J_{B}$ if and only if $a$ is in the set
$(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(\pi_{\alpha}\circ\pi_{\beta}^{-1})(J_{B})$
for some $n$.
(We note that here we consider $\pi_{\alpha}$ and $\pi_{\beta}$ as maps
defined on the set $\Omega$, thus inverse images under these maps are
understood to be subsets of $\Omega$.)
We have thus shown that
$A^{\prime}=\bigcup_{n=0}^{\infty}\Big{[}(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(J_{A})\cup(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(\pi_{\alpha}\circ\pi_{\beta}^{-1})(J_{B})\Big{]},$
(7.48)
so the measurability of $A^{\prime}$ follows from the measurability of the
sets $J_{A},J_{B}$ and the fact that the measurability of a set is preserved
under both images and preimages with respect to the maps $\pi_{\alpha}$ and
$\pi_{\beta}$.
The proof that the set $B^{\prime}$ is also measurable is similar. ∎
###### Proof of Lemma 7.27.
Assume that the tiling levels $p$ and $q$ are two positive coprime integers.
We must prove that (7.44) holds, or equivalently, that (7.45) and (7.46) are
both satisfied. We will prove (7.45) only. The proof of (7.46) is similar.
Recall that the connected component of any vertex $v\in J$ in the graph
$G(\Omega)$ is a finite tree (Lemma 7.26). For each $v\in J$ we let $h_{A}(v)$
be the number of vertices of the connected component of $v$ which lie in the
set $A$. Then $h_{A}$ is a nonnegative, integer-valued function on $J$. Since
each finite connected component of $G(\Omega)$ must contain at least $q$
vertices in $A$ (Remark 7.28), we have $h_{A}(v)\geqslant q$ for every $v\in
J$.
We will show that the function $h_{A}$ is measurable and satisfies
$\int_{J}h_{A}\leqslant m(A^{\prime}).$ (7.49)
Notice that once (7.49) is established, we can conclude (using
$A^{\prime}\subset A$) that
$m(A)\geqslant m(A^{\prime})\geqslant\int_{J}h_{A}\geqslant q\cdot m(J),$
(7.50)
and (7.45) follows. So it remains to show that $h_{A}$ is measurable and
satisfies (7.49).
Recall that we denote by ${L}^{(n)}$ the set of leaves removed at the $n$’th
step of the iterative leaves removal process, that is, if $n$ is even then
${L}^{(n)}=A^{(n)}_{1}$ (the set of leaves in $A^{(n)}$), and if $n$ is odd
then ${L}^{(n)}=B^{(n)}_{1}$ (the set of leaves in $B^{(n)}$). We construct by
induction two sequences $\{\phi_{A}^{(n)}\}$ and $\{\psi_{A}^{(n)}\}$ of
functions on $V^{\prime}=A^{\prime}\cup B^{\prime}$, as follows. We define
$\psi_{A}^{(0)}=\mathds{1}_{A^{\prime}},\quad\psi_{A}^{(n+1)}=\psi_{A}^{(n)}+\phi_{A}^{(n)},$
(7.51)
and
$\phi_{A}^{(n)}(v)=-\psi_{A}^{(n)}(v),\quad v\in V^{\prime}\cap L^{(n)},$ (7.52)
$\phi_{A}^{(n)}(v)=\sum_{w}\psi_{A}^{(n)}(w),\quad v\in V^{\prime}\setminus L^{(n)},$ (7.53)
where $w$ runs over all the vertices in $V^{\prime}\cap L^{(n)}$ that are
neighbors of $v$ in the graph $G^{(n)}(\Omega)$ (if there are no such
neighbors then the sum is understood to be zero).
The motivation behind this construction is that we view the function
$\psi_{A}^{(n)}$ as assigning a certain mass to each vertex in $V^{\prime}$.
We start with the function $\psi_{A}^{(0)}$ which assigns a unit mass to each
vertex in $A^{\prime}$, and zero mass to each vertex in $B^{\prime}$. At the
$n$’th step of the leaves removal process, by adding $\phi_{A}^{(n)}$ to
$\psi_{A}^{(n)}$ we subtract the mass from each removed leaf in $V^{\prime}$
and add the mass back to the neighbor of the leaf, so that at each step the
mass is transferred from the removed leaves to their neighbors.
In particular, each $\psi_{A}^{(n)}$ is a nonnegative, integer-valued function
on $V^{\prime}$.
Notice that whenever the leaves removal process exhausts a connected component
$H$ of the graph $G(\Omega)$ (so that $H$ is a finite tree by Lemma 7.21),
then the total mass accumulated from all the $A^{\prime}$-vertices of $H$ is
transferred into the unique representative vertex $v$ of $H$ in the total jump
set $J$ (Lemma 7.26), and the value of $\psi_{A}^{(n)}(v)$ will remain fixed
from that point and on. This implies that the sequence $\psi_{A}^{(n)}$
converges pointwise to the function $h_{A}$ on $J$, and to zero on
$V^{\prime}\setminus J$.
We now give an equivalent way of constructing the function $\phi_{A}^{(n)}$.
Let $g_{n}$ be a function on the set $E$ of edges of the graph
$G(\Omega)=(A,B,E)$, defined as follows. We let $g_{n}(e)=\psi_{A}^{(n)}(v)$
if $e$ is a leaf edge in the graph $G^{(n)}(\Omega)$, incident to a vertex
$v\in V^{\prime}\cap L^{(n)}$. If this is not the case, then we let
$g_{n}(e)=0$. Then $g_{n}$ is supported on the subset $E^{(n)}(V^{\prime}\cap
L^{(n)})$ of the edge set $E$. By identifying the edges of $G(\Omega)$ with
elements of $\Omega$ (Remark 7.6) we may view $g_{n}$ as a function on
$\Omega$. If $n$ is even, then $g_{n}(x)=\psi_{A}^{(n)}(\pi_{\alpha}(x))$ if
$x$ is in the set $\pi_{\alpha}^{-1}(V^{\prime}\cap L^{(n)})\cap\Omega^{(n)}$,
and $g_{n}(x)=0$ otherwise. Similarly, if $n$ is odd then
$g_{n}(x)=\psi_{A}^{(n)}(\pi_{\beta}(x))$ if $x$ is in the set
$\pi_{\beta}^{-1}(V^{\prime}\cap L^{(n)})\cap\Omega^{(n)}$, and $g_{n}(x)=0$
otherwise.
Now consider again the definition (7.52), (7.53) of the function
$\phi_{A}^{(n)}$. Notice that up to a sign, both of (7.52) and (7.53) involve
summation of the values $g_{n}(e)$ over all the edges $e$ incident to the
vertex $v$, with the only difference that in (7.52) there is just one such
edge $e$ (since $v$ is a leaf in the graph $G^{(n)}(\Omega)$) while in (7.53)
the vertex $v$ might have several neighbors. Observe also that the
identification of edges in the graph with elements of $\Omega$, enables us to
express summation over neighbors in terms of the projections $\pi_{\alpha}$
and $\pi_{\beta}$. Thus if we extend the function $g_{n}$ to the whole
$\mathbb{R}$ by setting $g_{n}(x)=0$ for $x\notin\Omega$, then the function
$\phi_{A}^{(n)}$ can be equivalently defined by
$\phi_{A}^{(n)}=(-1)^{n+1}\pi_{\alpha}(g_{n})$ on $A^{\prime}$, and
$\phi_{A}^{(n)}=(-1)^{n}\pi_{\beta}(g_{n})$ on $B^{\prime}$.
It follows from this equivalent definition, using induction on $n$, that
$\phi_{A}^{(n)}$ and $\psi_{A}^{(n)}$ are both measurable functions on
$V^{\prime}$. Moreover, these functions are both in $L^{1}(V^{\prime})$, since
we have $\int_{\Omega}g_{n}=\int_{V^{\prime}\cap L^{(n)}}\psi_{A}^{(n)}$ and
the projections $\pi_{\alpha}$ and $\pi_{\beta}$ map $L^{1}(\Omega)$ into
$L^{1}(A)$ and $L^{1}(B)$ respectively.
As a consequence, the function $h_{A}$ (the pointwise limit of
$\psi_{A}^{(n)}$ on $J$) is a measurable function on $J$.
Now, notice that we have $\int_{V^{\prime}}\psi_{A}^{(0)}=m(A^{\prime})$. We
claim that in fact $\int_{V^{\prime}}\psi_{A}^{(n)}=m(A^{\prime})$ for every
$n$, that is, when the mass is transferred from leaves to neighbors along the
leaves removal process, the total mass remains constant. This is equivalent to
the assertion that $\int_{V^{\prime}}\phi_{A}^{(n)}=0$ for every $n$. Indeed,
we have $\phi_{A}^{(n)}=(-1)^{n+1}\pi_{\alpha}(g_{n})$ on $A^{\prime}$, and
$\phi_{A}^{(n)}=(-1)^{n}\pi_{\beta}(g_{n})$ on $B^{\prime}$. Hence
$(-1)^{n}\int_{V^{\prime}}\phi_{A}^{(n)}=-\int_{A^{\prime}}\pi_{\alpha}(g_{n})+\int_{B^{\prime}}\pi_{\beta}(g_{n})=-\int_{\Omega}g_{n}+\int_{\Omega}g_{n}=0.$
(7.54)
We have thus proved that $\int_{V^{\prime}}\psi_{A}^{(n)}=m(A^{\prime})$ for
every $n$. Moreover, the functions $\psi_{A}^{(n)}$ are nonnegative and
converge pointwise to $h_{A}$ on $J$, and to zero on $V^{\prime}\setminus J$.
Hence we may apply Fatou’s lemma, which yields
$m(A^{\prime})=\lim_{n\rightarrow\infty}\int_{V^{\prime}}\psi_{A}^{(n)}\geqslant\int_{V^{\prime}}\lim_{n\rightarrow\infty}\psi_{A}^{(n)}=\int_{J}h_{A},$
(7.55)
and we arrive at (7.49). Together with (7.50), this implies (7.45). The proof
of (7.46) can be done in a similar way. Lemma 7.27 is thus proved. ∎
### 7.10. Proof of Theorem 2.6
We can now turn to prove Theorem 2.6. Recall that the theorem asserts that if
the tiling levels $p,q$ are two positive, coprime integers then we must have
$m(\Omega)>\alpha+\beta-\min\Big{\\{}\frac{\alpha}{q},\frac{\beta}{p}\Big{\\}}.$
(7.56)
Let us combine the lower bound (Lemma 7.25) and the upper bound (Lemma 7.27)
for the total jump measure. Since by (7.12) we have $m(A)=\alpha$,
$m(B)=\beta$, this yields
$\alpha+\beta-m(\Omega)\leqslant
m(J)\leqslant\min\Big{\\{}\frac{\alpha}{q},\frac{\beta}{p}\Big{\\}},$ (7.57)
and as a consequence,
$m(\Omega)\geqslant\alpha+\beta-\min\left\\{\frac{\alpha}{q},\frac{\beta}{p}\right\\}.$
(7.58)
We thus almost arrive at (7.56). It only remains to show that equality cannot
occur.
We will need the following lemma.
###### Lemma 7.30.
Suppose that we are given a full measure subset $\mathcal{J}$ of the total
jump set $J$. Let $\mathcal{V}$ be the set of all vertices $v\in V$ whose
connected component in the graph $G(\Omega)$ intersects $\mathcal{J}$. Then
$\mathcal{V}$ is a full measure subset of $V^{\prime}$.
###### Proof.
By definition, $\mathcal{V}$ is the set of those vertices $v\in V$ such that
there is a finite path connecting $v$ to some vertex in $\mathcal{J}$. Let
$\mathcal{J}_{A}=\mathcal{J}\cap A,\quad\mathcal{J}_{B}=\mathcal{J}\cap
B,\quad\mathcal{V}_{A}=\mathcal{V}\cap A,\quad\mathcal{V}_{B}=\mathcal{V}\cap
B.$ (7.59)
By the same argument as in the proof of Lemma 7.29, we have
$\mathcal{V}_{A}=\bigcup_{n=0}^{\infty}\Big{[}(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(\mathcal{J}_{A})\cup(\pi_{\alpha}\circ\pi_{\beta}^{-1}\circ\pi_{\beta}\circ\pi_{\alpha}^{-1})^{n}(\pi_{\alpha}\circ\pi_{\beta}^{-1})(\mathcal{J}_{B})\Big{]}.$
(7.60)
But the right hand sides of (7.48) and (7.60) coincide up to a set of measure
zero, since both the image and the preimage of a null set under the maps
$\pi_{\alpha}$ and $\pi_{\beta}$ is again a null set. Hence $\mathcal{V}_{A}$
is a full measure subset of $A^{\prime}$. In a similar way, $\mathcal{V}_{B}$
is a full measure subset of $B^{\prime}$. Thus
$\mathcal{V}=\mathcal{V}_{A}\cup\mathcal{V}_{B}$ is a full measure subset of
$V^{\prime}=A^{\prime}\cup B^{\prime}$. ∎
Now suppose to the contrary that there is equality in (7.58). Then the two
inequalities in (7.57) also become equalities. In particular, we obtain
$m(J)=\min\left\\{\frac{\alpha}{q},\frac{\beta}{p}\right\\}.$ (7.61)
This means that we must have $m(A)=q\cdot m(J)$ or $m(B)=p\cdot m(J)$. We
shall suppose that $m(A)=q\cdot m(J)$ (the other case is similar). In turn,
this implies that all the inequalities in (7.50) become equalities, that is,
$m(A)=m(A^{\prime})=\int_{J}h_{A}=q\cdot m(J).$ (7.62)
Since we know that $h_{A}(v)\geqslant q$ for every $v\in J$, it follows from
$\int_{J}h_{A}=q\cdot m(J)$ that $h_{A}(v)=q$ for every $v$ in some full
measure subset $\mathcal{J}$ of $J$. In other words, the connected component
of every $v\in\mathcal{J}$ has exactly $q$ vertices in $A$. In turn, using
Lemma 7.22 it follows that any such connected component must have exactly
$p$ vertices in $B$. For each $v\in J$, let $h_{B}(v)$ be the number of
vertices of the connected component of $v$ which lie in $B$. Then we have
$h_{B}(v)=p$ for every $v\in\mathcal{J}$, and hence a.e. on $J$.
Consider functions $\phi_{B}^{(n)}$, $\psi_{B}^{(n)}$ on the set
$V^{\prime}=A^{\prime}\cup B^{\prime}$, defined analogously to the functions
$\phi_{A}^{(n)}$, $\psi_{A}^{(n)}$ from the proof of Lemma 7.27. Then the
sequence $\psi_{B}^{(n)}$ converges pointwise to $h_{B}$ on $J$, and to zero
on $V^{\prime}\setminus J$. But since the connected component of every
$v\in\mathcal{J}$ has exactly $p+q$ vertices ($q$ of them in $A$, and $p$ of
them in $B$), after at most $p+q$ steps of the iterative leaves removal
process, all the mass $\psi_{B}^{(n)}$ on such a connected component is
concentrated on the representative vertex of the connected component in
$\mathcal{J}$. Using Lemma 7.30 we know that if $\mathcal{V}$ is the set of
all vertices whose connected component intersects $\mathcal{J}$, then
$\mathcal{V}$ is a full measure subset of $V^{\prime}$. We conclude that for
all $n\geqslant p+q$ we have $\psi_{B}^{(n)}=p$ on
$\mathcal{J}=\mathcal{V}\cap J$ (and hence, a.e. on $J$), and
$\psi_{B}^{(n)}=0$ on $\mathcal{V}\setminus J$ (and hence, a.e. on
$V^{\prime}\setminus J$). In turn, this implies that
$\int_{V^{\prime}}\psi_{B}^{(n)}=p\cdot m(J)$ for all $n\geqslant p+q$. But on
the other hand, observe (as in the proof of Lemma 7.27) that we have
$\int_{V^{\prime}}\psi_{B}^{(n)}=m(B^{\prime})$ for every $n$. We conclude
that
$m(B^{\prime})=p\cdot m(J).$ (7.63)
Next we claim that $m(B^{\prime})=m(B)$. Indeed, if not, then $B\setminus
B^{\prime}$ is a positive measure subset of $B$. It follows that the set
$\pi_{\alpha}(\pi_{\beta}^{-1}(B\setminus B^{\prime})\cap\Omega)$, consisting
of those vertices in $A$ that have a neighbor belonging to $B\setminus
B^{\prime}$ in the graph $G(\Omega)$, is a positive measure subset of
$A\setminus A^{\prime}$. But this implies that $m(A^{\prime})<m(A)$,
contradicting (7.62). Hence we must have $m(B^{\prime})=m(B)$.
We conclude that
$m(B)=m(B^{\prime})=p\cdot m(J).$ (7.64)
Finally we combine (7.62) and (7.64) to obtain
$\frac{\alpha}{q}=m(J)=\frac{\beta}{p},$ (7.65)
and it follows that $p\alpha=q\beta$. But this contradicts the assumption that
$\alpha,\beta$ are rationally independent. This establishes that equality in
(7.58) cannot occur, and completes the proof of Theorem 2.6. ∎
## 8\. Simultaneous tiling by integer translates
In this section we turn to deal with the case where the numbers $\alpha,\beta$
are linearly dependent over the rationals. By rescaling, it would be enough to
consider the case $(\alpha,\beta)=(n,m)$ where $n,m$ are positive integers.
We will prove Theorems 2.8, 2.9 and 2.10 by showing that if a measurable
function $f$ on $\mathbb{R}$ satisfies the simultaneous tiling condition (2.3)
then the tiling level vector $(p,q)$ must be proportional to $(m,n)$, and if
the level vector $(p,q)$ is nonzero then the least possible measure of the
support of $f$ is $n+m-\gcd(n,m)$.
The approach is based on a reduction of the simultaneous tiling problem from
the real line $\mathbb{R}$ to the set of integers $\mathbb{Z}$. In particular
we will prove that $n+m-\gcd(n,m)$ is also the least possible size of the
support of a function $g$ on $\mathbb{Z}$ that tiles the integers
simultaneously (with a nonzero level vector) by two arithmetic progressions
$n\mathbb{Z}$ and $m\mathbb{Z}$.
### 8.1.
We begin by introducing the notion of tiling by translates of a function on
the set of integers $\mathbb{Z}$. Let $g$ be a function on $\mathbb{Z}$, and
$\Lambda$ be a subset of $\mathbb{Z}$. We say that $g+\Lambda$ is a tiling of
$\mathbb{Z}$ at level $w$ if we have
$\sum_{\lambda\in\Lambda}g(t-\lambda)=w,\quad t\in\mathbb{Z},$ (8.1)
and the series (8.1) converges absolutely for every $t\in\mathbb{Z}$.
We are interested in simultaneous tiling of the integers by two arithmetic
progressions $n\mathbb{Z}$ and $m\mathbb{Z}$. We thus consider a function $g$
on $\mathbb{Z}$ satisfying
$\sum_{k\in\mathbb{Z}}g(t-kn)=p,\quad\sum_{k\in\mathbb{Z}}g(t-km)=q,\quad
t\in\mathbb{Z},$ (8.2)
where $n,m$ are positive integers, $p,q$ are complex numbers, and both series
in (8.2) converge absolutely for every $t\in\mathbb{Z}$.
#### 8.1.1.
We begin with the following basic result.
###### Proposition 8.1.
Let $g$ be a function on $\mathbb{Z}$ satisfying (8.2), where $n,m$ are
positive integers. Then $g\in\ell^{1}(\mathbb{Z})$, and the vector $(p,q)$
must be proportional to $(m,n)$.
###### Proof.
First we observe that
$\sum_{t\in\mathbb{Z}}|g(t)|=\sum_{t=0}^{n-1}\sum_{k\in\mathbb{Z}}|g(t-kn)|.$
(8.3)
By assumption, the inner sum on the right hand side of (8.3) converges for
every $t$. Hence the sum on the left hand side converges as well, which shows
that the function $g$ must be in $\ell^{1}(\mathbb{Z})$. Next, we have
$\sum_{t\in\mathbb{Z}}g(t)=\sum_{t=0}^{n-1}\sum_{k\in\mathbb{Z}}g(t-kn)=np,$
(8.4)
where the last equality follows from condition (8.2). In a similar way, we
also have
$\sum_{t\in\mathbb{Z}}g(t)=\sum_{t=0}^{m-1}\sum_{k\in\mathbb{Z}}g(t-km)=mq,$
(8.5)
again using (8.2). Hence $np=mq$, that is, the vector $(p,q)$ is proportional
to $(m,n)$. ∎
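As a quick numerical illustration of Proposition 8.1 (not part of the proof; the example function and all names below are ours), one can take a simple finitely supported $g$ satisfying (8.2) and verify the identities (8.4) and (8.5) directly:

```python
# Illustration of Proposition 8.1 (our example, not from the text):
# for a finitely supported g satisfying (8.2), identities (8.4)-(8.5)
# force np = mq, i.e. the vector (p, q) is proportional to (m, n).
n, m = 2, 3

def g(t):
    # convolution of the indicators of {0,...,n-1} and {0,...,m-1}
    return sum(1 for s in range(m) if 0 <= t - s < n)

K = range(-5, 6)  # enough shifts to cover the finite support of g

# Condition (8.2): the sums over both arithmetic progressions are constant in t.
p_levels = {sum(g(t - k * n) for k in K) for t in range(n)}
q_levels = {sum(g(t - k * m) for k in K) for t in range(m)}
assert p_levels == {m} and q_levels == {n}       # tiling levels (p, q) = (m, n)

# Identities (8.4) and (8.5): the total mass of g equals n*p and m*q.
total = sum(g(t) for t in range(-1, n + m))
p, q = m, n
assert total == n * p == m * q                   # hence np = mq
```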
#### 8.1.2.
Let us recall (see Definition 4.5) that an $n\times m$ matrix $M=(c_{ij})$ is
called a _doubly stochastic array_ if its entries $c_{ij}$ are nonnegative,
and the sum of the entries at each row is $m$ and at each column is $n$. We
have seen that the minimal size of the support of an $n\times m$ doubly
stochastic array is $n+m-\gcd(n,m)$ (Theorem 4.6). In the proof of Lemma 4.8
we used one part of this result, namely, the part which states that there
exists an $n\times m$ doubly stochastic array whose support size is as small
as $n+m-\gcd(n,m)$.
In what follows we will use the other part of the result, that is, the part
which states that $n+m-\gcd(n,m)$ constitutes a lower bound for the support
size of any $n\times m$ doubly stochastic array. Actually, we will need a
stronger version of this result, proved in [EL22], which establishes that the
same lower bound holds also for complex-valued matrices, that is, even without
assuming that the matrix entries are nonnegative.
###### Theorem 8.2 (see [EL22, Theorem 3.1]).
Let $M=(c_{ij})$ be an $n\times m$ complex-valued matrix satisfying (4.7) and
(4.8), that is, the sum of the entries at each row is $m$ and at each column
is $n$. Then the support of $M$ has size at least $n+m-\gcd(n,m)$.
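The bound in Theorem 8.2 is attained already by nonnegative matrices. As a sketch (our own illustration; the function name is ours, not from the text), the classical northwest-corner rule builds an $n\times m$ nonnegative array with row sums $m$, column sums $n$, and support of size exactly $n+m-\gcd(n,m)$:

```python
# The lower bound of Theorem 8.2 is sharp: the northwest-corner rule
# (our illustration) produces an n x m nonnegative matrix with row sums m
# and column sums n whose support has size exactly n + m - gcd(n, m).
from math import gcd

def northwest_corner(n, m):
    row, col = [m] * n, [n] * m          # remaining row and column sums
    M = [[0] * m for _ in range(n)]
    i = j = 0
    while i < n and j < m:
        c = min(row[i], col[j])          # fill the current cell maximally
        M[i][j] = c
        row[i] -= c
        col[j] -= c
        if row[i] == 0:
            i += 1                       # this row is finished
        if col[j] == 0:
            j += 1                       # this column is finished
    return M

for n, m in [(2, 3), (4, 6), (5, 5)]:
    M = northwest_corner(n, m)
    assert all(sum(r) == m for r in M)                              # (4.7)
    assert all(sum(M[i][j] for i in range(n)) == n for j in range(m))  # (4.8)
    assert sum(1 for r in M for c in r if c != 0) == n + m - gcd(n, m)
```

A row and a column finish simultaneously exactly when a cumulative row sum $im$ equals a cumulative column sum $jn$, which happens $\gcd(n,m)$ times, giving the stated support size.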
#### 8.1.3.
By the _support_ of a function $g$ on $\mathbb{Z}$ we mean the set
$\operatorname{supp}g=\\{t\in\mathbb{Z}:g(t)\neq 0\\}.$ (8.6)
In the next result we use Theorem 8.2 to give a lower bound for the support
size of any function $g$ on $\mathbb{Z}$ that tiles simultaneously by the two
arithmetic progressions $n\mathbb{Z}$ and $m\mathbb{Z}$ with a nonzero tiling
level vector $(p,q)$.
###### Theorem 8.3.
Let $g$ be a function on $\mathbb{Z}$ satisfying (8.2) where $n,m$ are
positive integers and the vector $(p,q)$ is nonzero. Then
$\operatorname{supp}g$ has size at least $n+m-\gcd(n,m)$.
###### Proof.
By Proposition 8.1 the function $g$ is in $\ell^{1}(\mathbb{Z})$, and the
tiling level vector $(p,q)$ is proportional to $(m,n)$. By multiplying the
function $g$ by an appropriate scalar we may suppose that $(p,q)=(m,n)$.
We will first prove the result in the special case where $n,m$ are _coprime_.
Let $\mathbb{Z}_{nm}$ be the additive group of residue classes modulo $nm$.
Define a function
$h(t):=\sum_{k\in\mathbb{Z}}g(t-knm),\quad t\in\mathbb{Z}.$ (8.7)
Then $h$ is periodic with period $nm$, so it may be viewed as a function on
$\mathbb{Z}_{nm}$.
Let $H_{k}$ denote the subgroup of $\mathbb{Z}_{nm}$ generated by the element
$k$. One can verify using (8.2) and (8.7) that the function $h$ tiles the
group $\mathbb{Z}_{nm}$ by translations along each one of the two subgroups
$H_{n}$ and $H_{m}$, that is to say,
$\sum_{s\in H_{n}}h(t-s)=m,\quad\sum_{s\in H_{m}}h(t-s)=n,\quad
t\in\mathbb{Z}_{nm}.$ (8.8)
Next, we denote by $\mathbb{Z}_{n}$ and $\mathbb{Z}_{m}$ the additive groups
of residue classes modulo $n$ and $m$ respectively. Since $n,m$ are coprime,
then by the Chinese remainder theorem there is a group isomorphism
$\varphi:\mathbb{Z}_{nm}\to\mathbb{Z}_{n}\times\mathbb{Z}_{m}$ given by
$\varphi(t)=(t\,\operatorname{mod}\,n,\,t\,\operatorname{mod}\,m)$. This
isomorphism allows us to lift the function $h$ to a new function
$M:\mathbb{Z}_{n}\times\mathbb{Z}_{m}\to\mathbb{R}$ (8.9)
defined by $M(\varphi(t))=h(t)$, $t\in\mathbb{Z}_{nm}$. We use (8.9) as an
alternative way to represent a complex-valued $n\times m$ matrix $M$, in which
the rows of the matrix are indexed by residue classes modulo $n$, while the
columns are indexed by residue classes modulo $m$.
We now claim that the sum of the entries of the matrix $M$ at each row is
equal to $m$ and at each column is equal to $n$. To see this, we observe that
the isomorphism $\varphi$ maps the subgroup $H_{n}$ of $\mathbb{Z}_{nm}$ onto
the subgroup $\\{0\\}\times\mathbb{Z}_{m}$ of
$\mathbb{Z}_{n}\times\mathbb{Z}_{m}$. Hence for each $i\in\mathbb{Z}_{n}$, the
set $\\{(i,j):j\in\mathbb{Z}_{m}\\}$ is the image under $\varphi$ of a certain
coset of $H_{n}$ in $\mathbb{Z}_{nm}$, say, the coset $a_{i}-H_{n}$. It
follows that
$\sum_{j\in\mathbb{Z}_{m}}M(i,j)=\sum_{s\in H_{n}}h(a_{i}-s)=m,$ (8.10)
where in the last equality we used (8.8). In a similar way, $\varphi$ maps the
subgroup $H_{m}$ onto $\mathbb{Z}_{n}\times\\{0\\}$, so for each
$j\in\mathbb{Z}_{m}$ the set $\\{(i,j):i\in\mathbb{Z}_{n}\\}$ is the image
under $\varphi$ of a coset $b_{j}-H_{m}$, and we obtain
$\sum_{i\in\mathbb{Z}_{n}}M(i,j)=\sum_{s\in H_{m}}h(b_{j}-s)=n,$ (8.11)
again using (8.8). We thus see that the sum of the entries of $M$ at each row
is $m$ and at each column is $n$.
Notice that we cannot say that $M$ is a doubly stochastic array, since the
entries of $M$ are not guaranteed to be nonnegative (see Definition 4.5).
Nevertheless, we can now invoke Theorem 8.2 which is valid also for complex-
valued matrices. Since $n,m$ are coprime, it follows from the theorem that the
support of $M$ has size at least $n+m-1$. Since $\operatorname{supp}h$ and
$\operatorname{supp}M$ are of the same size, we conclude that
$|\operatorname{supp}h|\geqslant n+m-1.$ (8.12)
Lastly we observe that if $h(t)\neq 0$ for some $t\in\mathbb{Z}$, then $g$
does not vanish on at least one element of the arithmetic progression
$\\{t-knm:k\in\mathbb{Z}\\}$ due to (8.7). But these arithmetic progressions
are pairwise disjoint as $t$ goes through a complete set of residues modulo
$nm$. This shows that $\operatorname{supp}g$ has size at least as large as the
size of $\operatorname{supp}h$. So combined with (8.12) this implies that
$\operatorname{supp}g$ is of size at least $n+m-1$.
We have thus proved the result in the special case where $n,m$ are coprime. To
prove the result in the general case, we now let $n,m$ be two arbitrary
positive integers. We then write $n=dn^{\prime}$, $m=dm^{\prime}$, where
$d=\gcd(n,m)$ and $n^{\prime},m^{\prime}$ are coprime. For each $0\leqslant
j\leqslant d-1$ we consider the function
$g_{j}(t):=g(j+dt),\quad t\in\mathbb{Z}.$ (8.13)
It follows from (8.2) that each $g_{j}$ tiles the integers simultaneously by
the two arithmetic progressions $n^{\prime}\mathbb{Z}$ and
$m^{\prime}\mathbb{Z}$ at levels $p$ and $q$ respectively. Since
$n^{\prime},m^{\prime}$ are coprime (and the tiling level vector is nonzero)
then, by what we have proved above, the size of $\operatorname{supp}g_{j}$
must be at least $n^{\prime}+m^{\prime}-1$. It follows that
$|\operatorname{supp}g|=\sum_{j=0}^{d-1}|\operatorname{supp}g_{j}|\geqslant
d(n^{\prime}+m^{\prime}-1)=n+m-d,$ (8.14)
and we arrive at the desired conclusion. ∎
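The reduction used in the coprime case of the proof can be checked concretely. The sketch below (our illustration; all names are ours) periodizes a tiling function $g$ to $h$ on $\mathbb{Z}_{nm}$ as in (8.7), lifts $h$ to a matrix via the CRT isomorphism $\varphi$, and verifies the row and column sums (8.10), (8.11):

```python
# Sketch of the reduction in the proof of Theorem 8.3 (coprime case),
# using the concrete example g = chi_n * chi_m; all names are ours.
n, m = 3, 4          # coprime

def g(t):
    # chi_n * chi_m, which tiles by nZ at level m and by mZ at level n
    return sum(1 for s in range(m) if 0 <= t - s < n)

# (8.7): h is the nm-periodization of g, viewed as a function on Z_{nm}.
h = [sum(g(t + k * n * m) for k in range(-2, 3)) for t in range(n * m)]

# Lift h to an n x m matrix via the CRT map phi(t) = (t mod n, t mod m).
M = [[0] * m for _ in range(n)]
for t in range(n * m):
    M[t % n][t % m] = h[t]

# (8.10) and (8.11): row sums equal m, column sums equal n.
assert all(sum(row) == m for row in M)
assert all(sum(M[i][j] for i in range(n)) == n for j in range(m))

# Theorem 8.2 forces |supp h| = |supp M| >= n + m - 1; here it is exact.
assert sum(1 for row in M for c in row if c != 0) == n + m - 1
```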
We note that the correspondence between the $n\times m$ doubly stochastic
arrays and the nonnegative functions which tile the group $\mathbb{Z}_{nm}$ by
translations along each one of the two subgroups $H_{n}$ and $H_{m}$ (where
$n,m$ are coprime) was pointed out in an earlier version of [KP22].
#### 8.1.4.
Our next result shows that the lower bound in Theorem 8.3 is in fact sharp.
###### Theorem 8.4.
For any two positive integers $n,m$ there exists a nonnegative function $g$ on
$\mathbb{Z}$, supported on a set of $n+m-\gcd(n,m)$ consecutive integers, and
satisfying (8.2) with $(p,q)=(m,n)$.
###### Proof.
Let $\chi_{k}$ denote the indicator function of the subset
$\\{0,1,\dots,k-1\\}$ of $\mathbb{Z}$. We consider a function $g$ on
$\mathbb{Z}$ defined as the convolution
$g(t)=(\chi_{n}\ast\chi_{m})(t)=\sum_{s\in\mathbb{Z}}\chi_{n}(t-s)\chi_{m}(s),\quad
t\in\mathbb{Z}.$ (8.15)
Then $g$ is supported on the set $\\{0,1,\dots,n+m-2\\}$ of size $n+m-1$.
Since the function $\chi_{n}$ tiles at level one by translation with
$n\mathbb{Z}$, and $\chi_{m}$ tiles also at level one by translation with
$m\mathbb{Z}$, we can deduce from (8.15) that $g$ satisfies the simultaneous
tiling condition (8.2) with $(p,q)=(m,n)$. This proves the result in the case
where $n,m$ are coprime.
To prove the result in the general case, we write as before $n=dn^{\prime}$,
$m=dm^{\prime}$, where $d=\gcd(n,m)$ and $n^{\prime},m^{\prime}$ are coprime.
Let $h$ be a nonnegative function on $\mathbb{Z}$, supported on a set of
$n^{\prime}+m^{\prime}-1$ consecutive integers, which tiles simultaneously by
the two arithmetic progressions $n^{\prime}\mathbb{Z}$ and
$m^{\prime}\mathbb{Z}$ at levels $m$ and $n$ respectively (such a function $h$
exists, by what we have proved above). We then define a function $g$ on
$\mathbb{Z}$ by
$g(j+dt):=h(t),\quad 0\leqslant j\leqslant d-1,\;t\in\mathbb{Z}.$ (8.16)
Then $g$ satisfies the simultaneous tiling condition (8.2), and $g$ is
supported on a set of $d(n^{\prime}+m^{\prime}-1)=n+m-d$ consecutive integers,
as required. ∎
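The general construction (8.16) can likewise be checked numerically. The sketch below (our illustration; helper names are ours) takes $n=4$, $m=6$, so $d=2$ and $(n^{\prime},m^{\prime})=(2,3)$, builds the coprime-case function $h$ scaled to tile at levels $m$ and $n$, replicates it via $g(j+dt)=h(t)$, and verifies the tiling condition (8.2) and the support size:

```python
# Sketch of the general construction in Theorem 8.4 for d = gcd(n, m) > 1;
# we take n = 4, m = 6, so d = 2 and the coprime pair is (n', m') = (2, 3).
from math import gcd

n, m = 4, 6
d = gcd(n, m)
n1, m1 = n // d, m // d                   # the coprime pair (n', m')

def h(t):
    # chi_{n'} * chi_{m'}, scaled by d so it tiles n'Z at level m, m'Z at level n
    return d * sum(1 for s in range(m1) if 0 <= t - s < n1)

def g(x):
    # Construction (8.16): g(j + d*t) = h(t); Python's // is floor division,
    # so this also handles negative x correctly.
    return h(x // d)

K = range(-10, 11)                        # enough shifts to cover the support
assert all(sum(g(x - k * n) for k in K) == m for x in range(n))  # level p = m
assert all(sum(g(x - k * m) for k in K) == n for x in range(m))  # level q = n

support = [x for x in range(-2 * m, 2 * m) if g(x) != 0]
assert len(support) == n + m - d          # = d*(n' + m' - 1), as required
assert support == list(range(min(support), min(support) + len(support)))
```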
### 8.2.
Given a measurable function $f$ on $\mathbb{R}$, we define for each
$x\in\mathbb{R}$ a function $f_{x}$ on the set of integers $\mathbb{Z}$, given
by
$f_{x}(t):=f(x+t),\quad t\in\mathbb{Z}.$ (8.17)
The following lemma gives a connection between tilings of $\mathbb{R}$ and
tilings of $\mathbb{Z}$.
###### Lemma 8.5.
Let $f$ be a measurable function on $\mathbb{R}$, and $\Lambda$ be a subset of
$\mathbb{Z}$. Then $f+\Lambda$ is a tiling of $\mathbb{R}$ at level $w$ if and
only if $f_{x}+\Lambda$ is a tiling of $\mathbb{Z}$ at the same level $w$ for
almost every $x\in\mathbb{R}$.
###### Proof.
Let $f+\Lambda$ be a tiling of $\mathbb{R}$ at level $w$, then we have
$\sum_{\lambda\in\Lambda}f(x-\lambda)=w$ (8.18)
for all $x$ in some set $E\subset\mathbb{R}$ of full measure. It follows that
for $t\in\mathbb{Z}$ we have
$\sum_{\lambda\in\Lambda}f_{x}(t-\lambda)=\sum_{\lambda\in\Lambda}f(x+t-\lambda)=w$
(8.19)
provided that $x\in E-t$. (The series in both (8.18) and (8.19) are understood
to converge absolutely.) Hence $f_{x}+\Lambda$ is a tiling of $\mathbb{Z}$ at
level $w$ for every $x$ belonging to the set $\bigcap_{t\in\mathbb{Z}}(E-t)$,
which is also a set of full measure in $\mathbb{R}$.
Conversely, let $f_{x}+\Lambda$ be a tiling of $\mathbb{Z}$ at level $w$ for
almost every $x\in\mathbb{R}$. Then
$\sum_{\lambda\in\Lambda}f(x-\lambda)=\sum_{\lambda\in\Lambda}f_{x}(-\lambda)=w\quad\text{a.e.}$
(8.20)
(with absolute convergence of the series) and so $f+\Lambda$ is a tiling of
$\mathbb{R}$ at level $w$. ∎
###### Lemma 8.6.
Let $f$ be a measurable function on $\mathbb{R}$. Then
$\operatorname{mes}(\operatorname{supp}f)=\int_{0}^{1}|\operatorname{supp}f_{x}|\,dx.$
(8.21)
The proof of this lemma is standard and so we omit the details.
### 8.3.
Now we can prove Theorem 2.8, Theorem 2.9 and Theorem 2.10.
###### Proof of Theorem 2.8.
Let $n,m$ be positive integers, and $f$ be a measurable function on
$\mathbb{R}$ satisfying (2.3). By Lemma 8.5, the function $g=f_{x}$ then
satisfies the simultaneous tiling condition (8.2) for almost every
$x\in\mathbb{R}$. Using Proposition 8.1 we conclude that the vector $(p,q)$
must be proportional to $(m,n)$. ∎
###### Proof of Theorem 2.9.
Let $f$ be a measurable function on $\mathbb{R}$ satisfying (2.3) where $n,m$
are positive integers and the vector $(p,q)$ is nonzero. By Lemma 8.5, the
function $g=f_{x}$ then satisfies the simultaneous tiling condition (8.2) for
almost every $x\in\mathbb{R}$. By applying Theorem 8.3 to the function
$g=f_{x}$ we obtain that $|\operatorname{supp}f_{x}|\geqslant n+m-\gcd(n,m)$
for almost every $x\in\mathbb{R}$. Finally, combining this with Lemma 8.6 we
conclude that
$\operatorname{mes}(\operatorname{supp}f)=\int_{0}^{1}|\operatorname{supp}f_{x}|\,dx\geqslant
n+m-\gcd(n,m),$ (8.22)
and so the theorem is proved. ∎
###### Proof of Theorem 2.10.
Let $n,m$ be positive integers, and $(p,q)=(m,n)$. Let $g$ be the function
given by Theorem 8.4, that is, $g$ is a nonnegative function on $\mathbb{Z}$,
supported on a set of $n+m-\gcd(n,m)$ consecutive integers, and satisfying
(8.2). We then construct a measurable (in fact, piecewise constant)
nonnegative function $f$ on $\mathbb{R}$ given by $f(x+t)=g(t)$ for every
$t\in\mathbb{Z}$ and $x\in[0,1)$. Then $f$ is supported on an interval of
length $n+m-\gcd(n,m)$, and $f$ satisfies the tiling condition (2.3) by Lemma
8.5. ∎
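The proof of Theorem 2.10 can be illustrated numerically. In the sketch below (our illustration; names are ours, and the function $f$ is the piecewise constant extension $f(y)=g(\lfloor y\rfloor)$ of the integer tiling function $g$), we check the simultaneous tiling of $\mathbb{R}$ at a few sampled points:

```python
# Numerical check of the piecewise constant f built in the proof of
# Theorem 2.10, for the coprime case n = 2, m = 3 (gcd = 1).
import math

n, m = 2, 3

def g(t):
    # chi_n * chi_m on Z, supported on {0,...,n+m-2}
    return sum(1 for s in range(m) if 0 <= t - s < n)

def f(y):
    # f(x + t) = g(t) for x in [0,1), t in Z, i.e. f(y) = g(floor(y))
    return g(math.floor(y))

# f + nZ tiles R at level m and f + mZ tiles R at level n (sampled points);
# supp f is an interval of length n + m - gcd(n, m) = 4.
for x in [0.0, 0.25, 1.7, 3.99]:
    assert sum(f(x - k * n) for k in range(-10, 11)) == m
    assert sum(f(x - k * m) for k in range(-10, 11)) == n
```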
## Acknowledgement
We thank Mihalis Kolountzakis for posing to us the problem discussed in
Section 6.
## References
* [Ano73] D. V. Anosov, On an additive functional homology equation connected with an ergodic rotation of the circle. Izv. Akad. Nauk SSSR Ser. Mat. 37 (1973), no. 6, 1259–1274. English translation in Math. USSR-Izv. 7 (1973), no. 6, 1257–1271.
* [EL22] M. Etkind, N. Lev, Support of extremal doubly stochastic arrays. Israel J. Math., to appear, arXiv:2207.08116.
* [Kol04] M. N. Kolountzakis, The study of translational tiling with Fourier analysis. Fourier analysis and convexity, pp. 131–187, Birkhäuser, 2004.
* [KL96] M. N. Kolountzakis, J. C. Lagarias, Structure of tilings of the line by a function. Duke Math. J. 82 (1996), no. 3, 653–678.
* [KL16] M. N. Kolountzakis, N. Lev, On non-periodic tilings of the real line by a function. Int. Math. Res. Not. IMRN 2016, no. 15, 4588–4601.
* [KL21] M. N. Kolountzakis, N. Lev, Tiling by translates of a function: results and open problems. Discrete Anal. 2021, Paper No. 12, 24 pp.
* [KP22] M. N. Kolountzakis, E. Papageorgiou, Functions tiling with several lattices. J. Fourier Anal. Appl. 28 (2022), no. 4, Paper No. 68, 19 pp.
* [KW19] M. N. Kolountzakis, Y. Wang, The structure of multiplicative tilings of the real line. J. Fourier Anal. Appl. 25 (2019), no. 3, 1248–1265.
* [KW99] M. N. Kolountzakis, T. Wolff, On the Steinhaus tiling problem. Mathematika 46 (1999), no. 2, 253–280.
* [LM91] H. Leptin, D. Müller, Uniform partitions of unity on locally compact groups. Adv. Math. 90 (1991), no. 1, 1–14.
* [Lev22] N. Lev, An example concerning Fourier analytic criteria for translational tiling. Rev. Mat. Iberoam. 38 (2022), no. 6, 1975–1991.
* [Liu21] B. Liu, Periodic structure of translational multi-tilings in the plane. Amer. J. Math. 143 (2021), no. 6, 1841–1862.
* [Lou23] M. Loukaki, Doubly stochastic arrays with small support. Australas. J. Combin. 86 (2023), 136–148.
* [Lov12] L. Lovász, Large networks and graph limits. American Mathematical Society, 2012.
# Axiomatic de Sitter quantum Yang-Mills theory
with color confinement and mass gap
M.V. Takook _APC, UMR 7164_
_Université Paris Cité_
_75205 Paris, France_<EMAIL_ADDRESS>
###### Abstract.
The analyticity property of de Sitter quantum Yang-Mills theory in the
framework of Krein space quantization, including quantum metric fluctuations,
is demonstrated. This property completes our previous work on quantum
Yang-Mills theory in de Sitter's ambient space formalism, and it allows us to
construct an axiomatic quantum field theory similar to Wightman's axioms.
Color confinement is proven in the general case, whereas it was previously
established only for the early universe. It is shown using the interaction
between the gluon fields and the conformal sector of the gravitational field,
which is a massless minimally coupled scalar gauge field. The gluon mass
results from the interaction between the gluon fields and the massless
minimally coupled scalar field (the conformal sector of the gravitational
field), followed by symmetry breaking due to the vacuum expectation value of
the scalar field.
Proposed PACS numbers: 04.62.+v, 98.80.Cq, 12.10.Dm
## 1\. Introduction
A challenging problem of QFT is constructing an axiomatic quantum Yang-Mills
theory, similar to Wightman's axiomatic approach, that includes color
confinement and a mass gap [1] (one of the seven Millennium Prize Problems
stated by the Clay Mathematics Institute on May 24, 2000). From the Wilson
loops of lattice gauge theory, which explain the confinement phase, one may
conclude that the confinement problem is related to the spacetime geometry or
the gravitational field; that is, the source of color confinement may lie in
the geometry of spacetime. In this regard, QCD in curved spacetime has been
considered by many authors over the years. Finally, a precise treatment of the
color confinement problem was given in the one-loop approximation by Kharzeev
et al.; see [2] and the references therein.
loop approximation by Kharzeev et al., see [2] and the references therein.
Kharzeev et al. constructed an effective low-energy Lagrangian of gluodynamics
on a curved conformal background that satisfies all constraints imposed by the
Renormalization Group. Their model is scale and conformally invariant in
the limit of vanishing vacuum energy density. It matches the perturbative
theory at short distances. Color fields are dynamically confined, and the
strong coupling freezes at distances larger than the glueball size. The
main idea of their model is the coupling between a scalar field (the conformal
sector of the metric) with the massless vector fields (gluon fields), which
mediate the strong interaction. Nevertheless, their model is only
renormalizable in one-loop approximation and cannot be formulated in an
axiomatic Wightman-like QFT framework.
The existence of a covariant and renormalizable quantum Yang-Mills theory
within the framework of the Krein space quantization (including quantum metric
fluctuation) has recently been suggested in de Sitter (dS) ambient space
formalism [3]. We found that the ghost fields are the massless minimally
coupled (mmc) scalar fields. The fundamental part of our model is the
covariant quantization of the mmc scalar field based on the Krein space
quantization. We found that the mass gap originates in the infrared limit of
the mmc scalar field interacting with the gluon fields, and that color
confinement originates in the spacetime geometry. The confinement of colors
was discussed only for the early universe, when the effect of the
gravitational field was significant. Moreover, the analyticity property was
not discussed in the previous work [3]. This property is essential for
constructing an interacting QFT in an axiomatic approach similar to Wightman's
axioms, which will be studied in this article.
Recently, we formulated the mmc scalar field in the dS ambient space formalism
as a gauge potential or connection field [4]. We constructed the scalar gauge
theory with the help of an arbitrary constant five-vector field $B^{\alpha}$,
in analogy with the standard gauge theory. It is shown that the dS ambient space
formalism permits us to unify the scalar and vector gauge fields. The scalar
gauge field can be interpreted as the conformal sector of the gravitational
field. In the Landau gauge, the conformal sector is the mmc scalar field [5,
6]. This field may be interpreted as the connection between the different dS
hyperboloids in quantum dS geometry. It breaks the symmetry of a specific dS
hyperboloid [4].
In this letter, using the ideas of the three papers mentioned above: $1)$
the coupling of the conformal sector of the metric with the gluon fields [2],
$2)$ Yang-Mills theory in Krein space quantization [3], and $3)$ the scalar
gauge field as the conformal sector of the gravitational field [4], an
axiomatic quantum Yang-Mills theory in the framework of the Krein space
quantization (including quantum metric fluctuation) with a mass gap and color
confinement is constructed in the dS ambient space formalism. This
construction is renormalizable and analytic and can be applied to the current
universe. It is essential to emphasize that the ambient space formalism and
the Krein space quantization make it possible to build a coherent model for
the quantum Yang-Mills theory.
It is interesting to note that some authors have already discussed these
ideas. The couplings of Lorentz vector and scalar potentials for explaining
the confinement problem of spinless particles in $1+1$ dimensions have also
been studied in [7]. Recently, the dependence of color confinement on dS and
Anti-dS geometry, and especially on the conformal sector (i.e., the scalar
field potential), has also been considered in [8, 9].
## 2\. Axiomatic QFT
The axiomatic QFT of the massive field can be constructed from the Wightman
axioms in Minkowski spacetime [10]. It can be directly generalized to the
massive field in dS spacetime (principal series representation) by replacing
the positive energy condition with a certain geodesic spectral condition [11].
Nevertheless, some axioms must be changed for the massless vector field due to
the gauge invariance. In Minkowski space, Gupta introduced four types of
photons, and a covariant quantization of the massless vector field was
obtained [12]. However, the scalar photon has negative norms. Then, to obtain
an approach similar to Wightman's axioms, the positivity condition must be
replaced with the existence of an indefinite sesquilinear form; see [13] and
the references therein. This procedure with four types of photons was then
generalized to dS spacetime, and an axiomatic QED in dS space was constructed
[14]. Now we would like to generalize it to QCD. Here we only
discuss the differences between QED and QCD in Minkowski and dS spaces.
We know that in Minkowskian spacetime, the four types of photons satisfy the
same field equations:
$\Box A_{\mu}^{(i)}=0,\;\;\;i=1,2,3,4,$
where $\Box=\partial_{\mu}\partial^{\mu}$ is the Laplace-Beltrami operator on
Minkowski spacetime. In this case, all modes propagate on the Minkowskian
light cone. Nevertheless, in dS spacetime, the transverse photons
($K_{\alpha}^{(t)},t=1,2$) propagate on the dS light cone, while the two
others (the scalar and gauge modes $\phi_{s},\phi_{g}$) do not propagate only
on the dS light cone. These modes satisfy the following equations [14]:
(2.1) $\left\{\begin{array}{ll}\phi_{s}\equiv\partial\cdot K\neq 0
&\Longrightarrow\;\Box_{H}\phi_{s}=0,\\
\partial\cdot K=0
&\Longrightarrow\;\left\{\begin{array}{ll}\partial_{\alpha}^{\top}\phi_{g},&\Box_{H}\phi_{g}=0,\\
K_{\alpha}^{(t)},&(\Box_{H}+2H^{2})K^{(t)}=0,\end{array}\right.\end{array}\right.$
where $\phi_{s}$ and $\phi_{g}$ can be associated with the scalar photon and
scalar pure gauge mode, respectively [14]. $H$ is the Hubble constant
parameter, and $\Box_{H}$ is the Laplace-Beltrami operator on dS spacetime. In
ambient space formalism, it can be written in the following form:
$\square_{H}=\partial^{\top}\cdot\partial^{\top}\,;\;\;\;\partial_{\beta}^{\top}=\theta_{\alpha\beta}\partial^{\alpha}=\partial_{\beta}+H^{2}x_{\beta}x\cdot\partial\,;\;\;\alpha,\;\beta\equiv
0,1,\cdots,4\,,$
where $\theta_{\alpha\beta}=\eta_{\alpha\beta}+H^{2}x_{\alpha}x_{\beta}$ is
the transverse projector on the dS hyperboloid; for ambient space notation,
see [3]. The scalar and pure gauge modes are mmc scalar fields; they are auxiliary unphysical states. The transverse modes satisfy the massless conformally coupled scalar field equation; these are the physical states and propagate on the dS light cone. In the previous paper [14], the quantization was carried out by the standard method. In the indecomposable representation, however, the scalar and pure gauge modes are quantized in Krein space. Due to the coupling of the propagator to the conserved current, the scalar and gauge photons decouple entirely from the theory and cannot appear in the internal lines of Feynman diagrams. An axiomatic quantization of the free mmc scalar field in Krein space was constructed previously in [15].
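As a consistency check of these definitions (a standard computation, included here for the reader's convenience rather than taken from the source), one can verify directly that $\partial^{\top}$ and $\theta$ are transverse on the hyperboloid $x\cdot x=-H^{-2}$, and that $\theta$ is a projector:

```latex
% Transversality: contract with x^{\beta} and use x . x = -H^{-2}
\begin{align*}
x^{\beta}\partial_{\beta}^{\top}
  &= x^{\beta}\partial_{\beta} + H^{2}(x\cdot x)\,x\cdot\partial
   = x\cdot\partial - x\cdot\partial = 0\,,\\
x^{\beta}\theta_{\alpha\beta}
  &= x_{\alpha} + H^{2}x_{\alpha}(x\cdot x) = 0\,,\\
% Projector property of the transverse projector theta
\theta_{\alpha}^{\;\,\gamma}\theta_{\gamma\beta}
  &= \eta_{\alpha\beta} + 2H^{2}x_{\alpha}x_{\beta}
     + H^{4}x_{\alpha}(x\cdot x)x_{\beta}
   = \eta_{\alpha\beta} + H^{2}x_{\alpha}x_{\beta}
   = \theta_{\alpha\beta}\,.
\end{align*}
```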
However, the situation for QCD is rather different: ghost modes appear, which are also mmc scalar fields [3]. Although these modes can be eliminated from the external lines of Feynman diagrams by imposing the reality condition, their presence in the internal lines cannot be ignored, just as in the Minkowskian counterpart. In this case, contrary to QED, the ghost fields force the quantization to be carried out entirely in Krein space in order to obtain a covariant quantization or an axiomatic quantum field theory. For simplicity, we discuss the scalar field, but the construction generalizes easily to fields of other spins.
Using a) dS covariance, b) locality, and c) the existence of an indefinite sesquilinear form, the free quantum field theory can be constructed on the Krein space (Hilbert $\oplus$ anti-Hilbert spaces). We begin with the following infinite-dimensional local closed algebra:
(2.2) $[\phi(x),\phi(x^{\prime})]\equiv G(x,x^{\prime})\mathbbm{1}\,,$
where $G(x,x^{\prime})=\mathcal{W}(x,x^{\prime})-\mathcal{W}(x^{\prime},x)$ is the commutation two-point function, which vanishes for space-like separated points. $\mathbbm{1}$ is the identity operator on the Krein space, which is constructed from the operator algebra (2.2).
$\mathcal{W}(x,x^{\prime})=\langle\Omega\mid\phi(x)\phi(x^{\prime})\mid\Omega\rangle$
is the two-point function and $\mid\Omega\rangle$ is the Krein-Fock vacuum
state [15]. To obtain well-defined and normalizable field operators, they must be defined in the sense of distributions (tempered distributions) on an open set $\mathcal{O}$ of spacetime [11]. For any test function
$f(x)\in\mathcal{D}(X_{H})$, we have an indefinite sesquilinear form that is
defined by
(2.3) $\int_{X_{H}\times
X_{H}}f^{*}(x)\mathcal{W}(x,x^{\prime})f(x^{\prime})\mathrm{d}\sigma(x)\mathrm{d}\sigma(x^{\prime})\,,$
where $f^{*}$ is the complex conjugate of $f$. $X_{H}$ is a $4$-dimensional
hyperboloid embedded in the $5$-dimensional Minkowskian spacetime with the
equation:
(2.4) $X_{H}=\{x\in\mathbb{R}^{5}\;|\;\;x\cdot x=\eta_{\alpha\beta}x^{\alpha}x^{\beta}=-H^{-2}\},\;\;\alpha,\beta=0,1,2,3,4,$
where $\eta_{\alpha\beta}=$diag$(1,-1,-1,-1,-1)$. $\mathcal{D}(X_{H})$ is the space of $C^{\infty}$ functions with compact support in $X_{H}$ and with values in $\mathbb{C}$. $\mathrm{d}\sigma(x)$ is the dS-invariant volume element.
For the construction of the Hilbert space from the operator’s field algebra in
dS space, see [16].
Then the field operator can be divided into two parts, a positive-norm part and a negative-norm part, which act on the Hilbert space and the anti-Hilbert space, respectively:
$\phi(f)=\frac{1}{\sqrt{2}}\left[\phi_{p}(f)+\phi_{n}(f)\right]\,.$
It is natural to split each field operator into creation ($\phi^{+}$) and annihilation ($\phi^{-}$) parts:
$\phi_{p}(f)=\phi^{-}_{p}(f)+\phi^{+}_{p}(f)\,,\;\phi_{n}(f)=\phi^{-}_{n}(f)+\phi^{+}_{n}(f)\,.$
$\phi^{+}_{p}(f)$ creates a positive norm state, and $\phi^{-}_{p}(f)$
annihilates a positive norm state from the Hilbert space. $\phi^{+}_{n}(f)$
creates a negative norm state, and $\phi^{-}_{n}(f)$ annihilates a negative
norm state from the anti-Hilbert space.
In Krein space quantization, the two-point function is $\mathrm{i}$ times the imaginary part of the two-point function of the positive-norm mode solutions of the standard method [15, 17]:
(2.5)
$\mathcal{W}(x,x^{\prime})=\mathcal{W}_{p}(x,x^{\prime})+\mathcal{W}_{n}(x,x^{\prime})=\mathrm{i}\mathrm{Im}\mathcal{W}_{p}(x,x^{\prime})=\frac{1}{2}G(x,x^{\prime}),$
where $\mathcal{W}_{n}(x,x^{\prime})=-\mathcal{W}_{p}^{*}(x,x^{\prime})$. The
two-point commutation function in the two models is the same
$G(x,x^{\prime})=G_{p}(x,x^{\prime})$; therefore, free-field quantization in
both models gives the same results.
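The chain of equalities in (2.5) can be verified directly; assuming the standard hermiticity property $\mathcal{W}_{p}(x^{\prime},x)=\mathcal{W}_{p}^{*}(x,x^{\prime})$ of the positive-norm two-point function (our assumption, standard for Wightman functions), the antisymmetry of $\mathcal{W}$ reproduces the factor $\tfrac{1}{2}$:

```latex
\begin{align*}
\mathcal{W}(x^{\prime},x)
  &= \mathrm{i}\,\mathrm{Im}\,\mathcal{W}_{p}(x^{\prime},x)
   = \mathrm{i}\,\mathrm{Im}\,\mathcal{W}_{p}^{*}(x,x^{\prime})
   = -\,\mathrm{i}\,\mathrm{Im}\,\mathcal{W}_{p}(x,x^{\prime})
   = -\,\mathcal{W}(x,x^{\prime})\,,\\
G(x,x^{\prime})
  &= \mathcal{W}(x,x^{\prime}) - \mathcal{W}(x^{\prime},x)
   = 2\,\mathcal{W}(x,x^{\prime})\,.
\end{align*}
```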
To generalize the axiomatic Krein space quantization to interacting quantum field theory, such as Yang-Mills theory, we need to prove the analyticity properties of the vacuum expectation values of products of field operators [10, 11]:
(2.6)
$\mathcal{W}_{N}(x_{1},\cdots,x_{N})\equiv\langle\Omega\mid\,\phi(x_{1})\phi(x_{2})\cdots\phi(x_{N})\mid\Omega\rangle\,.$
First, we recall a theorem regarding the analytic/holomorphic nature of sums,
differences, and products of analytic functions.
Analyticity theorem: Let $A\subseteq\mathbb{C}$ be an open set, $k\in\mathbb{C}$, and let $f,g:A\longrightarrow\mathbb{C}$ be analytic on $A$, i.e. differentiable at every point $z_{0}\in A$. Then:
A) $f+g$ is analytic on $A$ and
$(f+g)^{\prime}(z)=f^{\prime}(z)+g^{\prime}(z)$ for all $z\in A$.
B) $kf$ is analytic on $A$ and $(kf)^{\prime}(z)=kf^{\prime}(z)$ for all $z\in
A$.
C) $fg$ is analytic on $A$ and
($fg)^{\prime}(z)=f(z)g^{\prime}(z)+f^{\prime}(z)g(z)$ for all $z\in A$.
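Part (C) of the theorem can be illustrated numerically (a toy check of our own, not part of the argument; the function choices and tolerance are illustrative):

```python
import cmath


def deriv(h, z, eps=1e-6):
    # Central-difference approximation of the complex derivative,
    # valid for functions analytic in a neighbourhood of z.
    return (h(z + eps) - h(z - eps)) / (2 * eps)


# Two analytic functions on the whole plane.
f = lambda z: cmath.exp(z)
g = lambda z: z**3 + 1j * z

z0 = 0.7 + 0.3j
# Product rule: (fg)'(z0) = f(z0) g'(z0) + f'(z0) g(z0).
lhs = deriv(lambda z: f(z) * g(z), z0)
rhs = f(z0) * deriv(g, z0) + deriv(f, z0) * g(z0)
```

The two sides agree up to the finite-difference error of the derivative approximation.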
The Wightman two-point function $\mathcal{W}_{p}(x,x^{\prime})$ is analytic in the tuboid $\mathcal{T}_{12}$; for the proof of this analyticity property and the definition of the tuboid, see [11]. $\mathcal{W}_{n}(x,x^{\prime})$ is also analytic, but in the tuboid $\mathcal{T}_{21}$. Hence, in Krein space quantization, the two-point function (2.5) is not analytic. However, it is free from all infrared and ultraviolet divergences except for the light-cone singularity. We know that this type of singularity can be smoothed out by quantum metric fluctuations [18]. Thus, in Krein space quantization including quantum metric fluctuations, the two-point function $\langle\mathcal{W}(x,x^{\prime})\rangle$ (or equivalently $\langle G(x,x^{\prime})\rangle$) is well-defined. It is differentiable on the dS hyperboloid; see equation $(2.5)$ in [18] for Minkowskian space and equation $(5.15)$ in [3] for dS space. The expectation value is taken over the first-order quantum metric fluctuations [19, 20, 21].
$\langle\mathcal{W}(x,x^{\prime})\rangle$ is a regular function on the dS hyperboloid in Krein space quantization, i.e. non-singular and differentiable. Therefore, it is analytic on the dS hyperboloid. In this case, the time-ordered two-point function $\langle G_{T}(x,x^{\prime})\rangle$ is also analytic.
From the Feynman path integral on curved spacetime [22], and also the Feynman-type algebra for dS spacetime [11, 23], $G_{T}(x_{1},\cdots,x_{N})$ can be written in terms of sums and products of the two-point functions $G_{T}(x,x^{\prime})$ in Krein space quantization. Then $G_{T}(x_{1},\cdots,x_{N})$ is singular only on the light cones, and these singularities can be smoothed out by quantum metric fluctuations. Since the two-point function $\langle G_{T}(x,x^{\prime})\rangle$ is analytic, the calculation of correlation functions together with the Analyticity theorem shows that the $N$-point correlation function $\langle G_{T}(x_{1},\cdots,x_{N})\rangle$ is well-defined on the dS hyperboloid, and hence analytic. For more details on the calculation of correlation functions under light-cone fluctuations, see [20].
Therefore, the axiomatic procedure for an interacting quantum field theory can be completed in a way similar to Wightman's axioms by using the ambient space formalism and Krein space quantization, including quantum metric fluctuations. An axiomatic quantum Yang-Mills theory can then be constructed by generalizing this procedure to the spinor and vector fields. We recall that in Krein space with quantum metric fluctuations, the axioms are: a) dS covariance, b) locality, c) existence of an indefinite sesquilinear form, and d) analyticity properties.
## 3\. Color confinement and mass gap
Let us briefly recall the results of Kharzeev et al. [2]; the relationship with our model in [4] will then be discussed. They considered the coupling of the gluon fields, $F_{\mu\nu}^{a}(X)$, to the conformal sector of the metric $h(X)$, $g_{\mu\nu}=e^{h(X)}\eta_{\mu\nu}$, where $X^{\mu}$ are the intrinsic coordinates. Using a Legendre transformation of the scalar field $h(X)$, they introduced the dilaton field $\chi(X)$. They then calculated the effective potential $W(\chi,F)$ in a constant chromomagnetic field, which addresses the confinement problem of QCD in the one-loop approximation. The effect of geometry on the gluon fields appears as a potential well, which Kharzeev et al. interpreted as a conformal bag model. This behavior is extracted from the conformal sector of geometry, i.e. a scalar field $\chi(X)$. In this case, the scalar field is part of the geometry and can be considered a connection field or gauge potential, a common point with our model.
They obtained the glueball mass in terms of the dilaton mass, which can explain the mass gap problem. They calculated the leading radiative correction to the gluon propagator, which was constant, and concluded that the strong coupling freezes at distances larger than the glueball size. Two crucial ingredients of their model are the scalar field as a gauge potential and a constant part in the propagator, both of which appear precisely in our model. Although their construction describes the color confinement and mass gap problems in a one-loop approximation, their model is not renormalizable and cannot be formulated in an axiomatic QFT framework. These last two problems can be solved in our model thanks to the dS ambient space formalism, Krein space quantization, and quantum metric fluctuations.
In the previous paper, we showed that the gluon propagator in dS spacetime, in the early-universe limit of very large $H$, is a linear function of the ambient space coordinates $x^{\alpha}$ [3]. Such behavior of the two-point function freezes the color force at long distances, and one can thus explain color confinement in the early universe. We have also shown that, due to the interaction between the gluon fields and the ghost fields, which are mmc scalar fields, mass terms appear for the gluon field in the one-loop approximation. In the early universe, this mass term and the linear coordinate behavior of the propagator explain the mass gap. In this limit, the effect of geometry on QCD is significant, and the color confinement and mass gap can easily be seen [3].
First, the relation between our model and the Kharzeev et al. model is discussed, and then we establish color confinement and the mass gap in the current universe. It is well known that the conformal sector of the spacetime metric becomes a dynamical degree of freedom due to the trace anomaly of quantum matter fields [24, 25]. In the Landau gauge of the gravitational field, the conformal sector reduces to the mmc scalar field [5, 6]:
(3.1)
$\mathcal{K}_{\alpha\beta}=\frac{1}{4}\theta_{\alpha\beta}\Phi_{\mathrm{m}}\,,$
where $\theta_{\alpha\beta}$ plays the role of the dS metric in the ambient space formalism and $\Phi_{\mathrm{m}}$ is the mmc scalar field. In this gauge, the interaction between the gluon fields, ($K_{\alpha}^{a}\,,\;a=1,\cdots,8$), and the mmc scalar field $\Phi_{\mathrm{m}}$ appears as a geometric effect. It can be naturally extracted from the connection coefficients of the spacetime geometry, $\Gamma_{\mu\nu}^{\rho}$, in the covariant derivative.
Nevertheless, we know that QFT in curved spacetime suffers from non-renormalizability. Krein space quantization (including quantum metric fluctuations) must be used to solve this problem; see [18] and the references therein. We now use the idea of [2] for the interaction of the conformal sector of the gravitational field in dS spacetime with the gluon fields, and combine it with quantization in Krein space, which can explain the color confinement and mass gap in a renormalized covariant way.
We recall the Yang-Mills theory in the dS ambient space formalism. The SU$(3)$ gauge-invariant Lagrangian density for the gauge vector field and the spinor field can, in its simplest form, be written as [3]:
(3.2)
$\mathcal{L}(K^{a},\psi,\psi^{\dagger})=-\frac{1}{4}F_{\alpha\beta}^{\;\;\;\;a}F^{\alpha\beta
a}+H\psi^{\dagger}\gamma^{0}\left(-\mathrm{i}\not{x}\gamma\cdot
D^{K}+2\mathrm{i}\right)\psi\,,$
where
(3.3)
$F_{\alpha\beta}^{\;\;\;\;a}=\nabla^{\top}_{\alpha}K_{\beta}^{\;\;a}-\nabla^{\top}_{\beta}K_{\alpha}^{\;\;a}+g^{\prime}C_{bc}^{\;\;\;\;a}K_{\alpha}^{\;\;b}K_{\beta}^{\;\;c}\,,\;\;\;\;\;x^{\alpha}F_{\alpha\beta}^{\;\;\;\;a}=0=x^{\beta}F_{\alpha\beta}^{\;\;\;\;a}\,,$
and
(3.4)
$D_{\beta}^{K}\psi\equiv\left(\nabla^{\top}_{\beta}-\mathrm{i}g^{\prime}K_{\beta}^{a}t_{a}\right)\psi\,.$
$g^{\prime}$ is the coupling constant between the spinor field and the gluon fields as the gauge potentials. From now on, for simplicity, we suppress the index $a$.
If we consider the mmc scalar field as a gauge potential [4], it may be regarded as the conformal part of the metric or the gravitational field. In that case, it must appear in the connection coefficients of the spacetime geometry, $\Gamma_{\mu\nu}^{\rho}(\Phi_{m}\,,\cdots)$; the spacetime geometry is then no longer Riemannian and may be identified as Weyl geometry. For the definition of the covariant derivative of a vector field in Weyl geometry, see equation $(37)$ in [26]. At first order in the perturbation of the dS covariant derivative, similar to the scalar-spinor field interaction [4] and Weyl geometry, the covariant derivative of the vector field must be replaced with the following gauge-covariant derivative:
(3.5)
$\nabla^{\top}_{\alpha}K_{\beta}\;\;\Longrightarrow\;\;D_{\alpha}^{\Phi}K_{\beta}\equiv\left(\nabla^{\top}_{\alpha}+gB^{\top}_{\alpha}\Phi_{\mathrm{m}}+\cdots\right)K_{\beta}\,,$
where $g$ is the coupling constant between the gauge scalar field and the
vector field $K_{\beta}$. $B^{\alpha}$ is an arbitrary constant five-vector
field.
$\nabla^{\top}_{\alpha}K_{\beta}=\partial^{\top}_{\alpha}K_{\beta}-H^{2}x_{\beta}K_{\alpha}\,$
is the transverse derivative of a vector field on dS hyperboloid. For more
discussion about the constant vector field $B^{\alpha}$, see [4].
Substituting (3.5) into (3.2) and (3.3), the interaction between the scalar gauge field and the vector field can be extracted from the following Lagrangian density:
(3.6)
$\mathcal{L}(K,\Phi_{m})=-\frac{1}{2}\left(\nabla^{\top}_{\alpha}+gB^{\top}_{\alpha}\Phi_{m}+\cdots\right)K_{\beta}\left(\nabla^{\top\alpha}+gB^{\top\alpha}\Phi_{m}+\cdots\right)K^{\beta}+\cdots\,.$
A mass term thus appears for the vector field, which is a function of the mmc scalar field and the dS geometry:
(3.7) $\mathcal{L}(K,\Phi_{\mathrm{m}})=\cdots+\frac{1}{2}g^{2}B^{\top}\cdot
B^{\top}\Phi_{\mathrm{m}}^{2}\,K\cdot K+\cdots\,.$
The Lagrangian density of the interacting fields, including the symmetry breaking, results in a mass for the gluon and hence in color confinement. As in the standard model, the vacuum expectation value of the scalar field, $\langle\Omega|\Phi_{m}|\Omega\rangle\equiv\mu\,$, plays the central role in the gauge symmetry breaking. Replacing $\Phi_{m}$ with $\Phi_{m}+\mu\,$ in the Lagrangian density (3.7) generates a mass; for more details see [4]. We obtain a mass for the gluon fields as
(3.8) $m_{g}^{2}\approx
g^{2}\mu^{2}B^{\alpha}B^{\beta}\theta_{\alpha\beta}\,.$
This mass depends on the coupling constant $g$, the vacuum expectation value of the scalar field $\mu$, the constant vector field $B^{\alpha}$, and the dS spacetime geometry $\theta_{\alpha\beta}$. In the gluon propagator, this mass can be interpreted as the signature of a short-range force.
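For completeness, the shift $\Phi_{\mathrm{m}}\rightarrow\Phi_{\mathrm{m}}+\mu$ in (3.7) can be written out explicitly; the $\Phi_{\mathrm{m}}$-independent term is the gluon mass term of (3.8):

```latex
\begin{align*}
\tfrac{1}{2}g^{2}B^{\top}\!\cdot B^{\top}(\Phi_{\mathrm{m}}+\mu)^{2}\,K\cdot K
  = \underbrace{\tfrac{1}{2}g^{2}\mu^{2}B^{\top}\!\cdot B^{\top}\,K\cdot K}_{\text{mass term}}
  + g^{2}\mu\,B^{\top}\!\cdot B^{\top}\,\Phi_{\mathrm{m}}\,K\cdot K
  + \tfrac{1}{2}g^{2}B^{\top}\!\cdot B^{\top}\,\Phi_{\mathrm{m}}^{2}\,K\cdot K\,,
\end{align*}
```

with $B^{\top}\!\cdot B^{\top}=\theta^{\alpha}_{\;\,\gamma}B^{\gamma}\theta_{\alpha\delta}B^{\delta}=\theta_{\gamma\delta}B^{\gamma}B^{\delta}$, since $B^{\top}_{\alpha}=\theta_{\alpha\beta}B^{\beta}$ and $\theta$ is a projector, so that $m_{g}^{2}\approx g^{2}\mu^{2}\theta_{\alpha\beta}B^{\alpha}B^{\beta}$ as in (3.8).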
By comparing with the Kharzeev et al. paper [2], this mass term may be
interpreted as the source of the spectral density in the Kallén-Lehmann type
representation for the gluon propagator. As a result, the strong coupling
freezes at long distances. The mixed gluon-scalar contributions to the
spectral density appear only for the nonabelian gauge theory, which manifests
color confinement in this case; for more details, see section III in [2].
We recall that a free field is called “massive” when it propagates inside the
light cone and corresponds to a massive Poincaré representation in the null
curvature limit. We call a free field “massless” if it propagates on the dS
light cone and corresponds to a massless Poincaré representation at the null
curvature limit. In this regard, the mmc scalar field is not a “true” massless field, since it does not propagate only on the dS light cone, i.e. a constant term appears in the field propagator on the causal part of spacetime [17]. The mass term (3.8) can also be associated with the constant solution of the mmc scalar field and its propagation inside the light cone at the quantum field theory level [4].
In dS spacetime, the quantization of the mmc scalar field causes severe problems in the standard QFT approach [27]. These problems can be overcome using Krein space quantization [15]. In this method, dS invariance is preserved, and the field propagates both on and inside the light cone. Although this field is essential in QCD, linear quantum gravity, and quantum cosmology, it also plays a central role in quantum geometry when considered as the gauge potential [4].
In summary, we need Krein space quantization and light-cone fluctuations (or quantum metric fluctuations) to obtain renormalizable quantum fields in curved spacetime. Furthermore, to construct an axiomatic quantum Yang-Mills theory that includes color confinement and a mass gap, we also need Krein space quantization, quantum metric fluctuations, and the conformal sector of quantum gravity, or the mmc scalar field as the gauge potential. There is therefore an intricate entanglement between QFT and quantum geometry or gravity, which will be discussed in a following article.
## 4\. Conclusions
In this article, we have supplemented our previous work on quantum Yang-Mills theory, i.e. color confinement in the current universe and the analyticity properties of the vacuum expectation values of products of field operators. Two mathematical ingredients, Krein space quantization and the dS ambient space formalism, permit us to construct an axiomatic QFT for Yang-Mills theory. The color confinement and mass gap problems can be extracted from the interaction between the gluon fields and the mmc scalar field as the conformal sector of the gravitational field. Krein space quantization plays a central role in our model. The mmc scalar field is a connection between gauge theory and quantum dS geometry. We now have the building blocks needed to consider quantum dS geometry, which will be discussed in a following article [28].
Acknowledgements: The author thanks Jean Pierre Gazeau and Eric Huguet for
their discussions and would like to thank le Collège de France, l’Université
Paris Cité, and Laboratoire APC for their financial support and hospitality.
## References
* [1] A. Jaffe and E. Witten, (2006), "Quantum Yang-Mills theory", Official problem description, [http://www.claymath.org/sites/default/files/yangmills.pdf].
* [2] D. Kharzeev, E. Levin, K. Tuchin, Phys. Rev. D 70 054005 (2004), QCD in curved space-time: a conformal bag model, [arXiv:0403152v1].
* [3] M.V. Takook, J.P. Gazeau, Nucl. Phys. B 980 115811 (2022), Quantum Yang-Mills theory in de Sitter ambient space formalism, [arXiv:2112.02651v2].
* [4] M.V. Takook, Nucl. Phys. B 984 115966 (2022), Scalar and vector gauges unification in de Sitter ambient space formalism, [arXiv:2204.00314 ].
* [5] M.V. Takook, J. Theor. Appl. Phys. 3 1 (2009), Linear gravity in de Sitter universe, [arXiv:1710.06605].
* [6] M. Enayati, S. Rouhani, M.V. Takook, Int. J. Theor. Phys. 55 5055 (2016), Quantum Linear Gravity in de Sitter Universe on Gupta-Bleuler vacuum state, [arXiv:1208.5562v2].
* [7] A.S. de Castro, Phys. Lett. A 346 71 (2005), Confinement of spinless particles by Coulomb potentials in two-dimensional spacetime [hep-th/0507218v1 ].
* [8] M. Kirchbach, C. B. Compean, (2019), De Sitter Special Relativity as a Possible Reason for Conformal Symmetry and Confinement in QCD, [arXiv:1701.00450v2].
* [9] M. Kirchbach, T. Popov, J.A. Vallejo, J. High Energy Phys. 2021 171 (2021), Color confinement at the boundary of the conformally compactified AdS5, [arXiv:2108.04161].
* [10] R.F. Streater and A.S. Wightman, W. A. Benjamin, Inc. (1964), PCT, Spin and Statistics, and All That.
* [11] J. Bros, U. Moschella, Rev. Math. Phys. 8 327 (1996), Two-point functions and Quantum Field in the de Sitter Universe, [gr-qc/9511019].
* [12] S.N. Gupta, Proc. Phys. Soc. Sect. A 63 681 (1950), Theory of longitudinal photons in quantum electrodynamics.
* [13] F. Strocchi, World Scientific Publishing (1993), General properties of quantum field theory.
* [14] T. Garidi, J.P. Gazeau, S. Rouhani, M.V. Takook, J. Math. Phys. 49 032501 (2008), Massless vector field in de Sitter universe, [arXiv:0608004v1].
* [15] J.P. Gazeau, J. Renaud, M.V. Takook, Class. Quant. Grav. 17 1415 (2000), Gupta-Bleuler quantization for minimally coupled scalar field in de Sitter space, [gr-qc/9904023].
* [16] M.V. Takook, J.P. Gazeau, E. Huget, (2022), Asymptotic states and S-matrix operator in de Sitter ambient space formalism, in preparation.
* [17] M. V. Takook, Mod. Phys. Lett. A 16 1691 (2001), _Covariant two point function for minimally coupled scalar field in de Sitter spacetime_ , [gr-qc/0005020].
* [18] M.V. Takook, Mod. Phys. Lett. A 37 2250059 (2022), "Krein" regularization method, [arXiv:2112.05390].
* [19] H.L. Ford, Phys. Rev. D 51 1692 (1995), Gravitons and Lightcone Fluctuations, arXiv:gr-qc/9410047.
* [20] H.L. Ford, N. F. Svaiter, Phys. Rev. D 54 2646 (1996), Gravitons and Lightcone Fluctuations II: Correlation Functions, arXiv:gr-qc/9604052.
* [21] H. Yu, H.L. Ford, Phys. Rev. D 60 084023 (1999), Light-cone fluctuations in flat spacetimes with nontrivial topology, arXiv:gr-qc/9904082.
* [22] L.E. Parker and D.J. Toms, Cambridge University Press (2009), Quantum Field Theory in Curved Spacetime Quantized fields and gravity.
* [23] J. Bros, H. Epstein, U. Moschella, Ann. Henri Poincaré 11 611 (2010), Particle Decays and Stability on the de Sitter Universe, arXiv:0812.3513.
* [24] I. Antoniadis, E. Mottola, Phys. Rev. D 45 2013 (1992), Four-dimensional quantum gravity in the conformal sector.
* [25] I. Antoniadis, P.O. Mazur, E. Mottola, Phys. Rev. D 55 4770 (1997), Physical States of the Quantum Conformal Factor, [hep-th/9509169].
* [26] J.T. Wheeler, Gen. Relativ. Gravit. 50 80 (2018), Weyl geometry, [arXiv:1801.03178].
* [27] B. Allen, Phys. Rev. D 32 3136 (1985), Vacuum states in de Sitter space.
* [28] M.V. Takook, (2023), Quantum de Sitter geometry, in preparation.
We construct an algorithm for computing the cycle classes of the spin components of a stratum of differentials in the moduli space of stable curves $\overline{\mathcal{M}}_{g,n}$. In addition, we implement it within the package . Our main strategy is to reconstruct these cycles by their restrictions to the boundary of $\overline{\mathcal{M}}_{g,n}$ via clutching maps. These restrictions can be computed recursively by smaller dimensional spin classes and determine the original class via a certain system of linear equations. To study the spin parities on the boundary of a stratum of differentials of even type, we make use of the moduli space of multi-scale differentials introduced in <cit.>.
As an application of our algorithm, one can verify a conjecture on spin double ramification cycles stated in <cit.> in many examples, by using the results computed by our algorithm.
§ INTRODUCTION
Let $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$ be a (projectivized) moduli space of differentials with labelled singularities of orders prescribed by an integer partition $\mu=(m_1,...,m_n)\in \mathbb{Z}^n$ of $2g-2$. For brevity, we will call the space $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$ a stratum of differentials and the partition $\mu$ a signature. The space $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$ parametrizes pairs $\big( (C,p_1,...,p_n),[\omega]\big)$, where $(C,p_1,...,p_n)$ is a genus $g$ Riemann surface with $n$ marked points while $[\omega]$ is the scaling equivalence class of a differential $\omega$ whose divisor is $\sum_{i=1}^n m_ip_i$. A (projectivized) stratum $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$ can be embedded into $\mathcal{M}_{g,n}$ and we denote the image and its closure in $\overline{\mathcal{M}}_{g,n}$ by $\mathcal{H}_g(\mu)$ resp. $\overline{\mathcal{H}}_g(\mu)$. The space $\overline{\mathcal{H}}_g(\mu)$ is called the Deligne-Mumford compactification of a (projectivized) stratum of differentials. Since $\overline{\mathcal{H}}_g(\mu)$ is a closed substack of $\overline{\mathcal{M}}_{g,n}$, it induces a cycle class $[\overline{\mathcal{H}}_g(\mu)]\in \mathrm{H}^*(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$. The definitions of strata of differentials and the Deligne-Mumford compactification can be extended to $k$-differentials. For a given signature $\mu=(m_1,...,m_n)$, where $\sum_i m_i=k(2g-2)$, we denote the embedding of a (projectivized) stratum of $k$-differentials of type $\mu$ into $\overline{\mathcal{M}}_{g,n}$ and its closure in $\overline{\mathcal{M}}_{g,n}$ by $\mathcal{H}_g^k(\mu)$ resp. $\overline{\mathcal{H}}_g^k(\mu)$. The computation of the cycle classes $[\overline{\mathcal{H}}_g^k(\mu)]$ in $\CH^*(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$ or $\mathrm{H}^*(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$ is a problem that has been recently solved (cf. <cit.>, <cit.>, <cit.>, <cit.>).
The complete classification of connected components of a stratum of $1$-differentials can be found in <cit.> and <cit.>. On the other hand, the problem of classifying the connected components for strata of $k$-differentials is still not completely solved (cf. <cit.>). An important invariant of a stratum of $k$-differentials is the spin parity which will be recalled in Section <ref>. In general, if $k$ is odd and $\mu$ is of even type, i.e. the integers $m_1,..,m_n$ are all even, then the connected components of a stratum of $k$-differentials of type $\mu$ can be partitioned into two non-empty sets according to spin parities. Throughout this paper, if there is an object $O$ that admits spin parities, then the spin components of it will be denoted by $O^+$ resp. $O^-$. By taking into account the spin components of a stratum of differentials, one can split the computation of algebro-geometric invariants of a stratum into two components according to the spin parity. For instance, the formulae of Masur-Veech volumes of the spin components of a stratum have been derived in <cit.>; the formulae of the integral of $\psi$-classes on the Deligne-Mumford compactification of spin components of a stratum of $k$-differentials are proved in <cit.>. Despite these pieces of information about the spin components, the problem of computing the cycle classes $[\overline{\mathcal{H}}_g^k(\mu)^+]$ resp. $[\overline{\mathcal{H}}_g^k(\mu)^-]$ remains open. These classes are important because they allow us to do many computations in the intersection theory of the spin components by reducing to intersections on $\overline{\mathcal{M}}_{g,n}$.
For $k=1$, the stratum $\overline{\mathcal{H}}_{g}(\mu)$ has pure complex dimension $g$ if $\mu$ has any negative entry (so that $\overline{\mathcal{H}}_{g}(\mu)$ parameterizes meromorphic differentials) and $g-1$ otherwise (for the space parameterizing holomorphic differentials). In this text, we construct an algorithm to split the fundamental class of a stratum of $1$-differentials of even type $[\overline{\mathcal{H}}_{g}(\mu)]\in H^{j}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})$, where we let $j=2g$ resp. $j=2g-2$ if we are working with strata of meromorphic resp. holomorphic differentials, into the spin components $[ \overline{\mathcal{H}}_{g}(\mu)^{-}]$ and $[ \overline{\mathcal{H}}_{g}(\mu)^{+}]$. For short, we will call the fundamental class of a stratum of differentials $[\overline{\mathcal{H}}_g(\mu)]$ a stratum class. To do the recursive computation, it will be more convenient to instead compute the spin stratum class, which is defined as:
\begin{align*}
[\overline{\mathcal{H}}_{g}(\mu)]^{\spin}:=[ \overline{\mathcal{H}}_{g}(\mu)^{+}]-[ \overline{\mathcal{H}}_{g}(\mu)^{-}].
\end{align*}
Note that since there already exist recursive algorithms to compute the stratum classes $[\overline{\mathcal{H}}_{g}(\mu)]$, the spin stratum class $[\overline{\mathcal{H}}_{g}(\mu)]^{\spin}$ will determine $[\overline{\mathcal{H}}_{g}(\mu)^{-}]$ and $[\overline{\mathcal{H}}_{g}(\mu)^{+}]$.
More generally, any (algebraic/cohomological) class on a stratum of $k$-differentials which admits a spin structure can be split into a sum of two summands corresponding to the even and odd spin parities respectively. Thus one can also define a linear map:
\begin{align*}
\bullet^{\spin}: \mathrm{H}^*(&\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu))\longrightarrow \mathrm{H}^*(\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu))\\ & w=w^++w^-\mapsto w^+-w^-
\end{align*}
In our text, given a class $w$, we denote its spin variant by $w^{\spin}$. For instance, the spin stratum class of $k$-differentials will be denoted by $[\overline{\mathcal{H}}_g^k(\mu)]^{\spin}$; the tautological classes $\xi^{\spin}$ and $\psi^{\spin}$ on strata of differentials will appear in Section <ref>.
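Since $\bullet^{\spin}$ simply flips the sign of the odd part, the individual spin components are recovered from a class and its spin variant by:

```latex
w^{+}=\tfrac{1}{2}\left(w+w^{\spin}\right)\,,\qquad
w^{-}=\tfrac{1}{2}\left(w-w^{\spin}\right)\,.
```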
§.§ The outline of our strategy
Given a dual graph $\Gamma$ of a stable curve of genus $g$ with $n$ marked points, there is a natural clutching morphism
\begin{align*}
\xi_\Gamma:\overline{\mathcal{M}}_\Gamma:=\prod_{v\in V(\Gamma)}\overline{\mathcal{M}}_{g_v,n_v}\longrightarrow \overline{\mathcal{M}}_{g,n}.
\end{align*}
The strategy of our recursive algorithm is to compute $[ \overline{\mathcal{H}}_{g}(\mu)]^{\spin}$ by solving a system of linear equations from clutching pullbacks, which reduce the problem to computing the spin stratum classes $[\overline{\mathcal{H}}_{g'}(\mu')]^{\spin}$ of smaller genera or fewer singularities. By proving that $\mathrm{H}^{2g-2}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})=0$ in Theorem <ref>, we can strengthen a result of <cit.> (Proposition <ref>), which will guarantee the injectivity of the direct sum of clutching pullbacks with respect to one-edge stable graphs $\Gamma$
\begin{align}\label{eq:clut}
\oplus_\Gamma\xi_\Gamma^*:\mathrm{H}^{j}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})\longrightarrow \bigoplus_\Gamma \mathrm{H}^{j}(\overline{\mathcal{M}}_\Gamma;\mathbb{Q}),
\end{align}
for $j$ within some appropriate range. By such a result, one can actually already construct a theoretical algorithm to compute the spin stratum classes. However, in practice, the cohomology of moduli spaces of stable curves $\overline{\mathcal{M}}_{g,n}$, for all $g,n$, is not entirely known. Thus, it is not possible to directly implement the strategy we mentioned above. It will only be feasible if we restrict our computation to the tautological ring $\RH^*(\overline{\mathcal{M}}_{g,n})$, on which we have a better understanding of the generators and relations. The drawback is that if a spin stratum class is not tautological, then our algorithm implemented in will raise errors. Thus, the first assumption we need to make for our implementation of the algorithm is that the spin stratum classes are tautological.
A bottleneck for programming the aforementioned strategy is that the clutching pullback with respect to the self-loop graph does not reduce the computational complexity (see Section <ref>). However, we observed that, in many cases where the number of marked points $n$ is at least $3$, the injectivity of (<ref>) holds when we restrict the clutching pullbacks to the tautological ring $\RH^*(\overline{\mathcal{M}}_{g,n})$, even without including the non-compact type graph. This means that in the recursive computation of a spin stratum class, we only need to perform the clutching pullback with respect to the self-loop graph at most once. This ensures that the runtime of our algorithm will not be unreasonably long.
However, we are not able to prove in this paper that the phenomenon above always holds. Hence, we rephrase it as the assumption of the sufficiency of compact type clutching (Assumption <ref>): the direct sum of clutching pullbacks with respect to one-edge graphs of compact type, restricted to $\RH^*(\overline{\mathcal{M}}_{g,n})$, is injective for $g\geq 1$, $n\geq 3$, and for codimension $j$ which is not too large.
Our implemented algorithm for computing spin stratum classes in this paper is based on two assumptions: first, that the spin stratum classes are tautological; second, the sufficiency of compact type clutching. For particular $g,n$ and $j$, it is known that the tautological classes generate the cohomology groups:
* $g=0$ and for any $n$ and $j$ (cf. <cit.>);
* $g=1$ and $n<11$, for any $j$ (cf. <cit.>);
* $g=2$ and $n<20$, and for any even number $j$ (cf. <cit.>);
* $g=3$ and $n=0,1,...,8$ (cf. <cit.>, <cit.>, <cit.>, <cit.>);
* $g=4$ and $n=0,1,...,6$ (cf. <cit.>);
* and more cf. <cit.>.
Thus, in the cases listed above, the first assumption is automatically true. On the other hand, the second assumption has been checked by the package on the tautological ring and it is true for $(g,n)$ equal to
\begin{align}\label{eq:range1}
\end{align}
To summarise, the assumptions we made are automatically true for the values of $g,n$ listed above. We have used the implemented algorithm to compute the spin stratum classes for many signatures where $(g,n)$ is equal to
\begin{align}\label{eq:range2}
\end{align}
The results produced by the implemented algorithm for $(g,n)$ in the list above are guaranteed to be correct.
Lastly, we explain how we compute each individual clutching pullback. Indeed, we apply the Clutching Pullback Formula (<ref>) of $[\overline{\mathcal{H}}_g(\mu)]^{\spin}$. The formula is derived by using the spin structure on the moduli space of multi-scale differentials $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ (the definitions will be recalled in Section <ref>), which defines a compactification of the stratum of differentials $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$. The compactification $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)\subset\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ has normal crossing boundary divisors and a proper morphism $p:\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)\longrightarrow\overline{\mathcal{M}}_{g,n}$ sending a multi-scale differential to its underlying stable curve. The advantage of working with the compactification $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ is that the set of its connected components is in bijection with the set of connected components of $\mathbb{P}\Omega\mathcal{M}_{g,n}(\mu)$, whereas in the Deligne-Mumford compactification the closures of the spin components can intersect on the boundary. In particular, the closures of different spin components remain disjoint in $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$, so the spin parity of a multi-scale differential of even type can be defined as the spin parity of the differentials lying in a neighbourhood. As a result, $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ has spin components and
\begin{align*}
p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)^+]\big)&= [ \overline{\mathcal{H}}_{g}(\mu)^+]\\
p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)^-]\big)&= [ \overline{\mathcal{H}}_{g}(\mu)^-]\\
p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)]^{\spin}\big)&=[ \overline{\mathcal{H}}_{g}(\mu)]^{\spin}.
\end{align*}
Note that deriving the Clutching Pullback Formula (<ref>) requires an understanding of how the spin parity behaves on the boundary strata of $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$. The spin parity of a multi-scale differential lying on the boundary can be computed in the following way:
* First, we undegenerate the multi-scale differential to a usual differential, which can be realized by a flat surface;
* Then, we compute the spin parity by the explicit Formula (<ref>) for the Arf invariant.
§.§ Main results
Our first observation on the spin parity of a multi-scale differential is the following proposition. We briefly define the objects involved (readers are referred to Section <ref> for the details).
* An enhanced level graph is a graph whose edges are weighted by positive integers (called enhancements) and whose set of vertices carries a full order (which can be realized by placing the vertices on different levels).
* A twisted differential of type $\mu$ on a stable curve $(X;z_1,...,z_n)$ is a collection of non-zero meromorphic differentials on the irreducible components of $X$ such that their orders of zeros and poles at $z_1,...,z_n$ are prescribed by $\mu$. A twisted differential is compatible with an enhanced level graph $\Delta$ if $\Delta$ is the dual graph of $(X;z_1,...,z_n)$, and the orders of zeros and poles at the nodal points are prescribed by the enhancements (altered by $\pm 1$).
* A (global) prong-matching is a collection of cyclic-order-reversing bijections which prescribe the matchings of horizontal geodesics (with respect to the meromorphic differentials) converging to the nodal points from two components.
Let $\boldsymbol{\eta}$ be a twisted differential of type $\mu$ and compatible with an enhanced level graph $\Delta$. Then we have the following classification of the spin parity of the multi-scale differentials whose underlying twisted differential is $\boldsymbol{\eta}$:
* If $\Delta$ has any edge of even (non-zero) enhancement, then half of the (global) prong-matchings on $\boldsymbol{\eta}$ yield a multi-scale differential of even spin and the other half yield one of odd spin.
* If all the edges in $\Delta$ are of odd enhancements, then the spin parity of any multi-scale differential whose underlying twisted differential is $\boldsymbol{\eta}$ is just the sum of the spin parities of the components of $\boldsymbol{\eta}$.
The boundary divisors $D_\Delta$ in $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ are parametrized by enhanced level graphs $\Delta$ such that either the number of levels is $2$ and there is no horizontal edge (an edge connecting two vertices on the same level), or there is exactly one level and one horizontal edge. We denote the set of two-level graphs without horizontal edges of a given stratum by $\LG_1(\mu)$. For short, we call such a level graph $\Delta$ a vertical two-level graph. The degree of the map $p|_{D_\Delta}$ is equal to the number of prong-matching equivalence classes $\frac{\prod_{e\in E(\Delta)}\kappa_e}{\ell_\Delta}$, where $\ell_\Delta$ is the l.c.m. of the enhancements. In addition, the image $p(D_\Delta)$ will be equal to the image of
\begin{align*}
\xi_{\Delta}:p^\top(\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top})\times p^\bot(\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot})\subset \overline{\mathcal{M}}_\Delta\longrightarrow \overline{\mathcal{M}}_{g,n},
\end{align*}
where $\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top}$ and $\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot}$ are the level strata corresponding to the top and bottom level of $\Delta$.
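The degree count $\prod_{e\in E(\Delta)}\kappa_e/\ell_\Delta$ above can be evaluated directly from the list of enhancements. The following is a minimal sketch of our own (the function name is hypothetical and not part of any cited package):

```python
from math import lcm, prod

def degree_of_projection(enhancements):
    """Degree of p restricted to D_Delta for a vertical two-level graph:
    the number of prong-matching equivalence classes,
    prod(kappa_e) / lcm(kappa_e)."""
    kappa_product = prod(enhancements)
    ell = lcm(*enhancements)  # l.c.m. of the enhancements
    return kappa_product // ell
```

For instance, two edges with enhancements $2$ and $4$ give $8/4=2$ equivalence classes.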
We have the following corollary to Proposition <ref>. We define $[D_\Delta]^{\spin}=[D_\Delta^+]- [D_\Delta^-]\in \CH^1(\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu))$.
Let $p:\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)\longrightarrow\overline{\mathcal{M}}_{g,n}$ be the proper projection morphism. For $\Delta\in \LG_1(\mu)$, if all edges of $\Delta$ are of odd enhancements, then
\begin{align*}
p_*\big([D_\Delta]^{\spin}\big)=\frac{\prod_{e\in E(\Delta)}\kappa_e}{|\Aut(\Delta)|\ell_\Delta}\xi_{\Delta*}\bigg(\pi_\top^*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top}]^{\spin}\big)\cdot\pi_\bot^*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot}]^{\spin}\big) \bigg)\in \CH^*(\overline{\mathcal{M}}_{g,n}).
\end{align*}
Otherwise, if there is an edge of even enhancement, then we have
\[p_*[D_\Delta]^{\spin}=0.\]
Although Corollary <ref> will not be applied directly in the computation of the clutching pullback of a spin stratum class, it will be used in combination with residue resolving (Proposition <ref>). Furthermore, the principle behind Corollary <ref> is used to derive the Clutching Pullback Formula for a spin stratum class. More precisely, if $\Delta$ has any edge of even enhancement, then the term corresponding to $\Delta$ in the Clutching Pullback Formula vanishes. Otherwise, the term corresponding to $\Delta$ is simply a product of the spin stratum class on the top level with that on the bottom level.
The spin stratum classes $[ \overline{\mathcal{H}}_{g}(\mu)]^{\spin}\in \mathrm{H}^{*}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})$ are recursively computable.
Theorem <ref> only guarantees a solution in $\mathrm{H}^*(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$. In practice, we work with the package , by which we can only compute the solution in the tautological ring. The package uses the generalised Faber-Zagier relations (cf. <cit.>, <cit.>, <cit.>) to compute a basis of the tautological ring. As a result, the correctness of our implemented algorithm depends on the assumption that the generalised Faber-Zagier relations span the complete set of relations among the decorated strata classes (which serve as the additive generators) of the tautological ring. We have the following conditional result:
Assume that the generalised Faber-Zagier relations span the whole set of relations of the decorated strata classes in the tautological ring. If the spin stratum classes are tautological and the assumption of the sufficiency of compact type clutching (Assumption <ref>) holds, for all $g\leq g_0$ and $n\leq n_0$, then the algorithm we implemented in will yield the correct spin stratum classes $[ \overline{\mathcal{H}}_{g}(\mu)]^{\spin}$ such that $g\leq g_0$ and $n\leq n_0$.
Using the implemented algorithm, we have computed the spin stratum classes of many different signatures for $(g,n)$ within the range (<ref>), where the tautological ring coincides with the cohomology ring and the assumption on the Faber-Zagier relations can be verified by Poincaré duality. Thus, within this range, our results are exact.
§.§ Applications
First, we give an overview of the strategies to compute the cycle classes $[\overline{\mathcal{H}}^k_g(\mu)]$ in $\CH^*(\overline{\mathcal{M}}_{g,n})$ or $\mathrm{H}^*(\overline{\mathcal{M}}_{g,n})$. In <cit.>, Sauvaget constructed an algorithm to compute the stratum class of $1$-differentials $[\overline{\mathcal{H}}_g(\mu)]$ in $\mathrm{H}^*(\overline{\mathcal{M}}_{g,n})$ and showed that the stratum classes are tautological. Later, in <cit.>, Farkas and Pandharipande conjectured that the sum of the stratum class of $1$-differentials and certain boundary terms equals the twisted double ramification cycle (the definition will be recalled in Section <ref>). In <cit.>, Schmitt extended this conjecture to $k$-differentials and gave a recursion to extract the stratum classes $[\overline{\mathcal{H}}^k_g(\mu)]$ from the conjectured equations. This conjecture was finally proved by <cit.> and <cit.>.
In <cit.>, Costantini, Sauvaget and Schmitt constructed the spin variant of the twisted double ramification cycles. In addition, they conjectured an equation similar to the one conjectured by Farkas and Pandharipande in <cit.>, stating that the sum of the spin stratum class and certain boundary terms equals the spin double ramification cycle. (The exact formulation will be recalled as Conjecture <ref> in Section <ref>.) Within the range in List (<ref>), the spin stratum classes we computed agree with Conjecture <ref>. Thus, the algorithm of this paper can help verify the conjecture in the range where computing power suffices to perform the computation.
Note that all the spin stratum classes of holomorphic strata of $1$-differentials have been computed for $g\leq 4$, and they lie in the tautological ring (see Appendix <ref>). Hence, if we assume the conjecture about the spin stratum classes (Conjecture <ref>) to be true, it enables us to recursively compute the spin stratum classes $[\mathcal{H}^k_g(\mu)]^{\spin}$ in terms of tautological classes, and we obtain the following proposition:
Let $k$ be an odd positive integer and $\mu$ be a signature of even type. If Conjecture <ref> is true, then the spin stratum classes $[\mathcal{H}^k_g(\mu)]^{\spin}$ are tautological for $g\leq 4$.
The authors acknowledge support by the DFG under Grant MO1884/2-1
and the Collaborative Research Centre TRR 326 Geometry and
Arithmetic of Uniformized Structures, project number 444845124. In addition, the author thanks Martin Möller for his advice and Johannes Horn, Johannes Schmitt, Adrien Sauvaget and Matteo Costantini for their comments.
§ BACKGROUND
In this section, we will first define the spin parity of a flat surface (or equivalently a differential on a compact Riemann surface). Then we will introduce multi-scale differentials and their associated welded surfaces. Furthermore, we will explain how to extend the definition of spin parity to a welded surface and the plumbed surface associated to a multi-scale differential. In the end, we will explain why the spin parity of a welded surface is equal to that of a plumbed surface.
§.§ Spin parities
In this subsection, we introduce the spin parity of a flat surface. First, we fix some terminology on a topological surface; then we turn to the spin parity of a flat surface. Abusing notation, we denote a simple loop on a topological surface $X$ and its corresponding cycle in $\mathrm{H}_1(X,\mathbb{Z})$ by the same letter.
The algebraic intersection number $i(\bullet,\bullet)$ induces a symplectic form on $\mathrm{H}_1(X,\mathbb{Z})$, where $X$ is a compact orientable surface. Two cycles $\alpha,\beta$ form a symplectic pair if their intersection number $i(\alpha,\beta)$ is $\pm 1$. Let the genus of $X$ be $g$. Then a generating set $\{\gamma_1,...,\gamma_{2g}\}$ of $\mathrm{H}_1(X,\mathbb{Z})$ is a symplectic basis if it can be partitioned into symplectic pairs such that any two cycles from different symplectic pairs have zero intersection number.
Let $(X,\omega)$ be a flat surface induced by a differential $\omega$ on the compact Riemann surface $X$ and let $\gamma:[0,1]\longrightarrow (X,\omega)$ be a smooth simple curve not passing through the conical singularities. On a flat surface, parallel translation along a smooth simple loop (avoiding the conical singularities) preserves the direction of a vector. This implies that we can define the north direction at each point (except for the conical singularities) simultaneously, and define a continuous map
\begin{align*}
T_{\gamma}: [0,1]&\longrightarrow S^1, \\
\theta &\mapsto \dot{\gamma}(\theta)/\norm{\dot{\gamma}(\theta)}.
\end{align*}
There is a unique lift $\widetilde{T}_\gamma:[0,1]\longrightarrow \mathbb{R}$ once we fix the starting point on $\mathbb{R}$ to be the origin. We define the turning number (or index) $\ind(\gamma)$ to be $\widetilde{T}_\gamma(1)$. If $\omega$ is a differential of even type, then $\ind(\gamma) \pmod 2$ is invariant under deformations of $\gamma$. Let $\{a_1,...,a_g,b_1,...,b_g\}$ be a symplectic basis of $\mathrm{H}_1(X,\mathbb{Z})$. Then the parity of spin of a flat surface $(X,\omega)$ is defined as:
\begin{equation}\label{eq:parity}
\phi((X,\omega))= \sum_{i=1}^{g}(\ind(a_i)+1)(\ind(b_i)+1) \pmod 2
\end{equation}
It has been shown in <cit.> that the parity of spin does not depend on the choice of a symplectic basis and is a locally constant function on a stratum of differentials of even type. We have now defined the parity of spin on a compact flat surface. Actually, we can extend the definition to the case that the differential $\omega$ of the pair $(X,\omega)$ is meromorphic.
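To make Formula (<ref>) concrete, here is a minimal sketch of our own (the function name is hypothetical): given the turning numbers of the cycles of a symplectic basis, the Arf-type sum determines the parity. For the flat torus $(\mathbb{C}/\Lambda, dz)$, both basis curves are straight geodesics with turning number $0$, giving odd parity, consistent with $h^0(\mathcal{O}_X)=1$ for the trivial theta characteristic.

```python
def spin_parity(basis_turning_numbers):
    """Parity of spin: sum over symplectic pairs (a_i, b_i) of
    (ind(a_i) + 1) * (ind(b_i) + 1), taken mod 2.
    Input: list of pairs of turning numbers (ind(a_i), ind(b_i))."""
    return sum((ind_a + 1) * (ind_b + 1)
               for ind_a, ind_b in basis_turning_numbers) % 2

# flat torus (C/Lambda, dz): ind(a) = ind(b) = 0, so the parity is odd
assert spin_parity([(0, 0)]) == 1
```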
Let $P$ be the set of poles of the meromorphic differential $\omega$; then one can realize $X\setminus P$ as a noncompact orientable surface carrying a flat metric with conical singularities. We call such a flat realization of a meromorphic differential a noncompact flat surface. A symplectic basis on $X$ can be realised by smooth simple closed loops on $X\setminus P$ which circumvent the conical singularities. One can check that if $\omega$ is of even type, the turning numbers $\pmod 2$ are independent of the choice of realization. Hence, the definition of spin parities in (<ref>) also applies to noncompact flat surfaces associated to meromorphic differentials of even type (cf. <cit.>).
Note that there is another equivalent definition of spin parities from the perspective of theta characteristics. A theta characteristic is a line bundle $L$ on a Riemann surface $X$ such that $L^{\otimes 2}\simeq \omega_X$, where $\omega_X$ is the canonical sheaf on $X$. Given a flat surface $(X,\omega)$, where $\omega$ is of even type, the sheaf $\mathcal{O}_X(\frac{1}{2}\divisor(\omega))$ is a theta characteristic. The dimension of the space of global sections $h^0(X,\mathcal{O}_X(\frac{1}{2}\divisor(\omega))) \pmod 2$ coincides with the parity of spin defined above (cf. <cit.>).
There are two different ways to define the spin parity of a $k$-differential of even type $\eta$ on a compact Riemann surface $X$.
The first definition can be applied to any $k\in \mathbb{N}$: one can construct a canonical cover $\phi:\hat{X}\longrightarrow X$ (cf. <cit.>) such that $\phi^*\eta=\omega^{\otimes k}$ where $\omega$ is a $1$-differential on $\hat{X}$; the spin parity of $\eta$ is then defined to be that of $\omega$.
The second definition can be applied to any $k\in \mathbb{N}\setminus 2\mathbb{N}$: one can define a theta characteristic
\[L=\omega_X^{\otimes (-k+1)/2}\otimes \mathcal{O}_X(\frac{1}{2}\divisor(\eta)); \]
the spin parity of $\eta$ is defined to be $h^0(X,L) \pmod 2$. Notice that for $k>1$, these two definitions need not coincide. For example, if $X$ is a genus $0$ curve, then the spin parity of a $k$-differential $\eta$ with respect to the first definition may be odd, while the spin parity of $\eta$ with respect to the second definition is always even. In this text, when we mention the spin components of a stratum of $k$-differentials $\mathcal{H}^k_g(\mu)\subset \overline{\mathcal{M}}_{g,n}$, the spin parity refers to the second definition.
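The genus-$0$ claim can be verified by a degree computation of our own, following the definitions above: for $X=\mathbb{P}^1$ we have $\deg\omega_X=-2$ and $\deg\divisor(\eta)=k(2g-2)=-2k$, so

```latex
\begin{align*}
\deg L &= \frac{-k+1}{2}\deg \omega_X + \frac{1}{2}\deg\divisor(\eta)\\
       &= \frac{-k+1}{2}\cdot(-2) + \frac{1}{2}\cdot(-2k) = (k-1) - k = -1.
\end{align*}
```

Hence $h^0(\mathbb{P}^1,L)=h^0(\mathcal{O}_{\mathbb{P}^1}(-1))=0$, which is even, as claimed.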
§.§ Multi-scale differentials and prong-matchings
We will in this subsection recall the definition of twisted $k$-differentials, enhanced level graphs and prong-matchings. In order to define $k$-twisted canonical divisors, we introduce twisted differentials and enhanced level graphs for general values of $k$. However, in the definition of multi-scale differentials and the rest of the text, we will mainly focus on the case that $k=1$.
A twisted $k$-differential of type $\mu=(m_1,...,m_n)$ on a stable curve $(X,\boldsymbol{z}=(z_1,...,z_n))$ is a tuple of differentials $\boldsymbol{\eta}=(\eta_v)_{v}$, where $\eta_v$ is a differential on the irreducible component $X_v$ of $X$, such that the following conditions hold.
* If $z_i$ is in $X_v$, then $\ord_{z_i}(\eta_v)=m_i$.
* If $q_1\in X_{v_1}$ and $q_2 \in X_{v_2}$ are identified as a node of $X$, then
\[\ord_{q_1}(\eta_{v_1})+\ord_{q_2}(\eta_{v_2})=-2k. \]
* If $k=1$ and at a node $(q_1,q_2)$ the equation $\ord_{q_1}(\eta_{v_1})=\ord_{q_2}(\eta_{v_2})=-1$ holds, then
\[\res_{q_1}(\eta_{v_1})+\res_{q_2}(\eta_{v_2})=0.\]
* The differentials $\eta_v$ have no poles or zeros away from the nodes and the given marked points.
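The numerical condition at the nodes can be checked mechanically. The following toy encoding is our own (not the data structures of any cited package) and only verifies condition (ii):

```python
def check_node_orders(k, node_orders):
    """Condition (ii) for a twisted k-differential: at every node of X,
    the vanishing orders of the two branches sum to -2k.
    `node_orders` is a list of pairs
    (ord_{q_1}(eta_{v_1}), ord_{q_2}(eta_{v_2})), one per node."""
    return all(o1 + o2 == -2 * k for o1, o2 in node_orders)
```

For $k=1$, a node with orders $(0,-2)$ or $(-1,-1)$ is allowed, while $(1,-2)$ is not; recall that in the horizontal case $(-1,-1)$ condition (iii) additionally requires the residues to be opposite.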
In this text, a twisted differential simply refers to a twisted $1$-differential. Some crucial information about a twisted differential can be encoded in an enhanced level graph. An enhanced level graph $\Delta$ consists of:
(i) a dual graph $\bar{\Delta}$ of $(X,\boldsymbol{z})$;
(ii) a level function $\ell:V(\bar{\Delta})\longrightarrow \{0,-1,-2,...\}$ such that $\ell^{-1}(0)$ is nonempty;
(iii) for each vertical edge $e$, an assigned positive integer $\kappa_e$, called the enhancement.
We say that a twisted $k$-differential $\boldsymbol{\eta}$ is compatible with $\Delta$ if for the order on $V(\bar{\Delta})$ induced by $\ell$:
(iv) Given an edge $e$ connecting vertices $v_1,v_2$ in $\Delta$, $\ell(v_1)= \ell(v_2)$ if and only if $\ord_{q_1}(\eta_{v_1})=\ord_{q_2}(\eta_{v_2})= -k$. If $\ell(v_1)> \ell(v_2)$, then $\ord_{q_1}(\eta_{v_1})=\kappa_e-k$ and $\ord_{q_2}(\eta_{v_2})=-\kappa_e-k$.
Unless specified otherwise, when we say that a twisted $1$-differential $\boldsymbol{\eta}$ is compatible with $\Delta$, the residues of the poles of $\boldsymbol{\eta}$ should also satisfy the following condition, called the global residue condition (GRC). Since the GRC for general $k$ is technical and will not be used explicitly in this text, we only state the GRC for $k=1$ here.
(v) For every level $L$, every connected component $Y$ of the subgraph $\Delta_{> L}$ (the graph consisting of the edges and vertices above level $L$) that has no leg representing a marked pole gives a residue condition. Let $e_1,...,e_m$ be the edges connecting $Y$ to vertices on level $L$. Then the residue condition induced by $Y$ is
\[\sum_{j=1}^m \res_{q_j^-}(\eta_{e_j^-})=0,\]
where $e_j^-$ is the lower vertex of edge $e_j$ and $q_j^-$ is the point in $X_{e_j^-}$ representing the node.
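As a simple illustrative instance of the GRC (an example of our own): suppose $\Delta_{>L}$ has a connected component $Y$ consisting of a single top-level vertex $v$ carrying no marked poles, joined to level $L$ by exactly two edges $e_1,e_2$. Then $Y$ imposes the single condition

```latex
\[\res_{q_1^-}(\eta_{e_1^-})+\res_{q_2^-}(\eta_{e_2^-})=0.\]
```

Heuristically, this mirrors the residue theorem on the component $X_v$: the residues of $\eta_v$ at its two poles (the nodes) sum to zero, and the GRC imposes the corresponding relation on the lower-level differentials so that $\boldsymbol{\eta}$ can arise as a limit of differentials on smooth curves.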
We will abuse the notation for edges so that we write $q$ for both a node in $X$ and also the corresponding edge in the compatible graph.
Let $D=\sum_i m_ip_i$ be a Weil divisor on a stable curve $X$, where the $p_i$ are nonsingular points of $X$ and $\sum_i m_i=k(2g-2)$. We call $D$ a twisted $k$-canonical divisor if there is a twisted $k$-differential whose poles and zeros away from the nodal points are exactly prescribed by $D$ and which is compatible with some enhanced level structure on the dual graph of $X$ in the sense of condition (iv).
The rest of the text (except Section <ref>) will mainly focus on the case $k=1$, thus if not specified, a differential will refer to a $1$-differential.
Let $\boldsymbol{\eta}$ be a twisted differential. Each component $\eta_v$ of $\boldsymbol{\eta}$ endows $X_v$ with a flat structure. Moreover, for the two points $q^+,q^-$ in $\widetilde{X}=\bigsqcup_{v\in V(\Delta)} X_v$ corresponding to a vertical node $q$ (a node corresponding to a vertical edge), the same number of horizontal geodesics converge to each of these points. We call these geodesics prongs. The horizontal geodesics (pointing east) which run towards the vertical node are incoming prongs and the others are outgoing prongs. We denote the sets of incoming and outgoing prongs at $x$ by $P^{in}_x$ and $P^{out}_x$ respectively. There is a natural cyclic order on the set of prongs. A local prong-matching at the node $q$ is a cyclic-order-reversing bijection $\sigma_q:P^{in}_{q^-}\longrightarrow P^{out}_{q^+}$, and a global prong-matching is a collection $\boldsymbol{\sigma}$ of local prong-matchings, one at every vertical node.
Let $X$ be a stable curve; $\boldsymbol{\eta}$ be a twisted $1$-differential compatible with some enhanced level structure on the dual graph $\bar{\Delta}$ of $X$; $\boldsymbol{\sigma}$ be a global prong-matching. There is a natural action of the level rescaling group $\mathbb{C}^{L}$ on the pair $(\boldsymbol{\eta},\boldsymbol{\sigma})$ (here $L$ is the number of levels), as we now describe:
(1) Let $\boldsymbol{d}=(d_j)_{j=0,-1,...,-L+1}\in \mathbb{C}^{L}$. We define
\begin{align*}
\boldsymbol{d}\cdot\boldsymbol{\eta}= \big(\exp(i2\pi d_{\ell(v)})\cdot\eta_v\big)_{v\in V(\Delta)}.
\end{align*}
(2) For each vertical node $q$, we let
\begin{align*}
\boldsymbol{d}\cdot \sigma_q: P^{in}_{q^-}\longrightarrow P^{out}_{q^+}
\end{align*}
be the map $\sigma_q$ precomposed and postcomposed with the rotations by the angles $-2\pi \Real(d_{\ell(q^-)})/\kappa_q$ and $2\pi \Real(d_{\ell(q^+)})/\kappa_q$ respectively, so that $\boldsymbol{d}\cdot \sigma_q$ remains a prong-matching. Let
\begin{align*}
\iota: &\mathbb{C}^{L-1}\hookrightarrow \mathbb{C}^{L}\\
& (d_{-1},...,d_{-L+1})\mapsto (0,d_{-1},...,d_{-L+1})
\end{align*}
be the natural inclusion. We call $\mathbb{C}^{L-1}$ the reduced level rescaling group. The action of $\mathbb{C}^{L-1}$ on $(\boldsymbol{\eta},\boldsymbol{\sigma})$ is induced by that of the level rescaling group.
A multi-scale differential is a tuple $(\boldsymbol{\eta},\Delta,\boldsymbol{\sigma})$ consisting of a twisted $1$-differential $\boldsymbol{\eta}$ of type $\mu$ on a stable curve $(X,\boldsymbol{z})$ such that $\boldsymbol{\eta}$ is compatible with the enhanced level structure $\Delta$ on the underlying dual graph $\bar{\Delta}$ of $(X,\boldsymbol{z})$. Furthermore, $\boldsymbol{\sigma}$ is a global prong-matching of $\boldsymbol{\eta}$. Two multi-scale differentials are equivalent if they differ by the action of the reduced level rescaling group $\mathbb{C}^{L-1}$. The moduli space of multi-scale differentials of type $\mu$ is denoted by $\Xi\overline{\mathcal{M}}_{g,n}(\mu)$.
On the projectivized space $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$, two multi-scale differentials are equivalent if they differ by the action of the level rescaling group $\mathbb{C}^{L}$.
We call the subgroup $\mathbb{Z}^L\subset\mathbb{C}^{L}$ the level rotation group. It acts on the twisted differentials trivially while it rotates the prong-matchings. Two multi-scale differentials whose underlying twisted differentials are the same are equivalent if and only if their prong-matchings differ by the action of the level rotation group. Thus, in this text, we are only interested in the equivalence classes of prong-matchings.
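The equivalence classes can be made concrete for a vertical two-level graph. In the sketch below (our own; the encoding is hypothetical), we identify the local prong-matchings at an edge $e$ with $\mathbb{Z}/\kappa_e$, so the level rotation group acts by shifting all coordinates simultaneously; counting orbits recovers the degree count $\prod_{e}\kappa_e/\ell_\Delta$ stated in the introduction.

```python
from itertools import product
from math import lcm

def prong_matching_classes(enhancements):
    """Count orbits of global prong-matchings of a vertical two-level
    graph under the level rotation group: tuples in prod Z/kappa_e,
    acted on by the simultaneous shift (s_e) -> (s_e + 1)."""
    seen, classes = set(), 0
    for pm in product(*(range(k) for k in enhancements)):
        if pm in seen:
            continue
        classes += 1
        # mark the whole orbit of pm under the diagonal shift;
        # the orbit has size lcm(kappa_e)
        for d in range(lcm(*enhancements)):
            seen.add(tuple((s + d) % k for s, k in zip(pm, enhancements)))
    return classes
```

For example, enhancements $(2,4)$ give $8$ global prong-matchings in orbits of size $\operatorname{lcm}(2,4)=4$, hence $2$ equivalence classes.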
§.§ Generalised stratum
We now introduce generalised strata, which arise naturally when one considers the individual levels of a (connected) enhanced level graph. A generalised stratum is a product of strata of differentials:
\begin{align*}
\Omega\mathcal{M}_{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})=\prod_{\nu=1}^k\Omega\mathcal{M}_{g_\nu,n_\nu}(\mu_\nu).
\end{align*}
It parametrizes differentials on disconnected compact Riemann surfaces. Similar to a moduli space of multi-scale differentials which compactifies a stratum of differentials, there is a compactification of a (projectivized) generalised stratum $\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})$, which we denote by $\mathbb{P}\Xi\overline{\mathcal{M}} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu}):=\mathbb{P}\big(\prod_{\nu=1}^k\Xi\overline{\mathcal{M}}_{g_\nu,n_\nu}(\mu_\nu)\big)$. According to <cit.>, $\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})\subset \mathbb{P}\Xi\overline{\mathcal{M}} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})$ is a compactification with normal crossing boundary divisors. The boundary strata of $\mathbb{P}\Xi\overline{\mathcal{M}} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})$ are parametrized by (disconnected) enhanced level graphs.
Moreover, if we consider a level below zero, there are also extra residue conditions induced by the global residue condition. Hence, we also introduce the notion of a generalised stratum with residue conditions $\Omega\mathcal{M} ^\mathfrak{R}_{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})$. Here, $\mathfrak{R}$ refers to the residue space cut out by a set of residue conditions. Let $H_p$ be the set of pairs $(\nu, i)$ where $i$ is a marking of the component $\nu$ such that the marking $i$ is a pole. A collection $\mathfrak{R}$ of residue conditions consists of a partition $\boldsymbol{\lambda}_\mathfrak{R}$ of $H_p$ with parts $\lambda^{(j)}$. A part $\lambda^{(j)}$ represents a residue equation
\[\sum_{(\nu,i)\in \lambda^{(j)}}r_{\nu,i}=0, \]
and the residue space cut out by these conditions is
\[\mathfrak{R}=\{(r_{\nu,i})_{(\nu,i)\in H_p}: \sum_{(\nu,i)\in \lambda^{(j)}}r_{\nu,i}=0 \text{ for all } \lambda^{(j)}\in \boldsymbol{\lambda}_\mathfrak{R}\}. \]
In analogy to the case of a usual stratum of differentials (i.e. with only one signature), we also call the projection pushforward of the fundamental class of a generalised stratum $p_*\big([\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})]\big)\in \mathrm{H}^*(\prod_i \overline{\mathcal{M}}_{g_i,n_i},\mathbb{Q})$ a stratum class. If all the signatures of a generalised stratum are of even type, we call such a generalised stratum a generalised stratum of even type. Given a generalised stratum of even type, we can define the class
\[p_*\big([\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})]^{\spin}\big)=p_*\big([\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})^+]\big)-p_*\big([\mathbb{P}\Omega\mathcal{M} _{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu})^-]\big)\]
and call it a spin stratum class.
From now on, for brevity, we will call a (projectivized) moduli space of multi-scale differentials, which compactifies a (projectivized) stratum of differentials, a compactified stratum. Similarly, a compactified generalised stratum refers to a (projectivized) moduli space of multi-scale differentials of type $\boldsymbol{\mu}$ on disconnected stable curves which compactifies a (projectivized) generalised stratum.
§.§ The spin parities of welded surfaces and plumbed surfaces
In this subsection, we will briefly introduce the welded surface and plumbed surface associated to a multi-scale differential. We refer the readers to <cit.> for more details. Then we will define the spin parities on them and show why these two spin parities coincide with each other.
Given a multi-scale differential $(\boldsymbol{\eta},\Delta,\boldsymbol{\sigma})$, a welded surface is a topological surface $\overline{X}_{\boldsymbol{\sigma}}$ with a map $f:\overline{X}_{\boldsymbol{\sigma}} \longrightarrow X$ such that
* the preimage of a node is a simple loop which we call a seam;
* away from the seams, $f$ is an isomorphism;
* the gluing of two boundary circles of the preimages of two irreducible components of $X$ (to form a seam) is determined by the prong-matchings.
For instance, the compact closed surface in Figure <ref> is a welded surface associated to the nodal curve in Figure <ref>. Notice that the twisted differential $\boldsymbol{\eta}$ endows $\overline{X}_{\boldsymbol{\sigma}}$ with a piecewise flat structure. From now on, if not specified, the welded surface associated to a multi-scale differential will refer to the topological welded surface plus the piecewise flat structure on it.
Now we briefly introduce plumbed surfaces. According to <cit.>, given a multi-scale differential $(\boldsymbol{\eta},\Delta,\boldsymbol{\sigma})$ such that $X$ is not smooth, one can construct a family of flat surfaces controlled by a collection of parameters $t_{-1},...,t_{-L+1}\in\mathbb{C}$ indexed by the levels below zero. A generic fiber is a connected flat surface, while the fiber over $t_{-1}=...=t_{-L+1}=0$ is the disconnected flat surface associated to the twisted differential $\boldsymbol{\eta}$. This deformation family of flat surfaces is called a plumbing family, and a connected fiber of it is called a plumbed surface. A plumbed surface is just a flat surface, and its spin parity is defined as that of a flat surface. The construction of a plumbed surface is quite technical; nevertheless, if we only want to make arguments about the spin parity, it suffices to consider the welded surface.
A welded surface $\overline{X}_{\boldsymbol{\sigma}}$ is an orientable closed surface. By considering the piecewise flat structure on the welded surface, one can still calculate the turning number of a smooth simple loop, provided the loop is transverse to the seams.
Let $\overline{X}_{\boldsymbol{\sigma}}$ be a welded surface and $\gamma:[0,1]\longrightarrow \overline{X}_{\boldsymbol{\sigma}}$ be a smooth simple loop such that it is transverse to the seams. Let $0=y_0<y_1<...<y_N=1$ be a partition of the unit interval such that $\gamma\big( (y_j,y_{j+1}) \big)$ does not intersect any seam for $j=0,...,N-1$. The turning number of $\gamma$ is defined as
\begin{align*}
\ind(\gamma)=\sum_{j=0}^{N-1}\ind(\gamma|_{[y_j,y_{j+1}]})
\end{align*}
The spin parity of a welded surface is then computed by Formula <ref>, where the symplectic basis is chosen for $H_1(\overline{X}_{\boldsymbol{\sigma}},\mathbb{Z})$. On a disconnected welded surface, the spin parity is defined to be the sum of the spin parities of the connected components.
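For concreteness, the computation prescribed by Formula <ref> can be sketched in a few lines. We assume here (consistently with the later observation that a pair containing a vanishing cycle of odd turning number contributes $0$) that the formula is the usual one, $\phi=\sum_i\big(\ind(\alpha_i)+1\big)\big(\ind(\beta_i)+1\big)\bmod 2$, summed over the symplectic pairs $(\alpha_i,\beta_i)$; the function name below is ours.

```python
# Hedged sketch: spin parity from the turning numbers of a symplectic basis,
# assuming Formula <ref> is the standard turning-number formula
#   phi = sum_i (ind(alpha_i) + 1) * (ind(beta_i) + 1)   (mod 2).

def spin_parity(pairs):
    """pairs: list of (ind_alpha, ind_beta), one tuple per symplectic pair."""
    return sum((a + 1) * (b + 1) for a, b in pairs) % 2

# A pair whose alpha-curve has odd turning number contributes an even factor,
# hence 0 mod 2, regardless of its dual -- matching the remark in the text.
print(spin_parity([(0, 0), (0, 0)]))  # genus-2 example with all indices 0 -> 0
print(spin_parity([(1, 5), (0, 0)]))  # odd-index pair contributes nothing -> 1
```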
Given a multi-scale differential of even type, the spin parity of the welded surface associated to it equals the spin parity of a plumbed surface associated to it.
Note that the flat picture of a plumbed surface differs from a welded surface by deleting some strips and disc neighbourhoods (cf. <cit.> for the flat picture of a plumbed surface). For every simple closed curve on the plumbed surface, one can complete it to a simple closed curve on the welded surface which is transverse to the seams. The turning number of the completed curve will be the same as that of the original curve. As a result, a symplectic basis on the plumbed surface induces a symplectic basis on the welded surface such that the turning numbers remain unchanged.
By the above proposition, we can determine the parity of a multi-scale differential of even type by computing (<ref>) on the welded surface with respect to simple closed curves transverse to seams.
§ PRONG-MATCHING AND SPIN PARITY OF A WELDED SURFACE
In this section, we construct a basis of homology on the welded surface associated to a multi-scale differential. This basis is convenient for computing the spin parity, and we call such a basis a $\Delta$-adapted symplectic basis. The idea comes from <cit.>, where Benirschke constructed a $\Delta$-adapted basis of the relative homology $\mathrm{H}_1(\overline{X}_{\boldsymbol{\sigma}}\setminus P,Z)$, namely a basis comprising generators coming from each irreducible component of $X$ together with generators capturing the homology of the enhanced level graph $\Delta$. Here, $\overline{X}_{\boldsymbol{\sigma}}$ is a welded surface compatible with the enhanced level graph $\Delta$, and $P$ and $Z$ are the sets of poles and zeros corresponding to the legs of $\Delta$, respectively. In our setting, we need a symplectic basis to compute the spin parity, which is why we cannot directly adopt the $\Delta$-adapted basis constructed in <cit.>: we must ensure that the chosen generating cycles of the graph can be completed to a symplectic basis on the welded surface. On the other hand, the idea behind our definition is very similar to that of <cit.>: in both cases one constructs a basis consisting of lifts of homology bases of the various components, together with cycles that cross the seams.
§.§ $\Delta$-adapted symplectic basis on a welded surface
Let $\overline{X}_{\boldsymbol{\sigma}}$ be a welded surface compatible with a level graph $\Delta$. A cycle $\gamma$ on $\overline{X}_{\boldsymbol{\sigma}}$ is called a graph cycle if it can be realized as a simple loop $\gamma$ such that the geometric intersection numbers satisfy $\langle\gamma,\delta\rangle \leq 1$ for each seam $\delta$, with not all intersection numbers zero. A cycle $\delta$ on $\overline{X}_{\boldsymbol{\sigma}}$ is called a vanishing cycle if it is the homology class of some seam. A cycle that can be realized by a simple loop not intersecting any seam is called a non-crossing cycle.
[Figure: A nodal curve and its dual graph]
[Figure: The associated welded surface and a $\Delta$-adapted symplectic basis]
Consider the nodal curve in Figure <ref>; on the associated welded surface (Figure <ref>, left) we have a blue loop $\gamma_b$ and a red loop $\gamma_r$. There is a seam whose geometric intersection number with $\gamma_b$ is $2$, whereas $\gamma_r$ intersects each seam exactly once. Thus $\gamma_r$ is a graph cycle while $\gamma_b$ is not.
Note that a graph cycle $\gamma$ induces a loop on $\Delta$ which consists of edges whose corresponding seam intersects with $\gamma$.
Let $\overline{X}_{\boldsymbol{\sigma}}$ be a welded surface compatible with a level graph $\Delta$. A $\Delta$-adapted symplectic basis on $\overline{X}_{\boldsymbol{\sigma}}$ is a symplectic basis that contains a set of graph cycles whose associated loops on $\Delta$ form a basis of $H_1(\Delta,\mathbb{Z})$, and whose remaining cycles are either non-crossing cycles or vanishing cycles dual to the graph cycles.
Let us consider the welded surface in Example <ref>. The blue cycles in Figure <ref>, right, are non-crossing cycles, the brown one is a graph cycle, and the red one is a vanishing cycle. Together they constitute a $\Delta$-adapted symplectic basis on the welded surface.
§.§ Algorithm to construct a $\Delta$-adapted symplectic basis
In this subsection we prove the existence of a $\Delta$-adapted symplectic basis. Notice that we argue only with topological properties of a nodal curve and its dual graph, so the existence of a $\Delta$-adapted symplectic basis holds for every topological welded surface. Constructing a $\Delta$-adapted symplectic basis on the welded surface $\overline{X}_{\boldsymbol{\sigma}}$ amounts to finding a suitable basis of $\mathrm{H}_1(\Delta)$. However, not every spanning tree gives a set of fundamental cycles that can be completed to a $\Delta$-adapted symplectic basis: the simple closed curves on the welded surface realizing the fundamental cycles of the graph cannot always be chosen to be disjoint. The following example illustrates the problem.
Consider the following graph, where the black edges constitute a spanning tree $T_0$ and every coloured edge gives rise to a fundamental cycle:
[Figure: a graph with vertices $a,b,c,d,e,v^*$, black spanning-tree edges, and coloured non-tree edges labelled $q_1,\dots,q_5$]
Five edges are connected to $d$, and any two of them share a fundamental cycle induced by $T_0$. Hence, if we want to realize the graph cycles as closed curves, any two of the nodal marked points on $C_d$ must be connected by a segment. Some of these segments then necessarily intersect, because the five nodal marked points together with the segments connecting them form a $K_5$-graph, which is not planar.
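The non-planarity of $K_5$ invoked here can be checked by the edge-count consequence of Euler's formula; a minimal sketch, under the assumption of a simple connected graph with at least three vertices (the function name is ours):

```python
# Minimal check: a simple connected planar graph with V >= 3 vertices satisfies
# E <= 3V - 6, a consequence of Euler's formula V - E + F = 2 together with the
# fact that every face is bounded by at least 3 edges.
def violates_planar_edge_bound(num_vertices, num_edges):
    return num_edges > 3 * num_vertices - 6

# K_5 has 5 vertices and C(5,2) = 10 edges, but 3*5 - 6 = 9 < 10: not planar.
print(violates_planar_edge_bound(5, 10))  # -> True
# K_4 (6 edges, bound 6) passes the test, consistent with K_4 being planar.
print(violates_planar_edge_bound(4, 6))   # -> False
```

Note the bound is only a necessary condition for planarity; it happens to be decisive for $K_5$.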
We now show how this problem can be circumvented. Given a spanning tree $T_0$ of a graph $\Delta$, we apply the following iteration to obtain a tree $T$. This tree will induce a collection of graph cycles such that on each $C_v$ the nodal marked points and the segments connecting them form a planar graph. The main idea is to reduce the number of spanning-tree edges adjacent to each vertex.
* Take an initial vertex $v^*$ that is connected to exactly one edge of $T_0$. The vertex $v^*$ is fixed throughout the iteration. In $T_k$, for every vertex $v$ except $v^*$, there is exactly one edge adjacent to $v$ going towards $v^*$, which we denote by $e^{(k)}_v$; all the others go away from $v^*$.
* Let $\{q^{(k)}_1,...,q^{(k)}_{g'}\}$ be the edges of $\Delta$ not included in $T_k$ and $c_{q^{(k)}_i}$ the fundamental cycle formed by adding $q^{(k)}_i$ to the tree. Let $w$ be a vertex of distance $k+1$ to $v^*$ in $T_{k}$. For any two edges adjacent to $w$ which are not $e^{(k)}_w$ and share a fundamental cycle $c_{q^{(k)}_i}$, we replace one of these two edges by $q^{(k)}_i$; the edge $q^{(k)}_i$ must be connected only to vertices of distance at least $k+1$ to $v^*$. We carry out this operation until no vertex of distance $k+1$ to $v^*$ in $T_{k}$ has two going-away edges sharing a fundamental cycle. The resulting tree is set to be $T_{k+1}$.
* Notice that the set of vertices of distance $k'$ to $v^*$ in $T_k$, where $k'\leq k$, will also be the set of vertices of distance $k'$ to $v^*$ in $T_{k+1}$, so the operation in (2) does not change the distances fixed in earlier stages. We terminate the iteration when all the vertices have been exhausted.
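The fundamental-cycle bookkeeping underlying the iteration can be sketched generically as follows; this is a plain illustration of fundamental cycles of a spanning tree, not the welded-surface-specific replacement procedure, and the helper names are ours.

```python
from collections import deque

def bfs_tree(vertices, edges, root):
    """Breadth-first spanning tree, returned as a parent map (root maps to None)."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, queue = {root: None}, deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                queue.append(w)
    return parent

def tree_path(parent, u, v):
    """Path from u to v inside the tree; closing it up with the non-tree edge
    (u, v) gives the fundamental cycle c_q of q = (u, v)."""
    ancestors = []
    x = u
    while x is not None:          # walk u up to the root
        ancestors.append(x)
        x = parent[x]
    seen = set(ancestors)
    tail = []
    y = v
    while y not in seen:          # walk v up until hitting u's ancestor chain
        tail.append(y)
        y = parent[y]
    head = ancestors[:ancestors.index(y) + 1]   # u ... lowest common ancestor
    return head + tail[::-1]

# Triangle graph: tree edges (0,1), (0,2); the non-tree edge (1,2) has the
# fundamental cycle 1-0-2, closed up by the edge (1,2) itself.
parent = bfs_tree([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0)
print(tree_path(parent, 1, 2))  # -> [1, 0, 2]
```

Two non-tree edges "share a fundamental cycle" in the sense of the iteration when their tree paths overlap; the replacement step swaps a tree edge for a non-tree edge to break such overlaps near each vertex.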
Let $\Sigma_v$ be the graph whose vertices are the nodal marked points on $C_v$, while its edges are segments connecting them. Now we show that the final spanning tree $T$ from the iteration above will give us on each $C_v$ a graph $\Sigma_v$ that is planar. First, notice that there are 3 types of vertices in $\Sigma_v$ which represent:
(i) a node represented by the edge adjacent to the vertex $v$ in the tree $T$ going towards the initial vertex $v^*$,
(ii) a node represented by an edge adjacent to $v$ in the tree $T$ going away from $v^*$, and
(iii) a node represented by an edge not in the tree $T$.
If $\Sigma_v$ is induced by the spanning tree $T$ resulting from the iteration, then vertices of the same type are not connected to each other. There is only one vertex of type (i) in $\Sigma_v$, and every vertex of type (iii) in $\Sigma_v$ has only one edge, to some vertex of type (ii). Moreover, the unique vertex of type (i) is only connected to vertices of type (ii). Thus, any cycle in $\Sigma_v$ can only consist of multiple edges between the unique vertex of type (i) and a vertex of type (ii). Such a $\Sigma_v$ is always planar: if we regard the multiple edges between the unique vertex of type (i) and a vertex of type (ii) as one edge, the resulting graph $\Sigma_v'$ is a tree, and since adding an extra edge between two vertices also adds an extra face, the Euler characteristic does not change. Hence the graph $\Sigma_v$ induced by the resulting tree $T$ is planar, and we can now assert the existence of a $\Delta$-adapted symplectic basis on a welded surface. Finally, we can present our proof of Proposition <ref>.
Let $\overline{X}_{\boldsymbol{\sigma}}$ be a welded surface compatible with an enhanced level graph $\Delta$. Then there exists a $\Delta$-adapted symplectic basis on $\overline{X}_{\boldsymbol{\sigma}}$. Moreover, if $\Delta$ has an edge of even enhancements, then the basis can be so chosen that it contains a vanishing cycle corresponding to an edge of even enhancement.
Notice that every vertex of $\Delta$ can only be adjacent to an even number of edges of even enhancement; hence, by contracting all the edges of odd enhancement, we obtain a graph whose vertices are all of even degree. It is well known that every edge of such a graph is non-separating. Thus, every edge of even enhancement in a level graph is non-separating. Let $\Delta'$ be the graph obtained by removing the chosen edge. We can apply our algorithm to generate a spanning tree $T$ for a $\Delta'$-adapted symplectic basis, and it is easy to see that we can construct a $\Delta$-adapted symplectic basis using the spanning tree $T$ of $\Delta'$.
§.§ How a prong-matching rotation changes the spin parity
In this subsection, we will describe how the spin parity changes when we change the prong-matching at a node.
The finite group $P_\Gamma=\prod_{e\in E(\Delta)}\mathbb{Z}/\kappa_e\mathbb{Z}$ is called the prong rotation group. Now let $r_q=(0,...,0,l,0,...,0)$ (only the component for $e=q$ is $l$), where $\kappa_q>l>0$, be a representative of an element of the prong rotation group. It acts on a global prong-matching $\boldsymbol{\sigma}$ by altering the local prong-matching $\sigma_q$: it precomposes the cyclic-order-reversing bijection $\sigma_q:P^{in}_{q^-}\longrightarrow P^{out}_{q^+}$ with a rotation by $\frac{2\pi l}{\kappa_q}$ (in the ascending direction with respect to the cyclic order on $P^{in}_{q^-}$).
We denote the new welded surface obtained by applying $r_q$ to the prong-matchings by $\overline{X}_{r_q\boldsymbol{\sigma}}$. Let $\gamma$ on $\overline{X}_{\boldsymbol{\sigma}}$ be a loop representing a graph cycle passing through the seam corresponding to $q$. After we apply the prong rotation $r_q$, $\gamma$ becomes a broken loop on $\overline{X}_{r_q\boldsymbol{\sigma}}$. By connecting the two ends of the broken loop by an arc of the seam cycle of $q$, we obtain a loop $r_q\gamma$ on $\overline{X}_{r_q\boldsymbol{\sigma}}$. By smoothing the loop, one can easily see that $\ind(\gamma)-\ind(r_q\gamma)=l\pmod 2$.
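A minimal sketch of the prong rotation group acting on a global prong-matching, together with the resulting mod-2 shift in the turning number of a crossing loop. The data layout (one rotation offset per edge) and the function names are our assumptions; the parity rule encoded below is the text's observation $\ind(\gamma)-\ind(r_q\gamma)=l\pmod 2$, applied once per crossed seam.

```python
# Hedged sketch: P_Gamma = prod_e Z/kappa_e acting on a global prong-matching,
# recorded as a dict of rotation offsets, one per edge of the level graph.

def rotate(prong_matching, kappa, q, l):
    """Apply r_q = (0,...,l,...,0): shift the offset at edge q by l mod kappa_q."""
    new = dict(prong_matching)
    new[q] = (new[q] + l) % kappa[q]
    return new

def parity_shift(crossed_edges, old_pm, new_pm, kappa):
    """Mod-2 change in turning number of a graph cycle crossing the given seams,
    accumulating one contribution of l mod 2 per crossed edge."""
    return sum((new_pm[e] - old_pm[e]) % kappa[e] for e in crossed_edges) % 2

kappa = {"e1": 3, "e2": 4}
pm0 = {"e1": 0, "e2": 0}
pm1 = rotate(pm0, kappa, "e2", 1)          # a single generating rotation at e2
print(parity_shift(["e1", "e2"], pm0, pm1, kappa))  # -> 1
```

Applying the generating rotation at `e2` twice returns the parity to $0$, mirroring the fact that only $l\bmod 2$ matters.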
Another easy observation is that if $q$ is of odd enhancement, the vanishing cycle of $q$ has odd turning number. Hence, by expression (<ref>), a symplectic pair which includes the vanishing cycle associated to $q$ contributes $0$ to the spin parity, no matter which prong-matching is chosen.
Now we have the preparatory material to present the proof of Proposition <ref>.
If there is an edge of even enhancement, then by Proposition <ref> we can choose a symplectic basis containing a vanishing cycle $\alpha$ corresponding to an edge of even enhancement. Let $\beta$ be the dual of $\alpha$ in the symplectic basis. Under a generating local rotation at that edge, the turning number of $\beta$ increases by $1$, while the turning numbers of the other cycles in the symplectic basis remain unchanged. Hence, half of the prong-matchings yield an even spin and the other half an odd spin.
Let $\Delta'$ be the graph obtained by splitting all the edges of odd enhancement in $\Delta$ into legs. If $h^1(\Delta')=0$, then there is no graph cycle in the (disconnected) welded surface corresponding to $\Delta'$. Hence, rotating any local prong-matching does not change the turning number $\pmod 2$ of any member of a $\Delta$-adapted symplectic basis, so the spin parity remains unchanged. As a result, the spin parity of the welded surface associated to $\Delta'$ is the same as that of the welded surface associated to $\Delta$.
§ VANISHING OF THE RATIONAL COHOMOLOGY OF $\mathcal{M}_{g,2}$ AT V.C.D.
In this section, we prove Theorem <ref>, which will be one of the inputs to the proof of Proposition <ref> on the injectivity of clutching pullbacks.
Let $\Mod_g$ (resp. $\Mod_{g,n}$) be the mapping class group of an oriented closed genus $g$ surface (resp. with $n$-punctures). It is well-known that
\[H^i(\mathcal{M}_{g,n};\mathbb{Q})\simeq H^i(\Mod_{g,n};\mathbb{Q}) \]
It has been shown by Harer (<cit.>) that $\Mod_g$ resp. $\Mod_{g,1}$ are virtual duality groups of virtual cohomological dimension $4g-5$ resp. $4g-3$, i.e. there exists a dualizing module, the Steinberg module $\St_g$ (defined later), such that for any torsion-free finite index subgroup $G<\Mod_g$ (resp. $G'<\Mod_{g,1}$) and any $\Mod_g$- (resp. $\Mod_{g,1}$-) module $A$, we have
\begin{align*}
H^i(G,A)&=H_{4g-5-i}(G,\St_g\otimes_\mathbb{Z} A)\\
H^i(G',A)&=H_{4g-3-i}(G',\St_g\otimes_\mathbb{Z} A),
\end{align*}
where $G$ (resp. $G'$) acts on $\St_g\otimes_\mathbb{Z} A$ by diagonal action. If we take rational coefficients, then the virtual duality implies
\begin{align*}
H^i(\Mod_{g},A\otimes_\mathbb{Z}\mathbb{Q})&=H_{4g-5-i}(\Mod_g,\St_g\otimes_\mathbb{Z} A\otimes_\mathbb{Z}\mathbb{Q})\\
H^i(\Mod_{g,1},A\otimes_\mathbb{Z}\mathbb{Q})&=H_{4g-3-i}(\Mod_{g,1},\St_g\otimes_\mathbb{Z} A\otimes_\mathbb{Z}\mathbb{Q}),
\end{align*}
In <cit.>, it has been proved that the coinvariant $(\St_g)_{\Mod_g}=0$, and this leads to
$H^{4g-5}(\mathcal{M}_g;\mathbb{Q})=H^{4g-3}(\mathcal{M}_{g,1};\mathbb{Q})=0$. Our aim in this section is to extend the result to $\mathcal{M}_{g,2}$:
For any $g\geq 1$, we have $H^{4g-2}(\mathcal{M}_{g,2};\mathbb{Q})=0$.
§.§ The Steinberg module $\St_g$
In this subsection, we describe the Steinberg module $\St_g$. Let $G$ be a torsion-free finite index subgroup of $\Mod_{g}$ (resp. $\Mod_{g,1}$). Then it acts freely and properly discontinuously on the Teichmüller space $\mathcal{T}_g$ (resp. $\mathcal{T}_{g,1}$), which implies that the quotient space is a manifold. Ivanov <cit.> and Harer <cit.> constructed a contractible bordification $\overline{\mathcal{T}}$ of the Teichmüller space such that the quotient space is compact. According to Theorem 6.2 of <cit.>, the dualizing module for $G$ is then the reduced homology $\widetilde{H}_{2g-2}(\partial\overline{\mathcal{T}};\mathbb{Z})$. Ivanov <cit.> (resp. Harer <cit.>) proved that
$\partial\overline{\mathcal{T}}_g$ (resp. $\partial\overline{\mathcal{T}}_{g,1}$) is homotopy equivalent to the curve complex $\mathcal{C}_g$ (resp. $\mathcal{C}_{g,1}$) constructed by Harvey in <cit.>.
In <cit.>, Harer showed that $\mathcal{C}_{g,1}$ is $\Mod_{g,1}$-equivariantly homotopy equivalent to $\mathcal{C}_g$. This means that the virtual dualizing modules of $\Mod_g$ and $\Mod_{g,1}$ are the same and the $\Mod_{g,1}$-action on it is just induced by the surjective homomorphism $\Mod_{g,1}\longrightarrow\Mod_g$. In addition, in Harer's construction of the cellular decomposition of the Teichmüller space $\mathcal{T}_{g,1}$, he introduced the arc complex $\mathcal{A}(S_{g,n})$.
Let $S_{g,n}$ be a genus $g$ closed surface with $n$ marked points. A $k$-arc system on it is a collection of isotopy classes (relative to the marked points) of $k$ arcs and closed loops $([\alpha_0],[\alpha_1],...,[\alpha_{k-1}])$ such that $\alpha_0,...,\alpha_{k-1}$ intersect only at the marked points. The arc complex $\mathcal{A}(S_{g,n})$ is the simplicial complex whose $k$-simplices are the $(k+1)$-arc systems, with face relations induced by inclusion.
An arc system $([\alpha_0],[\alpha_1],...,[\alpha_k])$ is a filling system if $S_{g,n}\setminus \cup_i\alpha_i$ is a disjoint union of disks. An arc system is called oriented if the arcs are ordered. We denote the closed subcomplex of $\mathcal{A}(S_{g,n})$ consisting of arc systems that do not fill $S_{g,n}$ by $\mathcal{A}_\infty(S_{g,n})$. Harer <cit.> has shown that $\mathcal{A}_\infty(S_{g,1})$ is homotopy equivalent to $\mathcal{C}_g$. Since $\mathcal{A}(S_{g,1})$ is contractible, we have
\[\St_g=\widetilde{H}_{2g-2}(\mathcal{A}_\infty(S_{g,1});\mathbb{Z})\simeq H_{2g-1}(\mathcal{A}(S_{g,1})/\mathcal{A}_\infty(S_{g,1});\mathbb{Z}) \]
Let $\mathcal{F}_j$ be the free abelian group generated by oriented filling systems which consist of $2g+j$ arcs. Note that a filling system has at least $2g$ arcs and $\mathcal{F}_\bullet$ is just the cellular chain complex $C_\bullet(\mathcal{A}(S_{g,1})/\mathcal{A}_\infty(S_{g,1});\mathbb{Z})$ shifted by $2g-1$.
We have a finite resolution of the Steinberg module $\St_g$:
\begin{align*}
0\longrightarrow \mathcal{F}_{4g-3}\longrightarrow...\longrightarrow \mathcal{F}_1\longrightarrow\mathcal{F}_0\longrightarrow \St_g\longrightarrow 0,
\end{align*}
where the maps are cellular chain maps. Moreover, if we tensor the resolution by $\mathbb{Q}$, then we get a projective resolution of $\St_g\otimes_\mathbb{Z}\mathbb{Q}$.
Let $A$ be a $\Mod_g$- (or $\Mod_{g,1}$-) module. Then
\begin{align*}
H^i(\Mod_{g,1};\mathbb{Q}\otimes_\mathbb{Z}A)&\simeq H_{4g-3-i}((\mathcal{F}_\bullet\otimes_\mathbb{Z}\mathbb{Q})\otimes_{\Mod_{g,1}}A)\\
H^i(\Mod_g;\mathbb{Q}\otimes_\mathbb{Z}A)&\simeq H_{4g-5-i}((\mathcal{F}_\bullet\otimes_\mathbb{Z}\mathbb{Q})\otimes_{\Mod_{g}}A).
\end{align*}
§.§ Chord diagrams
In order to keep track of the filling systems, it is convenient to use a chord diagram to represent them. Given an arc system on $S_{g,1}$, if we cut out a disk neighbourhood of the marked point and deform the cut arcs away from the marked point, then we will get a collection of chords of the boundary circle (for instance, see Figure <ref>).
[Figure: A filling system (left) and its chord diagram]
A labelled $k$-chord diagram is a collection of $(2g+k)$ chords of an oriented circle such that the chords are labelled by isotopy classes of arcs on $S_{g,1}$. The orientations of the chords must be compatible with that of the circle.
Given an oriented filling system, one can construct a labelled chord diagram associated with it. We call a chord diagram proper if it arises from an oriented filling system. A labelled chord diagram with $2g+k$ chords is proper if and only if (cf. <cit.>):
* there are no parallel edges (including chords and edges of the circle), i.e. no edge can be deformed onto another without changing the intersection pattern of the chords;
* there are $k$ cycles if we trace along the chords and the edges of the circle (each chord and edge is traversed twice).
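The tracing in condition (2) can be sketched as an orbit count of a permutation on the chord endpoints. Realizing the diagram as a one-vertex ribbon graph is our modelling assumption, and how this orbit count is normalized against the $k$ of condition (2) depends on bookkeeping conventions we do not fix here; the combinatorics below is nonetheless the standard face count of a chord diagram.

```python
# Hedged sketch: count cycles traced along chords and circle edges, as orbits
# of the composite permutation (follow a circle edge, then jump across a
# chord). Endpoints are numbered 0..2n-1 in circular order; `chords` pairs
# them up.

def traced_cycles(n, chords):
    nxt = {i: (i + 1) % (2 * n) for i in range(2 * n)}   # circle edges
    swap = {}
    for a, b in chords:                                  # chord involution
        swap[a], swap[b] = b, a
    perm = {i: swap[nxt[i]] for i in range(2 * n)}
    seen, cycles = set(), 0
    for i in range(2 * n):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

# Two crossing chords (the genus-1 pattern): one traced cycle, so by the Euler
# count 1 - n + f = 2 - 2g these arcs fill a torus with one complementary disk.
print(traced_cycles(2, [(0, 2), (1, 3)]))   # -> 1
# Two non-crossing chords: three traced cycles, the genus-0 pattern.
print(traced_cycles(2, [(0, 1), (2, 3)]))   # -> 3
```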
The advantage of working with chord diagrams is that $\Mod_{g,1}$ (or $\Mod_g$) acts transitively on the set of labelled chord diagrams of the same topological type (i.e. whose underlying $1$-complexes are isomorphic via an orientation-preserving map). This makes it convenient to argue about the generators of $\St_g$ by considering only chord diagrams of a certain topological type.
Let $x_1,...,x_{2g}$ be a standard set of generators of the fundamental group of a genus $g$ closed surface. Let $\phi_{g}$ be the filling system with $2g$ arcs so that the associated chord diagram is as depicted in Figure <ref>, where
\begin{align*}
a_i=
\begin{cases}
x_{i} & \mbox{ if } i=2j\\
x_{i} x_{i-2}^{-1}& \mbox{ if } i=2j+1
\end{cases}
\end{align*}
[Figure: The chord diagram corresponding to filling system $\phi_g$, with chords labelled $a_1,a_2,\dots,a_{2g}$]
The Steinberg module $\St_g$ is generated by $[\phi_g]$ as a $\Mod_{g,1}$- (or $\Mod_g$-) module. Moreover, the class of any filling system whose chord diagram has more than one connected component of chords will be trivial in $\St_g$.
§.§ Proof of Theorem <ref>
Proving Theorem <ref> is equivalent to proving that $H^{4g-2}(\Mod_{g,2};\mathbb{Q})=0$. According to <cit.>, we have the following exact sequence relating the mapping class groups $\Mod_{g,1}$ and $\Mod_{g,2}$:
\begin{align}
1\longrightarrow \pi_1(S_{g,1}\setminus \{p_1\},p_2)\longrightarrow \Mod_{g,2}\longrightarrow \Mod_{g,1}\longrightarrow 1,
\end{align}
where the first arrow is induced by point pushing and the second arrow forgets a marked point. We can then apply the Lyndon-Hochschild-Serre spectral sequence of group cohomology to this exact sequence: there is a spectral sequence of cohomological type
\begin{align*}
E^{p,q}_2=H^p\big(\Mod_{g,1};H^q(\pi_1(S_{g,1}\setminus \{p_1\},p_2);\mathbb{Q})\big)\implies H^{p+q}(\Mod_{g,2};\mathbb{Q}).
\end{align*}
Note that $\pi_1(S_{g,1}\setminus \{p_1\},p_2)$ is a free group on $2g$ generators, and we have
\begin{align*}
H^{q}(\pi_1(S_{g,1}\setminus \{p_1\},p_2);\mathbb{Q})=
\begin{cases}
\mathbb{Q}& \mbox{ if } q=0\\
\mathbb{Q}^{2g}& \mbox{ if } q=1\\
0 & \mbox{ otherwise }
\end{cases}
\end{align*}
For $g=1$, by the Eichler-Shimura isomorphism and the fact that there are no nontrivial odd-weight modular forms, we have $H^1(\SL(2,\mathbb{Z});\mathbb{Q}[x,y]_1)=0$, where $\mathbb{Q}[x,y]_1$ is the space of $\mathbb{Q}$-linear functions on $\mathbb{Q}^2$. Thus, we only need to consider the case $g>1$.
Since the virtual cohomological dimension of $\Mod_{g,1}$ is $4g-3$, the terms $E^{p,q}_2$ with $p+q=4g-2$ can only be non-trivial if $p=4g-3$ and $q=1$. Hence, we need to show that $H^{4g-3}(\Mod_{g,1};\mathbb{Q}^{2g})=0$. The action of $\Mod_{g,1}$ on $\mathbb{Q}^{2g}$ comes from the action of $\Mod_{g,2}$ on $\pi_1(S_{g,1}\setminus \{p_1\},p_2)$. Any point pushing with respect to $p_1$ acts trivially on $\mathbb{Q}^{2g}$, which is isomorphic to the abelianization of $\pi_1(S_{g},p_1)$ tensored with $\mathbb{Q}$. Since we also have the exact sequence
\begin{align*}
1\longrightarrow \pi_1(S_{g},p_1)\longrightarrow \Mod_{g,1}\longrightarrow \Mod_g\longrightarrow 1,
\end{align*}
the $\Mod_{g,1}$-module structure on $\mathbb{Q}^{2g}$ is induced by the $\Mod_g$-module structure on it via the map $\Mod_{g,1}\longrightarrow \Mod_g$. Moreover, by Corollary <ref>, we have
\begin{align*}
H^{4g-3}(\Mod_{g,1};\mathbb{Q}^{2g})&\simeq H_0(\Mod_{g,1};\St_g\otimes_\mathbb{Z}\mathbb{Q}^{2g})\\
&\simeq H_0(\Mod_g;\St_g\otimes_\mathbb{Z}\mathbb{Q}^{2g})\\
&\simeq H_0(\mathcal{F}_\bullet\otimes_{\Mod_g}\mathbb{Q}^{2g})
\end{align*}
Thus, once we can show that $H_0(\mathcal{F}_\bullet\otimes_{\Mod_g}\mathbb{Q}^{2g})=0$, the theorem is proved. We end our proof with the following proposition.
Let $\mathbb{Q}^{2g}$ be the $\Mod_g$- (or $\Mod_{g,1}$-) module we described above. Then $H_0(\mathcal{F}_\bullet\otimes_{\Mod_g}\mathbb{Q}^{2g})=0$.
For $g=1$, we are done by the virtual duality and our reasoning above, so the argument we now give is for $g>1$. Note that $H_0(\Mod_g;\St_g\otimes_\mathbb{Z}\mathbb{Q}^{2g})$ is just the coinvariant $(\St_g\otimes_\mathbb{Z}\mathbb{Q}^{2g})_{\Mod_g}$. By Proposition <ref>, it is generated by $\{[\phi_g]\otimes x_i^\vee\}$, because $\{x_1,...,x_{2g}\}$ form a basis for $H_1(S_{g,1};\mathbb{Q})$; here $x_i^\vee$ denotes the intersection product $(-,x_i)$ with respect to $x_i$. Thus, it suffices to show that $\phi_g\otimes x_i^\vee$ lies in the image of the cellular chain map
\begin{align*}
\partial\otimes id:\mathcal{F}_1\otimes_{\Mod_g}\mathbb{Q}^{2g}\longrightarrow\mathcal{F}_0\otimes_{\Mod_g}\mathbb{Q}^{2g}
\end{align*}
for every $i$.
For $g>1$, we consider the filling system corresponding to Figure <ref>, where $a_1,...,a_{2g}$ are as defined before and $a_{2g+1}=x_{2g-1}^{-1}$. The image under the cellular chain map will be
\begin{align*}
(\partial\otimes 1)(\phi_g'\otimes a_i^\vee)=\big(\sum_{j=1}^{2g+1}(-1)^{j-1}(a_1,...,\hat{a_j},...,a_{2g+1})\big)\otimes a_i^\vee
\end{align*}
[Figure: the chord diagram corresponding to the filling system $\phi_g'$]
By Proposition <ref>, the terms for $j=2,3,...,2g$ will be zero because the corresponding chord diagram will be disconnected. In addition, there is some $t\in \Mod_g$ mapping $a_{i}$ to $a_{i+1}$ for $i=1,...,2g$. Hence, for $i=1,2,...,2g$, we have
\begin{align*}
(\partial\otimes 1)(\phi_g'\otimes a_i^\vee)=&(t\cdot\phi_g)\otimes a_i^\vee+\phi_g\otimes a_i^\vee\\
= &\phi_g\otimes (t\cdot a_i)^\vee+\phi_g\otimes a_i^\vee\\
=&\phi_g\otimes a_{i+1}^\vee+\phi_g\otimes a_i^\vee
\end{align*}
These relations yield, for $i=1,...,2g+1$,
\begin{align*}
[\phi_g\otimes a_i^\vee]=(-1)^{i-1}[\phi_g\otimes a_1^\vee] \in (\St_g\otimes_\mathbb{Z}\mathbb{Q}^{2g})_{\Mod_g}.
\end{align*}
Then we have
\begin{align*}
(g+1)[\phi_g\otimes x_1^\vee]=&[\phi_g\otimes x_1^\vee]+\bigg(\sum_{j=1}^{g-1}[\phi_g\otimes(x_{2j+1}^\vee-x_{2j-1}^\vee)]\bigg)+[-\phi_g\otimes x_{2g-1}^\vee]
\end{align*}
Thus, $[\phi_g\otimes x_1^\vee]=0$ and one can easily deduce that $[\phi_g\otimes x_i^\vee]=0$ for $i=1,...,2g$.
§ PREPARATORY MATERIAL FOR THE COMPUTATION OF SPIN STRATUM CLASSES
The goal of this section is to provide technical details about how to compute the clutching pullback of a (spin) stratum class. In addition, we will explain why the clutching pullbacks will give us a solution under certain assumptions. In Sections <ref> to <ref> we explain how this pullback mechanism works, while in Sections <ref> to <ref> we demonstrate how to carry out the actual computation when reducing to the case of lower genus or fewer marked points.
§.§ Injectivity of clutching pullbacks
Recall that given a stable graph $\Gamma$ of genus $g$ and $n$ legs, there is a natural morphism
\begin{align*}
\xi_\Gamma:\overline{\mathcal{M}}_\Gamma= \prod_{v\in V(\Gamma)}\overline{\mathcal{M}}_{g(v),n(v)}\longrightarrow \overline{\mathcal{M}}_{g,n},
\end{align*}
which is called the clutching map. We have mentioned in the introduction that in order to compute the spin stratum classes $[\overline{\mathcal{H}}_g(\mu)]^{\spin}$, we will compute the clutching pullback of them and solve a system of linear equations.
The existence of a solution is clear. The uniqueness of the solution follows from the following strengthened version of a result in <cit.>.
Consider the clutching maps $\xi_\Gamma:\overline{\mathcal{M}}_{\Gamma}\longrightarrow \overline{\mathcal{M}}_{g,n}$, where $\Gamma$ ranges over the stable graphs of genus $g$ with $n$ legs and exactly one edge. The direct sum of pullback maps
\[\oplus_\Gamma\xi_\Gamma^*:\mathrm{H}^{j}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})\longrightarrow \bigoplus_\Gamma \mathrm{H}^{j}(\overline{\mathcal{M}}_\Gamma;\mathbb{Q})\]
is injective if $j\leq d(g,n)$, where
\begin{align}\label{eq:range_inj}
d(g,n)=\begin{cases}
2n-7 &\mbox{ if } g=0\\
2g-1 & \mbox{ if } n=0,1\\
2g & \mbox{ if } n=2\\
2g-3+n & \mbox{ otherwise. }
\end{cases}
\end{align}
The first difference from the original version is the first row of $d(g,n)$. Indeed, according to <cit.>, the cohomology ring $\mathrm{H}^*(\overline{\mathcal{M}}_{0,n};\mathbb{Q})$ is generated by cycle classes of boundary strata. In particular, the cohomology groups of odd degree are trivial. Moreover, a cycle class of a boundary stratum of $\overline{\mathcal{M}}_{0,n}$ can be written as a product of classes of boundary divisors. If a class $x$ of even codimension $j\leq 2n-8$ lies in the kernel of the sum of clutching pullbacks, then any intersection product $x\cdot \prod_{i=1,...,j}[D_{\Gamma_i}]$ will also be zero because $x\cdot [D_\Gamma]= \xi_{\Gamma *}\xi_\Gamma^*(x)$. By Poincaré duality, this forces $x=0$.
The other difference from the original version concerns the second and third rows of $d(g,n)$. According to Theorem 1 in <cit.> and our proof in Section <ref>, $\mathrm{H}^{4g-5}(\mathcal{M}_{g},\mathbb{Q})=\mathrm{H}^{4g-3}(\mathcal{M}_{g,1},\mathbb{Q})=\mathrm{H}^{4g-2}(\mathcal{M}_{g,2},\mathbb{Q})=0$. This implies that $\mathrm{H}^{2g-1}_c(\mathcal{M}_{g};\mathbb{Q})=\mathrm{H}^{2g-1}_c(\mathcal{M}_{g,1};\mathbb{Q})=\mathrm{H}^{2g}_c(\mathcal{M}_{g,2};\mathbb{Q})=0$. Then by the long exact sequence:
\begin{align*}
...\longrightarrow \mathrm{H}^j_c(\mathcal{M}_{g,n};\mathbb{Q})\longrightarrow \mathrm{H}^j(\overline{\mathcal{M}}_{g,n};\mathbb{Q})\longrightarrow \mathrm{H}^j(\partial\mathcal{M}_{g,n};\mathbb{Q})\longrightarrow ...,
\end{align*}
we can conclude that $\mathrm{H}^j(\overline{\mathcal{M}}_{g,n};\mathbb{Q})\longrightarrow \mathrm{H}^{j}(\partial\mathcal{M}_{g,n};\mathbb{Q})$ is injective if $j\leq d(g,n)$.
Then, by the same argument as in Lemma 2.6 of <cit.>, it follows that
\begin{align*}
\oplus_\Gamma\xi_\Gamma^*:\mathrm{H}^{j}(\overline{\mathcal{M}}_{g,n};\mathbb{Q})\longrightarrow \bigoplus_\Gamma \mathrm{H}^{j}(\overline{\mathcal{M}}_\Gamma;\mathbb{Q})
\end{align*}
is injective for $j\leq d(g,n)$.
If $\overline{\mathcal{H}}_g(\mu)$ is a meromorphic stratum, then the stratum classes $[\overline{\mathcal{H}}_g(\mu)]$ lie in the degree $2g$ component of the tautological ring $\RH^{2g}(\overline{\mathcal{M}}_{g,n})\subseteq \mathrm{H}^{2g}(\overline{\mathcal{M}}_{g,n})$; if $\overline{\mathcal{H}}_g(\mu)$ is a holomorphic stratum, then the stratum class lies in $\RH^{2g-2}(\overline{\mathcal{M}}_{g,n})\subseteq \mathrm{H}^{2g-2}(\overline{\mathcal{M}}_{g,n})$. Thus, for meromorphic strata and $n=2$, the codimension of the (spin) stratum class will be out of the range of injectivity $d(g,n)=2g-3+n$ in the original proposition in <cit.>. That is why we need to improve $d(g,2)$ to $2g$.
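To make the ranges concrete, here are direct evaluations of (<ref>) against the degrees of the stratum classes just described:
\begin{align*}
(g,n)=(2,2),\ \mu \text{ meromorphic}:&\quad [\overline{\mathcal{H}}_2(\mu)]^{\spin}\in \mathrm{H}^{4},\quad d(2,2)=2g=4,\\
&\quad \text{while the original bound } 2g-3+n=3 \text{ would fail};\\
(g,n)=(2,3),\ \mu \text{ meromorphic}:&\quad [\overline{\mathcal{H}}_2(\mu)]^{\spin}\in \mathrm{H}^{4},\quad d(2,3)=2g-3+n=4;\\
(g,n)=(2,1),\ \mu \text{ holomorphic}:&\quad [\overline{\mathcal{H}}_2(\mu)]\in \mathrm{H}^{2},\quad d(2,1)=2g-1=3.
\end{align*}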
In practice, computing the clutching pullback of a stratum class with respect to the self-node graph $\Gamma_0$ will be very difficult, especially if our signature $\mu$ already contains a pair of simple poles. We will illustrate the problem in Section <ref>. Hence, we want to assume that the injectivity is still guaranteed if we neglect the clutching pullback with respect to $\Gamma_0$ under the condition $n\geq 3$.
For $g>0,n\geq 3$, the pullback map
\begin{align*}
\oplus_{\Gamma\neq \Gamma_0}\xi_\Gamma^*:\mathrm{H}^{k}(\overline{\mathcal{M}}_{g,n})\longrightarrow \bigoplus_\Gamma \mathrm{H}^{k}(\overline{\mathcal{M}}_\Gamma)
\end{align*}
is injective if $k\leq 2g-3+n$.
We have verified this assumption for the cases of $(g,n)$ equal to
\begin{align*}
(1,3),(1,4),(1,5),(1,6),(2,3),(2,4),(2,5),(3,3),(3,4), (4,3)
\end{align*}
for which our computer can carry out the calculations in a reasonable time.
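Concretely, verifying the assumption for a given pair $(g,n)$ reduces to a linear-algebra check: express each pullback $\xi_\Gamma^*$ in chosen bases as a matrix, stack the matrices vertically, and test whether the stacked matrix has full column rank (equivalently, trivial common kernel). The following is only a minimal sketch of that check, with small toy matrices standing in for the actual pullback matrices, which in our setting are produced by intersection-theory software such as admcycles:

```python
import numpy as np

# Toy stand-ins for the pullback matrices xi_Gamma^* : H^k(Mbar_{g,n}) -> H^k(Mbar_Gamma),
# one matrix per one-edge graph Gamma (excluding Gamma_0), written in chosen bases.
# These matrices are hypothetical; the real ones come from intersection-theory software.
pullbacks = [
    np.array([[1, 0, 2],
              [0, 1, 1]]),
    np.array([[0, 1, 0],
              [1, 0, 1]]),
]

def is_jointly_injective(mats):
    """The direct sum of the maps is injective iff the vertically
    stacked matrix has rank equal to the dimension of the source."""
    stacked = np.vstack(mats)
    return np.linalg.matrix_rank(stacked) == stacked.shape[1]

print(is_jointly_injective(pullbacks))  # the common kernel is trivial here
```

A single pullback matrix above has a nontrivial kernel; only the stacked system attains full column rank, which is exactly the content of the injectivity assumption.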
§.§ Clutching Pullback Formula
In this subsection, we aim to establish a formula for doing intersection theory with the (spin) stratum class. Recall that in the introduction we mentioned that we want to compute the clutching pullback
\[\xi_\Gamma^*\big([\overline{\mathcal{H}}_g(\mu)]^{\spin}\big)= \xi_\Gamma^*p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)]^{\spin}\big),\]
where $\Gamma$ is a one-edge stable graph of genus $g$ with $n$ legs. We will first derive the formula for the stratum class, and then give its spin variant.
We will apply the excess intersection formula (see for example <cit.> Prop. 17.4.1) to derive our pullback formula for $\xi_\Gamma^*$. First, we have to identify the fiber product
\[
\begin{array}{ccc}
\mathcal{F}_{\Gamma,\mu} & \overset{t_1}{\longrightarrow} & \mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)\\
{\scriptstyle t_2}\big\downarrow & & \big\downarrow{\scriptstyle p}\\
\overline{\mathcal{M}}_\Gamma & \overset{\xi_\Gamma}{\longrightarrow} & \overline{\mathcal{M}}_{g,n}
\end{array}
\]
and the top Chern class of the excess normal bundle $E=t_2^*\mathcal{N}_{\Gamma}/\mathcal{N}_{t_1}$, where $\mathcal{N}_{\Gamma}$ and $\mathcal{N}_{t_1}$ are the normal bundles with respect to $\xi_\Gamma$ resp. $t_1$. Then
\[\xi_\Gamma^*p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)]\big)=t_{2*}c_{\textrm{top}}(E).\]
Let $\Delta$ and $\Gamma$ be stable graphs. A graph contraction of $\Delta$ to $\Gamma$ is a pair of maps between the sets of half edges (i.e. $H(\Delta)$ and $H(\Gamma)$) resp. the sets of vertices (i.e. $V(\Delta)$ and $V(\Gamma)$):
\[\bigg(f_v:V(\Delta)\twoheadrightarrow V(\Gamma),\quad f_h:H(\Gamma)\hookrightarrow H(\Delta)\bigg), \]
satisfying the following conditions:
(i) for any half edge $h_0\in H(\Gamma)$ at a vertex $v_0\in V(\Gamma)$, the image $f_h(h_0)\in H(\Delta)$ should be at a vertex in $f_v^{-1}(v_0)$;
(ii) the map $f_h$ respects markings, and edges are mapped to edges;
(iii) for any vertex $v_0\in V(\Gamma)$, the subgraph of $\Delta$ consisting of all the vertices in $f_v^{-1}(v_0)$ and the half edges adjacent to them is a connected stable graph of genus $g(v_0)$ with $|H(\Gamma)_{v_0}|$ markings.
A $\Gamma$-structure on $\Delta$ is just a graph contraction of $\Delta$ to $\Gamma$.
Given a signature $\mu=(m_1,...,m_n)$ with $\sum m_i=2g-2$ and a one-edge stable graph $\Gamma$, we have the equality
\begin{equation}
\label{eq:excess}
\begin{split}
\xi_\Gamma^*\big([\overline{\mathcal{H}}_g(\mu)]\big)
&=\sum_{\substack{\Delta\overset{f}{\rightsquigarrow}\Gamma\\ \Delta\in \LG_1(\mu)}}\frac{1}{|\Aut(\Delta)|}\cdot \frac{\prod_{e\in E(\Delta)}\kappa_e}{\kappa_{f}}\cdot \xi_{f*}\bigg(\pi_\top^*p^{\top}_*[\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top}]\cdot \pi_\bot^*p^{\bot}_*[\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot}]\bigg)
\end{split}
\end{equation}
where $\Gamma_H$ is the horizontal level graph whose underlying graph is $\Gamma$ and $\kappa_f$ is the enhancement of the image edge in $\Delta$ corresponding to the single edge in $\Gamma$ via $f$.
Before we prove the above formula, we have to understand the fiber product $\mathcal{F}_{\Gamma,\mu}$.
Let $\pi:\mathcal{X}\longrightarrow B$ be a family of stable curves such that the dual graph of every fiber is some degeneration of $\Gamma$. Then a $\Gamma$-marking on $\mathcal{X}$ is the following data:
* a collection of $|E(\Gamma)|$ sections $\sigma_1,...,\sigma_{|E(\Gamma)|}$ with image in the singular locus of $\pi$;
* a collection of $|H(\Gamma)|$ sections $\sigma_{1,1},\sigma_{1,2},\sigma_{2,1},...,\sigma_{|E(\Gamma)|,2}$ of the normalization $\widetilde{\mathcal{X}}$ of $\mathcal{X}$;
* a collection of $|V(\Gamma)|$ connected components of $\mathcal{X}\setminus \cup_{i=1}^{|E(\Gamma)|}\sigma_i$
such that each fiber nodal curve is marked according to $\Gamma$.
We can regard $\overline{\mathcal{M}}_\Gamma$ as the stack of families of stable curves with a $\Gamma$-marking. As a result, one can easily see that the fiber product $\mathcal{F}_{\Gamma,\mu}$ is just the moduli stack of families of multi-scale differentials $(\mathcal{X},\boldsymbol{\eta})$ with a $\Gamma$-marking on $\mathcal{X}$. Let $\LG^1(\mu)$ be the set of level graphs compatible with signature $\mu$ such that the codimension of $D_\Delta$ is $1$.
The natural morphism
\begin{align*}
j:=\sqcup j_f:\coprod_{\substack{\Delta\in \LG^1(\mu) \\ \Delta\overset{f}{\rightsquigarrow}\Gamma}}D_{\Delta,f}\longrightarrow \mathcal{F}_{\Gamma,\mu},
\end{align*}
where $D_{\Delta,f}$ is the stack of families of multi-scale differentials with a $\Delta$-marking on the underlying family of curves and a $\Gamma$ structure $f$ on $\Delta$ being specified, is finite, flat and surjective.
Notice that for a family of multi-scale differentials $(\mathcal{X},\boldsymbol{\eta})$ compatible with a level graph $\Delta$, the underlying family of stable curves admits a partial normalization according to $\Gamma$ if and only if there is a $\Gamma$-structure on $\Delta$. The morphism
\begin{align*}
\coprod_{\substack{\Delta\in\LG^1(\mu)\\ \Delta\rightsquigarrow\Gamma}} D_{\Delta}\times_{\overline{\mathcal{M}}_{g,n}} \overline{\mathcal{M}}_\Gamma\longrightarrow \mathcal{F}_{\Gamma,\mu}
\end{align*}
is étale. Moreover,
\begin{align*}
D_{\Delta}\times_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_\Delta \times_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_\Gamma\longrightarrow D_{\Delta}\times_{\overline{\mathcal{M}}_{g,n}} \overline{\mathcal{M}}_\Gamma
\end{align*}
is finite, flat and surjective. Note that
\begin{align*}
\overline{\mathcal{M}}_\Delta \times_{\overline{\mathcal{M}}_{g,n}}\overline{\mathcal{M}}_\Gamma\simeq \coprod_{\Delta\overset{f}{\rightsquigarrow}\Gamma}\overline{\mathcal{M}}_\Delta
\end{align*}
Hence, $j$ is finite, flat and surjective.
The zeroth Chern class of the excess bundle is
\begin{align*}
c_0(t_2^*\mathcal{N}_\Gamma/\mathcal{N}_{t_1})=\sum_{\substack{\Delta\in\LG^1(\mu)\\ \Delta\overset{f}{\rightsquigarrow}\Gamma}}\frac{1}{|\Aut(\Delta)|}j_{f,*}\bigg(c_0(j_f^*t_2^*\mathcal{N}_\Gamma/\mathcal{N}_{t_1\circ j_f}) \bigg)
\end{align*}
First note that
\begin{align*}
\end{align*}
and $j_f^*\mathcal{N}_{t_1}=\mathcal{N}_{t_1\circ j_f}$. In addition, if $Q$ is an irreducible component of $\mathcal{F}_{\Gamma,\mu}$, then for any $[C]\in \CH^0(Q)$,
\begin{align*}
\end{align*}
Since the degree of the morphism $j_f$ is just $|\Aut(\Delta)|$, the formula above follows.
Now we can give the proof of Proposition <ref>.
We first consider those $\Delta$ which are two-level graphs, i.e. $\Delta\in\LG_1$; afterwards we consider the case $\Delta=\Gamma_H$. Let $(h,h')$ be the edge ($h$ and $h'$ the half-edges) of the one-edge graph $\Gamma$ and $\mathcal{L}_h$ be the section pullback of the dualizing sheaf of $\overline{\mathcal{M}}_\Gamma$. Then the normal bundle $\mathcal{N}_{\xi_\Gamma}$ is just $\mathcal{L}^\vee_h\otimes\mathcal{L}^{\vee}_{h'}$ (cf. <cit.>). According to the proof of Proposition 7.2 in <cit.>, there is a short exact sequence of coherent sheaves on $D_{\Delta,f}$ (at least outside a subvariety of codimension two)
\begin{align*}
0\longrightarrow \mathcal{N}_{t_1\circ j_f}^{\otimes\ell(\Delta)/\kappa_f}\longrightarrow j_f^*t_2^*\mathcal{N}_{\xi_\Gamma}\longrightarrow \mathcal{Q}\longrightarrow 0 ,
\end{align*}
where $ \mathcal{N}_{t_1\circ j_f}$ is the normal bundle of $D_{\Delta,f}\longrightarrow \mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$. Here, $\ell(\Delta)$ is the least common multiple of the enhancements of the vertical edges of $\Delta$. As a result, $\mathcal{Q}$ has rank $0$, and by convention the zeroth Chern class of $\mathcal{Q}$ is just $1$. Hence, the zeroth Chern class of $j_f^*t_2^*\mathcal{N}_{\xi_\Gamma}/\mathcal{N}_{t_1\circ j_f}$ is just $\frac{\ell(\Delta)}{\kappa_f}$.
If $\Delta=\Gamma_H$, then we have
\begin{align*}
\mathcal{N}_{t_1\circ j_f} \simeq j_f^*t_2^*\mathcal{N}_{\xi_\Gamma}
\end{align*}
Hence, the zeroth Chern class of the excess bundle on this component will be just $1$.
By Lemma <ref> and due to the fact that $t_2\circ j_f= \xi_{f}\circ p_\Delta$, we have
\begin{align*}
\xi_\Gamma^*\big([\overline{\mathcal{H}}_g(\mu)]\big)=\sum_{\substack{\Delta\overset{f}{\rightsquigarrow}\Gamma\\ \Delta\in \LG_1(\mu)}}\frac{1}{|\Aut(\Delta)|}\cdot \frac{\ell(\Delta)}{\kappa_{f}}\cdot \xi_{f*}p_{\Delta*}\bigg([D_{\Delta,f}]\bigg)
\end{align*}
Note that the degree of the morphism $p_\Delta$ is just the number of equivalence classes of prong matchings, which is $\frac{\prod_{e\in E(\Delta)}\kappa_e}{\ell(\Delta)}$. Since the image of $p_\Delta$ is just $p^{\top}(\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top})\times p^{\bot}(\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot})$, one gets the Clutching Pullback Formula <ref>.
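For instance, the degree of $p_\Delta$ in the proof above can be evaluated directly from the enhancements of a two-level graph $\Delta$ with two edges:
\begin{align*}
\kappa_{e_1}=\kappa_{e_2}=2:&\quad \ell(\Delta)=2, &\deg p_\Delta&=\frac{2\cdot 2}{2}=2;\\
\kappa_{e_1}=2,\ \kappa_{e_2}=3:&\quad \ell(\Delta)=6, &\deg p_\Delta&=\frac{2\cdot 3}{6}=1.
\end{align*}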
By the same method as in Corollary <ref> for determining the spin variant of the projection pushforward of the divisor associated to a two-level graph, we now state the spin version of (<ref>):
Given a one-edge stable graph $\Gamma$, the Clutching Pullback Formula for the spin stratum class is:
\begin{equation}
\label{eq:excess_p}
\begin{split}
\xi_\Gamma^*p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)]^{\spin}\big)
&=\sum_{\substack{\Delta\overset{f}{\rightsquigarrow}\Gamma\\ \Delta\in \LG_1^{odd}(\mu)}}\frac{1}{|\Aut(\Delta)|}\cdot \frac{\prod_{e\in E(\Delta)}\kappa_e}{\kappa_f}\cdot \\ &\xi_{f*}\bigg(
\pi_\top^*p_{\top*}[\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top}]^{\spin}\cdot \pi_\bot^*p_{\bot*}[\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot}]^{\spin}\bigg)
\end{split}
\end{equation}
In Section <ref>, we will give an example to illustrate how to use the clutching pullback formula to compute the clutching pullbacks of a stratum class. Then later in Section <ref>, we will demonstrate the use of the clutching pullback formula to compute the clutching pullbacks of a spin stratum class and reconstruct that spin stratum class.
§.§ Resolving the residue conditions $\mathfrak{R}$ of a generalised stratum
Assume that we are given a signature $\mu$ and the spin stratum class of the level strata in the spaces $\overline{\mathcal{M}}_{\Delta,\top}$ and $\overline{\mathcal{M}}_{\Delta,\bot}$. Then by applying the Clutching Pullback Formula (<ref>) on each one-edge graph, we get a system of linear equations. According to Proposition <ref>, this system of linear equations has a unique solution which is the spin stratum class. Hence, it is crucial to compute the spin stratum class of a level stratum.
Due to the global residue conditions, the generalised stratum associated to the lower level can have residue conditions. For such a (spin) stratum class, it is not possible to use the method of solving a system of linear equations obtained from clutching pullbacks: the codimension of the (spin) stratum class is too high to lie within the range of Proposition <ref>. Nevertheless, the (spin) stratum class of a stratum with residue conditions is closely related to the (spin) stratum class of its ambient stratum, which has one residue condition less. We can therefore recursively compute the spin stratum class of a stratum with residue conditions from that of a stratum with no residue conditions.
Our main tools to resolve the residue conditions of a compactified generalised stratum are the following two propositions which are originally from <cit.>, and proved in <cit.> for the versions on the compactified generalised stratum.
Let $\boldsymbol{g},\boldsymbol{n},\boldsymbol{\mu}$ be tuples of integers. On a (projectivized) generalised stratum $\pmoxrmoduli$ that admits spin structure, every (algebraic/cohomological) class can be written as the sum of summands carrying the even and odd spins. Hence, we can in general define the spin variant of a class. For example, let $\xi$ be the class $c_1(\mathcal{O}(1))$ on $\pmoxrmoduli$, and $\psi_{(\nu,i)}$ be the $\psi$-class at the $i$-th marked point on the component $\nu$. We write $\xi^{\spin}$ and $\psi^{\spin}_{(\nu,i)}$ for the spin variants of $\xi$- and $\psi$-classes.
Assume that after we remove one residue condition from the set of residue conditions inducing $\mathfrak{R}$, the new residue space $\mathfrak{R}_0$ is strictly larger than $\mathfrak{R}$; then $\pmoxrmoduli$ will be a subvariety of codimension one in $\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}_0}_{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu}) $.
The class of the stratum $\pmoxrmoduli$ with residue condition $\mathfrak{R}$ compares inside the Chow ring of the generalized stratum $\overline{B}= \mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}_0}_{\boldsymbol{g},\boldsymbol{n}}(\boldsymbol{\mu}) $ to the class $\xi$ by the formula
\[[\pmoxrmoduli]=-\xi -\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta] \]
where $\LG^\mathfrak{R}_1(\overline{B}) $ is the set of two-level graphs such that the removed residue condition from $\mathfrak{R}$ will induce no extra condition on the top level. In particular, if both $\pmoxrmoduli$ and $\overline{B}$ admit spin structure, then we have
\[[\pmoxrmoduli]^{\spin}=-\xi^{\spin} -\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta]^{\spin}\]
The following proposition tells us that the $\xi$-class can actually be expressed as a $\psi$-class plus boundary terms. This enables us to easily compute the projection pushforward of the (spin) fundamental class of a stratum to $\overline{\mathcal{M}}_{g,n}$.
The class $\xi$ on $\overline{B}=\pmoxrmoduli$ can be expressed as
\[\xi=(m_{\nu,i}+1)\psi_{(\nu,i)}-\sum_{\Delta\in {}_{(\nu,i)}\LG_1(\overline{B})} \ell_\Delta [D_\Delta]\]
where ${}_{(\nu,i)}\LG_1(\overline{B}) $ is the set of two-level graphs with the leg $(\nu,i)$ on the lower level; $m_{\nu,i}$ is the order of singularity at the leg $(\nu,i)$. In particular, if $\overline{B}$ admits spin structure, then
\[\xi^{\spin}=(m_{\nu,i}+1)\psi_{(\nu,i)}^{\spin}-\sum_{\Delta\in {}_{(\nu,i)}\LG_1(\overline{B})} \ell_\Delta [D_\Delta]^{\spin}\]
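Substituting the expression for $\xi$ into Proposition <ref> (both propositions applied on the ambient stratum $\overline{B}$) gives, purely formally, a $\psi$-plus-boundary expression for the stratum class:
\[[\pmoxrmoduli]=-(m_{\nu,i}+1)\psi_{(\nu,i)}+\sum_{\Delta\in {}_{(\nu,i)}\LG_1(\overline{B})} \ell_\Delta [D_\Delta]-\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta],\]
and likewise for the spin variants whenever all the involved strata admit spin structure.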
The spin variants of the above propositions require that all the involved strata admit spin structure. However, if we want to resolve a residue relation on a pair of simple poles (i.e. of the form $r_1+r_2=0$), then the ambient stratum will no longer admit a spin structure, and the spin variants of the above propositions cannot be applied. In the recursion for computing the spin stratum classes, there is a case where we cannot directly apply the clutching pullback method, so that we have to resolve the residue condition on a pair of simple poles. This will be handled in Proposition <ref> of Section <ref>.
By Proposition <ref>, Proposition <ref> and Proposition <ref>, computing the projection pushforward of (the spin variant of) the fundamental class of a generalised stratum with residue conditions can be boiled down to computing stratum classes of smaller genus or fewer markings. Hence, we now basically have all the ingredients needed to carry out the recursive computation of the spin stratum classes. In practice, however, some special cases still cause problems once we start the recursion. These will be handled in the next two subsections.
§.§ Clutching pullback of $\Gamma_0$
In this subsection, we will explain why we in general want to avoid computing the clutching pullback with respect to the self-loop graph $\Gamma_0$. To compute the clutching pullback $\xi_{\Gamma_0}$ of the (spin) stratum class of signature $\mu$, we inevitably need to know the stratum class $p_*[\mathbb{P}\Xi\overline{\mathcal{M}}_{g-1,n+2}^\mathfrak{R}(\mu')]\in \mathrm{H}^{2g}(\overline{\mathcal{M}}_{g-1,n+2})$, where $\mu'=(m_1,...,m_n,-1,-1)$ and $\mathfrak{R}$ is induced by the residue condition $r_{n+1}+r_{n+2}=0$.
If we recursively compute the clutching pullback of a stratum class with respect to $\Gamma_0$, we will run into a bottleneck: we will end up with some level stratum that actually represents a complicated horizontal level graph. The following example may give some insight.
Let $\mu=(6)$. Then by repeating the clutching pullback of non-separable $\Gamma$, we will end up computing the spin stratum class $p_*[\mathbb{P}\Xi\overline{\mathcal{M}}_{0,9}^\mathfrak{R}(6,-1^8)^{\spin}]$, where $\mathfrak{R}$ consists of the residue conditions
\[r_2+r_3=0,\quad r_4+r_5=0, \quad r_6+r_7=0 \]
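These three conditions already force one further relation, by the residue theorem (the residues of a meromorphic differential sum to zero, and the order-$6$ zero contributes no residue):
\begin{align*}
0=\sum_{i=2}^{9}r_i=(r_2+r_3)+(r_4+r_5)+(r_6+r_7)+(r_8+r_9)=r_8+r_9.
\end{align*}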
Notice that the residue conditions above imply $r_8+r_9=0$. One of the vertical two-level graphs of that stratum is the following:
[Figure: a vertical two-level graph with two top-level vertices, carrying the simple-pole legs $-1_2,-1_4,-1_6,-1_8$ and $-1_3,-1_5,-1_7,-1_9$ respectively, each joined by an edge to a single bottom-level vertex carrying the order-$6$ leg.]
The top level stratum illustrates the following horizontal level graph
[Figure: a horizontal level graph with two vertices joined by four horizontal edges, each vertex carrying one leg.]
for which we do not have a method to reduce the computation of the spin stratum class to the computation of the spin stratum class of each component.
However, we expect that we will not always need to consider the clutching pullback of $\Gamma_0$. In practice, when applying Theorem <ref>, we observe that for $n\geq 3$ the common kernel of all clutching pullbacks except the one with respect to $\Gamma_0$ is already trivial. Although we cannot give a proof of Assumption <ref>, we can propose a recursion scheme based on this assumption.
Nevertheless, we still need to handle the spin stratum classes of strata of differentials that have simple poles. Now if we want to construct the recursion, then the base cases of our recursion will be for $g=0$:
(i) $\mu$ does not contain any $-1$,
(ii) or $\mu=(2k,-2k,-1,-1)$,
(iii) or $\mu=(0,-1,-1)$.
Case (i) is obvious: since there is no symplectic basis on a sphere, the spin is simply taken to be $0$. Cases (ii) and (iii) will be discussed in the next subsection from the perspective of flat surfaces.
§.§ Spin stratum classes of signatures containing simple poles
In this subsection, we will show how to handle strata of even type, up to pairs of simple poles related by residue conditions. According to <cit.>, we can view a meromorphic differential as a non-compact flat surface, i.e. a surface obtained by glueing infinite half-cylinders and infinite half-planes of $\mathbb{R}^2$ such that there is no boundary.
A pair of simple poles related by the residue condition $r_1+r_2=0$ can be visualized as two infinite half-cylinders of the same width but opposite directions. Any flat surface $(X,\omega)$ with simple poles (which are paired up by residue conditions) canonically determines a flat surface $(X',\omega')$ of larger genus by cutting and glueing the half-infinite cylinders. If the signature $\mu$ contains only even numbers and $-1$s (paired up by residue conditions), then the spin parity of $(X,\omega)$ is defined to be the spin parity of $(X',\omega')$. The following two propositions are some easy results on the spin stratum classes of strata of genus $0$.
The flat surface of signature $\mu=(0,-1,-1)$ has odd parity.
After cutting and glueing the infinite half-cylinders, we get a flat torus. A flat torus always has odd parity.
Let $k\in \mathbb{Z}_{\geq 0}$. The spin stratum class of $\overline{\mathcal{H}}^\mathfrak{R}(2k,-2k,-1,-1)$, where $\mathfrak{R}$ is induced by the residue condition $r_3+r_4=0$, is equal to $\psi_1$.
First, note that $\mathbb{P}\Omega\mathcal{M}_{0,4}^\mathfrak{R}(2k,-2k,-1,-1)$ has dimension $0$. Hence, we just need to count the number of such flat surfaces of even parity and of odd parity. Consider a flat surface corresponding to the signature $(2k,-2k,-1,-1)$ with residue space $\mathfrak{R}$. Due to the realization of non-compact flat surfaces by half-planes and cylinders constructed in <cit.>, we have $2(2k-1)$ half-planes and $2$ half-infinite cylinders of opposite directions and the same width. By computing the spin parities of such flat surfaces, where the half-infinite cylinders are glued to different half-planes, we find that the number of flat surfaces of even spin exceeds that of odd spin by $1$.
[Figure: the flat-surface realization by half-planes glued along the segments $a_1,a_2,\dots,a_{4k-3},a_{4k-2}$, together with two half-infinite cylinders of opposite directions glued in along the segments $b$ and $c$.]
In general for the signature $\mu=(a,-b,-1^{2s})$ and a set of residue conditions $\mathfrak{R}$ which pair up the simple poles, in order to compute the spin stratum class, we need to resolve a residue condition pairing two simple poles instead of computing the clutching pullbacks. This is because the codimension of the class will be outside the range of injectivity (<ref>) of clutching pullbacks. More generally, if we are dealing with some stratum of differentials
* whose signature is of the form $\mu=(\mu',-1^{2s})$, where $\mu'$ is a partition of $2g-2+2s$ by even integers containing some negative entry;
* with a set of residue conditions $\mathfrak{R}$ pairing the simple poles up,
then we can directly express the spin stratum classes in the Chow ring of the ambient stratum, which has the same signature but a set of residue conditions $\mathfrak{R}_0$ in which the residue condition on the first pair of simple poles is removed. Before stating the following proposition, we define some special sets of two-level graphs of the ambient stratum $\overline{B}=\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}_0}_{g,n}(\mu)$. We denote
* by $\LG^{\mathfrak{R}}_1(\overline{B})$ the set of level graphs such that the first and second simple poles are on the top level and their residue sum is forced to be zero;
* by $\LG^{\top}_1(\overline{B})$ (resp. $\LG^{\bot}_1(\overline{B})$) the set of level graphs such that the first simple pole is on the top (resp. bottom) level while the second simple pole is on the bottom (resp. top) level.
Let $\mu=(\mu',-1^{2s})$ be a signature and $\mathfrak{R}$ (resp. $\mathfrak{R}_0$) be a set of residue conditions as we mentioned above. Then the cycle class of $\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)$ in the Chow ring of $\overline{B}= \mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}_0}_{g,n}(\mu)$ can be expressed as follows:
\begin{align}\label{eq:sim_res}
[\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)]=-\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta]+\sum_{\Delta\in\LG^{\top}_1(\overline{B})}\ell_\Delta [D_\Delta].
\end{align}
Similarly, the spin stratum class can be expressed as follows:
\begin{align}\label{eq:spin_sim_res}
[\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)]^{\spin}=-\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta]^{\spin}.
\end{align}
We first prove Equation (<ref>). Let $f:\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}_0}_{g,n}(\mu)\longrightarrow\mathbb{P}^1$ be the extension of the map
\begin{align*}
\Tilde{f}:&\mathbb{P}\Omega\mathcal{M} ^{\mathfrak{R}_0}_{g,n}(\mu)\longrightarrow \mathbb{P}^1\\
&(X,\eta)\mapsto [-\res_{p_1}(\eta):\res_{p_2}(\eta)],
\end{align*}
where $p_1$ and $p_2$ are the first two simple poles. Then it is easy to see that the inverse image of $\{1\}$ is just the union of $\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)$ and $D_\Delta$ for $D_\Delta\in\LG^\mathfrak{R}_1(\overline{B})$, while the inverse image of $\{\infty\}$ is the union of $D_\Delta$ for $\Delta\in\LG^{\top}_1(\overline{B})$. Then Equation (<ref>) is just obtained by considering the multiplicities of the irreducible components of the inverse image. The multiplicity of $D_\Delta$ is just $\ell_\Delta$ because the residues on the bottom level decay like $t^{\ell_\Delta}$.
Second, our strategy to prove Equation (<ref>) is to construct a map from the ambient stratum to $\mathbb{P}^1$ such that the spin components will be lying in the inverse images of two different points on $\mathbb{P}^1$. Without loss of generality, we will assume $\overline{B}$ is connected (otherwise we can restrict to a connected component of $\overline{B}$). We claim that the generator of the monodromy $\pi_1(\mathbb{P}^1\setminus\{0,\infty\},1)\simeq \mathbb{Z}$ will be lifted to a path in $\overline{B}\setminus f^{-1}(\{0,\infty\})$ connecting two elements of $f^{-1}(1)$ of opposite spin. (We will prove our claim later.) This will lead to the reducibility of the fiber product
\begin{tikzcd}
\mathcal{F} \arrow[r,"p"] \arrow[d,"\bar{f}"'] & \overline{B} \arrow[d,"f"] \\
\mathbb{P}^1 \arrow[r,"q"] & \mathbb{P}^1,
\end{tikzcd}
where $q:z\mapsto z^2$. Indeed, the generator of the monodromy $\pi_1(\mathbb{P}^1\setminus\{0,\infty\},1)\simeq \mathbb{Z}$ via $\bar{f}$ ($\mathbb{P}^1$ on the left hand side) will be lifted to a path in $\mathcal{F}\setminus \bar{f}^{-1}(\{0,\infty\})$ connecting two elements in $\bar{f}^{-1}(1)$ of the same spin. Thus, the spin components of the inverse image $\bar{f}^{-1}(1)$ will lie on different connected components of $\mathcal{F}\setminus \bar{f}^{-1}(\{0,\infty\})$. Let $\overline{X}^+$ (resp. $\overline{X}^-$) be the irreducible component of $\mathcal{F}$ containing the even (resp. odd) spin component of $\bar{f}^{-1}(1)$. It is obvious that both $\overline{X}^+$ and $\overline{X}^-$ are isomorphic to $\overline{B}$. Hence, by considering the map
\begin{align*}
\rho:\overline{B}\overset{\sim}{\longrightarrow} \overline{X}^+\overset{\bar{f}}{\longrightarrow}\mathbb{P}^1,
\end{align*}
we see that $\rho^{-1}(1)$ (resp. $\rho^{-1}(-1)$) is isomorphic to the even (resp. odd) spin component of $f^{-1}(1)$. This implies that
\begin{align*}
[\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)^+]+\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta^+]=[\mathbb{P}\Xi\overline{\mathcal{M}} ^{\mathfrak{R}}_{g,n}(\mu)^-]+\sum_{\Delta\in\LG^\mathfrak{R}_1(\overline{B})}\ell_\Delta [D_\Delta^-]
\end{align*}
and it yields Equation (<ref>).
Now it remains to show the claim on monodromy. We fix a collection of curves $\{\alpha_1,...,\alpha_{g+s},\beta_1,...,\beta_{g+s}\}$ on the flat surface $(X,\eta)$ (where we assume $X$ is smooth, otherwise we consider the welded surface associated to it) corresponding to an element in $f^{-1}(1)$ such that
* $\alpha_i$ only intersects $\beta_i$ (transversally at one point) and vice versa;
* $[\alpha_1],...,[\alpha_g],[\beta_1],...,[\beta_g]$ form a symplectic basis of the underlying closed surface;
* $\beta_{g+1},...,\beta_{g+s}$ are geodesic closed curves around the simple poles (one simple pole for every pair of simple poles, we denote by $\beta_{g+1}',...,\beta_{g+s}'$ the geodesic closed curves of the opposite simple poles);
* $\alpha_{g+1},...,\alpha_{g+s} $ are curves connecting the geodesic closed curves $\beta_j,\beta_j'$, such that they are perpendicular to the $\beta_j,\beta_j'$.
Let $w=\sum_{i=1}^{g+s} (\ind(\alpha_i)+1)(\ind(\beta_i)+1)$. Note that $w \pmod 2$ will be the spin parity of $(X,\eta)$. Let $\gamma(t)=e^{2\pi i t}$ (where $t\in[0,1]$) be a loop on $\mathbb{P}^1\setminus\{0,\infty\}$ based at $1$. If we deform $(X,\eta)$ along some lift of $\gamma$ in $\overline{B}$, then we will end up with some $(X', \eta')$ of opposite spin. Indeed, the turning numbers of $\alpha_1,...,\alpha_g,\alpha_{g+2},...,\alpha_{g+s},\beta_1,...,\beta_{g+s}$ are locally constant along the deformation path as they take integral values. On the other hand, the turning number of $\alpha_{g+1}$ will change by $1$ as the turning angle of $\alpha_{g+1}$ changes continuously along the deformation path (see Figure <ref>). This means that the Arf invariants of the original resp. resulting symplectic bases will differ by $1$. This completes our proof.
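The parity bookkeeping in this argument can be illustrated with a short Python check (the turning numbers are hypothetical sample values; we assume the geodesic loop $\beta_{g+1}$ around the simple pole has turning number $0$):

```python
def spin_parity(ind_alpha, ind_beta):
    """Arf-invariant parity w = sum_i (ind(alpha_i)+1)(ind(beta_i)+1) mod 2."""
    return sum((a + 1) * (b + 1) for a, b in zip(ind_alpha, ind_beta)) % 2

# Toy case g = 1, s = 1: pairs (alpha_1, beta_1) and (alpha_2, beta_2),
# where beta_2 is the geodesic loop around a simple pole (turning number 0).
ind_alpha, ind_beta = [0, 1], [0, 0]
before = spin_parity(ind_alpha, ind_beta)

# Deforming along the loop gamma changes ind(alpha_2) by 1; all other
# turning numbers are integers, hence locally constant along the path.
after = spin_parity([0, 2], ind_beta)

assert before != after  # the spin parity flips
```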
[Figure: deformation of the pair $(\alpha_{g+1},\beta_{g+1})$ along the loop $\gamma$; the curve $\alpha_{g+1}$ connecting $\beta_{g+1}$ and $\beta_{g+1}'$ is sheared, changing its turning number by $1$.]
Proposition <ref> can be also deduced from Proposition <ref>.
§.§ Computation of the spin stratum class of certain horizontal level graphs
In a moduli space of multi-scale differentials of type $\mu$, where $\mu$ is of even type, there will be no boundary stratum associated to some one-edge horizontal enhanced level graph of compact type. However, in the computation of clutching pullback with respect to $\Gamma_0$ (according to Assumption <ref>, this will only be applied to the case that $\mu=(m_1,m_2)$), we will encounter signatures of the form $\mu'=(m_1,m_2,-1,-1)$. Then we have to take care of some horizontal level graphs of compact type. This will happen if and only if $m_1,m_2\geq 0$. Hence, in this section, we will explain how to tackle this problem.
To compute the spin stratum class of a holomorphic stratum of signature $\mu=(m_1,...,m_n)$, we first have to compute the spin stratum class of the meromorphic stratum of signature $\mu'=(m_1,...,m_n,-1,-1)$. If $n\geq 2$, then the multi-scale differentials of type $\mu'$ can be compatible with some one-edge horizontal level graphs of compact type. As all the integers $m_i\geq 0$, the one-edge horizontal graphs of compact type (associated to signature $\mu'$) will always be composed of two vertices such that there are two simple poles (and no other poles) on each vertex. The following example illustrates what kind of horizontal level graphs of compact type we will have to take into account.
Let $\mu=(2,2)$ and $\mu'=(2,2,-1,-1)$. We want to compute
$\xi_\Gamma^*p_*\big([\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(2,2,-1,-1)]^{\spin}\big)$, where $\Gamma$ is the following stable graph:
[Figure: the stable graph $\Gamma$, consisting of two genus-$1$ vertices joined by an edge, with legs $p_1,p_3$ on the left vertex and $p_2,p_4$ on the right.]
By the Clutching Pullback Formula, we will need to compute the projection pushforward of the divisor class corresponding to the following level graph $\Gamma_H$:
[Figure: the horizontal level graph $\Gamma_H$, consisting of two genus-$1$ vertices joined by a horizontal edge whose half-edges both have order $-1$; each vertex additionally carries a leg of order $2$ and a leg of order $-1$.]
As in Example <ref>, the spin parity of a multi-scale differential associated with such a level graph seems hard to determine. Nonetheless, the spin parity on each vertex is still well defined: if we look at a single vertex, the simple poles are still paired by the residue condition.
However, one needs to be careful that the spin parity of a multi-scale differential associated with the level graph above is not just the sum of the spin parities at the vertices. Indeed, if we cut the horizontal edge and consider the flat surface corresponding to each vertex, we double-count the turning number contribution of the loop formed by the "$-1$"-legs. Hence, we should eliminate this effect: the correct count is the sum of the spin parities of the vertices minus one. Let $\mu_0=(m_{0,1},...,m_{0,n_0},-1,-1)$ and $\mu_1=(m_{1,1},...,m_{1,n_1},-1,-1)$ be the signatures on the two vertices respectively. Then we have
\begin{equation}
p_{\Gamma_H*}([\mathbb{P}\Xi\overline{\mathcal{M}}_{\Gamma_H}]^{\spin})=- \pi_0^*p_{0*}([\mathbb{P}\Xi\overline{\mathcal{M}}_{g_0,n_0}(\mu_0)]^{\spin})\cdot \pi_1^*p_{1*}([\mathbb{P}\Xi\overline{\mathcal{M}}_{g_1,n_1}(\mu_1)]^{\spin}),
\end{equation}
where $\pi_0,\pi_1$ are the projection maps of $\overline{\mathcal{M}}_\Gamma$ to its components.
In Example <ref>, each vertex is of the signature $(2,-1,-1)$, thus the corresponding stratum of differentials has only the odd spin component. As a result, $p_{\Gamma_H*}([\mathbb{P}\Xi\overline{\mathcal{M}}_{\Gamma_H}]^{\spin})=-p_{\Gamma_H*}([\mathbb{P}\Xi\overline{\mathcal{M}}_{\Gamma_H}])$.
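This parity rule is a one-liner; as a sanity check, two odd vertices as in Example <ref> give an odd total parity, matching the sign $-1$ above (function name is ours):

```python
def horizontal_spin_parity(parity_v0, parity_v1):
    """Spin parity of a multi-scale differential on a one-edge horizontal
    level graph of compact type: the sum of the parities at the two
    vertices minus one (mod 2), correcting the double-counted turning
    number of the loop formed by the "-1"-legs."""
    return (parity_v0 + parity_v1 - 1) % 2

# Both vertices of signature (2,-1,-1) carry only the odd spin component:
assert horizontal_spin_parity(1, 1) == 1   # the combined parity is odd
```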
For a stratum whose signature $\mu$ is of even type, there will be no one-edge horizontal level graph which is of compact type. Otherwise, the degree of the differentials on each vertex would be odd.
Assume that $\mu$ is minimal (i.e. it has only one entry, and that entry is non-negative) and let $\mu'$ be the signature obtained by appending two $-1$s to $\mu$. Then there will be no one-edge horizontal level graph of compact type for $\mu'$. The reason is that one vertex would have a signature with only negative entries. This is only possible if the vertex has genus $0$, which forces the vertex to have at least $3$ legs. Since three poles sum up to strictly less than $-2$, such a vertex is not possible.
Now, we really have all the tools we need to construct our recursive algorithm. We will explain the algorithm in the next section in detail.
§ THE ALGORITHM TO COMPUTE THE SPIN STRATUM CLASSES
In this section, we will show the architecture of our algorithm. Then we will enumerate some cases against which one can easily cross-check our results. Before we start, let us clarify the background of the Sage package we used. Our code is based on the package admcycles and its subpackage diffstrata. The package admcycles has been designed for doing intersection theory in the tautological rings of moduli spaces of stable curves (cf. <cit.>). Moreover, it can compute a basis for each component $\RH^{2k}(\overline{\mathcal{M}}_{g,n})$ of the tautological ring by assuming the generalized Faber-Zagier relations. On the other hand, the subpackage diffstrata is programmed for doing intersection theory in the tautological rings of the compactified generalised strata (cf. <cit.>).
§.§ Description of the algorithm
Our algorithm is mainly based on the following tools we introduced in Section <ref>:
(i) Proposition <ref> asserts the existence of a unique solution to the system of linear equations obtained by clutching pullbacks with respect to one-edge stable graphs. The basis of the tautological ring can be computed by admcycles.
(ii) Assumption <ref> allows us to avoid doing clutching pullback with respect to $\Gamma_0$ if the number of markings is larger than $2$. This will reduce the computation complexity.
(iii) To compute the clutching pullback of the spin stratum class with respect to a one-edge graph $\Gamma$, we apply the Clutching Pullback Formula (<ref>).
(iv) In the computation of a single term of the excess intersection formula, Proposition <ref>, Proposition <ref> and Proposition <ref> allow us to pass the spin variant of the fundamental class of a compactified generalised stratum to a tautological class of the ambient stratum. Recursively, one can instead compute the projection pushforwards of a tautological class of a compactified generalised stratum with no residue condition.
(v) If we encounter a horizontal level graph, we can just use the method introduced in Section <ref> to determine the spin parity.
(vi) By Corollary <ref>, we can ignore the computation of the projection pushforward $p_*([D_\Delta]^{\spin})=0$ if $\Delta$ has any vertical edge of even enhancement.
(vii) Proposition <ref> and Proposition <ref> give explicit formulas to compute the base cases (signatures of the form $(0,-1,-1)$ and $(2k,-2k,-1,-1)$) in the recursion. In addition, a genus-$0$ differential of even type with no simple poles has even spin by default.
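The base-case test in item (vii) can be sketched as follows (function name and argument conventions are our simplification):

```python
def is_base_case(g, mu):
    """Base cases of the recursion: signature (0,-1,-1), signature
    (2k,-2k,-1,-1), or a genus-0 signature of even type with no simple
    poles (even spin by default)."""
    mu = tuple(mu)
    if mu == (0, -1, -1):
        return True
    if (len(mu) == 4 and mu[0] > 0 and mu[0] % 2 == 0
            and mu[1] == -mu[0] and mu[2] == mu[3] == -1):
        return True
    if g == 0 and all(m % 2 == 0 for m in mu):
        return True
    return False

assert is_base_case(0, (0, -1, -1))
assert is_base_case(0, (4, -4, -1, -1))
assert not is_base_case(2, (2,))
```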
Our algorithm is designed not only to compute the spin stratum class (which can be regarded as the projection pushforward of the spin variant of the fundamental class of a compactified stratum), but also the projection pushforward of the spin variant of a tautological class of a compactified generalised stratum.
Before we dig into the details of the algorithm, we introduce some terminology in our algorithm. An additive generator is analogous to a decorated stratum in the tautological rings of moduli spaces of stable curves. More precisely, an additive generator of the tautological ring of a compactified generalised stratum $\pmoxrmoduli$ is just a product of a $\psi$-polynomial with the cycle class of a boundary stratum $[B_\Delta]^{\spin}\in \mathrm{R}^*(\pmoxrmoduli)$. Here, $B_\Delta$ refers to the boundary stratum, which is associated to the level graph $\Delta$. Our algorithm will only support additive generators associated with level graphs without horizontal edges. A tautological class is just a $\mathbb{Q}$-linear combination of additive generators.
This algorithm will convert the spin variant of a tautological class of a compactified generalised stratum $\pmoxrmoduli$ (which admits spin structure) to a product tautological class of the moduli space of disconnected stable curves $\prod_{g\in\boldsymbol{g},n\in\boldsymbol{n}}\overline{\mathcal{M}}_{g,n}$. It consists of the following steps:
Step 1 First, we break down the input tautological class into additive generators and reduce the computation on the single additive generator. As an additive generator $A$ is just $f(\boldsymbol{\psi})\cdot [B_\Delta]$ (where $f(\boldsymbol{\psi})$ is a $\psi$-polynomial), the projection pushforward $p_*(A)$ will be just $f(\boldsymbol{\psi})\cdot p_*([B_\Delta])$.
Step 2 It reduces to compute $p_*([B_\Delta])$. If $\Delta$ is the trivial level graph, we will jump to Step 3. Otherwise, we extract the level strata $\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,i}$ (where $i=0,-1,...,-L+1$) from the levels of $\Delta$. Then we compute the projection pushforwards of the spin variant of the fundamental classes of each level stratum and clutch the classes together to yield $p_*([B_\Delta])$. (Here, if $\Delta$ has any edge of even enhancement, $p_*([B_\Delta])$ will be zero.)
Step 3 At this stage, we have to compute the projection pushforward of the spin variant of the fundamental class of a compactified generalised stratum $\pmoxrmoduli$ on $\prod_{g\in\boldsymbol{g},n\in\boldsymbol{n}}\overline{\mathcal{M}}_{g,n}$. If there is no residue condition (or the only residue conditions pair up simple poles), then we go to Step 4 in case the tuple $\boldsymbol{g}$ has only one entry, while we just return $0$ in case the generalised stratum has more than one product component, for dimension reasons. Otherwise, we resolve the residue conditions and express the spin variant of the fundamental class as a tautological class of the ambient stratum. After that, we return to Step 1.
Step 4 The algorithm now needs to compute the spin stratum class of a signature $\mu$ of even type. If $g=0$ or the signature is of the form $(0,-1,-1)$ and $(2k,-2k,-1,-1)$, we will apply our results of the base cases. Otherwise, we will compute the clutching pullbacks of the spin stratum class with respect to one-edge stable graphs (self-loop graph $\Gamma_0$ will be ignored if the length of $\mu$ is greater than $2$). To apply the Clutching Pullback Formula to a specific one-edge graph $\Gamma$, we will sort out the two-level graphs or horizontal graphs with a $\Gamma$-structure.
Step 5 At this stage, we need to compute a single term in the Clutching Pullback Formula associated to some codimension-$1$ level graph $\Delta$ with a $\Gamma$-structure. We extract the top resp. bottom level stratum $\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\top}$ resp. $\mathbb{P}\Xi\overline{\mathcal{M}}_{\Delta,\bot}$. Then we compute the projection pushforwards of the spin variants of the fundamental classes of these level strata (return to Step 3) and clutch these classes to yield a tautological class in $\overline{\mathcal{M}}_\Gamma$.
Step 6 After we have computed the clutching pullbacks of the spin stratum class and expressed the pullback classes in the basis computed by admcycles, we solve the system of linear equations and obtain the spin stratum class, expressed in a basis computed by admcycles.
If we just want to theoretically show that the spin stratum classes are recursively computable, we can drop Assumption <ref> and argue just by Proposition <ref> and the Clutching Pullback Formula.
We can proceed with the algorithm above, but pretend we have a basis for the cohomology ring of every moduli space of stable curves $\overline{\mathcal{M}}_{g,n}$. Let $\mu=(m_1,...,m_r,(-1,-1)^{t})$ be a signature of even type (the expression $(-1,-1)^{t}$ means that we have $2t$ simple poles and these simple poles are paired up by residue condition) where $-2t+\sum_{i=1}^r m_i=2g-2$. The codimension of the spin stratum class of such a signature will be
\begin{align*}
\begin{cases}
2g+2t-2 & \mbox{ if all } m_i \mbox{ are non-negative,}\\
2g+2t & \mbox{ otherwise.}
\end{cases}
\end{align*}
Recall the range of injectivity in Proposition <ref>:
\begin{align*}
\begin{cases}
2n-7 &\mbox{ if } g=0\\
2g-2 & \mbox{ if } n=0,1\\
2g & \mbox{ if } n=2\\
2g-2+n & \mbox{ otherwise }
\end{cases}
\end{align*}
Note that the number of markings $n=r+2t$. In addition,
\begin{align*}
\begin{cases}
r\geq 1 &\mbox{ if all $m_i$ are positive}\\
r\geq 2 &\mbox{ otherwise.}
\end{cases}
\end{align*}
For $g=0$, the range $d(g,n)=2(r+2t)-7=2r+4t-7$. If all $m_i$ are positive, then the codimension is $2t-2$ and $d(g,n)-(2t-2)=2r+2t-5\geq 0$ unless $r=1,t=1$, which is the case of $\mu=(0,-1,-1)$. Otherwise, the codimension will be $2t$ and $d(g,n)-2t=2r+2t-7\geq 0$ unless $r=2,t=1$, which is the case of $\mu=(2k,-2k,-1,-1)$. Since the initial cases $\mu=(0,-1,-1)$ and $\mu=(2k,-2k,-1,-1)$ are known, we can recursively compute the spin stratum classes for $g=0$ by the algorithm above theoretically via the clutching pullback formula and solving a system of linear equations.
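The dimension count above is mechanical, and the two exceptional cases for $g=0$ can be verified with a short Python sketch (function names are ours; `all_nonneg` flags whether all $m_i$ are non-negative):

```python
def codim_spin_class(g, t, all_nonneg):
    """Codimension of the spin stratum class of mu = (m_1..m_r,(-1,-1)^t)."""
    return 2*g + 2*t - 2 if all_nonneg else 2*g + 2*t

def injectivity_range(g, n):
    """Range of injectivity d(g, n) of the clutching pullbacks."""
    if g == 0:
        return 2*n - 7
    if n in (0, 1):
        return 2*g - 2
    if n == 2:
        return 2*g
    return 2*g - 2 + n

# The exceptional g = 0 cases, where d(g, n) < codimension:
assert injectivity_range(0, 3) < codim_spin_class(0, 1, True)    # mu = (0,-1,-1)
assert injectivity_range(0, 4) < codim_spin_class(0, 1, False)   # mu = (2k,-2k,-1,-1)
# One additional marking already restores the inequality:
assert injectivity_range(0, 5) >= codim_spin_class(0, 1, False)
```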
For $g>0$, we have
\begin{align*}
\begin{cases}
2g-3+(1+2t)\leq 2g-3+(r+2t)=d(g,n) &\mbox{ if all $m_i$ are non-negative,}\\
2g-3+(3+2t)\leq 2g-3+(r+2t)=d(g,n) &\mbox{ if } r\geq 3.
\end{cases}
\end{align*}
The remaining signatures for $g>0$ to which we cannot apply the clutching pullback method will be of the form $(a,-b,(-1,-1)^t)$. By Proposition <ref>, the spin stratum classes of these signatures can be reduced to the computation of spin stratum classes to which the clutching pullback method can be applied.
After we have presented our algorithm, we want to explain why the recursion based on Assumption <ref> will work. The main problem is whether the emergence of simple poles will finally reduce to the cases we manage to handle.
(i) If $\mu=(m_1,...,m_n)$ is of even type, then the signature $\mu'=(m_1,...,m_n,-1,-1)$ (obtained by degenerating to irreducible nodal curve) will have length of at least $3$. By Assumption <ref>, to compute the spin stratum class, it suffices to compute the clutching pullbacks with respect to the one-edge stable graphs of compact type. Thus, in the recursion, we only need to consider spin stratum classes of signatures containing at most one pair of simple poles.
(ii) Moreover, the signatures $\mu'=(m_1,...,m_n,-1,-1)$ appearing in the computation of the clutching pullback with respect to $\Gamma_0$ are of length at most $4$. This is because if $n\geq 3$, then we will not need to compute the clutching pullbacks with respect to $\Gamma_0$ due to Assumption <ref>.
(iii) Our algorithm only needs to handle horizontal level graphs of compact type for signatures of the form $(m_1,m_2,-1,-1)$ where $m_1,m_2\geq 0$. Then according to Section <ref>, we will reduce to compute the spin stratum classes associated to signatures $\mu_0=(m_0,-1,-1)$ and $\mu_1=(m_1,-1,-1)$ respectively. For both signatures, Assumption <ref> still applies, so we will not need to do the clutching pullback with respect to $\Gamma_0$.
(iv) The recursion of computing spin stratum class associated to a signature of the form $\mu=(m_0,m_1)$ will end up either with the computation for signatures of the form $(2k,-2k,-1,-1)$ or with the computation for the signature $(0,-1,-1)$.
By the discussion above, if Assumption <ref> is true for $g\leq g_0$ and $n\leq n_0$, then our algorithm keeps the computation of clutching pullbacks within the signatures we manage to handle. Then as in the proof of Theorem <ref>, we can conclude that for $g\leq g_0$ and $n\leq n_0$, our algorithm will yield the correct spin stratum class.
We will give some explicit examples of computing the spin stratum classes in Section <ref>.
§.§ diffstrata commands for handling (spin) strata
We now show how to input a (spin) additive generator or tautological class in diffstrata and call the methods we have coded to compute the spin stratum classes. We first need to define a stratum (with/without spin structure), using either the class GeneralisedStratum or its subclass SpinStratum. For example, we can input the compactified generalised stratum of differentials of even type $\mathbb{P}\Xi\overline{\mathcal{M}}_{1,3}^\mathfrak{R}(4,-2,-2)$, where the residue condition is $r_2=0$, by the following code:
sage: from admcycles.diffstrata import *
sage: from admcycles.diffstrata.spinstratum import SpinStratum, addspin
sage: X = SpinStratum([Signature((4,-2,-2))],res_cond=[[(0,1)]])
# alternatively
sage: X_nospin = GeneralisedStratum([Signature((4,-2,-2))],res_cond=[[(0,1)]])
sage: X = addspin(X_nospin)
An object of the class SpinStratum has a list of vertical two-level graphs. For example, if we want to enter a (spin) additive generator of the stratum above:
sage: from admcycles.diffstrata.spinstratum import AG_with_spin, AG_addspin
sage: A = AG_with_spin(X,((2,),0),leg_dict={1:2,2:1})
# alternatively
sage: A_nospin = AdditiveGenerator(X_nospin,((2,),0),leg_dict={1:2,2:1})
sage: A = AG_addspin(A_nospin)
The tuple $((2,),0)$ refers to the $3$rd two-level graph on the list bics of two-level graphs. If we enter $((),0)$, then it refers to the underlying level graph of the generalised stratum itself. The keyword leg_dict={1:2,2:1} means that the level graph is decorated by the monomial $\psi_1^2\psi_2$. The tautological class corresponding to twice this additive generator will be:
sage: from admcycles.diffstrata.spinstratum import ELGT_with_spin, ELGT_addspin
sage: E = ELGT_with_spin(X,[(2,A)])
# alternatively
sage: E_nospin = ELGTautClass(X_nospin,[(2,A_nospin)])
sage: E = ELGT_addspin(E_nospin)
Hence, the following code represents the (spin) stratum class in the tautological ring of $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$.
sage: A_nospin = AdditiveGenerator(X_nospin,((),0)) # no spin
sage: E_nospin = ELGTautClass(X_nospin,[(1,A_nospin)]) # no spin
sage: A = AG_with_spin(X,((),0))
sage: E = ELGT_with_spin(X,[(1,A)])
To compute the (spin) stratum class in the tautological ring of $\overline{\mathcal{M}}_{g,n}$, one can use the following:
sage: stratum_cl = E_nospin.to_prodtautclass().pushforward() #nospin
sage: spin_stratum_cl = E.to_prodtautclass_spin().pushforward()
§.§ Cross-checking our results with existing results
The spin stratum class for a stratum of differentials on genus $1$ curves can be easily determined. The odd spin stratum $\mathcal{H}_1(\mu)^-$ coincides with the stratum $\mathcal{H}_1(\mu')$, where $2\mu'=\mu$. Indeed, one can see this from the perspective of theta characteristics. Let $(C;p_1,...,p_n)$ be a marked elliptic curve and $\mu=(2m_1,...,2m_n)$ be a signature such that $\sum_i m_i=0$. Then the sheaf $\mathcal{O}_C(\sum_i m_ip_i)$ has a global section if and only if it is isomorphic to $\mathcal{O}_C\simeq \omega_C$. In other words, $h^0(\mathcal{O}_C(\sum m_ip_i))=1$ if $\sum m_ip_i$ is a canonical divisor, and otherwise $h^0(\mathcal{O}_C(\sum m_ip_i))=0$. This implies that $\mathcal{H}_1(2m_1,...,2m_n)^-\simeq \mathcal{H}_1(m_1,...,m_n)$. Hence,
\[[\mathcal{H}_1(2m_1,...,2m_n)]^{\spin}=[\mathcal{H}_1(2m_1,...,2m_n)]-2[ \mathcal{H}_1(m_1,...,m_n)].\]
Since all the explicit computations have to be done by admcycles, we will show in Section <ref> how one can cross-check our results on spin stratum classes in admcycles with the existing results. We will demonstrate the commands for doing the cross-check for the following signatures:
* $g=1$: $\mu=(4,4,-8)$ (by the discussion above);
* $g=2$: $\mu=(2)$ (cf. <cit.>) and $\mu=(4,-2)$ (cf. <cit.>, <cit.>);
* $g=3$: $\mu=(4)$ and $\mu=(2,2)$ (cf. <cit.>).
§.§ Cross-checking the results in admcycles
We apply the following methods in admcycles to check the results for $\mu=(4,4,-8)$:
sage: from admcycles import *
sage: from admcycles.diffstrata import *
sage: from admcycles.diffstrata.spinstratum import Spin_strataclass
sage: Spinclass = Strataclass(1,1,(4,4,-8))- 2*Strataclass(1,1,(2,2,-4))
sage: Spinclass.basis_vector()
(0, -17, 9, 33, -26)
sage: X = SpinStratum([Signature((4,4,-8))])
sage: A=AG_with_spin(X,((),0))
sage: E=ELGT_with_spin(X,[(1,A)])
sage: our_result = E.to_prodtautclass_spin().pushforward()
sage: our_result.basis_vector()
(0, -17, 9, 33, -26)
# alternatively, we have a simpler method Spin_strataclass
sage: Spin_strataclass((4,4,-8)).basis_vector()
(0, -17, 9, 33, -26)
If $\mu=(2)$, according to <cit.>, the stratum of type $(2)$ is connected and hyperelliptic of odd spin, thus we have $[\overline{\mathcal{H}}(2)]^{\spin}=-[\overline{\mathcal{H}}(2)]$.
sage: Spinclass = -Strataclass(2,1,(2,))
sage: Spinclass.basis_vector()
(1/2, -7/2, 1/2)
sage: X = SpinStratum([Signature((2,))])
sage: A=AG_with_spin(X,((),0))
sage: E=ELGT_with_spin(X,[(1,A)])
sage: our_result = E.to_prodtautclass_spin().pushforward()
sage: our_result.basis_vector()
(1/2, -7/2, 1/2)
If $\mu=(4,-2)$, the stratum of differentials has a hyperelliptic component of odd spin and a component of even spin, according to <cit.>. In <cit.>, Schmitt and van Zelm have used the method of clutching pullbacks to compute the fundamental classes of loci of admissible covers in $\overline{\mathcal{M}}_{g,n}$. The hyperelliptic locus is a special case. The method is built into admcycles.
sage: G = CyclicPermutationGroup(2)
sage: H = HurData(G,[G[1],G[1],G[1],G[1],G[1],G[1]])
sage: Hyp = (1/factorial(4))*Hidentify(2,H,markings=[1,2])
sage: Spinclass = Strataclass(2,1,(4,-2)) - 2*Hyp
sage: Spinclass.basis_vector()
(-63/2, 21/2, -22, -27, 44, 41, 54, 63/2, -147/2, 21/2, -1, -6, -21/2, 0)
sage: X = SpinStratum([Signature((4,-2))])
sage: A=AG_with_spin(X,((),0))
sage: E=ELGT_with_spin(X,[(1,A)])
sage: our_result = E.to_prodtautclass_spin().pushforward()
sage: our_result.basis_vector()
(-63/2, 21/2, -22, -27, 44, 41, 54, 63/2, -147/2, 21/2, -1, -6, -21/2, 0)
If $\mu=(4)$, according to <cit.>, the differential stratum has a hyperelliptic component of even spin and a component of odd spin. Hence we can use the same trick as before.
sage: G = CyclicPermutationGroup(2)
sage: H = HurData(G,[G[1],G[1],G[1],G[1],G[1],G[1],G[1],G[1]])
sage: Hyp = (1/factorial(7))*Hidentify(3,H,markings=[1])
sage: Spinclass = 2*Hyp - Strataclass(3,1,(4,))
sage: Spinclass.basis_vector()
(-729, 1099/12, -787/12, 213, -1151/12, 33/2, 533/2, 985/12, -779/12, -78, 219/2, 19/2, 50, -2213/12, -197/24, 35)
sage: X = SpinStratum([Signature((4,))])
sage: A=AG_with_spin(X,((),0))
sage: E=ELGT_with_spin(X,[(1,A)])
sage: our_result = E.to_prodtautclass_spin().pushforward()
sage: our_result.basis_vector()
(-729, 1099/12, -787/12, 213, -1151/12, 33/2, 533/2, 985/12, -779/12, -78, 219/2, 19/2, 50, -2213/12, -197/24, 35)
The fundamental class of the hyperelliptic component was actually also computed by Chen in <cit.>:
\begin{align*}
[\overline{\mathcal{H}}_{3}(4)^+]=& \psi(18\lambda-2\delta_0-9\delta^{\{1\}}_1-6\delta^{\{1\}}_2)-\lambda(45\lambda-\frac{19}{2}\delta_0-24\delta^{\{1\}}_2)\\&-\frac{1}{2}\delta_0^2-\frac{5}{2}\delta_0\delta^{\{1\}}_2-3(\delta^{\{1\}}_2)^2
\end{align*}
One can check that it coincides with the class computed in admcycles.
sage: Hyp.basis_vector()
(577/2, -36, 49/2, -81, 77/2, -8, -108, -65/2, 25, 31, -83/2, -7/2, -41/2, 151/2, 13/4, -14)
sage: chen_Hyp = psiclass(1,3,1)*(18*lambdaclass(1,3,1)- tautgens(3,1,1)[4]-9*tautgens(3,1,1)[3]-6*tautgens(3,1,1)[2])-lambdaclass(1,3,1)*(45*lambdaclass(1,3,1)-(19/4)*tautgens(3,1,1)[4]-24*tautgens(3,1,1)[2])-(1/8)*tautgens(3,1,1)[4]^2-(5/4)*tautgens(3,1,1)[4]*tautgens(3,1,1)[2]-3*tautgens(3,1,1)[2]^2
sage: chen_Hyp.basis_vector()
(577/2, -36, 49/2, -81, 77/2, -8, -108, -65/2, 25, 31, -83/2, -7/2, -41/2, 151/2, 13/4, -14)
If $\mu=(2,2)$, the stratum also has exactly two components, namely the hyperelliptic component and the odd spin component. We can use the same method as in the case of $\mu=(4)$ to compute the spin stratum class. The result agrees with the spin stratum class yielded by our algorithm.
In the next section, we will introduce the conjecture of the twisted spin double ramification cycles, and show how our algorithm can be improved by making use of it.
§ CONJECTURAL FORMULA OF THE SPIN TWISTED DR CYCLE
In this section, we will first introduce the theorem of twisted double ramification cycles (twisted DR cycles) and then state the conjectural formula of the spin twisted double ramification cycle from <cit.>. We now recall the definition of a twisted double ramification cycle. A $k$-twisted double ramification cycle $\DR_g(a)$, where the $k$-ramification vector $a=(a_1,...,a_n)$ is a tuple of integers summing up to $k(2g-2+n)$, is a cycle on $\overline{\mathcal{M}}_{g,n}$ compactifying the condition $\omega_{\log}^{\otimes k}\simeq \mathcal{O}_C(a_1p_1+...+a_np_n)$.
Notice that given a signature $\mu=(m_1,...,m_n)$ of a stratum of $k$-differentials (i.e. $\sum_im_i=k(2g-2)$), one can define a $k$-twisted double ramification cycle $\DR_g(a_\mu)$ by letting $a_\mu=(m_1+k,...,m_n+k)$.
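As a quick consistency check of this construction (function name is ours): for $\mu=(2,2)$ and $k=1$ the genus is $3$, and $a_\mu=(3,3)$ indeed sums to $k(2g-2+n)$:

```python
def ramification_vector(mu, k):
    """The k-ramification vector a_mu = (m_1 + k, ..., m_n + k) attached
    to a signature mu of k-differentials (so sum(mu) = k*(2g-2))."""
    return tuple(m + k for m in mu)

mu, k = (2, 2), 1
g = (sum(mu) // k + 2) // 2          # recover g from sum(mu) = k*(2g-2)
a = ramification_vector(mu, k)
assert a == (3, 3)
assert sum(a) == k * (2*g - 2 + len(mu))   # a is a k-ramification vector
```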
§.§ A formula for the $k$-twisted double ramification cycles
Now we will introduce a formula expressing the $k$-twisted double ramification cycles in terms of tautological classes. For the (non-twisted) double ramification cycles, Pixton constructed a formula in terms of generators of the tautological ring of $\overline{\mathcal{M}}_{g,n}$, which was proved in <cit.>. The formula for double ramification cycles with targets was later proved in <cit.>.
The formula for $k$-twisted double ramification cycles is very similar to Pixton's formula. We need some definitions in order to state it. Let $a=(a_1,...,a_n)$ be a $k$-ramification vector. An admissible $k$-weighting $\pmod r$ of a dual graph $\Gamma$ is a function $w:H(\Gamma)\longrightarrow \{0,1,...,r-1\}$, where $H(\Gamma)$ is the set of half-edges and legs of $\Gamma$, such that
* for the leg $h_i$ corresponding to marking $i\in \{1,...,n\}$, one has $w(h_i)=a_i \pmod r$,
* for any edge $e$ consisting of two half-edges $h,h'$, we have $w(h)+w(h')=0 \pmod r$,
* for any $v\in V(\Gamma)$,
\[\sum_{h\in H_\Gamma(v)} w(h)=k(2g(v)-2+n(v)) \pmod r , \]
where $H_\Gamma(v)$ is the set of half-edges and legs incident to $v$.
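To make the three conditions concrete, here is a minimal stdlib-Python sketch (the graph and all names are our own toy setup) that enumerates the admissible $k$-weightings $\pmod r$ on a two-vertex "banana" graph with two edges. Since $h^1(\Gamma)=1$ for this graph, the enumeration finds $r^{h^1(\Gamma)}=r$ admissible weightings.

```python
from itertools import product

# Admissible k-weightings mod r on the two-edge "banana" graph: vertex v0
# (genus g0, carrying the n legs with values a) and vertex v1 (genus g1),
# joined by two edges.  Each edge contributes one free half-edge value w;
# the opposite half-edge is forced to -w mod r.

def admissible_weightings(a, g0, g1, k, r):
    n = len(a)
    sols = []
    for w1, w2 in product(range(r), repeat=2):
        # condition at v0: n legs plus 2 outgoing half-edges, so n(v0) = n + 2
        c0 = (sum(a) + w1 + w2) % r == (k * (2 * g0 - 2 + n + 2)) % r
        # condition at v1: the two opposite half-edges carry -w1, -w2 mod r
        c1 = (-w1 - w2) % r == (k * (2 * g1 - 2 + 2)) % r
        if c0 and c1:
            sols.append((w1, w2))
    return sols

# k=1, a=(1,1,3): sum(a) = 5 = k(2g-2+n) with g = g0 + g1 + h^1 = 0+1+1 = 2.
sols = admissible_weightings((1, 1, 3), g0=0, g1=1, k=1, r=7)
assert len(sols) == 7  # = r^{h^1(Gamma)} with h^1 = 1 in this example
```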
We define a mixed degree tautological class:
\begin{align*}
P^{r,\bullet}_g(a)=\sum_{\Gamma\in G_{g,n}}\sum_{w\in W_{\Gamma,r}(a)} \frac{1}{|\Aut(\Gamma)|}\frac{1}{r^{h^1(\Gamma)}}\Cont_{a,\Gamma,w},
\end{align*}
where $G_{g,n}$ is the set of all dual graphs with genus $g$ and $n$; $W_{\Gamma,r}(a)$ is the set of admissible $k$-weightings $\pmod r$ on $\Gamma$ with respect to the ramification vector $a$; and
\begin{align*}
\Cont_{a,\Gamma,w}=& \xi_{\Gamma *}\Bigg[\prod_{v\in V(\Gamma)}\exp(-k^2\kappa_1[v])\prod_{i=1}^n\exp(a_i^2\psi_{h_i})\prod_{e=(h,h')\in E(\Gamma)}\frac{1-\exp(-w(h)w(h')(\psi_h+\psi_{h'}))}{\psi_h+\psi_{h'}} \Bigg].
\end{align*}
For sufficiently large values of $r$, the class $P^{r,\bullet}_{g}(a)$ is a mixed degree tautological class on $\overline{\mathcal{M}}_{g,n}$ whose coefficients are polynomials in $r$. We denote by $P^{\bullet}_g(a)$ the class obtained by substituting $r=0$ into these polynomials.
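The "polynomial in $r$, then set $r=0$" mechanism can be seen on a single self-node edge, where the weight $w(h)$ ranges freely over $0,...,r-1$ and $w(h')=-w(h) \pmod r$: the sum of the products $w(h)w(h')$ equals $r(r-1)(r+1)/6$, visibly a polynomial in $r$ vanishing at $r=0$. A stdlib-only sketch (helper names are ours) recovers this polynomial by Lagrange interpolation.

```python
from fractions import Fraction

def edge_sum(r):
    """Sum of w(h)w(h') = w*(r-w) over the r admissible values w = 0..r-1."""
    return sum(w * (r - w) for w in range(r))

def lagrange_eval(points, x):
    """Evaluate the interpolating polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

# Sample at enough values of r to pin down the (cubic) polynomial.
pts = [(r, edge_sum(r)) for r in range(5, 11)]
# The interpolated polynomial agrees with r(r-1)(r+1)/6 ...
assert lagrange_eval(pts, 20) == Fraction(20 * 19 * 21, 6)
# ... and substituting r = 0 kills this edge term.
assert lagrange_eval(pts, 0) == 0
```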
Finally, we can give the formula of the $k$-twisted double ramification cycle:
\begin{align}
\DR_g(a)=2^{-g}P^g_g(a),
\end{align}
where $P^g_g(a)$ is the degree $g$ part of $P^{\bullet}_g(a)$.
§.§ The theorem of $k$-twisted DR cycle
In <cit.>, Farkas and Pandharipande conjectured that the $1$-twisted double ramification cycle $\DR_g(a_\mu)$ agrees with the weighted fundamental class of the moduli space of twisted canonical divisors $[\widetilde{H}_g(\mu)]$. Recall the definition of twisted $k$-canonical divisors from Remark <ref>. The conjecture was later generalised to $k>1$ in <cit.>, and subsequently proved in <cit.> and <cit.>.
A simple star graph is a vertical two-level enhanced level graph of type $\mu=(m_1,...,m_n)$ (where $\sum_im_i=k(2g-2)$) such that:
* there is only one vertex $v_0$ (which is called the center) on the bottom level;
* there is no leg on the top level representing a pole;
* all the orders of poles and zeros of any vertex on the top level are divisible by $k$.
We denote the set of simple star graphs compatible with a given signature $\mu$ by $SG_1(\mu)$. Now we can state the theorem on the $k$-twisted double ramification cycles.
Let $g,n\geq 0$ and $k>0$, and let $\mu=(m_1,...,m_n) \in \mathbb{Z}^n\setminus k\mathbb{Z}^n_{\geq 0}$ be such that $\sum_im_i=k(2g-2)$. Then we have
\begin{align}
\DR_g(a_\mu)= \sum_{\Delta\in SG_1(\mu)}\frac{\prod_{e\in E(\Delta)}\kappa_e}{k^{N_0}|\Aut(\Delta)|}\xi_{\Delta*}\Bigg[& \big[\overline{\mathcal{H}}_{g(v_0)}^k(\mu[v_0],\kappa[v_0]-k)\big]\cdot\\
&\prod_{v\in V(\Delta^\bot)}\big[\overline{\mathcal{H}}_{g(v)}(\frac{\mu[v]}{k},\frac{\kappa[v]-k}{k})\big]\Bigg],
\end{align}
where $\kappa_e$ is the enhancement of the edge $e$ and $N_0$ is the number of vertices on the top level. In addition, $\overline{\mathcal{H}}_{g(v)}(\mu[v], \kappa[v]-k)$ denotes the stratum of $k$-differentials whose signature is prescribed by $\mu[v]$ and the enhancements of the edges adjacent to $v$.
§.§ The conjecture on spin $k$-twisted DR cycle
We will first define the spin double ramification cycle and then state the conjecture of the spin version of Theorem <ref>. Let $k$ be a positive odd integer and $a$ be a ramification vector whose entries are all odd. We define the spin variant of the mixed degree tautological class $P^{r,\bullet}_g(a)$ (where $r$ is an even number) as:
\begin{align*}
P^{r,\spin,\bullet}_g(a)=\sum_{\Gamma\in G_{g,n}}\sum_{w\in W_{\Gamma,r,odd}(a)} \frac{1}{2^{g-h^1(\Gamma)}|\Aut(\Gamma)|}\frac{1}{r^{h^1(\Gamma)}}\Cont_{a,\Gamma,w},
\end{align*}
where $W_{\Gamma,r,odd}(a)$ is the set of admissible $k$-weightings in which every edge is weighted by an odd number. For sufficiently large values of $r$, $P^{r,\spin,\bullet}_g(a)$ is a mixed degree tautological class whose coefficients are polynomials in $r$. We denote by $P^{\spin,\bullet}_g(a)$ the class obtained by substituting $r=0$ into these polynomials.
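The parity restriction on edge weights is the only change compared to the non-spin count. This can be checked on a small example: for a two-vertex "banana" graph with two edges (our own toy setup, stdlib Python only), all-odd leg values, and even $r$, the admissible weightings with odd edge weights number $(r/2)^{h^1(\Gamma)}$.

```python
from itertools import product

# Count admissible 1-weightings mod r on the banana graph (v0 of genus 0
# carrying the legs a = (1, 1, 3), v1 of genus 1, two edges), keeping only
# those where every edge carries an odd weight; r must be even.

def odd_weightings(a, g0, g1, k, r):
    n = len(a)
    count = 0
    for w1, w2 in product(range(1, r, 2), repeat=2):  # odd weights only
        # vertex conditions, as in the definition of admissible weightings
        c0 = (sum(a) + w1 + w2) % r == (k * (2 * g0 - 2 + n + 2)) % r
        c1 = (-w1 - w2) % r == (k * (2 * g1 - 2 + 2)) % r
        if c0 and c1:
            count += 1
    return count

# For this graph h^1(Gamma) = 1, so we expect (r/2)^1 odd solutions.
for r in (8, 10, 12):
    assert odd_weightings((1, 1, 3), g0=0, g1=1, k=1, r=r) == r // 2
```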
The formula for the spin $k$-twisted double ramification cycle is:
\begin{align*}
\DR_g^{\spin}(a)=2^{-g}P^{\spin,g}_g(a).
\end{align*}
Finally, we can state the conjecture on the spin $k$-twisted double ramification cycles:
Let $g,n\geq 0$ and let $k$ be a positive odd integer. In addition, let $\mu=(m_1,...,m_n) \in \mathbb{Z}^n\setminus k\mathbb{Z}^n_{\geq 0}$ be such that $\sum_im_i=k(2g-2)$ and all the entries of $\mu$ are even. Then we have
\begin{align}\label{eq:spin_pixton}
\begin{split}
\DR^{\spin}_g(a_\mu)= \sum_{\Delta\in SG_1^{odd}(\mu)}\frac{\prod_{e\in E(\Delta)}\kappa_e}{k^{N_0}|\Aut(\Delta)|}\xi_{\Delta*}\Bigg[& \big[\overline{\mathcal{H}}_{g(v_0)}^k(\mu[v_0],\kappa[v_0]-k)\big]^{\spin}\cdot\\ &\prod_{v\in V(\Delta^\bot)}\big[\overline{\mathcal{H}}_{g(v)}(\frac{\mu[v]}{k},\frac{\kappa[v]-k}{k})\big]^{\spin}\Bigg],
\end{split}
\end{align}
where $SG_1^{odd}(\mu)$ is the subset of $SG_1(\mu)$ consisting of the level graphs such that the enhancement of every edge is odd.
The left hand side of (<ref>) is already computable in admcycles, but the right hand side requires us to compute the spin stratum classes of strata of $k$-differentials. For $k=1$, the algorithms we developed can help compute the right hand side, so that we can verify the conjecture for many signatures $\mu=(m_1,...,m_n)$ ($\sum_im_i=2g-2$) of meromorphic strata for various values of $(g,n)$.
Note that on the right hand side of (<ref>), the contribution of a vertex on the top level of a simple star graph is just the spin stratum class of some holomorphic stratum of $1$-differentials. Thus, if we assume the conjecture to be true, we can first use our algorithm to compute the spin stratum classes of strata of holomorphic $1$-differentials and then compute the spin stratum classes of strata of meromorphic $k$-differentials by recursively solving equation (<ref>).
In admcycles, one can directly call the function Strataclass to compute the spin stratum class for a signature of even type. Moreover, one can also use the keyword argument spin_conj=True to apply Conjecture <ref> to the computation of the spin stratum class.
sage: from admcycles import *
sage: cl1=Strataclass(1,1,(6,-4),spin=True,spin_conj=True)
sage: cl2=Strataclass(1,1,(6,-4),spin=True)
sage: cl1 == cl2
§ EXAMPLES AND COMMANDS
§.§ Example: Applying the clutching pullback formula to $[\overline{\mathcal{H}}_1(4,-2,-2)]$
Before we start to consider the clutching pullback explicitly, let us introduce the notation of boundary strata of $\overline{\mathcal{M}}_{g,n}$:
* For $P\subseteq I=\{1,...,n\}$ and $g\geq q\geq 0$, we use $S_q^P$ to denote the boundary stratum of curves with one node such that the markings in $P$ lie in the component of genus $q$. Notice that $S_q^P=S_{g-q}^{P^c}$, where $P^c$ is the complement of $P$. We use $\delta_{q}^{P}$ to denote the fundamental class of the corresponding boundary stratum and $\Gamma_q^{P}$ for the corresponding dual graph.
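The identification $S_q^P=S_{g-q}^{P^c}$ amounts to choosing between two equivalent labels of the same stratum; a tiny illustrative Python helper (our own, not part of admcycles) normalizes a label to a canonical representative.

```python
# Normalize the label (q, P) of a one-node boundary stratum S_q^P of
# \bar M_{g,n}: since S_q^P = S_{g-q}^{P^c}, pick the lexicographically
# smaller of the two descriptions as the canonical one.

def canonical_boundary(g, n, q, P):
    P = frozenset(P)
    Pc = frozenset(range(1, n + 1)) - P
    side_a = (q, tuple(sorted(P)))
    side_b = (g - q, tuple(sorted(Pc)))
    return min(side_a, side_b)

# S_0^{1} and S_2^{2,3} label the same boundary stratum of \bar M_{2,3}:
assert canonical_boundary(2, 3, 0, {1}) == canonical_boundary(2, 3, 2, {2, 3})
```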
* $S_0$ refers to the boundary stratum parametrizing curves with exactly one self-node, and $\delta_0$ denotes its fundamental class. We denote the dual graph of this boundary stratum by $\Gamma_0$.
Furthermore, recall that on $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$, a boundary stratum is parametrised by some enhanced level graph $\Delta$. A codimension one boundary stratum of $\mathbb{P}\Xi\overline{\mathcal{M}}_{g,n}(\mu)$ is parametrized by some enhanced level graph $\Delta$ that has exactly one horizontal edge or is vertical with two levels. We use the following notations to denote the divisor corresponding to a one-edge horizontal graph:
(i) $D_h$ denotes the divisor corresponding to the boundary stratum of the self-loop graph;
(ii) $D_{q}^P$ denotes the divisor of the boundary stratum associated to the one-edge horizontal level graph of compact type such that the markings in $P$ lie in the component of genus $q$;
(iii) We will also use $\Gamma_0$ and $\Gamma_{q}^{P}$ to denote the level graphs of the divisors in (i) and (ii), respectively.
We now consider the stratum class $[\overline{\mathcal{H}}_1(4,-2,-2)]\in H^{2}(\overline{\mathcal{M}}_{1,3})$. The boundary divisors of the moduli space $\mathbb{P}\Xi \overline{\mathcal{M}}_{1,3}(4,-2,-2)$ are parametrized by vertical two-level graphs or one-edge horizontal graphs. We now consider the clutching pullback of $[\overline{\mathcal{H}}_1(4,-2,-2)]$ with respect to the one-edge stable graphs. The vertical two-level graphs and one-edge horizontal graphs which have some $\Gamma_0$-structures are:
[Figure: the five one-edge level graphs carrying a $\Gamma_0$-structure. Graphs (A) and (B) are two-level graphs whose two vertices are joined by two vertical edges, each of order $0$ on top and $-2$ on the bottom; in (A) the bottom vertex carries the legs $4$ and $-2_1$ and the top vertex carries $-2_2$, while in (B) the roles of $-2_1$ and $-2_2$ are exchanged. Graphs (C) and (D) are two-level graphs with two vertical edges, the top vertex carrying the legs $-2_1,-2_2$ and the bottom vertex carrying the leg $4$; in (C) the edge orders are $(2,-4)$ and $(0,-2)$, and in (D) both edges have orders $(1,-3)$. Graph (E) is the one-vertex graph with a horizontal self-loop of orders $(-1,-1)$ and legs $4,-2,-2$.]
In this example, $\overline{\mathcal{M}}_{\Gamma_0}=\overline{\mathcal{M}}_{0,5}$. The contribution of each of the above vertical two-level graphs or one-edge horizontal graphs to $\xi^*_{\Gamma_0}[\overline{\mathcal{H}}_{1}(4,-2,-2)]$ is:
(A) There are four $\Gamma_0$-structures on Graph (A). They map the $4$-th and $5$-th markings of $\overline{\mathcal{M}}_{0,5}$ to either one of the vertical edges, and each edge gives two possibilities to allocate the markings. Hence, its contribution is
$\frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {5};
\node at (0.5,-0.7) (n2) {2};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
+$\frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {4};
\node at (0.5,-0.7) (n2) {2};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$ + \frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {1};
\node at (0.5,-0.7) (n2) {5};
\node at (0.0,-1.0) (n1) {2};
\node at (-0.5,0.7) (n4) {3};
\node at (0.5,0.7) (n3) {4};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$ + \frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {1};
\node at (0.5,-0.7) (n2) {4};
\node at (0.0,-1.0) (n1) {2};
\node at (-0.5,0.7) (n4) {3};
\node at (0.5,0.7) (n3) {5};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$= \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {5};
\node at (0.5,-0.7) (n2) {2};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
+$ \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {4};
\node at (0.5,-0.7) (n2) {2};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
(B) Similarly, the contribution of Graph (B) is
$\frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {5};
\node at (0.5,-0.7) (n2) {3};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {2};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {4};
\node at (0.5,-0.7) (n2) {3};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {2};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$ + \frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {1};
\node at (0.5,-0.7) (n2) {5};
\node at (0.0,-1.0) (n1) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {4};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$ + \frac{1}{2}\cdot\frac{1}{1}\cdot\frac{1}{1} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {1};
\node at (0.5,-0.7) (n2) {4};
\node at (0.0,-1.0) (n1) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {5};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$= \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {5};
\node at (0.5,-0.7) (n2) {3};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {2};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
+$ \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (-0.5,-0.7) (n5) {4};
\node at (0.5,-0.7) (n2) {3};
\node at (0.0,-1.0) (n1) {1};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {2};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (B) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
(C) Graph (C) also has four $\Gamma_0$-structures, but they are not symmetric because the two vertical edges are not interchangeable. For the $\Gamma_0$-structures mapping the edge of $\Gamma_0$ to the left edge of the level graph, $\kappa_{f}=3$. If the edge of $\Gamma_0$ is mapped to the other edge, then $\kappa_{f}=1$. Thus, the contribution of Graph (C) is
$\frac{1}{1}\cdot\frac{3}{3}\cdot\frac{3}{3} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {5};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{1}\cdot\frac{3}{3}\cdot\frac{3}{3} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {4};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{1}\cdot\frac{3}{1}\cdot\frac{3}{3} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {5};
\node at (-0.5,-0.7) (n5) {1};
\node at (0,1.0) (n2) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {4};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{1}\cdot\frac{3}{1}\cdot\frac{3}{3} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {4};
\node at (-0.5,-0.7) (n5) {1};
\node at (0,1.0) (n2) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {5};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$=4 \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {5};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+4 \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {4};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
(D) The contribution of Graph (D) is
$\frac{1}{2}\cdot\frac{2}{2}\cdot\frac{4}{2} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {5};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{2}\cdot\frac{2}{2}\cdot\frac{4}{2} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {4};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{2}\cdot\frac{2}{2}\cdot\frac{4}{2} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {5};
\node at (-0.5,-0.7) (n5) {1};
\node at (0,1.0) (n2) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {4};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+\frac{1}{2}\cdot\frac{2}{2}\cdot\frac{4}{2} \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {4};
\node at (-0.5,-0.7) (n5) {1};
\node at (0,1.0) (n2) {3};
\node at (-0.5,0.7) (n4) {2};
\node at (0.5,0.7) (n3) {5};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$=2 \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {5};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {4};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
$+2 \Bigg[\begin{tikzpicture}[->,baseline=-3pt,node distance=1.3cm,thick,main node/.style={circle,draw,font=\Large,scale=0.5}]
\node at (0,0) (C) {};
\node [scale=.3,draw,circle,fill] [above of =C] (A) {};
\node [scale=.3,draw,circle,fill] [below of =C] (B) {};
\node at (0.5,-0.7) (n1) {1};
\node at (-0.5,-0.7) (n5) {4};
\node at (0,1.0) (n2) {2};
\node at (-0.5,0.7) (n4) {5};
\node at (0.5,0.7) (n3) {3};
\draw [-] (A) to (B);
\draw [-] (B) to (n1);
\draw [-] (A) to (n2);
\draw [-] (B) to (n5);
\draw [-] (A) to (n3);
\draw [-] (A) to (n4);
\end{tikzpicture}\Bigg]$
(E) The contribution of Graph (E) is just the class $[\overline{\mathcal{H}}_0^\mathfrak{R}(4,-2,-2,-1,-1)]$, where $\mathfrak{R}$ is induced by the residue condition $r_4+r_5=0$.
To check whether we are correct, we can use the methods in the admcycles package (for the commands, cf. <cit.>). We first compute the clutching pullback:
sage: from admcycles import *
sage: stratum_class = Strataclass(1,1,(4,-2,-2))
sage: stgraph = StableGraph([0],[[1,2,3,4,5]],[(4,5)])
sage: pull = stgraph.boundary_pullback(stratum_class)
sage: pull.totensorTautbasis(1,vecout=True)
(10, 5, -7, -7, 0)
On the other hand, we compute the result we get from the clutching pullback formula:
sage: A_contr=tautgens(0,5,1)[8]+tautgens(0,5,1)[7]
sage: B_contr=tautgens(0,5,1)[11]+tautgens(0,5,1)[10]
sage: C_contr=4*tautgens(0,5,1)[15]+4*tautgens(0,5,1)[14]
sage: D_contr=2*tautgens(0,5,1)[15]+2*tautgens(0,5,1)[14]
sage: E_contr= Strataclass(0,1,(4,-2,-2,-1,-1), res_cond=((0,0,0,1,1),))
sage: (A_contr+B_contr+C_contr+D_contr+E_contr).basis_vector()
(10, 5, -7, -7, 0)
Now we will carry out the clutching pullback with respect to the other one-edge stable graphs. There is only one vertical two-level graph (or one-edge horizontal graph) which is a degeneration of $\Gamma_1^{\{1\}}$, namely
[Figure: graph (F), a two-level graph with a genus $1$ bottom vertex carrying the leg $4$ and a top vertex carrying the legs $-2,-2$, joined by one vertical edge of orders $(2,-4)$.]
Hence, $\xi_{\Gamma_1^{\{1\}}}^*([\overline{\mathcal{H}}_1(4,-2,-2)])\in CH^1(\overline{\mathcal{M}}_{\Gamma_1^{\{1\}}})$ will be equal to the following
\[\frac{1}{1}\cdot\frac{3}{3}\cdot\frac{3}{3}\bigg(\pi_\top^*([\overline{\mathcal{M}}_{0,3}])\cdot \pi_\bot^*([\overline{\mathcal{H}}_1(4,-4)])\bigg)= 15\pi_\bot^*\psi_1.\]
We have also the following vertical two-level graphs:
[Figure: the vertical two-level graphs (G)--(J). Graphs (G) and (H) have a genus $1$ top vertex joined to the bottom vertex by one edge of orders $(2,-4)$; in (G) the bottom vertex carries the legs $4$ and $-2_2$ and the top vertex carries $-2_1$, while in (H) the roles of $-2_1$ and $-2_2$ are exchanged. Graph (I) has a genus $1$ top vertex joined by one edge of orders $(0,-2)$ to the bottom vertex, which carries all three legs $4,-2,-2$. Graph (J) has two top-level vertices, one of genus $1$ and one carrying the legs $-2,-2$, joined to the bottom vertex (carrying the leg $4$) by edges of orders $(0,-2)$ and $(2,-4)$, respectively.]
Graph (G) is a degeneration of $\Gamma_1^{\{2\}}$, while graph (H) is a degeneration of $\Gamma_1^{\{3\}}$. Similarly, we have
\begin{align*}
\xi_{\Gamma_1^{\{2\}}}^*([\overline{\mathcal{H}}_1(4,-2,-2)])= \frac{1}{1}\cdot\frac{3}{3}\cdot\frac{3}{3}\bigg(\pi_\top^*([\overline{\mathcal{H}}_1(2,-2)])\cdot\pi_\bot^*([\overline{\mathcal{M}}_{0,3}]) \bigg)= 3\pi_\top^*\psi_1\\
\xi_{\Gamma_1^{\{3\}}}^*([\overline{\mathcal{H}}_1(4,-2,-2)])= \frac{1}{1}\cdot\frac{3}{3}\cdot\frac{3}{3}\bigg(\pi_\top^*([\overline{\mathcal{H}}_1(2,-2)])\cdot\pi_\bot^*([\overline{\mathcal{M}}_{0,3}]) \bigg)= 3\pi_\top^*\psi_1
\end{align*}
Graphs (I) and (J) are degenerations of $\Gamma_1^\emptyset$. However, note that the top level of graph (J) has more than one vertex, with no residue condition relating their legs. Thus, the projection pushforward of graph (J) is zero. Then we have
\[\xi_{\Gamma_1^\emptyset}^*([\overline{\mathcal{H}}_1(4,-2,-2)])= \frac{1}{1}\cdot\frac{1}{1}\cdot\frac{1}{1}\bigg(\pi_\top^*([\overline{\mathcal{M}}_{1,1}])\cdot\pi_\bot^*([\overline{\mathcal{H}}_0^\mathfrak{R}(4,-2,-2,-2)]) \bigg)= \pi_\bot^*\psi_4, \]
where $\mathfrak{R}$ represents the residue condition $r_4=0$.
§.§ Examples of computing the spin stratum class by clutching pullbacks
In this subsection, we will present examples of computing the spin stratum class (with and without residue conditions). In the following example, we will consider the stratum $\mathcal{H}_{1}(4,-2,-2)$, which has been investigated in Section <ref>.
The two-level graphs of $\mathbb{P}\Xi \overline{\mathcal{M}}_{1,3}(4,-2,-2)$, which are of compact type, are
[Figure: two-level graphs (F), (G), (H), (I), and (J); TikZ code omitted.]
Notice that the top level of the graph $(J)$ has two components which can be scaled independently. Thus $p_{J*}[\mathbb{P}\Xi\overline{\mathcal{M}}_J]^{\spin}=0$ as the image will be of lower dimension. Hence it will not affect the spin stratum class.
The pushforward of the spin variant of the divisor class of the boundary stratum associated to the level graph (F), on the moduli space of disconnected stable curves $\overline{\mathcal{M}}_F$, is:
\begin{align*}
p_{F*}[\mathbb{P}\Xi\overline{\mathcal{M}}_F]^{\spin}=\pi_{\top}^*[\overline{\mathcal{H}}(2,-2,-2)]^{\spin}\cdot \pi_{\bot}^*[\overline{\mathcal{H}}_{1,2}(4,-4)]^{\spin}.
\end{align*}
As $\mu=(2,-2,-2)$ is a signature of a stratum of differentials on genus $0$ curves, by default
\begin{align*}
[\overline{\mathcal{H}}(2,-2,-2)]^{\spin}=[\overline{\mathcal{H}}(2,-2,-2)]=[\overline{\mathcal{M}}_{0,3}].
\end{align*}
On the other hand, one can show that
\begin{align*}
[\overline{\mathcal{H}}_{1,2}(4,-4)]^{\spin}=9\psi_1.
\end{align*}
Hence
\[p_{F*}[\mathbb{P}\Xi\overline{\mathcal{M}}_F]^{\spin}=9\pi_\bot^*\psi_1.\]
Now we can do the computations of the clutching pullbacks:
* Only the enhanced level graphs (F) and (J) have $\Gamma_1^{\{1\}}$-structures. Thus, by the excess intersection formula, we conclude that
\begin{align*}
\xi_{\Gamma_1^{\{1\}}}^*[\overline{\mathcal{H}}_1(4,-2,-2)]^{\spin}=9\pi_\bot^*\psi_1.
\end{align*}
* Only the enhanced level graph (G) has $\Gamma_1^{\{2\}}$-structure. One can compute that $[\overline{\mathcal{H}}_1(2,-2)]^{\spin}= 3\psi_1 $. Thus, we have:
\[\xi_{\Gamma_1^{\{2\}}}^*[\overline{\mathcal{H}}_1(4,-2,-2)]^{\spin}=9\pi_\top^*\psi_1\]
* Only the enhanced level graph (H) has $\Gamma_1^{\{3\}}$-structure, thus we have:
\[\xi_{\Gamma_1^{\{3\}}}^*[\overline{\mathcal{H}}_1(4,-2,-2)]^{\spin}=3\pi_\top^*\psi_1\]
* Only the enhanced level graph (I) has $\Gamma_1^\emptyset$-structure, hence:
\[\xi_{\Gamma_1^\emptyset}^*[\overline{\mathcal{H}}_1(4,-2,-2)]^{\spin}=-\pi_\bot^*\psi_1\]
By using the function in , we can convert the product tautological classes above into vectors. Now we can make use of the basis of $\RH^2(\overline{\mathcal{M}}_{1,3})$ given by :
\begin{align*}
\RH^2(\overline{\mathcal{M}}_{1,3})=\langle \kappa_1,\psi_1,\psi_2,\psi_3, \delta^{\{1\}}_1 \rangle.
\end{align*}
By solving the collection of linear equations, we get
\begin{align*}
[\overline{\mathcal{H}}_1(4,-2,-2)]^{\spin}&=(0,1,3,3,-8)= \psi_1+3(\psi_2+\psi_3)- 8\delta^{\{1\}}_1.
\end{align*}
In the next example, we will illustrate how we resolve a residue condition.
Now we consider the spin stratum classes $[\overline{\mathcal{H}}_{1}^\mathfrak{R}(4,-2,-2)]^{\spin}$, where $\mathfrak{R}$ represents the residue condition $r_3=0$. We will first apply Proposition <ref> to resolve the residue condition. The level graphs in Section <ref> such that the residue condition $r_3=0$ induces no extra condition on the top level are the following:
[Figure: level graphs (A), (B), (G), (H), and (I); TikZ code omitted.]
Notice that only the graph (I) has odd spin (due to the top level). The other graphs are all of even spin. Thus, we have
\begin{align*}
[\mathbb{P}\Xi\overline{\mathcal{M}}_{1,3}^\mathfrak{R}(4,-2,-2)]^{\spin}=-\xi^{\spin} -[D_A]-[D_B]-3[D_G]-3[D_H] +[D_I].
\end{align*}
By Proposition <ref>, if we take the reference leg to be the third marked point, then we have
\begin{align*}
\xi^{\spin}&=(-2+1)\psi_3^{\spin} -[D_B]- 3[D_G] + [D_I]
\end{align*}
Combining the two relations above, the terms $[D_B]$, $3[D_G]$, and $[D_I]$ cancel, and we obtain
\begin{align*}
[\mathbb{P}\Xi\overline{\mathcal{M}}_{1,3}^\mathfrak{R}(4,-2,-2)]^{\spin}& = \psi_3^{\spin}-[D_A]-3[D_H].
\end{align*}
By the result in Example <ref>, we then have
\begin{align*}
[\overline{\mathcal{H}}^\mathfrak{R}_{1}(4,-2,-2)]^{\spin}=&p_*[\mathbb{P}\Xi\overline{\mathcal{M}}_{1,3}^\mathfrak{R}(4,-2,-2)]^{\spin}\\=&p_*(\psi_3^{\spin}-[D_A]-3[D_H])\\=&\psi_3\cdot [\overline{\mathcal{H}}_{1}(4,-2,-2)]^{\spin}-p_*[D_A]-3p_*[D_H]\\=& \bigg(\psi_3(\psi_1+3\psi_2+3\psi_3)-8\psi_3\delta^{\{1\}}_1\bigg) - [\Delta_A] - 3(3\psi_3)\cdot[\Delta_H].
\end{align*}
Here $[\Delta_A]$ and $[\Delta_H]$ denote the classes of the boundary strata of $\overline{\mathcal{M}}_{1,3}$ pictured by the corresponding graphs (figures omitted): $\Delta_A$ is the codimension-two stratum of two rational components joined along two nodes, carrying the marking $3$ on one component and the markings $1,2$ on the other, while $\Delta_H$ is the boundary divisor whose genus-one component carries the marking $3$.
§ COMPUTATION RESULTS OF $[\overline{\mathcal{H}}_g(\mu)]^{\spin}\in \RH^*(\overline{\mathcal{M}}_{g,n})$
In the following, we will list some of our results of the spin classes for $(g,n)$ equal to
\begin{align*}
(2,1),(2,2), (2,3), (2,4), (3,1), (3,2), (3,3), (4,1), (4,2), (4,3)
\end{align*}
This also implies that Assumption <ref> holds during the computation of the spin stratum classes of strata compatible with the values of $(g,n)$ above. Our results will be written in vector form, where the basis is computed by . For $(g,n)= (3,3), (4,2)$, the expressions of the spin stratum classes of meromorphic strata would be very lengthy, so we do not list them here.
For $(g,n)= (2,1)$, by our recursion, we get
Spin_Class(2,) = (1/2, -7/2, 1/2)
For $(g,n)= (2,2)$,
Spin_Class(-6, 8) = (-1563/2, 521/2, -637, -602, 1274, 1021, 1204, 1563/2, -3647/2, 521/2, -116, -81, -521/2, 0)
Spin_Class(-4, 6) = (-423/2, 141/2, -174, -159, 348, 273, 318, 423/2, -987/2, 141/2, -33, -18, -141/2, 0)
Spin_Class(-2, 4) = (-63/2, 21/2, -27, -22, 54, 41, 44, 63/2, -147/2, 21/2, -6, -1, -21/2, 0)
The spin stratum classes we listed above are expressed in the basis of $\RH^4(\overline{\mathcal{M}}_{2,2},\mathbb{Q})$, which consists of:
\begin{align*}
\kappa_2, \kappa_1^2, \kappa_1\psi_1,\kappa_1\psi_2,\psi_1\psi_2, \psi_1^2,\psi_2^2,...
\end{align*}
The first entry of the basis of $\RH^{2j}(\overline{\mathcal{M}}_{g,n},\mathbb{Q})$ constructed by is by default just $\kappa_j$.
From now on, the vector of the spin stratum class will be quite lengthy. Thus we will only give the first few entries. For $(g,n)= (2,3)$,
Spin_Class(-1, -1, 4) = (0, 0, 4, 0, ...)
Spin_Class(-6, -6, 14) = (-10623/2, 3541/2, -4067, -5329, ...)
Spin_Class(-6, -2, 10) = (-3039/2, 1013/2, -995, -1517, ...)
Spin_Class(-6, 4, 4) = (-1023/2, 341/2, -167, -514, ...)
Spin_Class(-4, 2, 4) = (-162, 54, -58, -162, ...)
For $(g,n)=(2,4)$,
Spin_Class(-6, -6, -2, 16) = (-6208, 3129, -7270, -8261, ...)
Spin_Class(-6, -4, 2, 10) = (-5777/2, 1053/2, -2027/2, -2441/2, ...)
For $(g,n)=(3,1)$,
Spin_Class(4,) = (-729, 1099/12, -787/12, 213, ...)
For $(g,n)=(3,2)$,
Spin_Class(-4, 8) = (-34167333/140, 14160639/280, -976627/280, -358429/40, ...)
Spin_Class(2, 2) = (159/4, -179/48, -7/24, -7/24, ...)
For $(g,n)=(4,1)$,
Spin_Class(6,) = (-1125203/120, 85013/48, -325/3, -65089/24, ...)
For $(g,n)=(4,2)$,
Spin_Class(2, 4) = (59623999/840, -156407/16, 302305/672, -229619/112, ...)
For $(g,n)=(4,3)$,
Spin_Class(2,2,2) = (-1572061/48, 42127/9, -64043/288, -116291/72, ...)
One can input our result into admcycles by the method Tautvb_to_tautclass. The following is an example for $\mu=(2,2)$:
sage: from admcycles import *
sage: Spin_Class_2_2_vb = (159/4, -179/48, -7/24, -7/24, -131/48, -23/24, -131/48, 149/24, -83/16, 271/48, 77/48, 77/48, -193/8, -185/48, 127/48, 395/48, 221/48, -73/8, -185/48, 127/48, 395/48, 221/48, -73/8, -185/48, -41/48, 389/48, 389/48, -23/8, 51/8, -139/16, -323/16, 1/8, -25/4, -11/4, -11/4, -23/4, 973/48, 37/96, -5/2, 23/96, 23/96, -25/32)
sage: Spin_Class_2_2 = Tautvb_to_tautclass(Spin_Class_2_2_vb, g=3, n=2, r=2)
Here $r$ is the codimension of our class.
Action for $N$ D0-Branes Invariant Under Gauged Galilean Transformations
J. Klusoň111Email address: <EMAIL_ADDRESS> (J. Klusoň)
Department of Theoretical Physics and Astrophysics, Faculty of Science,
Masaryk University, Kotlářská 2, 611 37, Brno, Czech Republic
In this short note we formulate an action for $N$ D0-branes that is manifestly
invariant under gauged Galilean transformations. We also find its canonical
form and determine first class constraints that are generators of gauge
transformations.
## 1 Introduction and Summary
Relational mechanics is a formulation of the dynamics of particles that is closely related to Mach's idea that the dynamics of $N$ particles should be a theory of relations among these quantities, without any reference to external non-material entities. The question is how to make these ideas more concrete. One such possibility is the proposal that the Lagrangian should be invariant under the gauged Galilean group [1] (see also [2, 3]). Such a Lagrangian can be found when its measure (kinetic energy term) is replaced with a measure defined on the space of orbits, where orbits correspond to sets of configurations which are equivalent under gauge transformations. It was then shown that the solutions of this gauge invariant dynamics correspond to the solutions of the original Lagrangian with vanishing total momentum and angular momentum. An alternative proposal for how to construct relational mechanics was presented in [4]. This construction starts with the original Lagrangian, which is invariant under rigid Galilean transformations. As the next step, specific counterterms are added to it that compensate the changes in the kinetic energy term under time dependent Galilean transformations. As a result, a Lagrangian invariant under time dependent Galilean transformations was derived in [4], and it was also shown that the corresponding equations of motion are valid in any frame, making this a concrete implementation of Mach's principle.
In more detail, Newtonian mechanics is invariant under time translations, space translations and rotations of the particle positions $\mathbf{x}_{i}$,

$\displaystyle t^{\prime}=t+\epsilon\ ,\quad\epsilon=\mathrm{const}\ ,$
$\displaystyle\mathbf{x}^{\prime}_{i}=\mathbf{x}_{i}+\xi\
,\quad\xi=\mathrm{const}\ ,$
$\displaystyle\mathbf{x}^{\prime}_{i}=\mathbf{A}\mathbf{x}_{i}\ ,$ (1)

where $\mathbf{A}$ is an orthogonal matrix and $i=1,\dots,N$ labels the particles in the ensemble. Further, Newton's laws are invariant under Galilean transformations

$\mathbf{x}^{\prime}_{i}=\mathbf{x}_{i}+\mathbf{V}t\ ,$ (2)

which are a special case of local time dependent translations, obtained by identifying $\xi=\mathbf{V}t$. The transformations (1) and (2) form the Galilean group of Newtonian mechanics. In fact, in Newtonian mechanics there is a privileged set of inertial frames and clocks, related to each other by transformations of the Galilean group. We abandon the idea of privileged frames when the Galilean group becomes a gauge group with time dependent parameters. We can also eliminate absolute time by gauging time translations as $t^{\prime}=t+\epsilon(t)$. In such a formulation of mechanics there are no privileged frames and clocks, and the theory becomes purely relational.
A detailed analysis of the formulation of relational mechanics was performed in [4], where a system of $N$ non-relativistic particles was studied. The Lagrangian that is invariant under the time dependent Galilean group was found there, together with the corresponding Hamiltonian formulation. It was also shown there that relational mechanics contains frames in which Newtonian mechanics is valid, and that these frames are determined by the mass distribution of all particles, which is the essence of Mach's principle.
In this article we would like to follow the procedure used in [4] and in [5] for a very interesting non-relativistic system, namely the low energy Lagrangian for $N$ D0-branes in string theory [6, 7]. It is well known that the low-energy Lagrangian for a system of many type IIA D0-branes is the matrix quantum mechanics Lagrangian arising from the dimensional reduction to $0+1$ dimensions of the $10D$ super Yang-Mills Lagrangian; for a review see for example [11, 12, 13]. The importance of this action is that it is the key ingredient in the formulation of Matrix theory [8], which is strictly speaking defined for an infinite number of D0-branes, even though there is a version of Matrix theory that is correct for finite $N$ as well [9, 10]. We now show that it is possible to formulate a Lagrangian for $N$ D0-branes that is manifestly invariant under gauged Galilean transformations and hence, according to the extended discussion presented in [4], corresponds to relational mechanics for $N$ D0-branes. On the other hand, it is not possible to write it in manifestly relational form, where the Lagrangian depends on relative velocities and distances of particles, in the general case, due to the fact that the fundamental objects in Matrix theory are matrices rather than coordinates of individual D0-branes. However, this can easily be done in the approximation where the off-diagonal components of the matrices are small with respect to the diagonal ones, so that we can neglect them; we show that in this case the Lagrangian takes a purely relational form.
As the next step in our research we find the canonical form of this theory and identify two sets of generators of the first class constraints. We also show that by appropriately fixing the gauge symmetry this theory reduces to the ordinary finite-$N$ Matrix theory.
We believe that this is a nice and interesting result that should be developed further. In fact, a natural and very important question is how to include fermionic terms in the construction of relational mechanics for $N$ D0-branes. A second question is whether it is possible to generalize this construction of relational mechanics to the full non-linear version of the action, as represented by the non-abelian Dirac-Born-Infeld action for $N$ D0-branes [14, 15]. We hope to return to these problems in the near future.
The structure of this paper is as follows. In the next section (2) we find an action for $N$ D0-branes that is invariant under gauged Galilean transformations. Then in section (3) we determine its canonical form and identify the constraint structure of the theory. Finally, in section (4) we show how this theory can be made manifestly reparametrization invariant.
## 2 Relational Formulation of Action for $N$ D0-Branes
In this section we find the form of the low energy effective action for $N$ D0-branes that is invariant under gauged Galilean transformations. We start with the Lagrangian for $N$ D0-branes that at leading order corresponds to a $U(N)$ super Yang-Mills mechanical system and has the form

$S=\int dtL\ ,\quad
L=\frac{1}{2g_{s}l_{s}}\mathrm{Tr}\left[\dot{\Phi}^{I}\dot{\Phi}^{I}+\frac{1}{2}[\Phi^{I},\Phi^{J}][\Phi^{I},\Phi^{J}]+\mathrm{fermions}\right]\
,$ (3)

where $\Phi^{I}_{ij}$ are $N\times N$ Hermitian matrices, with $I,J=1,2,\dots,9$ and $i,j,\dots=1,\dots,N$. Further, $g_{s}$ is the string coupling constant and $l_{s}$ is the string length. Note that the transverse space is nine-dimensional Euclidean space and repeated indices are summed over. Finally, this Lagrangian contains terms with fermions that make it the Lagrangian of $\mathcal{N}=16$ super Yang-Mills mechanics. In more detail, the Lagrangian (3) can be defined by dimensional reduction of $10$ dimensional $\mathcal{N}=1$ super Yang-Mills theory with gauge group $U(N)$ to $0+1$ dimensions. In what follows we restrict ourselves to the bosonic terms only, leaving the extension to the fully supersymmetric case for future work.
The action (3) is invariant under rigid translation
$\Phi^{\prime I}(t)=\Phi^{I}(t)+\xi^{I}\mathbf{I}_{N\times N}\ ,$ (4)
where $\xi^{I}$ is constant and $\mathbf{I}_{N\times N}$ is the unit $N\times N$ matrix. Further, the Lagrangian is invariant under the rigid rotation
$\Phi^{\prime I}(t)=\Lambda^{I}_{\ J}\Phi^{J}(t)\ ,$ (5)
where $\Lambda^{I}_{\ J}$ obey the relations
$\Lambda^{I}_{\ K}\delta_{IJ}\Lambda^{J}_{\ L}=\delta_{KL}\ .$ (6)
Let us now construct a Lagrangian that is invariant under the time dependent translation
$\Phi^{\prime I}(t)=\Phi^{I}(t)+\xi^{I}(t)\mathbf{I}_{N\times N}\ .$ (7)
Clearly the potential term is invariant under this transformation while the
kinetic term transforms as
$\displaystyle\delta\left(\frac{1}{2g_{s}l_{s}}\mathrm{Tr}[\dot{\Phi}^{I}\dot{\Phi}^{I}]\right)=\frac{1}{g_{s}l_{s}}\mathrm{Tr}\delta\dot{\Phi}^{I}\dot{\Phi}^{I}=\frac{1}{g_{s}l_{s}}\dot{\xi}^{I}\mathrm{Tr}\dot{\Phi}^{I}\
.$
In order to compensate this transformation we consider the following variation
$\displaystyle\frac{1}{2Ng_{s}l_{s}}\delta(\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I})=\frac{1}{g_{s}l_{s}}\dot{\xi}^{I}\mathrm{Tr}\dot{\Phi}^{I}$
so that the following combination
$\frac{1}{2g_{s}l_{s}}\mathrm{Tr}[\dot{\Phi}^{I}\dot{\Phi}_{I}]-\frac{1}{2Ng_{s}l_{s}}\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I}$
(10)
is invariant under the gauge symmetry (7). As the next step we proceed to the
analysis of the invariance of the Lagrangian under time dependent rotation.
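Before turning to rotations, note that the invariance of (10) can be verified in one line; a short check, added here for completeness, using only $\mathrm{Tr}\,\delta\dot{\Phi}^{I}=N\dot{\xi}^{I}$ from (7):

```latex
\delta\left(\frac{1}{2g_{s}l_{s}}\mathrm{Tr}[\dot{\Phi}^{I}\dot{\Phi}_{I}]
-\frac{1}{2Ng_{s}l_{s}}\mathrm{Tr}\dot{\Phi}^{I}\,\mathrm{Tr}\dot{\Phi}^{I}\right)
=\frac{1}{g_{s}l_{s}}\dot{\xi}^{I}\,\mathrm{Tr}\dot{\Phi}^{I}
-\frac{1}{2Ng_{s}l_{s}}\cdot 2N\dot{\xi}^{I}\,\mathrm{Tr}\dot{\Phi}^{I}=0\ .
```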
**Time Dependent Rotation**
As we have argued above, the action (3) is also invariant under the rigid rotation (5), where $\Lambda^{I}_{\ J}$ obeys the relation (6). It is convenient to write it in infinitesimal form, defining
$\Lambda^{I}_{\ J}=\delta^{I}_{\ J}+\omega^{I}_{\ J}\ ,\quad\omega^{I}_{\
J}\ll\delta^{I}_{J}\ .$ (11)
Then (6) implies
$\delta_{KJ}\omega^{J}_{\ L}+\delta_{LI}\omega^{I}_{\
K}=0\Rightarrow\omega_{KL}+\omega_{LK}=0\ .$ (12)
Let us now presume that $\omega^{I}_{\ J}$ depend on $t$ so that (11) gives
$\delta\Phi^{I}=\Phi^{\prime I}-\Phi^{I}=\omega^{I}_{\ J}\Phi^{J}\
,\quad\delta\dot{\Phi}^{I}\equiv\dot{\omega}^{I}_{\ J}\Phi^{J}+\omega^{I}_{\
J}\dot{\Phi}^{J}\ .$ (13)
Then the variation of the kinetic term is equal to
$\displaystyle\frac{1}{2g_{s}l_{s}}\delta\mathrm{Tr}(\dot{\Phi}^{I}\delta_{IJ}\dot{\Phi}^{J})=\frac{1}{g_{s}l_{s}}\dot{\omega}^{I}_{\
K}\mathrm{Tr}(\Phi^{K}\delta_{IJ}\dot{\Phi}^{J})+\frac{1}{g_{s}l_{s}}\mathrm{Tr}(\dot{\Phi}^{J}\omega_{JK}\dot{\Phi}^{K})$
$\displaystyle=\frac{1}{g_{s}l_{s}}\dot{\omega}^{I}_{\
K}\mathrm{Tr}(\Phi^{K}\delta_{IJ}\dot{\Phi}^{J})\ ,$
where the last term on the first line vanishes due to the anti-symmetry of $\omega_{IJ}$. As the next step we calculate the variation of the new compensating term and find
$\displaystyle-\delta\left(\frac{1}{2Ng_{s}l_{s}}\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I}\right)=-\dot{\omega}^{I}_{\
K}\frac{1}{Ng_{s}l_{s}}\mathrm{Tr}\Phi^{K}\delta_{IJ}\mathrm{Tr}\dot{\Phi}^{J}-\mathrm{Tr}\frac{1}{Ng_{s}l_{s}}\omega_{KJ}\mathrm{Tr}\dot{\Phi}^{K}\mathrm{Tr}\dot{\Phi}^{J}=$
$\displaystyle=-\dot{\omega}^{I}_{\
K}\frac{1}{Ng_{s}l_{s}}\mathrm{Tr}\Phi^{K}\delta_{IJ}\mathrm{Tr}\dot{\Phi}^{J}\
,$
where again the last term on the first line vanishes due to the anti-symmetry of $\omega_{IJ}$. In other words, the total variation of the kinetic term is equal to
$\displaystyle\delta\left(\frac{1}{2g_{s}l_{s}}\mathrm{Tr}(\dot{\Phi}^{I}\delta_{IJ}\dot{\Phi}^{J})-\frac{1}{2Ng_{s}l_{s}}\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I}\right)=\dot{\omega}_{JK}\mathbf{J}^{KJ}\
,$
$\displaystyle\mathbf{J}^{IJ}=\frac{1}{2g_{s}l_{s}}\mathrm{Tr}(\Phi^{I}\dot{\Phi}^{J}-\Phi^{J}\dot{\Phi}^{I})-\frac{1}{2g_{s}l_{s}N}(\mathrm{Tr}\Phi^{I}\mathrm{Tr}\dot{\Phi}^{J}-\mathrm{Tr}\Phi^{J}\mathrm{Tr}\dot{\Phi}^{I})=-\mathbf{J}^{JI}\
.$
Note that $\mathbf{J}^{IJ}$ transforms under time dependent rotation as
$\displaystyle\delta\mathbf{J}^{IJ}=\omega^{I}_{\
K}\mathbf{J}^{KJ}+\mathbf{J}^{IK}\omega_{K}^{\ J}-\dot{\omega}^{I}_{\
K}I^{KJ}-I^{IK}\dot{\omega}_{K}^{\ J}\ ,$
where
$I^{KJ}=\frac{1}{2g_{s}l_{s}}\mathrm{Tr}(\Phi^{K}\Phi^{J})-\frac{1}{2g_{s}l_{s}N}\mathrm{Tr}\Phi^{K}\mathrm{Tr}\Phi^{J}\
.$ (18)
Our goal is to add a new term to the Lagrangian that makes it invariant under time dependent rotations. We propose that such a term has the form
$-\frac{1}{2}\mathbf{J}^{IJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}\ ,$ (19)
where $\mathbf{M}_{IJ,KL}$ is a matrix in the $I,J,K,L$ indices while it is a scalar with respect to the $U(N)$ structure.
Now we proceed to the construction of the object $\mathbf{M}_{IJ,KL}$. First of all, we demand that it does not depend on the time derivative of $\Phi$, so that it transforms as an ordinary tensor under time dependent rotations. Then under a time dependent rotation the new term (19) transforms as
$\displaystyle\delta(\frac{1}{2}\mathbf{J}^{IJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL})=-\dot{\omega}^{I}_{\
M}I^{MJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}-I^{IM}\dot{\omega}_{M}^{\
J}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}=$ $\displaystyle=-\dot{\omega}^{I}_{\
M}I^{MJ}(M_{IJ,KL}-M_{JI,KL})\mathbf{J}^{KL}=2\dot{\omega}^{I}_{\
M}I^{MJ}\mathbf{M}_{JI,KL}\mathbf{J}^{KL}\ ,$
where we presume that $\mathbf{M}_{IJ,KL}=-\mathbf{M}_{JI,KL}$ as a
consequence of the fact that $\mathbf{J}^{IJ}=-\mathbf{J}^{JI}$. In order to
cancel variation of the kinetic term we should require that
$2I^{MJ}\mathbf{M}_{JI,KL}=\delta_{IL}\delta_{K}^{M}\ .$ (21)
To proceed further we introduce $I^{-1}_{IJ}$ as the matrix inverse of $I^{JK}$, so that $I^{IJ}I^{-1}_{JK}=\delta^{I}_{K}$. Then, multiplying (21) by $(I^{-1})_{RM}$, we get
$\mathbf{M}_{RI,KL}=\frac{1}{2}\delta_{IL}I^{-1}_{RK}\ .$ (22)
We see that it is natural to define matrix $\mathbf{M}_{IJ,KL}$ as
$\displaystyle\mathbf{M}_{IJ,KL}=\frac{1}{2}\delta_{JL}I^{-1}_{IK}\
,\quad\mathbf{M}_{JI,KL}=-\frac{1}{2}\delta_{JL}I^{-1}_{IK}\ ,$
$\displaystyle\mathbf{M}_{KL,IJ}=\frac{1}{2}\delta_{LJ}I^{-1}_{KI}\
,\quad\mathbf{M}_{IJ,LK}=-\frac{1}{2}\delta_{JL}I^{-1}_{IK}\ .$
In summary, we have found a Lagrangian for $N$ D0-branes that is invariant under local translations and rotations and has the form
$L=\frac{1}{2g_{s}l_{s}}\mathrm{Tr}\left[\dot{\Phi}^{I}\dot{\Phi}_{I}+\frac{1}{2}[\Phi^{I},\Phi^{J}][\Phi^{I},\Phi^{J}]\right]-\frac{1}{2Ng_{s}l_{s}}\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I}-\frac{1}{2}\mathbf{J}^{IJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}\ .$ (24)
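One can also check directly that the last term of (24) is invariant under the gauged translation (7); a short verification, added for completeness: under $\Phi^{I}\rightarrow\Phi^{I}+\xi^{I}(t)\mathbf{I}_{N\times N}$,

```latex
\mathrm{Tr}(\Phi'^{I}\dot{\Phi}'^{J})-\frac{1}{N}\,\mathrm{Tr}\Phi'^{I}\,\mathrm{Tr}\dot{\Phi}'^{J}
=\mathrm{Tr}(\Phi^{I}\dot{\Phi}^{J})-\frac{1}{N}\,\mathrm{Tr}\Phi^{I}\,\mathrm{Tr}\dot{\Phi}^{J}\ ,
```

since the cross terms $\xi^{I}\mathrm{Tr}\dot{\Phi}^{J}+\dot{\xi}^{J}\mathrm{Tr}\Phi^{I}+N\xi^{I}\dot{\xi}^{J}$ appear identically in both expressions and cancel in the difference. Hence $I^{IJ}$ and $\mathbf{J}^{IJ}$, and therefore the last term of (24), are invariant under (7).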
This is the final form of the Lagrangian for $N$ D0-branes that is invariant under time dependent Galilean transformations, so that this Lagrangian is valid in any frame. In fact, following [4], we can interpret it as a relational formulation of D0-brane mechanics. To see this more clearly, let us consider the situation when the matrices $\Phi^{I}$ are diagonal or, alternatively, when we can neglect all off-diagonal terms with respect to the diagonal ones. Then the matrices $\Phi^{I}$ have the form
$\Phi^{I}_{ij}=x^{I}_{i}\delta_{ij}\ ,$ (25)
where $x^{I}_{i}$ are the coordinates of the individual $i$-th D0-brane. With such a configuration we find that the potential term vanishes while the kinetic term has the form
$\frac{1}{2g_{s}l_{s}}(\sum_{i}v_{i}^{I}v_{i}^{I}-\frac{1}{N}\sum_{i}v_{i}^{I}\sum_{j}v_{j}^{I})\
,\quad v^{I}_{i}=\frac{dx^{I}_{i}}{dt}$ (26)
that can be written in an alternative form
$\frac{1}{4Ng_{s}l_{s}}\sum_{i,j}(v_{i}^{I}-v_{j}^{I})(v_{i}^{I}-v_{j}^{I})$
(27)
which nicely demonstrates the relational form of this Lagrangian. Further, the
matrix $I^{IJ}$ takes the form
$I^{IJ}=\frac{1}{2g_{s}l_{s}}(\sum_{i}x^{I}_{i}x^{J}_{i}-\frac{1}{N}\sum_{i}x^{I}_{i}\sum_{j}x^{J}_{j})=\frac{1}{4g_{s}l_{s}N}\sum_{i,j}(x_{i}^{I}-x_{j}^{I})(x^{J}_{i}-x^{J}_{j})\
.$ (28)
Proceeding in the same way with $\mathbf{J}^{IJ}$ we get
$\displaystyle\mathbf{J}^{IJ}=\frac{1}{2g_{s}l_{s}}\sum_{i}(x^{I}_{i}v^{J}_{i}-x^{J}_{i}v^{I}_{i})-\frac{1}{2g_{s}l_{s}N}(\sum_{i}x^{I}_{i}\sum_{j}v^{J}_{j}-\sum_{i}x^{J}_{i}\sum_{j}v^{I}_{j})=$
$\displaystyle=\frac{1}{4g_{s}l_{s}N}\sum_{i}\sum_{j}\left((x^{I}_{i}-x^{I}_{j})(v^{J}_{i}-v^{J}_{j})-(x^{J}_{i}-x^{J}_{j})(v^{I}_{i}-v^{I}_{j})\right)\
$
which again depends only on relative distances and velocities. In summary, we obtain
the Lagrangian
$\displaystyle
L=\frac{1}{4Ng_{s}l_{s}}\sum_{i,j}(v_{i}^{I}-v_{j}^{I})(v_{i}^{I}-v_{j}^{I})-\frac{1}{2}\mathbf{J}^{IJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}\
$
which manifestly has the form of relational mechanics, as follows from (28) and (2).
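The equivalence of the center-of-mass-subtracted forms and the relational forms in (26)-(28) and in the expression for $\mathbf{J}^{IJ}$ rests on the identity $\sum_{i,j}(a_{i}-a_{j})(b_{i}-b_{j})=2N\sum_{i}a_{i}b_{i}-2\sum_{i}a_{i}\sum_{j}b_{j}$. A short numerical check (the values of $N$, $d$, and $g_{s}l_{s}$ below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 6, 3            # number of D0-branes and spatial directions (illustrative)
gs_ls = 0.7            # the product g_s * l_s (arbitrary positive value)
x = rng.standard_normal((N, d))   # positions x^I_i
v = rng.standard_normal((N, d))   # velocities v^I_i

# Kinetic term, Eq. (26), versus its relational form, Eq. (27)
T1 = (np.einsum('ia,ia->', v, v) - np.einsum('ia,ja->', v, v) / N) / (2 * gs_ls)
diff_v = v[:, None, :] - v[None, :, :]
T2 = np.einsum('ija,ija->', diff_v, diff_v) / (4 * N * gs_ls)
assert np.isclose(T1, T2)

# Moment-of-inertia-like matrix I^{IJ}, Eq. (28), in both forms
I1 = (np.einsum('ia,ib->ab', x, x) - np.einsum('ia,jb->ab', x, x) / N) / (2 * gs_ls)
diff_x = x[:, None, :] - x[None, :, :]
I2 = np.einsum('ija,ijb->ab', diff_x, diff_x) / (4 * N * gs_ls)
assert np.allclose(I1, I2)

# Angular-momentum-like quantity J^{IJ} in both forms
J1 = (np.einsum('ia,ib->ab', x, v) - np.einsum('ib,ia->ab', x, v)
      - (np.einsum('ia,jb->ab', x, v) - np.einsum('ib,ja->ab', x, v)) / N) / (2 * gs_ls)
J2 = (np.einsum('ija,ijb->ab', diff_x, diff_v)
      - np.einsum('ijb,ija->ab', diff_x, diff_v)) / (4 * N * gs_ls)
assert np.allclose(J1, J2)
assert np.allclose(J1, -J1.T)  # antisymmetry
```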
## 3 Hamiltonian Formalism
In this section we derive the Hamiltonian from the Lagrangian (24). As the first
step we introduce momenta conjugate to the matrix elements $\Phi^{I}_{ij}$, where we
treat the matrix elements as independent, keeping in mind that
$\Phi_{ij}=\Phi_{ji}^{*}$. Then from (24) we obtain
$\displaystyle(\Pi_{I})_{ij}=\frac{\delta L}{\delta\dot{\Phi}^{I}_{ij}}=\frac{1}{g_{s}l_{s}}(\dot{\Phi}_{I})_{ji}-\frac{1}{Ng_{s}l_{s}}\delta_{ji}\mathrm{Tr}\dot{\Phi}_{I}-$
$\displaystyle-\left(\frac{1}{2g_{s}l_{s}}(\Phi^{K}_{ji}\delta^{L}_{I}-\Phi^{L}_{ji}\delta^{K}_{I})-\frac{1}{2g_{s}l_{s}N}(\mathrm{Tr}\Phi^{K}\delta_{ji}\delta^{L}_{I}-\mathrm{Tr}\Phi^{L}\delta_{ji}\delta^{K}_{I})\right)\mathbf{M}_{KL,MN}\mathbf{J}^{MN}\
,$
using
$\frac{\delta\mathbf{J}^{KL}}{\delta\dot{\Phi}^{I}_{ij}}=\frac{1}{2g_{s}l_{s}}(\Phi^{K}_{ji}\delta^{L}_{I}-\Phi^{L}_{ji}\delta^{K}_{I})-\frac{1}{2g_{s}l_{s}N}(\mathrm{Tr}\Phi^{K}\delta_{ji}\delta^{L}_{I}-\mathrm{Tr}\Phi^{L}\delta_{ji}\delta^{K}_{I})\
.$ (32)
As the next step we define the Hamiltonian in the standard way
$\displaystyle H=(\Pi_{I})_{ij}\dot{\Phi}^{I}_{ij}-L$
$\displaystyle=\frac{1}{2g_{s}l_{s}}\dot{\Phi}^{I}_{ij}\dot{\Phi}^{I}_{ji}-\frac{1}{2g_{s}l_{s}N}\mathrm{Tr}\dot{\Phi}^{I}\mathrm{Tr}\dot{\Phi}^{I}-\frac{1}{2}\mathbf{J}^{IJ}\mathbf{M}_{IJ,KL}\mathbf{J}^{KL}-\frac{1}{4g_{s}l_{s}}\mathrm{Tr}[\Phi^{I},\Phi^{J}][\Phi^{I},\Phi^{J}]\
.$
To proceed further, note that (3) implies
$\displaystyle\mathbf{P}_{I}\equiv\mathrm{Tr}\Pi_{I}=(\Pi_{I})_{ij}\delta_{ji}=0\
,$
so that $\mathbf{P}_{I}\approx 0$ is a primary constraint of the theory.
Further, from (3) we also get
$\displaystyle\Phi^{I}_{ij}(\Pi_{J})_{ij}-\Phi^{J}_{ij}(\Pi_{I})_{ij}=0$
so that there is a second set of primary constraints
$\mathbf{J}^{IJ}=\Phi^{I}_{ij}\Pi^{J}_{ij}-\Phi^{J}_{ij}\Pi^{I}_{ij}\approx 0\
,$ (36)
where again repeated indices are summed over.
Now we should check that these are first-class constraints. To do this we
introduce the canonical Poisson brackets
$\left\\{\Phi^{I}_{ij},(\Pi_{J})_{kl}\right\\}=\delta^{I}_{J}\delta_{ik}\delta_{jl}\
.$ (37)
Clearly we have
$\displaystyle\left\\{\mathbf{P}_{I},\mathbf{P}_{J}\right\\}=0$ (38)
and also
$\displaystyle\left\\{\mathbf{J}^{IJ},\mathbf{P}_{K}\right\\}=\delta^{I}_{K}\mathrm{Tr}\Pi^{J}-\delta^{J}_{K}\mathrm{Tr}\Pi^{I}=\delta^{I}_{K}\mathbf{P}^{J}-\delta^{J}_{K}\mathbf{P}^{I}\approx
0\ .$
As the last Poisson bracket we calculate
$\left\\{\mathbf{J}^{IJ},\mathbf{J}^{KL}\right\\}$ and obtain
$\displaystyle\left\\{\mathbf{J}^{IJ},\mathbf{J}^{KL}\right\\}=\delta^{IL}\Pi^{J}_{ij}\Phi^{K}_{ij}-\delta^{JK}\Phi^{I}_{mn}\Pi^{L}_{mn}-\delta^{JL}\Pi^{I}_{mn}\Phi^{K}_{mn}+\delta^{IK}\Phi_{mn}^{J}\Pi^{L}_{mn}-$
$\displaystyle-\delta^{IK}\Phi_{mn}^{L}\Pi_{mn}^{J}+\delta^{JL}\Phi^{I}_{mn}\Pi^{K}_{mn}+\delta^{JK}\Pi^{I}_{mn}\Phi^{L}_{mn}-\delta^{IL}\Phi^{J}_{mn}\Pi^{K}_{mn}=$
$\displaystyle=\delta^{IK}\mathbf{J}^{JL}-\delta^{IL}\mathbf{J}^{JK}-\delta^{JK}\mathbf{J}^{IL}+\delta^{JL}\mathbf{J}^{IK}\approx
0\ .$
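The closure of this bracket algebra can be checked numerically by treating the matrix entries $\Phi^{I}_{ij}$ and $(\Pi_{I})_{ij}$ as independent real canonical variables, as the text does. A sketch (the dimensions $d$ and $n$ are arbitrary; the bracket of quadratic functions is computed exactly from their analytic gradients):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 4   # number of scalars Phi^I and matrix size N (illustrative)
Phi = rng.standard_normal((d, n, n))   # real stand-ins for Phi^I_{ij}
Pi = rng.standard_normal((d, n, n))    # real stand-ins for (Pi_I)_{ij}

def J(a, b):
    # J^{ab} = Phi^a_{ij} Pi^b_{ij} - Phi^b_{ij} Pi^a_{ij}, Eq. (36)
    return np.sum(Phi[a] * Pi[b]) - np.sum(Phi[b] * Pi[a])

def bracket_JJ(i, j, k, l):
    # Canonical bracket {F,G} = dF/dPhi^M . dG/dPi_M - dF/dPi_M . dG/dPhi^M,
    # with dJ^{ij}/dPhi^M = d^{iM} Pi^j - d^{jM} Pi^i and
    #      dJ^{ij}/dPi_M  = d^{jM} Phi^i - d^{iM} Phi^j.
    total = 0.0
    for m in range(d):
        dF_dPhi = (m == i) * Pi[j] - (m == j) * Pi[i]
        dF_dPi = (m == j) * Phi[i] - (m == i) * Phi[j]
        dG_dPhi = (m == k) * Pi[l] - (m == l) * Pi[k]
        dG_dPi = (m == l) * Phi[k] - (m == k) * Phi[l]
        total += np.sum(dF_dPhi * dG_dPi) - np.sum(dF_dPi * dG_dPhi)
    return total

# Check {J^{IJ},J^{KL}} = d^{IK}J^{JL} - d^{IL}J^{JK} - d^{JK}J^{IL} + d^{JL}J^{IK}
for (i, j, k, l) in [(0, 1, 1, 2), (0, 1, 0, 2), (0, 2, 1, 2), (0, 1, 2, 1)]:
    rhs = ((i == k) * J(j, l) - (i == l) * J(j, k)
           - (j == k) * J(i, l) + (j == l) * J(i, k))
    assert np.isclose(bracket_JJ(i, j, k, l), rhs)
```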
In summary, we find that $\mathbf{P}_{I}\approx 0$ and $\mathbf{J}^{IJ}\approx 0$
are first-class constraints. We will discuss their properties below.
Finally, we return to the Hamiltonian and express it in terms of canonical
variables. Using (3) we obtain
$\displaystyle(\Pi_{I})_{ij}(\Pi_{I})_{ji}=\frac{1}{g_{s}^{2}l_{s}^{2}}\mathrm{Tr}\dot{\Phi}^{I}\dot{\Phi}_{I}-\frac{1}{Ng_{s}^{2}l_{s}^{2}}\mathrm{Tr}\dot{\Phi}_{I}\mathrm{Tr}\dot{\Phi}_{I}-\frac{2}{g_{s}l_{s}}\mathbf{J}^{KL}\mathbf{M}_{KL,MN}\mathbf{J}^{MN}+\frac{1}{g_{s}l_{s}}\mathbf{J}^{KL}\mathbf{M}_{KL,MN}\mathbf{J}^{MN}$
and we find that the bare Hamiltonian is equal to
$\displaystyle
H_{B}=\frac{g_{s}l_{s}}{2}\mathrm{Tr}\Pi_{I}\Pi_{I}-\frac{1}{4g_{s}l_{s}}\mathrm{Tr}[\Phi^{I},\Phi^{J}][\Phi^{I},\Phi^{J}]\
.$ (42)
Then it is easy to see that the total Hamiltonian, given as a linear
combination of the bare Hamiltonian and the first-class constraints, has the
form
$\displaystyle
H_{T}=\frac{g_{s}l_{s}}{2}\mathrm{Tr}\Pi_{I}\Pi_{I}-\frac{1}{4g_{s}l_{s}}\mathrm{Tr}[\Phi^{I},\Phi^{J}][\Phi^{I},\Phi^{J}]+\lambda_{I}\mathbf{P}_{I}+\lambda^{IJ}\mathbf{J}_{IJ}\
.$
Since $\mathbf{P}_{I}$ and $\mathbf{J}_{IJ}$ are first-class constraints, the
standard procedure is to gauge fix them. For example, we can impose the gauge-fixing
condition that the center-of-mass coordinates are equal to zero. In
other words, we define the gauge-fixing functions $\mathcal{G}_{I}$ as
$\mathcal{G}_{I}\equiv\mathrm{Tr}\Phi_{I}\approx 0\ .$ (44)
Then we have
$\left\\{\mathcal{G}_{I},\mathbf{P}_{J}\right\\}=\left\\{\Phi^{I}_{ij},(\Pi_{J})_{kl}\right\\}\delta_{ji}\delta_{lk}=\delta^{I}_{J}\delta_{ik}\delta_{jl}\delta_{ji}\delta_{lk}=\delta^{I}_{J}\delta_{il}\delta_{li}=N\delta^{I}_{J}\
.$ (45)
In other words, $\mathcal{G}_{I}$ and $\mathbf{P}_{J}$ form a set of second-class
constraints that now vanish strongly. We further fix the generators
$\mathbf{J}_{IJ}\approx 0$ by imposing the condition that the off-diagonal components
of the matrix $I^{IJ}$ vanish,
$\mathcal{G}^{IJ}\equiv I^{IJ}\approx 0\ ,I\neq J\ ,$ (46)
where
$I^{IJ}=\frac{1}{2g_{s}l_{s}}\mathrm{Tr}(\Phi^{I}\Phi^{J})-\frac{1}{2g_{s}Nl_{s}}\mathrm{Tr}(\Phi^{I})\mathrm{Tr}(\Phi^{J})\
.$ (47)
Note that we have the following Poisson brackets
$\displaystyle\left\\{\mathcal{G}^{IJ},\mathbf{P}_{K}\right\\}=0$ (48)
together with
$\displaystyle\left\\{\mathcal{G}^{IJ},\mathbf{J}^{KL}\right\\}=\frac{1}{2g_{s}l_{s}}(\delta^{IL}\Phi^{J}_{nm}\Phi_{mn}^{K}-\delta^{IK}\Phi^{J}_{ji}\Phi^{L}_{ij}+\delta^{JL}\Phi^{K}_{mn}\Phi^{I}_{nm}-\delta^{JK}\Phi^{I}_{mn}\Phi^{L}_{nm})-$
$\displaystyle-\frac{1}{2g_{s}Nl_{s}}(\delta^{IL}\Phi^{K}_{ij}\delta_{ji}-\delta^{IK}\Phi_{ij}^{L}\delta_{ji})\mathrm{Tr}\Phi^{J}-\frac{1}{2g_{s}Nl_{s}}(\delta^{JL}\Phi^{K}_{ij}\delta_{ji}-\delta^{JK}\Phi_{ij}^{L}\delta_{ji})\mathrm{Tr}\Phi^{I}=$
$\displaystyle=\delta^{IL}I^{JK}-\delta^{IK}I^{JL}+\delta^{JL}I^{KI}-\delta^{JK}I^{IL}\
.$
For $I=L$ and $J=K$ we obtain the non-zero result
$\left\\{\mathcal{G}^{LK},\mathbf{J}^{KL}\right\\}=I^{KK}\ .$ (50)
Since $I^{KK}\neq 0$ by definition, we find that $\mathcal{G}^{LK}$ is a gauge-fixing
function for $\mathbf{J}^{KL}$, and together they form a collection of second-class
constraints.
It is important to stress that the gauge-fixed theory with
$\mathbf{P}_{I}=\mathbf{J}^{IJ}=0$ corresponds to the original Hamiltonian for
$N$ D0-branes, and we can interpret these frames with vanishing total momentum
and angular momentum as Newtonian frames; for more details we recommend the
discussion presented in [4].
## 4 Gauging Time translation
In order to find an action invariant under an arbitrary time-dependent
translation $t^{\prime}=t+\epsilon(t)$, we follow the standard procedure for
parametrized systems, see for example [16]. We begin with the canonical form
of the action
$S=\int dt((\Pi_{I})_{ij}\dot{\Phi}_{ij}^{I}-H_{T})\ ,$ (51)
where $H_{T}$ is given in (3). As the next step we introduce the variable $t$ and
its conjugate momentum $p_{t}$ and rewrite the action in the form
$S=\int
d\tau(p_{t}\frac{d}{d\tau}t+(\Pi_{I})_{ij}\frac{d}{d\tau}\Phi_{ij}^{I}-N(p_{t}+H_{T}))\
.$ (52)
To see the equivalence between (52) and (51), let us consider the equations of
motion for $N$ and $p_{t}$, which give
$\frac{d}{d\tau}t-N=0\ ,\quad p_{t}+H_{T}=0\ $ (53)
Inserting these back into the action (52), we obtain
$S=\int d\tau\frac{dt}{d\tau}((\Pi_{I})_{ij}\frac{d}{dt}\Phi_{ij}^{I}-H_{T})\
,$ (54)
where we presumed that the first relation in (53) can be inverted. Then it is
easy to see that (54) is equivalent to (51). The action (52) is manifestly
reparametrization invariant under the transformations
$\tau=f(\tau^{\prime})\ ,\quad t^{\prime}(\tau^{\prime})=t(\tau)\ ,\quad
N(\tau)=N^{\prime}(\tau^{\prime})\frac{1}{\frac{df}{d\tau^{\prime}}}\ $ (55)
and
$(\Pi^{\prime}_{I})_{ij}(\tau^{\prime})=(\Pi_{I})_{ij}(\tau)\
,\quad\Phi^{\prime I}_{ij}(\tau^{\prime})=\Phi_{ij}^{I}(\tau)\ .$ (56)
In summary, we have obtained the action (52), which is invariant under time-dependent
Galilean transformations together with an arbitrary redefinition of the time
$\tau$. Clearly this construction goes through even in our specific
case of $N$ D0-branes. On the other hand, the question remains how the
coordinate $t$ should be physically interpreted given the non-abelian nature of
the D0-brane action. In other words, this construction cannot be regarded as a
covariant form of the action for $N$ D0-branes, which is very difficult to
construct, see for example [17, 18]. For that reason we consider this
construction to have only formal meaning.
Acknowledgement:
The work of J.K. was supported by the Grant Agency of the Czech Republic under
the grant P201/12/G028.
## References
* [1] J. B. Barbour and B. Bertotti, _“Mach’s Principle and the Structure of Dynamical Theories,”_ Proc. Roy. Soc. Lond. A 382 (1982), 295-306 doi:10.1098/rspa.1982.0102
* [2] J. Barbour, _“The Definition of Mach’s Principle,”_ Found. Phys. 40 (2010), 1263-1284 doi:10.1007/s10701-010-9490-7 [arXiv:1007.3368 [gr-qc]].
* [3] D. Lynden-Bell and J. Katz, _“Classical mechanics without absolute space,”_ Phys. Rev. D 52 (1995), 7322-7324 doi:10.1103/PhysRevD.52.7322 [arXiv:astro-ph/9509158 [astro-ph]].
* [4] R. Ferraro, _“Relational Mechanics as a gauge theory,”_ Gen. Rel. Grav. 48 (2016) no.2, 23 doi:10.1007/s10714-016-2018-5 [arXiv:1410.6509 [gr-qc]].
* [5] K. Glampedakis, _“A Machian Reformulation of Quantum Mechanics,”_ Found. Phys. 52 (2022) no.2, 36 doi:10.1007/s10701-022-00551-3 [arXiv:2202.11561 [quant-ph]].
* [6] J. Polchinski, _“Dirichlet Branes and Ramond-Ramond charges,”_ Phys. Rev. Lett. 75 (1995), 4724-4727 doi:10.1103/PhysRevLett.75.4724 [arXiv:hep-th/9510017 [hep-th]].
* [7] J. Dai, R. G. Leigh and J. Polchinski, _“New Connections Between String Theories,”_ Mod. Phys. Lett. A 4 (1989), 2073-2083 doi:10.1142/S0217732389002331
* [8] T. Banks, W. Fischler, S. H. Shenker and L. Susskind, _“M theory as a matrix model: A Conjecture,”_ Phys. Rev. D 55 (1997), 5112-5128 doi:10.1103/PhysRevD.55.5112 [arXiv:hep-th/9610043 [hep-th]].
* [9] N. Seiberg, _“Why is the matrix model correct?,”_ Phys. Rev. Lett. 79 (1997), 3577-3580 doi:10.1103/PhysRevLett.79.3577 [arXiv:hep-th/9710009 [hep-th]].
* [10] A. Sen, _“D0-branes on T**n and matrix theory,”_ Adv. Theor. Math. Phys. 2 (1998), 51-59 doi:10.4310/ATMP.1998.v2.n1.a2 [arXiv:hep-th/9709220 [hep-th]].
* [11] A. Sen, _“An Introduction to nonperturbative string theory,”_ [arXiv:hep-th/9802051 [hep-th]].
* [12] W. Taylor, _“Lectures on D-branes, gauge theory and M(atrices),”_ [arXiv:hep-th/9801182 [hep-th]].
* [13] W. Taylor, _“The M(atrix) model of M theory,”_ NATO Sci. Ser. C 556 (2000), 91-178 doi:10.1007/978-94-011-4303-5_3 [arXiv:hep-th/0002016 [hep-th]].
* [14] A. A. Tseytlin, _“On nonAbelian generalization of Born-Infeld action in string theory,”_ Nucl. Phys. B 501 (1997), 41-52 doi:10.1016/S0550-3213(97)00354-4 [arXiv:hep-th/9701125 [hep-th]].
* [15] R. C. Myers, _“Dielectric branes,”_ JHEP 12 (1999), 022 doi:10.1088/1126-6708/1999/12/022 [arXiv:hep-th/9910053 [hep-th]].
* [16] M. Henneaux and C. Teitelboim, _“Quantization of gauge systems,”_ Princeton University Press, Princeton, 1992.
* [17] D. Brecher, K. Furuuchi, H. Ling and M. Van Raamsdonk, _“Generally covariant actions for multiple D-branes,”_ JHEP 06 (2004), 020 doi:10.1088/1126-6708/2004/06/020 [arXiv:hep-th/0403289 [hep-th]].
* [18] D. Brecher, P. Koerber, H. Ling and M. Van Raamsdonk, _“Poincare invariance in multiple D-brane actions,”_ JHEP 01 (2006), 151 doi:10.1088/1126-6708/2006/01/151 [arXiv:hep-th/0509026 [hep-th]].
# Locating the source of forced oscillations in transmission power grids
Robin Delabays1, Andrey Y. Lokhov2, Melvyn Tyloo2,3, Marc Vuffray2
1University of Applied Sciences of Western Switzerland, Sion, Switzerland
2Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM, USA
3Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM, USA
###### Abstract
A forced oscillation event in a power grid refers to a state where malfunctioning
or abnormally operating equipment causes persisting periodic disturbances in
the system. While power grids are designed to damp most perturbations
during standard operation, some of them can excite normal modes of the system
and cause significant energy transfers across the system, creating large
oscillations thousands of miles away from the source. Localization of the
source of such disturbances remains an outstanding challenge due to a limited
knowledge of the system parameters outside of the zone of responsibility of
system operators. Here, we propose a new method for locating the source of
forced oscillations which addresses this challenge by performing a
simultaneous dynamic model identification using a principled maximum
likelihood approach. We illustrate the validity of the algorithm on a variety
of examples where forcing leads to resonance conditions in the system
dynamics. Our results establish that an accurate knowledge of system
parameters is not required for a successful inference of the source and
frequency of a forced oscillation. We anticipate that our method will find a
broader application in general dynamical systems that can be well-described by
their linearized dynamics over short periods of time.
## Introduction
The power grid is indubitably one of, if not the, greatest engineering achievements
of the past century, as recognized by the National Academy of Engineering [1].
It is an intricate and complex network the size of a continent, with
thousands of constituents that require constant supervision. An important
aspect of this surveillance is to ensure that voltage frequencies remain
within a narrow band ($50\pm 0.05$ Hz in Europe [2] and $60\pm 0.05$ Hz in the
U.S. [3]), as a violation of this requirement can cause significant damage
to vital assets and result in blackouts [4]. Voltage frequency fluctuations
are primarily caused by real-time imbalances between production and
consumption, such as variations in the charging of electric vehicles, or a
sudden gust at a wind farm. The power grid is designed to damp these random
electromechanical oscillations appearing during standard operating conditions
before they create a resonance with one of the normal modes of the system.
However, malfunctioning equipment or abnormal operating conditions can cause
periodic disturbances that would persist over time, creating an undesirable
transfer of energy across the system, an effect referred to as forced
oscillations.
Whereas most forced oscillations are localized to a particular area, some may
be close in frequency to one of the dominant normal modes, resulting in a
system-wide response and significant energy transfers [5]. Potential impacts
of these wide-area oscillations include equipment failure, inadvertent
tripping or control actions, and problems with the automatic generation
control. This is why fast and reliable location of the source of forced
oscillations is crucial in ensuring the safety and reliability of power grids.
However, it remains an outstanding challenge, even when forced oscillation
events are detected on the network. For instance, on January 11, 2019, a
forced oscillation event happened across the entire Eastern Interconnection on
the U.S power grid that was promptly noticed by the reliability coordinators.
Nonetheless, existing tools were ineffective at identifying the source
location, and a wide-area operator action did not contribute to mitigating the
event [6]. The root cause was later fortuitously identified as a faulty input
from a steam turbine at a combined-cycle power plant in Florida which forced
the system to oscillate for around 18 minutes before local plant personnel
removed the unit from service. The forced oscillations created by the faulty
turbine had a peak-to-peak amplitude of 200 MW at the generating unit, with
power swings of about 50 MW observed as far as the New England area [6], which
shows how these disturbances have the power to affect an entire continent. In
the case of the November 29, 2005 Western American Oscillation event, a forced
oscillation with amplitude 20MW originating from Alberta in Canada created a
resonance effect across the entire Western Interconnection. This led to
oscillations of amplitude 200 MW registered on the California-Oregon
Interface, thousands of miles away from the source [7]. This ability of forced
oscillations to cause disturbances at long distances and to be amplified by
the grid dynamics seriously complicates the search for their source. Forced
oscillations pose a permanent threat to the power grid with more than 20
large-scale events in the past 30 years documented in the U.S., with some of
them still lacking a well-identified root cause [8].
The increasing deployment of time-synchronous and distributed frequency
sensors in the grid, such as Phasor Measurement Units (PMUs) [9], presents an
opportunity for developing advanced data-driven and automated detection and
localization techniques of forced oscillations. In the majority of cases,
_detection_ of forced oscillations poses a limited challenge as they can be
directly observed from the Fourier peaks in the signal spectrum [10]. However,
forced oscillations need to be differentiated from weakly damped normal modes,
or free oscillations [11, 12, 13]. Weakly damped modes are typically analyzed
through Prony analysis [14], and are mitigated via preventative measures by
power system operators. On the other hand, the _localization_ of forced
oscillations constitutes a much tougher challenge. A complete and perfect
knowledge of the system and its dynamics would allow one to locate the source
of a forcing [15, 16, 17]. However, an accurate instantaneous knowledge of the
power grid dynamics appears as too strong of an assumption given that the
system parameters can fluctuate on the scale of tens of minutes due to local
feedback control or temperature variations [18], while the details on the
system topology may be unavailable outside of the zone of responsibility for
reliability coordinators. Different methods have been proposed to circumvent
this lack of information about the system. Some techniques are based on local
physical properties such as the monitoring of front arrival times [19], the
evaluation of energy flow [20, 21], signal decomposition [22], or verification
of the linear relation between voltage and current by multiple PMUs [23, 24].
Unfortunately, these methods can be very sensitive to modeling errors such as
an inaccurate assessment of the fluctuation propagation speed, or can fail to
localize perturbations that are amplified by normal modes. Black-box machine
learning methods [25, 26, 27] have been developed with the aim of being fully
model-agnostic, but suffer from a prohibitive requirement in training examples
of forced oscillations events. These various shortcomings motivated recent
calls from system operators and regulators to develop robust tools for
performing forced oscillation analysis and localization [6, 28], which led to
a further exploration [29, 30]. Nevertheless, a correct localization of the
source in the case where dominant system modes are excited due to the
resonance phenomenon remains an outstanding challenge.
In this paper, we propose a new principled method of detection and
localization of forced oscillations which is agnostic to the knowledge of the
system topology and parameters, fully capable of identifying the source of
distant normal modes excitation, and which does not rely on any offline
training. Our method operates within a much broader framework that extends
rather universally to any dynamical system that can be well-described over
short period of time by its linearized dynamics. This makes it a method of
choice for the detection and localization of energy transfers created by small
disturbances on a large class of complex networks. The key insight in our
method consists in leveraging random frequencies fluctuations naturally
present in the system to build in real time an effective dynamic model of the
network. The dynamic model identification and source localization are
performed at the same time using a principled maximum likelihood approach. We
illustrate the performance of our approach on a number of examples where
forcing excites the natural system modes and create a resonance phenomenon, as
well as on a real-world PMU data set.
Figure 1: A toy example with three nodes illustrating the challenges and
solutions for locating the source of forced oscillations in a network of
coupled oscillators. (a) A forced oscillation is induced on the orange node
with frequency $f=0.16$ [${\rm s}^{-1}$]. Time series of the network states
corresponding to generalized positions $\bm{x}_{t}$ and momentum $\bm{p}_{t}$
(corresponding to phase deviations $\bm{\theta}_{t}$ and frequency deviations
$\bm{\omega}_{t}$ in power grids, see _Supplementary Information_ , section S1
[31]) presented in panels (b) and (c), respectively, are generated using Eq.
(1) and the parameters detailed in the _Supplementary Information_ , section
S2 [31]. (d) A naïve Fourier analysis does not allow one to identify either
the forcing source or location: the Fourier transform of the position time
series displays its largest peak at the blue node with the frequency $f=0.08$
$[{\rm s}^{-1}]$, and the Fourier transform of the momentum time series
displays peaks at the orange node, with a frequency $f=0.8$ [${\rm s}^{-1}$].
(e) An eigendecomposition of the dynamic state matrix shows the natural
frequencies (eigenvalues) and modes (eigenvectors, with the size of the nodes
indicating the magnitude of the respective components) of the system. This
analysis reveals the reason for the observed behavior of the Fourier
transforms: the forcing frequency is close to a natural frequency of the
system, thus creating a resonance effect and exciting the natural modes of the
system. (f) Our source localization algorithm confidently points to the
correct source (orange node) and frequency (indicated with a dashed gray line)
of the forced oscillation. The accuracy of the determination of the forcing
frequency $f$ is fundamentally defined by the duration of the available time
series.
## Challenges and solutions for locating the oscillation source
Learning of the dynamics of linear stochastic systems lies at the foundation
of our approach. In the ambient regime of fluctuations, a complex system such
as a power grid, with generally non-linear dynamics, is typically found in a state
close to a stable equilibrium. In this ambient regime of small
perturbations, the dynamics of the system is well described by a linear
stochastic equation corresponding to a celebrated model of coupled harmonic
oscillators. Unsurprisingly, this family of models includes the popular
swing equations describing the ambient dynamics of generators in a
transmission power grid, which are derived under the assumption of small
deviations from the steady state [32, 33] (see _Supplementary Information_ ,
section S1 [31] for more details).
We consider the linear stochastic network dynamics with an additional forcing
at a single node $l$, indexed by $\bm{e}_{l}$, the canonical basis vector with
nonzero $l$th component. Mathematically, this dynamics is described by the
following linear stochastic differential equation,
$\displaystyle\bm{M}d\bm{p}_{t}$
$\displaystyle=\bm{D}\bm{p}_{t}dt+\bm{L}\bm{x}_{t}dt+\gamma\bm{e}_{l}\cos(2\pi(ft+\phi))dt+d\bm{W}_{t},$
(1)
where $\bm{x}_{t}$ represents the network state variables (for a power grid,
they correspond to deviations of phases from the steady state values),
$\bm{p}_{t}dt=d\bm{x}_{t}$ is the generalized momentum (deviation of
frequencies from the steady state values in a power grid), $\bm{M}$, $\bm{D}$,
$\bm{L}$ are generalized mass, damping, and network coupling parameters,
$\gamma$, $f$, and $\phi$ are the amplitude, frequency and phase of the
forcing, respectively, and $\bm{W}_{t}$ is a Wiener noise process describing
random fluctuations.
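As an illustration, the forced dynamics (1) can be integrated with a simple Euler-Maruyama scheme. All parameter values below (network size, coupling, masses, damping, forcing, noise level) are made up for this sketch and are not those of the paper's test cases:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 3-node system (all parameters are hypothetical)
n = 3
L = -np.array([[ 2.0, -1.0, -1.0],
               [-1.0,  2.0, -1.0],
               [-1.0, -1.0,  2.0]])     # network coupling (negative Laplacian)
M = np.diag([1.0, 1.2, 0.8])            # generalized masses
D = -0.4 * np.eye(n)                    # damping
gamma, f, phi, src = 0.5, 0.16, 0.0, 0  # forcing amplitude, frequency, phase, source
sigma, dt, T = 0.05, 0.01, 200.0        # noise level, time step, horizon

Minv = np.linalg.inv(M)
steps = int(T / dt)
x = np.zeros((steps, n))   # generalized positions (phase deviations)
p = np.zeros((steps, n))   # generalized momenta (frequency deviations)
e_l = np.zeros(n); e_l[src] = 1.0

# Euler-Maruyama integration of Eq. (1): M dp = D p dt + L x dt + forcing dt + dW
for t in range(steps - 1):
    force = gamma * e_l * np.cos(2 * np.pi * (f * t * dt + phi))
    dW = sigma * np.sqrt(dt) * rng.standard_normal(n)
    p[t + 1] = p[t] + Minv @ (D @ p[t] * dt + L @ x[t] * dt + force * dt + dW)
    x[t + 1] = x[t] + p[t] * dt
```

Fourier transforming the resulting `x` and `p` series reproduces the kind of ambiguous spectra discussed around Figure 1 when `f` is close to a natural frequency of the system.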
Our goal is to reconstruct the oscillation frequency and the location of the
forcing source from the measured time series of $\bm{x}_{t}$ and $\bm{p}_{t}$
of length $T$ at a sequence of $N$ discrete time steps
$t\in\\{t_{1},...,t_{N}\\}$. We do not assume any knowledge of the system
parameters or topology, which represents a realistic scenario in power grids:
instantaneous awareness of system parameters are almost never available to
system operators [18]. Hence, we do not suppose any knowledge on the inertia
$\bm{M}$, damping $\bm{D}$, or grid Laplacian matrix $\bm{L}$. Obviously, we
also do not assume any knowledge of the parameters related to the forcing,
i.e., $\gamma$, $l$, $f$, or $\phi$. All these parameters of interest need to
be recovered from the noisy data $\\{\bm{x}_{t}\\}$ and $\\{\bm{p}_{t}\\}$.
In Figure 1, we illustrate with a toy example of a three-node network the key
mechanism that renders the localization of forced oscillations elusive to
standard signal processing analysis. Namely, a forcing with a frequency close
to a natural frequency of the system may excite the natural modes of the
system peaked at a different node. As a result, neither the correct forcing
frequency nor the source of the oscillations are evident from the Fourier
spectrum in Figure 1 (d). This effect is reminiscent of forced oscillation
events in power grids such as those of November 29, 2005 and January 11, 2019
discussed above, where large perturbations can be seen far from the source and
at a very different frequency.
To address this challenge, we develop a principled method for determining the
oscillation frequency $f$ and locating the source of forced oscillations based
on a maximum likelihood approach (see _Methods_ and _Supplementary
Information_ , section S3 [31] for a detailed derivation). A naïve real-space
estimator leads to a complicated non-convex optimization problem in frequency,
as we explain in the _Supplementary Information_ , section S4 [31]. A key
insight which leads to an efficient solution consists in realizing that the
finite length of the time series imposes a finite resolution on the
frequencies. This leads to a tractable formulation featuring the Fourier
transformed quantities, which can be efficiently solved with the state-of-the-
art interior-point method solvers (see _Methods_).
Knowledge of the number of sources leads to a discrete formulation of the
problem, where optimization is run for every node (or every pair of nodes if
two sources of forcing are present, _etc._). We refer to this method as to the
_System Agnostic Localization of Oscillations (SALO)_ algorithm. SALO approach
is fully parallelizable over all nodes in the network, making it the method of
choice for a high performance computing system. An example of an application
of the SALO framework is given in Figure 1 (e) for our toy resonance example:
both the source and the frequency of forced oscillations are unambiguously and
correctly identified. For streaming applications where a faster identification
is desired, we consider a computationally advantageous relaxation of the
problem. In this SALO-relaxed version of the algorithm, all nodes are formally
allowed to be a source with a respective amplitude $\gamma_{i}$ for each node
$i$ (see _Methods_ for more details). In what follows, we benchmark the method
on a number of simulated and real use cases.
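The actual SALO estimator is a joint maximum-likelihood fit in the Fourier domain (see _Methods_); the simplified sketch below only conveys the general idea of combining model identification with a scan over candidate sources and frequencies, and is not the algorithm of the paper. For each node's momentum equation it compares the residual sum of squares of a linear fit with and without sinusoidal forcing regressors at each candidate frequency, and scores the improvement; the largest improvement flags the source and frequency. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# --- generate synthetic data from a forced linear model (toy parameters) ---
n, dt, steps = 3, 0.02, 20000
L = -np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])
D = -0.5 * np.eye(n)
gamma, f_true, src, sigma = 1.0, 0.1, 1, 0.05
x = np.zeros((steps, n)); p = np.zeros((steps, n))
for t in range(steps - 1):
    force = np.zeros(n); force[src] = gamma * np.cos(2 * np.pi * f_true * t * dt)
    dW = sigma * np.sqrt(dt) * rng.standard_normal(n)
    p[t + 1] = p[t] + (D @ p[t] + L @ x[t] + force) * dt + dW
    x[t + 1] = x[t] + p[t] * dt

# --- joint model identification and forcing scan (simplified, row-wise) ---
tgrid = np.arange(steps - 1) * dt
Z = np.hstack([x[:-1], p[:-1]])           # regressors: current full state
freqs = np.arange(20, 101) / (steps * dt) # candidate frequencies on the FFT grid
scores = np.zeros((n, len(freqs)))
for i in range(n):
    y = p[1:, i]
    base_rss = np.sum((y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]) ** 2)
    for k, f in enumerate(freqs):
        Zf = np.hstack([Z, np.cos(2 * np.pi * f * tgrid)[:, None],
                        np.sin(2 * np.pi * f * tgrid)[:, None]])
        rss = np.sum((y - Zf @ np.linalg.lstsq(Zf, y, rcond=None)[0]) ** 2)
        scores[i, k] = base_rss - rss     # likelihood-ratio-like improvement
i_hat, k_hat = np.unravel_index(np.argmax(scores), scores.shape)
print(i_hat, freqs[k_hat])
```

Because the forcing enters only one row of the dynamics, the score improvement concentrates at the source node and at the true frequency, even though the raw Fourier spectra of the states need not peak there.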
## Tests on synthetic systems and real data
Figure 2: Detection and localization of forced oscillations under the
resonance conditions. (a) Synthetic test case with a topology inspired by the
UK high-voltage grid, designed to reproduce features observed in real
oscillatory events where oscillations interacted with the system modes.
Details on the network parameters are given in the _Supplementary Information_
, section S2 [31]. The forcing at the orange node results in a highest
response in Fourier spectra at the opposite side of the network, as shown for
the Fourier components of the (b) generalized state and of the (c) generalized
momentum. (d) SALO algorithm confidently identifies the correct forcing
frequency and source without any knowledge of the system topology or
parameters. The envelope of scores for non-highlighted nodes is shown in gray.
An effect where interaction between the forcing frequency and one of the
natural modes of the system leads to a peak response away from the forcing
source may arise in networks with a much more complex structure compared to
the three-node system illustrated in Figure 1. Such a resonance behavior
represents an outstanding challenge for detection algorithms whereby the peaks
in the Fourier spectrum may not only involve nodes far away from the source,
but also point to frequencies related to the natural modes of the system
rather than to the frequency of the forcing. We showcase this phenomenon on a
synthetically generated data set according to the model (1) on a network
inspired by the UK high-voltage grid, see Figure 2. This test case
demonstrates some of the features observed in the 2005 Western Interconnection
and in the 2019 Eastern Interconnection oscillation events. In particular, the
largest amplitudes in the Fourier components can be located very far away from
the source, at distances comparable to the diameter of the network. On the
other hand, we see that the maximum likelihood based SALO method identifies
the correct forcing frequency and the correct source, for which the Fourier
signals are otherwise completely hidden among the responses of other nodes in
the network, see Figure 2 (b), (c), and (d).
In the _Supplementary Information_ , section S5 [31], we further illustrate
the challenges of Fourier-based source localization under the resonance
phenomenon on a standard IEEE test case topology with 57 nodes. Yet in this
case again, as shown in Figure S3, the SALO method precisely and unequivocally
identifies the correct frequency and location of the forcing source, without
exploiting any prior knowledge on the system topology and parameters. We also
used this test case to demonstrate the performance of the SALO-relaxed version
of the algorithm, which accurately points to the correct location of the
source of forced oscillations, while benefiting from the computational
complexity of a single source verification under the full maximum likelihood
approach. This shows that the SALO-relaxed version can be used for a quicker
assessment of the forced oscillations once they have been detected in the
system, prior to running parallelized computations under the SALO framework.
We further show this computational advantage on a series of synthetic
instances of increasing size in Figure S4, whereby the ratio of run-times of
SALO and SALO-relaxed scales linearly with the system size.
Figure 3: Robustness of the SALO algorithm to model misspecification. (a) In
the case where the source of forced oscillations is outside of the observed
system (gray node), the algorithm still correctly identifies the forcing
frequency and the neighbors of the hidden source inside the visible system
(three overlapping peaks in likelihood at the forcing frequency correspond to
orange, purple, and blue nodes). (b) In the case of several sources, both
locations appear as peaks in the rescaled likelihood score at their respective
forcing frequencies. (c) In the case of non-sinusoidal forcing injected into
the system, the forcing location is still correctly identified, while the
complex nature of the forcing shows up as likelihood peaks at different
harmonics of the forcing signal. The envelope of scores for non-highlighted
nodes is shown in gray in all panels.
In the tests described above, we assumed that the time series have been
produced using the model (1), although the system parameters are not known to
the reconstruction algorithm. In particular, it is assumed that there is a
_single_ and _observed_ source node $l$, and that the forcing of the type
$\cos(2\pi(ft+\phi))$ is associated with a single frequency $f$. In Figure 3
we look at the results produced by SALO in situations where these assumptions
are violated. In Figure 3 (a), the source node is outside of the observable
system. We see that in this case, the SALO algorithm is still able to
correctly identify the forcing frequency, and points to the immediate
neighbors of the source node inside the observable system. Figure 3 (b)
demonstrates the case where two sources of oscillations at different nodes and
frequencies are simultaneously present in the system. Notably, two peaks
corresponding to both forcings appear in the rescaled likelihood score.
Finally, in Figure 3 (c), we show the rescaled likelihood scores for the input
forcing signals of different types. Even in this case, SALO correctly
identifies the source location, which shows up as several peaks in the
likelihood at different harmonics of the input forcing signal. This makes the
algorithm
remarkably robust to the assumptions behind the model. In the _Supplementary
Information_ , section S6 [31], we show an example of an application of the
SALO algorithm to real data which display a combination of features observed
in Figure 3.
Another possible misspecification is the assumption of linearity of the
dynamics. For instance, real power systems do not exactly follow the linear
model of the type (1) with constant system parameters. Instead, a linear swing
model, which falls within the class (1) as discussed in the _Supplementary
Information_ , section S1 [31], is an approximation to the general non-linear
dynamics of generators under small deviations from the steady state, valid
over finite periods of time [18, 34]. However, with the understanding
that (1) only serves as an _effective_ model providing an adequate description
of the complex system dynamics over short time scales, we show that the SALO
method is still applicable to data from real transmission power grids with the
purpose of identifying the conditions pertaining to forced oscillations. Here,
we benchmark the SALO algorithm on an instance of real PMU data from a U.S.
transmission power grid with 200 nodes, where the presence of sustained
oscillations has been suggested in previous studies. A Fourier-type analysis
of the PMU data from a U.S. Independent System Operator showed the presence of
sustained oscillations with a frequency in the 4-6 $[{\rm s}^{-1}]$ range, responsible
for the emergence of correlations between several node clusters [35]. The
analysis of these time series with the SALO method, see Figure 4, confidently
points to a single source of oscillations, with a frequency close to the range
previously identified in [35]. Incidentally, although the ground truth for
this system is unknown, the identified node is consistently pointed to as the
most likely source of sustained oscillations even for PMU data separated by a
time interval of about a month, see Figure 4 (b) and (c). An extended range of
candidate frequencies likely reflects fundamental limits on frequency
resolution set by the data, as exemplified in the _Supplementary Information_ ,
section S4 [31], where shorter time series lead to a wider log-likelihood
objective function in the frequency domain (see Figure S1).
Figure 4: Identifying the location of forced oscillations for PMU data from a
U.S. power system operator. (a) Geographical layout of PMU sensors in the
system providing time series data (anonymized and modified coordinates). The
node identified as the most likely source of forced oscillations is
highlighted in orange. (b) According to the SALO algorithm, the identified
node has a much higher likelihood for being a source for a range of
frequencies in the vicinity of $f=4.6$ $[{\rm s}^{-1}]$ compared to the rest
of the nodes in the network (corresponding likelihoods depicted in gray). A
finite range of candidate frequencies is a realistic feature that may emerge
in real systems due to finite length of the collected data. (c) The algorithm
consistently points to the same candidate source in a similar frequency range
even for time series collected over two time periods separated by almost a
month (March 2013 vs. April 2013). This consistency strongly indicates that
sustained oscillations may originate from a single faulty component of the
system. The envelope of scores for other non-highlighted nodes in the system
is shown in gray.
## Conclusions
We proposed a rigorous maximum-likelihood-based framework enabling
simultaneous system identification and localization of the source of forced
oscillations, without any prior knowledge of the system topology or
parameters. In particular, our method is able to perfectly locate
the oscillation source, even in the case where the forcing excites one of the
natural modes of the system and creates an amplitude peak at a far-away node
and at a different frequency. This scenario is reminiscent of some of the
real-world historical events, such as the 2005 Western Interconnection event,
which turned out to be the most challenging from the oscillation source
localization perspective. The ease of parallelization and a relaxed version of
the algorithm make the method scalable to large network instances and
multiple sources, while robustness to modeling assumptions makes the algorithm
applicable to data produced by real-world dynamical systems.
The SALO algorithm can be naturally adapted to the situation where additional
prior information is available. For instance, the matrices $\bm{M}$, $\bm{D}$,
and $\bm{L}$ do not need to be reconstructed from data if prior knowledge of
the network structure and parameters is available. In the _Supplementary
Information_ , section S7 [31], we show that under this scenario, the SALO
algorithm is able to identify the correct source and frequency of the forcing
using significantly less data compared to the most challenging setting
considered in this work, where no prior information on the system is
available. Similarly, knowledge of the disturbance type or number of sources
can also be directly incorporated into the algorithm. In the context of power
grid applications, it could be practical to extend the SALO method to the case
of partial sensor coverage, as well as to source localization under a
model that specifically takes into account different types of generators,
buses, and PMUs in the grid. For instance, the primary setting considered in
our work assumes that all buses have non-zero inertia, which is true for
generators in the power grid, but not necessarily for other buses, e.g. those
representing loads.
Due to the wider applicability of our approach to general stochastic linear
dynamics and coupled harmonic oscillators, we anticipate that our methods will
find a broader range of applications beyond power grids, e.g., vehicular
platoons subjected to malfunctioning elements or malicious attacks, and more
broadly multi-agent systems where automated units must reach an overall
consensus [36], as well as forced oscillations in wave propagation dynamics.
## Methods
### Model.
The dynamic equation (1) can be reformulated as a first-order linear dynamic
system
$\displaystyle d\bm{X}_{t}$
$\displaystyle=\bm{A}\bm{X}_{t}dt+\gamma\bm{e}_{l}{\rm Re}\left(e^{2\pi
i(ft+\phi)}\right)dt+d\bm{W}_{t},$ (2)
where we used $\bm{p}_{t}dt=d\bm{x}_{t}$, and denoted
$\bm{X}_{t}=(\bm{x}_{t},\bm{p}_{t})$. Discretizing Eq. (2) with an Euler-
Maruyama approximation scheme over $N$ points yields the finite difference
equation, for $j=0,...,N-1$,
$\displaystyle\bm{\Delta}_{t_{j}}$
$\displaystyle=\bm{A}\bm{X}_{t_{j}}+\gamma\bm{e}_{l}{\rm Re}\left(e^{2\pi
i(k\frac{j}{N}+\phi)}\right)+\bm{\xi}_{j}\,,$ (3)
where $\bm{\Delta}_{t_{j}}=(\bm{X}_{t_{j+1}}-\bm{X}_{t_{j}})/\tau$ for
$\tau=T/N$, we assumed $t_{j}=j\tau$, and the frequency relates to the integer
$0<k<N/2$ as $k=fT$. As discussed in the problem formulation above, the
measurements $\\{\bm{X}_{j\tau}\\}_{j=0,\ldots,N-1}$ are assumed to be
available, but the parameters of the system $\bm{A}$, $\gamma$, $l$, $f$, and
$\phi$ are unknown, and need to be estimated from the data.
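As an illustration, the discretized model (3) is straightforward to simulate. The sketch below (a minimal illustration with made-up toy parameters, not the test cases used in the paper) generates a trajectory of a small stable system with a sinusoidal forcing injected at one state component and forms the finite differences $\bm{\Delta}_{t_{j}}$:

```python
import numpy as np

def simulate(A, l, gamma, k, phi, N, tau, sigma, rng):
    """Euler-Maruyama discretization of Eq. (2); returns X_{t_j} and Delta_{t_j}."""
    dim = A.shape[0]
    X = np.zeros((N + 1, dim))
    for j in range(N):
        drift = A @ X[j]
        drift[l] += gamma * np.cos(2 * np.pi * (k * j / N + phi))  # Re(e^{2 pi i(kj/N+phi)})
        X[j + 1] = X[j] + tau * drift + np.sqrt(tau) * sigma * rng.standard_normal(dim)
    Delta = (X[1:] - X[:-1]) / tau  # finite differences of Eq. (3)
    return X[:-1], Delta

# Toy system: three coupled oscillators, state (x, p) of dimension 6 (arbitrary values).
rng = np.random.default_rng(0)
n = 3
K = np.array([[2.0, -0.5, -0.5], [-0.5, 2.0, -0.5], [-0.5, -0.5, 2.0]])
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -0.5 * np.eye(n)]])
X, Delta = simulate(A, l=4, gamma=2.0, k=100, phi=0.3, N=2048, tau=0.02, sigma=0.2, rng=rng)
print(X.shape, Delta.shape)  # (2048, 6) (2048, 6)
```

Subtracting the drift $\bm{A}\bm{X}_{t_{j}}$ and the forcing from $\bm{\Delta}_{t_{j}}$ leaves residuals whose per-component standard deviation is close to $\sigma/\sqrt{\tau}$, the effective noise level entering the regression.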
### Reconstruction algorithm.
Given that the increments of the Wiener process over intervals of length
$\tau$ are i.i.d. Gaussian with zero mean and variance $\tau$, so that
$\bm{\xi}_{j}\sim{\cal N}(0,\tau^{-1})$ in (3), the negative log-likelihood
reads, up to an additive constant and an overall scale,
$\displaystyle
L\left(\bm{A},\gamma,l,k,\phi~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$
$\displaystyle=\frac{1}{N}\sum_{j=0}^{N-1}\left\|\bm{\Delta}_{t_{j}}-\bm{A}\bm{X}_{t_{j}}-\gamma\bm{e}_{l}{\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\right\|^{2}.$ (4)
Note that the discretized set of frequencies, appearing in the forcing term as
a result of the finiteness of $T$ and $N$, is crucial, because optimization over a
continuous variable $f$ leads to a hard nonlinear optimization problem, as
exemplified in the _Supplementary Information_ , section S4 [31]. For a fixed
frequency $k$ and node $l$, the joint minimization over $\bm{A}$, $\gamma$,
and $\phi$ remains challenging to solve as the negative log-likelihood
possesses numerous local minima. Nevertheless, the global minimization of (4)
over the phase $\phi$, as a function of the remaining parameters, can be
performed exactly, leading to the following expression for the partial
negative log-likelihood:
$\displaystyle L_{\rm
SALO}\left(\bm{A},\gamma,l,k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=0}^{N-1}\right)$
$\displaystyle={\rm Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm
Tr}(\bm{A}\Sigma_{1})+\frac{1}{2}\gamma^{2}-\frac{2\gamma}{\sqrt{N}}\sqrt{{\rm
Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)}\,,$
(5)
see _Supplementary Information_ , section S3 [31] for details of the
derivation and the definitions of all the quantities that can be directly
computed from the available time series $\\{\bm{X}_{j\tau}\\}_{j=0,\ldots,N-1}$.
For a fixed $l$ and $k$, the expression (5) remains a non-convex function in
the arguments $\bm{A}$ and $\gamma$. However, we have observed that this cost
function, unlike its naive counterpart (4), seems to always possess a single
minimum that can be efficiently found using state-of-the-art interior point
methods. In this work, we used the optimization software Ipopt [37] within the
Julia/JuMP modeling framework for mathematical optimization [38]. Note that
the optimization is fully parallelizable over $k$ and $l$. We refer to the
minimization of the partial negative log-likelihood as the SALO method.
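To make the pipeline concrete, the following sketch implements a simplified one-shot variant of the scan (an illustration only: here $\bm{A}$ is estimated once by ordinary least squares rather than re-optimized for every hypothesis, and all parameter values are made-up). For fixed $\bm{A}$, minimizing (5) over $\gamma$ shows that the best hypothesis $(l,k)$ is the one maximizing the square-root term, i.e. $|[\tilde{\bm{\Delta}}(k)]_{l}-[\bm{A}\tilde{\bm{X}}(k)]_{l}|$:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Synthetic data: stable linear system with forcing at state index l0, bin k0.
n, N, tau, sigma, gamma = 3, 2048, 0.02, 0.2, 2.0
l0, k0, phi = 4, 100, 0.3            # forcing enters one momentum component
K = np.array([[2.0, -0.5, -0.5], [-0.5, 2.0, -0.5], [-0.5, -0.5, 2.0]])
A_true = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -0.5 * np.eye(n)]])
d = 2 * n
X = np.zeros((N + 1, d))
for j in range(N):
    drift = A_true @ X[j]
    drift[l0] += gamma * np.cos(2 * np.pi * (k0 * j / N + phi))
    X[j + 1] = X[j] + tau * drift + np.sqrt(tau) * sigma * rng.standard_normal(d)
Delta = (X[1:] - X[:-1]) / tau
X = X[:-1]

# --- Step 1: one-shot estimate of A by ordinary least squares (Delta ~ A X).
A_hat = np.linalg.lstsq(X, Delta, rcond=None)[0].T

# --- Step 2: score every hypothesis (l, k) by |[Delta~(k)]_l - [A_hat X~(k)]_l|,
# the square-root term of Eq. (5); the residuals retain the injected forcing.
R = Delta - X @ A_hat.T
Rf = np.fft.fft(R, axis=0) / np.sqrt(N)   # DFT (sign convention differs only by conjugation)
scores = np.abs(Rf[1 : N // 2, :])        # rows: frequency bins k, cols: state indices l
k_hat, l_hat = np.unravel_index(np.argmax(scores), scores.shape)
k_hat += 1                                # bins start at k = 1
print(l_hat, k_hat)
```

On this toy instance the scan recovers the planted pair $(l_{0},k_{0})$; the full SALO method additionally re-optimizes $\bm{A}$ and $\gamma$ jointly for each hypothesis.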
We also consider an accelerated method referred to as the SALO-relaxed method,
which exploits the spatial relaxation of the problem in the variable $\gamma$.
Under this variant of the algorithm, all nodes are formally allowed to be a
source, and their relative likelihood of being the source is encoded by a
respective amplitude $\gamma_{l}$ for each node $l$ (so that the regression
now runs over the vector $\bm{\gamma}$):
$\displaystyle L_{\rm
SALOr}\left(\bm{A},\bm{\gamma},k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$
$\displaystyle={\rm Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm
Tr}(\bm{A}\Sigma_{1})+\sum_{l=1}^{n}\left[\frac{\gamma_{l}^{2}}{2}-\frac{2\gamma_{l}}{\sqrt{N}}\sqrt{{\rm
Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)}\right]\,.$
(6)
We benchmark this method in the _Supplementary Information_ , section S5 [31],
where we also discuss the computational speed-up of SALO-relaxed compared to
the SALO estimator.
## Data availability
All data that support the plots within this paper and other findings of this
study are available from the authors on reasonable request.
## Code availability
Code implementing SALO and SALO-relaxed algorithms in Julia can be found at
the following Github repository: https://github.com/lanl-ansi/SALO.
## Acknowledgements
This work was supported by U.S. DOE/OE as part of the DOE Advanced Sensor and
Data Analytics Program and by the Laboratory Directed Research and Development
program of Los Alamos National Laboratory under project numbers 20220797PRD2,
20210078DR, and 20220774ER. RD was supported by the Swiss National Science
Foundation, under grant nr. P400P2_194359. The authors are grateful to Prof.
Daniel Bienstock from Columbia University, and Dr. Yilu Liu and Dr. Wenpeng
“Wayne” Yu from the Power IT Lab at Oak Ridge National Lab and UT Knoxville
for sharing real PMU data used in this work. The authors also thank Dr. Slava
Maslennikov from ISO New England and Prof. Michael Chertkov from University of
Arizona for fruitful discussions.
## Author contributions
All authors designed and performed the research, wrote the manuscript,
reviewed and edited the paper.
## Competing interests
The authors declare no competing interests.
## Additional information
Supplementary information is available for this paper [31].
Correspondence and requests for materials should be addressed to MV.
## References
* Constable and Somerville [2003] G. Constable and B. Somerville, _A century of innovation: Twenty engineering achievements that transformed our lives_ (Joseph Henry Press, 2003).
* EN [5] Standard EN 50160 – Voltage Characteristics of Public Distribution Systems, https://www.se.com/ww/library/SCHNEIDER_ELECTRIC/SE_LOCAL/APS/204836_1312/DraftStandard0026rev2-DraftEN501602005-05.pdf, accessed: 2022-10-25.
* Kirby _et al._ [2003] B. J. Kirby, J. Dyer, C. Martinez, R. A. Shoureshi, R. Guttromson, J. Dagle, _et al._ , _Frequency control concerns in the North American electric power system_ (United States. Department of Energy, 2003).
* Alizadeh Mousavi _et al._ [2012] O. Alizadeh Mousavi, R. Cherkaoui, and M. Bozorg, Blackouts risk evaluation by Monte Carlo simulation regarding cascading outages and system frequency deviation, Elec. Power Syst. Research 89, 157 (2012).
* Sarmadi and Venkatasubramanian [2015] S. A. N. Sarmadi and V. Venkatasubramanian, Inter-area resonance in power systems from forced oscillations, IEEE Trans. Power Syst. 31, 378 (2015).
* [6] NERC report: Eastern Interconnection Oscillation Disturbance January 11, 2019 Forced Oscillation Event, https://www.nerc.com/pa/rrm/ea/Documents/January_11_Oscillation_Event_Report.pdf, accessed: 2022-04-26.
* Sarmadi _et al._ [2016] S. A. N. Sarmadi, V. Venkatasubramanian, and A. Salazar, Analysis of November 29, 2005 Western American oscillation event, IEEE Trans. Power Syst. 31, 5210 (2016).
* Ghorbaniparvar [2017] M. Ghorbaniparvar, Survey of forced oscillations in power systems, J. Mod. Syst. Clean Energy 5, 671 (2017).
* Sauer _et al._ [2017] P. W. Sauer, M. A. Pai, and J. H. Chow, _Power System Dynamics and Stability: With Synchrophasor Measurement and Power System Toolbox_ (John Wiley & Sons, 2017).
* Zhou and Dagle [2015] N. Zhou and J. Dagle, Initial results in using a self-coherence method for detecting sustained oscillations, IEEE Trans. Power Syst. 30, 522 (2015).
* Wang and Turitsyn [2016] X. Wang and K. Turitsyn, Data-driven diagnostics of mechanism and source of sustained oscillations, IEEE Trans. Power Syst. 31, 4036 (2016).
* Xie and Trudnowski [2017] R. Xie and D. J. Trudnowski, Distinguishing between natural and forced oscillations using a cross-spectrum index, Proc. of the IEEE PESGM (2017).
* Ye _et al._ [2017] H. Ye, Y. Lie, P. Zhang, and Z. Du, Analysis and detection of forced oscillation in power system, IEEE Trans. Power Syst. 32, 1149 (2017).
* Hauer _et al._ [1990] J. F. Hauer, C. J. Demeure, and L. L. Scharf, Initial results in Prony analysis of power system response signals, IEEE Trans. Power Syst. 5, 80 (1990).
* Nudell and Chakrabortty [2013] T. R. Nudell and A. Chakrabortty, A graph-theoretic algorithm for disturbance localization in large power grids using residue estimation, in _2013 American Control Conference (ACC)_ (IEEE, 2013) pp. 3467–3472.
* Cabrera _et al._ [2017] I. R. Cabrera, B. Wang, and K. Sun, A method to locate the source of forced oscillations based on linearized model and system measurements, Proc. of the IEEE PESGM (2017).
* Delabays _et al._ [2021] R. Delabays, L. Pagnier, and M. Tyloo, Locating line and node disturbances in networks of diffusively coupled dynamical agents, New J. Phys. 23, 043037 (2021).
* Lokhov _et al._ [2018] A. Y. Lokhov, M. Vuffray, D. Shemetov, D. Deka, and M. Chertkov, Online learning of power transmission dynamics, in _2018 Power Systems Computation Conference (PSCC)_ (2018) pp. 1–7.
* Semerow _et al._ [2016] A. Semerow, S. Horn, B. Schwarz, and M. Luther, Disturbance localization in power systems using wide area measurement systems, in _IEEE International Conference on Power System Technology (POWERCON)_ (IEEE, 2016).
* Chen _et al._ [2013] L. Chen, Y. Min, and W. Hu, An energy-based method for location of power system oscillation source, IEEE Trans. Power Syst. 28, 828 (2013).
* Maslennikov _et al._ [2017] S. Maslennikov, B. Wang, and E. Litvinov, Dissipating energy flow method for locating the source of sustained oscillations, Int. J. Elec. Power & Energy Syst. 88, 55 (2017).
* Huang _et al._ [2018] T. Huang, N. M. Freris, P. R. Kumar, and L. Xie, Localization of forced oscillations in the power grid under resonance conditions, Proc. of the IEEE CISS (2018).
* Chevalier _et al._ [2018] S. Chevalier, P. Vorobev, and K. Turitsyn, Using effective generator impedance for forced oscillation source location, IEEE Trans. Power Syst. 33, 6264 (2018).
* Chevalier _et al._ [2019] S. Chevalier, P. Vorobev, and K. Turitsyn, A bayesian approach to forced oscillation source location given uncertain generator parameters, IEEE Trans. Power Syst. 34, 1641 (2019).
* Chen and Maun [2000] Z. Chen and J.-C. Maun, Artificial neural network approach to single-ended fault locator for transmission lines, IEEE Trans. Power Syst. 15, 370 (2000).
* Cardoso _et al._ [2004] G. Cardoso, J. G. Rolim, and H. H. Zürn, Application of neural-network modules to electric power system fault section estimation, IEEE Trans. Power Deliver. 19, 1034 (2004).
* Lee _et al._ [2018] H.-W. Lee, J. Zhang, and E. Modiano, Data-driven localization and estimation of disturbance in the interconnected power system, in _IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm)_ (IEEE, 2018).
* [28] 2021 IEEE-NASPI Oscillation Source Location Contest, http://web.eecs.utk.edu/~kaisun/Oscillation/2021Contest/, accessed: 2022-04-26.
* Estevez _et al._ [2021a] P. G. Estevez, P. Marchi, C. Galarza, and M. Elizondo, Non-stationary power system forced oscillation analysis using synchrosqueezing transform, IEEE Trans. Power Syst. 36, 1583 (2021a).
* Estevez _et al._ [2021b] P. G. Estevez, P. Marchi, F. Messina, and C. Galarza, Forced oscillation identification and filtering from multi-channel time-frequency representation, arXiv preprint arXiv:2108.08736 (2021b).
* [31] See supplementary material below for the dynamical model used to approximate the system; the parameters of the test cases; the mathematical derivation of the SALO and SALO-relaxed algorithms; details about the shape of the likelihood function; performance analysis of the SALO algorithm; further results on real PMU data; and improvement of the algorithm using prior information on the system.
* Bergen and Vittal [2000] A. R. Bergen and V. Vittal, _Power Systems Analysis_ (Prentice Hall, 2000).
* Kundur _et al._ [1994] P. Kundur, N. J. Balu, and M. G. Lauby, _Power system stability and control_ , Vol. 7 (McGraw-Hill, New York, 1994).
* Hannon _et al._ [2021] C. Hannon, D. Deka, D. Jin, M. Vuffray, and A. Y. Lokhov, Real-time anomaly detection and classification in streaming PMU data, in _2021 IEEE Madrid PowerTech_ (IEEE, 2021) pp. 1–6.
* Escobar _et al._ [2019] M. Escobar, D. Bienstock, and M. Chertkov, Learning from power system data stream, Proc. of the IEEE PowerTech (2019).
* Tanner _et al._ [2003] H. G. Tanner, A. Jadbabaie, and G. J. Pappas, Stable flocking of mobile agents, Part I: fixed topology, in _42nd IEEE International Conference on Decision and Control_ , Vol. 2 (2003) pp. 2010–2015.
* Wächter and Biegler [2006] A. Wächter and L. T. Biegler, On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming, Math. Progr. 106, 25 (2006).
* Dunning _et al._ [2017] I. Dunning, J. Huchette, and M. Lubin, JuMP: A modeling language for mathematical optimization, SIAM Review 59, 295 (2017).
## Supplementary Information
## Appendix A Linear second-order dynamics and relation to swing equations
In this section, we show that the linearized swing equations describing the
voltage phase dynamics in the ambient regime [32, 33] fall within the class of
linear dynamic models [Eq. (1) in the main text] that we consider in this
work. When the system remains in the vicinity of a steady-state fixed point,
the linearized equations give a fair approximation of the swing dynamics [32,
Sec. 14.3] at node $i\in\\{1,...,n\\}$:
$\displaystyle\begin{split}\dot{\theta}_{i}&=\omega_{i},\\\
m_{i}\dot{\omega}_{i}+d_{i}\omega_{i}&=-\sum_{j=1}^{n}b_{ij}(\theta_{i}-\theta_{j})+\eta_{i}\,,\end{split}$
(7)
where $\theta_{i}$ and $\omega_{i}\coloneqq\dot{\theta}_{i}$ are respectively
phase and frequency deviations from the fixed point, $m_{i}$ and $d_{i}$ are
respectively the effective inertia and damping, $b_{ij}$ is the susceptance of
the line between nodes $i$ and $j$ (zero if they are not connected), and
$\eta_{i}$ accounts for additive disturbances, including noise (typically
considered white and Gaussian) and a potential forcing. Aggregating the phases
and frequencies in a vector
$\bm{X}_{t}\coloneqq(\bm{\theta}_{t}^{\top},\bm{\omega}_{t}^{\top})^{\top}$,
we write Eq. (7) as the stochastic differential equation, for $t\in[0,T]$,
$\displaystyle d\bm{X}_{t}$
$\displaystyle=\bm{A}\bm{X}_{t}dt+\gamma\bm{e}_{l}{\rm Re}\left(e^{2\pi
i(ft+\phi)}\right)dt+d\bm{W}_{t},$ (8) $\displaystyle\bm{A}$
$\displaystyle=\begin{pmatrix}\bm{0}&\mathbb{I}\\\
-\bm{M}^{-1}\bm{L}&-\bm{M}^{-1}\bm{D}\end{pmatrix}\,,$ (9)
where the matrices $\bm{M}$ and $\bm{D}$ are the diagonal matrices of
inertias and dampings respectively, $\bm{L}$ is the Laplacian matrix of the
grid, weighted by the susceptances, $\gamma$, $f$, and $\phi$ are the
amplitude, frequency and phase of the forcing respectively, $\mathbb{I}$ is
the identity matrix of appropriate size, $\bm{e}_{l}$ is the canonical basis
vector with nonzero $l$th component, and $\bm{W}_{t}$ is a Wiener process,
accounting for unpredictable disturbances. Therefore, this model is a
particular case of the dynamic model [Eq. (1) in the main text] considered in
this work.
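As a concrete sketch (with made-up inertias, dampings, and susceptances rather than any test case from the paper), the state matrix can be assembled directly from the swing-equation parameters of Eq. (7), whose first-order form gives $\dot{\bm{\omega}}=-\bm{M}^{-1}\bm{L}\bm{\theta}-\bm{M}^{-1}\bm{D}\bm{\omega}$:

```python
import numpy as np

def swing_state_matrix(m, d, edges, n):
    """Build the first-order state matrix from Eq. (7):
    theta' = omega,  omega' = -M^{-1} L theta - M^{-1} D omega."""
    L = np.zeros((n, n))
    for i, j, b in edges:            # susceptance-weighted Laplacian
        L[i, i] += b; L[j, j] += b
        L[i, j] -= b; L[j, i] -= b
    M_inv = np.diag(1.0 / np.asarray(m, float))
    D = np.diag(np.asarray(d, float))
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-M_inv @ L, -M_inv @ D]])

# Hypothetical 4-node ring (all parameter values are arbitrary choices).
edges = [(0, 1, 1.0), (1, 2, 1.5), (2, 3, 1.0), (3, 0, 0.5)]
A = swing_state_matrix(m=[1.0, 1.2, 0.8, 1.0], d=[0.3, 0.4, 0.3, 0.5], edges=edges, n=4)
# All modes are non-increasing: Re(lambda) <= 0, with one exact zero mode coming
# from the global phase-shift invariance (L annihilates the all-ones vector).
print(np.max(np.linalg.eigvals(A).real) <= 1e-9)  # True
```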
## Appendix B Parameters of the studied synthetic test cases
### Three-node test case in Fig. 1 (main text).
The dynamic state matrix $\bm{A}$ (see Eq. (8)) used for the example of Fig. 1
(main text) is given by:
$\displaystyle\bm{A}$ $\displaystyle=\begin{pmatrix}0&0&0&1&0&0\\\
0&0&0&0&1&0\\\ 0&0&0&0&0&1\\\
-\frac{417}{17}&\frac{40}{17}&-\frac{40}{17}&-\frac{1}{10}&0&0\\\
\frac{40}{17}&-\frac{21}{17}&\frac{4}{17}&0&-1&0\\\
-\frac{40}{17}&\frac{4}{17}&-\frac{21}{17}&0&0&-10\end{pmatrix}\,.$ (10)
It corresponds to the dynamics of three nodes, symmetrically coupled, with
damping parameters varying over two orders of magnitude. With these
parameters, selecting a forcing frequency close to the imaginary part of one
of the eigenvalues of the matrix $\bm{A}$ in Eq. (10) produces the largest
oscillation amplitude at a node that is not the source of the forcing, and at
a frequency that is not exactly that of the forcing but instead corresponds to
the imaginary part of the selected system mode.
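This behavior is easy to check numerically; the snippet below (a quick illustration, verifying only the stability of the matrix and the existence of oscillatory modes) computes the spectrum of the matrix in Eq. (10):

```python
import numpy as np

# State matrix of Eq. (10), entered with exact fractions.
A = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [-417/17,  40/17, -40/17, -1/10,  0,   0],
    [ 40/17, -21/17,   4/17,   0,   -1,   0],
    [-40/17,   4/17, -21/17,   0,    0, -10],
])
lam = np.linalg.eigvals(A)
print(np.max(lam.real) < 0)   # True: every mode decays, the system is stable
# The imaginary parts of the eigenvalues set the natural mode frequencies; a
# forcing frequency chosen near one of them excites the resonance described above.
print(np.sort(lam.imag[lam.imag > 0]))
```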
### UK high-voltage network test case in Fig. 2 (main text).
The network topology used to generate the time series in the results of Fig. 2
is a model of the UK high-voltage transmission grid. It contains 120 nodes
and 165 lines. Its adjacency list is given in Table 1.
1 2 1 21 120 1 42 118 1 68 67 1 77 78 1 95 97 1 115 116 1 1 5 1 120 22 1 117
118 1 54 66 1 75 74 1 95 96 1 80 81 1 2 3 1 22 26 1 117 35 1 66 65 1 72 76 1
97 96 1 16 20 1 3 4 1 22 28 1 27 28 1 65 64 1 76 78 1 97 98 1 20 22 1 3 11 1 9
24 1 28 29 1 64 63 1 78 79 1 94 107 1 35 36 1 5 4 1 8 36 1 29 30 1 63 62 1 79
80 1 96 100 1 25 33 1 4 6 1 7 42 1 29 31 1 62 61 1 78 80 1 99 100 1 25 26 1 6
7 1 7 38 1 31 59 1 64 56 1 79 81 1 100 104 1 24 25 1 7 8 1 38 39 1 117 49 1 56
57 1 81 89 1 100 101 1 24 34 1 8 9 1 39 40 1 49 48 1 57 62 1 75 82 1 101 102 1
26 120 1 11 10 1 38 40 1 48 46 1 57 58 1 82 83 1 102 104 1 46 118 1 11 119 1
40 41 1 46 47 1 58 59 1 83 84 1 102 103 1 56 65 1 119 10 1 41 37 1 46 45 1 59
60 1 84 85 1 104 105 1 68 69 1 10 12 1 37 35 1 45 43 1 60 61 1 83 86 1 104 106
1 27 31 1 10 9 1 35 34 1 45 49 1 60 74 1 86 87 1 107 108 1 93 94 1 9 19 1 34
33 1 45 66 1 61 74 1 87 88 1 108 106 1 12 13 1 33 32 1 49 50 1 73 74 1 87 90 1
107 109 1 13 14 1 32 27 1 50 51 1 73 63 1 90 91 1 108 109 1 14 15 1 26 27 1 50
54 1 73 72 1 89 95 1 109 112 1 15 16 1 32 58 1 54 55 1 73 75 1 90 92 1 112 111
1 16 17 1 33 57 1 54 52 1 72 71 1 91 92 1 110 111 1 17 18 1 34 44 1 52 51 1 71
70 1 92 93 1 111 113 1 17 19 1 43 44 1 52 53 1 70 69 1 92 94 1 114 113 1 19 21
1 44 57 1 54 68 1 70 66 1 93 95 1 114 112 1 19 23 1 42 43 1 54 67 1 69 77 1 94
95 1 114 115 1
Table 1: Adjacency list of the model of the UK high-voltage transmission grid.
Each cell of the seven columns gives the parameters of one line: the indices
of the two nodes at its ends, followed by the line capacity.
The time series are obtained by numerically solving Eq. (1) in the main text,
including a forcing at one bus.
## Appendix C Derivation of the SALO and SALO-relaxed algorithms
In this section, we provide the derivation of Eq. (5) from the _Methods_
section of the main text. Starting with the negative log-likelihood expression
(4), we have
$\displaystyle L$
$\displaystyle\left(\bm{A},\gamma,l,k,\phi~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$
$\displaystyle=\frac{1}{N}\sum_{j=0}^{N-1}\left[\bm{\Delta}_{t_{j}}-\bm{A}\bm{X}_{t_{j}}-{\gamma}\bm{e_{l}}{\rm
Re}\left(e^{2\pi
i(\frac{k}{N}j+\phi)}\right)\right]^{\top}\left[\bm{\Delta}_{t_{j}}-\bm{A}\bm{X}_{t_{j}}-{\gamma}\bm{e_{l}}{\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\right]\,$ (11)
$\displaystyle=\frac{1}{N}\sum_{j=0}^{N-1}\Big{[}\bm{\Delta}_{t_{j}}^{\top}\bm{\Delta}_{t_{j}}-2\bm{\Delta}_{t_{j}}^{\top}\bm{A}\bm{X}_{t_{j}}+\bm{X}_{t_{j}}^{\top}\bm{A}^{\top}\bm{A}\bm{X}_{t_{j}}+{\gamma}^{2}\bm{e_{l}}^{\top}\bm{e_{l}}\,\left[{\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\right]^{2}$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad-2{\gamma}\,\bm{e_{l}}^{\top}(\bm{\Delta}_{t_{j}}-\bm{A}\bm{X}_{t_{j}}){\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\Big{]}.$ (12)
Define the discrete Fourier transforms of the data
$\displaystyle\widetilde{\bm{X}}(k)$
$\displaystyle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}e^{2\pi
i\frac{k}{N}j}\bm{X}_{t_{j}}\,,$ (13)
$\displaystyle\widetilde{\bm{\Delta}}(k)$
$\displaystyle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}e^{2\pi
i\frac{k}{N}j}\bm{\Delta}_{t_{j}}\,.$ (14)
Then
$\displaystyle
L\left(\bm{A},\gamma,l,k,\phi~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$
$\displaystyle=\frac{1}{N}\sum_{j=0}^{N-1}\left[\bm{\Delta}_{t_{j}}^{\top}\bm{\Delta}_{t_{j}}-2\bm{\Delta}_{t_{j}}^{\top}\bm{A}\bm{X}_{t_{j}}+\bm{X}_{t_{j}}^{\top}\bm{A}^{\top}\bm{A}\bm{X}_{t_{j}}\right]$
(15) $\displaystyle+\frac{{\gamma}^{2}}{N}\sum_{j=0}^{N-1}\left[{\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\right]^{2}\,$ (16)
$\displaystyle-\frac{2{\gamma}}{\sqrt{N}}\,{\rm
Re}\left(([\tilde{\bm{\Delta}}]_{l}-[\bm{A}\tilde{\bm{X}}]_{l})e^{2\pi
i\phi}\right)\,,$ (17)
where $[{\bm{Y}}]_{l}$ is the $l$th component of a vector ${\bm{Y}}$.
Let us consider each of the terms in this expression separately. In (15), the
term
$\frac{1}{N}\sum_{j=0}^{N-1}\bm{\Delta}_{t_{j}}^{\top}\bm{\Delta}_{t_{j}}$ is
independent of the variables being optimized, and hence can be dropped from
the optimization. Using the definitions
$\displaystyle\Sigma_{0}=\frac{1}{N}\sum_{j=0}^{N-1}\bm{X}_{t_{j}}\bm{X}_{t_{j}}^{\top}\,,\quad\Sigma_{1}=\frac{1}{N}\sum_{j=0}^{N-1}\bm{X}_{t_{j}}\bm{\Delta}_{t_{j}}^{\top}\,,$
(18)
the relevant terms in (15) can be simply written as ${\rm
Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm Tr}(\bm{A}\Sigma_{1})$.
Regardless of the value of $\phi$, the term (16) is simply equal to
$\frac{1}{2}\gamma^{2}$. Indeed,
$\displaystyle\frac{{\gamma}^{2}}{N}\sum_{j=0}^{N-1}\left[{\rm
Re}\left(e^{2\pi i(\frac{k}{N}j+\phi)}\right)\right]^{2}$
$\displaystyle=\frac{{\gamma}^{2}}{N}\sum_{j=0}^{N-1}\left[\frac{e^{2\pi
i(\frac{k}{N}j+\phi)}+e^{-2\pi i(\frac{k}{N}j+\phi)}}{2}\right]^{2}$ (19)
$\displaystyle=\frac{{\gamma}^{2}}{4N}\sum_{j=0}^{N-1}\left[e^{4\pi
i(\frac{k}{N}j+\phi)}+e^{-4\pi i(\frac{k}{N}j+\phi)}+2\right]$ (20)
$\displaystyle=\frac{{\gamma}^{2}}{2}+e^{4\pi
i\phi}\frac{{\gamma}^{2}}{4N}\sum_{j=0}^{N-1}e^{4\pi i\frac{k}{N}j}+e^{-4\pi
i\phi}\frac{{\gamma}^{2}}{4N}\sum_{j=0}^{N-1}e^{-4\pi i\frac{k}{N}j}$ (21)
$\displaystyle=\frac{{\gamma}^{2}}{2},$ (22)
where in the last line we used the fact that $k$ is an integer with
$0<k<N/2$, so that both geometric sums vanish.
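This identity is easy to verify numerically; the following snippet (an illustration with arbitrary values of $N$, $k$, and $\phi$) checks that the time average of the squared forcing equals $1/2$ for integer $0<k<N/2$:

```python
import numpy as np

N, phi = 512, 0.37
j = np.arange(N)
for k in (1, 7, 100, 255):  # any integer 0 < k < N/2
    avg = np.mean(np.cos(2 * np.pi * (k * j / N + phi)) ** 2)
    assert abs(avg - 0.5) < 1e-12
print("mean of cos^2 over full periods = 1/2")
```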
Finally, notice that the last term (17) is the only one that depends on the
phase $\phi$, and the minimization over the phase $\phi$ can be performed
independently of other variables. The minimum of (17) over $\phi$ is given by
$-\frac{2{\gamma}}{\sqrt{N}}\left|[\tilde{\bm{\Delta}}]_{l}-[\bm{A}\tilde{\bm{X}}]_{l}\right|$,
where $|\cdot|$ is the modulus of a complex number. Further, we can write
$\displaystyle-\frac{2{\gamma}}{\sqrt{N}}\left|[\tilde{\bm{\Delta}}]_{l}-[\bm{A}\tilde{\bm{X}}]_{l}\right|$
$\displaystyle=-\frac{2{\gamma}}{\sqrt{N}}\sqrt{\left([\tilde{\bm{\Delta}}]_{l}-[\bm{A}\tilde{\bm{X}}]_{l}\right)\left([\tilde{\bm{\Delta}}]_{l}-[\bm{A}\tilde{\bm{X}}]_{l}\right)^{\dagger}}$
(23) $\displaystyle=-\frac{2\gamma}{\sqrt{N}}\sqrt{{\rm
Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)},$
(24)
where $\bm{M}_{l,\cdot}$ denotes the $l$th row of a matrix $\bm{M}$, and we defined
$\displaystyle F(k)={\rm Re}\left(\tilde{\bm{X}}(k)\tilde{\bm{X}}(k)^{\dagger}\right)\,,\quad f_{l}(k)={\rm Re}\left([\tilde{\bm{\Delta}}(k)]_{l}\tilde{\bm{X}}(k)^{\dagger}\right)\,,\quad g_{l}(k)=\left|[\tilde{\bm{\Delta}}(k)]_{l}\right|^{2}\,.$ (25)
Therefore, using all the transformations above and defining a new objective
function where minimization of
$L\left(\bm{A},\gamma,l,k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$ over
$\phi$ has been performed,
$L\left(\bm{A},\gamma,l,k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)=\min_{\phi}L\left(\bm{A},\gamma,l,k,\phi~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right),$
(26)
we finally obtain the expression for the SALO objective function given in the
_Methods_ section of the main text:
$\displaystyle L_{\rm SALO}$
$\displaystyle\left(\bm{A},\gamma,l,k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=0}^{N-1}\right)$
$\displaystyle={\rm Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm
Tr}(\bm{A}\Sigma_{1})+\frac{1}{2}\gamma^{2}-\frac{2\gamma}{\sqrt{N}}\sqrt{{\rm
Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)}\,.$
(27)
In order to obtain the SALO-relaxed version of the algorithm, we notice that
the objective (27) can be rewritten in an equivalent form, where the candidate
source node $l$ and the amplitude $\gamma$ are encoded as the $l$th component
of the one-hot vector $\bm{\gamma}$ that spans all nodes in the system:
$\displaystyle L_{\rm SALO}$
$\displaystyle\left(\bm{A},\bm{\gamma},k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=0}^{N-1}\right)$
$\displaystyle={\rm Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm Tr}(\bm{A}\Sigma_{1})+\sum_{l=1}^{n}\left[\frac{\gamma_{l}^{2}}{2}-\frac{2\gamma_{l}}{\sqrt{N}}\sqrt{{\rm Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)}\right]\,,$
(28) $\displaystyle\text{s.t.}\quad\|\bm{\gamma}\|_{0}=1.$
A spatially relaxed version of the algorithm over the variable $\gamma$ is
obtained by dropping the one-hot-encoding constraint $\|\bm{\gamma}\|_{0}=1$,
a version that we refer to as the SALO-relaxed algorithm:
$\displaystyle L_{\rm SALOr}$
$\displaystyle\left(\bm{A},\bm{\gamma},k~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)$
$\displaystyle={\rm Tr}(\bm{A}^{\top}\bm{A}\Sigma_{0})-2{\rm Tr}(\bm{A}\Sigma_{1})+\sum_{l=1}^{n}\left[\frac{\gamma_{l}^{2}}{2}-\frac{2\gamma_{l}}{\sqrt{N}}\sqrt{{\rm Tr}\left(\bm{A}_{l,\cdot}^{\top}\bm{A}_{l,\cdot}F(k)\right)-2f_{l}(k)\bm{A}_{l,\cdot}+g_{l}(k)}\right]\,.$
(29)
In this version of the algorithm, the source location is determined from the
indices of the largest components of the vector $\bm{\gamma}$.
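For concreteness, the relaxed objective (29) and the source read-out can be sketched in a few lines of NumPy. The function names and the way the frequency-dependent statistics $F$, $f_{l}$, $g_{l}$ are passed in are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def salo_relaxed_objective(A, gamma, Sigma0, Sigma1, F, f, g, N):
    """Evaluate a sketch of the SALO-relaxed objective (Eq. 29) at a fixed
    discrete frequency index k.

    A      : (n, n) candidate dynamics matrix
    gamma  : (n,)   forcing-amplitude vector (one entry per node)
    Sigma0 : (n, n) empirical covariance entering Tr(A^T A Sigma0)
    Sigma1 : (n, n) empirical cross-covariance entering Tr(A Sigma1)
    F      : (n, n) Re(X X^dagger) at frequency k
    f      : (n, n) rows f_l stacked so that f[l] pairs with the row A[l]
    g      : (n,)   |Delta_l|^2 per node
    """
    quad = np.trace(A.T @ A @ Sigma0) - 2.0 * np.trace(A @ Sigma1)
    per_node = 0.0
    for l in range(A.shape[0]):
        a = A[l]                                   # l-th row A_{l,.}
        inner = a @ F @ a - 2.0 * f[l] @ a + g[l]  # argument of the sqrt
        # inner is |.|^2 analytically; clamp at 0 for numerical safety
        per_node += gamma[l] ** 2 / 2.0 \
            - 2.0 * gamma[l] / np.sqrt(N) * np.sqrt(max(inner, 0.0))
    return quad + per_node

def locate_source(gamma):
    """The source is read off from the largest component of gamma."""
    return int(np.argmax(np.abs(gamma)))
```

In the relaxed version, `gamma` is optimized as a free vector and `locate_source` replaces the scan over candidate nodes $l$.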
## Appendix D Challenges for the optimization of the continuous likelihood
function
The length and the sampling rate of the available time series impose a natural limit on the frequency resolution, which in turn limits the precision with which the forcing frequency $f$ can be identified. This idea lies at the heart of the discretization in the frequency domain explained in the _Methods_ section of the main text. Both the SALO and SALO-relaxed methods are then evaluated for each discrete value of the possible frequencies $k$. To provide additional insight into our approach, in this section we illustrate why a direct optimization over the continuous frequency $f$ results in a hard-to-solve non-convex optimization problem. Consider a direct optimization over a continuous variable $f$ of the negative log-likelihood
$\displaystyle\widetilde{L}\left(\bm{A},\gamma,l,f,\phi~{}|~{}\\{\bm{X}_{t_{j}}\\}_{j=1}^{N}\right)=\frac{1}{N}\sum_{j=0}^{N-1}\left\|\bm{\Delta}_{t_{j}}-\bm{A}\bm{X}_{t_{j}}-\gamma\bm{e}_{l}\cos\left(2\pi(ft_{j}+\phi)\right)\right\|^{2}$ (30)
from $N$ measurements $\\{\bm{X}_{j\tau}\\}_{j=0,\ldots,N-1}$ taken at times
$t_{j}=j\tau$ over the observation window of length $T$, so that the interval
$\tau=T/N$. For simplicity, let us fix variables $\bm{A}$, $\gamma$, $l$, and
$\phi$ to their ground-truth values. In this case, we can plot the objective function (30) for a test problem as a one-dimensional function of $f$; see Figure 5. We see that with an increasing observation window $T$, the objective function oscillates around a certain constant value for most values of $f$, with a sharp and narrow peak at the correct frequency value.
It is possible to understand this behavior using the toy example presented in Figure 6. When the parameters $\bm{A}$, $\gamma$, $l$, and $\phi$ are fixed to their ground-truth values, the optimization problem in (30) becomes essentially equivalent to fitting a noisy periodic function $\cos(\nu t)+\xi_{t}$, where $\xi_{t}$ is white Gaussian noise, with a function $\cos(ft)$. For any value of $f$ not equal to $\nu$ and a sufficiently long time series, even a small deviation of $f$ from $\nu$ eventually drives the fit to the point where the two curves appear in counterphase. This effect is responsible for the form of the objective function shown in Figure 5. Landscapes of this type, with a large number of local minima and a single narrow global minimum, are particularly challenging for optimization solvers. This challenge is alleviated in our approach by switching to the discrete frequencies $k$ and turning the minimization into a mixed-integer optimization problem.
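The one-dimensional slice of this landscape is easy to reproduce numerically. The following sketch uses our own toy parameters, not the test problem of Figure 5: it fits $\cos(2\pi f t)$ to a noisy cosine of frequency $\nu$ and shows that the objective is small only in a narrow window around $f=\nu$.

```python
import numpy as np

rng = np.random.default_rng(0)
nu, tau, N = 0.95, 0.1, 5000          # true frequency, sampling step, samples
t = np.arange(N) * tau                 # observation window T = N * tau
y = np.cos(2 * np.pi * nu * t) + 0.1 * rng.standard_normal(N)

def objective(f):
    # One-dimensional slice of Eq. (30): all other parameters at ground truth
    return np.mean((y - np.cos(2 * np.pi * f * t)) ** 2)

# Near the true frequency the objective stays at the noise floor; even a
# slightly mismatched f drifts out of phase over a long window, and the
# residual jumps toward the oscillating plateau.
print(objective(nu), objective(nu + 0.01), objective(nu + 0.5))
```

Increasing `N` (i.e., the window $T$) makes the dip around $\nu$ narrower, which is exactly the behavior seen in Figure 5.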
Figure 5: Objective function (30) for a small test problem plotted as a one-
dimensional function of the forcing frequency $f$ for different observation
windows $T$. For simplicity, all other parameters of the model ($\bm{A}$,
$\gamma$, $l$, and $\phi$) are set to their ground-truth values used to
generate the data. With increasing $T$, the objective function becomes more
and more challenging to optimize, exhibiting an oscillatory behavior with a
narrow peak at the correct frequency value. Figure 6: Example of the divergence between a time series with the frequency $\nu=0.95$ $[{\rm s}^{-1}]$ and a sinusoidal fit with a mismatched frequency $f=0.90$ $[{\rm s}^{-1}]$.
## Appendix E Performance of the SALO-relaxed algorithm
The SALO-relaxed algorithm has a lower computational cost than the SALO algorithm, since it does not need to be run separately for every candidate source in the network. In this section, we empirically benchmark the SALO-relaxed algorithm against the SALO method. For all the test cases used throughout the main text, we observed that the SALO-relaxed algorithm points to the same source as SALO. Here, we use additional test cases to illustrate that while the SALO-relaxed algorithm provides an improvement in computational time, it does not sacrifice the accuracy of the source localization.
In Figure 7, we compare the outcome of both methods on the network structure extracted from the standard IEEE 57-bus test case [IEEE57]. Similarly to the test case presented in Figure 2 of the main text, we modified the parameters of the network to produce challenging resonance conditions, in which the largest response seen in the Fourier spectrum appears on nodes that are far away from the source. Importantly, under these challenging conditions, the SALO-relaxed method agrees with the SALO algorithm and confidently points both to the correct source of forced oscillations and to the ground-truth forcing frequency.
In Figure 8, we run a scaling experiment to compare the computational speed of the SALO-relaxed and SALO methods, without using parallelized computation for the latter. For this purpose, we use random Watts-Strogatz networks of increasing sizes. For every network instance, the source and frequency of the forced oscillations are correctly identified by both algorithms; however, the SALO-relaxed algorithm enjoys a computational advantage over SALO that scales linearly with the system size.
We provide further results of testing the SALO-relaxed algorithm in situations where some of the modeling assumptions are violated in Section F below.
Figure 7: (a) Synthetic test case with the IEEE-57 test case topology, designed to reproduce the resonance conditions where forced oscillations interact with the system modes. The forcing at the orange node results in the highest response in the Fourier spectra at the opposite side of the network, as shown for the Fourier components of the (b) generalized state and of the (c) generalized momentum. (d) The SALO algorithm confidently identifies the correct forcing frequency and source without any knowledge of the system topology or parameters. The SALO-relaxed version of the method is equally capable of unambiguously identifying both (e) the frequency and (f) the source of forced oscillations under these challenging conditions. Figure 8: (a) A random instance of a Watts-Strogatz network with $20$ nodes and one source of forced oscillations (shown in orange). (b) Rescaled log-likelihood for the SALO algorithm on a random network of $20$ nodes, which correctly identifies the oscillation source and the frequency. Similarly, the SALO-relaxed method perfectly identifies both (c) the frequency and (d) the location of the source in this network. (e) Ratio of SALO and SALO-relaxed run-times without parallelization over candidate sources $l$ as a function of the system size $n$. Dashed line shows a linear fit of the data points.
## Appendix F Additional results on the real PMU data
In Figure 3 of the main text, we showed the results produced by the SALO method in the case of multiple sources injecting oscillations at different frequencies, potentially with sources lying outside of the observed system. In this section, we apply the SALO method to real data, which show a combination of the features observed in Figure 3 of the main text. We use data collected during an oscillatory event in the U.S. Eastern Interconnection from FNET/GridEye, a GPS-synchronized wide-area frequency measurement network [UTKdata]. We analyzed the subset of the time series without missing data. The ground-truth location of the sources of forced oscillations is not known in this data set. The network topology and parameters are also unknown, which calls for the application of the SALO method. The known locations of a subset of measurement devices are depicted in Figure 9 (a). The analysis of the time series with our SALO method points to the most likely sources of forced oscillation, highlighted in orange, blue, and purple in the rescaled log-likelihood scores in Figure 9 (b). A precise location of the node corresponding to the likelihood score highlighted in orange is not available; the other likely sources are shown in blue and purple in Figure 9 (a).
Figure 9: (a) Known locations of FNET/GridEye measurement devices that
recorded the data under an oscillatory event in the U.S. Eastern
Interconnection area [UTKdata], along with the location of the most likely
sources identified by SALO. (b) Rescaled log-likelihood scores peaking at two
different frequencies for the most likely sources of oscillations. The
location of the orange node is not available.
The features seen in the log-likelihood scores are reminiscent of those observed in the synthetic study in Figure 3 of the main text. In particular, both the orange and the blue nodes indicate the presence of oscillations at the exact same frequency $f=4.54$ $[{\rm s}^{-1}]$. This feature is similar to the response of the algorithm observed when the source is located outside of the observed system, see Figure 3 (a) in the main text. Although the geographical location of the orange node is not available, the blue node is indeed located at the edge of the observed area, which makes this scenario plausible. The analysis with SALO also points to the presence of a second source of oscillations at a different frequency, $f=1.35$ $[{\rm s}^{-1}]$, similarly to the algorithm response observed in the synthetic case with two oscillation sources shown in Figure 3 (b) of the main text. The respective source node found by SALO is located inside the bulk of the system, as shown in purple in Figure 9 (a). In Figure 10, we use a synthetic network example to verify that the combination of these features – two sources of oscillations, inside and outside the observed system – indeed leads to a picture corresponding to a combination of the features observed in Figures 3 (a) and 3 (b) of the main text. Interestingly, the leading peaks in the rescaled log-likelihood in Figure 10 show a strong resemblance to the results produced by the SALO algorithm in Figure 9 (b).
Figure 10: Rescaled log-likelihood score obtained by SALO for the scenario with one observed and one hidden source of forced oscillations. The algorithm
correctly identifies the location and forcing frequency for the visible
source, and points to the neighbors of the hidden source inside the visible
system for the detected oscillation at the other forcing frequency. As before,
the envelope of scores for non-highlighted nodes is shown in gray.
Finally, we test the performance of the SALO-relaxed algorithm under the scenarios of violated modeling assumptions studied in Figures 3 (a) and 3 (b) of the main text, as well as for the combination of hidden and multiple sources exemplified in Figure 10. The results of these tests are reported in Figure 11. In all three cases, the outcome produced by the SALO-relaxed algorithm is in complete agreement with the results shown by SALO.
Figure 11: Rescaled log-likelihood and amplitude vector inferred using the
SALO-relaxed version of the algorithm for three scenarios of model mismatch,
illustrated on a network with $20$ nodes. In the top panel, forced oscillation
is injected at the unobserved gray node (a). SALO-relaxed identifies the
correct forcing frequency (b) and points to the immediate neighbors (c) of the
gray node as the most likely sources. The center panel studies the situation
where two independent forcings are present at different nodes (e). For two
frequency peaks identified in the log-likelihood score (f), we show two
respective amplitude vectors, with the largest components pointing to the
correct sources. The bottom panel shows the combination of the previous two
scenarios, with two sources, one observed (purple) and one unobserved (gray)
(g). Both forcing frequencies (h) and the respective locations from the
largest components of the amplitude vector (i) are correctly identified. All
results are consistent with the outcome produced by SALO, see Figure 3 (a),(b)
in the main text, and Figure 10.
## Appendix G Use of prior information on the system parameters
So far in the paper, we have considered the most challenging case where no information on the system parameters is assumed. In this section, we study the performance of the SALO and SALO-relaxed methods in the scenario where the system topology and parameters, i.e., the elements of the matrix $\bm{A}$ in Eq. (8), are known. We compare the system-agnostic version of SALO with the system-informed one in Figures 12 and 13. For both versions of the algorithm, the inclusion of prior information enables the correct identification of the source and the frequency of the forcing with significantly less data. We found that with time series shorter than 250 time steps, the agnostic version was not able to correctly identify the forcing, whereas the system-informed version achieved a correct recovery with time series as short as 100 time steps. These experiments highlight the reduction in the amount of data required for successful localization of forced oscillations when prior information on the system is available.
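The data-efficiency gap between the two versions can be illustrated with a toy linear system: the system-agnostic version must first estimate all $n^{2}$ entries of $\bm{A}$ from the same short time series, while the system-informed version starts from the known matrix. The sketch below uses illustrative parameters of our own choosing, not the networks of Figures 12 and 13:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 20, 60                          # nodes, time steps (a short series)
# A stable toy dynamics matrix (illustrative, not the paper's model)
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)

X = np.zeros((N, n))
X[0] = rng.standard_normal(n)
for j in range(N - 1):
    X[j + 1] = A_true @ X[j] + 0.01 * rng.standard_normal(n)

# System-agnostic: n*n unknowns must be estimated from the short series
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
err_agnostic = np.linalg.norm(A_hat - A_true)

# System-informed: A is known, so no parameters need estimating before
# the source localization step
err_informed = 0.0
print(err_agnostic, err_informed)
```

The residual error of the estimated matrix shrinks only as the series grows, which is why the agnostic version needs substantially longer time series before the localization step becomes reliable.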
Figure 12: Rescaled log-likelihood score obtained by the SALO algorithm from time series data of three different lengths, illustrated on a network with $20$ nodes (a) with the forced oscillation injected at node 20 at a frequency of about $0.08$ [${\rm s}^{-1}$]. For the shorter time series with lengths of (b) 10 and (c) 23 [s], the system-informed version of the SALO algorithm is capable of correctly identifying the forcing, whereas the system-agnostic version requires more data since it simultaneously needs to estimate the parameter matrix $A$. Time series with a length of (d) 39 [s] are enough for both the system-agnostic and system-informed versions of the method to recover both the location and the frequency of the forced oscillation. Figure 13: Rescaled log-likelihood score and location amplitude vector obtained by the SALO-relaxed algorithm from time series data of three different lengths, illustrated on a network with $20$ nodes (a) with the forced oscillation injected at node 20 at a frequency of about $0.08$ [${\rm s}^{-1}$]. For the shorter time series with lengths of (b) 10 and (c) 23 [s], the system-informed version of the SALO-relaxed algorithm is capable of correctly identifying the forcing, whereas the system-agnostic version requires more data since it simultaneously needs to estimate the parameter matrix $A$. Time series with a length of (d) 39 [s] are enough for both the system-agnostic and system-informed versions of the method to recover both the location and the frequency of the forced oscillation.
Bram <EMAIL_ADDRESS>, Steven <EMAIL_ADDRESS>, Frank Van <EMAIL_ADDRESS>, Nick <EMAIL_ADDRESS>
Hasselt University - tUL, Flanders Make, Expertise Centre for Digital Media
# Analysis of Training Object Detection Models with Synthetic Data
###### Abstract
Recently, the use of synthetic training data has been on the rise as it offers
correctly labelled datasets at a lower cost. The downside of this technique is
that the so-called domain gap between the real target images and synthetic
training data leads to a decrease in performance. In this paper, we attempt to
provide a holistic overview of how to use synthetic data for object detection.
We analyse aspects of generating the data as well as techniques used to train
the models. We do so by devising a number of experiments, training models on
the Dataset of Industrial Metal Objects (DIMO) [De Roovere et al.(2022)De
Roovere, Moonen, Michiels, and wyffels]. This dataset contains both real and
synthetic images. The synthetic part has different subsets that are either
exact synthetic copies of the real data or are copies with certain aspects
randomised. This allows us to analyse what types of variation are good for
synthetic training data and which aspects should be modelled to closely match
the target data. Furthermore, we investigate what types of training techniques
are beneficial towards generalisation to real data, and how to use them.
Additionally, we analyse how real images can be leveraged when training on
synthetic images. All these experiments are validated on real data and
benchmarked to models trained on real data. The results offer a number of
interesting takeaways that can serve as basic guidelines for using synthetic
data for object detection. Code to reproduce results is available at
https://github.com/EDM-Research/DIMO_ObjectDetection.
## 1 Introduction
Deep learning and its applications have advanced tremendously over the last couple of years. However, these powerful machine learning models require large amounts of labelled training data. The more complex and the better these models get, the more training data they require. But good training data is not easy to come by. Manually creating photographs and labelling them is a slow and costly process. Additionally, humans are prone to introducing errors and bias into datasets [Tommasi et al.(2017)Tommasi, Patricia, Caputo, and Tuytelaars, Northcutt et al.(2021)Northcutt, Athalye, and Mueller], which hurts model performance. Furthermore, some forms of annotations are very difficult for a human to create, such as depth maps, segmentation maps or object poses.
Due to these problems with human-created datasets, synthetic training data has become more popular in recent years. With modern rendering technology, it is easy to render thousands of images fairly quickly and at a low cost when 3D models are provided. Since the 3D composition of the depicted scene is known, the accompanying labels for the machine learning task can easily be generated. Additionally, these labels are pixel-correct and the dataset contains less bias, since a computer is far better at randomising than a human. There is, however, a big disadvantage to using synthetic data. Although rendered images can look very realistic, there is still a difference in appearance between real and rendered images. This causes a model that is trained on synthetic images to perform worse on real images. This phenomenon is called the domain gap [Tobin et al.(2017)Tobin, Fong, Ray, Schneider, Zaremba, and Abbeel] and it hinders synthetic training data from being widely adopted.
Object Detection is one of the most prominent fields of computer vision. This
is due to the fact that it has many applications and is often the first step
in vision pipelines for more complex tasks. Some of these applications include
robot control [Bai et al.(2020)Bai, Li, Yang, Song, Li, and Zhang], product
inspection [Yang et al.(2020)Yang, Li, Wang, Dong, Wang, and Tang],
surveillance [Sreenu and Durai(2019)] and many more. If a company wants to apply deep learning to its specific tasks, it needs high-quality training data that is specific to it. Manually labelled data is often too expensive or sometimes even too difficult to come by, especially for smaller companies. These companies sometimes resort to using synthetic data to train their models, often with unsatisfactory results, due to the aforementioned problems with synthetic data. While synthetic data is cheaper than manually created data, it is not free: when rendering thousands of images, costs can still accumulate considerably.
In this paper we offer a number of insights on how to generate and how to use synthetic training data. The goal is to generate knowledge on how to create training data that offers good performance on real images whilst keeping the total cost of rendering as low as possible. When using this synthetic data, we use only basic deep learning mechanisms that are available in most toolkits. We deliberately steer away from more complex methods of domain adaptation and generalisation to make our findings as widely applicable as possible. To
provide these insights, we perform a number of experiments using the Dataset
of Industrial Metal Objects (DIMO) [De Roovere et al.(2022)De Roovere, Moonen,
Michiels, and wyffels]. This dataset contains a set of real images, exact
synthetically rendered copies of those real images and sets of synthetic
images with variations in different aspects. This unique dataset allows us to
study the exact impact of those variations towards the generalisation on real
images. However, data alone is only half the picture. Additionally, we study
the impact of a number of deep learning techniques towards the generalisation
on real test sets. The impact is measured by training a number of object
detection models on different datasets and configurations while measuring the
performance on a real test set.
## 2 Related Work
Models trained on synthetic data often suffer from a decrease in performance
on real data. This is due to the domain gap, a term introduced by Tobin _et
al_ [Tobin et al.(2017)Tobin, Fong, Ray, Schneider, Zaremba, and Abbeel]. They
argue that it is impossible to perfectly simulate all aspects of a camera and
that there will always be a difference between synthetic training data and
real test data. They solve this for the task of object localisation by using
domain randomisation. This technique randomises as many aspects of the
rendering as possible as opposed to trying to accurately simulate the data.
Tremblay _et al_ [Tremblay et al.(2018)Tremblay, Prakash, Acuna, Brophy,
Jampani, Anil, To, Cameracci, Boochoon, and Birchfield] applied this technique
for object detection. Their domain randomised car detection dataset leads to
great performance on the KITTI dataset [Geiger et al.(2012)Geiger, Lenz, and
Urtasun], even better than the Virtual KITTI dataset [Gaidon et
al.(2016)Gaidon, Wang, Cabon, and Vig] that was modelled to be similar. This
has shown that randomisation can be a substitute for realism.
A different approach to randomisation is attempting to make the datasets as
realistic as possible. Movshovitz-Attias _et al_ [Movshovitz-Attias et
al.(2016)Movshovitz-Attias, Kanade, and Sheikh] investigated how useful
photorealism is and what parameters are the most important, for the task of
viewpoint estimation. They show that a more complex rendering process is
beneficial and that adding synthetic images to a real dataset offers a boost
in performance. Additionally, they conclude that randomising lighting
parameters leads to better generalisation. Hodan _et al_ [Hodaň et
al.(2019)Hodaň, Vineet, Gal, Shalev, Hanzelka, Connell, Urbina, Sinha, and
Guenter] developed a method for generating object detection datasets using
physically based rendering (PBR). They show that models trained on PBR
datasets perform better than ones trained on datasets created by simpler
rendering techniques and that increasing the quality of the PBR leads to
better models. Additionally, they show that taking into account the context
(gist, geometric, semantic, and illumination contextual aspects) in which the
object will be placed improves the performance of the trained network.
There are other techniques to improve performance as well. Hinterstoisser _et
al_ [Hinterstoisser et al.(2019)Hinterstoisser, Lepetit, Wohlhart, and
Konolige] use transfer learning to improve generalisation of models trained on
synthetic data. They initialise a network with weights trained on real data
and freeze the layers of the feature detector during training. Using this
technique, they train a model on a simple synthetic dataset and manage to get
performance close to that of a model trained on real data. Nowruzi _et al_
[Nowruzi et al.(2019)Nowruzi, Kapoor, Kolhatkar, Hassanat, Laganière, and
Rebut] investigated the use of real images when training on synthetic data.
They show that adding a small amount of real images can be useful and that
fine-tuning is better than mixing the data.
In our research we use the DIMO dataset. This dataset contains a real dataset,
an exact synthetic copy and a number of different variations of the synthetic
copy. We can thus investigate what variations are beneficial for
generalisation but also research if it is useful to put effort in copying some
aspects of the target dataset. This allows us to offer some unique insights
that other papers have not yet investigated. Additionally, we attempt to offer
a holistic analysis, investigating a number of important concepts for
synthetic training data.
## 3 Experimental Setup
### 3.1 The Dataset
In our experiments we use five subsets of the DIMO dataset. The first subset contains real RGB images, captured with a JAI GO-5000 camera (subset denoted as real). Additionally, we use four synthetic datasets with two types of variations. The first is an exact digital twin dataset, where both
the poses of the objects as well as the light of the environment match with
the real images (synth). The second and third are datasets for which either
the poses (synth, rand pose) or the lighting conditions (synth, rand light)
are randomised, with the non-randomised component matching the real images.
The fourth and last dataset of DIMO that we use in our experiments varies both
the poses and the lighting conditions (synth, rand all). The poses of the
objects for the real images were manually set in representative and
interesting positions. These poses were then manually annotated and used in
the generation of the synthetic data. The randomisation of the poses in the synthetic data is done by spawning the objects at a uniform random location 30 cm above the carrier. Subsequently, a physics engine simulates the dropping of the objects onto the carrier until they settle into a stable state. This provides a variation in the distribution of poses between the synthetic and real datasets. The light variations were created by iterating over a list of 31 environment maps of indoor scenes. The real light was captured with an HDR 360 image of the environment where the real images were recorded. More
details on the creation of this dataset can be found in the original DIMO [De
Roovere et al.(2022)De Roovere, Moonen, Michiels, and wyffels] paper. Figure 1
shows an example scene from the DIMO dataset, and images from the four
different synthetic datasets.
Figure 1: Example of a scene in the DIMO dataset to illustrate the different
variations. From left to right: Real image, synthetic copy, randomised poses,
randomised lighting, both randomised.
For the experiments in this paper, we use the first 150 scenes of the DIMO
dataset for each of the subsets. Since there is more variation introduced in
some datasets, they do not have an equal amount of images. The real and its
synthetic twin have around 2k images each, the synthetic datasets with random
light or random poses contain around 29k images each and the fully random
synthetic dataset contains 78k images. We split each dataset into a training (90%), validation (5%) and test (5%) set. Since the datasets are subdivided into scenes, we ensure that all images from a specific scene belong to the same set. The models are thus tested on unseen scenes.
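A scene-aware split of this kind can be sketched as follows; the function and field names are illustrative, not taken from the DIMO tooling:

```python
import random

def split_by_scene(images, train=0.9, val=0.05, seed=42):
    """Split a dataset so that all images of a scene land in the same set.

    images: list of (scene_id, image_path) pairs; names are illustrative.
    Returns a dict with "train", "val" and "test" image lists.
    """
    scenes = sorted({s for s, _ in images})
    random.Random(seed).shuffle(scenes)          # shuffle scenes, not images
    n_train = int(train * len(scenes))
    n_val = int(val * len(scenes))
    train_s = set(scenes[:n_train])
    val_s = set(scenes[n_train:n_train + n_val])
    buckets = {"train": [], "val": [], "test": []}
    for s, img in images:
        key = "train" if s in train_s else ("val" if s in val_s else "test")
        buckets[key].append(img)
    return buckets
```

Shuffling at the scene level rather than the image level is what guarantees that the test set contains only unseen scenes.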
### 3.2 The Model
Although transformers have recently surpassed convolutional neural networks in object detection performance [Zhang et al.(2022)Zhang, Li, Liu, Zhang, Su, Zhu, Ni, and Shum], CNNs still remain the most popular type of neural network for vision. This is especially true outside of research. Additionally, a lot of research is still being done in the field of CNNs, pushing their performance closer to that of transformers [Liu et al.(2022)Liu, Mao, Wu, Feichtenhofer, Darrell, and Xie, Wang et al.(2022)Wang, Bochkovskiy, and Liao]. We therefore opt to focus our analysis on CNN-based feature detectors.
In our experiments we use the Mask R-CNN model [He et al.(2017)He, Gkioxari,
Dollár, and Girshick, Abdulla(2017)], a widely used object detection and
instance segmentation model. It is a two stage architecture consisting of a
convolutional feature detection network and detection heads. In this work
ResNet101 [He et al.(2016)He, Zhang, Ren, and Sun] is used as a feature
detector. When transfer learning is used throughout this paper, the feature
detector is initialised with weights trained on COCO [Lin et al.(2014)Lin,
Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick]. Unless mentioned
otherwise, the layers of the feature detector are frozen when using transfer
learning. In each experiment we train the model for 100 epochs with Stochastic
Gradient Descent using a learning rate of $0.001$ and a momentum of $0.9$. A
batch size of four is used and in each epoch $1,000$ images are used to train the
model. This is done to be able to consistently compare per-epoch model
performance between models trained on datasets with different amounts of
images. If data augmentation is used this is a combination of zero to two
color modifying augmentations and zero to one translating augmentations. The
color augmentations include: add, multiply, Gaussian blur, Gaussian noise,
motion blur and grayscale; the translating augmentations include: rotation,
translation, shear, scale, horizontal flip and perspective transform. More
details are provided in the supplementary material.
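The augmentation policy described above (zero to two color augmentations followed by zero to one translating augmentation) can be sketched with simple NumPy stand-ins for the operators. A real pipeline would use a library such as imgaug, and only a subset of the listed operators is mimicked here:

```python
import random
import numpy as np

# Simple stand-ins for a few of the operators named in the text; the blurs,
# noise and affine warps of the actual pipeline are omitted for brevity.
COLOR_AUGS = [
    lambda im: np.clip(im + random.uniform(-20, 20), 0, 255),          # add
    lambda im: np.clip(im * random.uniform(0.8, 1.2), 0, 255),         # multiply
    lambda im: np.repeat(im.mean(axis=2, keepdims=True), 3, axis=2),   # grayscale
]
GEO_AUGS = [
    lambda im: im[:, ::-1],          # horizontal flip
    lambda im: np.rot90(im, 1),      # stand-in for rotation
]

def augment(image):
    """Apply 0-2 random color augmentations and 0-1 geometric augmentation."""
    for aug in random.sample(COLOR_AUGS, k=random.randint(0, 2)):
        image = aug(image)
    for aug in random.sample(GEO_AUGS, k=random.randint(0, 1)):
        image = aug(image)
    return image
```

Sampling the number of operators per image (rather than applying all of them) keeps a fraction of unmodified images in every batch, which tends to stabilise training.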
## 4 Results
In this section we describe and execute a number of experiments using the
setup described in Section 3. To test the performance on the real target
domain for each of these experiments, we test the trained models on an unseen
set of real images. We compute the AP, AP50 and AP75 values as described for
the COCO challenge [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona,
Ramanan, Dollár, and Zitnick]. In this section we only report on the AP value
as the other metrics follow the same trends. For completeness, the other
metrics are provided in the supplementary material.
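These metrics rest on box IoU: a detection counts as a true positive when its IoU with a ground-truth box exceeds a threshold, and COCO-style AP averages precision over the IoU thresholds 0.50:0.05:0.95, while AP50 and AP75 fix the threshold at 0.50 and 0.75. A minimal sketch of the IoU computation (our own helper, not the pycocotools implementation):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# IoU thresholds used when averaging the COCO AP metric
THRESHOLDS = np.arange(0.50, 1.00, 0.05)
```

In practice we rely on the standard COCO evaluation code rather than this sketch.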
### 4.1 Scene Composition
In this first experiment we attempt to determine whether it is important to
create a synthetic dataset that closely matches the target data in terms of
scene compositions and what type of variations are beneficial for
generalisation. Should you use lighting conditions and object poses that are
plausible for the real test dataset, or can you just make these parameters
random? To find out, we trained a Mask R-CNN model from scratch on each of the five datasets described in Section 3.1. Since the datasets have different amounts of images, we also repeated the experiment with equalised dataset sizes. Each of the datasets was reduced to the size of the real dataset, which is 1775 images. We sampled random images from the training set. The
results are shown in Figure 2.
Figure 2: Results of the Mask R-CNN model trained on each of the five
datasets, with variable and equal dataset sizes, on the real test dataset.
Looking at the results of the variable-size experiment, the model trained on
the real images performs best on the real test set, as expected. The rendered
copy of that dataset, with the same poses and lighting conditions, performs
the worst and fails to generalise to the real images. The model trained on the
dataset with the same poses but with extra images under varying lighting
conditions generalises reasonably well, falling 13 AP points below the model
trained on real data. The model trained on the dataset with randomised poses
and real lighting conditions performs considerably worse, achieving only
23.77 AP. This shows that variation in lighting conditions is more beneficial
to generalisation on real data than variation in object poses. The model
trained on the fully randomised dataset performs best of all the synthetic
datasets; this dataset is, however, much larger than the others.
We therefore also compare the performance of the models trained on datasets of
equal size. Here, the model trained on the dataset with randomised light and
real poses performs best of the synthetic datasets, slightly better than the
fully randomised dataset. Both of these models still outperform the
non-randomised synthetic dataset. The model trained on the dataset with real
lighting and randomised poses performs the worst, with only 5.22 AP. This
leads us to conclude that there is a slight benefit to modelling object poses.
We confirm our suspicion that varying lighting conditions helps generalisation
on real data, whereas trying to model real lighting conditions hurts
performance. We argue this is because higher-level features such as shapes and
poses are easier to match between real and rendered images, so it makes sense
to accurately simulate these features in the synthetic dataset. Lower-level
features – such as color, lighting and texture – are more difficult to render
accurately in a synthetic dataset. We therefore believe it is better to
randomise these features, as this leads to better generalisation.
### 4.2 Training Techniques
In our previous experiment, we trained the model from scratch and did not
augment our data. This is, however, not a realistic scenario: transfer
learning and data augmentation have been shown to improve performance on real
data when training on synthetic data [Hinterstoisser et
al.(2019)Hinterstoisser, Lepetit, Wohlhart, and Konolige]. We therefore repeat
the previous experiment, now including transfer learning and data augmentation
to analyse their effect on generalisation. For this experiment, we are more
interested in the effect of the training techniques on the different datasets
than in comparing the datasets amongst each other. We therefore use all images
of each dataset, so the models are trained on datasets of different sizes. We
include one extra experiment in which each dataset is restricted to the size
of the real dataset, i.e. 1.7k images.
Figure 3: Results of training the model on each of the five datasets with data
augmentation and/or transfer learning. The horizontal lines indicate the
performance of the model trained without these techniques.
Figure 3 shows the results of these experiments. The horizontal lines
represent the performance of the models trained without any of these
techniques. For the models trained with only data augmentation, we see that
the model trained on real images gets a slight performance boost compared to
the model without (3.7 AP). The models trained on the two datasets with
randomised lighting experience only a small difference in accuracy and the
model trained on the dataset with real poses and random lighting even suffers
a small decrease in performance. The two models trained on the datasets with
the real lighting conditions experience a large boost in performance. When
training the models with transfer learning, we observe a large boost in
performance for all models. Interestingly, the worst performer of the previous
experiment – the synthetic copy dataset – now becomes the best-performing
synthetic dataset, despite having far fewer images than all the other datasets.
When using both techniques we see very similar performance to when only
transfer learning is used. Some models even suffer a slight decrease in
accuracy. Finally, when we train the model on equally sized datasets using
both training techniques, we notice only a very small decrease in performance.
This is notable, since the dataset sizes decrease from 26k and 70k to only
1.7k.
Figure 4: The evolution of the AP on the real test set for models trained
without any techniques, with data augmentation and transfer learning.
To further analyse the impact of data augmentation and transfer learning, we
compute the AP on the real test set for every two epochs. This allows us to
investigate the evolution of the training process. Figure 4 shows the training
process for the model trained without any techniques and for the models
trained with either data augmentation or transfer learning. Without any
techniques, learning flattens off very quickly, especially for the datasets
with no variation in lighting. With data augmentation, the model is able to
learn for longer and at a better rate; this has a much larger impact on the
datasets with no variation in lighting. With transfer learning, the models for
all of the datasets show good performance after one epoch, although learning
then flattens off very quickly.
From these results we conclude that data augmentation and especially transfer
learning help overcome the difference in low level features between real and
synthetic data. Data augmentation does so by introducing more variation in
these features, leading to a more robust model. Transfer learning achieves
this by initialising the model with weights that are already capable of
detecting low level features from real world images. Thus when using these
techniques, it is beneficial to accurately simulate the poses and lighting
conditions of the target domain in the synthetic dataset. When it is not
possible to model the target domain, one should try to maximise variation in
lighting conditions to achieve the best generalisation. Additionally, it is
not necessary to use a large number of images, even when using a randomised
dataset. To confirm this, we trained models on successively smaller subsets of
the full random synthetic dataset. The results, shown in Table 1, indicate
that adding more images only helps up to a point, with a peak at around 17.5k
images. The differences in AP are small, showing that only a few thousand
images already produce a decent model.
Image Count | AP
---|---
1755 | 69.42
4387 | 72.23
8775 | 72.58
17550 | 73.01
35100 | 72.01
70200 | 71.52
Table 1: Results of training the model on different amounts of images sampled
from the fully random dataset.
#### 4.2.1 Transfer Learning
So far we have shown that transfer learning is very helpful when training on
synthetic data. There are, however, multiple ways to apply it. In our
previous experiments we initialised the feature detector with weights trained
on COCO, froze those layers and only trained the network heads. This forces
the network to make predictions based on features learned from the COCO
dataset, possibly leading to a decrease in performance. It is also possible to
retrain parts of the feature detector, allowing the network to learn new
features from the dataset. To investigate which layers we can retrain without
losing the benefits of transfer learning, we train a number of models with
different parts of the feature detector frozen. We train models that are
retrained from the 3rd, 4th and 5th ResNet stages onwards, and we run one
experiment in which all layers are retrained. For this experiment, we only use the fully random
synthetic dataset. Networks are again initialised with a model pre-trained on
COCO and data augmentation is used.
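The retraining variants can be expressed as a freezing rule over parameter names. A minimal sketch, where the "stageN" substrings are hypothetical stand-ins for the actual ResNet stage identifiers used by the implementation:

```python
def trainable_mask(param_names, retrain_from):
    """Mark which parameters to retrain when unfreezing from a given ResNet
    stage onwards. retrain_from=None retrains only the heads; retrain_from=1
    retrains everything. Parameter names like 'backbone.stage4.conv1.weight'
    are a hypothetical naming scheme used for illustration."""
    def stage_of(name):
        for s in range(1, 6):
            if f"stage{s}" in name:
                return s
        return None  # not part of the backbone -> belongs to the heads

    mask = {}
    for name in param_names:
        s = stage_of(name)
        if s is None:
            mask[name] = True  # detection heads are always retrained
        else:
            mask[name] = retrain_from is not None and s >= retrain_from
    return mask
```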
Layers Retrained | AP
---|---
All | 81.26
Stage 3+ | 76.71
Stage 4+ | 80.77
Stage 5+ | 77.13
Heads | 71.52
Table 2: Performance in AP on the real test set of models trained with
transfer learning, but with different layers retrained.
The results for this experiment are shown in Table 2. The best performing
model is the model where all layers were retrained. The model where only the
detection heads were retrained has the worst performance, falling at least
five AP points below the other models. This shows us that while it is useful
to initialise a network with transfer learning, it is important to let the
network learn new features from the synthetic dataset as well.
### 4.3 Leveraging Real Images
So far we have only considered using strictly synthetic images. Sometimes,
however, labelled real images are available as well. In the following
experiments we examine whether it is useful to use real images alongside
synthetic images, and how best to use them. This can be done in many ways; the
two most straightforward and widely used are mixing the real images with the
synthetic training images during training, and fine-tuning a network trained
on synthetic images with real images afterwards. We test both methods.
Furthermore, we analyse how many real images to use: we perform tests with
different ratios of real to synthetic images, keeping the total number of
images constant at 3510. For these experiments we use the real and fully
randomised datasets and train with transfer learning and data augmentation.
For the initial training stage we only retrain the heads of the network. When
fine-tuning, we retrain the entire network with a lower learning rate of
$0.0001$.
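For illustration, a minimal sketch of how the real and synthetic image counts follow from a ratio at a fixed total; the helper name is our own:

```python
def split_counts(total, real_ratio):
    """Number of real and synthetic images for a real:synthetic ratio given
    as (real_parts, synthetic_parts), keeping the total number of images
    fixed (integer division, remainder assigned to the synthetic side)."""
    real_parts, synth_parts = real_ratio
    real = total * real_parts // (real_parts + synth_parts)
    return real, total - real
```

At a one-to-five ratio this yields 585 real and 2925 synthetic images out of 3510; at one-to-one, 1755 of each.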
Figure 5: Performance in AP on the real test set of models trained with both
synthetic and real data. The real data is either mixed in with the synthetic
or used for fine tuning. The synthetic and real data is used in different
ratios. The horizontal lines represent the performance of models trained on
purely synthetic and real data, taken from previous experiments.
In Figure 5 the results of the different ratios of synthetic to real data are
shown for the different techniques. The horizontal lines represent the
performance of the models trained on the real and fully random datasets. When
mixing real images with the synthetic dataset we see that adding only a small
amount already gives a performance boost. Adding a larger ratio of real images
improves the performance even more, until the five to one ratio. The best
performing model in this strategy achieves an AP of 80.13, which is an
improvement over the model trained on real data. Fine-tuning on real data
yields a large benefit: even with only a small number of real images,
performance increases to 82.05 AP, which is 10 AP points above the model
trained on the large random synthetic dataset and almost five AP points above
the model trained on real data. Increasing the ratio of real images increases
fine-tuning performance slightly further, reaching 83.7 AP for the one-to-one
ratio.
From these results we conclude that combining real and synthetic images can
increase performance compared to training on only one of the two.
Additionally, we find that fine-tuning is the best way to use real images and
that even a small number of real images can make a significant difference.
## 5 Conclusion
Throughout this paper we defined a number of experiments, training object
detection models using different techniques on the DIMO dataset. We evaluated
all these models, trained on synthetic data, on real data from the same
problem domain. The goal was to acquire useful guidelines on how to generate
data for deep learning and how to properly use this data.
Our experiments offer unique insights into how different variations of
synthetic datasets perform on real data under different training techniques. We show
that modelling the lighting conditions and poses of a synthetic dataset to
match the real target domain is beneficial towards generalisation, but only if
transfer learning is used. When using transfer learning, we show that it is
not beneficial to freeze the layers of the feature detector. It is better to
retrain the entire network. This is contrary to some current research
[Hinterstoisser et al.(2019)Hinterstoisser, Lepetit, Wohlhart, and Konolige],
so we argue this should be considered on a per-problem basis. Additionally, we
investigated how to leverage real images. In our experiments we find that
adding a small number of real images is beneficial and that fine-tuning is the
best method to do so. This is in line with the current state of the art
[Nowruzi et al.(2019)Nowruzi, Kapoor, Kolhatkar, Hassanat, Laganière, and
Rebut].
The Dataset of Industrial Metal Objects is a fairly simple dataset in terms of
scene composition, as it contains no unknown objects and covers only a limited
range of camera positions. Yet, it is the only dataset that includes the kind
of controlled variations needed for these experiments and the scenes depicted
in the dataset are highly relevant for industrial applications. We therefore
believe that the recommendations made in this paper could serve as guidelines
to generate data and train models for new problem domains. Future research
should try to extrapolate these findings to different and more complex domains
by generating other datasets with these controlled variations.
## Acknowledgements
This study was supported by the Special Research Fund (BOF) of Hasselt
University. The mandate ID is BOF20OWB24. Research was done in alignment with
Flanders Make’s PILS SBO project (R-9874).
## References
* [Abdulla(2017)] Waleed Abdulla. Mask r-cnn for object detection and instance segmentation on keras and tensorflow. https://github.com/matterport/Mask_RCNN, 2017.
* [Bai et al.(2020)Bai, Li, Yang, Song, Li, and Zhang] Qiang Bai, Shaobo Li, Jing Yang, Qisong Song, Zhiang Li, and Xingxing Zhang. Object detection recognition and robot grasping based on machine learning: A survey. _IEEE Access_ , 8:181855–181879, 2020. 10.1109/ACCESS.2020.3028740.
* [De Roovere et al.(2022)De Roovere, Moonen, Michiels, and wyffels] Peter De Roovere, Steven Moonen, Nick Michiels, and Francis wyffels. Dataset of industrial metal objects, 2022. URL https://arxiv.org/abs/2208.04052.
* [Gaidon et al.(2016)Gaidon, Wang, Cabon, and Vig] Adrien Gaidon, Qiao Wang, Yohann Cabon, and Eleonora Vig. Virtualworlds as proxy for multi-object tracking analysis. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 4340–4349, 2016. 10.1109/CVPR.2016.470.
* [Geiger et al.(2012)Geiger, Lenz, and Urtasun] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In _Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2012.
* [He et al.(2016)He, Zhang, Ren, and Sun] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 770–778, Los Alamitos, CA, USA, jun 2016. IEEE Computer Society. 10.1109/CVPR.2016.90. URL https://doi.ieeecomputersociety.org/10.1109/CVPR.2016.90.
* [He et al.(2017)He, Gkioxari, Dollár, and Girshick] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In _2017 IEEE International Conference on Computer Vision (ICCV)_ , pages 2980–2988, 2017. 10.1109/ICCV.2017.322.
* [Hinterstoisser et al.(2019)Hinterstoisser, Lepetit, Wohlhart, and Konolige] Stefan Hinterstoisser, Vincent Lepetit, Paul Wohlhart, and Kurt Konolige. On pre-trained image features and synthetic images for deep learning. In Laura Leal-Taixé and Stefan Roth, editors, _Computer Vision – ECCV 2018 Workshops_ , pages 682–697, Cham, 2019. Springer International Publishing. ISBN 978-3-030-11009-3.
* [Hodaň et al.(2019)Hodaň, Vineet, Gal, Shalev, Hanzelka, Connell, Urbina, Sinha, and Guenter] Tomáš Hodaň, Vibhav Vineet, Ran Gal, Emanuel Shalev, Jon Hanzelka, Treb Connell, Pedro Urbina, Sudipta N. Sinha, and Brian Guenter. Photorealistic image synthesis for object instance detection. In _2019 IEEE International Conference on Image Processing (ICIP)_ , pages 66–70, 2019. 10.1109/ICIP.2019.8803821.
* [Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan, Dollár, and Zitnick] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, _Computer Vision – ECCV 2014_ , pages 740–755, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10602-1.
* [Liu et al.(2022)Liu, Mao, Wu, Feichtenhofer, Darrell, and Xie] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s, 2022. URL https://arxiv.org/abs/2201.03545.
* [Movshovitz-Attias et al.(2016)Movshovitz-Attias, Kanade, and Sheikh] Yair Movshovitz-Attias, Takeo Kanade, and Yaser Sheikh. How useful is photo-realistic rendering for visual learning? In Gang Hua and Hervé Jégou, editors, _Computer Vision – ECCV 2016 Workshops_ , pages 202–217, Cham, 2016. Springer International Publishing. ISBN 978-3-319-49409-8.
* [Northcutt et al.(2021)Northcutt, Athalye, and Mueller] Curtis G Northcutt, Anish Athalye, and Jonas Mueller. Pervasive label errors in test sets destabilize machine learning benchmarks. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)_ , 2021. URL https://openreview.net/forum?id=XccDXrDNLek.
* [Nowruzi et al.(2019)Nowruzi, Kapoor, Kolhatkar, Hassanat, Laganière, and Rebut] Farzan Erlik Nowruzi, Prince Kapoor, Dhanvin Kolhatkar, Fahed Al Hassanat, Robert Laganière, and Julien Rebut. How much real data do we actually need: Analyzing object detection performance using synthetic and real data. _ArXiv_ , abs/1907.07061, 2019.
* [Sreenu and Durai(2019)] G. Sreenu and M A Durai. Intelligent video surveillance: a review through deep learning techniques for crowd analysis. _Journal of Big Data_ , 6:48, 06 2019. 10.1186/s40537-019-0212-5.
* [Tobin et al.(2017)Tobin, Fong, Ray, Schneider, Zaremba, and Abbeel] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 23–30, 2017. 10.1109/IROS.2017.8202133.
* [Tommasi et al.(2017)Tommasi, Patricia, Caputo, and Tuytelaars] Tatiana Tommasi, Novi Patricia, Barbara Caputo, and Tinne Tuytelaars. A deeper look at dataset bias. In _Domain adaptation in computer vision applications_ , pages 37–55. Springer, 2017.
* [Tremblay et al.(2018)Tremblay, Prakash, Acuna, Brophy, Jampani, Anil, To, Cameracci, Boochoon, and Birchfield] Jonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Cem Anil, Thang To, Eric Cameracci, Shaad Boochoon, and Stan Birchfield. Training deep networks with synthetic data: Bridging the reality gap by domain randomization. In _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_ , pages 969–977, 2018.
* [Wang et al.(2022)Wang, Bochkovskiy, and Liao] Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors, 2022. URL https://arxiv.org/abs/2207.02696.
* [Yang et al.(2020)Yang, Li, Wang, Dong, Wang, and Tang] Jing Yang, Shaobo Li, Zheng Wang, Hao Dong, Jun Wang, and Shihao Tang. Using deep learning to detect defects in manufacturing: A comprehensive survey and current challenges. _Materials_ , 13(24), 2020. ISSN 1996-1944. 10.3390/ma13245755. URL https://www.mdpi.com/1996-1944/13/24/5755.
* [Zhang et al.(2022)Zhang, Li, Liu, Zhang, Su, Zhu, Ni, and Shum] Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel M. Ni, and Heung-Yeung Shum. Dino: Detr with improved denoising anchor boxes for end-to-end object detection, 2022. URL https://arxiv.org/abs/2203.03605.
# ACE: Cooperative Multi-agent Q-learning with Bidirectional Action-Dependency
Chuming Li1, 2, Jie Liu2, Yinmin Zhang1, 2, Yuhong Wei3, Yazhe Niu2, 3,
Yaodong Yang4, Yu Liu2, 3, Wanli Ouyang1, 2
Equal contribution. Corresponding author.
###### Abstract
Multi-agent reinforcement learning (MARL) suffers from the non-stationarity
problem: the learning targets change at every iteration as multiple agents
update their policies simultaneously. Starting from first principles, in this
paper we solve the non-stationarity problem by proposing bidirectional
action-dependent Q-learning (ACE). Central to the development of
ACE is the sequential decision making process wherein only one agent is
allowed to take action at one time. Within this process, each agent maximizes
its value function given the actions taken by the preceding agents at the
inference stage. In the learning phase, each agent minimizes the TD error that
is dependent on how the subsequent agents have reacted to their chosen action.
Given the design of bidirectional dependency, ACE effectively turns a multi-
agent MDP into a single-agent MDP. We implement the ACE framework by
identifying the proper network representation to formulate the action
dependency, so that the sequential decision process is computed implicitly in
one forward pass. To validate ACE, we compare it with strong baselines on two
MARL benchmarks. Empirical experiments demonstrate that ACE outperforms the
state-of-the-art algorithms on Google Research Football and StarCraft Multi-
Agent Challenge by a large margin. In particular, on SMAC tasks, ACE achieves
100% success rate on almost all the hard and super hard maps. We further study
extensive research problems regarding ACE, including extension, generalization
and practicability. Code is made available to facilitate further research.
## Introduction
Cooperative multi-agent reinforcement learning (MARL) aims to learn a good
policy that controls multiple agents and maximizes the cumulative return in a
given task. It has great potential in various real-world tasks, such as robot
swarm control 2017, autonomous driving 2020; 2016 and multi-player games 2019;
2021. A major challenge of MARL is the complex joint action space. In multi-
agent tasks, the joint action space increases exponentially with the number of
agents. Hence, for the sake of scalability, existing MARL algorithms usually
learn an individual policy to select the action for every single agent. In
MARL algorithms, the reward signal is affected by other agents’ behavior. As a
result, the environment of a multi-agent task is non-stationary 2019; 2021
from the perspective of every single agent, since the policies of the other
agents keep changing during the learning process. This non-stationarity breaks
the Markov assumption of single-agent RL algorithms and causes endless mutual
adaptation as agents react to each other’s policy changes. In value-based
methods, the non-stationarity manifests as individual action values that
cannot be estimated accurately.
To solve the non-stationary problem, we introduce bidirectional action-
dependency to estimate the action value of every single agent accurately. We
cast the multi-agent decision-making process as a sequential one, in which
only one agent makes a decision at a time. The bidirectional
action-dependency is embodied in two aspects of this sequential process. In
the forward direction, the evaluation of an agent’s action value depends on
the preceding agents’ actions in the decision-making sequence; in the backward
direction, the target used to update an agent’s action value depends on how
the subsequent agents react to the preceding actions. We formulate this
bidirectional dependence by transforming a multi-agent Markov Decision Process
(MMDP) 1994 into a single-agent Markov Decision Process (MDP), called
sequentially expanded MDP (SE-MDP). In SE-MDP, a decision $\boldsymbol{a}^{t}$
based on a state $s^{t}$ is expanded to multiple intermediate states
$\left[s^{t}_{a_{1}},...,s^{t}_{a_{1:n}}\right]$, named SE-state. The SE-state
$s^{t}_{a_{1:i}}$ is defined as the state $s^{t}$ in the original MMDP along
with the decisions $a_{1:i}$ made by the preceding agents. Only one agent
makes a decision at each SE-state. After each agent makes the decision, the
state transits to the next one, which includes the new decision. This
transformation validates that the proposed bidirectional action-dependency
does circumvent the non-stationary problem.
Figure 1: Comparison between the original MMDP (above) and the transformed SE-
MDP (below). A single transition in MMDP is expanded to $n$ sequentially
expanded states in SE-MDP.
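As Figure 1 suggests, a single MMDP transition expands into a sequence of intermediate transitions. A minimal sketch of this expansion, representing SE-states as (state, actions-so-far) tuples (a representation we assume for illustration); intermediate steps carry reward zero and the original reward is attached to the final step:

```python
def expand_transition(s, actions, reward, s_next):
    """Expand one MMDP transition (s, a, r, s') into n intermediate SE-MDP
    transitions: (s, a1, 0, s_{a1}), ..., (s_{a1:n-1}, an, r, s^{t+1})."""
    transitions = []
    prev = (s, ())  # SE-state before any agent has acted
    n = len(actions)
    for i, a in enumerate(actions):
        last = (i == n - 1)
        nxt = (s_next, ()) if last else (s, tuple(actions[: i + 1]))
        transitions.append((prev, a, reward if last else 0, nxt))
        prev = nxt
    return transitions
```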
With the introduced bidirectional dependency, we propose a simple but powerful
method, called bidirectional ACtion-dEpendent deep Q-learning (ACE). ACE is
compatible with the abundant Q-learning methods for single-agent tasks, and
naturally inherits their theoretical guarantee of convergence and performance.
For practical implementation, we identify an efficient and effective network
representation of the SE-state. We first generate the embeddings for all units
in the task as well as the embeddings for their available actions. Then, we
combine the embedding of each unit with the corresponding action embedding to
construct the embedding for every SE-state. This design is quite efficient,
because the embeddings of all SE-states along a sequential decision-making
process are constructed with additive combination among the same set of unit
and action embeddings. This set is computed only once before every sequential
decision-making process, and the additive combination brings in negligible
cost. Moreover, an interaction-aware action embedding is developed to describe
the interaction among units in the multi-agent task, which further improves
the performance of ACE.
We evaluate the performance of ACE on both a toy case and complex cooperative
tasks. In the toy case, ACE demonstrates its advantage over popular
value-factorization methods in converging to the optimal policy, because it
bridges the gap between the optimal actions of the joint and individual
Q-functions that widely exists in value-factorization methods. For complex
tasks, we choose two benchmark scenarios in the Google Research Football (GRF)
2020 environment and eight micromanagement tasks in StarCraft Multi-Agent
Challenge (SMAC) 2019. Empirical results show that ACE significantly
outperforms the state-of-the-art algorithms on GRF, achieving up to 500%
higher sample efficiency. On SMAC, ACE achieves 100% win rates in almost all
the hard and super-hard maps. Other advantages of ACE are verified with
comprehensive experiments, including generalization, extension and
practicability. Surprisingly, ACE also shows better generalization performance
than other baselines when transferred to a new map with a different number of
agents in SMAC.
## Related Work
To solve the widespread cooperation tasks, many multi-agent reinforcement
learning (MARL) algorithms have been proposed recently. According to the
extent of centralization, these works can be divided into two categories,
independent learning scheme and action-dependent learning scheme.
First, many works tend towards a fully independent learning scheme 2022, where
agents make decisions with their independent value functions or policies. One
typical category assigns an independent actor to each agent by directly
transferring the actor-critic methods to multi-agent scenarios 2018; 2017;
2021; 2021. Another line is value-based methods 1993; 2017; 2018; 2020; 2019;
2020; 2020; 2020. To avoid the non-stationary problem, they usually develop
different factorized value functions following the IGM principle 2019, which
requires that the individually optimal actions are consistent with the jointly
optimal actions. We remark that existing value factorization methods following
the IGM principle either suffer from the structural constraints, like VDN and
QMIX, or introduce secondary components along with additional hyperparameters,
like QTRAN, WQMIX and QPLEX. However, the optimal joint action often changes
due to the discovery of a better policy, resulting in the mismatch between the
optimal joint Q function and individual functions during training. This means
that individual Q functions require more iterations to recover the
satisfaction of IGM, and the policy explores the environment with sub-optimal
actions, leading to low sample efficiency. To avoid these issues, this paper
focuses on directly estimating the value of each action, rather than following
the IGM principle to construct factorization function classes.
Figure 2: The schematic of the unit encoder. The node embedding is obtained by
the node encoder, and the edge embedding (for the unit and its interacted
units) is obtained from the edge encoder. The average-pooled edge embedding is
added to the node embedding to provide unit embedding.
Second, the action-dependent learning scheme 2019; 2021a; 2022; 2021b; 2022;
2022; 2022 is more centralized. One perspective is action-dependent execution,
where the agent makes decisions with dependency on other agents’ actions. CGS
2022 proposes a graph generator to output a directed acyclic graph which
describes the action dependency. Each node in the graph represents an agent
whose policy is dependent on the action of agents on its parent nodes.
However, each agent’s decision depends only on a subset of the previous agents
in the topological sort of the generated DAG, and the policy update is
independent of the reaction of the subsequent agents, meaning the
non-stationary effect is not fully removed. From another perspective,
action-dependency can be introduced in the policy update rather than in
execution. The multi-agent rollout algorithm 2019 and HAPPO 2021a sequentially
update the policy of each agent with the others fixed, thus avoiding
conflicting update directions of individual policy updates. This paradigm is
an implicit rather than a full action-dependency, because the policy does not
explicitly depend on the actions of the preceding agents. As a further
difference, ACE is the first value-based MARL method that achieves remarkable
performance under the action-dependent learning scheme. A more elaborate
discussion of related work is provided in the Appendix.
## Problem Formulation
In this paper we adopt the Multi-agent Markov Decision Process (MMDP) 1994 to
model cooperative multi-agent tasks. An MMDP is a tuple
$\mathcal{G}=\left\langle\mathcal{S},\mathcal{N},\mathcal{A},P,r,\gamma\right\rangle$,
where $\mathcal{S}$ is the space of global states and $\mathcal{N}$ is the set
of $n$ agents.
$\mathcal{A}\equiv\mathcal{A}_{1}\times\cdots\times\mathcal{A}_{n}$ is the
joint action space, the product of each agent’s action space $\mathcal{A}_{i}$.
At each step, the global state $s$ is given to each agent $i$ as input,
and each agent $i$ selects an action $a_{i}\in\mathcal{A}_{i}$. Then, with the
joint action $\boldsymbol{a}=\left[a_{1},...,a_{n}\right]$ and the transition
function $P\left(s^{\prime}|s,\boldsymbol{a}\right)$, the process transits to
the next state $s^{\prime}$ and returns a reward
$r\left(s,\boldsymbol{a}\right)$. The objective is to learn an
optimal policy $\boldsymbol{\pi}\left(\boldsymbol{a}\mid s\right)$ which
maximizes the expected return
$\mathcal{R}=\mathbb{E}_{\boldsymbol{\pi}}\left[\sum_{t=0}^{\infty}\gamma^{t}r\left(s^{t},\boldsymbol{a^{t}}\right)\right]$.
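As a concrete reading of the objective, a minimal sketch computing the realized discounted return $\sum_{t}\gamma^{t}r^{t}$ of a single episode; the expectation in the text averages this quantity over trajectories drawn under $\boldsymbol{\pi}$:

```python
def discounted_return(rewards, gamma):
    """Realized discounted return sum_t gamma^t * r_t for one episode,
    accumulated backwards for numerical convenience."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```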
## Method
### Bidirectional Action-Dependency
In this section, we consider a sequential decision-making scheme: all agents
make decisions sequentially. The bidirectional action-dependency has two
directions. In the forward direction, each agent’s decision depends on the
state and its preceding agents’ actions. Conversely, in the backward
direction, the update of the Q-value for an agent’s action depends on how its
successor reacts to the preceding actions.
We formalize this bidirectional dependency by transforming the original MMDP
$\mathcal{G}$ into a single-agent MDP $\widetilde{\mathcal{G}}$. In
$\widetilde{\mathcal{G}}$, the state transits along the decision-making
sequence. Specifically, an intermediate transition happens each time a
single agent in the sequence selects its action. The intermediate state is
defined as the original state $s^{t}$ together with the actions of the agents
that have already made their decisions, denoted $s_{a_{1:i}}^{t}$. At each
intermediate transition, agent $i$ receives its intermediate state
$s_{a_{1:i-1}}^{t}$ and produces its action $a_{i}$; the intermediate
state then transits to $s_{a_{1:i}}^{t}$ with a reward of $0$. After
the last agent $n$ makes its decision and the intermediate state
transits to $s_{a_{1:n}}^{t}$, a pseudo agent produces an empty action and the
intermediate state transits from $s_{a_{1:n}}^{t}$ to $s^{t+1}$, with the
reward $r\left(s^{t},\boldsymbol{a}^{t}\right)$ defined in the original MMDP
$\mathcal{G}$. With the above definition, a transition
$\left(s^{t},\boldsymbol{a}^{t},r\left(s^{t},\boldsymbol{a}^{t}\right),s^{t+1}\right)$
of $\mathcal{G}$ is expanded into a sequence of intermediate transitions
$\left(s^{t},a_{1}^{t},0,s_{a_{1}}^{t}\right),\left(s_{a_{1}}^{t},a_{2}^{t},0,s_{a_{1:2}}^{t}\right),...,\left(s_{a_{1:n-1}}^{t},a_{n}^{t},r\left(s^{t},\boldsymbol{a}^{t}\right),s^{t+1}\right)$
in $\widetilde{\mathcal{G}}$. We define $\widetilde{\mathcal{G}}$
as the sequential expansion of $\mathcal{G}$ and name this MDP the sequentially
expanded MMDP (SE-MMDP). Similarly, we call the intermediate state
$s_{a_{1:i}}$ a sequentially expanded state (SE-state), whose space is
denoted $\widetilde{\mathcal{S}}$.
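The expansion of one MMDP transition into a sequence of intermediate transitions can be sketched as follows. This is a minimal sketch of ours: SE-states are represented as `(state, committed_actions)` pairs, and the function name is an assumption, not the paper's notation:

```python
# Expand one MMDP transition (s, a, r, s') into the n intermediate
# transitions of the sequential expansion: all but the last carry reward 0,
# and the last carries the original reward r and leads to s^{t+1}.
def expand_transition(s, joint_action, r, s_next):
    n = len(joint_action)
    transitions = []
    for i, a_i in enumerate(joint_action):
        se_state = (s, joint_action[:i])          # s with actions a_{1:i} committed
        if i < n - 1:
            nxt = (s, joint_action[:i + 1])       # next intermediate SE-state
            transitions.append((se_state, a_i, 0.0, nxt))
        else:
            # last agent: transit to the next real state with the MMDP reward
            transitions.append((se_state, a_i, r, (s_next, ())))
    return transitions
```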
As depicted in Figure 1, the formulation of the SE-MMDP validates that the
bidirectional action-dependency does circumvent the non-stationary problem. In
the SE-MMDP, the forward dependency is reflected in the fact that the preceding
actions are incorporated into the SE-state, so the changing behavior of the
preceding agents is tracked in the value estimate of each SE-state. As for
the backward dependency, described by the dashed lines in Figure 1, the target
value of an agent’s action $a_{i}$ in the Bellman operator depends on its
successor’s reaction to the preceding actions, i.e., the best selection of
$a_{i+1}$, which likewise tracks the successor’s behavior.
### Bidirectional Action-Dependent Q-learning
The sequential expansion $\widetilde{\mathcal{G}}$ circumvents the
non-stationary problem, which allows us to readily adopt different single-
agent algorithms to solve $\widetilde{\mathcal{G}}$. Based on this
formulation, this section introduces the proposed bidirectional
ACtion-dEpendent Q-learning (ACE), which transfers existing single-agent
value-based methods to multi-agent scenarios with minimal adaptation and
inherits their theoretical guarantees of convergence and performance.
Figure 3: Schematic of the pipeline of ACE, which takes SMAC as an instance.
There are four units in the map. Units 1 and 2 are controlled by the RL agent,
and units 3 and 4 are enemies controlled by the environment. At first, the
initial state embedding is generated, consisting of the initial embedding for
all units obtained from the unit encoder, as well as the action embedding of
all actions obtained from the action encoder (only the action embedding of
unit 1 is shown in the figure, where actions _attack 3_ and _attack 4_ mean
unit 1 attacking units 3 and 4, respectively). Then, agent (unit) 1 is the
first to make a decision, so its action embeddings are incorporated into the
initial unit embeddings to roll out the embeddings of the candidate new SE-
states $e\left(s_{a_{1}}^{t}\right)$ (4 rolled-out SE-states in the figure).
Afterwards, all of these new SE-states are evaluated by the value encoder.
Finally, the SE-state with the maximum value is retained and used by the next
rollout for the action of agent 2.
Value-based methods usually learn a function
$Q:\mathcal{S}\rightarrow\mathbb{R}^{\left|A\right|}$ that maps the state to
the estimated return of each action, and select the action with the maximum
$Q$-value during execution. However, in the SE-MMDP, once the decision
$a_{i+1}$ of the $(i+1)$-th agent is made at the current SE-state
$s_{a_{1:i}}^{t}$, we can transit directly to the next SE-state
$s_{a_{1:i+1}}^{t}$ without interacting with the environment. Hence, we go a
step further and use a value function
$V:\widetilde{\mathcal{S}}\rightarrow\mathbb{R}$ to estimate the return of the
SE-state rather than the action, and use the values
$V\left(s_{a_{1:i+1}}^{t}\right)$ of all possible next SE-states, rolled out
via the different actions $a_{i+1}$, to select the optimal action.
Decision with Rollout Specifically, to make a decision at an SE-state
$s_{a_{1:i}}^{t}$, we use agent $i+1$’s action space $A_{i+1}$ to roll out
all possible next SE-states $s_{a_{1:{i+1}}}^{t}$ and select the action
$a_{i+1}^{t}=\mathop{\arg\max}_{a_{i+1}}\
V\left(s_{a_{1:i},a_{i+1}}^{t}\right)$, which leads to the next SE-state with
the optimal value $V\left(s_{a_{1:i+1}}^{t}\right)$.
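This rollout-based selection can be sketched in a few lines. In this minimal sketch of ours, the value function is a stand-in lookup table keyed by `(state, committed_actions)` rather than a learned network:

```python
# Pick agent (i+1)'s action by rolling out every candidate action to its
# next SE-state and taking the one with the highest value V.
def select_action(se_state, action_space, V):
    s, committed = se_state
    # value of the SE-state reached by each candidate action a_{i+1}
    candidates = {a: V[(s, committed + (a,))] for a in action_space}
    return max(candidates, key=candidates.get)
```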
Update with Rollout Our value function $V$ is updated by the standard Bellman
backup operator from single-agent RL. At an SE-state $s_{a_{1:i}}^{t}$, to
obtain the target for the value $V\left(s_{a_{1:i}}^{t}\right)$,
we likewise roll out all possible next SE-states $s_{a_{1:{i+1}}}^{t}$,
estimate their values $V\left(s_{a_{1:i+1}}^{t}\right)$, and select the
maximum as the target value. For the final SE-state $s_{a_{1:{n}}}^{t}$ in a
decision sequence, we roll out from the first SE-state $s^{t+1}$ of the next
decision sequence, i.e., the next state in the original MMDP $\mathcal{G}$.
The update of $V$ is formalized in Eq. 1, with
$\hat{V}\left(s_{a_{1:i}}^{t}\right)$ denoting the Bellman target of
$V\left(s_{a_{1:i}}^{t}\right)$.
$\displaystyle\hat{V}\left(s_{a_{1:i}}^{t}\right)=\begin{cases}\mathop{\max}_{a_{i+1}}\gamma V\left(s_{a_{1:i},a_{i+1}}^{t}\right),&\text{if }i<n\\ \mathop{\max}_{a_{1}}r\left(s^{t},a_{1:n}\right)+\gamma V\left(s_{a_{1}}^{t+1}\right),&\text{if }i=n\end{cases}$ (1)
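The two cases of Eq. 1 can be sketched in code. This is an illustrative sketch of ours assuming tabular SE-states keyed by `(state, committed_actions)`; the discount placement follows Eq. 1 as written, and all names are ours:

```python
# Bellman target for V(s_{a_{1:i}}^t) per Eq. (1). action_spaces is a list
# of per-agent action spaces (0-indexed), so agent i+1's space is
# action_spaces[i]; r and s_next are only needed for the final SE-state.
GAMMA = 0.99

def bellman_target(se_state, i, n, action_spaces, V, r=None, s_next=None):
    s, committed = se_state
    if i < n:
        # roll out agent i+1's actions within the same decision sequence
        return max(GAMMA * V[(s, committed + (a,))] for a in action_spaces[i])
    # final SE-state: roll out agent 1's actions at the next real state s^{t+1}
    return max(r + GAMMA * V[(s_next, (a,))] for a in action_spaces[0])
```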
### Network Representation
Deep Reinforcement Learning (DRL) methods usually benefit from the strong
generalization ability of a deep neural network (DNN), which encodes the state
into a vectorized embedding and maps the embedding to the estimated return of
the state or action. As the design of the representation has a great effect on
the efficiency and performance of the algorithm, we discuss two concerns in
the design of the network representation of the SE-state $s_{a_{1:{i}}}^{t}$
with a DNN.
Figure 4: Comparison of ACE against baselines on four super hard and four hard
SMAC maps.
Decomposed State Embedding First, a transition
$\left(s^{t},\boldsymbol{a}^{t},r\left(s^{t},\boldsymbol{a}^{t}\right),s^{t+1}\right)$
in the original MMDP $\mathcal{G}$ corresponds to $n$ intermediate transitions
in the sequential expansion $\widetilde{\mathcal{G}}$, and each intermediate
transition requires evaluating $\left|A_{i}\right|$ next states, resulting in
$\sum_{i=1}^{n}\left|A_{i}\right|$ states to evaluate in total. Computing all
state embeddings directly from scratch would incur unacceptable computational
cost, so the first principle we follow in the representation of the SE-state
is: all the state embeddings $e_{s}\left(s_{a_{1:{i}}}^{t}\right)$
along the sequential decision-making are decomposed into a shared embedding
$e_{s}\left(s^{t}\right)$ of the initial state $s^{t}$ and a shared
set of embeddings $e_{a}\left(a_{1}\right),...,e_{a}\left(a_{n}\right)$ of the
available actions $a_{1},...,a_{n}$, all generated by the same action encoder.
The state embedding $e_{s}\left(s_{a_{1:{i}}}^{t}\right)$ is then obtained by
combining the initial state embedding $e_{s}\left(s^{t}\right)$ with the
corresponding action embeddings
$e_{a}\left(a_{1}\right),...,e_{a}\left(a_{i}\right)$. In this decomposition,
the original state needs to be encoded only once rather than
$\sum_{i=1}^{n}\left|A_{i}\right|$ times. Moreover, the combination is
additive and introduces negligible cost. Second, a multi-agent task involves
interactions among multiple units, including cooperative interactions among
agent-controlled units, such as healing an allied unit in SMAC, and
interactions between agent-controlled and environment-controlled units, such
as attacking an enemy unit in SMAC. We follow two designs, unit-wise state
embedding and interaction-aware action embedding, to describe these
interactions in the state and action embeddings.
Unit-wise State Embedding For the state embedding, we use a unit encoder to
generate the unit-wise embedding $e_{u}\left(u_{i}\right)$ of each unit
$u_{i}$ in the environment, which forms the initial state embedding
$e_{s}\left(s^{t}\right)=\left[e_{u}\left(u_{1}\right),...,e_{u}\left(u_{m}\right)\right]$,
where $m$ is the number of units. We assume that the first $n$ units are
controlled by the RL agent and the remaining $(m-n)$ units are controlled by
the environment. We do not fuse the unit embeddings into a global state
embedding, but retain them to facilitate describing the interactions among
units. The input feature of each unit includes a node feature and an edge
feature. The node feature is the state of the unit, e.g., the health and
shield in SMAC or the speed in GRF, and the edge feature is the relation
between units, e.g., the distance between units in SMAC. Our unit encoder
takes a fairly simple architecture, depicted in Figure 2. The node and edge
features are separately encoded by two encoders to generate the corresponding
embeddings; in this paper, each encoder is a fully connected layer followed
by a ReLU activation (Agarap 2018). The resulting edge embedding is
average-pooled and then added to the node embedding to obtain the final unit
embedding.
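The unit encoder's dataflow (fc + ReLU on node and edge features, average-pool the edge embeddings, add to the node embedding) can be sketched as follows. This is a pure-Python stand-in of ours with hand-set weights; the real encoder uses learned layers:

```python
# Sketch of the unit encoder: node and edge features pass through separate
# fc+ReLU encoders; edge embeddings are average-pooled and added to the
# node embedding. Weights W*, b* stand in for learned parameters.
def relu(v):
    return [max(0.0, x) for x in v]

def fc(v, W, b):
    # fully connected layer: W is out_dim x in_dim, b is out_dim
    return [sum(w * x for w, x in zip(row, v)) + b_j
            for row, b_j in zip(W, b)]

def unit_embedding(node_feat, edge_feats, Wn, bn, We, be):
    node_emb = relu(fc(node_feat, Wn, bn))
    edge_embs = [relu(fc(e, We, be)) for e in edge_feats]
    # average-pool the edge embeddings, then add to the node embedding
    pooled = [sum(col) / len(edge_embs) for col in zip(*edge_embs)]
    return [n + p for n, p in zip(node_emb, pooled)]
```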
Interaction-aware Action Embedding To make the state embedding
$e_{s}\left(s_{a_{1:{i}}}^{t}\right)$ aware of the unit interactions, we
develop a two-fold interaction-aware action embedding. Given an action $a_{i}$
that is executed by unit $u_{i}$ and involves an interaction with some target
units, its action embedding consists of an active embedding and a passive
embedding, formalized as
$e_{a}\left(a_{i}\right)=\left[e_{a}^{a}\left(a_{i}\right),e_{a}^{p}\left(a_{i}\right)\right]$.
The active embedding $e_{a}^{a}\left(a_{i}\right)$ encodes the effect of
action $a_{i}$ on the unit $u_{i}$ itself, and the passive embedding
$e_{a}^{p}\left(a_{i}\right)$ encodes the effect of action $a_{i}$ on the
target units. An action without interaction only has an active embedding,
formalized as
$e_{a}\left(a_{i}\right)=\left[e_{a}^{a}\left(a_{i}\right)\right]$.
After generating the original state embedding
$e_{s}\left(s^{t}\right)=\left[e_{u}\left(u_{1}\right),...,e_{u}\left(u_{m}\right)\right]$
and the action embeddings
$e_{a}\left(a_{1}\right),...,e_{a}\left(a_{i}\right)$, we use an additive
combination of the unit and action embeddings to construct the state embedding
$e\left(s_{a_{1:i}}^{t}\right)$ of the intermediate SE-state
$s_{a_{1:i}}^{t}$, formalized as
$e_{s}\left(s_{a_{1:i}}^{t}\right)=\left[e_{u}\left(u_{1,a_{1:i}}\right),...,e_{u}\left(u_{m,a_{1:i}}\right)\right]$.
The element $e_{u}\left(u_{j,a_{1:i}}\right)$ denotes the combination of the
initial unit embedding $e_{u}\left(u_{j}\right)$ of unit $j$ and the
embeddings of its associated actions among $a_{1:i}$. The rule of combination
is: for each action $a_{i}$, its active action embedding
$e_{a}^{a}\left(a_{i}\right)$ is added to the unit embedding
$e_{u}\left(u_{i}\right)$ of its executor $u_{i}$; if $a_{i}$ involves an
interaction with some target unit $u_{j}$, its passive action embedding
$e_{a}^{p}\left(a_{i}\right)$ is added to $e_{u}\left(u_{j}\right)$ to
describe the interaction. The definition of $e_{u}\left(u_{j,a_{1:i}}\right)$
is formalized as:
$\displaystyle e_{u}\left(u_{j,a_{1:i}}\right)=\begin{cases}e_{u}\left(u_{j}\right)+\sum_{e_{a}^{p}\left(a_{k}\right)\in P\left(a_{1:i}\right)_{j}}e_{a}^{p}\left(a_{k}\right),&\text{if }i<j\\ e_{u}\left(u_{j}\right)+e_{a}^{a}\left(a_{j}\right)+\sum_{e_{a}^{p}\left(a_{k}\right)\in P\left(a_{1:i}\right)_{j}}e_{a}^{p}\left(a_{k}\right),&\text{if }i\geq j\end{cases}$ (2)
where $P\left(a_{1:i}\right)_{j}$ is the set of all passive action embeddings
whose target unit is $u_{j}$. When $i\geq j$, meaning that $u_{j}$ is an
agent-controlled unit that has already made its decision $a_{j}$, the active
embedding $e_{a}^{a}\left(a_{j}\right)$ is also added to
$e_{u}\left(u_{j}\right)$.
In this paper, the passive embedding $e_{a}^{p}\left(a_{i}\right)$ of a unit
$u_{i}$ is generated by an action encoder whose input is the node feature of
unit $u_{i}$, because the effect of action $a_{i}$ may depend on the
executor’s state; for instance, in GRF the effect on the ball is affected by
the speed of the controlling player. The active embedding
$e_{a}^{a}\left(a_{i}\right)$, in contrast, is defined as a learnable
parameterized vector, because it is added to the embedding
$e_{u}\left(u_{i}\right)$ of $u_{i}$, which already encodes the state of
$u_{i}$. Both kinds of embeddings are learnable. As with the encoders of node
and edge features, we use a fully connected layer followed by a ReLU
activation as the action encoder.
Finally, we use an encoder to estimate the value of each SE-state. The state
embedding
$e_{s}\left(s_{a_{1:i}}^{t}\right)=\left[e_{u}\left(u_{1,a_{1:i}}\right),...,e_{u}\left(u_{m,a_{1:i}}\right)\right]$
is fed into an ‘fc-relu’ structure to encode the interaction-aware unit
embeddings, followed by a ‘pooling-fc’ structure to output the estimated
value. Figure 3 illustrates the pipeline of embedding generation and how the
embeddings represent a transition in $\widetilde{\mathcal{G}}$.
## Experiment
To study the advantages of ACE, we consider three tasks: (1) Spiders-and-Fly,
(2) the StarCraft Multi-Agent Challenge (SMAC) and (3) Google Research
Football (GRF). Since the baselines we compare against are designed for
partially observable settings, we also describe our efforts to ensure
fairness in this section. More details on these tasks are included in the
Appendix.
Spiders-and-Fly The Spiders-and-Fly problem was first proposed by Bertsekas
(2019); multiple spiders are controlled to catch a fly on a 2D grid. At each
time step, each spider moves to a neighboring location or stays put, while
the fly moves randomly to a neighboring location. In this paper, we modify it
into a much harder problem where only two spiders are controlled by the RL
agent, and the fly avoids moving to locations neighboring the spiders,
staying still otherwise. Each episode starts from a state where the Manhattan
distance between the fly and each spider is larger than 4. With these
modifications, the two spiders must cooperate perfectly to encircle the fly
in a corner. The reward is 10 if the fly is caught and 0 otherwise.
StarCraft Multi-Agent Challenge (SMAC) In SMAC (Samvelyan et al. 2019), the
ally units controlled by the RL agent play against enemy units controlled by
built-in rules. To win, the allies must learn cooperative micro-tricks such
as positioning, kiting and focusing fire. This benchmark consists of various
maps classified as Easy, Hard, and Super Hard. Since the Easy maps are
already solved well by existing methods (Hu et al. 2021), we focus on four
Super Hard maps: corridor, MMM2, 6h_vs_8z, and 3s5z_vs_3s6z, and four Hard
maps: 5m_vs_6m, 2c_vs_64zg, 8m_vs_9m and 3s_vs_5z.
Google Research Football (GRF) Compared with SMAC, GRF (Kurach et al. 2020)
provides a harder environment with a large action space and sparse rewards.
In GRF, agents coordinate timing and location to organize attacks, and only
scoring leads to rewards. In our experiments, we control the left-team
players except for the goalkeeper, while the built-in engine controls the
right-team players. We evaluate our method on two challenging scenarios:
academy_3_vs_1_with_keeper and academy_counterattack_hard. For a fair
comparison, we use the standard 19 actions (i.e., moving, sliding, shooting
and passing) and use the same observation as CDS (Chenghao et al. 2021) to
construct our input features. Following the settings in CDS, we also make a
reasonable change to the two half-court offensive scenarios: we lose if our
players or the ball return to our half-court. All experiments are tested with
this modification. The final reward is +100 when our team wins, -1 when our
players or the ball return to our half-court, and 0 otherwise.
Evaluation Metric For Spiders-and-Fly, we derive an analytical optimal
solution as the oracle policy and introduce two metrics: (1) the number of
samples required to achieve a 100% success rate within ten steps, and (2) the
gap between the average number of steps required by the RL policy and by the
oracle policy to catch the fly. For SMAC, we follow the official evaluation
protocol of Samvelyan et al. (2019): we run 32 test episodes without
exploration to record the test win rate and report the median performance as
well as the 25-75% percentiles across 5 seeds. For GRF, we similarly run 32
test episodes to obtain the win rate and report the average win rate as well
as the variance across 5 seeds.
Metric | Map | VDN | QMIX | QTRAN | ACE
---|---|---|---|---|---
Steps | 5$\times$5 | 0.78$\pm$0.10 | 0.77$\pm$0.10 | 0.60$\pm$0.09 | 0.04$\pm$0.03
Steps | 7$\times$7 | 0.90$\pm$0.12 | 0.87$\pm$0.11 | 1.02$\pm$0.09 | 0.07$\pm$0.02
Samples (M) | 5$\times$5 | 0.19$\pm$0.02 | 0.19$\pm$0.02 | 0.17$\pm$0.02 | 0.09$\pm$0.01
Samples (M) | 7$\times$7 | 1.97$\pm$0.10 | 1.81$\pm$0.09 | 1.68$\pm$0.09 | 1.01$\pm$0.06
Table 1: Comparison of ACE against baselines on Spiders-and-Fly. Steps
denotes the gap between the average steps of each method and the oracle
policy. Samples denotes the number of samples needed to achieve a 100%
success rate within 10 steps.
### Performance
Spiders-and-Fly We compare ACE with three value-factorization methods: QTRAN
(Son et al. 2019), QMIX (Rashid et al. 2018) and VDN (Sunehag et al. 2017),
on grids of size 5$\times$5 and 7$\times$7. As shown in Table 1, ACE is the
only method that approximates the performance of the oracle policy; the
baselines, although they also find the best behavior in some cases, cannot
consistently converge to the optimal policy. Moreover, ACE takes up to 50%
fewer samples to achieve a 100% success rate within ten steps.
Figure 5: Comparison of ACE against baselines on GRF.
SMAC We compare ACE with both state-of-the-art (SOTA) value-based and
actor-critic methods on SMAC. First, our value-based baseline is fine-tuned
QMIX (Hu et al. 2021), which combines QMIX (Rashid et al. 2018) with a bag of
code-level optimizations and outperforms QPLEX (Wang et al. 2020), QTRAN (Son
et al. 2019), vanilla QMIX, and Weighted QMIX (Rashid et al. 2020). Second,
we choose the SOTA actor-critic method NOISY-MAPPO (Hu, Hu, and Liao 2021) as
the actor-critic baseline. Although the two methods are proposed for the CTDE
pipeline, they are also important baselines for handling the exponentially
large action space in multi-agent tasks, so the comparison with them is fair.
Note that both baseline algorithms are designed for partially observable
scenarios, where each agent only uses its local observation to generate its
action, while ACE uses the observations of all units to make decisions. Thus,
for a fair comparison, we make each agent in the two baselines share the
union of all units’ observations at the input, denoted SHARED. We also
evaluate the baselines with the original local observations, denoted LOCAL,
because in some cases the shared observation performs worse; for example,
NOISY-MAPPO-LOCAL achieves better performance than NOISY-MAPPO-SHARED on
6h_vs_5z. As shown in Figure 4, ACE surpasses fine-tuned QMIX and NOISY-MAPPO
by a large margin in both final win rate and sample efficiency. Remarkably,
it achieves 100% test win rates on almost all maps, including 5m_vs_6m and
3s5z_vs_3s6z, which have not been solved well by existing methods even with
shared observations. ACE therefore sets a new SOTA on SMAC.
GRF We show the performance comparison against the baselines in Figure 5. ACE
outperforms the SOTA methods CDS-QMIX and CDS-QPLEX (Chenghao et al. 2021)
by a large margin in both scenarios. The gap between ACE and the baselines is
even larger than on SMAC, possibly because the football game requires more
complex cooperation skills.
Figure 6: (a): Comparison of ACE and ACE-w/o-IA against QMIX on corridor and
3s5z_vs_3s6z. (b): Comparison of sorted order against shuffle order of ACE.
Figure 7: (a): Comparison of ACE and ACE-PPO on 5m_vs_6m. (b): Transfer from
5m_vs_6m to maps with different numbers of agents. ’x-y’ represents xm_vs_ym.
ACE can achieve remarkable performance in the zero-shot setting, regardless of
whether agent number increases or decreases. (c): Comparison of ACE-CTDE
against ACE.
### Ablation: What matters in the components of ACE?
To better understand why ACE outperforms the baseline algorithms, we perform
ablations and modifications. First, we remove the interaction-aware action
embedding by only using the active embedding, denoted ACE-w/o-IA, and compare
it with ACE and fine-tuned QMIX. As shown in Figure 6(a), the gap between
ACE-w/o-IA and QMIX is still large, which validates that the bidirectional
action-dependency itself accounts for much of the benefit of ACE. Moreover,
ACE-w/o-IA underperforms ACE, confirming the effectiveness of the
interaction-aware embedding. Second, we study how the order of
decision-making influences the performance of ACE. We compare two settings:
1) Shuffle Order: random orders are generated for both data collection and
training; 2) Sorted Order: agents are sorted by unit type and location. As
shown in Figure 6(b), the two settings show little difference in performance
on two SMAC maps, which validates that ACE is quite robust to the order of
agents.
### Extension: Extend ACE to the actor-critic method
Our approach of transforming an MMDP into an MDP is general and can be
combined with a wide range of single-agent RL algorithms. In this section, we
combine ACE with an actor-critic method, PPO, denoted ACE-PPO. To generate
the logit of each action in PPO, we roll out each action to the corresponding
next SE-state and evaluate these states with our value encoder in the same
way, using the result as the logit. As shown in Figure 7(a), ACE-PPO achieves
performance comparable to ACE on the 5m_vs_6m map, which validates that ACE
is applicable to a wider class of algorithms.
### Generalization: Does ACE generalize to a new task with a different number
of agents?
An interesting advantage of ACE is its surprising generalization. Compared
with prior methods where agents make decisions individually, ACE explicitly
models the cooperation between agents. As a result, when the preceding agents
take sub-optimal actions due to a change of task, the subsequent agents
compensate for it through the learned cooperative skills. We train ACE and
fine-tuned QMIX-SHARED on 5m_vs_6m and test them on 4m_vs_5m, 5m_vs_5m,
6m_vs_7m, 8m_vs_9m and 10m_vs_11m. Since the action and observation spaces
change with the number of agents, we address this problem as described in the
Appendix. As shown in Figure 7(b), even without any fine-tuning on the test
maps, ACE still achieves considerable win rates, which reveals excellent
generalization to changes in the number of agents.
### Practicability: Apply ACE to the CTDE scheme.
We develop a simple adaptation of ACE, denoted ACE-CTDE, to apply it in the
centralized training and decentralized execution (CTDE) scheme, a popular
setting for multi-agent tasks with limited communication. In most CTDE
methods, an individual value function uses the local observation to estimate
the individual value, and a joint value function estimates the value of the
joint action from the global state. The optimal actions of the two functions
are aligned via well-designed constraints to guarantee the IGM property.
Similarly, we use a counterfactual distillation technique to distill the
optimal joint action, generated directly via the sequential rollout in ACE,
into an additional individual value function $Q\left(o_{i}^{t},a_{i}\right)$
whose input is the local observation. The counterfactual distillation is
formalized as
$\hat{Q}\left(o_{i}^{t},a_{i}\right)=V\left(s_{a_{i},a_{i-}^{*}}^{t}\right)$,
where $\hat{Q}\left(o_{i}^{t},a_{i}\right)$ is the target used to update
$Q\left(o_{i}^{t},a_{i}\right)$, $o_{i}^{t}$ is the local observation of
agent $i$, and $a_{i-}^{*}$ denotes the optimal joint action generated by the
sequential rollout excluding the action of agent $i$. This distillation
estimates each individual action value of an agent with the other agents’
actions fixed at their jointly optimal values, and thus follows the IGM
principle. In Figure 7(c), ACE-CTDE is evaluated with the individual value
function $Q$ in a decentralized way. ACE-CTDE performs nearly as well as ACE
thanks to the IGM property of the proposed distillation.
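The counterfactual-distillation target can be sketched as follows. In this minimal sketch of ours, $V$ is a stand-in table over `(state, joint_action)` keys and agents are 0-indexed; all names are ours:

```python
# Distillation targets for agent i: Q-hat(o_i, a_i) = V of the SE-state where
# agent i takes a_i and all other agents keep their rollout-optimal actions
# a_{i-}^*. The argmax over these targets matches the joint-optimal action,
# which is what the IGM property requires.
def distillation_targets(V, s, optimal_joint, i, action_space):
    targets = {}
    for a in action_space:
        joint = list(optimal_joint)
        joint[i] = a                       # swap in the counterfactual action
        targets[a] = V[(s, tuple(joint))]  # V(s^t_{a_i, a_{i-}^*})
    return targets
```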
## Conclusion
In this paper, we introduce bidirectional action-dependency to solve the non-
stationary problem in cooperative multi-agent tasks. The proposed ACE
algorithm significantly improves sample efficiency and converged performance
over state-of-the-art algorithms. Comprehensive experiments validate the
advantages of ACE in many aspects.
## Acknowledgments
Wanli Ouyang was supported by the Australian Research Council Grant
DP200103223, Australian Medical Research Future Fund MRFAI000085, CRC-P Smart
Material Recovery Facility (SMRF) – Curby Soft Plastics, and CRC-P ARIA -
Bionic Visual-Spatial Prosthesis for the Blind. This work is partially
supported by the Shanghai Committee of Science and Technology (Grant No.
21DZ1100100). We thank Yining Fang for strong support and in-depth
discussions.
## References
* Agarap (2018) Agarap, A. F. 2018. Deep learning using rectified linear units (relu). _arXiv preprint arXiv:1803.08375_.
* Bertsekas (2019) Bertsekas, D. 2019. Multiagent rollout algorithms and reinforcement learning. _arXiv preprint arXiv:1910.00120_.
* Böhmer, Kurin, and Whiteson (2020) Böhmer, W.; Kurin, V.; and Whiteson, S. 2020. Deep coordination graphs. In _International Conference on Machine Learning_ , 980–991. PMLR.
* Canese et al. (2021) Canese, L.; Cardarilli, G. C.; Di Nunzio, L.; Fazzolari, R.; Giardino, D.; Re, M.; and Spanò, S. 2021. Multi-agent reinforcement learning: A review of challenges and applications. _Applied Sciences_ , 11(11): 4948.
* Chenghao et al. (2021) Chenghao, L.; Wang, T.; Wu, C.; Zhao, Q.; Yang, J.; and Zhang, C. 2021. Celebrating diversity in shared multi-agent reinforcement learning. _Advances in Neural Information Processing Systems_ , 34.
* Foerster et al. (2018) Foerster, J.; Farquhar, G.; Afouras, T.; Nardelli, N.; and Whiteson, S. 2018. Counterfactual multi-agent policy gradients. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 32.
* Fu et al. (2022) Fu, W.; Yu, C.; Xu, Z.; Yang, J.; and Wu, Y. 2022. Revisiting Some Common Practices in Cooperative Multi-Agent Reinforcement Learning. In _International Conference on Machine Learning_.
* Gronauer and Diepold (2022) Gronauer, S.; and Diepold, K. 2022. Multi-agent deep reinforcement learning: a survey. _Artificial Intelligence Review_ , 55(2): 895–943.
* Hu, Hu, and Liao (2021) Hu, J.; Hu, S.; and Liao, S.-w. 2021. Policy Perturbation via Noisy Advantage Values for Cooperative Multi-agent Actor-Critic methods. _arXiv preprint arXiv:2106.14334_.
* Hu et al. (2021) Hu, J.; Wu, H.; Harding, S. A.; Jiang, S.; and Liao, S.-w. 2021. RIIT: Rethinking the Importance of Implementation Tricks in Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:2102.03479_.
* Hüttenrauch, Šošić, and Neumann (2017) Hüttenrauch, M.; Šošić, A.; and Neumann, G. 2017. Guided deep reinforcement learning for swarm systems. _arXiv preprint arXiv:1709.06011_.
* Kuba et al. (2021a) Kuba, J. G.; Chen, R.; Wen, M.; Wen, Y.; Sun, F.; Wang, J.; and Yang, Y. 2021a. Trust region policy optimisation in multi-agent reinforcement learning. _arXiv preprint arXiv:2109.11251_.
* Kuba et al. (2022) Kuba, J. G.; Feng, X.; Ding, S.; Dong, H.; Wang, J.; and Yang, Y. 2022. Heterogeneous-agent mirror learning: A continuum of solutions to cooperative marl. _arXiv preprint arXiv:2208.01682_.
* Kuba et al. (2021b) Kuba, J. G.; Wen, M.; Meng, L.; Zhang, H.; Mguni, D.; Wang, J.; Yang, Y.; et al. 2021b. Settling the variance of multi-agent policy gradients. _Advances in Neural Information Processing Systems_.
* Kurach et al. (2020) Kurach, K.; Raichuk, A.; Stańczyk, P.; Zając, M.; Bachem, O.; Espeholt, L.; Riquelme, C.; Vincent, D.; Michalski, M.; Bousquet, O.; et al. 2020\. Google research football: A novel reinforcement learning environment. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, 4501–4510.
* Littman (1994) Littman, M. L. 1994. Markov games as a framework for multi-agent reinforcement learning. In _Machine learning proceedings 1994_ , 157–163. Elsevier.
* Lowe et al. (2017) Lowe, R.; Wu, Y. I.; Tamar, A.; Harb, J.; Pieter Abbeel, O.; and Mordatch, I. 2017\. Multi-agent actor-critic for mixed cooperative-competitive environments. _Advances in neural information processing systems_ , 30.
* Papoudakis et al. (2019) Papoudakis, G.; Christianos, F.; Rahman, A.; and Albrecht, S. V. 2019. Dealing with non-stationarity in multi-agent deep reinforcement learning. _arXiv preprint arXiv:1906.04737_.
* Rashid et al. (2020) Rashid, T.; Farquhar, G.; Peng, B.; and Whiteson, S. 2020. Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. _Advances in neural information processing systems_ , 33: 10199–10210.
* Rashid et al. (2018) Rashid, T.; Samvelyan, M.; Schroeder, C.; Farquhar, G.; Foerster, J.; and Whiteson, S. 2018. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In _International Conference on Machine Learning_ , 4295–4304. PMLR.
* Ruan et al. (2022) Ruan, J.; Du, Y.; Xiong, X.; Xing, D.; Li, X.; Meng, L.; Zhang, H.; Wang, J.; and Xu, B. 2022. GCS: Graph-based Coordination Strategy for Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:2201.06257_.
* Samvelyan et al. (2019) Samvelyan, M.; Rashid, T.; De Witt, C. S.; Farquhar, G.; Nardelli, N.; Rudner, T. G.; Hung, C.-M.; Torr, P. H.; Foerster, J.; and Whiteson, S. 2019. The starcraft multi-agent challenge. _arXiv preprint arXiv:1902.04043_.
* Shalev-Shwartz, Shammah, and Shashua (2016) Shalev-Shwartz, S.; Shammah, S.; and Shashua, A. 2016. Safe, multi-agent, reinforcement learning for autonomous driving. _arXiv preprint arXiv:1610.03295_.
* Son et al. (2020) Son, K.; Ahn, S.; Reyes, R. D.; Shin, J.; and Yi, Y. 2020. QTRAN++: Improved Value Transformation for Cooperative Multi-Agent Reinforcement Learning. _arXiv preprint arXiv:2006.12010_.
* Son et al. (2019) Son, K.; Kim, D.; Kang, W. J.; Hostallero, D. E.; and Yi, Y. 2019. Qtran: Learning to factorize with transformation for cooperative multi-agent reinforcement learning. In _International Conference on Machine Learning_ , 5887–5896. PMLR.
* Sunehag et al. (2017) Sunehag, P.; Lever, G.; Gruslys, A.; Czarnecki, W. M.; Zambaldi, V.; Jaderberg, M.; Lanctot, M.; Sonnerat, N.; Leibo, J. Z.; Tuyls, K.; et al. 2017. Value-decomposition networks for cooperative multi-agent learning. _arXiv preprint arXiv:1706.05296_.
* Tan (1993) Tan, M. 1993. Multi-agent reinforcement learning: Independent vs. cooperative agents. In _Proceedings of the tenth international conference on machine learning_ , 330–337.
* Terry et al. (2021) Terry, J.; Black, B.; Grammel, N.; Jayakumar, M.; Hari, A.; Sullivan, R.; Santos, L. S.; Dieffendahl, C.; Horsch, C.; Perez-Vicente, R.; et al. 2021. Pettingzoo: Gym for multi-agent reinforcement learning. _Advances in Neural Information Processing Systems_ , 34: 15032–15043.
* Usunier et al. (2016) Usunier, N.; Synnaeve, G.; Lin, Z.; and Chintala, S. 2016. Episodic exploration for deep deterministic policies for StarCraft micromanagement.
* Wang et al. (2020) Wang, J.; Ren, Z.; Liu, T.; Yu, Y.; and Zhang, C. 2020. Qplex: Duplex dueling multi-agent q-learning. _arXiv preprint arXiv:2008.01062_.
* Wang et al. (2019) Wang, W.; Yang, T.; Liu, Y.; Hao, J.; Hao, X.; Hu, Y.; Chen, Y.; Fan, C.; and Gao, Y. 2019. Action Semantics Network: Considering the Effects of Actions in Multiagent Systems. In _International Conference on Learning Representations_.
* Ye et al. (2022) Ye, J.; Li, C.; Wang, J.; and Zhang, C. 2022. Towards Global Optimality in Cooperative MARL with Sequential Transformation. _arXiv preprint arXiv:2207.11143_.
* Yu et al. (2021) Yu, C.; Velu, A.; Vinitsky, E.; Wang, Y.; Bayen, A.; and Wu, Y. 2021. The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games. _arXiv preprint arXiv:2103.01955_.
* Zhang and Lesser (2011) Zhang, C.; and Lesser, V. 2011. Coordinated multi-agent reinforcement learning in networked distributed POMDPs. In _Twenty-Fifth AAAI Conference on Artificial Intelligence_.
* Zhang and Lesser (2013) Zhang, C.; and Lesser, V. 2013. Coordinating multi-agent reinforcement learning with limited communication. In _Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems_ , 1101–1108.
* Zhang et al. (2020) Zhang, T.; Xu, H.; Wang, X.; Wu, Y.; Keutzer, K.; Gonzalez, J. E.; and Tian, Y. 2020. Multi-agent collaboration via reward attribution decomposition. _arXiv preprint arXiv:2010.08531_.
* Zhang et al. (2019) Zhang, Z.; Li, H.; Zhang, L.; Zheng, T.; Zhang, T.; Hao, X.; Chen, X.; Chen, M.; Xiao, F.; and Zhou, W. 2019. Hierarchical reinforcement learning for multi-agent moba game. _arXiv preprint arXiv:1901.08004_.
* Zhou et al. (2020) Zhou, M.; Luo, J.; Villella, J.; Yang, Y.; Rusu, D.; Miao, J.; Zhang, W.; Alban, M.; Fadakar, I.; Chen, Z.; et al. 2020. Smarts: Scalable multi-agent reinforcement learning training school for autonomous driving. _arXiv preprint arXiv:2010.09776_.
## Appendix
In this appendix, we first discuss the difference between this paper and
related works. Then, we provide more elaboration on the implementation
details, experiment results and qualitative results. Specifically, we present
the discussion of the related works in Further Discussion of Related Work, the
implementation details of ACE and the baselines on the three environments in
Implementation Details, implementation details of ablation studies in Ablation
Details, and additional experimental results in Additional Experiments.
## Further Discussion of Related Work
Many MARL algorithms have been proposed to improve cooperation. The two lines of work most closely related to ACE are coordination graphs (2011; 2013; 2020) and advanced network representations (2019).
To mitigate the non-stationarity problem in MARL, coordination graph-based algorithms introduce action dependency into the formulation and use higher-order state-action functions, instead of fully individual state-action functions, to represent the joint Q-function. However, coordination graph methods, still based on the value factorization paradigm, introduce only pairwise action dependency into the value estimation for the MMDP. ACE, in contrast, is based on a transformation from MMDP to MDP, which tackles the non-stationarity problem by introducing full action dependency.
To improve action representation in MARL, ASN (2019) also classifies an agent's actions into $A_{in}$ and $A_{out}$ according to whether they affect other units. However, ACE is fundamentally different from ASN. First, ASN is designed for the individual value network, while ACE targets the joint value network. Second, ASN can only model the action of one agent, whereas ACE describes all actions that have been selected. Moreover, ASN uses the information of the target unit at the input to model the action that interacts with that unit, while ACE uses the action embedding to update the output embedding of the target unit.
ZO (2016) is another related work. It also introduces a sequential inference scheme, i.e., sequential expansion (SE), for structured output prediction. However, the potential of SE has long been overlooked by the community. In this work, we revisit this topic, fully release the potential of SE, and further validate the compatibility of SE with other single-agent algorithms (e.g., PPO). Additionally, ACE directly learns the value function $V(s,a_{1},a_{2},...,a_{i})$, which gives a unified representation of agents that have and have not yet made decisions. In ZO, by contrast, the value function is defined as $Q(a_{i}|s,a_{1},a_{2},...,a_{i-1})$, so the model must additionally learn which agent to act for at each step, which hinders sample efficiency.
## Implementation Details
### Baselines
##### (1) Fine-tuned QMIX (2021) and Noisy-MAPPO (2021)
We use the official code of fine-tuned QMIX (https://github.com/hijkzzz/pymarl2) and Noisy-MAPPO (https://github.com/hijkzzz/noisy-mappo) provided by their authors. Both implementations incorporate fine-tuned hyper-parameters and a number of code-level optimizations, which fully release the potential of the two baselines.
##### (2) The Difference between SHARED and LOCAL
The aim of the SHARED setting is to enable each individual agent in our baseline algorithms to make decisions based on the observations of all agents. In the SHARED setting, we make the state of an enemy observable to all agents if it is observed by at least one agent. Also, the state of each agent is observed by all other agents. As a result, the SHARED setting gives all agents access to a shared global observation, which is the union of the observations of all agents.
##### (3) CDS (2021)
We use the official code of CDS (https://github.com/lich14/CDS).
##### (4) QTRAN, QMIX, and VDN on Spiders and Fly (2021)
We use the same hyper-parameters as ACE for QTRAN, QMIX, and VDN, as listed in Table 2. Their input features are also based on the same information as ACE's, including the unit ID, the position, and the distances among units, as listed in Table 3. For the algorithm-specific hyper-parameters of these baselines, such as the weight of the extra loss in QTRAN, we use the choices provided in their original papers.
Parameter | Value
---|---
Exploration
action_selector | epsilon_greedy
epsilon_type | linear
epsilon_start | 1
epsilon_end | 0.05
epsilon_decay | 150k
Sample
collector_env_num | 8
sample_per_collect | 1024
replay_buffer_size | 1M
Training
update_per_collect | 10
batch_size | 256
weight_decay | 0
learning_rate | 0.0005
target_update_theta | 0.02
discount_factor | 0.99
optimizer | adam
Model
hidden_len | 128
Table 2: Hyper-parameter Settings of ACE and baselines on Spiders and Fly.
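The `epsilon_type | linear` entry above denotes a linearly annealed epsilon-greedy exploration schedule, decaying from `epsilon_start` to `epsilon_end` over `epsilon_decay` environment steps. A minimal sketch of how such a schedule might be computed (the function name is ours, not taken from the released code):

```python
def linear_epsilon(step, start=1.0, end=0.05, decay_steps=150_000):
    """Linearly anneal the exploration rate from `start` to `end`
    over `decay_steps` steps, then hold it at `end` thereafter."""
    frac = min(step / decay_steps, 1.0)
    return start + frac * (end - start)
```

With the Table 2 settings, epsilon starts at 1, reaches 0.05 at 150k steps, and stays there.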
Feature Name | Components
---|---
ACE
node_feature | unit id (0, 1 for spiders, 2 for fly)
| unit position
edge_feature | distance on two axes
Baselines
input_feature | unit id (0, 1 for spiders, 2 for fly)
| unit position, distance to all other
| units on two axes
Table 3: Input Feature on Spiders and Fly.
### ACE
We implemented ACE based on DI-engine (https://github.com/opendilab/DI-engine), a generalized decision intelligence engine that supports various deep reinforcement learning algorithms. All code is licensed under the Apache License or MIT License. The hyper-parameter settings of ACE on Spiders and Fly, SMAC, and GRF are listed in Table 2, Table 4, and Table 9, respectively. The input features for the three environments are defined in Table 3, Table 5, and Table 6. Note that, unlike pymarl2 (https://github.com/hijkzzz/pymarl2), in DI-engine the batch_size denotes the number of transitions, not the number of episodes.
Parameter | Value
---|---
Exploration
action_selector | epsilon_greedy
epsilon_type | linear
epsilon_start | 1
epsilon_end | 0.05
epsilon_decay | 50k
Sampler
collector_env_num | 8
episode_per_collect | 32
replay_buffer_size | 300k (100k for 2c_vs_64zg)
Training
update_per_collect | 50
batch_size | 320
weight_decay | 1e-5
learning_rate | 3e-4
target_update_theta | 0.008
discount_factor | 0.99
optimizer | rmsprop
Model
hidden_len | 256
Table 4: Hyperparameter Settings of ACE on SMAC.
Feature Name | Components
---|---
node_feature | unit id, unit type, unit position, health, shield,
| cool down
edge_feature | distance on two axes
Table 5: Input Feature of ACE on SMAC.
Feature Name | Components
---|---
node_feature | unit id, unit position, speed, whether own ball
edge_feature | distance on two axes
Table 6: Input Feature of ACE on GRF.
In Table 7, we show in detail how the two-fold interaction-aware action
embedding was constructed.
Env | Action | Passive: used? | Passive: target unit | Active: used? | Active: target unit
---|---|---|---|---|---
Spiders-and-Fly | move | No | N/A | Yes | Itself
| stay | No | N/A | Yes | Itself
SMAC | move | No | N/A | Yes | Itself
| attack | Yes | The unit it attacks | Yes | Itself
| heal | Yes | The unit it heals | Yes | Itself
GRF | action of the ball owner | Yes | No | Yes | Itself
| action of other players | No | N/A | Yes | Itself
### Environments
#### Spiders and Fly
Here, we visualize the initial positions of the two spiders and the fly on a 7$\times$7 map in Figure 8. Each episode starts from a state where the Manhattan distance between the fly and each spider is larger than 4. At each time step, the optimal strategy is one of the following two types: (1) both spiders move to drive the fly toward the corner, and once it is in the corner, (2) one spider stays still to restrict the possible movements of the fly while the other approaches it. Thus, optimal cooperation is required at every time step. We observe in our experiments that ACE is the only method that can approximate the performance of the oracle policy on the 5$\times$5 and 7$\times$7 maps. This result stems from the fact that the introduced bidirectional action dependency enables explicit learning of how to cooperate with preceding agents and how successors will react.
Figure 8: Visualization of spiders and the fly in a 7$\times$7 map, where the
two spiders are controlled by the RL agent. The reward is defined as 10 if the
fly is caught otherwise 0.
#### GRF
Academy_3_vs_1_with_keeper and academy_counter- attack_hard are two of the
hardest official scenarios in GRF. To provide more clear details about the two
scenarios in GRF, we visualize the initial positions of all players and the
ball in the two scenarios in the Figure 9, and provide an RGB screenshot of
academy_3_vs_1_with_keeper in Figure 10. In both two scenarios, the target is
to control the left team players (red points in Figure 9) to get a score with
the learned cooperative skills. The right team players are controlled by a
built-in AI to defend. The proposed ACE achieves a new SOTA in both two
scenarios.
(a)
(b)
Figure 9: Visualization of the initial positions of all players and the ball,
where red points are our players, blue points are opponents and the black
point represents the ball. All our players except for our goalkeeper are
controlled by an RL agent while others are controlled by the built-in engine.
Figure 10: The academy_3_vs_1_with_keeper scenario in real game.
## Ablation Details
### ACE-PPO
In ACE-PPO, we derive the probability distribution over the action space of
each agent via a softmax operation over a logit vector, formalized by:
$\displaystyle p\left(a_{i+1}^{t}\mid
s_{a_{1:i}}^{t}\right)=\frac{e^{l\left(a_{i+1}^{t}\mid
s_{a_{1:i}}^{t}\right)}}{\sum_{\tilde{a}_{i+1}^{t}\in\mathcal{A}_{i+1}}e^{l\left(\tilde{a}_{i+1}^{t}\mid
s_{a_{1:i}}^{t}\right)}}.$ (3)
Here $l\left(a_{i+1}^{t}\mid s_{a_{1:i}}^{t}\right)$ is the logit generated in
the same way as $V\left(s_{a_{1:i+1}}^{t}\right)$. Specifically, we use an
additional logit encoder with the same structure as the value encoder to
produce $l\left(a_{i+1}^{t}\mid s_{a_{1:i}}^{t}\right)$.
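The softmax in Eq. (3) maps the logit vector to a probability distribution over agent $i{+}1$'s actions. A small illustrative sketch (not the authors' implementation; subtracting the max is a standard numerical-stability trick that leaves the distribution unchanged):

```python
import numpy as np

def action_distribution(logits):
    """Softmax over per-action logits, as in Eq. (3)."""
    z = logits - np.max(logits)  # stability shift; cancels in the ratio
    p = np.exp(z)
    return p / p.sum()
```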
We use GAE($\lambda$) as the advantage estimator, which is a common practice
of PPO in most tasks. The temporal difference is calculated and summed over
all adjacent SE-states, formalized by:
$\displaystyle A\left(a_{i+1}^{t}\mid s_{a_{1:i}}^{t}\right)=$ (4)
$\displaystyle(\gamma\lambda)^{0}\left(\gamma
V\left(s_{a_{1:i+1}}^{t}\right)-V\left(s_{a_{1:i}}^{t}\right)\right)$ (5)
$\displaystyle+(\gamma\lambda)^{1}\left(\gamma
V\left(s_{a_{1:i+2}}^{t}\right)-V\left(s_{a_{1:i+1}}^{t}\right)\right)$ (6)
$\displaystyle+\ldots$ (7)
$\displaystyle+(\gamma\lambda)^{(n-i)}\left(r\left(s^{t},a_{1:n}^{t}\right)+\gamma
V\left(s_{a_{1}}^{t+1}\right)-V\left(s_{a_{1:n}}^{t}\right)\right)$ (8)
$\displaystyle+(\gamma\lambda)^{(n-i+1)}\left(\gamma
V\left(s_{a_{2}}^{t+1}\right)-V\left(s_{a_{1}}^{t+1}\right)\right)$ (9)
$\displaystyle+\ldots.$ (10)
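The telescoping sum in Eqs. (4)-(10) is the usual backward GAE recursion applied to the flattened SE-state sequence: intermediate SE-transitions carry zero reward, and the environment reward $r(s^{t},a_{1:n}^{t})$ enters only on the transition after all $n$ agents have acted, with $\gamma$ applied per SE-step as in the equations. A hypothetical sketch under these assumptions (the interface is ours, not the released code):

```python
def se_gae(values, rewards, gamma, lam):
    """GAE over a flattened sequence of SE-states, per Eqs. (4)-(10).

    values[k]  : V at the k-th SE-state (len(values) == len(rewards) + 1)
    rewards[k] : reward on transition k -> k+1; zero except on the
                 SE-transition that crosses an environment step.
    Returns the advantage at each SE-state.
    """
    T = len(rewards)
    adv = [0.0] * T
    gae = 0.0
    for k in reversed(range(T)):  # backward accumulation of (gamma*lam)^j deltas
        delta = rewards[k] + gamma * values[k + 1] - values[k]
        gae = delta + gamma * lam * gae
        adv[k] = gae
    return adv
```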
In summary, our implementation of ACE-PPO on the SE-MDP is equivalent to standard PPO on a single-agent MDP. The hyper-parameters are listed in Table 8.
Parameter | Value
---|---
Exploration
action_selector | softmax
Sampler
collector_env_num | 8
n_episode | 32
gae_lambda | 0.95
Training
update_per_collect | 50
batch_size | 3200
learning_rate | 5e-4
discount_factor | 0.99
optimizer | adam
value_weight | 1
entropy_weight | 0.01
clip_ratio | 0.05
use_value_clip | True
value_clip_ratio | 0.3
recompute_adv | True
adv_norm | True
value_norm | True
Model
hidden_len | 256
Table 8: Hyperparameter Settings of ACE-PPO on SMAC.
### Generalization
To handle the changes in observation dimension and action space across maps with different numbers of agents, we develop extended observation and action spaces. On all maps, we assume the largest possible numbers of ally and enemy units, which are 10 and 11 on the largest map, 10m_vs_11m. To realize any smaller map xm_vs_ym (with x <= 10 and y <= 11), we initialize each episode with only x ally units and y enemy units alive, chosen at random. This modification enables us to run ACE and fine-tuned QMIX on all maps with the same observation dimension and action space.
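The fixed-size spaces described above amount to zero-padding per-unit features up to the largest map and masking absent slots. A possible sketch (the function name and mask convention are our assumptions, not taken from the released code):

```python
import numpy as np

MAX_ALLIES, MAX_ENEMIES = 10, 11  # largest map, 10m_vs_11m

def pad_units(unit_feats, n_alive, max_units):
    """Zero-pad per-unit features to a fixed number of slots and return
    an aliveness mask, so different map sizes share one observation shape."""
    dim = unit_feats.shape[1]
    out = np.zeros((max_units, dim), dtype=unit_feats.dtype)
    out[:n_alive] = unit_feats[:n_alive]
    mask = np.zeros(max_units, dtype=bool)
    mask[:n_alive] = True
    return out, mask
```

Actions targeting masked slots would likewise be marked unavailable.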
(a)
(b)
(c)
(d)
(e)
(f)
Figure 11: We denote agents trained on the Am_vs_Bm map and tested on the
Cm_vs_Dm map as Train (A-B)-Test (C-D). For example, Train (5-6)-Test (8-9)
means agents are tested on the 8m_vs_9m map while trained on the 5m_vs_6m map,
corresponding to the settings of transfer learning. Train (5-6)-Test (5-6)
means agents are trained and tested on the 5m_vs_6m map. (a): Agents’
positions of Train (5-6)-Test (5-6). (b): Agents’ positions of Train
(5-6)-Test (8-9). As shown in the left half side of (a) and (b), ACE perfectly
transfers the station position of 5m_vs_6m to 8m_vs_9m, that is, making the
agents stand up and down in a line. (c): Careful positioning and alternating
fire of Train (5-6)-Test (8-9). ACE perfectly transfers the operation of
careful positioning and alternating fire of 5m_vs_6m to 8m_vs_9m. Each ally
has very little health but kills all enemies. (d): Focusing fire in Train
(5-6)-Test (5-6). (e): Focusing fire in Train (5-6)-Test (8-9). (f): Focusing
fire in Train (8-9)-Test (8-9). The optimal choice (d) in 5m_vs_6m is all
allies focusing fire on one enemy while the optimal choice (f) in 8m_vs_9m is
all allies focusing fire on two enemies, since 5 allies can kill an enemy
instantly and having 8 agents attacking one agent (e) at the same time is
wasteful.
### ACE-CTDE
In this subsection, we provide the details of the implementation of
$Q\left(o_{i}^{t},a_{i}\right)$ in ACE-CTDE. For each agent $i$, we generate
$Q\left(o_{i}^{t},a_{i}\right)$ in the same way as
$V\left(s_{a_{1:i}}^{t}\right)$, with the same unit encoder, action encoder,
and value encoder, except for two differences. First, due to partial
observability, we only compute the unit embeddings of the units observed by
agent $i$, and the edge features of these units only include their relations
with other units observed by agent $i$. Second, due to limited communication,
only the action embedding $e_{a}\left(a_{i}\right)$ of agent $i$ is added to
the corresponding unit embeddings, rather than all preceding actions. Finally,
the set of unit embeddings (only of the units observed by agent $i$),
incorporated with only $e_{a}\left(a_{i}\right)$, is encoded by a value
encoder with the same structure as that for $V$ to generate
$Q\left(o_{i}^{t},a_{i}\right)$.
## Additional Experiments
### The behavior of transferred agents
In this section, we analyze the behavior of transferred agents in detail. As
shown in Figure 11, ACE perfectly transfers the station position, careful
positioning, alternating fire, and focusing fire from 5m_vs_6m to 8m_vs_9m,
and thereby achieves a 40% win rate on 8m_vs_9m. However, the optimal choice
of focusing fire on 8m_vs_9m differs from that on 5m_vs_6m, which is largely
responsible for ACE not achieving a 100$\%$ win rate on the 8m_vs_9m map.
Inspired by ACE, in future work we will design an algorithm that uses a single
policy to solve all SMAC maps, which involve different unit types and numbers.
Parameter | Value
---|---
Exploration
action_selector | epsilon_greedy
epsilon_type | linear
epsilon_start | 1
epsilon_end | 0.05
epsilon_decay | 50k for academy_3_vs_1_with_keeper
300k for academy_counterattack_hard
Sampler
collector_env_num | 8
episode_per_collect | 32
replay_buffer_size | 300k
Training
update_per_collect | 50
batch_size | 320
weight_decay | 1e-5
learning_rate | 0.002 for academy_3_vs_1_with_keeper
0.0009 for academy_counterattack_hard
target_update_theta | 0.08
discount_factor | 0.99
optimizer | rmsprop
Model
hidden_len | 128
Table 9: Hyperparameter Settings of ACE on GRF.
|
# Interpreting Primal-Dual Algorithms for Constrained Multiagent Reinforcement
Learning
Daniel Tabas <EMAIL_ADDRESS>
University of Washington, Department of Electrical Engineering, Seattle, WA, USA
Ahmed S. Zamzam <EMAIL_ADDRESS>
National Renewable Energy Laboratory, Golden, CO, USA
Baosen Zhang <EMAIL_ADDRESS>
University of Washington, Department of Electrical Engineering, Seattle, WA, USA
###### Abstract
Constrained multiagent reinforcement learning (C-MARL) is gaining importance
as MARL algorithms find new applications in real-world systems ranging from
energy systems to drone swarms. Most C-MARL algorithms use a primal-dual
approach to enforce constraints through a penalty function added to the
reward. In this paper, we study the structural effects of this penalty term on
the MARL problem. First, we show that the standard practice of using the
constraint function as the penalty leads to a weak notion of safety. However,
by making simple modifications to the penalty term, we can enforce meaningful
probabilistic (chance and conditional value at risk) constraints. Second, we
quantify the effect of the penalty term on the value function, uncovering an
improved value estimation procedure. We use these insights to propose a
constrained multiagent advantage actor critic (C-MAA2C) algorithm. Simulations
in a simple constrained multiagent environment affirm that our
reinterpretation of the primal-dual method in terms of probabilistic
constraints is effective, and that our proposed value estimate accelerates
convergence to a safe joint policy.
###### keywords:
Multiagent reinforcement learning, primal-dual methods, chance constraints,
conditional value at risk
## 1 Introduction
As reinforcement learning (RL) algorithms progress from virtual to cyber-
physical applications, it will be necessary to address the challenges of
safety, especially when systems are controlled by multiple agents. Examples of
multiagent safety-critical systems include power grids (Cui et al., 2022),
building energy management (BEM) systems (Biagioni et al., 2022), autonomous
vehicle navigation (Zhou et al., 2022), and drone swarms (Chen et al., 2020).
In each of these applications, agents must learn to operate in a complicated
environment while satisfying various local and system-wide constraints. Such
constraints, derived from domain-specific knowledge, are designed to prevent
damage to equipment, humans, or infrastructure or to preclude failure to
complete some task or objective.
Constrained multiagent reinforcement learning (C-MARL) poses challenges beyond
the single-agent constrained reinforcement learning (C-RL) problem because the
interactions between agents can influence both the satisfaction of constraints
and the convergence of policies. The potential scale of C-MARL problems
eliminates the possibility of directly using common model-based methods for
C-RL, such as in Chen et al. (2018); Ma et al. (2021); Tabas and Zhang (2022).
The main strategy for tackling C-MARL problems found in the literature is the
Lagrangian or primal-dual method (see, e.g. Lu et al. (2021); Li et al.
(2020); Lee et al. (2018); Parnika et al. (2021) and the references therein).
Our aim is to understand some potential drawbacks of this approach and some
ways these drawbacks can be mitigated.
In the primal-dual approach to C-MARL, each agent receives a reward signal
that is augmented with a penalty term designed to incentivize constraint
satisfaction. The magnitude of the penalty term is tuned to steer policies
away from constraint violations while not unnecessarily overshadowing the
original reward. Although this approach has been shown to converge to a safe
joint policy under certain assumptions (Lu et al., 2021), it changes the
structure of the problem in a way that is not well understood, leading to two
challenges.
The first challenge is that the primal-dual algorithm only enforces
_discounted sum constraints_ derived from the original safety constraints of
the system. As we will show, discounted sum constraints guarantee safety only
in expectation, which is difficult to interpret. We propose simple
modifications to the penalty term that enable the enforcement of more
interpretable constraints, namely, chance constraints (Mesbah, 2016) and
conditional value at risk constraints (Rockafellar and Uryasev, 2000),
providing bounds on the probability and the severity of future constraint
violations. There have been several C-RL algorithms that work with risk
sensitivities (García and Fernández, 2015; Chow et al., 2018), but the
multiagent context is less studied, and our contributions provide a novel
understanding of the safety guarantees provided by C-MARL algorithms.
Figure 1: BEM with a voltage constraint at the point of common coupling.
The second challenge is the fact that the reward is constantly changing as the
dual variables are updated, which diminishes the accuracy of value estimates.
We quantify this loss of accuracy, and we propose a new value estimation
procedure to overcome it. Our proposal builds on results in Tessler et al.
(2019) showing the affine relationship between the value function and the dual
variables. We develop a novel class of temporal difference algorithms for
value function estimation that directly exploits this observation, giving rise
to a value estimate that maintains an accurate derivative with respect to the
dual variables. Compared to existing algorithms, our estimates are much more
robust to dual variable updates.
The specific C-MARL formulation we study in this paper is inspired by the BEM
problem (Molina-Solana et al., 2017; Biagioni et al., 2022), illustrated in
Figure 1. The main objective of BEM is to control a building’s resources to
minimize the cost of energy consumption while maintaining comfort and
convenience for the occupants. However, when BEMs are deployed in multiple
buildings, it is critical to ensure that the power network connecting them is
safely operated because the uncoordinated control of buildings can cause
network-level voltage or power flow violations. This mandates a level of
coordination among agents in the learning stage; thus, we adopt the commonly-
studied centralized training/decentralized execution (CTDE) framework (Lowe et
al., 2017; Foerster et al., 2018), in which a simulator or coordinator
provides global state information, constraint evaluations, and Lagrange
multipliers (dual variables) to each agent during training. During the testing
(execution) phase, we assume that there is no real-time communication between
the agents. This stems from the need for privacy and the lack of communication
infrastructure in practical systems (even in buildings with advanced metering
infrastructure or smart meters, information is typically exchanged with the
utility only a few times a day).
The rest of the paper is organized as follows. In Section 2, we formulate the
problem under consideration. In Section 3, we provide an overview of our main
interpretive tool, the occupation measure (Borkar and Bhatt, 1996). In Section
4, we use the occupation measure to reformulate discounted sum constraints as
probabilistic constraints. In Section 5, we study the value structure of the
primal-dual problem and use the results to propose a new value estimation
algorithm. In Section 6, we provide some simulation results affirming the
contribution of the theoretical observations.
### 1.1 Notation
The natural numbers and the real numbers are denoted $\mathbb{N}$ and
$\mathbb{R}$, respectively. Given a measurable set $\mathcal{S}$, the set of
all possible probability densities over $\mathcal{S}$ is denoted as
$\Delta_{\mathcal{S}}$. For any discount factor $\gamma\in(0,1)$ and any
sequence $\\{y_{t}\\}_{t=0}^{T}$, the discounted sum operator is
$\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}[y_{t}\mid\gamma]=(1-\gamma)\sum_{t=0}^{T}\gamma^{t}y_{t}$,
and
$\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{\infty}[y_{t}\mid\gamma]=\lim_{T\rightarrow\infty}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}[y_{t}\mid\gamma]$
if the limit exists. We often drop the second argument $\gamma$ for brevity.
The positive component operator is $[y]_{+}=\max\\{y,0\\}$, and the logical
indicator function $I[\cdot]$ maps $\\{\text{True},\text{False}\\}$ to
$\\{1,0\\}$.
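The normalized $\Gamma$-sum operator defined above can be evaluated for a finite sequence as follows (a trivial sketch; the function name is ours). Note the $(1-\gamma)$ normalization, so a constant sequence of ones sums to $1-\gamma^{T+1}$, approaching 1 as $T\rightarrow\infty$:

```python
def discounted_sum(ys, gamma):
    """Normalized discounted sum (1 - gamma) * sum_t gamma^t * y_t,
    matching the Gamma-sum operator in the notation section."""
    return (1.0 - gamma) * sum(gamma ** t * y for t, y in enumerate(ys))
```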
## 2 Problem formulation
### 2.1 Constrained MARL
We consider a noncooperative setting in which $n$ agents pursue individual
objectives while subject to global constraints (e.g., a shared resource
constraint). We assume there is no real-time communication, and that each
agent’s action is based only on its local observations. However, policy
updates can use global information under the CTDE framework (Lowe et al.,
2017; Foerster et al., 2018). In this paper, we consider the case of
continuous state and action spaces.
The setting is described by the tuple
$(\\{\mathcal{X}_{i}\\}_{i\in\mathcal{N}},\\{\mathcal{U}_{i}\\}_{i\in\mathcal{N}},\\{R_{i}\\}_{i\in\mathcal{N}},f,C,p_{0},\gamma)$,
where $\mathcal{N}$ is the index set of agents,
$\mathcal{X}_{i}\subset\mathbb{R}^{n_{x}^{i}}$ and
$\mathcal{U}_{i}\subset\mathbb{R}^{n_{u}^{i}}$ are the state and action spaces
of agent $i$, and
$R_{i}:\mathcal{X}_{i}\times\mathcal{U}_{i}\rightarrow\mathbb{R}$ is the
reward function of agent $i$. We assume that the sets $\mathcal{X}_{i}$ and
$\mathcal{U}_{i}$ are compact for all $i$. Let
$\mathcal{X}=\prod_{i\in\mathcal{N}}\mathcal{X}_{i}$ and
$\mathcal{U}=\prod_{i\in\mathcal{N}}\mathcal{U}_{i}$ be the joint state and
action spaces of the system, respectively. Then
$f:\mathcal{X}\times\mathcal{U}\rightarrow\Delta_{\mathcal{X}}$ describes the
state transition probabilities, i.e., $f(\cdot\mid x,u)$ is a probability
density function. The function $C:\mathcal{X}\rightarrow\mathbb{R}^{m}$ is
used to describe a set of safe states, $\mathcal{S}=\\{x\in\mathcal{X}\mid
C(x)\leq 0\\}.$
Let $p_{0}\in\Delta_{\mathcal{X}}$ denote the initial state probability
density and $\gamma\in(0,1)$ be a discount factor. At time $t$, the state,
action, and reward of agent $i$ are $x^{i}_{t}$, $u^{i}_{t},$ and $r^{i}_{t}$,
respectively, and constraint $j$ evaluates to $c_{t}^{j}=C^{j}(x_{t})$. Using
a quantity without a superscript to represent a stacked vector ranging over
all $i\in\mathcal{N}$ or all $j\in\\{1,\ldots,m\\}$, a system trajectory is
denoted $\tau=\\{(x_{t},u_{t},r_{t},c_{t})\\}_{t=0}^{\infty}$.
In the noncooperative C-MARL framework, each agent seeks to learn a policy
$\pi_{i}:\mathcal{X}_{i}\rightarrow\Delta_{\mathcal{U}_{i}}$ that maximizes
the expected discounted accumulation of individual rewards. We let
$\pi:\mathcal{X}\rightarrow\Delta_{\mathcal{U}}$ denote the joint policy, and
$f^{\pi}:\mathcal{X}\rightarrow\Delta_{\mathcal{X}}$ is the state transition
probability induced by a joint policy $\pi$. The tuple $(p_{0},f,\pi)$ induces
a state visitation probability density at each time step,
$p_{t}^{\pi}(x)=\int_{\mathcal{X}^{t}}p_{0}(x_{0})\cdot\prod_{k=1}^{t}f^{\pi}(x_{k}\mid
x_{k-1})\ dx_{0}\ \ldots\ dx_{k-1}$, and we say
$p_{\infty}^{\pi}(x)=\lim_{t\rightarrow\infty}p_{t}^{\pi}(x)$ for each
$x\in\mathcal{X}$ if the limit exists. The collection of visitation
probabilities $\\{p_{t}^{\pi}\\}_{t=0}^{\infty}$ gives rise to a probability
density of trajectories $\tau$, denoted
$\mathcal{M}\in\Delta_{\prod_{t=0}^{\infty}(\mathcal{X}\times\mathcal{A}\times\mathbb{R}^{n}\times\mathbb{R}^{m})}$;
thus, the objective of each agent can be stated precisely as maximizing
$\mathbb{E}_{\tau\sim\mathcal{M}}[\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{\infty}r_{t}^{i}]$.
The agents, however, must settle on a joint policy that keeps the system in
the safe set $\mathcal{S}$. Due to the stochastic nature of the system,
satisfying this constraint at all times is too difficult and in some cases too
conservative. A common relaxation procedure is to formulate an augmented
reward $\tilde{r}_{t}^{i}=r_{t}^{i}-\lambda^{T}c_{t}$ where
$\lambda\in\mathbb{R}^{m}_{+}$, the _Lagrange multiplier_ or _dual variable_ ,
is adjusted to incentivize constraint satisfaction. This leads to the primal-
dual algorithm for C-MARL, discussed in the next section. The following mild
assumption facilitates the analysis.
###### Assumption
$R^{i}$, $C^{j}$, and $p_{t}^{\pi}$ are bounded on $\mathcal{X}$ for all
$i\in\mathcal{N}$, all $j\in\\{1,\ldots,m\\}$, and all $t\in\mathbb{N}$.
The boundedness of $R^{i}$ and $C^{j}$ is a common assumption (Lu et al.,
2021; Tessler et al., 2019; Paternain et al., 2019) that we will use to
exchange the order of limits, sums, and integrals using the dominated
convergence theorem. The assumption of bounded $p_{t}^{\pi}$ is not strictly
necessary and does not change the results; however, we use it throughout the
paper to simplify calculations.
### 2.2 Primal-dual algorithms
The augmented reward function leads to the following min-max optimization
problem for agent $i$:
$\displaystyle\min_{\lambda\geq
0}\max_{\pi_{i}}\mathbb{E}_{\tau\sim\mathcal{M}}\big{[}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{\infty}[r_{t}^{i}-\lambda^{T}c_{t}]\big{]}$
(1) $\displaystyle=$ $\displaystyle\min_{\lambda\geq
0}\max_{\pi_{i}}\bigg{(}\mathbb{E}_{\tau\sim\mathcal{M}}\big{[}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{\infty}[r_{t}^{i}]\big{]}-\lambda^{T}\mathbb{E}_{\tau\sim\mathcal{M}}\big{[}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{\infty}[c_{t}]\big{]}\bigg{)}$
(2)
where (2) uses absolute convergence (stemming from Assumption 2.1) to
rearrange the terms of the infinite sum. Note that the minimization over
$\lambda$ is coupled across agents. Any fixed point of (2) will satisfy
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}c_{t}]\leq 0$ because if
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}c_{t}^{j}]\neq 0$, then the objective value can be reduced by increasing or decreasing $\lambda_{j}$, unless
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}c_{t}^{j}]<0$
and $\lambda_{j}=0$. In other words, the primal-dual method enforces a
_discounted sum constraint_ derived from the safe set $\mathcal{S}$. Although
discounted sum constraints are convenient, it is not obvious what they imply
about safety guarantees with respect to the original constraints. We begin our
investigation of discounted sum constraints by taking a closer look at a state
visitation probability density known as the occupation measure.
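The dual half of the primal-dual scheme described above amounts to projected gradient ascent on the multipliers. A minimal NumPy sketch follows; the step size and the Monte Carlo cost estimate are illustrative inputs, not quantities taken from the paper.

```python
import numpy as np

def dual_update(lam, discounted_cost_estimate, step_size=0.1):
    """One projected gradient step on the dual variables.

    lam: current dual variables, shape (m,).
    discounted_cost_estimate: Monte Carlo estimate of the expected
        discounted constraint return, shape (m,).
    Projecting onto lam >= 0 keeps the multipliers feasible.
    """
    return np.maximum(lam + step_size * discounted_cost_estimate, 0.0)

# If a constraint is violated on average (positive discounted cost),
# its multiplier grows, strengthening the penalty -lam^T c_t; a
# satisfied constraint lets its multiplier shrink toward zero.
lam = np.zeros(2)
lam = dual_update(lam, np.array([0.5, -0.3]))
```

At a fixed point, each multiplier has either driven its constraint's discounted cost to zero or has itself been projected to zero, matching the discussion above.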
## 3 Occupation measure
The occupation measure describes the average behavior of a Markov process in
a sense that will be made precise shortly. As we will show, the occupation
measure is instrumental in clarifying the role of discounted sum constraints.
In this paper, we use a definition common for continuous-state, infinite-
horizon discounted MDPs (Paternain et al., 2019; Silver et al., 2014).
###### Definition 3.1 (Occupation measure).
The occupation measure $\mu^{\pi}_{\gamma}\in\Delta_{\mathcal{X}}$ associated
with discount factor $\gamma$, induced by a joint policy $\pi$, is defined for
any $x\in\mathcal{X}$ as
$\mu^{\pi}_{\gamma}(x)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}p_{t}^{\pi}(x).$
In this section, we provide some interpretations for the occupation measure
before using it to ascribe meaning to discounted sum constraints. The first
question one might ask is whether $\mu^{\pi}_{\gamma}$ is itself a pdf. It is,
of course, nonnegative, and the following proposition shows it integrates to
unity under mild conditions.
###### Proposition 3.2.
Under Assumption 2.1, $\int_{\mathcal{X}}\mu^{\pi}_{\gamma}(x)dx=1$.
The proof for Proposition 3.2 is in Appendix A (the full paper with appendix
is available at https://arxiv.org/abs/2211.16069). What does
$\mu^{\pi}_{\gamma}$ tell us about the behavior of a system under a given
policy? It describes the probability of visiting a certain state but with more
weight placed on states that are likely to be visited earlier in time. In
fact, $\mu^{\pi}_{\gamma}$ describes the near-term behavior in the following
sense.
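For intuition, the definition can be evaluated directly on a finite-state analogue. The two-state chain below is an illustrative stand-in (not from the paper); under the normalized discounted-sum convention implied by Proposition 3.2, the truncated sum approximates $\mu^{\pi}_{\gamma}$, integrates to one, and interpolates between the initial and long-run distributions.

```python
import numpy as np

# Two-state Markov chain: transition matrix P (rows sum to 1) and
# initial distribution p0; stationary distribution is (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([1.0, 0.0])

def occupation_measure(p0, P, gamma, horizon=5000):
    """mu_gamma = (1 - gamma) * sum_t gamma^t p_t, truncated at `horizon`."""
    mu = np.zeros_like(p0)
    p_t = p0.copy()
    for t in range(horizon):
        mu += (1 - gamma) * gamma**t * p_t
        p_t = p_t @ P  # propagate the state distribution one step
    return mu

mu_small = occupation_measure(p0, P, gamma=0.01)  # close to p0
mu_large = occupation_measure(p0, P, gamma=0.99)  # close to stationary dist.
```

Shrinking $\gamma$ concentrates the measure on $p_{0}$ and growing $\gamma$ pushes it toward the stationary distribution, as Proposition 3.3 states.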
###### Proposition 3.3.
Under Assumption 2.1, for any $x\in\mathcal{X}$, the following statements
hold:
1. 1.
$\lim_{\gamma\rightarrow 0^{+}}\mu^{\pi}_{\gamma}(x)=p_{0}(x).$
2. 2.
$\lim_{\gamma\rightarrow 1^{-}}\mu^{\pi}_{\gamma}(x)=\lim_{t\rightarrow\infty}p_{t}^{\pi}(x)$ if the
latter limit exists.
The proof for Proposition 3.3 is in Appendix A. Figure 2 provides an
illustration of the result in Proposition 3.3 when $p_{t}^{\pi}$ evolves as a
normal distribution with mean $0.95^{t}$ and constant variance. The point at
which $\mu^{\pi}_{\gamma}$ equally resembles $p_{0}$ and $p_{\infty}^{\pi}$ is
exactly at $\gamma=0.95$.
Figure 2: Example of the occupation measure for various levels of $\gamma$.
According to Proposition 3.3, the occupation measure describes a state
distribution that lies between the initial and long-term behavior of the
system. But where exactly does it lie in between these two extremes? The
effective horizon of a discounted planning problem is often set to
$T_{1}(\gamma)=\frac{1}{1-\gamma}$, which is the expected termination time if
the probability of an episode terminating at any given time step is
$(1-\gamma)$ (Paternain et al., 2022); however, the concept of a random
stopping time might not be sensible in all applications. Another way to define
the effective horizon is to study the geometric accumulation of weights. In
this case, the effective horizon can be measured as
$T_{2}(\gamma,\varepsilon)=\min\\{K\in\mathbb{N}:(1-\gamma)\sum_{t=0}^{K-1}\gamma^{t}\geq 1-\varepsilon\\}$, where $\varepsilon\in(0,1)$ is a tolerance. Using either of
these two definitions, the occupation measure can be said to describe the
behavior of the system from the start time up to the effective horizon.
Specifically, one may truncate the sum in Definition 3.1 at the effective
horizon to obtain a conceptual understanding of what the occupation measure
describes.
Depending on the application, either $T_{1}$ or $T_{2}$ can provide a more
sensible connection between discounted and finite-horizon problems. But are
these two definitions related? The next proposition answers this affirmatively
by showing that $T_{1}$ is actually a special case of $T_{2}$.
###### Proposition 3.4.
$T_{1}(\gamma)=T_{2}(\gamma,\varepsilon)$ when $\varepsilon$ is set to
$\gamma^{\frac{1}{1-\gamma}}\approx\frac{1}{e}$.
The proof for Proposition 3.4 is in Appendix A. Proposition 3.4 is illustrated
in Figure 3, where the effective horizon is plotted as a function of $\gamma$
for three different values of $\varepsilon$. With an understanding of the
occupation measure as a visitation density describing behavior up to the
effective horizon, we can begin to derive meaningful risk-related
interpretations of discounted sum constraints. These interpretations lead
directly to sensible recommendations for the design of C-MARL algorithms.
Figure 3: Effective horizon length as a function of $\gamma$.
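The two effective-horizon definitions above are easy to compute. The sketch below uses the $1/e$ approximation of $\varepsilon$ from Proposition 3.4 (the exact value is $\gamma^{1/(1-\gamma)}$) to check that $T_{1}$ and $T_{2}$ agree for $\gamma=0.95$.

```python
import math

def T1(gamma):
    """Expected termination time if an episode ends w.p. (1 - gamma) each step."""
    return 1.0 / (1.0 - gamma)

def T2(gamma, eps):
    """Smallest K with (1 - gamma) * sum_{t<K} gamma^t = 1 - gamma^K >= 1 - eps."""
    return math.ceil(math.log(eps) / math.log(gamma))

# Proposition 3.4: T1(gamma) = T2(gamma, eps) at eps = gamma^(1/(1-gamma)),
# which is approximately 1/e; both horizons equal 20 steps for gamma = 0.95.
gamma = 0.95
K = T2(gamma, 1.0 / math.e)
```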
## 4 Discounted risk metrics
The discounted sum constraint can naturally be reinterpreted as a certain type
of average constraint. In particular, Assumption 2.1 ensures the equivalence
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}C(x_{t})]=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[C(x)]$
(Paternain et al., 2019). This near-term average does not relate to any well-
known risk metrics and hence does not provide a practical safety guarantee. In
general, information about the mean of a distribution cannot be used to infer
information about its tails; however, simple changes to the penalty function
can yield information about either the probability of incurring a constraint
violation or the expected severity of constraint violations.
###### Proposition 4.1 (Near-term probability of constraint violations).
Suppose that for some $\delta_{j}\in[0,1]$ and $\alpha_{j}\in\mathbb{R}$, we
have
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}I[C^{j}(x_{t})\geq\alpha_{j}]]\leq\delta_{j}$.
Then under Assumption 2.1, $\textup{Pr}\\{C^{j}(x)\geq\alpha_{j}\mid
x\sim\mu^{\pi}_{\gamma}\\}\leq\delta_{j}.$
###### Proof 4.2.
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}I[C^{j}(x_{t})\geq\alpha_{j}]]=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[I[C^{j}(x)\geq\alpha_{j}]]=\textup{Pr}\\{C^{j}(x)\geq\alpha_{j}\mid
x\sim\mu^{\pi}_{\gamma}\\}$. The first equality uses Assumption 2.1 to apply
an equivalence established in e.g. Paternain et al. (2019). The second
equality follows from the definition of expectation.
Proposition 4.1 makes it easy to enforce chance constraints using primal-dual
methods. When the penalty term $C^{j}(x)$ is replaced by the quantity
$I[C^{j}(x)\geq\alpha_{j}]-\delta_{j}$, the primal-dual algorithm enforces
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}I[C^{j}(x_{t})\geq\alpha_{j}]]-\delta_{j}\leq 0$. By Proposition 4.1, this guarantees that
$\textup{Pr}\\{C^{j}(x)\geq\alpha_{j}\mid
x\sim\mu^{\pi}_{\gamma}\\}\leq\delta_{j}$. Because the probability of
constraint violations is defined with $x$ varying over $\mu^{\pi}_{\gamma}$,
we call the resulting guarantee a near-term or discounted chance constraint.
This can be repeated for each $j\in\\{1,\ldots,m\\}$, providing a set of
bounds on the probability of violating _each_ constraint by more than its
tolerance $\alpha_{j}$. On the other hand, we can control the probability of
violating any constraint as follows. Define the statement $C(x)\geq\alpha$ to
be true if $C^{j}(x)\geq\alpha_{j}$ for at least one $j\in\\{1,\ldots,m\\},$ and false
otherwise. Then, applying Proposition 4.1 to the test condition
$C(x)\geq\alpha$ yields a bound on $\textup{Pr}\\{C(x)\geq\alpha\mid
x\sim\mu^{\pi}_{\gamma}\\}$.
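The penalty substitution behind Proposition 4.1 is straightforward in code. The Monte Carlo sketch below uses synthetic Gaussian costs as an illustrative stand-in for $C(x)$ with $x\sim\mu^{\pi}_{\gamma}$: a nonpositive average modified penalty is exactly the statement that the empirical violation probability is at most $\delta$.

```python
import numpy as np

def chance_penalty(c_value, alpha, delta):
    """Modified penalty I[C(x) >= alpha] - delta from Proposition 4.1."""
    return float(c_value >= alpha) - delta

# Synthetic costs standing in for C(x), x ~ mu (an assumption for
# illustration only). The identity avg_penalty = violation_prob - delta
# holds by construction, so avg_penalty <= 0 implies the chance bound.
rng = np.random.default_rng(0)
costs = rng.normal(loc=-1.0, scale=0.5, size=10_000)
alpha, delta = 0.1, 0.1
avg_penalty = np.mean([chance_penalty(c, alpha, delta) for c in costs])
violation_prob = np.mean(costs >= alpha)
```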
While discounted chance constraints enable one to control the probability of
extreme events in the near future, conditional value at risk constraints
(Rockafellar and Uryasev, 2000) afford control over the severity of such
events.
###### Definition 4.3 (Rockafellar and Uryasev (2000)).
Given a risk level $\beta\in(0,1)$, a cost
$h:\mathcal{X}\rightarrow\mathbb{R}$, and a probability density $\mu$ on
$\mathcal{X}$, the value at risk (VaR) and conditional value at risk (CVaR)
are defined as:
$\displaystyle\textup{VaR}(\beta,h,\mu)=\min\\{\alpha\in\mathbb{R}:\textup{Pr}\\{h(x)\leq\alpha\mid
x\sim\mu\\}\geq\beta\\},$
$\displaystyle\textup{CVaR}(\beta,h,\mu)=\frac{1}{1-\beta}\int_{h(x)\geq\textup{VaR}(\beta,h,\mu)}h(x)\mu(x)dx.$
In other words, $\textup{VaR}(\beta,h,\mu)$ is the least upper bound on $h$
that can be satisfied with probability $\beta$, while
$\textup{CVaR}(\beta,h,\mu)$ describes the expected value in the VaR-tail of
the distribution of $h$. CVaR characterizes the expected severity of extreme
events, which can be defined precisely as the $(1-\beta)$ fraction of events
$x$ with the worst outcomes as ranked by the cost incurred, $h(x)$. The VaR
and CVaR for $h(x)=x,$ when $x$ follows a standard normal distribution, are
illustrated in Figure 4, where the shaded region has an area of $(1-\beta)$.
For the rest of the paper, we assume that the cdf of $h(x)$ is continuous when
$x\sim\mu$. For further details and for cases in which this assumption does
not hold, we refer the reader to Rockafellar and Uryasev (2002).
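Definition 4.3 can also be checked empirically. The helper below (an illustrative estimator, not from the paper) computes sample VaR and CVaR; for $h(x)=x$ with $x$ uniform on $[0,1]$, the closed forms at $\beta=0.9$ are $0.9$ and $0.95$, the mean of the worst $10\%$ of outcomes.

```python
import numpy as np

def var_cvar(samples, beta):
    """Empirical VaR and CVaR at level beta for cost samples h(x), x ~ mu."""
    samples = np.sort(np.asarray(samples))
    var = np.quantile(samples, beta)       # least upper bound met w.p. beta
    tail = samples[samples >= var]         # the (1 - beta) worst outcomes
    return var, tail.mean()

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=100_000)    # stand-in cost samples
var, cvar = var_cvar(x, beta=0.9)
```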
###### Proposition 4.4 (Near-term CVaR).
For any $\alpha_{j}\geq 0$, suppose that
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}[C^{j}(x_{t})-\alpha_{j}]_{+}]\leq\delta_{j}$.
Then,
$\textup{CVaR}(\beta,C^{j},\mu^{\pi}_{\gamma})\leq\alpha_{j}+(1-\beta)^{-1}\delta_{j}.$
###### Proof 4.5.
Under Assumption 2.1, the identity
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}[C^{j}(x_{t})-\alpha_{j}]_{+}]=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[[C^{j}(x)-\alpha_{j}]_{+}]$
holds (Paternain et al., 2019). Next, we use the fact that the CVaR is the
minimum value of the convex function in $\alpha_{j}$ given by
$F(\alpha_{j}\mid\beta,C^{j},\mu^{\pi}_{\gamma}):=\alpha_{j}+(1-\beta)^{-1}\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[[C^{j}(x)-\alpha_{j}]_{+}]$
(Rockafellar and Uryasev, 2000); thus, $F$ provides an upper bound on CVaR.
Some rearranging leads to the result.
Figure 4: Example of VaR and CVaR at risk level $\beta=0.9$.
Similar to the chance-constrained case, Proposition 4.4 makes it easy to
enforce CVaR constraints in the primal-dual algorithm. Here, the penalty term
used is $[C^{j}(x)-\alpha_{j}]_{+}-\delta_{j}$. Using this penalty, the
algorithm enforces
$\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}[C^{j}(x_{t})-\alpha_{j}]_{+}]-\delta_{j}\leq 0$, which by Proposition 4.4 implies
$\textup{CVaR}(\beta,C^{j},\mu^{\pi}_{\gamma})\leq\alpha_{j}+(1-\beta)^{-1}\delta_{j}.$
By repeating for each $j\in\\{1,\ldots,m\\}$, we can bound the expected
severity of the constraint violations for each of the $m$ constraints. Because
the CVaR constraint is defined with $x$ varying over $\mu^{\pi}_{\gamma}$, the
resulting guarantee is called a near-term or discounted CVaR constraint.
To obtain a tight bound on the CVaR, $\alpha_{j}$ must be set to
$\textup{VaR}(\beta,C^{j},\mu^{\pi}_{\gamma})$, which minimizes the function
$F$ introduced in the proof of Proposition 4.4 (Rockafellar and Uryasev,
2000). Unfortunately, the VaR is not known ahead of time. Chow et al. (2018)
include $\alpha_{j}$ as an optimization variable in the learning procedure,
but extending their technique to the multiagent setting is not
straightforward. Our approach is to include it as a tunable hyperparameter.
Simulation results in Section 6 show that it is easy to choose $\alpha_{j}$ to
give a nearly tight bound.
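The heuristic above relies on the bound $F(\alpha)$ from the proof of Proposition 4.4 being minimized at $\alpha=\textup{VaR}$. A quick numerical check, with a standard normal as an illustrative stand-in for the cost distribution, shows both the tightness at that point and the validity of the bound elsewhere.

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 0.9
c = rng.normal(size=200_000)  # stand-in for C(x), x ~ mu

def F(alpha):
    """Rockafellar-Uryasev bound: alpha + E[(C - alpha)_+] / (1 - beta)."""
    return alpha + np.mean(np.maximum(c - alpha, 0.0)) / (1.0 - beta)

var = np.quantile(c, beta)    # the alpha that makes the bound tight
cvar = c[c >= var].mean()     # empirical CVaR at level beta
bound_tight = F(var)          # approximately equal to cvar
bound_loose = F(0.0)          # any other alpha still upper-bounds cvar
```

This is why small errors in the chosen $\alpha$ only loosen the bound slightly: $F$ is convex in $\alpha$ with its minimum at the VaR.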
## 5 Primal-dual value functions
In this section, we investigate challenges with value estimation in the
primal-dual regime. The fact that the reward to each agent is constantly
changing (due to dual variable updates) makes it difficult to accurately
estimate state values. To quantify this decrease in accuracy, we introduce the
value functions induced by the joint policy $\pi$,
$\\{V_{\pi}^{i}:\mathcal{X}\times\mathbb{R}\rightarrow\mathbb{R}\\}_{i\in\mathcal{N}},\
\\{V_{R,\pi}^{i}:\mathcal{X}\rightarrow\mathbb{R}\\}_{i\in\mathcal{N}},\
V_{C,\pi}:\mathcal{X}\rightarrow\mathbb{R}^{m}$ where:
$\displaystyle
V^{i}_{\pi}(x,\lambda)=\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}[r_{t}^{i}-\lambda^{T}c_{t}]\mid
x_{0}=x],$ (3) $\displaystyle
V^{i}_{R,\pi}(x)=\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}r_{t}^{i}\mid
x_{0}=x],\quad
V_{C,\pi}(x)=\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}c_{t}\mid
x_{0}=x].$ (4)
Note that $c_{t}$ could be modified as indicated in Section 4, and the
following results would hold for the modified penalty function.
Obviously, it is impossible to learn an accurate value function when $\lambda$
is unknown and changing; however, simply making $\lambda$ available to a value
function approximator does not guarantee good generalization beyond previously
seen values of $\lambda$. Having a good estimate of the _derivative_ of the
value function with respect to $\lambda$ will ensure accuracy under small
perturbations to the dual variables. Fortunately, this derivative is easy to
obtain. Formally, under Assumption 2.1, we can write
$V_{\pi}^{i}(x,\lambda)=V_{R,\pi}^{i}(x)-\lambda^{T}V_{C,\pi}(x)$ (Tessler et
al., 2019), and therefore,
$\nabla_{\lambda}V_{\pi}^{i}(x,\lambda)=-V_{C,\pi}(x)$. By learning
$V_{R,\pi}^{i}$ and $V_{C,\pi}$ as separate functions and then combining them
using the true value of $\lambda$, we can construct a value estimate whose
derivative with respect to the dual variables is as accurate as our estimate
of $V_{C,\pi}$ itself. This estimate will be more robust to small changes in
$\lambda$. We will refer to this type of value estimate as a _structured value
function_ or a _structured critic_.
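A structured critic is straightforward to assemble from two separately trained approximators. In the sketch below the component value functions are hypothetical hand-picked callables used purely to illustrate the decomposition $V(x,\lambda)=V_{R}(x)-\lambda^{T}V_{C}(x)$ and its free dual-variable gradient.

```python
import numpy as np

class StructuredCritic:
    """Structured value estimate V(x, lam) = V_R(x) - lam^T V_C(x).

    v_r and v_c are any approximators trained on the reward and
    constraint signals separately; the gradient with respect to the
    dual variables, -V_C(x), then comes for free.
    """
    def __init__(self, v_r, v_c):
        self.v_r = v_r  # state -> scalar
        self.v_c = v_c  # state -> vector in R^m

    def value(self, x, lam):
        return self.v_r(x) - lam @ self.v_c(x)

    def grad_lambda(self, x):
        return -self.v_c(x)

# Toy check with hand-picked (hypothetical) value functions.
critic = StructuredCritic(v_r=lambda x: x**2,
                          v_c=lambda x: np.array([x, 2 * x]))
lam = np.array([0.5, 1.0])
v = critic.value(2.0, lam)
```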
###### Proposition 5.1.
Let $\bar{c}=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[C(x)]$ and
$\Sigma_{C}^{2}=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[(\bar{c}-C(x))(\bar{c}-C(x))^{T}]$.
Suppose $\lambda$ is randomly varying with mean $\bar{\lambda}$ and variance
$\Sigma_{\lambda}^{2}$. Using a structured value function approximator can
reduce the mean square temporal difference error by up to
$\textup{Tr}[\Sigma_{\lambda}^{2}\cdot(\Sigma_{C}^{2}+\bar{c}\bar{c}^{T})]$.
Figure 5: Temporal difference error trajectories in a simple policy evaluation
task.
The proof of Proposition 5.1 is in Appendix A. Figure 5 illustrates
Proposition 5.1 in a simple value estimation task with quadratic rewards,
linear dynamics and policies, linear state constraints, and randomly varying
$\lambda$. The generic critic (GC) is a value function modeled as a quadratic
function of the state only. The input-augmented critic (IAC) is a value
function modeled as an unknown quadratic function of the state and dual
variables, while the structured critic (SC) is modeled using
$\hat{V}_{\pi}^{i}=\hat{V}_{R,\pi}^{i}-\lambda^{T}\hat{V}_{C,\pi}$ with
quadratic $\hat{V}_{R,\pi}^{i}$ and linear $\hat{V}_{C,\pi}$ trained on their
respective signals.
The dashed line in Figure 5 is at the value
$\textup{Tr}[\Sigma_{\lambda}^{2}\cdot(\Sigma_{C}^{2}+\bar{c}\bar{c}^{T})]$
predicted in Proposition 5.1. In this simple value estimation task, high
accuracy can be achieved when conditioning on the randomly varying $\lambda$;
however, having an accurate estimate of $\nabla_{\lambda}V_{\pi}^{i}$ by using
a structured critic is also shown to help. Although in practice
$\bar{\lambda}$ and $\Sigma_{\lambda}^{2}$ change over time, the simulation
results in Section 6 confirm that using structured critics improves
performance. The loss function for value function approximation is therefore
given by:
$\displaystyle
TDE(x,x^{\prime})=[R^{i}(x^{i})+\gamma\hat{V}_{R,\pi}^{i}((x^{i})^{\prime})-\hat{V}_{R,\pi}^{i}(x^{i})]^{2}+\|C(x)+\gamma\hat{V}_{C,\pi}(x^{\prime})-\hat{V}_{C,\pi}(x)\|_{2}^{2}$
(5)
where $x\in\mathcal{X}$ and $x^{\prime}\sim f^{\pi}(x)$. Equation (5) is
simply a sum of squared temporal difference errors over the set of $m+1$ value
functions. For algorithmic details, we refer the reader to Appendix B.
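The loss in Equation (5) translates directly into code. The function below is a sketch with illustrative critic callables; the names and calling convention are ours, not the paper's.

```python
import numpy as np

def structured_td_error(x, x_next, r_i, c, v_r, v_c, gamma=0.99):
    """Sum of squared TD errors over the m + 1 value functions, as in Eq. (5).

    v_r: reward critic (state -> scalar); v_c: constraint critic
    (state -> vector in R^m). r_i is agent i's reward at x, c the
    vector of constraint penalties at x.
    """
    td_r = r_i + gamma * v_r(x_next) - v_r(x)
    td_c = c + gamma * v_c(x_next) - v_c(x)
    return td_r**2 + np.sum(td_c**2)

# With perfect critics of a zero-reward, zero-cost process, the loss is 0.
err = structured_td_error(
    x=0.0, x_next=0.0, r_i=0.0, c=np.zeros(2),
    v_r=lambda x: 0.0, v_c=lambda x: np.zeros(2))
```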
## 6 Simulations
In our simulations, we sought to demonstrate the effectiveness of the penalty
modifications and structured critic proposed in Sections 4 and 5. We tested
our findings in a modified multiagent particle environment (Lowe et al.,
2017; code for the environments is available at
github.com/dtabas/multiagent-particle-envs) with two agents pursuing
individual objectives subject to a
constraint on the joint state. The state of each agent is its position and
velocity in $\mathbb{R}^{2}$, i.e.
$x^{i}=\begin{bmatrix}y^{iT}&v^{iT}\end{bmatrix}^{T}$ where
$y^{i}\in\mathbb{R}^{2}$ is the position and $v^{i}\in\mathbb{R}^{2}$ is the
velocity of agent $i$. The objective of each agent is to drive its position
$y^{i}$ to a landmark $y^{i*}\in\mathbb{R}^{2}$, while making sure that the
agent ensemble satisfies the safety constraint. The reward and constraint
functions are given by:
$\displaystyle R^{i}(y^{i})=-\xi_{i}\|y^{i}-y^{i*}\|_{2}^{2},\quad
C(y)=\textbf{1}^{T}y$ (6)
where $\xi_{i}>0$ is a constant and
$y=\begin{bmatrix}y^{1T}&y^{2T}\end{bmatrix}^{T}$ is the position of the agent
ensemble.
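The reward and constraint functions in Equation (6) can be written out directly; the specific positions below are illustrative test values, not from the paper.

```python
import numpy as np

def reward(y_i, y_star_i, xi_i=1.0):
    """Agent reward R^i(y^i) = -xi_i * ||y^i - y^i*||_2^2 from Eq. (6)."""
    return -xi_i * np.sum((y_i - y_star_i) ** 2)

def constraint(y):
    """Joint constraint C(y) = 1^T y on the stacked agent positions."""
    return np.sum(y)

# Two agents in R^2, so the stacked position y lives in R^4.
y = np.array([0.5, -0.2, 0.1, -0.6])
```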
Figure 6: $\textup{Pr}\\{C(x)\geq 0.1\mid x\sim\mu^{\pi}_{\gamma}\\}$ measured
throughout training. Key: SC = structured critic, MP = modified penalty (Prop.
4.1). Both modifications speed convergence to a safe policy. The shaded region
represents $\pm 1$ standard deviation across 5 training runs.
The landmark $y^{*}=\begin{bmatrix}y^{1*T}&y^{2*T}\end{bmatrix}^{T}$ is
stationed outside of the safe region $\mathcal{S}=\\{y\mid C(y)\leq 0\\}.$
Thus, the agents cannot both reach their goals while satisfying $C(y)\leq 0$.
To train the agents to interact in this environment, we used a modified
version of the EPyMARL codebase (Papoudakis et al., 2020; code for the
algorithms is available at github.com/dtabas/epymarl). We tested several MARL
algorithms, including MADDPG (Lowe et al., 2017), COMA (Foerster et al.,
2018), and MAA2C (Papoudakis et al., 2020). We decided to use the MAA2C
algorithm because it consistently produced the best results and because as a
value function-based algorithm, it provided the most straightforward route to
implementing the changes proposed in Section 5. Details of the algorithm,
pseudocode, hyperparameters, and supplementary simulation results are provided
in Appendix B.
For each risk metric described in Section 4, we tested the convergence of the
agents to a safe policy with and without modifications to the penalty and
value functions. Figure 6 shows the results when we make the substitution
$C(x)\leftarrow I[C(x)\geq\alpha]-\delta$ in the penalty function to enforce a
chance constraint, $\textup{Pr}\\{C(x)\geq\alpha\mid
x\sim\mu^{\pi}_{\gamma}\\}\leq\delta$ with $\alpha$ and $\delta$ each set to
$0.1$. The modified penalty function performs the best as a chance constraint-
enforcing signal (red and green lines in Figure 6). Whether or not the penalty
function is modified, the structured critic finds safer policies throughout
training (red vs. green and orange vs. blue lines).
Figure 7 shows the results when we make the substitution
$C(x)\leftarrow[C(x)-\alpha]_{+}-\delta$ in the penalty function to enforce
the constraint
$\textup{CVaR}(\beta,C,\mu^{\pi}_{\gamma})\leq\alpha+(1-\beta)^{-1}\delta$.
Using the modified penalty (red and green lines in Figure 7) drives the CVaR
upper bound (drawn in dashed lines) to the target value, and due to the choice
of $\alpha$, this bound is nearly tight. On the other hand, using the original
penalty results in an overly conservative policy that achieves low risk at the
expense of rewards (right panel). We also point out that when using the
modified penalty with the structured critic, the CVaR is lower throughout
training compared to when the generic critic is used, indicating improved
effectiveness in enforcing limits on risk.
We chose $\alpha$ using the following heuristic, to make the bound on CVaR
nearly tight. The “correct” value of $\alpha$ that would achieve a tight bound
is $\textup{VaR}(\beta,C,\mu^{\pi}_{\gamma})$. Moreover, the upper bound that
we used is convex and continuously differentiable in $\alpha$ (Rockafellar and
Uryasev, 2000); therefore, small errors in $\alpha$ will lead to small errors
in the upper bound on CVaR, and any approximation of the VaR will suffice. We
obtained an approximation simply by running the simulation once with $\alpha$
set to zero and computing $\textup{VaR}(\beta,C,\mu^{\pi}_{\gamma})$ over some
test trajectories. If necessary, the process could be repeated additional
times. Alternatively, $\alpha$ could be tuned adaptively by computing VaR
online, but the stability of such a procedure would need further
investigation.
Figure 7: $\textup{CVaR}(\beta=0.9,C,\mu^{\pi}_{\gamma})$ measured throughout
training. Key: SC = structured critic, MP = modified penalty (Prop. 4.4). The
dashed lines represent the CVaR upper bound used in Prop. 4.4. The panel on
the right shows progress toward the original objective through the total
original returns,
$\sum_{i=1}^{2}(1-\gamma)\sum_{t=0}^{T}\gamma^{t}r^{i}_{t},$
without penalty terms. The shaded region represents $\pm 1$ standard deviation
across 5 training runs. The rewards increase then decrease because the agents
first learn to navigate towards the landmark, which is outside the safe
region, then learn to back off to satisfy the constraint.
## 7 Conclusion
In this paper, we studied the effect of primal-dual algorithms on the
structure of C-MARL problems. First, we used the occupation measure to study
the effect of the penalty term on safety. We showed that using the constraint
function as the penalty enforces safety only in expectation, but by making
simple modifications to the penalty term, one may enforce meaningful
probabilistic safety guarantees, namely, chance and CVaR constraints. These
risk metrics are defined over the occupation measure, leading to notions of
safety in the near term. Next, we studied the effect of the penalty term on
the value function. When the dual variable and constraint evaluation signals
are available, it is easy to model the relationship between the penalty term
and the value function. By exploiting this structure, the accuracy of the
value function can be improved. We demonstrated the usefulness of both of
these insights in a constrained multiagent particle environment, showing that
convergence to a low-risk policy is accelerated. One open question is the
effect of primal-dual methods on game outcomes. Some agents might pay a higher
price than others for modifying their policies to satisfy system-wide
constraints. Understanding and mitigating this phenomenon will be the focus of
future work.
D. Tabas and A. S. Zamzam would like to thank Patrick Emami and Xiangyu Zhang
for helpful discussions in the early stages of this work, and Georgios
Papoudakis for advice on software implementation. This work is partially
supported by the National Science Foundation Graduate Research Fellowship
Program under Grant No. DGE-2140004. Any opinions, findings, conclusions, or
recommendations expressed in this material are those of the authors and do not
necessarily reflect the views of the National Science Foundation. This work
was authored in part by the National Renewable Energy Laboratory (NREL),
operated by Alliance for Sustainable Energy, LLC, for the U.S. Department of
Energy (DOE) under Contract No. DE-AC36-08GO28308. The work of A. S. Zamzam
was supported by the Laboratory Directed Research and Development (LDRD)
Program at NREL. The views expressed in the article do not necessarily
represent the views of the DOE or the U.S. Government. The U.S. Government
retains and the publisher, by accepting the article for publication,
acknowledges that the U.S. Government retains a nonexclusive, paid-up,
irrevocable, worldwide license to publish or reproduce the published form of
this work, or allow others to do so, for U.S. Government purposes.
## References
* Biagioni et al. (2022) David Biagioni, Xiangyu Zhang, Dylan Wald, Deepthi Vaidhynathan, Rohit Chintala, Jennifer King, and Ahmed S. Zamzam. PowerGridworld: A Framework for Multi-Agent Reinforcement Learning in Power Systems. _Proceedings of the 2022 13th ACM International Conference on Future Energy Systems_ , pages 565–570, 2022.
* Borkar and Bhatt (1996) Vivek S. Borkar and Abhay G. Bhatt. Occupation Measures for Controlled Markov Processes: Characterization and Optimality. _The Annals of Probability_ , 24(3):1531–1562, 1996.
* Chen et al. (2018) Steven Chen, Kelsey Saulnier, Nikolay Atanasov, Daniel D. Lee, Vijay Kumar, George J. Pappas, and Manfred Morari. Approximating Explicit Model Predictive Control Using Constrained Neural Networks. In _Proceedings of the American Control Conference_ , pages 1520–1527, 2018.
* Chen et al. (2020) Yu Jia Chen, Deng Kai Chang, and Cheng Zhang. Autonomous Tracking Using a Swarm of UAVs: A Constrained Multi-Agent Reinforcement Learning Approach. _IEEE Transactions on Vehicular Technology_ , 69(11):13702–13717, 2020.
* Chow et al. (2018) Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained reinforcement learning with percentile risk criteria. _Journal of Machine Learning Research_ , 18:1–51, 2018.
* Cui et al. (2022) Wenqi Cui, Jiayi Li, and Baosen Zhang. Decentralized safe reinforcement learning for inverter-based voltage control. _Electric Power Systems Research_ , 211(108609), 2022.
* Foerster et al. (2018) Jakob N. Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. Counterfactual multi-agent policy gradients. _32nd AAAI Conference on Artificial Intelligence_ , pages 2974–2982, 2018.
* García and Fernández (2015) Javier García and Fernando Fernández. A Comprehensive Survey on Safe Reinforcement Learning. _Journal of Machine Learning Research_ , 16:1437–1480, 2015.
* Lee et al. (2018) Donghwan Lee, Hyungjin Yoon, and Naira Hovakimyan. Primal-Dual Algorithm for Distributed Reinforcement Learning: Distributed GTD. In _Proceedings of the IEEE Conference on Decision and Control_ , pages 1967–1972, 2018.
* Li et al. (2020) Wenhao Li, Bo Jin, Xiangfeng Wang, Junchi Yan, and Hongyuan Zha. F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning. _arXiv: 2004.11145_ , pages 1–42, 2020.
* Lowe et al. (2017) Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. In _31st Conference on Neural Information Processing Systems_ , 2017.
* Lu et al. (2021) Songtao Lu, Kaiqing Zhang, Tianyi Chen, Tamer Başar, and Lior Horesh. Decentralized Policy Gradient Descent Ascent for Safe Multi-Agent Reinforcement Learning. _35th AAAI Conference on Artificial Intelligence_ , pages 8767–8775, 2021.
* Ma et al. (2021) Haitong Ma, Jianyu Chen, Shengbo Eben, Ziyu Lin, Yang Guan, Yangang Ren, and Sifa Zheng. Model-based Constrained Reinforcement Learning using Generalized Control Barrier Function. _IEEE International Conference on Intelligent Robots and Systems_ , pages 4552–4559, 2021.
* Mesbah (2016) Ali Mesbah. Stochastic model predictive control: An overview and perspectives for future research. _IEEE Control Systems Magazine_ , 36(6):30–44, 2016.
* Molina-Solana et al. (2017) Miguel Molina-Solana, María Ros, M. Dolores Ruiz, Juan Gómez-Romero, and M. J. Martin-Bautista. Data science for building energy management: A review. _Renewable and Sustainable Energy Reviews_ , 70:598–609, 2017.
* Papoudakis et al. (2020) Georgios Papoudakis, Filippos Christianos, Lukas Schäfer, and Stefano V. Albrecht. Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks. In _35th Conference on Neural Information Processing Systems_ , 2020.
* Parnika et al. (2021) P. Parnika, Raghuram Bharadwaj Diddigi, Sai Koti Reddy Danda, and Shalabh Bhatnagar. Attention actor-critic algorithm for multi-agent constrained co-operative reinforcement learning. _Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems_ , 3:1604–1606, 2021.
* Paternain et al. (2019) Santiago Paternain, Luiz F.O. Chamon, Miguel Calvo-Fullana, and Alejandro Ribeiro. Constrained reinforcement learning has zero duality gap. In _Advances in Neural Information Processing Systems_ , volume 32, 2019.
* Paternain et al. (2022) Santiago Paternain, Miguel Calvo-Fullana, Luiz F.O. Chamon, and Alejandro Ribeiro. Safe Policies for Reinforcement Learning via Primal-Dual Methods. _IEEE Transactions on Automatic Control_ , 2022.
* Rockafellar and Uryasev (2000) R. Tyrrell Rockafellar and Stanislav Uryasev. Optimization of Conditional Value-at-Risk. _Journal of Risk_ , 2:21–42, 2000.
* Rockafellar and Uryasev (2002) R. Tyrrell Rockafellar and Stanislav Uryasev. Conditional value-at-risk for general loss distributions. _Journal of Banking and Finance_ , 26(7):1443–1471, 2002.
* Silver et al. (2014) David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. _31st International Conference on Machine Learning, ICML 2014_ , 1:605–619, 2014.
* Tabas and Zhang (2022) Daniel Tabas and Baosen Zhang. Computationally Efficient Safe Reinforcement Learning for Power Systems. In _Proceedings of the American Control Conference_ , pages 3303–3310. American Automatic Control Council, 2022.
* Tessler et al. (2019) Chen Tessler, Daniel J. Mankowitz, and Shie Mannor. Reward constrained policy optimization. In _7th International Conference on Learning Representations, ICLR 2019_ , May 2019.
* Zhou et al. (2022) Wei Zhou, Dong Chen, Jun Yan, Zhaojian Li, Huilin Yin, and Wanchen Ge. Multi-agent reinforcement learning for cooperative lane changing of connected and autonomous vehicles in mixed traffic. _Autonomous Intelligent Systems_ , 2(1), 2022.
## Appendix A Theoretical results
### A.1 Proof of Proposition 3.2
Applying the definition of $\mu^{\pi}_{\gamma}$, we have
$\int_{\mathcal{X}}\mu^{\pi}_{\gamma}(x)dx=\int_{\mathcal{X}}(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}p_{t}^{\pi}(x)dx$.
Using the Dominated Convergence Theorem, we can exchange the order of the sum
and integral. Each individual $p_{t}^{\pi}$ integrates to $1$. The geometric
sum property ensures that the resulting expression evaluates to $1$.
### A.2 Proof of Proposition 3.3
1. 1.
By definition, we have $\lim_{\gamma\rightarrow
0^{+}}\mu^{\pi}_{\gamma}(x)=\lim_{\gamma\rightarrow
0^{+}}(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}p_{t}^{\pi}(x)$.
Using Tannery’s theorem, we can exchange the order of the limit and the
infinite sum. The zeroth term in the sum evaluates to $p_{0}(x)$ and all other
terms evaluate to $0$.
2. 2.
Assume $\lim_{t\rightarrow\infty}p_{t}^{\pi}$ exists, and denote it
$p_{\infty}^{\pi}$. Using the triangle inequality, we have
$\displaystyle|\mu^{\pi}_{\gamma}(x)-p_{\infty}^{\pi}(x)|$
$\displaystyle\leq(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}|p_{t}^{\pi}(x)-p_{\infty}^{\pi}(x)|$
(7)
$\displaystyle=(1-\gamma)\sum_{t=0}^{N}\gamma^{t}|p_{t}^{\pi}(x)-p_{\infty}^{\pi}(x)|+(1-\gamma)\sum_{t=N+1}^{\infty}\gamma^{t}|p_{t}^{\pi}(x)-p_{\infty}^{\pi}(x)|$
(8)
for some $N\in\mathbb{N}$. Since $p_{t}^{\pi}(x)\rightarrow
p_{\infty}^{\pi}(x)$, we can choose $N$ large enough to make the second term
in (8) arbitrarily small. Then, using boundedness of $p_{t}^{\pi}$ for all
$t$, we can take $\gamma\rightarrow 1^{-}$ to make the first term arbitrarily
small.
### A.3 Proof of Proposition 3.4
By the geometric sum property, we have
$T_{2}(\gamma,\varepsilon)=\min\\{K\in\mathbb{N}:\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{K-1}[1]\geq
1-\varepsilon\\}=\min\\{K\in\mathbb{N}:1-\gamma^{K}\geq
1-\varepsilon\\}=\min\\{K\in\mathbb{N}:K\geq\frac{\log\varepsilon}{\log\gamma}\\}=\big{\lceil}\frac{\log\varepsilon}{\log\gamma}\big{\rceil}.$
The termination time follows a geometric distribution with parameter
$(1-\gamma)$, and thus has expected value $\frac{1}{1-\gamma}$. Setting
$T_{2}(\gamma,\varepsilon)=T_{1}(\gamma)$ and solving for $\varepsilon$
(ignoring the integer constraint) yields
$\varepsilon=\gamma^{\frac{1}{1-\gamma}}$. Finally, taking
$\lim_{\gamma\rightarrow 1}\gamma^{\frac{1}{1-\gamma}}$ yields $\frac{1}{e}$.
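The quantities in this proof are easy to check numerically. The sketch below computes $T_{2}(\gamma,\varepsilon)=\lceil\log\varepsilon/\log\gamma\rceil$ and verifies that $\gamma^{1/(1-\gamma)}$ approaches $1/e\approx 0.368$ as $\gamma\rightarrow 1$; the specific $\gamma$ values are illustrative.

```python
import math

# Sketch of Proposition 3.4's quantities: the epsilon-horizon
# T2(gamma, eps) = ceil(log eps / log gamma), and the tolerance
# eps = gamma^(1/(1-gamma)) that equates T2 with T1(gamma) = 1/(1-gamma).
def t2(gamma, eps):
    return math.ceil(math.log(eps) / math.log(gamma))

gamma = 0.99
eps_match = gamma ** (1.0 / (1.0 - gamma))  # tolerance matching T1 = 100
# As gamma -> 1, gamma^(1/(1-gamma)) approaches 1/e ~= 0.3679:
vals = [g ** (1.0 / (1.0 - g)) for g in (0.9, 0.99, 0.999)]
```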
### A.4 Proof of Proposition 5.1
Let $x\sim\mu^{\pi}_{\gamma}$, $x^{\prime}\sim f^{\pi}(x)$,
$\bar{c}=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[C(x)]$, and
$\Sigma_{C}^{2}=\mathbb{E}_{x\sim\mu^{\pi}_{\gamma}}[(\bar{c}-C(x))(\bar{c}-C(x))^{T}]$.
Suppose $\lambda$ is randomly distributed with mean $\bar{\lambda}$ and
variance $\Sigma_{\lambda}^{2}$. For any value function approximator
$\hat{V}^{i}_{\pi}$, assume $\lambda$ and $\hat{V}^{i}_{\pi}$ are independent.
Let $\eta=\begin{bmatrix}1&-\lambda^{T}\end{bmatrix}^{T}$, $d=\begin{bmatrix}R^{i}(x)&C(x)^{T}\end{bmatrix}^{T}$,
$\hat{V}_{\pi}^{i}:\mathcal{X}\rightarrow\mathbb{R},\
\hat{V}_{R,\pi}^{i}:\mathcal{X}\rightarrow\mathbb{R},$ and
$\hat{V}_{C,\pi}:\mathcal{X}\rightarrow\mathbb{R}^{m}$. Let $\mathcal{D}$ be a
dataset of trajectories sampled from $\mathcal{M}$ that is used to train
$\hat{V}_{\pi}^{i},\ \hat{V}_{R,\pi}^{i}$, and $\hat{V}_{C,\pi}$. The mean
square temporal difference error achieved by using a generic value function is
$\displaystyle
MSTDE_{1}=\mathbb{E}_{x,x^{\prime},\lambda,\mathcal{D}}[(\eta^{T}d+\gamma\hat{V}_{\pi}^{i}(x^{\prime})-\hat{V}_{\pi}^{i}(x))^{2}]$
(9)
while the error achieved using the structured value function is
$\displaystyle
MSTDE_{2}=\mathbb{E}_{x,x^{\prime},\mathcal{D}}[(\eta^{T}d+\gamma[\hat{V}_{R,\pi}^{i}(x^{\prime})-\lambda^{T}\hat{V}_{C,\pi}(x^{\prime})]-[\hat{V}_{R,\pi}^{i}(x)-\lambda^{T}\hat{V}_{C,\pi}(x)])^{2}].$
(10)
Note that in (10) we do not take the expectation over $\lambda$ since the dual
variables are made available to this function approximator.
Begin with the states and dual variables fixed at
$(\bar{x},\bar{x}^{\prime},\bar{\lambda})$. Let
$\hat{g}(\bar{x},\bar{x}^{\prime})=\begin{bmatrix}\hat{V}_{R,\pi}^{i}(\bar{x})&\hat{V}_{C,\pi}(\bar{x})^{T}\end{bmatrix}^{T}-\gamma\begin{bmatrix}\hat{V}_{R,\pi}^{i}(\bar{x}^{\prime})&\hat{V}_{C,\pi}(\bar{x}^{\prime})^{T}\end{bmatrix}^{T}$
and
$\hat{h}(\bar{x},\bar{x}^{\prime})=\hat{V}_{\pi}^{i}(\bar{x})-\gamma\hat{V}_{\pi}^{i}(\bar{x}^{\prime})$.
Then, suppressing the arguments $(\bar{x},\bar{x}^{\prime})$ and setting
$\bar{\eta}=\begin{bmatrix}1&-\bar{\lambda}^{T}\end{bmatrix}^{T}$, we can
write the squared temporal difference error at
$(\bar{x},\bar{x}^{\prime},\bar{\lambda})$ as
$\displaystyle
STDE_{1}(\bar{\eta})=\mathbb{E}_{\mathcal{D}}[(\bar{\eta}^{T}d-\hat{h})^{2}],$
(11) $\displaystyle
STDE_{2}(\bar{\eta})=\mathbb{E}_{\mathcal{D}}[(\bar{\eta}^{T}d-\bar{\eta}^{T}\hat{g})^{2}].$
(12)
The loss function used to train $\hat{V}_{R,\pi}^{i}$ and $\hat{V}_{C,\pi}$ is
$\displaystyle\mathbb{E}_{\mathcal{D}}[\|d-\hat{g}\|^{2}].$ (13)
Since $d$ is a deterministic function of $x$, (13) can be decomposed into bias
and variance terms:
$\displaystyle\mathbb{E}_{\mathcal{D}}[\|d-\hat{g}\|^{2}]$
$\displaystyle=\mathbb{E}_{\mathcal{D}}[\sum_{k=0}^{m}(d_{k}-\hat{g}_{k})^{2}]$
(14)
$\displaystyle=\sum_{k=0}^{m}\mathbb{E}_{\mathcal{D}}[(d_{k}-\hat{g}_{k})^{2}]$
(15)
$\displaystyle=\sum_{k=0}^{m}[(d_{k}-\mathbb{E}_{\mathcal{D}}\hat{g}_{k})^{2}+\mathbb{E}_{\mathcal{D}}[(\hat{g}_{k}-\mathbb{E}_{\mathcal{D}}\hat{g}_{k})^{2}]]$
(16) $\displaystyle:=\sum_{k=0}^{m}[b_{k}^{2}+\sigma_{k}^{2}]$ (17)
$\displaystyle:=\textup{Tr}[bb^{T}+\Sigma^{2}]$ (18)
where $k=0$ corresponds to the reward signal and $k=1,\ldots,m$ corresponds to
the cost signals.
Following a similar line of reasoning, we can use (18) to rewrite (12) as
$\displaystyle
STDE_{2}(\bar{\eta})=\textup{Tr}[(bb^{T}+\Sigma^{2})(\bar{\eta}\bar{\eta}^{T})].$
(19)
For the sake of argument, we assume that $\hat{g}$ and $\hat{h}$ achieve the same performance at $(\bar{x},\bar{x}^{\prime},\bar{\lambda})$, that is,
$\displaystyle
STDE_{1}(\bar{\eta})=STDE_{2}(\bar{\eta})=\textup{Tr}[(bb^{T}+\Sigma^{2})(\bar{\eta}\bar{\eta}^{T})]$
(20)
where $\textup{Tr}[(bb^{T})(\bar{\eta}\bar{\eta}^{T})]$ and
$\textup{Tr}[\Sigma^{2}\bar{\eta}\bar{\eta}^{T}]$ reflect the bias squared and
variance terms, respectively. How do $STDE_{1}$ and $STDE_{2}$ change when
$\lambda$ is allowed to vary? Using the generic estimator, the noise in
$\lambda$ will introduce some amount of irreducible error into $STDE_{1}$. On
the other hand, using $\lambda=\bar{\lambda}+\Delta\lambda$ in our proposed
estimator will change the bias and variance terms in $STDE_{2}$ while the
irreducible error remains at zero (since there is no uncertainty when
$\Delta\lambda$ is known). Setting
$\Delta\eta=\begin{bmatrix}0&-\Delta\lambda^{T}\end{bmatrix}^{T},$ the
temporal difference errors at
$(\bar{x},\bar{x}^{\prime},\bar{\lambda}+\Delta\lambda)$ are
$\displaystyle
STDE_{1}(\bar{\eta}+\Delta\eta)=\textup{Tr}[(bb^{T}+\Sigma^{2})(\bar{\eta}\bar{\eta}^{T})]+(\Delta\eta^{T}d)^{2},$
(21) $\displaystyle
STDE_{2}(\bar{\eta}+\Delta\eta)=\textup{Tr}[(bb^{T}+\Sigma^{2})((\bar{\eta}+\Delta\eta)(\bar{\eta}+\Delta\eta)^{T})].$
(22)
Taking the expectation over $\Delta\lambda$ which has a mean of zero and a
variance of $\Sigma_{\lambda}^{2}$, and setting
$\Sigma_{\eta}^{2}=\begin{bmatrix}0&0\\\ 0&\Sigma_{\lambda}^{2}\end{bmatrix}$,
yields
$\displaystyle\mathbb{E}_{\Delta\lambda}[STDE_{1}(\bar{\eta}+\Delta\eta)-STDE_{2}(\bar{\eta}+\Delta\eta)]$
$\displaystyle=\textup{Tr}[\Sigma_{\eta}^{2}(dd^{T}-bb^{T}-\Sigma^{2})]$ (23)
$\displaystyle=\textup{Tr}[\Sigma_{\lambda}^{2}(cc^{T}-\tilde{b}\tilde{b}^{T}-\tilde{\Sigma}^{2})]$
(24)
where $\tilde{b}=(c-\mathbb{E}_{\mathcal{D}}\hat{g}_{C})$,
$\tilde{\Sigma}^{2}=\mathbb{E}_{\mathcal{D}}[(\hat{g}_{C}-\mathbb{E}_{\mathcal{D}}\hat{g}_{C})^{2}],$
and $\hat{g}_{C}=\hat{V}_{C,\pi}(x)-\gamma\hat{V}_{C,\pi}(x^{\prime})$. Note
that
$\mathbb{E}_{\mathcal{D}}[\|c-\hat{g}_{C}\|^{2}]=\textup{Tr}[\tilde{b}\tilde{b}^{T}+\tilde{\Sigma}^{2}].$
Taking $\tilde{b},\tilde{\Sigma}^{2}\rightarrow 0$ as the accuracy of
$\hat{g}_{C}$ improves, (24) can be estimated as
$\displaystyle\textup{Tr}[\Sigma_{\lambda}^{2}cc^{T}].$ (25)
Taking the expectation over $c\sim C(x),x\sim\mu^{\pi}_{\gamma}$ yields the
final result.
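The structured value function motivating this proposition can be sketched as follows: rather than learning a scalar critic of the penalized reward $r-\lambda^{T}c$, separate reward and cost heads are learned and combined as $\hat{V}_{R,\pi}-\lambda^{T}\hat{V}_{C,\pi}$, so the critic remains usable as the dual variables change. The tabular heads below are illustrative stand-ins for learned networks, not the paper's implementation.

```python
import numpy as np

# Sketch of the structured value function from Proposition 5.1: separate
# reward and cost heads combined as V = V_R - lambda^T V_C, i.e.
# eta^T [V_R, V_C] with eta = [1, -lambda]. Shapes are illustrative.
n_states, m = 4, 2
rng = np.random.default_rng(0)
V_R = rng.normal(size=n_states)        # reward head: one value per state
V_C = rng.normal(size=(n_states, m))   # cost head: m cost signals per state

def structured_value(lmbda):
    return V_R - V_C @ lmbda           # recombine for any dual variables

v1 = structured_value(np.array([0.5, 0.1]))
v2 = structured_value(np.array([2.0, 0.0]))  # new duals, no retraining needed
```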
## Appendix B Simulation details
### B.1 Algorithm
The Constrained Multiagent Advantage Actor Critic (C-MAA2C) algorithm is shown
in Algorithm 1. The main differences from the basic MAA2C algorithm are the
penalty modifications in lines 9 and 11, the use of vector-valued value
functions $\hat{V}^{i}:\mathcal{X}\rightarrow\mathbb{R}^{m+1}$ (one per agent
in the noncooperative setting), and the dual update.
There are two apparent differences between Algorithm 1 and the concepts
described in the main text. The first is that Algorithm 1 uses n-step returns
in the advantage function, whereas Section 5 only considers one-step returns.
We resolve this discrepancy by revisiting the proof of Proposition 5.1. First,
note that the coefficients $\eta$ can be factored out of the returns just like
they are factored out of the rewards. Thus, the proof only requires slight
modifications up to the last line, Equation (25). Using returns instead of
rewards in (25) will lead to a different numerical result but the conclusion
(justification for using a structured value function) will be the same.
The second apparent difference is the fact that Algorithm 1 considers finite-
horizon episodic tasks, thus the primal-dual algorithm will enforce
$\mathbb{E}_{\tau\sim\mathcal{M}}[\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}c_{t}]\leq
0$. Due to the finite horizon, we cannot directly use the occupation measure
to interpret the meaning of this constraint. However, we can define the
occupation measure over a finite horizon as
$\displaystyle\mu^{\pi}_{\gamma,T}(x)=\frac{1}{1-\gamma^{T+1}}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}p_{t}^{\pi}(x).$
(26)
It is easy to show that $\mu^{\pi}_{\gamma,T}$ is nonnegative and integrates
to unity over $\mathcal{X}$. We can use $\mu^{\pi}_{\gamma,T}$ in place of
$\mu^{\pi}_{\gamma}$ everywhere in order to interpret discounted sum
constraints and to generate probabilistic constraints in finite-horizon
episodic tasks. The statements
$\mathbb{E}_{\tau\sim\mathcal{M}}[\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}c_{t}]\leq
0,\
\mathbb{E}_{\tau\sim\mathcal{M}}[(1-\gamma^{T+1})^{-1}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}c_{t}]\leq
0,$ and $\mathbb{E}_{x\sim\mu^{\pi}_{\gamma,T}}[C(x)]\leq 0$ are equivalent.
Note that the effective horizon discussed in Section 3 may be shorter than the
horizon length $T$.
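A quick numerical check that (26) defines a valid distribution, again reading the discounted-sum operator as a $\gamma^{t}$-weighted sum so that the $1/(1-\gamma^{T+1})$ factor normalizes the total weight to one; the two-state chain is illustrative.

```python
import numpy as np

# Check that the finite-horizon occupation measure of (26) integrates
# (here: sums) to one. Reading the operator as (1 - gamma) * gamma^t weights
# is an assumption; P and p0 are an illustrative two-state chain.
P = np.array([[0.7, 0.3], [0.2, 0.8]])
p0 = np.array([0.5, 0.5])
gamma, T = 0.99, 25

p_t, mu = p0.copy(), np.zeros(2)
for t in range(T + 1):
    mu += gamma**t * p_t
    p_t = p_t @ P
# total weight sum_{t=0}^T gamma^t = (1 - gamma^(T+1)) / (1 - gamma)
mu /= (1.0 - gamma**(T + 1)) / (1.0 - gamma)
```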
Algorithm 1 C-MAA2C with probabilistic safety guarantees and structured value
functions
1: Input discount factor $\gamma$, learning rates $\zeta_{\theta},\
\zeta_{\omega},\ \zeta_{\lambda}$, n-step return horizon $\kappa$, tolerances
$\alpha$ and $\delta$, multiplier limit $\lambda_{\text{max}}$, episode length
$T$, number of episodes $K$, risk metric $\in\\{\text{average, chance,
CVaR}\\}$
2: Initialize actor parameters $\\{\theta^{i}\\}_{i\in\mathcal{N}}$, critic
parameters $\\{\omega^{i}\\}_{i\in\mathcal{N}}$, parameterized policies
$\pi^{i}(\cdot\mid\theta^{i}):\mathcal{X}_{i}\rightarrow\Delta_{\mathcal{U}_{i}}$,
parameterized value estimates
$\hat{V}^{i}(\cdot\mid\omega^{i}):\mathcal{X}\rightarrow\mathbb{R}^{m+1},$
dual variables $\lambda\in\mathbb{R}^{m}$,
$\eta:=\begin{bmatrix}1&-\lambda^{T}\end{bmatrix}^{T}$
3: for $k=1,2,\ldots,K$ do
4: Initialize $x_{0}\sim p_{0}$ $\triangleright$ Run 1 episode
5: for $t=0,1,\ldots T$ do
6: Sample $u_{t}^{i}\sim\pi^{i}(\cdot\mid x_{t}^{i},\theta^{i})$ for
$i\in\mathcal{N}$
7: Receive $\\{r_{t}^{i}\\}_{i\in\mathcal{N}},\ c_{t},x_{t+1}$
8: if risk metric = chance then
9: $c_{t}\leftarrow I[c_{t}\geq\alpha]-\delta$ $\triangleright$ Proposition
4.1
10: else if risk metric = CVaR then
11: $c_{t}\leftarrow[c_{t}-\alpha]_{+}-\delta$ $\triangleright$ Proposition
4.4
12: end if
13: Let $d_{t}^{i}=\begin{bmatrix}r_{t}^{i}&c_{t}^{T}\end{bmatrix}^{T}$ for
$i\in\mathcal{N}$
14: end for
15: for $i\in\mathcal{N}$ do
16: for $t=0,1,\ldots,T$ do
17: $N=\min\\{T,t+\kappa\\}$
18:
$D_{t}^{i}=\sum_{n=t}^{N-1}\gamma^{n-t}d_{n}^{i}+\gamma^{N-t}\hat{V}^{i}(x_{N}\mid\omega^{i})$
$\triangleright$ Compute n-step returns
19: $A_{t}^{i}=\eta^{T}(D_{t}^{i}-\hat{V}^{i}(x_{t}\mid\omega^{i}))$
$\triangleright$ Compute advantages
20: end for
21:
$\theta^{i}\leftarrow\theta^{i}+\zeta_{\theta}\sum_{t=0}^{T}A_{t}^{i}\nabla_{\theta^{i}}\log\pi^{i}(u_{t}^{i}\mid
x_{t}^{i},\theta^{i})$ $\triangleright$ Actor update
22:
$\omega^{i}\leftarrow\omega^{i}-\zeta_{\omega}\nabla_{\omega^{i}}\sum_{t=0}^{T}\|D_{t}^{i}-\hat{V}^{i}(x_{t}\mid\omega^{i})\|_{2}^{2}$
$\triangleright$ Critic update
23: end for
24:
$\lambda\leftarrow\lambda+\zeta_{\lambda}\operatorname*{\scaleobj{0.85}{\scalerel*{\Gamma}{\sum}}}_{t=0}^{T}c_{t}$
$\triangleright$ Dual update
25: $\lambda\leftarrow\min\\{[\lambda]_{+},\lambda_{\text{max}}\\}$
26: $\eta\leftarrow\begin{bmatrix}1&-\lambda^{T}\end{bmatrix}^{T}$
27: end for
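A minimal sketch of the constraint-specific pieces of Algorithm 1, namely the per-step cost transforms in lines 9 and 11 (per Propositions 4.1 and 4.4) and the projected dual ascent in lines 24 and 25; the cost values and step size below are illustrative.

```python
import numpy as np

# Cost transforms from lines 9 and 11 of Algorithm 1 and the projected
# dual update from lines 24-25. Values are illustrative, not tuned.
def transform_cost(c, risk_metric, alpha, delta):
    if risk_metric == "chance":
        return (c >= alpha).astype(float) - delta   # I[c >= alpha] - delta
    if risk_metric == "cvar":
        return np.maximum(c - alpha, 0.0) - delta   # [c - alpha]_+ - delta
    return c                                        # average constraint

def dual_update(lmbda, costs, step, lmbda_max):
    lmbda = lmbda + step * costs.sum(axis=0)        # ascent on the duals
    return np.minimum(np.maximum(lmbda, 0.0), lmbda_max)  # project to [0, max]

costs = np.array([[0.05], [0.3], [0.15]])           # one cost signal, 3 steps
chance = transform_cost(costs, "chance", alpha=0.1, delta=0.1)
lmbda = dual_update(np.array([0.0]), chance, step=1e-2, lmbda_max=10.0)
```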
### B.2 Hyperparameters
Simulation hyperparameters are listed in Table 1.
Parameter | Value
---|---
**Simulation** |
Episode length | $25$
Number of episodes | $\\{4,8\\}\times 10^{4}$
Number of trials per configuration | 5
**RL algorithm** |
Discount factor $\gamma$ | 0.99
Actor learning rate $\zeta_{\theta}$ | $3\times 10^{-4}$
Critic learning rate $\zeta_{\omega}$ | $3\times 10^{-4}$
Dual update step size $\zeta_{\lambda}$ | $1\times 10^{-4}$
Optimizer | Adam $(\beta_{\text{Adam}}=(0.9,0.999))$
n-step return horizon $\kappa$ | 5
**Constraint enforcement** |
$\lambda_{\text{max}}$ | 10
Risk level $\beta$ | 0.9
**“LHS tolerance” $\alpha$** |
Average constraints | N/A
Chance constraints | 0.1
CVaR constraints | 0.2
**“RHS tolerance” $\delta$** |
Average constraints | 0
Chance constraints | 0.1
CVaR constraints | $5\times 10^{-3}$
**Actors** |
Policy architecture | Multi-layer perceptron
Number of hidden layers | 2
Hidden layer width | 64
Hidden layer activation | ReLU
Output layer activation | Linear
Action selection | Categorical sampling
Parameter sharing | No
**Critics** |
Critic architecture | Multi-layer perceptron
Number of hidden layers | 2
Hidden layer width | 64
Hidden layer activation | ReLU
Output layer activation | Linear
Target network update interval | 200 episodes
Parameter sharing | No
Table 1: Simulation hyperparameters.
### B.3 Additional simulation results
Here, we provide some additional results to supplement the findings in Section
6. First, we compared the convergence to a safe policy under the original discounted sum constraint and found that, similar to the results for the other types of constraints, the structured critic demonstrates a better safety margin throughout training. This is illustrated in Figure 8.
Figure 8: Evaluation of the discounted sum constraint throughout training,
showing that the structured critic helps the actor to find safer policies
faster. Each line and shaded region represents the mean and standard deviation
over 5 training runs. Key: SC = structured critic.
Next, we provide a closer look at the accuracy of the CVaR upper bound
provided in Proposition 4.4, and illustrated using dashed lines in the left
panel of Figure 7. Table 2 shows that in all four configurations in which the
CVaR was evaluated, the upper bound is a fairly accurate estimate. The results
from Section 6 show that this upper bound can be used to drive the actual CVaR
below a target value. Although using a structured critic with the modified penalty function yielded the most accurate CVaR UB, the accuracy in all four
configurations could be improved by making further adjustments to the
tolerance $\alpha$. The error is reported for policies tested at the end of
the training phase.
Penalty function | Critic | CVaR UB error
---|---|---
$C(x)$ | Generic | 18.3%
$C(x)$ | Structured | 11.8%
$[C(x)-\alpha]_{+}-\delta$ | Generic | 7.6%
$[C(x)-\alpha]_{+}-\delta$ | Structured | 3.7%
Table 2: Accuracy of CVaR upper bound.
# Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha1, Mengdi Huai1,2, Jianhui Sun1, Aidong Zhang1
###### Abstract
The rising use of deep neural networks to perform decision making in critical applications like medical diagnosis and financial analysis has raised concerns regarding their reliability and trustworthiness. As automated systems become more mainstream, it is important that their decisions be transparent, reliable, and understandable by humans for better trust and confidence. To this
effect, concept-based models such as Concept Bottleneck Models (CBMs) and
Self-Explaining Neural Networks (SENN) have been proposed which constrain the
latent space of a model to represent high level concepts easily understood by
domain experts in the field. Although concept-based models promise a good approach to increasing both explainability and reliability, it has yet to be shown whether they are robust and output consistent concepts under systematic perturbations to their inputs. To better understand the performance of
concept-based models on curated malicious samples, in this paper, we aim to
study their robustness to adversarial perturbations, i.e., imperceptible changes to the input data crafted by an attacker to fool a well-learned concept-based model. Specifically, we first propose and analyze different malicious attacks to evaluate the security vulnerability of concept-based models. Subsequently, we propose a potential general adversarial
training-based defense mechanism to increase robustness of these systems to
the proposed malicious attacks. Extensive experiments on one synthetic and two
real-world datasets demonstrate the effectiveness of the proposed attacks and
the defense approach.
## Introduction
With the growth of highly specialized architectures for a variety of use-cases and
their superior performance, Deep Neural Networks (DNNs) are increasingly being
used in sensitive and critical applications such as medical diagnosis,
employment/recruiting, financial credit analysis, etc. However, widespread adoption of such models faces several challenges, primarily the black-box nature of their predictions. Many recent research works have proposed explanation methods which provide a deeper understanding of model predictions. Explanations range from being local in nature, where they assign importance scores to the features present in an input sample, to global in nature, where the model identifies certain “concepts”
present in the input sample. A concept can be thought of as an abstraction of
features which are usually shared across multiple similar sample points. For
example, in Figure 1, a concept can be entirely clinical, such as “osteophytes-femur” or “sclerosis-tibia”. Usually, DNNs are trained end-to-end, which
makes it difficult to isolate concepts and even harder to make them human
understandable. To alleviate this, concept-based approaches have been proposed
(Koh et al. 2020; Alvarez Melis and Jaakkola 2018) which map a sample from
input space to a concept space and subsequently map the concept space to the
prediction space. The concept space usually consists of high-level human
understandable concepts. A model trained incorporating either manually curated
or automatically learned concepts increases both interpretability and
reliability of its predictions. One such example, as proposed in (Koh et al.
2020), Concept Bottleneck Models (CBMs), can help domain experts quickly
identify any discrepancy and intervene when and where needed. CBMs also offer
generalizability in the sense that any DNN can be easily converted into a CBM
by resizing an intermediate layer to correspond to the size of any closed
concept set pre-selected by domain experts. The training of such models uses a standard training procedure with a loss function augmented with an extra term from the bottleneck layer.
Figure 1: Example of how a value of a concept indicating osteophytes (bone
spurs) can be maliciously changed although the actual severity of disease
quantified by Kellgren-Lawrence grade (KLG Score) remains the same.
Although incredibly simple in formulation and training, many recent works have
demonstrated certain flaws in CBMs which warrant an increased caution in their
widespread applications. For example, Margeloiu et al. (Margeloiu et al. 2021)
demonstrated that computing the saliency maps with respect to a single concept
does not capture the position of that concept in the image itself. Similarly,
Mahinpei et al. (Mahinpei et al. 2021) demonstrate that CBMs suffer from
“information leakage” where more than necessary information is encoded in a
concept - making them adulterated with non-relevant noise resulting in
unreliable downward predictions.
In this paper, we aim to study the security vulnerability and robustness of
concept-based models to carefully crafted malicious attacks, where an
adversary with malevolent intent aims to introduce perturbations to a clean sample image and modify it in an adversarial manner to manipulate the concepts
predicted by the model. Specifically, we first demonstrate how concepts
learned by a concept-based model can be manipulated by introducing
adversarially generated perturbations in input samples. The goal of the attacker is to effectively manipulate concepts without changing the final model predictions. We will study different concept attacks: concept erasure, concept introduction, and concept confounding, all of which disrupt the concept set predicted by a well-trained concept-based model. Note that the proposed attacks can be generalized to any concept-based model. We utilize Concept Bottleneck Models (Koh et al. 2020) in this paper as an example to demonstrate the efficacy of our attacks on one of the most popular concept-based modelling paradigms. To improve the trust and reliability of concept-based
models, it is important that both concepts and predictions are robust to
malicious attacks. Instilling trust in predictions is a well-researched problem in the adversarial literature. However, the robustness of concepts is an open question. In our paper, we focus on analyzing the robustness of concepts without changing model predictions. This critical difference distinguishes our proposed attacks from standard adversarial attacks, where the goal of the attacker is to change the prediction label.
As shown in Figure 1, an attacker can easily introduce non-relevant concepts
without changing the prediction label. These non-relevant concepts can cause
misinterpretations, as shown in Figure 1: the concept “osteophytes femur lateral”, which quantifies the amount of bone spur growth in the upper bone, has been maliciously changed to a very high value. This disruption, especially in high-security settings, can prompt remedial actions, like expensive oral medicines or even surgery, to fix a “supposed” problem even if it does not exist. The fact that such concepts can be manipulated without any perceptible
change in the appearance of the input sample and its final prediction
essentially defeats the utility of concept-based models in critical
applications as these attacks can be very hard to detect. To alleviate this,
in addition to studying attacks, we also propose a general adversarial
training-based defense mechanism to improve the robustness of the learning
models against the proposed attacks on concepts. We conduct comprehensive
experiments on different datasets of varying risk levels - ConceptMNIST, CUB
and OAI, and the derived experimental results demonstrate the efficacy of both
our attacks and defense.
## Related Work
Related work on concept-level explanations. To incorporate a broader
perspective on model decision making in sensitive applications such as medical
diagnosis or financial forecasting (Suo et al. 2020; Xun et al. 2020), concept
attribution methods have been proposed. These methods provide a higher level
abstract notion of explanations by aligning model explanations with human-
understandable concepts, improving overall reliability. Several popular methods which automatically learn concepts are detailed in (Kim et al. 2018; Ghorbani et al. 2019; Yeh et al. 2020; Wu et al. 2020; Goyal et al. 2019). On the other hand, concept priors have been utilized to align model concepts with human-understandable concepts (Zhou et al. 2018; Murty, Koh, and Liang 2020; Chen et al. 2019).
Related work on concept bottleneck models (CBMs). Concept bottleneck models
were initially limited to specific use-cases. More recently, the applications of such bottleneck models were generalized in (Koh et al. 2020), which postulated that any prediction model architecture can be transformed
into a CBM by simply resizing any intermediate layer to represent a human-
understandable concept representation. Similar work on utilizing and improving
CBMs for various downstream tasks include (Sawada and Nakamura 2022; Jeyakumar
et al. 2021; Pittino, Dimitrievska, and Heer 2021; Bahadori and Heckerman
2020).
Related work on robustness of interpretations. Although explanations have
enabled deeper understanding of DNNs, there are concerns regarding their
robustness. (Ghorbani, Abid, and Zou 2019) showed that explanations can easily
be misled by introducing imperceptible noise into the input image. Several other works have highlighted similar problems in vision, natural language, and reinforcement learning, such as (Adebayo et al. 2018; Dombrowski et al. 2019;
Slack et al. 2020; Kindermans et al. 2019; Sinha et al. 2021; Huai et al.
2020). Similarly, concept explanation methods are also fragile to small
perturbations to input samples (Brown and Kvinge 2021). Such concerns
regarding fragility of model explanations have prompted related research in
improving robustness of explanation methods. For example, (Levine, Singla, and
Feizi 2019; Lakkaraju, Arsov, and Bastani 2020; Mangla, Singh, and
Balasubramanian 2020) proposed learning more robust feature attributions while
(Alvarez Melis and Jaakkola 2018; Soni et al. 2020; Huai et al. 2022) try to
learn more robust concepts. However, existing defense methods (Levine, Singla,
and Feizi 2019; Lakkaraju, Arsov, and Bastani 2020; Mangla, Singh, and
Balasubramanian 2020) on improving the robustness of the feature-level model
explanations cannot be directly adopted here. The reason is that they only focus on post-hoc interpretations, while we work on an intrinsic concept-based interpretable network.
## Methodology
This section investigates the vulnerability of CBMs to malicious attacks. We first introduce the details of the proposed attack strategies. Subsequently, we propose a defense mechanism, Robust Concept Learning (RCL), based on which we train robust models to resist malicious attacks. Even though our experiments are conducted on CBMs, the attacks are general enough to be used against any concept-based model.
### Malicious Attacks against CBMs
In this section, we propose a general optimization framework for designing
malicious attacks. Here, we use $K$ and $T$ to denote the number of class
labels and the number of concepts, respectively. In a CBM model, we are given
a set of training samples $\\{(x_{i},y_{i},c_{i})\\}_{i=1}^{N}$, where
$x_{i}\in\mathbb{R}^{D}$ denotes the $i$-th training sample,
$y_{i}\in\\{1,\cdots,K\\}$ is the target classification label for sample
$x_{i}$, and $c_{i}\in\mathbb{R}^{T}$ is a vector of $T$ concepts. CBMs
usually consider two components, i.e., the concept component $g(\cdot)$ and
the prediction component $f(\cdot)$. Specifically, CBMs consider the form
$f(g(x))$, where $g:\mathbb{R}^{D}\rightarrow\mathbb{R}^{T}$ maps an input $x$
into the concept space, and $f:\mathbb{R}^{T}\rightarrow\mathbb{R}^{K}$ maps
concepts into the final prediction. CBMs define task accuracy as how accurately $f(g(x))$ predicts the label $y$, and concept accuracy as how accurately $g(x)$ predicts the concepts $c$. For sample $x$, we use $\mathcal{U}(x;f,g)$ to
denote its concept-based explanations generated by $g$ to explain the
predicted classification label (i.e., $\arg\max f(x)$). Let
$G(\mathcal{U}(x;f,g),\mathcal{U}(x+\delta;f,g))$ denote the attacker’s goal
of maximizing the difference between the generated concept-based explanations
before and after the attacks. In order to achieve the attacker’s attacking
goal, we propose the following framework:
$\displaystyle\max_{||\delta||_{\infty}\leq\epsilon_{thresh}}\quad
G(\mathcal{U}(x;f,g),\mathcal{U}(x+\delta;f,g))\quad$
$\displaystyle\text{s.t.}\quad \arg\max f(x+\delta)=\arg\max f(x),\quad
x+\delta\in[0,1]^{D},$
where $\delta$ denotes the adversarial perturbation, $\epsilon_{thresh}$
controls the magnitude of the whole adversarial perturbations, and
$\mathcal{U}(x+\delta;f,g)$ is the generated concept-based explanations to
interpret the predicted class label for the crafted adversarial sample
$x+\delta$. The objective function is used to maximize the difference of the generated concepts before and after the attacks. The first constraint is enforced to make sure that the prediction for sample $x$ is identical before and after the attack. The second constraint guarantees that the generated perturbation is imperceptible so that it cannot be easily detected. The $l_{\infty}$ norm is most commonly used when considering imperceptible perturbations; it measures the feature with the largest amount of perturbation, regardless of the number of other features that have been maliciously modified. By solving the above
optimization problem, the attacker can find an optimal perturbation that can
maximize the attacker’s goals. Depending on how to define the attacker’s goal,
we categorize three different types of attacks, which are given as follows:
* •
Erasure: Concept erasure attack seeks to subtly delete a particular concept
without changing the class prediction result. The mismatch between perception and the absence of concepts would be puzzling to an analyst and very difficult to detect, especially in datasets where images of the same class do not all share the same concepts, while the model still seemingly gives the same final prediction. Note that in CBMs, the importance score of the $j$-th concept for
sample $x$ is calculated as $g_{j}(x)$. In practice, for CBMs, we usually have
a pre-defined threshold $\gamma$ that is used to determine whether a concept
is a relevant concept. Specifically, for sample $x$, the $j$-th concept is a
relevant concept if $g_{j}(x)-\gamma\geq 0$. Let $S_{x,Rev}$ denote the set of
the targeted initially relevant concepts. In order to remove the presence of
an initially relevant concept, the attacker’s goal is defined as follows,
$\displaystyle\sum_{j\in S_{x,Rev}}(\mathbb{I}[\gamma-
g_{j}(x+\delta)]-\mathbb{I}[\gamma-g_{j}(x)]),$ (1)
where $\mathbb{I}[\cdot]$ is the indicator function, $\delta$ denotes the
crafted adversarial perturbation, and $\gamma$ is the given threshold. Note
that for the $j$-th initially relevant concept, we have $g_{j}(x)-\gamma>0$,
which means that $\mathbb{I}[\gamma-g_{j}(x)]=0$. The attacker aims to craft
the adversarial perturbation $\delta$ such that this $j$-th concept becomes
the non-relevant concept, i.e., $\gamma>g_{j}(x+\delta)$. In other words, the
attack is successful if and only if $\mathbb{I}[\gamma-
g_{j}(x+\delta)]=(\mathbb{I}[\gamma-g_{j}(x+\delta)]-\mathbb{I}[\gamma-
g_{j}(x)])=1$, where $\mathbb{I}[\gamma-g_{j}(x)]=0$. The above objective is
used to maximize the attacker’s goal by reducing the importance score of these
initially relevant concepts such that their importance scores are less than
the threshold $\gamma$. By solving the above objective, the attacker can find
an optimal perturbation that can remove the presence of initially relevant
concepts for sample $x$.
* •
Introduction: Concept introduction attack aims to manipulate the presence of
non-relevant concepts without modifying the classification result. This
hinders accurate analysis of the model’s interpretations by providing mixed
interpretations. The attacker tries to introduce new non-relevant concepts
which were not previously present in the concept set of the original sample.
For sample $x$, let $S_{x,Non}$ denote the set of targeted concepts that are not originally present in sample $x$. The attacker’s goal of attacking the
presence of these targeted initially non-relevant concepts can be formulated
as follows,
$\displaystyle\sum_{j\in
S_{x,Non}}(\mathbb{I}[g_{j}(x+\delta)-\gamma]-\mathbb{I}[g_{j}(x)-\gamma]),$
(2)
where $\delta$ denotes the perturbation to be optimized. Note that for the
$j$-th initially non-relevant concept, if $g_{j}(x+\delta)-\gamma\geq 0$, we
can say this initially non-relevant concept becomes a relevant concept after the perturbation. The above loss defines the attacker’s goal of maximizing the presence of the targeted non-relevant concepts. Specifically, the above loss function aims to maximize the attacker’s goal by increasing the importance scores of the targeted initially non-relevant concepts such that these targeted concepts’ importance scores are larger than the threshold $\gamma$. To achieve this goal of maximizing the presence of the initially non-relevant concepts for sample $x$, the attacker can solve the above objective to find an optimal perturbation.
* •
Confounding: The concept confounding attack builds on both erasure and
introduction by simultaneously removing relevant concepts and introducing
non-relevant ones. It is considerably more powerful than the introduction
attack alone, since it also removes concepts while maintaining the same model
prediction; this is especially troublesome because it defeats the purpose of
training models with concept bottlenecks. Let $S_{x,Rev}$ and $S_{x,Non}$
denote the index sets of the targeted initially relevant and initially
non-relevant concepts, respectively. The attacker's goal can then be written
as
$\displaystyle\sum_{j\in S_{x,Rev}}(\mathbb{I}[\gamma-g_{j}(x+\delta)]-\mathbb{I}[\gamma-g_{j}(x)])+\sum_{j\in S_{x,Non}}(\mathbb{I}[g_{j}(x+\delta)-\gamma]-\mathbb{I}[g_{j}(x)-\gamma]),$ (3)
where $\delta$ denotes the adversarial perturbation to be optimized. This
objective advances the attacker's goal by decreasing the importance scores of
the targeted initially relevant concepts to remove their presence, while
increasing the scores of the targeted non-relevant concepts to introduce
theirs.
The schemes above define the attacker's goals from different aspects; the
exact attack formulation and final optimization problem are given by Equation
11 in the Appendix. Based on these proposed adversarial attacks, we can
analyze the security vulnerability of CBMs, i.e., how motivated attackers can
craft malicious examples that mislead CBMs into generating wrong concepts. The
magnitude of the required perturbation reflects the robustness of a CBM: the
smaller the crafted adversarial perturbation, the less robust the generated
concepts are to attack.
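The indicator-based objectives above are non-differentiable, so in practice an attacker would optimize a smooth surrogate with a gradient method. As a minimal sketch of the erasure case, the hypothetical helper below runs a PGD-style descent on a toy linear-sigmoid concept scorer standing in for the CBM's concept network $g$ (the paper's actual formulation is Equation 11 in the Appendix; the scorer, step sizes, and budget here are illustrative assumptions):

```python
import numpy as np

def erasure_attack(x, W, gamma=0.5, eps=0.1, steps=60, budget=3.0):
    """PGD-style sketch of the concept erasure objective: drive the
    importance scores g_j(x) = sigmoid(W[j] @ x) of all initially
    relevant concepts (g_j(x) >= gamma) below the threshold gamma,
    subject to an L-infinity budget on the perturbation."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    scores = lambda v: sigmoid(W @ v)
    relevant = np.where(scores(x) >= gamma)[0]       # the set S_{x,Rev}
    x_adv = x.astype(float).copy()
    for _ in range(steps):
        s = scores(x_adv)[relevant]
        # gradient of sum_j g_j(x_adv) over relevant j (sigmoid chain rule)
        grad = (s * (1.0 - s)) @ W[relevant]
        x_adv = x_adv - eps * np.sign(grad)              # descend the scores
        x_adv = x + np.clip(x_adv - x, -budget, budget)  # project onto budget
    flipped = int(np.sum(scores(x_adv)[relevant] < gamma))
    return x_adv, flipped, len(relevant)
```

The same loop with the sign of the update flipped on $S_{x,Non}$ gives the introduction attack, and running both updates together gives the confounding attack.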
Figure 2: Iterative perturbations generate diverse training images. The
corresponding histograms show the presence of concepts across the spectrum.
The proposed augmentation produces images that should belong to the same
concept class but span a wider variance of concepts.
### Improving Concept Robustness
Our goal here is to design a defense mechanism that generates concept-based
explanations robust to malicious attacks. Recall that CBMs are bottleneck
models of the form $f(g(x))$, where $g$ maps an input into the concept space
and $f$ maps concepts to a final class prediction. Let
$\mathcal{L}_{Y}=l(f(g(x_{i})),y_{i})$ and
$\mathcal{L}_{C}=\sum_{j=1}^{T}l(g_{j}(x_{i}),c_{i}^{j})$ denote the
classification loss and the concept loss on the $i$-th training point,
respectively, where $T$ is the total number of concepts and $l$ is the binary
cross-entropy or root-mean-square-error loss.
Hybrid training paradigm: To learn the concept component $g$ and the class
prediction component $f$, prior works (Koh et al. 2020; Margeloiu et al. 2021)
usually adopt one of two common ways of training CBMs, sequential or joint. We
briefly discuss both paradigms below:
* •
Sequential Training: Learns the concept model $g$ by minimizing the concept
loss, then independently learns the class prediction model $f$ by minimizing
the classification loss. Mathematically, this amounts to minimizing the
training objective in Equation 4 first with $\gamma=0$ and then with
$\lambda=0$. Since the concepts, once learned, are never updated during
prediction-model optimization, they are completely independent of the
prediction task.
* •
Joint Training: Learns the concept and prediction models ($g$ and $f$) by
minimizing both losses jointly, end to end; this amounts to minimizing the
full objective in Equation 4 with appropriate values of $\gamma$ and
$\lambda$. Because concepts and the prediction task are learned together, the
concepts are not independent of the prediction task: gradients from the
prediction component $f$ guide the concept model $g$.
As demonstrated in Table 4 (Appendix), sequential training has lower concept
error but worse task performance than joint training (consistent with Figure 2
of Koh et al. 2020). Hence there is a trade-off between concept and task loss
when choosing between the two paradigms. However, as we demonstrate in Tables
1, 2 and 3, joint training leaves concepts more vulnerable to malicious
attacks, implying that concepts learned jointly are less robust than those
learned sequentially. This behavior is expected: concepts learned during joint
training are more likely to be spuriously correlated with predictions, making
them easier to attack.
To overcome this and achieve a better trade-off between concept robustness and
prediction performance, we propose a new hybrid training paradigm by combining
the sequential and joint training methods. Specifically, in our proposed
hybrid training method, we first freeze the prediction model and only let the
concept model learn for the first half of total epochs. Subsequently, we
unfreeze the complete model and let training continue for the remainder of
epochs with a lower learning rate. Based on this, we formulate the training
loss as follows, where $(x_{i},c_{i},y_{i})$ is a data point sampled from the
image set $X$, concept set $C$, and label set $Y$:
$\displaystyle\mathcal{L}_{f,g}=\sum_{i}\Big[\gamma\,l(f(g(x_{i})),y_{i})+\lambda\sum_{j=1}^{T}l(g_{j}(x_{i}),c^{j}_{i})\Big],$ (4)
where the first and second terms are the task and concept losses for the
$i$-th training sample, with $T$ the total number of concepts;
$\gamma\in\{0,1\}$ and $\lambda\in\mathbb{R}$. Using this loss, the complete
model parameters $\theta_{f,g}$ are updated as follows:
$\theta_{f,g}=\begin{cases}\theta_{g}-\omega\,\nabla_{\theta_{g}}\mathcal{L}_{f,g}\ \ (\gamma=0)&\text{if epoch}\leq N/2,\\ \theta_{f,g}-\omega'\,\nabla_{\theta_{f,g}}\mathcal{L}_{f,g}&\text{if epoch}>N/2,\end{cases}$
where $\omega$ and $\omega'$ are learning rates. The proposed hybrid method is
thus a two-stage training paradigm: for the first half of the epochs we set
$\gamma=0$, $\lambda=1$ and learning rate $\omega$, freezing the class
prediction model and training only the concept model; in the remaining epochs
we train the full model, setting $\gamma=1$, assigning a pre-defined weight to
$\lambda$, and using a different (smaller) learning rate $\omega'$. The
specifics of the training procedure are detailed in the Appendix.
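The two-stage switch can be captured by a small per-epoch schedule helper; `hybrid_schedule` is a hypothetical name, and the concrete rates below are placeholders rather than the paper's settings:

```python
def hybrid_schedule(epoch, total_epochs, lam=1.0, lr=1e-3, lr_finetune=1e-4):
    """Per-epoch settings for the two-stage hybrid paradigm:
    stage 1 (epoch <= N/2): gamma = 0, prediction model f frozen,
    only the concept model g trains at the base learning rate omega;
    stage 2 (epoch > N/2): gamma = 1, the full model trains at the
    smaller rate omega'."""
    if epoch <= total_epochs // 2:
        return {"gamma": 0.0, "lambda": lam, "lr": lr, "train_f": False}
    return {"gamma": 1.0, "lambda": lam, "lr": lr_finetune, "train_f": True}
```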
Generate diverse training data using adversarial augmentation. The essential
reason an attacker can easily introduce malicious perturbations is the lack of
sample diversity within each concept class: the data distribution of each
concept can be discrete and highly dispersed. For example, the CUB birds
dataset (Reed et al. 2016) contains many different types of birds with the
‘WingColor==Black’ concept, yet still not enough to cover all possible
combinations of bird sizes, shapes, etc. The distribution of the
‘WingColor==Black’ concept therefore has large vacancies in its domain that
the CBM fails to explore but a malicious attacker can easily exploit. One way
to make such ‘vacancies’ in the data distribution harder to exploit is to
augment the training set with diverse samples that smooth the concept
distribution space. Intuitively, this simulates a weak attacker and generates
images that look perceptually similar but may carry different concept classes,
which in turn significantly enriches the spectrum of concepts in the training
data.
Robust concept learning (RCL). We now introduce our approach for generating
robust concept-based explanations. Our framework alternates between an inner
maximization, in which images are iteratively perturbed to increase diversity
in the concept distribution, and an outer minimization, in which model
parameters are optimized to balance class prediction, concept accuracy, and
concept robustness. In the inner loop, we seek a perturbed input
$\tilde{x}_{i}$ whose difference from the true input $x_{i}$ is smaller than a
budget $\epsilon_{thresh}$ (i.e.,
$\|x_{i}-\tilde{x}_{i}\|_{\infty}\leq\epsilon_{thresh}$) while maximizing the
concept divergence loss $l(g(\tilde{x}_{i}),c_{i})$ (i.e., the concept
misclassification error). The motivation is to generate images that appear
identical but are widely diversified in terms of concept distribution.
Formally, we iteratively update $\tilde{x}_{i}$ in the inner loop as follows,
$\displaystyle\tilde{x}_{i}\leftarrow\tilde{x}_{i}+\epsilon\cdot\mathrm{sign}\big(\nabla_{\tilde{x}_{i}}l(g(\tilde{x}_{i}),c_{i})\big)$ (5)
Figure 2 illustrates how the inner loop generates images with high diversity:
we plot the original image, the intermediate image at each update step, and
the final perturbed image, along with their associated concepts. As the
concept histogram of each image shows, the concept distributions vary without
much perceptual change in the image.
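A minimal sketch of the inner-loop update in Eq. (5), again with a toy linear-sigmoid scorer in place of the real concept network $g$ and with an explicit projection onto the $\epsilon_{thresh}$ budget (the function name and constants are illustrative assumptions):

```python
import numpy as np

def augment_sample(x, c, W, eps=0.1, steps=5, budget=0.5):
    """Inner-loop adversarial augmentation (Eq. 5) on a toy linear-sigmoid
    concept scorer g(x) = sigmoid(W @ x): FGSM-style ascent on the binary
    cross-entropy concept loss l(g(x~), c), clipped to an L-infinity budget,
    so the perturbed image keeps the look of x but a diversified concept
    profile."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    x_t = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(W @ x_t)
        grad = (p - c) @ W    # gradient of sum-BCE wrt x for sigmoid outputs
        x_t = x_t + eps * np.sign(grad)              # ascend the concept loss
        x_t = x + np.clip(x_t - x, -budget, budget)  # stay near the original
    return x_t
```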
Once the perturbed sample has been generated, in the outer loop we optimize
the model weights to achieve a good balance between task classification,
concept prediction, and concept robustness. The updated total loss
$\mathcal{L}_{f,g}$ we optimize is,
$\displaystyle\mathcal{L}_{f,g}=\sum_{i}\big[\gamma\,\mathcal{L}_{Y}+\lambda\,\mathcal{L}_{C}+\alpha\,\mathcal{L}_{adv}\big]$ (6)
where $\mathcal{L}_{Y}$ and $\mathcal{L}_{C}$ denote the task classification
loss and the concept prediction loss, respectively, and the adversarial loss
is $\mathcal{L}_{adv}=l(g(\tilde{x}_{i}),c_{i})$ (with $l$ as defined before).
$\gamma$, $\lambda$, and $\alpha$ are tunable weights.
To combine the advantages of the joint and sequential paradigms, we adopt the
hybrid training paradigm described previously: training of the prediction
model is disabled and only the concept model is trained in the first half of
the epochs, after which the prediction model is unfrozen and the whole model
is trained with a lower learning rate. Our empirical investigation shows this
hybrid paradigm outperforms both sequential and joint training alone by a
nontrivial margin. Pseudocode for RCL (Algorithm A) is deferred to the
Appendix due to space limits.
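Restricting to the concept terms of Eq. (6) (the task term $\gamma\mathcal{L}_{Y}$ is omitted for brevity), one outer-minimization step on the same toy linear-sigmoid model might look like the following sketch; `rcl_outer_step` is a hypothetical helper, not the paper's implementation:

```python
import numpy as np

def rcl_outer_step(W, x, x_adv, c, lr=0.5, lam=1.0, alpha=0.5):
    """One outer-minimization step of Eq. (6) for a toy linear-sigmoid
    concept model (the task term gamma * L_Y is omitted for brevity):
    descend lam * L_C(x) + alpha * L_adv(x_adv) with respect to W,
    using the standard sigmoid-BCE gradient outer(p - c, x)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for v, weight in ((x, lam), (x_adv, alpha)):
        p = sigmoid(W @ v)
        W = W - lr * weight * np.outer(p - c, v) / len(c)
    return W
```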
## Experimental Study
### Dataset Description
We test the proposed approaches on the following three datasets, which span
different domains and different levels of required security and trust. For a
standard classification task such as digit or bird identification, a wrong
concept set is not very concerning; for a medical diagnosis, a wrong concept
set can be catastrophic. For a more comprehensive description of the datasets,
please refer to the Appendix.
* •
ConceptMNIST (C-MNIST): We augment the original MNIST dataset by constructing
concepts for each image: 2 physical characteristics of the digit in the image
plus 10 standard non-overlapping concepts representing a one-hot encoding of
the digit, yielding a concept vector of size 12 per image. [Low Risk]
* •
CUB: The Caltech-UCSD Birds-200-2011 dataset (Reed et al. 2016) consists of
photos of 200 classes of birds. Pre-processing is performed exactly as in (Koh
et al. 2020). The final dataset has 112 concepts per class, representing
physical traits of the birds such as wing color and beak size. [Low Risk]
* •
OAI: The Osteoarthritis Initiative (OAI) dataset (Nevitt, Felson, and Lester
2006) consists of X-ray images and clinical data for about 36,000 patients at
risk of knee osteoarthritis, followed over a 4-year study. The task is X-ray
grading into 4 risk categories (KLG score). Each image has 10 medical concepts
derived from the X-ray, such as bone spacing. For a more comprehensive
description, refer to (Pierson et al. 2019). [High Risk]
### Benchmarking and Ablation Study
We train CBMs on all 3 datasets using different training strategies:
sequential and joint, as proposed by (Koh et al. 2020), and hybrid, as
discussed above, with hyperparameters given in the Appendix. For CUB and OAI
we use the hyperparameters of (Koh et al. 2020), and we additionally train
hybrid models on both datasets for comparison with the standard models. We
also train robust models with RCL (Algorithm A, Appendix) under both the joint
and hybrid training paradigms. Task error is classification error for C-MNIST
and CUB and root-mean-square error (RMSE) for OAI, whose prediction label is
continuous; concept error is 0-1 error for C-MNIST and CUB (binary concepts)
and RMSE for OAI (continuous concepts). The benchmark results are reported in
Table 4. As expected, hybrid models lie between the joint and sequential
models in both task and concept performance. The three paradigms thus present
a trade-off between task and concept performance depending on the use case: in
high-risk settings where concept accuracy is paramount (e.g., medical
diagnosis), sequential training can be used; for tasks where small concept
errors are tolerable but prediction performance matters, joint training can be
used; and the hybrid paradigm provides a good compromise between the two.
### Attack Results and Discussion
We report results on a set of 500 randomly chosen samples from the test set of
each of the 3 datasets, skipping samples that (a) have a wrong task prediction
or (b) have concept accuracy $\leq$ 60% for binary-valued concepts (C-MNIST,
CUB) or concept root-mean-square error (RMSE) $\geq$ 0.6 for continuous-valued
concepts (OAI). In all experiments, we first report attack success under the
standard adversarial attack setting on the joint model (Adv. Attack (Joint)),
followed by the proposed attacks on the standard Joint, Sequential, and Hybrid
models, and finally on joint and hybrid models trained with RCL. Since concept
scores are not explicitly used during optimization in the standard adversarial
setting, we expect its attack success metrics to be relatively low;
mathematically, the standard adversarial setting corresponds to Equation 11
(Appendix) with $\beta$ set to 0.
Hyperparameter Selection. All attacks are performed with 2 distinct sets of
hyperparameters. The first set controls properties of the attacks: budget
($\epsilon_{thresh}$), number of steps ($N$), and learning rate ($\epsilon$);
we follow popular benchmarks (github.com/MadryLab/mnist_challenge,
cifar10_challenge) for these choices. The second set controls the influence of
concepts ($\alpha$) and of predictions ($\beta$) on the loss optimized during
the attacks (Equation 11, Appendix). We defer further discussion of
hyperparameters to the Appendix.
| Model | C-MNIST | CUB | OAI |
|---|---|---|---|
| Adv. Attack (Joint) | $4\pm 0$% | $1\pm 0$% | $0\pm 0$% |
| Joint | $67\pm 5$% | $66\pm 7$% | $62\pm 3$% |
| Sequential | $44\pm 4$% | $56\pm 4$% | $54\pm 5$% |
| Hybrid | $51\pm 4$% | $59\pm 6$% | $54\pm 4$% |
| RCL-Joint | $22\pm 2$% | $32\pm 2$% | $5\pm 2$% |
| RCL-Hybrid | $18\pm 2$% | $23\pm 2$% | $1\pm 0$% |
Table 1: Erasure attack success rates on C-MNIST, CUB and OAI, averaged over 3
seeds.
Results on Erasure Attack. Since the erasure attack attempts to remove, or
“flip”, relevant concepts in a sample, we run the attack targeting every
possible concept of each selected sample. For C-MNIST and CUB, a concept is
“flipped” if it is no longer classified as present under sigmoid
classification ($\geq$0.5) after the attack. For OAI, a concept is “flipped”
if its absolute value changes by more than a pre-defined threshold; in the
experiments we set this threshold to 2 (hyperparameter settings: Appendix),
which we believe can produce a significant shift in the medical diagnosis of
knee pain. Table 1 shows the percentage of successful flips for the standard
adversarial attack and for the joint, sequential and hybrid models across all
3 datasets; a higher percentage of flipped concepts implies a more successful
attack. Roughly 60% of concepts are successfully flipped across the 3 datasets
for the joint, sequential and hybrid models, with the highest and lowest
success rates on joint and sequential respectively, as discussed before. Joint
and hybrid models trained with RCL show significantly lower attack success
rates of 18%, 23% and 1% on C-MNIST, CUB and OAI, demonstrating RCL's success
as a defense. Since targeted concept scores are not used during optimization
in the standard adversarial attack, its flip percentages are low (4%, 1% and
0% for C-MNIST, CUB and OAI). Figure 3 illustrates an erasure attack on a
sample from the CUB dataset.
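The flip criteria described above can be made concrete with a small metric helper (a sketch; `flip_rate` is our name, and the 0.5 sigmoid cutoff and threshold of 2 follow the text):

```python
import numpy as np

def flip_rate(scores_before, scores_after, binary=True, thresh=2.0):
    """Erasure success metric sketch. Binary concepts (C-MNIST, CUB):
    a concept is flipped if it was 'present' (score >= 0.5) before the
    attack and 'absent' (< 0.5) after. Continuous concepts (OAI):
    flipped if the absolute change exceeds a threshold (2 in the text)."""
    b = np.asarray(scores_before, dtype=float)
    a = np.asarray(scores_after, dtype=float)
    if binary:
        relevant = b >= 0.5
        flipped = relevant & (a < 0.5)
        return flipped.sum() / max(relevant.sum(), 1)
    return float(np.mean(np.abs(a - b) > thresh))
```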
Figure 3: Top: original image and associated concepts. The following 3 images
show the final concept set after an attack on the selected concept (red);
concepts in red, previously classified as “present”, are selectively attacked
and removed.
Figure 4: Left: Original image and associated concepts. Right: Concepts in red
previously not present in image have been “introduced” in perturbed version.
| Model | C-MNIST %Intro. | C-MNIST %Ret. | CUB %Intro. | CUB %Ret. |
|---|---|---|---|---|
| Adv. Attack (Joint) | $53\pm 2$% | $86\pm 2$% | $8\pm 2$% | $77\pm 4$% |
| Joint | $114\pm 4$% | $96\pm 2$% | $33\pm 5$% | $92\pm 3$% |
| Sequential | $71\pm 2$% | $93\pm 5$% | $30\pm 4$% | $97\pm 2$% |
| Hybrid | $102\pm 2$% | $94\pm 2$% | $31\pm 3$% | $95\pm 2$% |
| RCL-Joint | $18\pm 3$% | $96\pm 2$% | $13\pm 2$% | $93\pm 3$% |
| RCL-Hybrid | $13\pm 2$% | $96\pm 2$% | $23\pm 2$% | $97\pm 2$% |
Table 2: Introduction attack results on C-MNIST and CUB, averaged over 3
seeds. %Intro. denotes the percentage of new concepts introduced relative to
the original concept set; %Ret. denotes the percentage of concepts retained
from the original set. Note: %Intro. exceeds 100% when more concepts are
introduced than were originally present.
Results on Introduction Attack. As opposed to erasure, the introduction attack
attempts to add non-relevant concepts to the predicted concept set of a
perturbed image. Because it specifically targets non-relevant concepts, this
attack is not applicable to data with continuous concept values (e.g., on OAI
all concepts are deemed relevant to the prediction). We report the percentage
of new concepts introduced (%Introduced) in the perturbed image; since the
attack's goal is to introduce previously non-relevant concepts, a higher
%Introduced implies a more successful attack. We also report the percentage of
concepts retained (%Retained) from the original concept set, to verify that
the originally relevant concepts change little (ideally close to 100%). Table
2 shows the percentage of non-relevant concepts successfully introduced under
the standard adversarial setting and by the Joint, Sequential and Hybrid
models on CUB and C-MNIST: around 33%, 30% and 31% of concepts are introduced
on CUB, and 114%, 71% and 102% on C-MNIST, for the joint, sequential and
hybrid models respectively, while the average percentage of relevant concepts
retained remains high ($\geq$90%) for all 3 models. As before, models trained
with RCL are less susceptible to the attack, with introduction percentages
around 20% on both datasets. Also as in the erasure case, because non-relevant
concept scores are not explicitly used during optimization in the standard
adversarial attack, we observe lower values of both the introduced and
retained percentages. Figure 4 visualizes an introduction attack on a sample
from CUB.
Results on Confounding Attack. Confounding combines the erasure and
introduction attacks. Since confounding essentially maximizes the difference
between the original and perturbed concept sets, we report the Jaccard
similarity index (JSI) for binary concepts (CUB, C-MNIST) and the average
(Avg-$\Delta$) and minimum (Min-$\Delta$) absolute change in concept values
for continuous concepts (OAI). Lower JSI values indicate a greater difference
between the concept sets before and after the attack, implying a more
successful confounding attack. Similarly, a high Avg-$\Delta$ implies the
attack disrupts all concept values by a significant amount, whereas a high
Min-$\Delta$ implies that even the smallest concept disruption is still
relatively large, reducing trust in all concept predictions. Table 3 reports
these metrics for the joint, sequential and hybrid models across all 3
datasets. We observe relatively low JSI values (around 0.2 for CUB and 0.4 for
C-MNIST), showcasing the success of the proposed attack, whereas models
trained with RCL show higher JSI values of around 0.5 on both datasets and are
thus less susceptible to the attack. Similarly, on the OAI dataset, RCL models
are far more robust to confounding, with an average absolute change 2 orders
of magnitude smaller than their standard counterparts (0.0019 vs 0.35),
further validating RCL as a defense mechanism. As before, the standard
adversarial attack's JSI is relatively high because neither relevant nor
non-relevant concept scores are used during its optimization (details:
Appendix).
| Model | C-MNIST Jaccard Sim. | CUB Jaccard Sim. | OAI Avg-$\Delta$ | OAI Min-$\Delta$ |
|---|---|---|---|---|
| Adv. Attack (Joint) | $0.61\pm 0.04$ | $0.51\pm 0.03$ | $0.21\pm 0.03$ | $0.03\pm 0.006$ |
| Joint | $0.38\pm 0.03$ | $0.20\pm 0.01$ | $0.57\pm 0.03$ | $0.13\pm 0.001$ |
| Sequential | $0.44\pm 0.02$ | $0.23\pm 0.02$ | $1.06\pm 0.05$ | $0.35\pm 0.001$ |
| Hybrid | $0.41\pm 0.04$ | $0.25\pm 0.03$ | $0.7\pm 0.03$ | $0.21\pm 0.001$ |
| RCL-Joint | $0.52\pm 0.03$ | $0.49\pm 0.05$ | 0.0058 $\pm$ 1e$-$4 | 3.1e$-$4 $\pm$ 1e$-$6 |
| RCL-Hybrid | $0.55\pm 0.05$ | $0.54\pm 0.04$ | 0.0019 $\pm$ 1e$-$4 | 1.8e$-$5 $\pm$ 1e$-$6 |
Table 3: Confounding attack results on C-MNIST, CUB and OAI, averaged over 3
seeds. Jaccard Sim. denotes the Jaccard similarity index (JSI); a lower JSI
implies the concept sets before and after the attack are more dissimilar.
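The confounding metrics reduce to a few lines; the helper names below are ours, with JSI computed on the binary concept sets and Avg-$\Delta$/Min-$\Delta$ on continuous concept values:

```python
def jaccard_similarity(concepts_before, concepts_after):
    """JSI between the predicted concept sets before and after the attack
    (binary concepts, e.g. C-MNIST and CUB); lower means more disrupted."""
    sb, sa = set(concepts_before), set(concepts_after)
    union = sb | sa
    return len(sb & sa) / len(union) if union else 1.0

def delta_metrics(values_before, values_after):
    """Avg-Delta and Min-Delta over continuous concept values (e.g. OAI)."""
    deltas = [abs(a - b) for a, b in zip(values_before, values_after)]
    return sum(deltas) / len(deltas), min(deltas)
```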
Effect of varying attack budget ($\epsilon_{thresh}$): We report additional
results with varying attack budgets in the Appendix. As expected, attack
success rates increase with the attack budget $\epsilon_{thresh}$; however, at
higher $\epsilon_{thresh}$ the perturbations become visually perceptible,
implying a trade-off between attack success and budget.
## Conclusion
In this paper, we conducted the first systematic study of malicious attacks
against concept bottleneck models (CBMs). We first proposed several novel
attack methods (e.g., the concept erasure and concept introduction attacks)
showing that current CBMs are vulnerable to adversarial perturbations. To
defend against such attacks and enhance the robustness of CBMs, we then
proposed a generic adversarial-training-based defense mechanism. Extensive
experiments on real-world datasets both confirm that current CBMs are
vulnerable to malicious perturbations and demonstrate the effectiveness of the
proposed defense.
## Acknowledgments
This work is supported in part by the US National Science Foundation under
grants 2213700, 2217071, 2008208, 1955151. Any opinions, findings, and
conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the National Science
Foundation.
## References
* Adebayo et al. (2018) Adebayo, J.; Gilmer, J.; Muelly, M.; Goodfellow, I.; Hardt, M.; and Kim, B. 2018. Sanity checks for saliency maps. _Advances in neural information processing systems_, 31.
* Alvarez Melis and Jaakkola (2018) Alvarez Melis, D.; and Jaakkola, T. 2018. Towards robust interpretability with self-explaining neural networks. _Advances in neural information processing systems_ , 31.
* Bahadori and Heckerman (2020) Bahadori, M. T.; and Heckerman, D. E. 2020. Debiasing concept bottleneck models with instrumental variables. _arXiv preprint arXiv:2007.11500_.
* Brown and Kvinge (2021) Brown, D.; and Kvinge, H. 2021. Brittle interpretations: The Vulnerability of TCAV and Other Concept-based Explainability Tools to Adversarial Attack. _arXiv preprint arXiv:2110.07120_.
* Chen et al. (2019) Chen, R.; Chen, H.; Ren, J.; Huang, G.; and Zhang, Q. 2019. Explaining neural networks semantically and quantitatively. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 9187–9196.
* Dombrowski et al. (2019) Dombrowski, A.-K.; Alber, M.; Anders, C.; Ackermann, M.; Müller, K.-R.; and Kessel, P. 2019. Explanations can be manipulated and geometry is to blame. _Advances in Neural Information Processing Systems_ , 32.
* Ghorbani, Abid, and Zou (2019) Ghorbani, A.; Abid, A.; and Zou, J. 2019. Interpretation of neural networks is fragile. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, 3681–3688.
* Ghorbani et al. (2019) Ghorbani, A.; Wexler, J.; Zou, J.; and Kim, B. 2019. Towards automatic concept-based explanations. _Advances in Neural Information Processing Systems_.
* Goyal et al. (2019) Goyal, Y.; Feder, A.; Shalit, U.; and Kim, B. 2019. Explaining classifiers with causal concept effect (cace). _arXiv preprint arXiv:1907.07165_.
* Huai et al. (2022) Huai, M.; Liu, J.; Miao, C.; Yao, L.; and Zhang, A. 2022. Towards Automating Model Explanations with Certified Robustness Guarantees. _Proceedings of the AAAI Conference on Artificial Intelligence_.
* Huai et al. (2020) Huai, M.; Sun, J.; Cai, R.; Yao, L.; and Zhang, A. 2020. Malicious Attacks against Deep Reinforcement Learning Interpretations. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, 472–482.
* Jeyakumar et al. (2021) Jeyakumar, J. V.; Dickens, L.; Cheng, Y.-H.; Noor, J.; Garcia, L. A.; Echavarria, D. R.; Russo, A.; Kaplan, L. M.; and Srivastava, M. 2021. Automatic Concept Extraction for Concept Bottleneck-based Video Classification.
* Kim et al. (2018) Kim, B.; Wattenberg, M.; Gilmer, J.; Cai, C.; Wexler, J.; Viegas, F.; et al. 2018. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In _International conference on machine learning_, 2668–2677. PMLR.
* Kindermans et al. (2019) Kindermans, P.-J.; Hooker, S.; Adebayo, J.; Alber, M.; Schütt, K. T.; Dähne, S.; Erhan, D.; and Kim, B. 2019. The (un) reliability of saliency methods. In _Explainable AI: Interpreting, Explaining and Visualizing Deep Learning_ , 267–280. Springer.
* Koh and Liang (2017) Koh, P. W.; and Liang, P. 2017. Understanding black-box predictions via influence functions. In _International Conference on Machine Learning_. PMLR.
* Koh et al. (2020) Koh, P. W.; Nguyen, T.; Tang, Y. S.; Mussmann, S.; Pierson, E.; Kim, B.; and Liang, P. 2020. Concept bottleneck models. In _International Conference on Machine Learning_ , 5338–5348. PMLR.
* Lakkaraju, Arsov, and Bastani (2020) Lakkaraju, H.; Arsov, N.; and Bastani, O. 2020. Robust and stable black box explanations. In _International Conference on Machine Learning_. PMLR.
* Levine, Singla, and Feizi (2019) Levine, A.; Singla, S.; and Feizi, S. 2019. Certifiably robust interpretation in deep learning. _arXiv preprint arXiv:1905.12105_.
* Mahinpei et al. (2021) Mahinpei, A.; Clark, J.; Lage, I.; Doshi-Velez, F.; and Pan, W. 2021. Promises and pitfalls of black-box concept learning models. _arXiv preprint arXiv:2106.13314_.
* Mangla, Singh, and Balasubramanian (2020) Mangla, P.; Singh, V.; and Balasubramanian, V. N. 2020. On Saliency Maps and Adversarial Robustness. _arXiv preprint arXiv:2006.07828_.
* Margeloiu et al. (2021) Margeloiu, A.; Ashman, M.; Bhatt, U.; Chen, Y.; Jamnik, M.; and Weller, A. 2021. Do Concept Bottleneck Models Learn as Intended? _arXiv preprint arXiv:2105.04289_.
* Murty, Koh, and Liang (2020) Murty, S.; Koh, P. W.; and Liang, P. 2020. ExpBERT: Representation Engineering with Natural Language Explanations. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , 2106–2113.
* Nevitt, Felson, and Lester (2006) Nevitt, M.; Felson, D.; and Lester, G. 2006. The osteoarthritis initiative. _Protocol for the cohort study_ , 1.
* Pierson et al. (2019) Pierson, E.; Cutler, D.; Leskovec, J.; Mullainathan, S.; and Obermeyer, Z. 2019. Using machine learning to understand racial and socioeconomic differences in knee pain. NBER Machine Learning and Healthcare Conference.
* Pittino, Dimitrievska, and Heer (2021) Pittino, F.; Dimitrievska, V.; and Heer, R. 2021. Hierarchical Concept Bottleneck Models for Explainable Images Segmentation, Objects Fine Classification and Tracking. _Objects Fine Classification and Tracking_.
* Reed et al. (2016) Reed, S.; Akata, Z.; Lee, H.; and Schiele, B. 2016. Learning deep representations of fine-grained visual descriptions. _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 49–58.
* Sawada and Nakamura (2022) Sawada, Y.; and Nakamura, K. 2022. Concept Bottleneck Model with Additional Unsupervised Concepts. _IEEE Access_.
* Sinha et al. (2021) Sinha, S.; Chen, H.; Sekhon, A.; Ji, Y.; and Qi, Y. 2021. Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing. In _Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP_ , 420–434.
* Slack et al. (2020) Slack, D.; Hilgard, S.; Jia, E.; Singh, S.; and Lakkaraju, H. 2020. Fooling lime and shap: Adversarial attacks on post hoc explanation methods. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_ , 180–186.
* Soni et al. (2020) Soni, R.; Shah, N.; Seng, C. T.; and Moore, J. D. 2020. Adversarial TCAV–Robust and Effective Interpretation of Intermediate Layers in Neural Networks. _arXiv preprint arXiv:2002.03549_.
* Sun et al. (2022) Sun, J.; Huai, M.; Jha, K.; and Zhang, A. 2022. Demystify Hyperparameters for Stochastic Optimization with Transferable Representations. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , KDD ’22, 1706–1716. New York, NY, USA: Association for Computing Machinery. ISBN 9781450393850.
* Suo et al. (2020) Suo, Q.; Zhong, W.; Xun, G.; Sun, J.; Chen, C.; and Zhang, A. 2020. GLIMA: Global and Local Time Series Imputation with Multi-directional Attention Learning. In _IEEE International Conference on Big Data, Big Data 2020, Atlanta, GA, USA, December 10-13, 2020_ , 798–807. IEEE.
* Wu et al. (2020) Wu, W.; Su, Y.; Chen, X.; Zhao, S.; King, I.; Lyu, M. R.; and Tai, Y.-W. 2020. Towards Global Explanations of Convolutional Neural Networks With Concept Attribution. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 8652–8661.
* Xun et al. (2020) Xun, G.; Jha, K.; Sun, J.; and Zhang, A. 2020. Correlation Networks for Extreme Multi-label Text Classification. _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_.
* Yeh et al. (2020) Yeh, C.-K.; Kim, B.; Arik, S.; Li, C.-L.; Pfister, T.; and Ravikumar, P. 2020. On completeness-aware concept-based explanations in deep neural networks. _Advances in Neural Information Processing Systems_ , 33: 20554–20565.
* Zhou et al. (2018) Zhou, B.; Sun, Y.; Bau, D.; and Torralba, A. 2018. Interpretable basis decomposition for visual explanation. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , 119–134.
## Appendix A Appendix
### Dataset Description
We test the proposed attack approaches on the following three datasets, which span different domains and different levels of required security and trust. For a standard classification task such as digit or bird identification, a wrong concept set is not an especially concerning outcome; for a medical diagnosis, however, a wrong concept set can be catastrophic.
* •
ConceptMNIST: We augment the original MNIST dataset which already consists of
images of various handwritten numbers from 0-9 along with their labels with
carefully constructed, human understandable concepts to emulate real-world
image recognition tasks. We propose to augment the dataset with a combination
of overlapping and non-overlapping concepts as detailed below:
1. 1.
Non-Overlapping Concepts: Certain real-world concepts are shared only by samples belonging to a specific class - for example, in a medical diagnosis of cancer, the presence of a specific malignant cell. We attempt to emulate such concepts by creating a vector of concepts that are specific to a single digit class in MNIST. The easiest way to do this is to one-hot encode the label of each handwritten digit to form a vector of size 10. For example, for the number 4 the vector is [0,0,0,0,1,0,0,0,0,0], where each entry corresponds to a binary concept we call "isNumi". Note that no two digit classes can share the same vector representation.
2. 2.
Overlapping Concepts: Next, we emulate real-world concepts that are shared by samples from multiple classes - for example, in the CUB bird-identification dataset, a black body color is shared by multiple bird classes. For MNIST, we consider two physical characteristics of the digits - the presence of a curved line and the presence of a straight line in their standard LaTeX typeset glyphs (the way they are printed in LaTeX typeset formatting). We call these concepts ["CurvedLine:present", "StraightLine:present"]; each is set to 1 if the digit's standard LaTeX glyph contains a curved or straight line, respectively. For example, the number 6 has no straight lines but only curved lines and is hence represented as [1,0], while the number 5 has both straight and curved lines and is hence represented as [1,1]. Meanwhile, the number 7 has only straight lines and is represented as [0,1]. Note that multiple digits can share the same representation; for example, both 0 and 6 are represented by [1,0].
The "isNumi" and ["CurvedLine:present", "StraightLine:present"] concepts described above are concatenated, resulting in a concept vector of size 12 for each image. Note that this construction of concepts is entirely at the user's discretion and can be modified according to whatever the user thinks best interprets a prediction result. For a visual description of the concept annotations, refer to Figure 5.
* •
CUB: The Caltech-UCSD Birds-200-2011 dataset (Reed et al. 2016) consists of 11,788 photos of 200 different classes of birds; the task is to classify each photo into one of the categories. The original dataset contains 312 binary concepts representing physical traits. The dataset is processed exactly as described in (Koh et al. 2020): the final dataset consists of 112 annotated concepts for each class of birds, each concept representing a physical trait such as wing color or beak size. All birds of the same class are ensured to share the same concept annotations.
* •
OAI: The Osteoarthritis Initiative (OAI) dataset (Nevitt, Felson, and Lester 2006) consists of X-ray images and clinical data for about 36,000 patients at risk of knee osteoarthritis, collected over a 4-year study. The task is to perform X-ray grading, classifying each image into 4 risk categories corresponding to the Kellgren-Lawrence grade (KLG), which measures the severity of osteoarthritis (higher grades imply higher severity). Each image is annotated with 10 concepts derived from medical assessments of the X-ray images, such as bone spacing. For more comprehensive dataset details and processing, refer to (Koh et al. 2020) and (Pierson et al. 2019). Note that the OAI dataset is not publicly available and requires special permissions, as detailed in (Pierson et al. 2019).
Figure 5: Non-overlapping and overlapping concept construction considered
during construction of ConceptMNIST dataset.
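As a concrete illustration, the ConceptMNIST concept-vector construction described above can be sketched in a few lines of Python. The function name is ours, and the curved/straight flags for a given digit are assumed to follow the LaTeX-glyph convention in the text:

```python
def concept_vector(digit, curved, straight):
    """Build the 12-dim ConceptMNIST vector: a one-hot "isNumi" block
    (10 entries) followed by [CurvedLine:present, StraightLine:present]."""
    one_hot = [1 if i == digit else 0 for i in range(10)]
    return one_hot + [int(curved), int(straight)]

# e.g. the digit 4, whose standard glyph has only straight lines
vec = concept_vector(4, curved=False, straight=True)
```

The same helper reproduces the examples in the text: 6 gets the tail [1,0], 5 gets [1,1], and 7 gets [0,1].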
### Model Training Objectives
(Koh et al. 2020) propose 3 different training strategies for Concept Bottleneck Models (CBMs); we consider the joint and sequential training strategies for all 3 datasets, along with the hybrid strategy discussed in the Methodology section. As (Koh et al. 2020) have demonstrated, joint model training mostly outperforms the sequential approach on task errors and has comparable performance on concept errors. However, as discussed previously, both strategies have their respective advantages, with joint training offering higher prediction accuracies and sequential training offering more robust concept learning. In addition, joint model training is more time-efficient due to its end-to-end nature and is more flexible for downstream tasks. Although (Koh et al. 2020) utilize 3 different training objectives (Section 3 of (Koh et al. 2020)), we formulate them as a single unified, general-purpose objective function for all types of model training.
$\mathcal{L}_{f,g}=\Sigma_{i}[\gamma*l(f(g(x_{i}));y_{i})+\lambda*\Sigma^{T}_{j}l(g_{j}(x_{i});c^{j}_{i})]$
(7)
* •
Sequential Objective: We use Equation 7 with $\gamma=0$ to first train the model $g$, and subsequently set $\gamma=1$ and $\lambda=0$ to train the model $f$.
* •
Joint Objective: Equation 7 can be used directly, with appropriate weights for $\gamma$ and $\lambda$, to train both models $f$ and $g$ in tandem.
* •
Hybrid Objective: We use Equation 7 with $\gamma=0$ to first train the model $g$, and subsequently set $\gamma=1$ and $\lambda$ to an appropriate concept-weight value to train both models $f$ and $g$ in tandem.
As discussed before, for the CUB and OAI datasets we set the concept weight $\lambda$ to 0.001 and 1, respectively (as reported in (Koh et al. 2020)), while for ConceptMNIST we set $\lambda$ to 0.5 for all subsequent experiments. The
learning rates $\omega$ for C-MNIST is 0.01 and $\omega^{{}^{\prime}}$ is set
as 0.005. For CUB, we follow (Koh and Liang 2017) with $\omega$ at 0.01 and
$\omega^{{}^{\prime}}$ 0.005 while for OAI $\omega$ and $\omega^{{}^{\prime}}$
is set as 0.0008 and 0.0005 respectively. We use SGD momentum as the optimizer
and follow tuning rules in (Sun et al. 2022) to set the hyperparameters of the
optimizer.
| Model | C-MNIST Task | C-MNIST Concept | CUB Task | CUB Concept | OAI Task | OAI Concept |
|---|---|---|---|---|---|---|
| Joint | 0.01 | 0.03 | 0.19 | 0.13 | 0.38 | 0.58 |
| Sequential | 0.02 | 0.03 | 0.24 | 0.07 | 0.41 | 0.56 |
| Hybrid | 0.01 | 0.03 | 0.21 | 0.11 | 0.40 | 0.58 |
| RCL-Joint | 0.02 | 0.03 | 0.27 | 0.19 | 0.44 | 0.55 |
| RCL-Hybrid | 0.02 | 0.03 | 0.28 | 0.18 | 0.45 | 0.53 |
Table 4: Benchmark and replication Task and Concept Errors for 3 datasets over
5 model training paradigms. Models are trained using Joint and Sequential
approaches as proposed by (Koh et al. 2020) and hybrid approach as discussed
in the Methodology section. Models RCL-Joint and RCL-Hybrid are trained
utilizing RCL with joint and hybrid training strategies respectively. Numbers
reported are averaged over 3 different seeds with standard deviation under 3%.
### Model Architectures
All CBMs consist of two distinct networks, $f$ and $g$. We utilize model architectures similar to those proposed in (Koh et al. 2020) for the CUB and OAI datasets. The network $g$ maps inputs to the concept space, while the network $f$ maps concepts to the output space.
* •
ConceptMNIST: The network $g$ consists of 2 convolutional layers with 32
channels each, along with a maxpool layer in between followed by a fully
connected layer. The network $f$ consists of a fully connected layer. The
fine-tuning is done end-to-end with the weight 0.1 given to the concept loss.
We train the model with batch size set to 64, learning rate set to 1e-4 for a
total of 20 epochs.
* •
CUB: The network $g$ consists of an Inception V3 pre-trained on ImageNet, and the network $f$ consists of a one-layer MLP. The fine-tuning is done end-to-end with a weight of 0.001 given to the concept loss. The rest of the hyperparameters are unchanged from (Koh et al. 2020).
* •
OAI: For the OAI dataset, we utilize the architecture provided in (Koh et al. 2020), which consists of a ResNet-18 with the last 12 layers unfrozen. The prediction model is a 3-layer MLP with 50 neurons in each of the first 2 layers, while the last layer outputs a regression score. The training hyperparameters are used unchanged.
The final benchmark task and concept losses are reported in the main paper. As can be seen, models trained with the standard objective and those trained with robust concept learning via adversarial training obtain similar task and concept errors.
### Attack Formulation
Recall that our attack framework’s objective follows the form detailed in
Equation 8.
$\displaystyle\max_{||\delta||_{\infty}\leq\epsilon_{thresh}}\quad G(\mathcal{U}(x;f,g),\mathcal{U}(x+\delta;f,g))$ (8)
$\displaystyle\text{s.t.}\quad argmax\,f(x+\delta)=argmax\,f(x),\quad x+\delta\in[0,1]^{D}$
The goal of the objective function is to maximize the difference between the concept-based explanations generated before and after the adversarial attack. Importantly, the objective is subject to two constraints: first, that the model predictions for the original and attacked samples remain identical; and second, that the generated adversarial perturbation is imperceptible.
The difference between the original and attacked samples' concepts to be maximized, denoted $\mathcal{D}$, takes a different form for each attack, as detailed below.
* •
For erasure attack, we attempt to minimize the concept score of a selected
target concept, hence the selected set contains only the target concept score
denoted by $C_{S_{target}}$ such that:
$\displaystyle C_{S_{target}}=\\{s_{target}\mid s_{target}\in\mathbb{I}(\sigma(\hat{g}(x_{i})))\\}$
$\displaystyle\mathcal{D}(x_{i})=\Sigma\,(C_{S_{target}})$
* •
For the introduction attack, we attempt to maximize the concept scores of concepts which are not relevant to the initial model prediction. Therefore, we include all non-relevant concept scores in the selected set, denoted by $C_{S_{non-rel}}$, such that:
$\displaystyle C_{S_{non-rel}}=\\{s_{i}\mid s_{i}\notin\mathbb{I}(\sigma(\hat{g}(x_{i})))\\}$
$\displaystyle\mathcal{D}(x_{i})=\Sigma(C_{S_{non-rel}})$
* •
For confounding attack, we simultaneously maximize concept scores of all
relevant concepts and minimize concept scores of non-relevant concepts. The
final objective is a weighted sum of relevant and non-relevant concept sets.
Therefore, we include all relevant and non-relevant concept scores in the
selected set denoted by $C_{S_{rel}}$ and $C_{S_{non-rel}}$ such that:
$\displaystyle C_{S_{rel}}=\\{s_{i}\mid s_{i}\in\mathbb{I}(\sigma(\hat{g}(x_{i})))\\}$
$\displaystyle C_{S_{non-rel}}=\\{s_{i}\mid s_{i}\notin\mathbb{I}(\sigma(\hat{g}(x_{i})))\\}$
$\displaystyle\mathcal{D}(x_{i})=\Sigma(C_{S_{non-rel}})+\frac{\gamma}{\beta}*\Sigma(C_{S_{rel}})$
However, it is not enough to maximize $\mathcal{D}$ alone, without considering its effect on the model prediction. Recall that the first constraint requires model predictions to remain identical after the attack. To this end, we propose to maximize the prediction score of the original sample as output by the prediction model $\hat{f}$. We capture this information in $\mathcal{P}$, which is maximized in conjunction with $\mathcal{D}$ to keep model predictions identical.
We have two different types of model outputs: standard softmax label-wise outputs for the CUB and ConceptMNIST datasets, and a single regressed value for the OAI dataset. The corresponding forms of $\mathcal{P}$ are detailed below:
* •
Label-wise outputs (like CUB, ConceptMNIST):
$\displaystyle\mathcal{P}(x_{i})=-\|\hat{f}(\hat{g}(x_{i}))_{y_{gt}}\|_{2}\,\,$
(9) $\displaystyle\text{where $y_{gt}$ is index of predicted label}.$
* •
Continuous outputs (like OAI):
$\displaystyle\mathcal{P}(x_{i})=-|(f(g(x))-y_{gt})|\,\,$ (10)
$\displaystyle\text{where $y_{gt}$ is predicted value}.$
The imperceptibility constraint is ensured by stopping the optimization once the perturbation reaches a pre-defined $\epsilon_{thresh}$ value, which is kept small enough to ensure visual imperceptibility.
The final objective function $\mathcal{L}$ is thus a weighted sum of
$\mathcal{D}$ and $\mathcal{P}$ to maximize the difference in concept values
as well as ensure identical model predictions, which is given as follows:
$\displaystyle\mathcal{L}(x_{i})=\alpha*\mathcal{P}(x_{i})+\beta*\mathcal{D}(x_{i}).$
(11)
Implementation details for the erasure, introduction, and confounding attacks are given in the Attack Algorithms subsection below.
### Attack Settings
Erasure. We set the $\epsilon$ value to $10^{-4}$. We limit our optimization process to 1000 steps and the maximum $L_{\infty}$ value to $\epsilon_{thresh}$, so as to ensure visual imperceptibility of the added noise. The $\epsilon_{thresh}$ for the CUB, ConceptMNIST, and OAI datasets is set to 0.2, 30, and 10 respectively for all attacks. We also report results for increasing $L_{\infty}$ values in Table 5. For all datasets, $\alpha$ and $\beta$ are both set to 1.
Introduction. As before, we set the $\epsilon$ value to $10^{-4}$. We limit our optimization process to 5000 steps and the maximum $L_{\infty}$ value to 0.2, 30, and 10 for the CUB, ConceptMNIST, and OAI datasets respectively, as before. We set $\beta$ to 5 to give higher weight to non-relevant concepts and set $\alpha$ to 1.
Confounding. The confounding attack is a combination of the erasure and introduction attacks. Again we set the $\epsilon$ value to $10^{-4}$, limit our optimization process to 5000 steps, and limit the maximum $L_{\infty}$ value to 0.2, 30, and 10 for the CUB, ConceptMNIST, and OAI datasets respectively. For the CUB and ConceptMNIST datasets, the values of $\beta$ and $\gamma$ are set to 10 and 5 respectively. For the OAI dataset, the value of $\beta$ is set to 0.5.
### Attack Algorithms
This section details the specific algorithms to perform general purpose
erasure, introduction and confounding attacks. Algorithm 1 details general
form of the Erasure attacks as discussed. Algorithm 2 details a general form
of confounding attacks. Note that confounding attack with $\gamma$ set to 0 is
identical to introduction attack.
Result: Adversarially perturbed image with the same prediction label and a concept set with the targeted index absent
1 for each image $x_{i}$ in the test set do
2 $C^{orig}\leftarrow\sigma(\hat{g}(x_{i}))$
3 $y_{gt}\leftarrow argmax\;\hat{f}(\hat{g}(x_{i}))$
4 $x^{\prime}_{i}\leftarrow x_{i}$; $step\leftarrow 0$; $flips\leftarrow 0$
5 while $argmax\;\hat{f}(\hat{g}(x^{\prime}_{i}))==y_{gt}$ and $step\leq N$ do
6 $C_{S_{target}}\leftarrow|\sigma(\hat{g}(x^{\prime}_{i}))|_{y_{gt}}$
7 $\mathcal{L}_{x_{i}}=\alpha*\|\hat{f}(\hat{g}(x^{\prime}_{i}))_{y_{gt}}\|_{2}+\beta*\Sigma(C_{S_{target}})$
8 $x^{\prime}_{i}\leftarrow x^{\prime}_{i}+\epsilon*sign(\nabla_{x^{\prime}_{i}}\mathcal{L}_{x_{i}})$
9 $x^{\prime}_{i}\leftarrow clamp(x^{\prime}_{i},0,1)$
10 if $L_{\infty}(x^{\prime}_{i})\geq\epsilon_{thresh}$ then break
11 if $\sigma(C_{idx}^{pred})<0.5$ then $flips\leftarrow flips+1$; continue
12 $step\leftarrow step+1$
13 end while
14 end for
Algorithm 1 Algorithm to perform the concept erasure attack
Result: Adversarially perturbed image with the same prediction label and a different concept set
1 for each image $x_{i}$ in the test set do
2 $x^{\prime}_{i}\leftarrow x_{i}$
3 $C^{orig}\leftarrow\sigma(\hat{g}(x_{i}))$
4 $S_{non-rel}\leftarrow\mathbb{I}(C^{orig}<0.5)$
5 $S_{rel}\leftarrow\mathbb{I}(C^{orig}\geq 0.5)$
6 $y_{gt}\leftarrow argmax\;\hat{f}(\hat{g}(x_{i}))$
7 $step\leftarrow 0$
8 while $argmax\;\hat{f}(\hat{g}(x^{\prime}_{i}))==y_{gt}$ and $step\leq N$ do
9 $C_{pred}\leftarrow\sigma(\hat{g}(x^{\prime}_{i}))$
10 $\mathcal{L}_{x_{i}}=\alpha*\|\hat{f}(\hat{g}(x^{\prime}_{i}))_{y_{gt}}\|_{2}+\beta*\Sigma(C_{S_{non-rel}})+\gamma*\Sigma(C_{S_{rel}})$
11 $x^{\prime}_{i}\leftarrow x^{\prime}_{i}+\epsilon*sign(\nabla_{x^{\prime}_{i}}\mathcal{L}_{x_{i}})$
12 $x^{\prime}_{i}\leftarrow clamp(x^{\prime}_{i},0,1)$
13 if $L_{\infty}(x^{\prime}_{i})\geq\epsilon_{thresh}$ then break
14 $step\leftarrow step+1$
15 end while
16 end for
Algorithm 2 A standard algorithm to perform introduction and confounding attacks
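To make the loop structure of these attacks concrete, here is a minimal runnable sketch of an erasure-style attack on a toy linear CBM. NumPy, the stand-ins for $f$ and $g$, and all names are our own; the sketch descends the target concept's score under an $L_{\infty}$ budget while keeping the predicted label fixed, a simplification of Algorithm 1 rather than the exact objective used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
Wg = rng.normal(size=(12, 784))   # toy concept model g: image -> 12 concept logits
Wf = rng.normal(size=(10, 12))    # toy predictor f: concepts -> 10 class logits

def g(x):
    return Wg @ x

def f(c):
    return Wf @ c

def erasure_attack(x, target, eps=1e-2, eps_thresh=0.3, n_steps=200):
    """Lower the target concept's logit while the predicted label stays fixed."""
    y0 = int(np.argmax(f(g(x))))
    x_adv = x.copy()
    grad = Wg[target]            # d(target logit)/dx is constant for a linear g
    for _ in range(n_steps):
        x_new = np.clip(x_adv - eps * np.sign(grad), 0.0, 1.0)
        if np.max(np.abs(x_new - x)) > eps_thresh:   # L_inf budget exceeded: stop
            break
        if int(np.argmax(f(g(x_new)))) != y0:        # label would flip: stop
            break
        x_adv = x_new
    return x_adv, y0

x = rng.random(784)
x_adv, y0 = erasure_attack(x, target=3)
```

By construction the returned sample always keeps the original label and stays inside the $L_{\infty}$ budget, mirroring the two constraints of Equation 8.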
### Evaluation Metrics
In the following, we describe the evaluation metrics in greater detail.
* •
Percentage of concepts flipped: Erasure and confounding attacks for binary
valued concepts attempt to “flip” concepts already present in a sample, i.e.,
the concept is no longer predicted as being relevant to prediction after the
attack. Equation 12 details the calculation of percentage of flipped concepts
for each attacked sample. The terms $C^{orig}$ and $C^{pert}$ denote the
concept sets of the original sample and the sample after attack respectively.
The indicator function $\mathbb{I}$ returns 1 if concept $i$ is present in the
set, else returns $0$. Note that $C^{pert}$ is generated separately for each
target concept.
$\text{\%
Flipped}=\frac{1}{|C^{orig}|}\Sigma_{i}^{|C^{orig}|}\|\mathbb{I}(C^{orig}_{i})-\mathbb{I}(C^{pert}_{i})\|$
(12)
However, the notion of concepts “present” does not make sense in datasets with
real-valued continuous concept annotations. This is because each value of each
concept has a certain degree of physical manifestation in the sample -
implying a concept is always present. Hence, we consider a concept removed if
the absolute difference between the original concept set and the concept set
after attack is greater than a threshold value $\delta_{thresh}$. Equation 13
details the calculation of percentage of flipped concepts for each attacked
sample. The indicator function $\mathbb{I}$ returns 1 if the condition is
true, else returns 0. Note that $C^{pert}$ is generated separately for each
target concept.
$\text{\%Flipped}=\frac{1}{|C^{orig}|}\Sigma_{i}^{|C^{orig}|}\mathbb{I}(|C^{orig}_{i}-C^{pert}_{i}|>\delta_{thresh})$
(13)
* •
Percentage of concepts introduced and retained: Introduction attacks attempt to introduce concepts which are not already present in the sample's concept prediction and are non-relevant to the sample's prediction. To this effect, we report two relevant metrics: the concept introduction percentage, which refers to the number of new concepts introduced relative to the original concept set, as detailed in Equation 14; and, since we are only interested in new concept introductions, the concept retention percentage, which refers to the number of concepts shared with the original concept set, as detailed in Equation 15.
$\text{\%Concepts Introduced}=\frac{|C^{pert}\setminus C^{orig}|}{|C^{orig}|}$ (14)
$\text{\%Concepts Retained}=\frac{|C^{orig}\cap C^{pert}|}{|C^{orig}|}$ (15)
* •
Jaccard Similarity Index (JSI): JSI measures the similarity (or dissimilarity) between two different concept sets. We utilize the JSI value to understand the dissimilarity of concept sets predicted before and after the confounding attacks. Mathematically, it can be thought of as the fraction of shared set elements with respect to the combined total set elements, also called Intersection over Union. In general, the lower the JSI value, the more different the input sets are. The exact mathematical formulation is detailed in Equation 16.
$\text{JSI}=\frac{|C^{orig}\cap C^{pert}|}{|C^{orig}\cup C^{pert}|}$ (16)
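The metrics above can be sketched directly on Python sets of concept indices. Function names are ours; the set-based forms follow the prose descriptions of flipping, introduction, retention, and Intersection over Union:

```python
def pct_flipped(orig, pert, n_concepts):
    """Fraction of the n_concepts binary concepts whose presence changed
    (symmetric difference of the two sets)."""
    return len(orig ^ pert) / n_concepts

def pct_flipped_continuous(c_orig, c_pert, delta_thresh):
    """Continuous variant: fraction of concepts moved beyond delta_thresh."""
    changed = sum(1 for a, b in zip(c_orig, c_pert) if abs(a - b) > delta_thresh)
    return changed / len(c_orig)

def pct_introduced(orig, pert):
    """Concepts newly present after the attack, relative to the original set size."""
    return len(pert - orig) / len(orig)

def pct_retained(orig, pert):
    """Original concepts still present after the attack, relative to the original set size."""
    return len(orig & pert) / len(orig)

def jaccard(orig, pert):
    """Intersection over union of the two concept sets."""
    return len(orig & pert) / len(orig | pert)
```

For example, with an original set {0, 1, 2} and a post-attack set {1, 2, 3}, one concept is introduced, two are retained, and the Jaccard index is 0.5.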
Output: Trained models $\hat{f},\hat{g}$ with increased concept robustness
Input: Image Set $X$, Concept Set $C$, Label Set $Y$, Training Corpus {$x_{i}\in X$; $c_{i}\in C$; $y_{i}\in Y$}, Epochs $N$, Models $f,g$, Learning Rates $\omega,\omega^{\prime}$, Step Size $\epsilon$, Max Iterations $num$
1 $epoch\leftarrow 0$
2 while $epoch\leq N$ do
3 Sample a batch of training samples $Z=\\{(x_{i},c_{i},y_{i})\,|\,i=1,2,...,B\\}$
4 $\mathcal{L}_{C}\leftarrow\Sigma^{T}_{j}l(g_{j}(x_{i}),c^{j}_{i})$ [note: $l$ - binary cross-entropy / mean-squared-error loss]
5 $\mathcal{L}_{Y}\leftarrow l(f(g(x_{i})),y_{i})$ [note: $l$ - cross-entropy loss]
6 $s\leftarrow 0$; $\tilde{x_{i}}\leftarrow x_{i}$
7 while $s\leq num$ do
8 $\tilde{x_{i}}\leftarrow\tilde{x_{i}}+\epsilon*sign(\nabla_{\tilde{x_{i}}}\Sigma^{B}_{i=1}l(g(\tilde{x_{i}}),c_{i}))$ [note: $l$ - binary cross-entropy / mean-squared-error loss]
9 $s\leftarrow s+1$
10 end while
11 $\mathcal{L}_{adv}\leftarrow l(f(g(\tilde{x_{i}})),y_{i})$
12 $\mathcal{L}_{f,g}=\Sigma^{B}_{i}[\gamma*\mathcal{L}_{Y}+\lambda*\mathcal{L}_{C}+\alpha*\mathcal{L}_{adv}]$
13 if $epoch\leq N/2$ then $\theta_{g}\leftarrow\theta_{g}-\omega*\nabla_{\theta_{g}}\mathcal{L}_{f,g}$
14 else $\theta_{f,g}\leftarrow\theta_{f,g}-\omega^{\prime}*\nabla_{\theta_{f,g}}\mathcal{L}_{f,g}$
15 $epoch\leftarrow epoch+1$
16 end while
Algorithm 3 Robust Concept Learning (RCL)
### Robust Concept Learning (RCL)
We detail the exact implementation of the proposed defense method, Robust Concept Learning (RCL), in this subsection. As discussed before, the essential reason behind the lack of robustness of CBMs is the lack of rich concept information during training. To alleviate this problem, we propose adversarial augmentation to further improve the smoothness of the concept space. Algorithm 3 iteratively generates increasingly diverse images by optimizing the adversarial loss $\mathcal{L}_{adv}$. The final model optimization uses a weighted sum of all 3 losses: prediction, concept, and adversarial. The value of $\alpha$ is set to 1 for the C-MNIST, CUB, and OAI datasets.
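RCL's inner augmentation step can be sketched with a linear stand-in for $g$ and a squared-error concept loss; the toy dimensions and all names here are our own, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
Wg = rng.normal(size=(12, 64))    # toy linear concept model: x -> 12 concept scores

def concept_loss(x, c):
    # squared-error concept loss, one of the choices noted in Algorithm 3
    return 0.5 * np.sum((Wg @ x - c) ** 2)

def augment(x, c, eps=0.01, num=5):
    """FGSM-style ascent on the concept loss to generate harder training samples."""
    x_tilde = x.copy()
    for _ in range(num):
        grad = Wg.T @ (Wg @ x_tilde - c)   # gradient of the loss w.r.t. the input
        x_tilde = x_tilde + eps * np.sign(grad)
    return x_tilde

x, c = rng.random(64), rng.random(12)
x_tilde = augment(x, c)
```

Because the loss is convex in the input here, each signed-gradient step is guaranteed not to decrease it, so the augmented sample is at least as hard for the concept model as the original.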
### Additional Results
We report additional results on standard joint models and on hybrid adversarially concept-trained models with higher concept robustness, over all 3 datasets and attacks for varying $L_{\infty}$ values. As previously discussed, the $L_{\infty}$ value is a direct measure of visual imperceptibility, i.e., the lower the value, the more imperceptible the adversarial perturbations introduced in an image. As we postulated, an attack is much more effective and malicious if maximum attack success can be obtained with small $L_{\infty}$ values. We report the evaluation metrics for all 3 attacks in Tables 5, 6 and 7 for all 3 datasets. As evident from all 3 tables, models trained using the proposed Robust Concept Learning with Adversarial Training algorithm are much more robust to attacks at small $L_{\infty}$ values, thus improving trust and reliability in CBMs. The values for the budget $\epsilon_{thresh}$ are chosen according to standard benchmarks for adversarial attacks, e.g. https://github.com/MadryLab/mnist_challenge. As MNIST has a smaller latent representation, its $\epsilon_{thresh}$ values are relatively high, while CUB is a very high-dimensional dataset, so its $\epsilon_{thresh}$ values are relatively low.
| $L_{\infty}$ | C-MNIST 10 | C-MNIST 30 | C-MNIST 50 | CUB 0.2 | CUB 0.4 | CUB 0.5 | OAI 10 | OAI 15 | OAI 20 |
|---|---|---|---|---|---|---|---|---|---|
| Joint | 53 | 67 | 70 | 65 | 99 | 99 | 62 | 71 | 74 |
| RCL-Hybrid | 18 | 22 | 24 | 23 | 31 | 36 | 1 | 1 | 1 |
Table 5: Erasure attack success percentages for varying $L_{\infty}$ values.
The higher the $L_{\infty}$, the lower the visual imperceptibility. As
expected attack success percentages increase with increasing $L_{\infty}$.
### Additional visual results
We also present additional visual results for Erasure attacks on the
ConceptMNIST (top-left), CUB (top-right) and OAI datasets (bottom-center) in
Figure 6. The bars in the first row of each image represent the initial
concept scores as predicted by the model. The bars in color red are randomly
chosen concepts we attempt to and successfully flip in rows 2 and 3 of each
image. (Note: we attempt to flip all present concepts, but demonstrate only 2
random successfully flips).
Visual results for Introduction attacks on the ConceptMNIST (top-left), CUB
(top-right) and OAI datasets (bottom-center) are reported in Figure 7. The
first image and the bars below it represent the initial image and the concept
scores as predicted by the model, while the second image and bars below it
represents the image and concept predictions after the attack. The bars in
color red are previously non-relevant concepts which were successfully
introduced after attack. The images in the first and last row are for
ConceptMNIST and CUB datasets, respectively.
We release code for data processing, attacks, and results on the ConceptMNIST dataset. Upon acceptance, we will also release code for the CUB dataset. As the OAI dataset requires special permissions, only limited code will be released for it.
| $L_{\infty}$ (%Intro / %Ret) | C-MNIST 30 | C-MNIST 40 | C-MNIST 50 | CUB 0.2 | CUB 0.4 | CUB 0.5 |
|---|---|---|---|---|---|---|
| Joint | 118 / 96 | 140 / 94 | 145 / 92 | 33 / 92 | 42 / 91 | 47 / 87 |
| RCL-Hybrid | 13 / 96 | 22 / 95 | 26 / 92 | 7 / 61 | 12 / 97 | 14 / 97 |
Table 6: Introduction attack introduction and retention rates for varying
$L_{\infty}$ values. The higher the $L_{\infty}$, the lower the visual
imperceptibility. As expected introduction rates increase with increasing
$L_{\infty}$.
| $L_{\infty}$ | C-MNIST 30 (JSI) | C-MNIST 40 (JSI) | C-MNIST 50 (JSI) | CUB 0.2 (%Intro / %Ret) | CUB 0.4 (%Intro / %Ret) | CUB 0.5 (%Intro / %Ret) | OAI 10 (Avg / Min) | OAI 15 (Avg / Min) | OAI 20 (Avg / Min) |
|---|---|---|---|---|---|---|---|---|---|
| Joint | 0.38 | 0.32 | 0.30 | 13 / 31 | 15 / 30 | 18 / 28 | 0.56 / 0.13 | 0.60 / 0.13 | 0.62 / 0.13 |
| RCL-Hybrid | 0.55 | 0.52 | 0.51 | 7 / 61 | 9 / 58 | 12 / 51 | 1.9e-3 / 1.8e-5 | 2.6e-3 / 3.9e-5 | 5.1e-3 / 4.1e-5 |
Table 7: Confounding attack introduction and retention percentages for varying $L_{\infty}$ values. The higher the $L_{\infty}$, the lower the visual imperceptibility. As expected, introduction percentages increase and retention percentages decrease with increasing $L_{\infty}$.
Figure 6: A few examples on the ConceptMNIST (top-left, upper row), CUB (top-right, upper row) and OAI dataset (lower row) demonstrating the erasure attacks.
Figure 7: A few examples on the ConceptMNIST (upper row) and CUB dataset (lower row) demonstrating the introduction attacks.
11institutetext: 1 Anton Pannekoek institute for Astronomy (API), University
of Amsterdam, Science Park 904, 1098XH Amsterdam
2 Department of Astronomy, Tsinghua University, 30 Shuangqing Rd, 100084
Beijing, PR China
# Forming equal-mass planetary binaries via pebble accretion
T.J. Konijn1 Corresponding author<EMAIL_ADDRESS>R.G. Visser1 C. Dominik1 C.W.
Ormel2
(Received 10 October 2022 / Accepted 29 November 2022)
###### Abstract
Context. Binary Solar System objects are common, ranging from satellite
systems with very large mass ratios, $M_{1}/M_{2}$, to those with mass ratios
approaching unity. One well-known example of a binary is the Pluto-Charon
system. With Charon being ’only’ eight times less massive than Pluto, the
question arises (as in the case of many other systems) as to why the mass
ratio is still close to unity. There is much evidence that (binary)
planet(esimal) formation happened early, when the protoplanetary gas disk was
still present. It is likely that (at least some of) these binaries evolved
together, as a result of pebble accretion. Pebble accretion is a new key
paradigm in planetary formation and it is believed to play a major role in
many aspects of the formation of planetary systems, from the radial transport
of material to the rapid growth of planetary embryos throughout the system.
Aims. Here, we focus on the question of how the mass arriving in the
gravitational influence zone of the binary during pebble accretion is
distributed over the binary components for a given initial mass ratio. We also
consider whether accretion over time leads to equal-mass binaries (converging
mass ratio) or to a dominant primary mass with a small moon (diverging mass
ratio).
Methods. We numerically integrated two-dimensional (2D) pebble trajectories in
the same typical fashion as for a single mass that is subject to pebble
accretion. We tracked the efficiency of accretion for the two separate binary
components, compared to a single body with the same mass. These numerical
simulations were done for a range of binary mass ratios, mutual separations,
Stokes numbers, and two orbital distances, 2.5 and 39 au.
Results. We find that in the limit where pebbles start to spiral around the
primary (this holds for relatively large pebbles), the pebble preferentially
collides with the secondary, causing the mass ratio to converge towards unity.
In our tested case, where the total binary mass is equal to that of the Pluto-Charon system, this takes place on $\sim$Myr timescales. In this regime, the total sweep-up efficiency can drop to half that of a pebble-accreting single body, because pebbles are thrown out of the system after close encounters with the binary. These timescales and sweep-up efficiencies are calculated under the assumption that our 2D simulations are representative of the 3D reality. The results show that systems such as Pluto-Charon and other larger equal-mass binaries may well have co-accreted by means of pebble accretion in the disk phase, without producing binaries with highly diverging mass ratios.
###### Key Words.:
Pebble Accretion – Binary Planetesimals – Planetary formation – Streaming
instability
## 1 Introduction
Roughly twenty percent of the cold classicals in the Kuiper belt are binary
systems (Noll et al., 2008), with the mass ratio of the components being close
to unity. A well-known example is the Pluto-Charon system (Christy &
Harrington, 1978) located at an orbital distance from the Sun of around 39 au.
With Pluto being roughly eight times as massive as Charon, it is often
categorised as a nearly equal-mass binary. Recent studies have shown, with increasing certainty, that equal-mass binaries are not rare in the Kuiper Belt, and their occurrence rate is estimated to be at least several percent (Noll et al., 2008; the authors classify these as ”equal size binaries”, but internal densities are very similar for cold classicals, making this equivalent to the definition of equal-mass binaries).
In addition, the asteroid belt features numerous objects with companions
(Merline et al., 2002). Some of these binaries have similar masses (e.g. 90
Antiope, 2006 VW139, 2017 YE5, and 69230 Hermes (Marchis et al., 2008, 2004;
Agarwal et al., 2017; Marchis et al., 2012)), while others have large mass
ratios, so that calling the smaller objects asteroid moons or satellites and
the larger ones planet or primary would seem more correct.
The Earth-Moon system is a binary with a relatively large mass ratio as well, although here the definition of an equal-mass binary reveals its ambiguity. The most recent definition of 'equal-mass' and 'near-equal-mass' requires the barycenter to lie outside both companions (Stern & Levison, 2002). Another way of categorising the mass equality of binaries is the 'tug-of-war' ratio, which expresses the planetary gravity over the solar gravity for a satellite. Equal-mass planet binaries are often also called 'double planets', making the definition of an equal-mass binary even more vague: the Moon ticks off all the criteria for being a planet on its own and the tug-of-war ratio for the Earth-Moon system is $0.46$ (Herschel, 1833), but the barycenter lies within the body of the Earth, making the Earth-Moon system a double-planet system, an equal-mass binary, or a primary (Earth) and secondary (Moon) system, depending on the definition used (Russell, 2017).
The formation of binary planets is still an open question and a number of
channels have been proposed. In the classical model it is thought, at least in
the case of the Pluto-Charon (Canup, 2005, 2011) and Earth-Moon systems, that
they formed via giant impacts on the primaries, which ejected debris that
accumulated into stable orbiting secondaries (Cameron & Ward, 1976; Canup,
2012). This scenario
requires specific impact parameters that ensure sufficient angular momentum in
the ejected debris to allow for the formation of an object outside the Roche
limit.
A second viable mechanism to create binary planet(esimals) is capture via
three-body interactions in an early, dense phase of the solar system, where
such interactions were much more frequent than they are today (Goldreich et
al., 2002; Astakhov et al., 2005; Lee et al., 2007; Schlichting & Sari, 2008).
This might be the case for the dynamically hot classicals in the Kuiper Belt,
which are more scattered due to perturbations by Neptune and have relatively
wide mutual separations. They also show a broad and more chaotic distribution
in orbital parameters, are subject to Kozai-Lidov effects (Kozai, 1962; Lidov,
1962), and show a much lower equal mass binary occurrence than the cold
classical population because disruption is relatively easy (Morbidelli &
Nesvorný, 2020). The objects in this population thus have a dynamical profile
that fits a random capture process, suggesting a formation process different
from that of the cold classicals. The cold classicals have strong color
correlations, indicating the same composition (Benecchi et al., 2009), they
have mass ratios close to unity, and show a clear preference for prograde
orbits (80 %) versus retrograde orbits (20 %) (Grundy et al., 2019), strongly
disfavouring a capture event.
Finally, a recently proposed mechanism to explain the formation of planetary
binaries is the streaming instability (SI) (Youdin & Goodman, 2005; Johansen &
Klahr, 2011; Visser et al., 2021). In this scenario, dust clumps together via
an interaction with the disk gas and when it becomes gravitationally bound, it
collapses on a dynamical timescale to form comets, planetesimals, and
protoplanets (Nesvorný et al., 2010, 2019). Depending on the amount of angular
momentum present in the clump before the collapse (Nesvorný et al., 2021), and
possibly added to it intrinsically during the collapse (Visser & Brouwers,
2022), the forming object will become a binary in order to absorb excess
angular momentum. In this scenario, the disk is expected to be loaded with
pebble-sized objects, with Stokes numbers between $10^{-2}$ and unity.
Regardless of the process of formation, if these binary systems formed early
in the disk, their further evolution and growth is non-trivial. The presence
of nebular gas and/or solids might cause the binary components to spiral
closer and merge within a Myr if they are aerodynamically small (Lyra et al.,
2021), or to start wide and merge through secular effects (Rozner et al., 2020).
However, if the binary components form early and start massive enough for
gravity to dominate, they might continue to grow through co-accretion. The
growth of such a binary system is widely known to lead to a convergence of the
mass ratio when the satellite and primary accrete from a planetesimal swarm
under the right conditions: the satellite grows significantly faster than its
primary due to its higher surface-to-mass ratio and by exploiting the
gravitational focusing of the primary (Harris & Kaula, 1975), provided that
the mutual separation is small enough (Morishima & Watanabe, 2001).
While this co-accretion scenario for binaries has been studied for
planetesimal accretion, it has not been applied in the case of pebble
accretion. In pebble accretion, massive bodies efficiently accrete solids
through the interplay of gravity and the presence of nebular gas (Ormel &
Klahr, 2010; Lambrechts & Johansen, 2012). As the gas drag removes relative
velocity and angular momentum from the solids, they are efficiently captured
into the gravitational potential, leading to greatly enhanced accretion
cross-sections. Pebble accretion has so far been invoked explicitly only for a
single pebble-accreting body, where it has been used with great success to
explain many observational difficulties in planet formation, such as the size
distributions of asteroids and Kuiper Belt objects (Johansen et al., 2015),
formation processes in the cores of gas giants (Levison et al., 2015), the
TRAPPIST system architecture (Ormel et al., 2017; Schoonenberg et al., 2019),
the spin axes of solar system bodies (Visser et al., 2020), the formation of
terrestrial planets (Johansen et al., 2021), and many other applications.
Of course, pebble accretion will not be relevant for all planetary binary
objects. The Earth-Moon system formed later, after the disk gas had already
been dispersed – so pebbles would have no longer been drifting inwards and the
accretion of small rocks would not be enhanced by the interaction with the
gas. Also, low-mass binary objects, as in the case of most Kuiper Belt
binaries, will not have sufficient gravity to get pebble accretion started in
a serious way. The process is, however, applicable for bodies that are
sufficiently massive and form so early that the gas disk and pebbles are still
around.
In this study, we investigate the evolution and growth of a planetary binary
subject to pebble accretion. We subdivide the total mass, $M$, of a single
body into two masses, $M_{1}$ and $M_{2}$, such that $M=M_{1}+M_{2}$, with an
increasingly diverging mass ratio, $f_{m}=M_{1}/M_{2}\in[1,8]$. We determine
the timescale to $e$-fold the mass of the binary compared to the growth time
of a single body with the same mass. We then quantify the $e$-folding time
needed to bring the mass ratio back to unity by looking at the individual
accretion efficiency of each component, based on the flux of pebbles swept up
by the system. In many cases we find that, in the 'die-hard' pebble accretion
regime, the lower-mass body (in this case $M_{2}$) grows faster than the more
massive one, in essence for the same reasons as in the planetesimal accretion
case described above. As a result, the mass ratio can converge back to unity
well within the disk lifetime of $\sim$1 Myr. Pebble accretion thus provides
an explanation for the significant fraction of observed (near-)equal-mass
planetary binaries.
The paper is organised as follows. In Section 2, we discuss the model setup
and the assumptions we use. In Section 3, we describe the results of the
numerical simulations. In Section 4, we discuss the outcomes and results,
after which we summarise the implications and present our conclusions in
Section 5.
## 2 Setup and assumptions
### 2.1 Disk model
In order to see how these binary systems evolve, we begin with a general
description of the planet-forming disk in which the binary resides. For the
gas temperature and surface density, we assume power-law profiles given by
Weidenschilling (1977b) and Hayashi et al. (1985):
$\displaystyle T(r_{0})=170\ \mathrm{K}\left(\frac{r_{0}}{1\ \mathrm{au}}\right)^{-1/2},$ (1)
$\displaystyle\Sigma(r_{0})=1700\ \mathrm{g\ cm}^{-2}\left(\frac{r_{0}}{1\ \mathrm{au}}\right)^{-3/2},$ (2)
where $r_{0}$ is the radial distance to the central star. We assume
hydrostatic equilibrium, whereby the vertical density structure forms as a
Gaussian:
$\displaystyle\rho(r_{0},z)$
$\displaystyle=\frac{\Sigma(r_{0})}{\sqrt{2\pi}H(r_{0})}e^{-\frac{1}{2}\left(\frac{z}{H}\right)^{2}},$
(3)
with $H=c_{\mathrm{s}}/\Omega_{0}$ the scale height, $\Omega_{0}$ the
Keplerian frequency, $c_{\mathrm{s}}=\sqrt{k_{\mathrm{B}}T/\bar{m}}$ the local
isothermal sound speed, $k_{\mathrm{B}}$ the Boltzmann constant, and $\bar{m}$
the mean molecular weight. The gas moves at a slightly lower velocity than the
Keplerian velocity, $v_{\mathrm{K}}=\sqrt{GM_{\star}/r_{0}}$, where $G$ is the
gravitational constant and $M_{\star}$ is the mass of the star.
Weidenschilling (1977a) introduced a dimensionless constant, $\eta$, relating
$v_{\mathrm{K}}$ to the headwind that particles travelling on Keplerian orbits
feel. This constant can be estimated by looking at the $r$-dependencies of
Eqs. 1-3:
$\displaystyle v_{\mathrm{hw}}$ $\displaystyle=\eta v_{\mathrm{K}},$ (4)
$\displaystyle\eta$
$\displaystyle=\frac{r_{0}}{2v_{\mathrm{K}}^{2}\rho}\frac{\mathrm{d}P}{\mathrm{d}r_{0}}=-\frac{13c_{\mathrm{s}}^{2}}{8v_{\mathrm{K}}^{2}},$
(5)
where $v_{\mathrm{hw}}$ is the headwind velocity and $P$ the pressure governed
by the ideal gas law. For the adopted disk profile, the headwind has a
numerical value of $v_{\mathrm{hw}}=-32$ m s$^{-1}$, directed along the
negative $y$-axis in the co-moving local frame. In our parameter study, the
Epstein drag regime is the relevant one for the pebble stopping time, which we
express through the Stokes number ($\tau_{\mathrm{s}}$), relating the stopping
time ($t_{\mathrm{s}}$) to the orbital timescale:
$\tau_{\mathrm{s}}=t_{\mathrm{s}}\Omega_{0}.$ (6)
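As a quick numerical check of Eqs. 1-5, the short Python sketch below evaluates the headwind for the adopted profiles. The mean molecular weight ($\bar{m}=2.34\,m_{\mathrm{H}}$) and a solar-mass star are our assumptions here, as they are not stated explicitly above:

```python
import math

# Physical constants (SI); mbar = 2.34 m_H is an assumed value
G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
kB = 1.381e-23         # Boltzmann constant [J/K]
mH = 1.673e-27         # hydrogen mass [kg]
Msun = 1.989e30        # solar mass [kg]
au = 1.496e11          # astronomical unit [m]
mbar = 2.34 * mH       # mean molecular weight (assumption)

def headwind(r0_au, Mstar=Msun):
    """Headwind v_hw = eta * v_K for the power-law disk of Eqs. 1-5."""
    r0 = r0_au * au
    T = 170.0 * r0_au ** -0.5            # Eq. 1
    cs2 = kB * T / mbar                  # isothermal sound speed squared
    vK = math.sqrt(G * Mstar / r0)       # Keplerian velocity
    eta = -13.0 * cs2 / (8.0 * vK**2)    # Eq. 5
    return eta * vK                      # Eq. 4

for r in (1.0, 2.5, 39.0):
    print(f"r0 = {r:5.1f} au: v_hw = {headwind(r):6.1f} m/s")
```

With these values the headwind comes out near $-32.7$ m s$^{-1}$ at every orbital distance, since $\eta v_{\mathrm{K}}\propto c_{\mathrm{s}}^{2}/v_{\mathrm{K}}$ is independent of $r_{0}$ for this profile, consistent with the quoted $-32$ m s$^{-1}$.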
Table 1: Overview of the parameter study used in the simulations. The ranges
between brackets are logarithmically spaced, containing 30 values for
$\tau_{\mathrm{s}}$ and 10 values for $a_{\mathrm{b}}$. Since we simulate two
different mass ratios, $f_{m}$, at two different orbital distances, $r_{0}$,
we have 30x10x2x2 = 1200 simulations in total. The boldfaced quantities
indicate the parameter set of the fiducial model.

$r_{0}$ [au] | $\tau_{\mathrm{s}}$ | $a_{\mathrm{b}}$ $[R_{\mathrm{H}}]$ | $f_{m}$ | N | $M_{\mathrm{total}}$
---|---|---|---|---|---
2.5 | [$10^{-2}$-0.4] | [0.0075-0.05] | $2,8$ | 8100 | $M_{\mathrm{Pluto}}+M_{\mathrm{Charon}}$
$\mathbf{39}$ | [$5\times 10^{-4}$-0.1], $\mathbf{10^{-2}}$ | [$10^{-3}$-$10^{-2}$], $\mathbf{0.0025}$ | $2,\mathbf{8}$ | 8100 | $M_{\mathrm{Pluto}}+M_{\mathrm{Charon}}$
### 2.2 Pebble and binary dynamics
Figure 1: Overview of the binary planetesimal system in the co-moving frame.
The star is located on the left side of the frame. The binary components are
denoted with their masses, $M_{1},M_{2}$, and radii, $R_{1},R_{2}$, travelling
around their barycenter in circular orbit at distance, $a_{1}$ and $a_{2}$,
respectively. Pebbles are released from a distance $y_{0}$ between the edges
of the impact range, $x_{1},x_{2}$, for a given initial phase angle of the
binary, $\varphi_{0}$, with respect to the $x$-axis. The starting velocities
of different pebbles differ depending on the $x$-coordinate as seen in
Equation 17, therefore, their velocity vectors point in different directions.
The background gas headwind is denoted with the black arrow on the y-axis.
We adopted a two-dimensional (2D) local shearing sheet box co-moving with the
binary center-of-mass (barycenter) at distance, $r_{0}$, from the central
star. In this frame the equation of motion of a pebble is given by (Ormel &
Klahr, 2010):
$\displaystyle\begin{pmatrix}\ddot{x}\\ \ddot{y}\end{pmatrix}=\begin{pmatrix}2\Omega_{0}\dot{y}+3\Omega_{0}^{2}x\\ -2\Omega_{0}\dot{x}\end{pmatrix}-\sum_{j=1}^{2}\frac{GM_{j}}{|\mathbf{r}-\mathbf{r}_{j}(t)|^{3}}\begin{pmatrix}x-x_{j}(t)\\ y-y_{j}(t)\end{pmatrix}+\mathbf{a}_{\mathrm{drag}},$ (7, 8)
with the first expression in brackets consisting of the Coriolis and tidal
accelerations, and the second term the two-body gravity of the separate
binary components with mass, $M_{j}$, and Cartesian positions ($x_{j}(t)$,
$y_{j}(t)$).
From here on we use the following terminology for the binary components. We
say the binary is equal mass if the mass ratio is unity and no explicit
reference to one of the bodies has to be made due to symmetry. If the mass
ratio changes to $f_{m}\equiv M_{1}/M_{2}>1$, we say that the more massive
body with mass, $M_{1}$, is the 'primary', and the less massive one with mass,
$M_{2}$, is the 'secondary'.
We consider a binary separation that is narrow and we assume mutual circular
motion of the binary system during the accretion. The circular motion of the
system with total mass, $M=M_{1}+M_{2}$, is then modelled over time, $t,$
with:
$\begin{pmatrix}x_{1,2}\\ y_{1,2}\end{pmatrix}=a_{1,2}\begin{pmatrix}\cos(\varphi_{0}+\omega t)\\ \sin(\varphi_{0}+\omega t)\end{pmatrix},$ (9)
where $\omega=\sqrt{GM/a_{\mathrm{b}}^{3}}$ is the angular frequency,
$\varphi_{0}$ is the initial phase angle with which the simulation starts, and
the subscripts indicate to which body the coordinates apply.
components are assumed massive and unaffected by the gas. They are placed on a
circular orbit around the barycenter with orbital separation
$a_{\mathrm{b}}=a_{1}+a_{2}$ and, if $f_{m}\neq 1$, with $a_{1}$ as the
distance of the primary with mass, $M_{1}$, and radius, $R_{1}$, and $a_{2}$
as the distance of the secondary with mass, $M_{2}$, and radius, $R_{2}$, to
the barycenter. The system rotates in a prograde orientation around its
barycenter.
The Cartesian coordinates and velocities of the pebble are given by ($x$, $y$,
$v_{x}$, $v_{y}$). The last term contains the acceleration due to the drag
force on the pebble equal to:
$\mathbf{a}_{\mathrm{drag}}=-\frac{\mathbf{v}-\mathbf{v}_{\mathrm{g}}}{t_{\mathrm{s}}},$
(10)
with $\mathbf{v}_{\mathrm{g}}$ the gas velocity including shear given by:
$\mathbf{v}_{\mathrm{g}}=\left(v_{\mathrm{hw}}-\frac{3}{2}\Omega_{0}x\right)\mathbf{e}_{y}.$
(11)
An illustrative sketch of the setup is shown in Figure 1.
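The equation of motion above can be integrated directly. The sketch below is a minimal fixed-step RK4 integrator for Eqs. 7-11, for illustration only (the paper uses the adaptive Runge-Kutta-Fehlberg scheme described in Section 2.3); the Kepler frequency and Stokes number are assumed fiducial values. As a sanity check, the binary gravity is switched off, in which case a pebble started on pure Keplerian shear must relax onto the unperturbed drift solution in the sub-Keplerian gas:

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def derivs(state, Om, ts, vhw, bodies):
    """Right-hand side of Eqs. 7-8: Coriolis and tidal terms, the gravity of
    the binary components, and gas drag (Eqs. 10-11). bodies is a list of
    (M_j, x_j, y_j) tuples; positions are frozen in this simplified sketch."""
    x, y, vx, vy = state
    ax = 2.0 * Om * vy + 3.0 * Om**2 * x   # Coriolis + tidal
    ay = -2.0 * Om * vx                    # Coriolis
    for Mj, xj, yj in bodies:              # two-body gravity of the binary
        d3 = ((x - xj)**2 + (y - yj)**2) ** 1.5
        ax -= G * Mj * (x - xj) / d3
        ay -= G * Mj * (y - yj) / d3
    vgas = vhw - 1.5 * Om * x              # azimuthal gas velocity, Eq. 11
    ax -= vx / ts                          # drag, Eq. 10 (gas has no radial motion)
    ay -= (vy - vgas) / ts
    return (vx, vy, ax, ay)

def rk4_step(state, dt, *args):
    """One classical fourth-order Runge-Kutta step."""
    add = lambda s, k, f: tuple(si + f * ki for si, ki in zip(s, k))
    k1 = derivs(state, *args)
    k2 = derivs(add(state, k1, dt / 2), *args)
    k3 = derivs(add(state, k2, dt / 2), *args)
    k4 = derivs(add(state, k3, dt), *args)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Sanity check in the drag-only limit (binary gravity off): far from the
# binary, the pebble settles onto the unperturbed drift solution.
Om = 8.17e-10                    # Kepler frequency at 39 au [1/s] (assumed)
tau_s = 1e-2                     # fiducial Stokes number
ts = tau_s / Om                  # stopping time, Eq. 6
vhw = -32.0                      # headwind [m/s], as quoted in the text
state = (1e9, 1e10, 0.0, -1.5 * Om * 1e9)  # start on pure Keplerian shear
dt = ts / 100.0
for _ in range(3000):            # integrate for 30 stopping times
    state = rk4_step(state, dt, Om, ts, vhw, [])
x, y, vx, vy = state
wy = vy + 1.5 * Om * x           # deviation from the Keplerian shear
print(vx, wy)                    # settles near -0.64 and -32.0 m/s
```

The signed values of the drift solution follow from the conventions of Eqs. 10-11 as written, with $v_{x}\to 2\tau_{\mathrm{s}}v_{\mathrm{hw}}/(1+\tau_{\mathrm{s}}^{2})$ and $w_{y}\to v_{\mathrm{hw}}/(1+\tau_{\mathrm{s}}^{2})$.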
The total mass, $M,$ of the system in our model corresponds to the mass of the
Pluto-Charon system, with an internal density
$\rho_{\bullet}=1.778\,\mathrm{g\ cm^{-3}}$ taken to be equal for both bodies.
The (shared) Hill radius is then given by:
$R_{\mathrm{H}}=r_{0}\left(\frac{M}{3M_{\mathrm{\star}}}\right)^{1/3}.$ (12)
In varying the mass ratio of the binary system, we subdivide the total mass
accordingly over the components, from which their barycentric distances,
$a_{1}$ and $a_{2}$, directly follow as:
$a_{1,2}=\frac{M_{2,1}}{M}a_{\mathrm{b}}.$ (13)
From the assumed internal density the radius of each body is then calculated
from:
$R_{1,2}=\left(\frac{3M_{1,2}}{4\pi\rho_{\bullet}}\right)^{1/3}.$ (14)
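For concreteness, Eqs. 12-14 can be evaluated for the fiducial system. The individual masses of Pluto and Charon below are approximate literature values, and a solar-mass star is assumed:

```python
import math

au = 1.496e11        # astronomical unit [m]
Msun = 1.989e30      # solar mass [kg]
M1 = 1.303e22        # Pluto mass [kg], approximate
M2 = 1.586e21        # Charon mass [kg], approximate
rho = 1778.0         # internal density [kg m^-3], as adopted in the text

M = M1 + M2
r0 = 39.0 * au
RH = r0 * (M / (3.0 * Msun)) ** (1.0 / 3.0)             # Eq. 12
ab = 0.0025 * RH                                         # fiducial separation
a1, a2 = M2 / M * ab, M1 / M * ab                        # Eq. 13
R1 = (3.0 * M1 / (4.0 * math.pi * rho)) ** (1.0 / 3.0)  # Eq. 14
R2 = (3.0 * M2 / (4.0 * math.pi * rho)) ** (1.0 / 3.0)

print(f"f_m = {M1 / M2:.1f}")
print(f"R_H = {RH / 1e3:.3g} km, a_b = {ab / 1e3:.3g} km")
print(f"R_1 = {R1 / 1e3:.0f} km, R_2 = {R2 / 1e3:.0f} km")
```

With these numbers, $R_{\mathrm{H}}\approx 7.9\times 10^{6}$ km and the fiducial separation $a_{\mathrm{b}}=0.0025R_{\mathrm{H}}\approx 2\times 10^{4}$ km, close to the actual Pluto-Charon separation.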
### 2.3 Numerical method and initial conditions
We are interested in the (relative) growth rate of the binary (components)
subject to pebble accretion. To do so we integrate the pebble trajectories
with the Runge-Kutta-Fehlberg step variable integration method (Fehlberg,
1969; Es-hagh, 2005), with a relative error tolerance of
$\mathrm{tol}=10^{-6}$. The pebbles are released at a distance far from the
binary in the $y$-direction:
$\displaystyle y_{0}=C\sqrt{\frac{GMt_{\mathrm{s}}}{v_{\mathrm{hw}}}},$ (15)
(as illustrated in figure 1) with $C=200$, a safety factor. This ensures that,
initially, gas drag is dominant over the gravity from the binary center-of-
mass. The initial velocities of the incoming pebbles are, hence, given by the
unperturbed radial and azimuthal drift velocities (Weidenschilling, 1977a):
$\displaystyle v_{x,\infty}$
$\displaystyle=-\frac{2\tau_{\mathrm{s}}}{\tau_{\mathrm{s}}^{2}+1}v_{\mathrm{hw}},$
(16) $\displaystyle v_{y,\infty}$
$\displaystyle=-\frac{1}{\tau_{\mathrm{s}}^{2}+1}v_{\mathrm{hw}}-\frac{3}{2}\Omega_{0}x_{0},$
(17)
where $-\frac{3}{2}\Omega_{0}x_{0}$ is the correction due to the Keplerian shear.
The pebbles are released with a shear-corrected sampling function, equivalent
to the method used in Visser et al. (2020). In the co-moving local frame we
adopt, the pebble flux released then increases for increasing $x_{0}$, to
account for the increasing relative velocities of the pebbles with respect to
the binary barycenter due to shear.
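Plugging the fiducial values into Eqs. 15-17 gives a feel for the release geometry. We read Eq. 15 with the magnitude of the (negative) headwind under the square root, and the Pluto-Charon masses are again approximate values:

```python
import math

G = 6.674e-11            # [m^3 kg^-1 s^-2]
au = 1.496e11            # [m]
Msun = 1.989e30          # [kg]
M = 1.303e22 + 1.586e21  # Pluto + Charon masses [kg], approximate
r0 = 39.0 * au

Om = math.sqrt(G * Msun / r0**3)   # Keplerian frequency at 39 au
tau_s = 1e-2                        # fiducial Stokes number
ts = tau_s / Om                     # stopping time, Eq. 6
vhw = 32.0                          # headwind magnitude [m/s]
C = 200.0                           # safety factor

y0 = C * math.sqrt(G * M * ts / vhw)           # Eq. 15 (|v_hw| under the root)
vx_inf = -2 * tau_s / (tau_s**2 + 1) * -vhw    # Eq. 16 with v_hw = -32 m/s
RH = r0 * (M / (3 * Msun)) ** (1 / 3)          # Eq. 12, for scale

print(f"y0 = {y0:.3g} m ({y0 / RH:.1f} Hill radii)")
print(f"|v_x,inf| = {abs(vx_inf):.2f} m/s")
```

The release point thus lies roughly 15 Hill radii upstream of the binary, so gas drag initially dominates over the binary's gravity, as intended by the safety factor.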
We then use a bisection algorithm to find the range of initial $x$-coordinates
for which the pebbles hit one of the binary components. This range is verified
to be equal to the impact parameter of a single planetesimal with mass,
$M=M_{1}+M_{2}$, in the parameter space we investigate, although we note that
this only holds for the close binary separations we consider. A pebble is
registered as a hit when it satisfies the condition:
$\displaystyle\sqrt{(x-x_{1,2})^{2}+(y-y_{1,2})^{2}}\leq R_{1,2},$ (18)
with the subscript again referring to the corresponding body. After the
cross-section edges have been found, we release pebbles between the edges,
$x_{0}\in[x_{1},x_{2}]$, allowing for an error of $\sim$5 percent from
trajectories that are missed. The resolution in released pebbles is taken to
be $N_{x_{0}}=90$ for each initial phase angle of the binary system, linearly
spaced between $\varphi_{0}\in[0,2\pi)$ with resolution $N_{\varphi_{0}}=90$.
In total, we then release $N_{x_{0}}\times N_{\varphi_{0}}=8100$ pebbles per
individual simulation.
Finally, in each simulation, we summed the total number of accreted pebbles on
each body $N_{1,2}$ over each initial phase angle and we defined the accretion
ratio to be:
$\varepsilon\equiv\frac{\dot{M}_{1}}{\dot{M}_{2}}=\frac{N_{1}}{N_{2}},$ (19)
with $N_{1}$ and $N_{2}$ as the total hits on the primary and secondary,
respectively, whenever the mass ratio is not unity.
To structure the results from our parameter study, we defined a fiducial model
based on the Pluto-Charon mass(-ratio), radii, orbital distance from the star
$r_{0}=39$ au and separation distance $a_{\mathrm{b}}=0.0025R_{\mathrm{H}}$,
with a pebble Stokes number of $\tau_{\mathrm{s}}=10^{-2}$. An overview of the
fiducial model and the rest of our parameter study is provided in Table 1. A
reproduction package of the code can be found online at
doi.org/10.5281/zenodo.7324045.
Figure 2: Hitmap of pebble impacts on an equal mass binary system such that
$M_{1}=M_{2}=(M_{\mathrm{Pluto}}+M_{\mathrm{Charon}})/2,$ shown at the top.
Every pixel is a pebble falling into the system. On the $x$-axis the $x_{0}$
coordinate at moment of release is shown in Hill radii, while on the $y$-axis
the initial phase orientation of the binary $\varphi_{0}$ in radians is
depicted, as illustrated in Figure 1. The orange pixels show impacts on
$M_{1}$, the brown pixels impacts on $M_{2}$, and the beige pixels indicate
ejected pebbles. The final accretion ratio is $\varepsilon=1$, meaning that
both bodies accrete an equal flux of pebbles because of symmetry arguments.
The dashed red line indicates a starting angle of $\varphi_{0}=\pi$ rad. Six
trajectories from this graph, denoted A-F in the hitmap, have been plotted
below. The resolution has been taken as $(N_{\varphi_{0}}\times
N_{x_{0}}=500\times 1000)$. Bottom panel: Six trajectories from the hitmap
above. Shown is a top-down view of the binary system with a pebble "falling"
in. In each of the trajectories, an arrow denotes the pebble coming into the
frame. Pebble trajectories A through F are sorted by their starting distance,
$x_{0}$, going from left to right, corresponding to closer radial distance
from the star to further away, respectively. We note that even though the
trajectory for F appears to start on the bottom left, it originated on the
right outside of the frame, only entering the image from below.

Figure 3: Selection
of pebble trajectories for a given initial phase angle for the fiducial model
with a mass ratio of 2, to illustrate the accretion advantage for the
secondary. Left panel: Pebbles enter the Hill radius (dashed circle) from the
top and approach the binary center of mass in the middle of the panel. Gray
trajectories miss the entire system, blue ones accrete onto the secondary and
red ones onto the primary. Right panel: Zoom in on the trajectories shown in
the left panel. The hits on the secondary are indicated with the blue crosses
at point of impact. It clearly illustrates the accretion advantage for the
smaller body due to close encounters and spirals. Additionally, we show the
more massive body in the center (dark red solid circle) with a direct hit of a
pebble (dark red trajectory). The trajectories are from different times and
are plotted in one panel only for illustration.
## 3 Results
### 3.1 General interpretation
Above, we describe the way in which a single pebble interacts with a binary
planetesimal system; however, that does not explain how the system of
planetesimals is expected to grow over time. For that purpose, we need every
starting $x_{0}$-position for which released pebbles interact with the system,
and all starting angles $\varphi_{0}\in[0,2\pi)$, as described in Section 2.3.
In Figure 2, we plot a hitmap of 500,000 pebbles using our fiducial system
from Table 1, with the only exception that the mass ratio is set to $f_{m}=1$.
Because of symmetry considerations, the accretion ratio, $\varepsilon$
(Equation 19), in this run is unity.
Every pixel in the hitmap is a single pebble falling into the binary system,
set by the $x$-coordinate when released, and the phase of the binary at that
moment. When a pebble falls toward the system, there are three possible
outcomes: the pebble can fall onto one of the two planetesimals, or it can fly
by without hitting either body and continue on its path around the central
star. In the first two cases, the pixel is coloured either brown or orange,
depending on which body it accretes onto. If the trajectory does not lead to
accretion, it is coloured in beige. This specific case describes an equal-mass
binary, $M_{1}=M_{2}$, in which the planetesimals move as a mirror image of
each other around the barycenter. Therefore, we expect that the top
and the bottom half of the hitmap are identical, with inverted colours. This
is indeed the case in Figure 2. A horizontal dashed red line is added at
$\varphi_{0}=\pi$ to emphasise this symmetry. The exact same image can be seen
above and below this line, only with the brown and orange colours swapped. In
a system of a non-equal mass binary, this symmetry is expected to disappear.
By looking at the different trajectories found in the hitmap shown above, we
can identify, at the simplest level, three regions for pebbles falling into a
binary system, ranging from the far left to the far right of the map. These
three regions are as follows.
In the first region, most pebbles falling in from the left side of the map
enter the system circling in a prograde direction around the binary, in the
same direction as the planetesimals are moving. This can result in a pebble
spiralling into the system slowly (as in trajectory E), but as seen in the
example of Figure 2, it mostly results in chaotic paths, as in trajectories A
and B. We believe these chaotic paths arise because a pebble that gets close
to one planetesimal stays in its vicinity for longer and has a high
probability of getting slung around. In other scenarios (different
$\tau_{\mathrm{s}}$, $a_{\mathrm{b}}$, etc.), we see pebbles spiralling in
slowly and preferentially hitting the outer (less massive) planetesimal, as in
Figure 3.
In the second region, shown in the middle of the hitmap (in Figure 2,
specifically between $0.28R_{\mathrm{H}}\lesssim x_{0}\lesssim
0.32R_{\mathrm{H}}$), the pebbles fall in from straight above the system.
Falling in head-on with great velocity, they either hit one of the two bodies
or go right past them, missing both. This region is exemplified by
trajectories C and D.
In the third region, shown on the right side, the pebbles come into the system
circling around in retrograde fashion, opposing the movement of the binary. In
the hitmap, this results in a large region of pebbles spiralling in slowly, as
seen in E. This happens all the way up to $\sim 0.37R_{\mathrm{H}}$, beyond
which the last trajectories are mostly chaotic paths. We believe this results
from the fact that these pebbles swirl around the binary and come back into
the system almost from below. In that case, a pebble does not have the same
momentum as when it comes in from above, since it is now moving opposite to
the shearing wind. This lack of momentum makes it easy for the pebble to get
swung around when it actually gets near the two bodies, resulting in more
chaotic paths, as exemplified in F.
Modifying the fundamental parameters $\tau_{\mathrm{s}},a_{\mathrm{b}}$, and
mass ratio will affect where exactly these three regimes (prograde, head on,
and retrograde) are, but the fundamental structure remains similar.
Figure 4: Growth efficiency, $\Gamma$, as described in Equation 20, for a
range of Stokes numbers, $\tau_{\mathrm{s}}$, mass ratios, and binary
separations, $a_{\mathrm{b}}$, at an orbital distance of $r_{0}=2.5$ au from
the central star (top panels) and $r_{0}=39$ au (bottom panels). Left: mass
ratio of $f_{m}=2$; right: mass ratio of $f_{m}=8$. The most opaque lines
denote an orbital separation of $a_{\mathrm{b}}=0.0075R_{\mathrm{H}}$, while
the most transparent lines show an orbital separation of
$a_{\mathrm{b}}=0.05R_{\mathrm{H}}$; other lines indicate orbital separations
spaced logarithmically in between.
### 3.2 Spiralling pebbles
The situation where a pebble is caught by the binary into a spiralling motion
is particularly interesting. For an equal-mass binary, both bodies are then
equally likely to accrete the pebble. However, if the binary has one body more
massive than the other and if the distance between the two bodies is smaller
than the initial size of the spiral, the lower-mass body becomes the more
likely target. This important effect is illustrated in Figure 3. In this case,
the masses are different, specifically, $M_{1}=2M_{2}$. Integrated over all
pebbles in the impact parameter range, the accretion ratio $\varepsilon$ of
this system (as described in Equation 19) is less than one, meaning that the
smaller body is accreting more mass per unit time than its more massive
counterpart. This may seem counter-intuitive, as the less massive body has a
smaller gravitational pull. However, it can be understood by the in-spiralling
motion of the pebbles that are bound to the binary system long before they
actually get accreted. Figure 3 shows a number of pebbles from a horizontal
row in a hitmap. These pebbles have different starting positions $x_{0}$, but
are launched with the same starting angle $\varphi_{0}$ of the binary. Many of
the orbits turn into a spiral that slowly closes in on the binary. The small
body moves like a vacuum cleaner through its orbit around the massive body (in
fact, the barycenter, which will be very close to or even inside the larger
body). As long as the settling velocity on the spiral path is low, the smaller
body is likely to sweep up the incoming pebble before it can reach the massive
body.
Figure 5: Normalised growth ratios expressed in $\tilde{\varepsilon}$ (as
described in Equation 22) for a range of Stokes numbers $\tau_{\mathrm{s}}$,
mass-ratios, and binary separation, $a_{\mathrm{b}}$, at orbital distance of
$r_{0}=2.5$ au from the central star, shown at the top. The most opaque lines
denote an orbital separation of $a_{\mathrm{b}}=0.0075R_{\mathrm{H}}$, while
the most transparent lines show an orbital separation of
$a_{\mathrm{b}}=0.05R_{\mathrm{H}}$; other lines indicate orbital separations
going logarithmically in between. Relative growth timescales of both bodies as
a fraction of the growth timescale of a single body with mass, $M=M_{1}+M_{2}$
(as described in Eqs. 26 and 27), shown at the bottom. Black and red lines
denote $M_{1}$, while the blue lines denote $M_{2}$.

Figure 6: Normalised
growth ratios expressed in $\tilde{\varepsilon}$ (as described in Equation 22)
for a range of Stokes numbers $\tau_{\mathrm{s}}$, mass-ratios, and binary
separation, $a_{\mathrm{b}}$, at orbital distance of $r_{0}=39$ au from the
central star, shown at the top. The most opaque lines denote an orbital
separation of $a_{\mathrm{b}}=10^{-3}R_{\mathrm{H}}$, while the most
transparent lines show an orbital separation of
$a_{\mathrm{b}}=10^{-2}R_{\mathrm{H}}$; other lines indicate orbital
separations going logarithmically in between. Relative growth timescales of
both bodies as a fraction of the growth timescale of a single body with mass,
$M=M_{1}+M_{2}$ (as described in Eqs. 26 and 27), shown at the bottom. Black
and red lines denote $M_{1}$, while the blue lines denote $M_{2}$.
### 3.3 Accretion efficiency
The impact parameter range of pebbles reaching the binary turns out to be the
same as in a computation where the total mass of the binary is present as a
single, larger body. This is the case when the system has a small binary
separation compared to the range of impact parameters in which pebble
accretion is relevant. Nevertheless, not all pebbles within the impact
parameter range are actually accreted by the binary. The pebbles that are lost
escape through scattering by close encounters with the primary and the
secondary mass, as illustrated in Figure 2 (A, D, and F). This is different
from the single mass case, in which, for the considered parameter space, all
pebbles in the release range would be accreted. Therefore, we define the
fraction of pebbles accreted on the binary, compared to the single-body case:
$\Gamma\equiv\frac{N_{1}+N_{2}}{N_{\mathrm{all}}},$ (20)
where $N_{\mathrm{all}}$ is the total number of released pebbles and $N_{1,2}$
are the number of pebbles accreted on bodies 1 and 2, respectively. For a
single body with the total binary mass this fraction, $\Gamma$, would be
unity. The resulting $\Gamma$ values for our parameter study are shown in
Figure 4 for 2.5 au and 39 au. The accretion efficiency can drop to as low as
$\Gamma\sim 0.4$ for the maximum Stokes numbers considered at 39 au.
Knowing the accretion efficiency $\Gamma$ of the binary system, we can relate
the growth time of the binary system to the growth time of a single body with
the same mass. That is, in the pebble accretion regime, the growth timescale,
$t_{\mathrm{growth}}$, of a single body with radius, $R$, internal density,
$\rho_{\bullet}$, and mass, $M$, is given by (Visser & Ormel, 2016):
$t_{\mathrm{growth}}=\frac{M}{\dot{M}}=\frac{4\rho_{\bullet}R}{3v_{\mathrm{hw}}\rho_{\mathrm{p}}f_{\mathrm{coll}}},$
(21)
where $\rho_{\mathrm{p}}$ is the spatial density of solid particles and
$f_{\mathrm{coll}}$ the collision factor. In the next section, we determine
the relative growth timescales of the binary components based on this
analysis.
### 3.4 Relative growth rates
To see if the binary mass ratio will shift towards unity during growth by
pebble accretion, we need to look at the normalised growth ratio
$\tilde{\varepsilon}=\frac{M_{2}}{M_{1}}\varepsilon.$ (22)
Whenever this ratio $\tilde{\varepsilon}$ dips below one, the binary will grow
towards equal mass. In the top panels of Figures 5 and 6, values of
$\tilde{\varepsilon}$ are shown as a function of the pebble Stokes number,
$\tau_{\mathrm{s}}$, for various combinations of our key parameters. There is
a clear regime, namely an interval of Stokes numbers, where the mass ratio
indeed moves towards equal mass. This is true both in the asteroid belt at 2.5
au and in the Kuiper belt at 39 au. This effect is seen not only for a
moderate mass ratio of $f_{m}=2$, but also in the more extreme case of
$f_{m}=8$. It becomes more prevalent when the two objects in the binary are
closer together. In Figures 5 and 6 it can be seen that while this is not
always the case, it is true for a substantial part of the parameter study, in
particular for large Stokes numbers. In this parameter regime, pebble
accretion works toward reducing the mass ratio of the binary. The question
arises as to how relevant this effect is. We consider whether bodies spend
enough time in this regime to have their mass ratio significantly altered. In
fact, it comes down to the question of what the growth timescales of these
bodies are, where the growth timescale is the time to $e$-fold the mass of a
component.
With the growth timescale derived for a single body and the accretion
efficiency $\Gamma$ we can derive the mass accreted by each body over a set
amount of time:
$\dot{M}_{1}+\dot{M}_{2}=\Gamma\dot{M},$ (23)
which can be rewritten as:
$\displaystyle\dot{M}_{1}$
$\displaystyle=\frac{\Gamma\dot{M}}{\left(1+\varepsilon^{-1}\right)},$ (24)
$\displaystyle\dot{M}_{2}$
$\displaystyle=\frac{\Gamma\dot{M}}{\left(1+\varepsilon\right)},$ (25)
since $\varepsilon=\dot{M}_{1}/\dot{M}_{2}$. We now can use the mass ratio,
$f_{m}$, such that $M_{1}=\frac{f_{m}}{1+f_{m}}M$ and
$M_{2}=\frac{1}{1+f_{m}}M$. In this way, we find the growth timescales of both
planetesimals:
$\displaystyle t_{\mathrm{growth,1}}$
$\displaystyle=\frac{M_{1}}{\dot{M}_{1}}=\frac{\left(1+\varepsilon^{-1}\right)f_{m}}{\left(1+f_{m}\right)\Gamma}t_{\mathrm{growth}}\equiv\chi_{1}t_{\mathrm{growth}},$ (26)
$\displaystyle t_{\mathrm{growth,2}}$
$\displaystyle=\frac{M_{2}}{\dot{M}_{2}}=\frac{\left(1+\varepsilon\right)}{\left(1+f_{m}\right)\Gamma}t_{\mathrm{growth}}\equiv\chi_{2}t_{\mathrm{growth}},$ (27)
where we have defined $\chi_{i}$ to be the scale factor that compares the
$t_{\mathrm{growth}}$ of a single body to that of both planetesimals in a
binary. The normalised accretion ratio $\tilde{\varepsilon}$ and the growth
timescales of both bodies are shown in the top and in the bottom panels of
Figures 5 and 6, respectively. At both 2.5 and 39 au (as we see in Figure 4),
the efficiency goes down with increasing $\tau_{\mathrm{s}}$, as does the
accretion ratio $\varepsilon$. We believe both effects share largely the same
cause: $\varepsilon$ favours the smaller body since it
sweeps up the inspiralling pebbles from the larger mass (as described in
Section 3.2), while the efficiency will also go down as the smaller mass
throws a significant fraction of these inspiralling pebbles out of the system
in close encounters.
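To make the scaling concrete, Eqs. (23)-(27) can be checked numerically. The following minimal Python sketch (with illustrative values of $f_{m}$, $\varepsilon$, and $\Gamma$ that are not taken from our parameter study) evaluates the scale factors $\chi_{1}$ and $\chi_{2}$ and verifies that the implied accretion rates recover Eq. (23) and the definition $\varepsilon=\dot{M}_{1}/\dot{M}_{2}$.

```python
import numpy as np

def chi_factors(f_m, eps, Gamma):
    """Scale factors of Eqs. (26)-(27): growth timescale of each binary
    component relative to the single-body growth timescale."""
    chi1 = (1.0 + 1.0 / eps) * f_m / ((1.0 + f_m) * Gamma)
    chi2 = (1.0 + eps) / ((1.0 + f_m) * Gamma)
    return chi1, chi2

# Illustrative values (not taken from the parameter study):
f_m, eps, Gamma = 2.0, 1.5, 0.8
chi1, chi2 = chi_factors(f_m, eps, Gamma)

# Consistency check: with t_growth = M / Mdot and Mdot = M = 1, the implied
# rates Mdot_i = M_i / (chi_i * t_growth) must satisfy Eq. (23) and eps.
M1, M2 = f_m / (1.0 + f_m), 1.0 / (1.0 + f_m)
Md1, Md2 = M1 / chi1, M2 / chi2
assert np.isclose(Md1 + Md2, Gamma)   # Eq. (23) with Mdot = 1
assert np.isclose(Md1 / Md2, eps)     # definition of the accretion ratio
```

Note that for $f_{m}=1$ and $\varepsilon=1$ the sketch gives $\chi_{1}=\chi_{2}=1/\Gamma$, i.e., both components grow at the single-body rate scaled by the binary efficiency, as expected from symmetry.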
Figure 7: Sketch of three distinctive cases that characterise the accretion
efficiency of the binary components. The red dot indicates the binary
barycenter, the small and large black dots the secondary and primary,
respectively, and circular velocities shown by the black arrows (illustrative,
and not shown to scale). Case a): the pebble Stokes number $\tau_{\mathrm{s}}$
is low and the pebble is strongly coupled to the gas; it impacts the primary
after typically a single encounter, which renders the probability low that it
will impact the secondary. Case b): the orbital separation, $a_{\mathrm{b}}$,
is too wide for the secondary to benefit from pebbles captured by the massive
one. Case c): narrow binary separation combined with the pebble accretion
regime in which pebble spirals are ample and dominant; the probability of
impact on the secondary is high due to the repeated encounters before the
pebble sinks toward the massive body.
What can also be clearly seen is that the relative growth timescale of the
larger body is greater than one in almost all of the parameter space,
indicating that the growth of a massive body in the pebble accretion regime is
slowed down when it has a secondary. In a large part of the parameter space,
the $\chi_{2}$ value of the smaller body drops below 1, indicating that its
growth will be accelerated by the presence of the larger partner. Even in the
region where the efficiency $\Gamma$ decreases substantially, the growth of
the smaller body is faster. That, combined with the fact that the larger body
grows more slowly, shows that we have a stabilising effect on the mass ratio:
this process makes similar-mass binaries more common at both 2.5 and 39 au.
For Stokes numbers close to 0.1, which also happens to be the preferred Stokes
number regime for triggering the streaming instability (Bai & Stone, 2010),
the growth timescale of the smaller body can easily be a factor of 3-5 less
than that of the larger body. So in the time the mass of the large body grows
by a factor $e$, the mass of the smaller body may grow by a factor $e^{3}$ to
$e^{5}$, and the mass ratio would change by a factor $e^{2}$ to $e^{4}$, or
about 7 to 55. This clearly demonstrates the strong potential of pebble
accretion to create equal-mass binaries, even if the initial binary had a high
mass ratio.
## 4 Discussion
### 4.1 Pebble trajectory regimes
We have identified three different main regimes that determine the enhancement
in the accretion rate of the secondary (excluding chaotic paths that pebbles
may take, as shown in Figure 2A, B, and F):
1.
The Stokes number, $\tau_{\mathrm{s}}$, is too low and pebbles are tightly
coupled to the gas. The pebbles do not spiral but make a direct impact or
typically one loop before impact (Fig. 7a). The accretion ratio is in favour
of the primary.
2.
When the binary separation is too wide, leading to the massive body accreting
pebbles in isolation from the smaller one (Fig. 7b), the accretion ratio is in
favour of the primary.
3.
When $\tau_{\mathrm{s}}$ is in the regime of pebble accretion and the
separation falls within the typical radial extent of the spirals, the pebble
undergoes tightly wound, long-term inward spiralling and inevitably encounters
the secondary before reaching the more massive one (Fig. 7c). Here, the
accretion ratio is in favour of the secondary.
Unsurprisingly, we find that the same trend persists for varied orbital
distance $r_{0}$. We show the final accretion efficiency for a distance of
$r_{0}=2.5$ au and have verified, in the same manner as Figure 6, that
$r_{0}=10$ au shows the same trend. While the parameters shift somewhat, the
outcome is the same.
### 4.2 Timescale for developing toward equal mass
It is informative to see non-equal-mass binaries growing towards a mass ratio
of unity, but this alone does not tell us how long it takes to get there. We
therefore consider the timescale to change the mass ratio, $f_{m}$, by a
factor $e$:
$\displaystyle t_{\mathrm{em}}$
$\displaystyle=\frac{f_{m}}{\dot{f_{m}}}=\left(\frac{1}{1-\tilde{\varepsilon}^{-1}}\right)\frac{M_{1}}{\dot{M_{1}}}=\frac{\tilde{\varepsilon}}{\tilde{\varepsilon}-1}t_{\mathrm{growth,1}}$
$\displaystyle=\frac{\tilde{\varepsilon}f_{m}+1}{\left(\tilde{\varepsilon}-1\right)\left(1+f_{m}\right)}\frac{t_{\mathrm{growth}}}{\Gamma},$
(28)
with $\tilde{\varepsilon}=\varepsilon/f_{m}<1$ and both $\dot{f_{m}}$ and
$t_{\mathrm{em}}$ being negative, since we are looking at the time it takes to
grow towards equal mass. This equal-mass timescale $t_{\mathrm{em}}$ is
plotted in Figure 8 for our parameter study. We assume an environment where
the streaming instability is active or has been active recently, resulting in
significant dust settling in the disk. We therefore take for the solid-to-gas
ratio a conservative value of 0.1, such that $\rho_{p}$ from Equation 21 is
given by:
$\rho_{p}=0.1\rho_{g}=0.1\frac{\Sigma}{\sqrt{2\pi}H},$ (29)
where $\rho_{g}$ is the density of the gas (see Equation 3). For a binary with
a total mass equal to that of
the Pluto-Charon system at the location of the asteroid belt at 2.5 au, the
timescales to $e$-fold towards unity are well within a Myr. Even at a greater
distance of 39 au, in a substantial part of our parameter space, we find
timescales shorter than 1 Myr, again implying that pebble accretion has true
potential as a formation method of (nearly) equal-mass binaries. Even though
we have taken the static approach of calculating $t_{\mathrm{em}}$ at a fixed,
rather than evolving, mass ratio, from the figure we can conclude that
$t_{\mathrm{em}}$ becomes shorter with increasing mass ratio, $f_{m}$. Hence,
the convergence towards mass equality slows down as the process proceeds, but
initially the masses converge rapidly.
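The algebra behind Eq. (28) can be verified numerically. The sketch below (illustrative values only; constant $\varepsilon$, $\Gamma$, and $\dot{M}$ are assumed, consistent with the static approach above) compares the closed-form $t_{\mathrm{em}}$ with a finite-difference estimate of $f_{m}/\dot{f}_{m}$ built from Eqs. (24)-(25).

```python
import numpy as np

# Illustrative values (not from the parameter study); constant eps and Gamma.
f_m, eps, Gamma, Mdot, M = 8.0, 2.0, 0.5, 1.0, 1.0
tilde_eps = eps / f_m                      # < 1: drifting toward equal mass
assert tilde_eps < 1.0

M1, M2 = f_m / (1.0 + f_m) * M, 1.0 / (1.0 + f_m) * M
Md1 = Gamma * Mdot / (1.0 + 1.0 / eps)     # Eq. (24)
Md2 = Gamma * Mdot / (1.0 + eps)           # Eq. (25)

# Closed form, Eq. (28), with t_growth = M / Mdot:
t_growth = M / Mdot
t_em = (tilde_eps * f_m + 1.0) / ((tilde_eps - 1.0) * (1.0 + f_m)) \
       * t_growth / Gamma

# Finite-difference estimate of f_m / (df_m/dt) at t = 0:
dt = 1e-6
f_m_dt = (M1 + Md1 * dt) / (M2 + Md2 * dt)
t_em_num = f_m * dt / (f_m_dt - f_m)
assert np.isclose(t_em, t_em_num, rtol=1e-4)
assert t_em < 0.0   # negative: the mass ratio decays toward unity
```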
A question arises regarding whether the timescales in Figure 8 hold for
binaries smaller than Pluto-Charon. Since $t_{\mathrm{growth}}$ of a single
body at 3 au is already within a few Myr once its radius exceeds
$\sim$300 km (Visser & Ormel, 2016), we estimate the $t_{\mathrm{em}}$ of
binaries with bodies between that size and Pluto ($\sim$1200 km) to be of the
same order as shown in Figure 8.
Figure 8: Timescale of $e$-folding towards equal mass $t_{\mathrm{em}}$ as
described in Equation 28 for two tested mass ratios, $f_{m}=[2,8],$ at two
different orbital distances, $r_{0}=[2.5,39]$ AU, and at different binary
separations, $a_{b}$, denoted by the opaqueness of the lines. We note that
only values where $\tilde{\varepsilon}<1$ are plotted, since we are looking at
the time it takes for the mass ratio, $f_{m}$, to evolve towards unity, not
away from it.
### 4.3 Limitations of the model
One important simplification we have made is the assumption of 2D accretion,
only looking at particles located in the midplane and ignoring the effects of
a vertical distribution among incoming pebbles. We speculate that dropping
this assumption may reduce the efficiency of the ’vacuum cleaning’ by the
quickly orbiting secondary; this effect warrants exploration in future work.
For now, we hypothesise that even though the specific values of growth ratios
may differ from the 3D results, the general trends will remain, as will the
parameter space of Stokes numbers, $\tau_{\mathrm{s}}$, where the growth ratio
flips in favour of the smaller body. Therefore, we expect the three main
regimes determining accretion rates in binaries to remain the same as in our
2D case.
We also ignore the effect of gas drag on the binary itself. While this
assumption is valid on the short timescale over which a single pebble
trajectory is computed, over long evolutionary times gas drag may affect the
orbit of the binary by removing angular momentum from it.
In this work, we only considered purely circular orbits in our binary systems.
Since truly circular orbits are rare, our results might differ for eccentric
binaries. However, most observed binaries in close orbits have been tidally
circularised (Noll et al., 2020), so we consider the assumption of circular
orbits justified. We estimate, moreover, that the trends will again stay
largely the same for eccentric binaries, owing to the spiralling nature of
pebbles in the ’die-hard’ pebble accretion regime.
We have only considered prograde orientations for the binary mutual motion.
The reason is that binaries forming in the context of the streaming
instability show a clear preference for prograde orbits, which is also the
predominant preference in observations of the cold classical Kuiper belt
population (Grundy et al., 2019; Nesvorny et al., 2020; Visser & Brouwers,
2022).
## 5 Conclusions
In this paper, we study the growth and evolution of a binary planet (or
planetesimal) system subject to pebble accretion. Specifically, we have
investigated the accretion efficiency of the individual binary companions and
the binary system as a whole for a variety of parameters such as Stokes
number, binary separation, and binary mass ratio. If the binary components are
pebble accreting with a mass ratio of unity, they accrete pebbles at an equal
rate, as expected from symmetry arguments. If the mass ratio is not unity, we
arrive at the following conclusions:
1.
A single planetary body that is accreting pebbles will generally accrete more
efficiently in a binary system with a more massive companion, if pebbles are
slowly spiralling toward the binary center of mass and these spirals extend
out to the binary separation $a_{\mathrm{b}}$ or beyond.
2.
In that case, the mass ratio starts off above unity and the growth rate of the
smaller body can be accelerated whilst the more massive one grows more slowly,
pushing the binary system toward a mass ratio of unity. For Stokes numbers
$0.01\lesssim\tau_{\mathrm{s}}\lesssim 0.1$, the growth timescale of the
smaller body can be three to five times shorter, changing the mass ratio
quickly.
3.
Generally, three regimes can be distinguished: (i) Stokes numbers are too
small and pebbles are tightly coupled to the gas. These pebbles impact the
more massive body, typically after a single encounter, with no accretion
advantage for the secondary. (ii) The same holds when Stokes numbers are too
large and pebbles meet the binary on ballistic orbits. (iii) Pebbles are
captured by the primary in orbits wider than the binary separation. During
the in-spiralling, the pebble has a high probability of being swept up by the
secondary, resulting in a converging mass ratio.
###### Acknowledgements.
We would like to thank J. Groot, G. Koning, R. M. van der Linden & M.
Rustenburg for their insightful comments and discussions. We thank everyone at
the disk meeting for feedback and improvements to the manuscript. RGV
acknowledges funding from the Dutch Research Council, ALWGO/15-01.
## References
* Agarwal et al. (2017) Agarwal, J., Jewitt, D., Mutchler, M., Weaver, H., & Larson, S. 2017, Nature, 549, 357
* Astakhov et al. (2005) Astakhov, S. A., Lee, E. A., & Farrelly, D. 2005, MNRAS, 360, 401
* Bai & Stone (2010) Bai, X.-N. & Stone, J. M. 2010, ApJ, 722, 1437
* Benecchi et al. (2009) Benecchi, S. D., Noll, K. S., Grundy, W. M., et al. 2009, Icarus, 200, 292
* Cameron & Ward (1976) Cameron, A. G. W. & Ward, W. R. 1976, in Lunar and Planetary Science Conference, Vol. 7, Lunar and Planetary Science Conference, 120
* Canup (2005) Canup, R. M. 2005, Science, 307, 546
* Canup (2011) Canup, R. M. 2011, AJ, 141, 35
* Canup (2012) Canup, R. M. 2012, Science, 338, 1052
* Christy & Harrington (1978) Christy, J. W. & Harrington, R. S. 1978, AJ, 83, 1005
* Es-hagh (2005) Es-hagh, M. 2005, Journal of the Earth & Space Physics, 31, 1
* Fehlberg (1969) Fehlberg, E. 1969, NASA Technical Report, NASA-TR-R-315
* Goldreich et al. (2002) Goldreich, P., Lithwick, Y., & Sari, R. 2002, Nature, 420, 643
* Grundy et al. (2019) Grundy, W. M., Noll, K. S., Roe, H. G., et al. 2019, Icarus, 334, 62
* Harris & Kaula (1975) Harris, A. W. & Kaula, W. M. 1975, Icarus, 24, 516
* Hayashi et al. (1985) Hayashi, C., Nakazawa, K., & Nakagawa, Y. 1985, in Protostars and Planets II, ed. D. C. Black & M. S. Matthews, 1100–1153
* Herschel (1833) Herschel, J. F. W. 1833, A Treatise on Astronomy
* Johansen & Klahr (2011) Johansen, A. & Klahr, H. 2011, Earth Moon and Planets, 108, 39
* Johansen et al. (2015) Johansen, A., Mac Low, M.-M., Lacerda, P., & Bizzarro, M. 2015, Science Advances, 1, 1500109
* Johansen et al. (2021) Johansen, A., Ronnet, T., Bizzarro, M., et al. 2021, Science Advances, 7, eabc0444
* Kozai (1962) Kozai, Y. 1962, AJ, 67, 591
* Lambrechts & Johansen (2012) Lambrechts, M. & Johansen, A. 2012, A&A, 544, A32
* Lee et al. (2007) Lee, E. A., Astakhov, S. A., & Farrelly, D. 2007, MNRAS, 379, 229
* Levison et al. (2015) Levison, H. F., Kretke, K. A., & Duncan, M. J. 2015, Nature, 524, 322
* Lidov (1962) Lidov, M. L. 1962, Planet. Space Sci., 9, 719
* Lyra et al. (2021) Lyra, W., Youdin, A. N., & Johansen, A. 2021, Icarus, 356, 113831
* Marchis et al. (2008) Marchis, F., Descamps, P., Baek, M., et al. 2008, Icarus, 196, 97
* Marchis et al. (2004) Marchis, F., Descamps, P., Hestroffer, D., Berthier, J., & de Pater, I. 2004, in AAS/Division for Planetary Sciences Meeting Abstracts, Vol. 36, AAS/Division for Planetary Sciences Meeting Abstracts #36, 46.02
* Marchis et al. (2012) Marchis, F., Enriquez, J. E., Emery, J. P., et al. 2012, Icarus, 221, 1130
* Merline et al. (2002) Merline, W. J., Weidenschilling, S. J., Durda, D. D., et al. 2002, in Asteroids III, 289–312
* Morbidelli & Nesvorný (2020) Morbidelli, A. & Nesvorný, D. 2020, in The Trans-Neptunian Solar System, ed. D. Prialnik, M. A. Barucci, & L. Young, 25–59
* Morishima & Watanabe (2001) Morishima, R. & Watanabe, S. 2001, Earth, Planets and Space, 53, 213
* Nesvorny et al. (2020) Nesvorny, D., Li, R., Simon, J. B., et al. 2020, Binary Planetesimal Formation from Gravitationally Collapsing Pebble Clouds
* Nesvorný et al. (2021) Nesvorný, D., Li, R., Simon, J. B., et al. 2021, Planetary Science Journal, 2, 27
* Nesvorný et al. (2019) Nesvorný, D., Li, R., Youdin, A. N., Simon, J. B., & Grundy, W. M. 2019, Nature Astronomy, 3, 808
* Nesvorný et al. (2010) Nesvorný, D., Youdin, A. N., & Richardson, D. C. 2010, AJ, 140, 785
* Noll et al. (2020) Noll, K., Grundy, W. M., Nesvorný, D., & Thirouin, A. 2020, in The Trans-Neptunian Solar System, ed. D. Prialnik, M. A. Barucci, & L. Young, 201–224
* Noll et al. (2008) Noll, K. S., Grundy, W. M., Stephens, D. C., Levison, H. F., & Kern, S. D. 2008, Icarus, 194, 758
* Ormel & Klahr (2010) Ormel, C. W. & Klahr, H. H. 2010, A&A, 520, A43
* Ormel et al. (2017) Ormel, C. W., Liu, B., & Schoonenberg, D. 2017, A&A, 604, A1
* Rozner et al. (2020) Rozner, M., Grishin, E., & Perets, H. B. 2020, MNRAS, 497, 5264
* Russell (2017) Russell, D. G. 2017, International Journal of Astronomy and Astrophysics, 7, 291
* Schlichting & Sari (2008) Schlichting, H. E. & Sari, R. 2008, ApJ, 673, 1218
* Schoonenberg et al. (2019) Schoonenberg, D., Liu, B., Ormel, C. W., & Dorn, C. 2019, A&A, 627, A149
* Stern & Levison (2002) Stern, S. A. & Levison, H. F. 2002, Highlights of Astronomy, 12, 205
* Visser et al. (2020) Visser, R., Ormel, C., Dominik, C., & Ida, S. 2020, Icarus, 335, 113380
* Visser & Brouwers (2022) Visser, R. G. & Brouwers, M. G. 2022, A&A, 663, A164
* Visser et al. (2021) Visser, R. G., Drażkowska, J., & Dominik, C. 2021, A&A, 647, A126
* Visser & Ormel (2016) Visser, R. G. & Ormel, C. W. 2016, Astronomy & Astrophysics, 586, A66
* Weidenschilling (1977a) Weidenschilling, S. J. 1977a, MNRAS, 180, 57
* Weidenschilling (1977b) Weidenschilling, S. J. 1977b, Ap&SS, 51, 153
* Youdin & Goodman (2005) Youdin, A. N. & Goodman, J. 2005, ApJ, 620, 459
††thanks: These authors have contributed equally to this work.
††thanks: E-mail: <EMAIL_ADDRESS>
# Dynamic and Thermodynamic Origins of Motility-Induced Phase Separation
Jie Su1,2, Zhiyu Cao1, Jin Wang2,3, Huijun Jiang1, Zhonghuai Hou1
1Department of Chemical Physics & Hefei National Research Center for Physical
Sciences at the Microscale, University of Science and Technology of China,
Hefei, Anhui 230026, China
2Center for Theoretical Interdisciplinary Sciences, Wenzhou Institute,
University of Chinese Academy of Sciences, Wenzhou 325001, China
3Department of Chemistry and of Physics and Astronomy, State University of New
York at Stony Brook, Stony Brook, New York 11794, USA
###### Abstract
Active matter systems are inherently out of equilibrium and break the detailed
balance (DB) at the microscopic scale, exhibiting vital collective phenomena
such as motility-induced phase separation (MIPS). Here, we introduce a coarse-
grained mapping method to probe DB breaking in the density-energy phase space,
which allows us to reveal the dynamic and thermodynamic origins of MIPS based
on nonequilibrium potential and flux landscape theory. Hallmarks of
nonequilibrium properties are manifested by identifying the visible
probability flux in the coarse-grained phase space. Remarkably, for a system
with activity just below the MIPS threshold, the flux tends to “tear up” the
single potential well of the uniform-density phase to create two wells of
phases with different densities, showing directly that the nonequilibrium
flux is the dynamic origin of MIPS. Moreover, we find that the obtained
entropy production rate (EPR) of the system undergoes a transition from nearly
independent of activity to increasing proportionally as activity increases
after the single well is “torn up”. The transition of EPR’s scaling behavior
might provide a hint of the thermodynamic origin of MIPS in the coarse-grained
space. Our findings propose a new route to explore the nonequilibrium nature
of active systems, and provide new insights into dynamic and thermodynamic
properties of MIPS.
Active matter exists across many scales in nature, ranging from microscopic
and mesoscopic swimmers, such as bacteria and active Janus spheres, to
macroscopic objects, such as fish, birds and horses [1, 2]. Since
the active systems break the detailed balance (DB) at the microscopic scale,
they cannot be described by equilibrium statistical mechanics, and the
nonequilibrium dynamics may manifest as curl flux in a phase space of
mesoscopic coordinates [3, 4, 5, 6, 7, 8, 9, 10]. In comparison with their
passive counterparts, active systems exhibit many novel dynamical behaviors,
such as the emergence of dynamic chirality [11, 12, 13, 14], functional self-
assembly [15, 16, 17, 18], abundant collective motions such as vortex, swarm
[18, 19, 20, 21, 22], and particularly well-known motility-induced phase
separation (MIPS) [23]. It has been observed that particles with purely
repulsive interactions can spontaneously undergo phase separation between
dense and dilute
fluid phases when the activity is higher than a certain threshold. The unique
nonequilibrium properties of MIPS have attracted tremendous research interests
[23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41,
42, 43, 44, 45, 46, 47, 48].
It is well-known that some physical properties of equilibrium systems have
singularities at the transition point, such as the discontinuous enthalpy
change in a first-order transition or the discontinuous change in heat
capacity in a second-order transition [49]. Nevertheless, it is quite
difficult to define state functions of nonequilibrium systems, such as Gibbs
free energy, so it is very important but challenging to figure out the
underlying mechanism of MIPS by other means. Previously, the “self-trapping”
effect of active particles was proposed by Cates et al. to understand the
mechanism of MIPS [23]. That is, active particles tend to accumulate where
they move more slowly and will slow down at high density, which then creates
positive feedback and further leads to MIPS [23, 24]. On this basis, a series
of theoretical analyses, including kinetic models [25, 26, 27], swim
pressure [28, 29, 30], scalar $\phi^{4}$ field theory [44, 45] and effective
Cahn-Hilliard equations [46, 47, 48], have been performed to gain deeper
insight into MIPS. However, the dynamic and thermodynamic origins of the occurrence of
MIPS and their connection to how DB violation propagates from the particle-
scale dynamics to the large-scale collective dynamics in active matter systems
remain unclear.
In this letter, we introduce a constructive method that maps the motions of
active particles in real space into a low-dimensional phase space spanned by
the local particle density and local particle energy. In the
coarse-grained phase space, the theoretical approach of nonequilibrium
potential and flux landscape theory [50, 51, 52, 53, 54, 55] is applied to
explore the dynamic and thermodynamic origins of MIPS. The obtained
nonequilibrium potential has only one potential well (representing the single
phase) for activity before the MIPS threshold. Interestingly, we find that the
flux inside the well tends to push local states of low density to be
increasingly lower, and those of higher density to be increasingly higher. In
other words, the flux tries to “tear up” the potential well to create new
wells of different densities. Further analysis reveals that the contribution
of the nonequilibrium flux depends nonmonotonically on particle activity and
shows a maximum value at the MIPS threshold, demonstrating that the
nonequilibrium flux is the dynamic origin of the occurrence of MIPS. Moreover,
it is observed that the obtained entropy production rate (EPR) is nearly
unchanged before the threshold, and increases rapidly as activity increases
after the threshold. The transition of the scaling behavior of EPR on activity
might be considered as the thermodynamic origin of MIPS.
_Coarse-grained mapping method_.–We propose a constructive method that maps
the motions of active particles in real space into a low-dimensional
coarse-grained phase space spanned by the local particle density and the
local particle energy, as follows. For $N$ spherical active Brownian particles
(ABPs) of diameter $\sigma$ and friction coefficient $\gamma$ in a quasi two-
dimensional space with size $L$ and periodic boundary conditions, the motion
of the _i-th_ ABP located at $\mathbf{r}_{i}$ obeys the following overdamped
Langevin equations,
$\mathbf{\dot{r}}_{i}=v_{0}\mathbf{n}_{i}-\gamma^{-1}\sum^{N}_{j=1,j\neq
i}\nabla_{\mathbf{r}_{i}}U({r}_{ij})+\bm{\xi}_{i},$ (1)
$\dot{\theta}_{i}=\zeta_{i}.$ (2)
Herein, $v_{0}$ and $\mathbf{n}_{i}=(\cos(\theta_{i}),\sin(\theta_{i}))$ are
the amplitude and direction of active speed, respectively, with $\theta_{i}$
being the angle of $\mathbf{n}_{i}$.
$\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}$ is the vector pointing from
the _j-th_ ABP to the _i-th_ ABP, and $r_{ij}$ is its norm. The interaction
between a pair of ABPs is described by the purely repulsive Weeks-Chandler-
Andersen (WCA) potential,
$U({r}_{ij})=4\epsilon[(\sigma/r_{ij})^{12}-(\sigma/r_{ij})^{6}+1/4]$ for
$r_{ij}<2^{1/6}\sigma$, and $U({r}_{ij})=0$ otherwise, where $\epsilon$ is the
interaction strength. $\bm{\xi}_{i}$ and $\zeta_{i}$ denote the independent
Gaussian white noises with time correlations
$\langle\bm{\xi}_{i}(t)\bm{\xi}_{j}(t^{\prime})\rangle=2D_{t}\mathbf{I}\delta_{ij}\delta(t-t^{\prime})$
and
$\langle\zeta_{i}(t)\zeta_{j}(t^{\prime})\rangle=2D_{r}\delta_{ij}\delta(t-t^{\prime})$,
where $\mathbf{I}$ is the unit matrix, $D_{t}=k_{B}T/\gamma$ is the
translation diffusion coefficient with $k_{B}$ the Boltzmann constant and $T$
the temperature, and $D_{r}$ is the rotational diffusion coefficient which
couples with the translational diffusivity $D_{r}=3D_{t}/\sigma^{2}$.
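The dynamics of Eqs. (1)-(2) can be sketched in a few lines; the snippet below is a minimal illustration in reduced units ($\sigma=k_{B}T=\gamma=1$), not the production code used for our simulations. The parameter values are illustrative and far smaller than our actual system, and the all-pairs Euler-Maruyama update is meant only to show the structure of the integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters in reduced units (sigma = k_B T = gamma = 1);
# much smaller than the production system (N = 30720, L = 200).
N, L, v0, eps_wca, dt = 64, 20.0, 40.0, 1.0, 1e-4
Dt = 1.0            # translational diffusion, k_B T / gamma
Dr = 3.0 * Dt       # rotational diffusion, 3 D_t / sigma^2

def wca_forces(pos):
    """All-pairs purely repulsive WCA forces with cutoff 2^(1/6) sigma."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                 # minimum-image convention
    r2 = (d ** 2).sum(-1)
    np.fill_diagonal(r2, np.inf)
    inside = r2 < 2.0 ** (1.0 / 3.0)         # r < 2^(1/6) sigma
    inv6 = np.where(inside, 1.0 / r2 ** 3, 0.0)
    fac = 24.0 * eps_wca * (2.0 * inv6 ** 2 - inv6) / np.where(inside, r2, 1.0)
    return (fac[..., None] * d).sum(axis=1)

# Start from a non-overlapping grid to avoid huge initial forces.
g = int(np.ceil(np.sqrt(N)))
pos = (np.indices((g, g)).reshape(2, -1).T[:N] + 0.5) * (L / g)
theta = rng.uniform(0.0, 2.0 * np.pi, N)

for _ in range(100):                         # Euler-Maruyama steps
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pos = pos + (v0 * n + wca_forces(pos)) * dt \
              + np.sqrt(2.0 * Dt * dt) * rng.normal(size=(N, 2))
    theta = theta + np.sqrt(2.0 * Dr * dt) * rng.normal(size=N)
    pos %= L                                 # periodic boundary conditions
```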
Based on the motion equations Eq. (1) and (2), we divide the real space into
$M\times M$ cells with size $a\times a$ ($M=L/a$) as shown in the left panel
in Fig. 1. In the _i-th_ cell, two coarse-grained variables, local particle
density and local total energy, can then be calculated as
$\rho_{i}(t)=n_{i}/a^{2}$, where $n_{i}$ is the number of particles in the
_i-th_ cell, and $E_{i}(t)=\sum_{q\in cell_{i}}v_{q}^{2}(t)/2$. Similarly, the
density-energy ($\rho-E$) space can also be divided into cells with steps
$\delta\rho$ along the $\rho$ dimension and $\delta E$ along the $E$ dimension
(the right panel in Fig. 1). Then, the _i-th_ cell in real space can be mapped
into the $(k,l)$ cell in $\rho-E$ space if
$\rho_{i}(t)\in[k\delta\rho,(k+1)\delta\rho)$ and $E_{i}(t)\in[l\delta
E,(l+1)\delta E)$ at time $t$. If there are different cells in real space of
the same $\rho(t)$ and $E(t)$, they will be mapped into the same cell in
$\rho-E$ space (blue cells in Fig. 1). Hence, the probability distribution
$P(\rho,E,t)$ in the $\rho-E$ phase space can then be calculated. For
convenience, in what follows, variables $\rho$ and $E$ with one subscript
denote the local density and local energy of cells in real space,
respectively, and those with two subscripts denote those of cells in the
$\rho-E$ space, unless otherwise stated.
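The mapping can be sketched as follows; the toy configuration and array names below are hypothetical, and only the binning logic mirrors the construction described above.

```python
import numpy as np

def map_to_rho_E(pos, vel, L, a, drho, dE, rho_max, E_max):
    """Bin particles into a x a cells, compute (rho_i, E_i) per cell, then
    histogram the cells to estimate P(rho, E) in the phase space."""
    M = int(L / a)
    cell = (pos // a).astype(int) % M
    idx = cell[:, 0] * M + cell[:, 1]               # flat cell index
    n = np.bincount(idx, minlength=M * M)
    rho = n / a**2                                  # local number density
    ke = 0.5 * (vel**2).sum(axis=1)                 # per-particle kinetic energy
    E = np.bincount(idx, weights=ke, minlength=M * M)
    # histogram of cells over (rho, E) bins -> estimate of P(rho, E)
    H, _, _ = np.histogram2d(rho, E,
                             bins=[np.arange(0.0, rho_max + drho, drho),
                                   np.arange(0.0, E_max + dE, dE)])
    return H / H.sum()

# Hypothetical toy configuration, for illustration only.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 20.0, (500, 2))
vel = rng.normal(size=(500, 2))
P = map_to_rho_E(pos, vel, L=20.0, a=2.0, drho=0.5, dE=2.0,
                 rho_max=5.0, E_max=20.0)
```

Cells in real space with the same $(\rho,E)$ land in the same phase-space bin, exactly as in Fig. 1.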
As will be shown later, the density-energy phase space is sufficiently
coarse-grained that the non-Markovianity of active matter systems does not
manifest at this scale. Hence, the nonequilibrium dynamics of each cell in the
$\rho-E$ phase space can be written as:
$\dot{\rho}_{kl}(t)=F_{\rho}(\rho_{kl},E_{kl})+\xi_{\rho}(\rho_{kl},E_{kl},t).$
(3) $\dot{E}_{kl}(t)=F_{E}(\rho_{kl},E_{kl})+\xi_{E}(\rho_{kl},E_{kl},t).$ (4)
Herein, $\dot{\rho}_{kl}(t)$ and $\dot{E}_{kl}(t)$ are calculated by averaging
over mapping cells in real space. $F_{\rho}$ and $F_{E}$ are the deterministic
“driving force” with
$F_{\rho}(\rho_{kl},E_{kl})=\langle\dot{\rho}_{kl}(t)\rangle$ and
$F_{E}(\rho_{kl},E_{kl})=\langle\dot{E}_{kl}(t)\rangle$ ($\langle\cdot\rangle$
denotes time averaging when the system reaches the steady state). $\xi_{\rho}$
and $\xi_{E}$ are the stochastic terms with time correlations
$\langle\bm{\xi}(\rho_{kl},E_{kl},t)\bm{\xi}(\rho_{kl},E_{kl},t^{\prime})\rangle=2\mathbf{D}(\rho_{kl},E_{kl})\delta(t-t^{\prime})$,
where $\bm{\xi}$ is a vector including the components of $\xi_{\rho}$ and
$\xi_{E}$, and $\mathbf{D}(\rho_{kl},E_{kl})$ is a $2\times 2$ diffusion
coefficient tensor consisting of $D_{\rho\rho}$, $D_{\rho E}$, $D_{E\rho}$ and
$D_{EE}$. This diffusion coefficient tensor can be obtained by the Fourier
transform of the time correlation functions of the stochastic term $\bm{\xi}$.
More details can be found in the Supplementary Information (SI). The power
spectrum functions for $v_{0}=100$ are shown in Fig.S1 in SI as an example,
which proves that $\xi_{\rho}$ and $\xi_{E}$ are white noises, demonstrating
that the dynamics of the active system in the $\rho-E$ phase space is indeed
Markovian.
Figure 1: Schematic of the coarse-grained mapping from the evolution of cells
in real space to that in the $\rho-E$ phase space. As an example, if the three
blue cells located at different positions in the real space have the same
$\rho$ and $E$, they will be mapped into the same cell with $\rho_{kl}=\rho$,
$E_{kl}=E$ in the phase space.
_Nonequilibrium potential and flux in the density-energy space_.–Based on our
coarse-grained mapping method, both the probability distribution $P(\rho,E,t)$
and the dynamical equation of cells [Eqs. (3) and (4)] in the $\rho-E$ phase
space can be obtained from the numerical simulations of active systems. We
then apply the nonequilibrium potential and flux theory [50, 51, 52, 53, 54,
55] to figure out the dynamic and thermodynamic origins of MIPS in the
density-energy phase space.
When the active system reaches the steady state, the probability distribution
$P(\rho,E,t)$ is unchanged over time, i.e., $\partial
P_{ss}(\rho,E,t)/\partial t=0$ (the subscript $ss$ denotes the steady state).
The effective nonequilibrium potential $U_{neq}$ can then be defined naturally
as $U_{neq}(\rho,E)=-\ln P_{ss}(\rho,E)$. According to the Fokker-Planck
equation, it is known that $\partial P/\partial t=-\nabla\cdot\mathbf{J}(t)$.
When the system is in equilibrium, the flux $\mathbf{J}=0$ so that the driving
force only depends on the gradient of the effective potential, i.e.,
$\mathbf{F}=(F_{\rho},F_{E})^{\rm T}=-\mathbf{D}\cdot\nabla U$ (the
superscript ${\rm T}$ denotes the transpose of a matrix). However, when the
system is in nonequilibrium, the flux $\mathbf{J}$ can exist in the form of a
rotational curl or more precisely recurrent field [50, 51, 52, 53, 54, 55],
and the steady-state nonequilibrium flux $\mathbf{J}_{ss}(\mathbf{x})$ with
$\mathbf{x}=(\rho,E)^{\rm T}$ can be described as
$\mathbf{J}_{ss}(\mathbf{x})=\mathbf{F}(\mathbf{x})P_{ss}(\mathbf{x})-\nabla_{\mathbf{x}}\cdot[\mathbf{D}P_{ss}(\mathbf{x})]$.
Here, the driving force $\mathbf{F}$ no longer depends solely on the gradient
of the nonequilibrium potential $U_{neq}$, because the steady-state flux field
$\mathbf{J}_{ss}$ contributes to $\mathbf{F}$. Therefore, $\mathbf{F}$ can be
decomposed into the gradient part ($\mathbf{F}_{gradient}$), the curl one
($\mathbf{F}_{curl}$) and the one related to the spatial dependent noise
($\mathbf{F}_{D}$),
$\mathbf{F}(\mathbf{x})=\mathbf{F}_{gradient}(\mathbf{x})+\mathbf{F}_{curl}(\mathbf{x})+\mathbf{F}_{D}(\mathbf{x})=-\mathbf{D}\cdot\nabla
U_{neq}(\mathbf{x})+\mathbf{v}_{ss}(\mathbf{x})+\nabla\cdot\mathbf{D}$ (in
Ito’s representation, $\nabla\cdot\mathbf{D}$ represents the divergence of the
diffusion tensor [50]). Here
$\mathbf{v}_{ss}(\mathbf{x})=\mathbf{J}_{ss}(\mathbf{x})/P_{ss}(\mathbf{x})$
is the local flow or velocity in steady states [56].
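The decomposition can be sketched numerically on a grid. The snippet below (a sketch with a synthetic Gaussian $P_{ss}$ and a constant diffusion tensor, not data from our simulations) computes $U_{neq}=-\ln P_{ss}$ and $\mathbf{J}_{ss}=\mathbf{F}P_{ss}-\nabla\cdot(\mathbf{D}P_{ss})$ with finite differences; for the chosen drift, the gradient part cancels and the flux is purely circulating.

```python
import numpy as np

def flux_field(P_ss, F, D, drho, dE):
    """U_neq = -ln P_ss and J_ss = F P_ss - div(D P_ss) on a (rho, E) grid.
    P_ss: (n, m); F: (2, n, m) drift field; D: constant (2, 2) tensor."""
    U_neq = -np.log(np.clip(P_ss, 1e-300, None))
    DP = np.einsum('ab,jk->abjk', D, P_ss)         # (D P_ss)_{ab}(rho, E)
    divDP = (np.gradient(DP[:, 0], drho, axis=1)   # d/drho of column 0
             + np.gradient(DP[:, 1], dE, axis=2))  # d/dE of column 1
    return U_neq, F * P_ss - divDP

# Synthetic steady state: Gaussian P_ss with gradient-plus-curl drift.
n = 41
rho, E = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n),
                     indexing='ij')
P = np.exp(-(rho**2 + E**2))
P /= P.sum()
D = np.eye(2)
F = np.stack([-2.0 * rho - 2.0 * E,      # -D.grad(U_neq) part ...
              2.0 * rho - 2.0 * E])      # ... plus a rotational part
U, J = flux_field(P, F, D, drho=0.1, dE=0.1)
# For this drift, J reduces to the purely circulating field 2(-E, rho) P,
# whose divergence vanishes, i.e., a genuine steady state.
```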
Combining our coarse-grained mapping method with the nonequilibrium potential
and flux theory, it is convenient to analyze the dynamic and thermodynamic
properties of MIPS in density-energy phase space based on numerical
simulations in real space. In simulations, $\sigma$ and $k_{B}T$ are basic
units for length and energy respectively, and $\gamma=1.0$ so that the basic
unit for time is $\gamma\sigma^{2}/(k_{B}T)$. We fix $L=200$, $\epsilon=1.0$,
and $N=30720$ so that the averaged number density becomes
$\rho_{0}=N/L^{2}=0.768$ and the volume fraction is
$\phi_{0}=\pi\sigma^{2}\rho_{0}/4=0.6$. Active systems at this volume
fraction undergo a phase transition from the single phase to the coexisting
phase at a threshold around $v_{0}=55$. We first run simulations from a random
initial configuration for a long time $t_{1}=500$ with the time step $\Delta
t=10^{-5}$ to ensure that the system reaches the steady state, and then
perform the simulations for another long time $t_{2}=500$ to attain the
steady-state evolution of ABPs. In the following, the data of local density
and local energy are rescaled by $\rho_{0}$ and the averaged local energy
$E_{0}$, i.e., $\rho^{\prime}=\rho/\rho_{0}$, $E^{\prime}=E/E_{0}$.
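The rescaling step can be sketched as follows (a hypothetical binning of stand-in particle positions; the bin count $B$ and array names are our assumptions, while $L$, $N$, and $\rho_{0}=N/L^{2}=0.768$ are taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, B = 200.0, 30720, 50               # box size, particle number, bins per side
pos = rng.uniform(0.0, L, size=(N, 2))   # stand-in particle positions

# Local number density per cell, then rescale by the global density rho_0.
counts, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=B, range=[[0, L], [0, L]])
cell_area = (L / B) ** 2
rho = counts / cell_area
rho0 = N / L**2                          # = 0.768 for the parameters above
rho_prime = rho / rho0

mean_rho_prime = rho.mean() / rho0       # exactly 1 by construction
```

The local energy field would be rescaled in the same way by the averaged local energy $E_{0}$.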
_Dynamic origin of MIPS_.–Firstly, we focus on the dynamics of the active
system just before the MIPS threshold. As an example, the nonequilibrium
potential of the active system with $v_{0}=40$ in the
$\rho^{\prime}-E^{\prime}$ phase space is presented in Fig. 2. It can be
observed that there is only one potential well representing a single phase
without any phase separation behaviors, which agrees with the typical snapshot
shown in the inset of Fig. 2. Moreover, the obtained flux field is further
presented as the black arrows in Fig. 2, where the nonzero flux demonstrates
that the DB at the coarse-grained scale is broken. More interestingly, it can
be found that the flux located inside the potential well can be explicitly
divided into two parts with opposite directions. For a given $E^{\prime}$, the
flux located in the region with small $\rho^{\prime}$ (on the left side of the
purple dashed line in Fig. 2) tends to push the local state toward smaller
$\rho^{\prime}$, driving $\rho^{\prime}$ ever lower. The flux located in the
region with large $\rho^{\prime}$ (on the right side of the purple dashed
line) pushes the local state toward larger $\rho^{\prime}$, driving
$\rho^{\prime}$ ever higher. Viewed instead at a fixed $\rho^{\prime}$, the flux points
to the lower $\rho^{\prime}$ direction when $E^{\prime}$ is large (above the
purple dashed line) while the flux points to the higher $\rho^{\prime}$
direction when $E^{\prime}$ is small (below the purple dashed line). This
means that the local state with high density or small average single-particle
energy will be pushed to the higher $\rho^{\prime}$ direction by the
nonequilibrium flux field, which is consistent with the self-trapping
mechanism of MIPS proposed by Cates et al [23]. In short, the flux field
consisting of two parts with opposite directions tries to “tear up” the
potential well to create new wells with different densities. As mentioned, the
driving force for the nonequilibrium dynamics can be decomposed into both
gradient of the landscape and the curl flux. While the gradient force tends to
attract the system down to the point attractor and stabilize it, the curl flux
which breaks the DB is rotational. Thus, the rotational force tends to create
flows rather than localizing at a point state. The dynamical effect
of the flux is thus to destabilize the point attractor state while stabilizing the
continuous flow in the state space. We therefore regard the
nonequilibrium flux force as the dynamic origin of MIPS.
Figure 2: The nonequilibrium potential and flux landscape of the active system
with $v_{0}=40$. The nonequilibrium potential is presented as the colored
background and the flux field is shown as the black arrows. The purple line
separates the regions of opposite flux direction, and the red arrows indicate
the overall flux direction of each region. The inset is a typical snapshot in the steady
state. Figure 3: (a) The nonequilibrium potential (colored background) and
flux (black arrows) of the active system with $v_{0}=100$. The inset is a
typical snapshot in the steady state. (b) Dependence of
$\langle|\mathbf{F}_{curl}|\rangle$ and
$\langle|\mathbf{F}_{curl}|\rangle/\langle|\mathbf{F}_{gradient}|\rangle$ on
$v_{0}$.
Next, the dynamics after MIPS occurs (such as $v_{0}=100$) is considered. As
presented in Fig. 3(a), it can be observed that the nonequilibrium potential
has two potential wells with different $\rho^{\prime}$, where the potential
well located at small $\rho^{\prime}$ represents the dilute phase, while the
one located at large $\rho^{\prime}$ denotes the dense phase, consistent with
the typical snapshot shown in the inset of Fig. 3(a). In addition, it is found
that the flux rotates in a counterclockwise direction between these two
potential wells, demonstrating again the DB breaking in the coarse-grained
space due to nonequilibrium activity. Note that for equilibrium
phase separation without any nonequilibrium flux, particles in one
potential well can hardly jump into the other with the sole
help of noise. However, particles can be easily pushed from one phase to the
other due to the nonequilibrium flux in active systems, which is consistent
with the observations in simulations.
To quantitatively characterize the “tearing effect” induced by the
nonequilibrium flux in the MIPS process, we calculate, as functions of the
activity $v_{0}$, the mean magnitude of the curl force,
$\langle|\mathbf{F}_{curl}|\rangle=\iint|\mathbf{J}_{ss}/P_{ss}|d\rho dE$,
and its ratio to the gradient force,
$\langle|\mathbf{F}_{curl}|\rangle/\langle|\mathbf{F}_{gradient}|\rangle$,
where $\langle|\mathbf{F}_{gradient}|\rangle=\iint|\mathbf{D}\cdot\nabla
U_{neq}|d\rho dE$. The integration bounds of $\rho$ are set to $[0.6,1.15]$
because only the flux located inside the regions with the density between the
dense and dilute phases contributes to the formation of MIPS. The dependence
of $\langle|\mathbf{F}_{curl}|\rangle$ and
$\langle|\mathbf{F}_{curl}|\rangle/\langle|\mathbf{F}_{gradient}|\rangle$ on
$v_{0}$ is illustrated in Fig. 3(b). It can be found that both
$\langle|\mathbf{F}_{curl}|\rangle$ (black symbols and line) and
$\langle|\mathbf{F}_{curl}|\rangle/\langle|\mathbf{F}_{gradient}|\rangle$ (red
symbols and line) depend nonmonotonically on $v_{0}$ and show maximal values
around the MIPS threshold, further demonstrating that the nonequilibrium flux
is the dynamic origin of the occurrence of MIPS.
_Thermodynamic origin of MIPS_.–Now we focus on the thermodynamics of MIPS. To
quantify DB breaking and time-reversal symmetry (TRS) in the density-energy
phase space, we introduce the noise-averaged global entropy production rate
$e_{p}$ in the steady states [56]
$\displaystyle e_{p}$ $\displaystyle=\iint\frac{\mathbf{J}_{ss}^{{\rm
T}}(\rho,E)\cdot\mathbf{D}^{-1}(\rho,E)\cdot\mathbf{J}_{ss}(\rho,E)}{P_{ss}(\rho,E)}d\rho
dE$ (5) $\displaystyle=\iint\mathbf{v}_{ss}^{{\rm
T}}(\rho,E)\cdot\mathbf{D}^{-1}(\rho,E)\cdot\mathbf{v}_{ss}(\rho,E)P_{ss}(\rho,E)d\rho
dE.$
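Numerically, Eq. (5) reduces to a Riemann sum over the $(\rho,E)$ grid. The sketch below (function and grid names are ours; a constant $\mathbf{D}$ is assumed) checks the quadrature against an exactly solvable case: a uniform $P_{ss}$ on the unit square with constant local flow $\mathbf{v}=(1,0)$ and $\mathbf{D}=\mathbf{I}$, for which $e_{p}=\iint|\mathbf{v}|^{2}P_{ss}\,d\rho dE=1$.

```python
import numpy as np

def entropy_production_rate(J, D_inv, P, drho, dE):
    """Riemann-sum estimate of Eq. (5): e_p = integral of J^T D^{-1} J / P_ss.

    J : (nr, ne, 2) flux field, D_inv : (2, 2), P : (nr, ne) steady-state density.
    """
    quad = np.einsum("ija,ab,ijb->ij", J, D_inv, J)  # J^T D^{-1} J per cell
    return float(np.sum(quad / P) * drho * dE)

# Exact check: uniform P_ss on [0,1]^2, J_ss = v P_ss with v = (1, 0), D = I.
nr = ne = 50
drho = dE = 1.0 / nr
P = np.ones((nr, ne))            # P_ss = 1, integrating to 1 on the unit square
J = np.zeros((nr, ne, 2))
J[..., 0] = P                    # flux from the constant flow v = (1, 0)
ep = entropy_production_rate(J, np.eye(2), P, drho, dE)
```

In practice $\mathbf{J}_{ss}$ and $P_{ss}$ would come from histogramming the simulated trajectories in the $(\rho,E)$ plane.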
Figure 4: Dependence of the entropy production rate $e_{p}$ on $v_{0}$. The
dashed lines indicate reference scalings with exponents 0, 1 and 0.5. The
inset shows the dependence of the derivative of $e_{p}$ on $v_{0}$, with the
dashed red line marking the MIPS threshold.
The dependence of $e_{p}$ on the particle activity $v_{0}$ is shown in
Fig. 4. It can be found that $e_{p}$ increases monotonically as $v_{0}$
increases, indicating that higher activity brings more dissipation into
the system, driving it further from equilibrium. More
interestingly, it can be observed that $e_{p}$ scales differently on $v_{0}$
before and after the MIPS threshold. When $v_{0}$ is lower than the threshold,
$e_{p}$ remains nearly unchanged. However, once $v_{0}$ exceeds the threshold,
$e_{p}$ increases linearly with a slope of approximately 1. To take a closer
look at this transition, the derivative of $e_{p}$ with respect to
$v_{0}$ is illustrated in the inset of Fig. 4. Clearly, $de_{p}/dv_{0}$ stays
slightly above $0$ before the MIPS threshold (the dashed red line), and
rapidly becomes very large when $v_{0}$ passes the threshold. Since the
steady-state flux is mainly contributed by the $E$ component of the local flow (see
SI for details), the net transition in the energy space dominates the scaling
behavior and sharpness of entropy production. The thermodynamic origin of MIPS
can be attributed to the sharp transition of scaling behavior of the entropy
production in the density-energy phase space. From the expression of the
entropy production, it is directly related to the flux measuring the degree of
DB breaking. The flux thus forms the dynamical basis of the nonequilibrium
thermodynamic cost. There is indeed a steep change in the nonequilibrium
thermodynamic cost, characterized by the EPR, when the MIPS system transforms
from a single phase (one attractor basin) to phase separation with two
coexisting phases (two attractor basins), suggesting that the EPR provides
the thermodynamic origin of MIPS. In addition, this observation provides a
reliable tool to identify the MIPS threshold between the single and coexisting
phases, which confirms that the phase transition point can be characterized in
terms of entropy production [57, 58, 59, 60, 61], especially in active matter
systems [62, 63, 64]. Compared with the other studies focusing on the entropy
production of active matter [65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76,
77, 78, 79], our findings provide fresh insights into the origin of MIPS.
In summary, the dynamic and thermodynamic origins of MIPS are investigated by
the nonequilibrium potential and flux landscape theory in the density-energy
phase space, through the coarse-grained mapping method proposed here.
As a result, it is found that the nonequilibrium flux with opposite directions
tends to “tear up” the single nonequilibrium potential well before the MIPS
threshold. This not only demonstrates directly that the nonequilibrium flux is
the dynamic origin of the occurrence of MIPS but also provides evidence that
the DB is also significantly broken at the coarse-grained scale. In addition,
intensive simulations reveal that the EPR shows a transition of scaling
behavior around the MIPS threshold, which serves as the thermodynamic origin
of MIPS. Our findings bring new insights and offer a new route to
understanding the nonequilibrium nature of MIPS.
J. Su, Z. Cao, H. Jiang and Z. Hou are supported by MOST(2018YFA0208702), NSFC
(32090044, 21973085, 21833007), Anhui Initiative in Quantum Information
Technologies (AHY090200), and the Fundamental Research Funds for the Central
Universities (WK2340000104).
## References
* Bechinger et al. [2016] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Rev. Mod. Phys. 88, 045006 (2016).
* Vicsek and Zafeiris [2012] T. Vicsek and A. Zafeiris, Phys. Rep. 517, 71 (2012), ISSN 0370-1573.
* Battle et al. [2016] C. Battle, C. P. Broedersz, N. Fakhri, V. F. Geyer, J. Howard, C. F. Schmidt, and F. C. MacKintosh, Science 352, 604 (2016).
* Gnesotto et al. [2018] F. S. Gnesotto, F. Mura, J. Gladrow, and C. P. Broedersz, Rep. Prog. Phys. 81, 066601 (2018).
* Gladrow et al. [2016] J. Gladrow, N. Fakhri, F. C. MacKintosh, C. Schmidt, and C. Broedersz, Phys. Rev. Lett. 116, 248301 (2016).
* Mura et al. [2018] F. Mura, G. Gradziuk, and C. P. Broedersz, Phys. Rev. Lett. 121, 038002 (2018).
* Seara et al. [2018] D. S. Seara, V. Yadav, I. Linsmeier, A. P. Tabatabai, P. W. Oakes, S. Tabei, S. Banerjee, and M. P. Murrell, Nat. Commun. 9, 1 (2018).
* Gladrow et al. [2017] J. Gladrow, C. P. Broedersz, and C. F. Schmidt, Phys. Rev. E 96, 022408 (2017).
* Dieball and Godec [2022a] C. Dieball and A. Godec, Phys. Rev. Lett. 129, 140601 (2022a).
* Dieball and Godec [2022b] C. Dieball and A. Godec, arXiv preprint arXiv:2204.06553 (2022b).
* DiLuzio et al. [2005] W. R. DiLuzio, L. Turner, M. Mayer, P. Garstecki, D. B. Weibel, H. C. Berg, and G. M. Whitesides, Nature 435, 1271 (2005).
* Riedel et al. [2005] I. H. Riedel, K. Kruse, and J. Howard, Science 309, 300 (2005).
* Kümmel et al. [2013] F. Kümmel, B. ten Hagen, R. Wittkowski, I. Buttinoni, R. Eichhorn, G. Volpe, H. Löwen, and C. Bechinger, Phys. Rev. Lett. 110, 198302 (2013).
* Gibbs et al. [2011] J. Gibbs, S. Kothari, D. Saintillan, and Y.-P. Zhao, Nano Lett. 11, 2543 (2011).
* Mallory and Cacciuto [2019] S. A. Mallory and A. Cacciuto, J. Am. Chem. Soc. 141, 2500 (2019).
* Gou et al. [2019] Y. Gou, H. Jiang, and Z. Hou, Soft Matter 15, 9104 (2019).
* Du et al. [2019] Y. Du, H. Jiang, and Z. Hou, J. Chem. Phys. 151, 154904 (2019).
* Yan et al. [2016] J. Yan, M. Han, J. Zhang, C. Xu, E. Luijten, and S. Granick, Nat. Mater. 15, 1095 (2016).
* Sumino et al. [2012] Y. Sumino, K. H. Nagai, Y. Shitaka, D. Tanaka, K. Yoshikawa, H. Chaté, and K. Oiwa, Nature 483, 448 (2012).
* Jiang et al. [2017] H. Jiang, H. Ding, M. Pu, and Z. Hou, Soft matter 13, 836 (2017).
* Karani et al. [2019] H. Karani, G. E. Pradillo, and P. M. Vlahovska, Phys. Rev. Lett. 123, 208002 (2019).
* Gou et al. [2020] Y.-l. Gou, H.-j. Jiang, and Z.-h. Hou, Chin. J. Chem. Phys. 33, 717 (2020).
* Tailleur and Cates [2008] J. Tailleur and M. E. Cates, Phys. Rev. Lett. 100, 218103 (2008).
* Cates and Tailleur [2015] M. E. Cates and J. Tailleur, Annu. Rev. Condens. Matter Phys. 6, 219 (2015).
* Redner et al. [2013a] G. S. Redner, M. F. Hagan, and A. Baskaran, Phys. Rev. Lett. 110, 055701 (2013a).
* Redner et al. [2013b] G. S. Redner, A. Baskaran, and M. F. Hagan, Phys. Rev. E 88, 012305 (2013b).
* Redner et al. [2016] G. S. Redner, C. G. Wagner, A. Baskaran, and M. F. Hagan, Phys. Rev. Lett. 117, 148002 (2016).
* Takatori et al. [2014] S. C. Takatori, W. Yan, and J. F. Brady, Phys. Rev. Lett. 113, 028103 (2014).
* Takatori and Brady [2015] S. C. Takatori and J. F. Brady, Phys. Rev. E 91, 032117 (2015).
* Patch et al. [2017] A. Patch, D. Yllanes, and M. C. Marchetti, Phys. Rev. E 95, 012601 (2017).
* Fily et al. [2014] Y. Fily, S. Henkes, and M. C. Marchetti, Soft Matter 10, 2132 (2014).
* Stenhammar et al. [2014] J. Stenhammar, D. Marenduzzo, R. J. Allen, and M. E. Cates, Soft Matter 10, 1489 (2014).
* Zöttl and Stark [2014] A. Zöttl and H. Stark, Phys. Rev. Lett. 112, 118101 (2014).
* Furukawa et al. [2014] A. Furukawa, D. Marenduzzo, and M. E. Cates, Phys. Rev. E 90, 022303 (2014).
* Blaschke et al. [2016] J. Blaschke, M. Maurer, K. Menon, A. Zöttl, and H. Stark, Soft Matter 12, 9821 (2016).
* Stenhammar et al. [2015] J. Stenhammar, R. Wittkowski, D. Marenduzzo, and M. E. Cates, Phys. Rev. Lett. 114, 018301 (2015).
* Mandal et al. [2019] S. Mandal, B. Liebchen, and H. Löwen, Phys. Rev. Lett. 123, 228001 (2019).
* Siebert et al. [2017] J. T. Siebert, J. Letz, T. Speck, and P. Virnau, Soft Matter 13, 1020 (2017).
* Liao and Klapp [2018] G.-J. Liao and S. H. L. Klapp, Soft Matter 14, 7873 (2018).
* Su et al. [2021] J. Su, H. Jiang, and Z. Hou, New J. Phys. 23, 013005 (2021).
* Du et al. [2020] Y. Du, H. Jiang, and Z. Hou, Soft Matter 16, 6434 (2020).
* Caprini et al. [2020a] L. Caprini, U. Marini Bettolo Marconi, and A. Puglisi, Phys. Rev. Lett. 124, 078001 (2020a).
* Caprini et al. [2020b] L. Caprini, U. Marini Bettolo Marconi, C. Maggi, M. Paoluzzi, and A. Puglisi, Phys. Rev. Research 2, 023321 (2020b).
* Wittkowski et al. [2014] R. Wittkowski, A. Tiribocchi, J. Stenhammar, R. J. Allen, D. Marenduzzo, and M. E. Cates, Nat. Commun. 5, 4351 (2014).
* Tjhung et al. [2018] E. Tjhung, C. Nardini, and M. E. Cates, Phys. Rev. X 8, 031080 (2018).
* Speck et al. [2014] T. Speck, J. Bialké, A. M. Menzel, and H. Löwen, Phys. Rev. Lett. 112, 218304 (2014).
* Speck et al. [2015] T. Speck, A. M. Menzel, J. Bialké, and H. Löwen, J. Chem. Phys. 142, 224109 (2015).
* Rapp et al. [2019] L. Rapp, F. Bergmann, and W. Zimmermann, Eur. Phys. J. E 42, 57 (2019).
* Landau and Lifshitz [2013] L. D. Landau and E. M. Lifshitz, _Statistical Physics: Volume 5_ , vol. 5 (Elsevier, 2013).
* Wang et al. [2008] J. Wang, L. Xu, and E. Wang, Proc. Natl. Acad. Sci. U. S. A. 105, 12271 (2008).
* Li and Wang [2014] C. Li and J. Wang, Proc. Natl. Acad. Sci. U. S. A. 111, 14130 (2014).
* Fang et al. [2019] X. Fang, K. Kruse, T. Lu, and J. Wang, Rev. Mod. Phys. 91, 045004 (2019).
* Chu and Wang [2020] X. Chu and J. Wang, Appl. Phys. Rev. 7, 031403 (2020).
* Wang [2015] J. Wang, Adv. Phys. 64, 1 (2015).
* Fang and Wang [2020] X. Fang and J. Wang, Annu. Rev. Biophys. 49 (2020).
* Seifert [2012] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
* Crochik and Tomé [2005] L. Crochik and T. Tomé, Phys. Rev. E 72, 057103 (2005).
* Tomé and de Oliveira [2012] T. Tomé and M. J. de Oliveira, Phys. Rev. Lett. 108, 020601 (2012).
* Noa et al. [2019] C. F. Noa, P. E. Harunari, M. de Oliveira, and C. Fiore, Phys. Rev. E 100, 012104 (2019).
* Seara et al. [2021] D. S. Seara, B. B. Machta, and M. P. Murrell, Nat. Commun. 12, 1 (2021).
* Xiao et al. [2008] T. J. Xiao, Z. Hou, and H. Xin, J. Chem. Phys. 129, 114506 (2008).
* Shim et al. [2016] P.-S. Shim, H.-M. Chun, and J. D. Noh, Phys. Rev. E 93, 012113 (2016).
* Cao et al. [2022] Z. Cao, J. Su, H. Jiang, and Z. Hou, Phys. Fluids 34, 053310 (2022).
* Xu and Wang [2020] L. Xu and J. Wang, J. Phys. Chem. B 124, 2549 (2020).
* Fodor et al. [2016] É. Fodor, C. Nardini, M. E. Cates, J. Tailleur, P. Visco, and F. Van Wijland, Phys. Rev. Lett. 117, 038103 (2016).
* Mandal et al. [2017] D. Mandal, K. Klymko, and M. R. DeWeese, Phys. Rev. Lett. 119, 258001 (2017).
* Dabelow et al. [2019] L. Dabelow, S. Bo, and R. Eichhorn, Phys. Rev. X 9, 021009 (2019).
* Ganguly and Chaudhuri [2013] C. Ganguly and D. Chaudhuri, Phys. Rev. E 88, 032102 (2013).
* Speck [2016] T. Speck, Europhys. Lett. 114, 30006 (2016).
* Nardini et al. [2017] C. Nardini, É. Fodor, E. Tjhung, F. Van Wijland, J. Tailleur, and M. E. Cates, Phys. Rev. X 7, 021007 (2017).
* Shankar and Marchetti [2018] S. Shankar and M. C. Marchetti, Phys. Rev. E 98, 020604 (2018).
* Szamel [2019] G. Szamel, Phys. Rev. E 100, 050603 (2019).
* Crosato et al. [2019] E. Crosato, M. Prokopenko, and R. E. Spinney, Phys. Rev. E 100, 042613 (2019).
* Cao et al. [2021] Z. Cao, H. Jiang, and Z. Hou, J. Chem. Phys. 155, 234901 (2021).
* Tociu et al. [2019] L. Tociu, É. Fodor, T. Nemoto, and S. Vaikuntanathan, Phys. Rev. X 9, 041026 (2019).
* Nemoto et al. [2019] T. Nemoto, É. Fodor, M. E. Cates, R. L. Jack, and J. Tailleur, Phys. Rev. E 99, 022605 (2019).
* Cagnetta et al. [2017] F. Cagnetta, F. Corberi, G. Gonnella, and A. Suma, Phys. Rev. Lett. 119, 158002 (2017).
* Guo et al. [2021] B. Guo, S. Ro, A. Shih, T. V. Phan, R. H. Austin, S. Martiniani, D. Levine, and P. M. Chaikin, arXiv preprint arXiv:2105.12707 (2021).
* Bowick et al. [2022] M. J. Bowick, N. Fakhri, M. C. Marchetti, and S. Ramaswamy, Phys. Rev. X 12, 010501 (2022).
# Color-avoiding percolation on the Erdős-Rényi random graph
Lyuben Lichev Univ. Jean Monnet, Saint-Etienne, France Univ. Claude Bernard
Lyon 1, Lyon, France Bruno Schapira Univ. Claude Bernard Lyon 1, Lyon,
France Univ. Aix-Marseille, Marseille, France
###### Abstract
We consider a recently introduced model of color-avoiding percolation defined
as follows. Every edge in a graph $G$ is colored in some of $k\geq 2$ colors.
Two vertices $u$ and $v$ in $G$ are said to be _CA-connected_ if $u$ and $v$
may be connected using any subset of $k-1$ colors. CA-connectivity defines an
equivalence relation on the vertex set of $G$ whose classes are called _CA-
components_.
We study the component structure of a randomly colored Erdős-Rényi random
graph of constant average degree. We distinguish three regimes for the size of
the largest component: a supercritical regime, a so-called intermediate
regime, and a subcritical regime, in which the largest CA-component has
respectively linear, logarithmic, and bounded size. Interestingly, in the
subcritical regime, the bound is deterministic and given by the number of
colors.
Keywords: color-avoiding percolation, Erdős-Rényi random graph.
MSC Class: 05C80, 60C05, 60K35, 82B43
## 1 Introduction
In this paper, we are interested in the model of (edge-)color-avoiding
percolation defined as follows. Fix a set of $k\geq 2$ colors and a graph $G$,
and color every edge of $G$ in at least one color. We say that two vertices
$u$ and $v$ in $G$ are _color-avoiding connected_ , or _CA-connected_ for
short, if $u$ and $v$ may be connected using any subset of $k-1$ colors. In
fact, CA-connectivity defines an equivalence relation on the vertex set of $G$
whose classes are called _CA-components_. The model has been motivated by a
number of real-world applications, for example, avoiding a set of mistrusted
information channels (where colors correspond to eavesdroppers), or avoiding a
set of possibly corrupted links in a network. In a sense, a network with large
CA-components may be considered resistant to attacks by a set of adversaries
who jointly control all channels but can only attack the network
separately.
Color-avoiding percolation was introduced by Krause, Danziger, and Zlatić [7,
8]. In their work, the authors were interested in vertex-colored graphs and
analyzed a vertex analog of CA-connectivity. While some empirical observations
were made for scale-free networks, the focus was put on Erdős-Rényi random
graphs due to their better CA-connectivity [7]. In a subsequent work, Kadović,
Krause, Caldarelli, and Zlatić [5] defined mixed CA-percolation where both
vertices and edges have colors. To a large extent, each of these foundational
papers based its conclusions on experimental evidence.
Precise mathematical treatment of the subject is challenging for several
reasons. Firstly, unlike connected components in a graph, the CA-components
cannot be found by a local exploration of the graph in general. Indeed, note
that even if two vertices are neighbors in the graph, all paths that connect
them and avoid a certain color may be rather long. Secondly, while one edge in
a graph may merge at most two components or divide a single connected
component into two parts, a single colored edge may merge many different
CA-components at once. For example, consider two parallel paths of length
$2\ell+1$, and for every $i\in[2\ell+1]$, connect the $i$-th vertex in one of
the paths with the $i$-th vertex in the other path. Also, color the odd edges
in the paths in blue, the even edges in red, and the edges between them in
green, see Figure 1. Then, it may be easily checked that all CA-components in
the obtained graph are of size 1, while adding a blue edge between the last
vertices in the two paths creates $2\ell+1$ components of size 2 at once. Last
but not least, in the framework of random graphs, deleting the edges in two
different colors from the original graph leads to two distinct subgraphs that
can have a large intersection, and can therefore be highly correlated.
Figure 1: Blue, red and green edges are represented by dotted, dashed, and
solid segments, respectively. One may easily check that on the left, every
vertex is alone in its CA-component, while on the right, the addition of a
single blue edge leads to the appearance of many CA-components of size 2.
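Since two vertices are CA-connected exactly when they lie in the same connected component of every color-deleted graph, CA-components are the common refinement of the $k$ component partitions. The example of Figure 1 can therefore be verified mechanically; the self-contained sketch below (our own construction and vertex numbering, with $\ell=2$) reproduces the claim.

```python
def components(n, edges):
    """Connected-component label per vertex, via union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(x) for x in range(n)]

def ca_components(n, colored_edges, colors):
    """CA-components as the common refinement of the partitions of the G^i."""
    labels = [components(n, [(u, v) for u, v, col in colored_edges if col != c])
              for c in colors]
    groups = {}
    for v in range(n):
        groups.setdefault(tuple(lab[v] for lab in labels), []).append(v)
    return list(groups.values())

# Two paths of length 2l+1 (top vertices 0..2l+1, bottom offset by m): odd path
# edges blue, even ones red, rungs green -- the construction of Figure 1.
l = 2
m = 2 * l + 2                        # vertices per path
edges = []
for j in range(1, 2 * l + 2):        # path edges, in both paths
    col = "blue" if j % 2 == 1 else "red"
    edges += [(j - 1, j, col), (m + j - 1, m + j, col)]
edges += [(i, m + i, "green") for i in range(2 * l + 1)]  # rungs

before = ca_components(2 * m, edges, ["blue", "red", "green"])
after = ca_components(2 * m, edges + [(m - 1, 2 * m - 1, "blue")],
                      ["blue", "red", "green"])
sizes_after = sorted(len(c) for c in after)
```

Running it confirms that all $2m$ CA-components are singletons before the extra blue edge is added, and that the addition creates $2\ell+1$ CA-components of size 2 at once.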
To our knowledge, the behavior of color-avoiding percolation on random graphs
has only been treated rigorously in a recent work
of Ráth, Varga, Fekete, and Molontay [10]. In their paper, they show that
under a certain subcriticality assumption, the number of CA-components of a
given fixed size renormalized by $n$ converges in probability to a fixed
constant. Moreover, under the same assumption, it is proved that the size of
the largest CA-component renormalized by $n$ converges in probability to a
fixed constant. They also characterize the behavior of that constant in the
barely supercritical regime.
Our goal here is to go further in the analysis of the structure of CA-
components in a randomly colored Erdős-Rényi random graph around the threshold
of appearance of a giant component. Apart from simplifying the approach of
[10] and getting rid of their additional hypothesis, we show that the
parameter space may be naturally divided into three regimes. In each of them,
we conduct a careful analysis of the size of the largest CA-component as well
as the number of small CA-components.
### 1.1 Main results
For a positive integer $m$, we denote $[m]=\\{1,\ldots,m\\}$. In particular,
we reserve the notation $[k]$ to denote the set of colors, and the notation
$[n]$ for the set of vertices of our graphs. Recall that for $p\in[0,1]$, the
Erdős-Rényi random graph $G(n,p)$, or ER random graph for short, with
parameters $n$ and $p$ is the graph on the vertex set $[n]$ where the edge
between any two distinct vertices is present with probability $p$,
independently from all other edges.
Consider now a non-increasing sequence of positive real numbers
$\lambda_{1}\geq\dots\geq\lambda_{k}$, and define a family of $k$ independent
Erdős-Rényi random graphs $G_{i}=G(n,\tfrac{\lambda_{i}}{n})$ for $i\in[k]$ on
the same vertex set $[n]$ (alternatively, this family can be seen as a
multigraph $([n],E_{1},\dots,E_{k})$ where $E_{i}$ is the edge set of
$G_{i}$). In order to easily refer to and distinguish the graphs
$(G_{i})_{i=1}^{k}$, we say that for every $i\in[k]$, the edges of $G_{i}$ are
given color $i$. Define further
$\Lambda=\lambda_{1}+\dots+\lambda_{k},\quad\text{and}\quad\lambda_{i}^{*}=\Lambda-\lambda_{i}\quad\text{for
every }i\in[k].$
In particular,
$\lambda_{1}^{*}\leq\lambda_{2}^{*}\leq\dots\leq\lambda_{k}^{*}$. Also, we set
$G=G_{1}\cup\ldots\cup G_{k},$ (1)
and for $I\subseteq[k]$,
$G_{I}=\bigcup_{i\in I}G_{i},\quad\text{and}\quad
G^{I}=\bigcup_{i\in[k]\setminus I}G_{i},$
with the shorthand notation $G_{i}$ and $G^{i}$, respectively, when
$I=\\{i\\}$.
Recall that two vertices $u$ and $v$ in $G$ are CA-connected if $u$ and $v$
are connected in each of the graphs $(G^{i})_{i=1}^{k}$. Moreover, CA-
connectivity is an equivalence relation with classes called CA-components. The
CA-component of a vertex $u\in[n]$ is denoted by $\widetilde{\mathcal{C}}(u)$,
and $|\widetilde{\mathcal{C}}(u)|$ denotes its size. Our main object of
interest is the size of the largest CA-component in $G$. Under the assumption
that $\sum_{i\in I}\lambda_{i}<1$ for every set $I\subseteq[k]$ of size $k-2$,
it was shown in [10] that there exists a constant $a\in[0,1]$ such that
$\frac{\max_{u\in[n]}|\widetilde{\mathcal{C}}(u)|}{n}\xrightarrow[n\to\infty]{\mathbb{P}}a,$
(2)
and that the constant $a$ is positive if and only if $\lambda_{1}^{*}>1$,
which is called the _supercritical regime_. Here, we improve this result in
three directions:
* •
We observe that (2) can be easily derived from the convergence in distribution
of the neighborhood of a typical vertex, and this does not require any
additional technical assumptions.
* •
In the subcritical regime (that is, when $\lambda_{k}^{*}<1$), we prove that
asymptotically almost surely (a.a.s.), any CA-component has size at most $k$.
Moreover, the random variables $(N_{\ell})_{\ell=2}^{k}$ which count the
number of CA-components of size $2,\ldots,k$, respectively, jointly converge
in distribution to independent Poisson random variables.
* •
We start investigating the more difficult intermediate regime (that is, when
$\lambda_{k}^{*}>1>\lambda_{1}^{*}$). In this setting, we show a weak law of
large numbers under the assumption that $\lambda_{k}^{*}>1>\lambda_{k-1}^{*}$.
These results are summarized in the following theorem.
###### Theorem 1.1.
Suppose that $k\geq 2$.
1. (i)
There exists $a_{1}\in[0,1)$, such that
$\frac{\max_{u\in[n]}|\widetilde{\mathcal{C}}(u)|}{n}\xrightarrow[n\to\infty]{\mathbb{P}}a_{1}.$
Moreover, one has $a_{1}>0$ if and only if $\lambda_{1}^{*}>1$.
2. (ii)
If $\lambda_{k}^{*}>1>\lambda_{k-1}^{*}$, then there is a positive constant
$a_{2}$ such that
$\frac{\max_{u\in[n]}|\widetilde{\mathcal{C}}(u)|}{\log
n}\xrightarrow[n\to\infty]{\mathbb{P}}a_{2}.$
3. (iii)
If $\lambda_{k}^{*}<1$, then a.a.s.
$\max_{u\in[n]}|\widetilde{\mathcal{C}}(u)|\leq k$. Moreover, there are
positive constants $\beta_{2},\ldots,\beta_{k}$, such that
$(N_{2},\ldots,N_{k})\xrightarrow[n\to\infty]{d}\bigotimes_{\ell=2}^{k}\mathrm{Po}(\beta_{\ell}),$
that is, the random variables $(N_{\ell})_{\ell=2}^{k}$ jointly converge in
distribution to $k-1$ independent Poisson variables with means
$\beta_{2},\ldots,\beta_{k}$ as $n\to\infty$.
Note that in the above result, we recover an analog of the famous transition
from a linear to a logarithmic size for the largest connected component, which
appears in an ER random graph $G(n,\tfrac{\lambda}{n})$ as the parameter
$\lambda$ crosses the critical value $1$. However, unlike this standard
setting, one original feature of CA-percolation is that there is an additional
regime where the size of the largest CA-component remains bounded.
Furthermore, the critical cases $\lambda_{1}^{*}=1$ and $\lambda_{k}^{*}=1$
appear to be more subtle, see Proposition 1.3 below and Section 4 for more on
this.
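Concretely, the regime depends on the parameters only through $\lambda_{1}^{*}$ and $\lambda_{k}^{*}$. For illustration, a small helper (names ours) classifies a parameter vector, leaving the critical boundary cases $\lambda^{*}=1$ aside:

```python
def regime(lambdas):
    """Classify the CA-percolation regime from offspring rates (lambda_1..lambda_k).

    With Lambda = sum(lambdas) and lambda_i^* = Lambda - lambda_i, the largest
    lambda gives the smallest lambda^* and vice versa.
    """
    lam = sorted(lambdas, reverse=True)
    total = sum(lam)
    lam1_star, lamk_star = total - lam[0], total - lam[-1]
    if lam1_star > 1:
        return "supercritical"   # largest CA-component of linear size
    if lamk_star < 1:
        return "subcritical"     # largest CA-component of size at most k
    return "intermediate"        # logarithmic size (critical cases aside)

examples = [regime([2.0, 1.5]), regime([1.2, 0.3]), regime([0.3, 0.2])]
```

For instance, with $k=2$, the vector $(1.2,0.3)$ has $\lambda_{1}^{*}=0.3<1<1.2=\lambda_{2}^{*}$ and hence falls in the intermediate regime.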
Now, we take a closer look at each of the three parts of Theorem 1.1. Part (i)
should not be very surprising as, on the one hand, if $\lambda_{1}^{*}\leq 1$,
then a.a.s. the largest connected component in $G^{1}$ has sublinear size, and
thus the largest CA-component as well. On the other hand, if
$\lambda_{1}^{*}>1$, then a.a.s. the largest CA-component is obtained by
intersecting the largest connected components in each of the graphs
$G^{1},\dots,G^{k}$, which all have linear size. However, we stress that the
whole point of the proof is to handle the lack of independence between these
components. To do this, we use a local limit argument for a sequence of
randomly colored ER random graphs, which also allows us to recover and improve
in a simple way a result of [10] on the number of CA-components of given size.
###### Proposition 1.2.
There exists a sequence of non-negative real numbers $(\nu_{\ell})_{\ell\geq
1}$ such that $\sum_{\ell\geq 1}\nu_{\ell}=1-a_{1}$, and for each $\ell\geq
1$,
$\frac{|\\{u\in[n]:|\widetilde{\mathcal{C}}(u)|=\ell\\}|}{n}=\frac{\ell\cdot
N_{\ell}}{n}\xrightarrow[n\to\infty]{\mathbb{P}}\nu_{\ell}.$ (3)
Moreover, one has $\nu_{\ell}>0$ for all $\ell\geq 1$ if and only if
$\lambda_{k}^{*}>1$, while if $\lambda_{k}^{*}\leq 1$, then $\nu_{1}=1$.
Altogether, this proposition and Part (i) of Theorem 1.1 answer a question
from [10] by showing that one can get rid of their technical assumption.
For Part (ii) of Theorem 1.1, the main observation is that the largest CA-
component comes from intersecting a connected component in $G_{k}$ with the
largest component of $G^{k}$. In fact this argument also gives us that the
size of the largest CA-component in the critical case
$\lambda_{k}^{*}=1>\lambda_{k-1}^{*}$ is tight.
###### Proposition 1.3.
Suppose that $\lambda_{k}^{*}=1>\lambda_{k-1}^{*}$. Then,
$\sup_{n\geq 1}\
\mathbb{P}\Big{(}\max_{u\in[n]}|\widetilde{\mathcal{C}}(u)|\geq
M\Big{)}\xrightarrow[M\to\infty]{}0.$
The question of whether, in this critical case, the size of the largest CA-
component converges in distribution remains open. In fact, it is not even
known whether its support is asymptotically bounded by a deterministic constant.
In the remaining intermediate regime, i.e. when $\lambda_{k-1}^{*}\geq
1>\lambda_{1}^{*}$, one can easily show that the size of the largest CA-
component is still of logarithmic order. However, proving concentration seems
to be a challenging problem, which remains out of reach with our present
techniques.
Part (iii), as we already mentioned, is arguably the most original part. In
this case, the largest connected component in each of the graphs
$G^{1},\dots,G^{k}$ has only logarithmic size, which makes it very difficult
for a pair of vertices to be connected in all $k$ graphs. This explains, at
least heuristically, why the largest CA-component has bounded size, although
the fact that the bound is deterministic may appear unexpected. Actually,
as we shall see, the main reason for which the upper bound is given by the
number of colors is that a.a.s. every CA-component is either contained in a
single edge, or all paths connecting two of its vertices and avoiding some
color are contained in a single cycle.
Finally, we provide an explicit expression of $\beta_{k}$ in terms of
$\lambda_{1},\ldots,\lambda_{k}$ in Remark 3.7. While it is possible to do the
same for the other values of $\beta_{m}$ for $m<k$, the formula and the
computation tend to get more and more tedious as $m$ decreases.
We now comment on the proofs themselves in more detail.
#### Outline of the proofs.
As already mentioned, the proofs of Part (i) of Theorem 1.1 and Proposition
1.2 are based on the well-known local convergence of Erdős-Rényi random graphs
to Bienaymé-Galton-Watson trees, or BGW trees for short, with Poisson
offspring distribution. In our setting of edge-colored ER random graphs, the
local limit can be seen as a BGW tree with $\text{Po}(\Lambda)$ offspring
distribution where additionally each edge is colored with color $i\in[k]$ with
probability $\lambda_{i}/\Lambda$, independently for different edges. Then,
the constant $a_{1}$ is just the probability that for every $i\in[k]$, the
connected component of the root is infinite when we erase the edges colored
with color $i$. The same approach allows us to prove Proposition 1.2, with
$(\nu_{\ell})_{\ell\geq 1}$ being the probability distribution of the size of
the CA-component of the root in the aforementioned BGW tree. However, here the
notion of CA-component must be suitably adjusted, as already noticed in [10]:
for each $i\in[k]$, two vertices are declared to be connected when erasing
color $i$ if either they are indeed connected by a path in the BGW tree that
avoids color $i$, or if they both connect to infinity by paths avoiding color
$i$. It is then not difficult to see that, for this notion of CA-connectivity,
the constant $a_{1}$ is also the probability that the CA-component of the root
is infinite.
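The local-limit description above can be explored numerically. The following Python sketch (an illustration of ours, not part of the paper's argument) samples the colored BGW tree with $\mathrm{Po}(\Lambda)$ offspring and i.i.d. edge colors, and uses survival of the color-$i$-avoiding component to a fixed depth as a crude finite proxy for an infinite component; all function names and parameters are our own choices.

```python
import math
import random

def estimate_a1(lams, depth=6, trials=200, seed=0):
    """Monte Carlo proxy for a_1: sample a BGW tree with Po(Lambda) offspring,
    color each edge i with probability lams[i] / Lambda, and declare the root's
    color-i-avoiding component "infinite" if it reaches the given depth.
    Truncating at a finite depth overestimates survival, so this is only rough."""
    rng = random.Random(seed)
    Lam = sum(lams)
    k = len(lams)

    def poisson(lam):
        # Knuth's inversion algorithm; adequate for small lam.
        threshold, p, count = math.exp(-lam), 1.0, 0
        while True:
            p *= rng.random()
            if p <= threshold:
                return count
            count += 1

    def survives_all_colors():
        # For each vertex we track, per color i, whether its root-path avoids
        # color i.  Vertices dead for every color cannot help and are pruned.
        frontier = [tuple([True] * k)]  # the (empty) root-path avoids all colors
        for _ in range(depth):
            nxt = []
            for alive in frontier:
                for _ in range(poisson(Lam)):
                    c = rng.choices(range(k), weights=lams)[0]
                    child = tuple(alive[i] and i != c for i in range(k))
                    if any(child):
                        nxt.append(child)
            frontier = nxt
            if not frontier:
                return False
        # "infinite" proxy: for every color i, some depth-`depth` vertex is
        # still connected to the root by a path avoiding color i
        return all(any(v[i] for v in frontier) for i in range(k))

    return sum(survives_all_colors() for _ in range(trials)) / trials
```

For strongly subcritical rates the estimate is close to zero, as expected from the discussion above.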
On the other hand, the proofs of (ii), Proposition 1.3 and (iii) have a
different flavor. Firstly, we outline the proof strategy for (ii). To begin
with, we make use of the following fact, which might be of independent
interest. Consider a subcritical Erdős-Rényi random graph
$G(n,\tfrac{\lambda}{n})$, and independently color each of its vertices
black with probability $q\in(0,1]$. Then, we show that the maximal number
of black vertices lying in a common connected component of $G(n,\tfrac{\lambda}{n})$ divided
by $\log n$ converges in probability towards a positive constant. This result
extends the concentration for the size of the largest component in a
subcritical ER random graph, which corresponds to $q=1$. The proof is obtained
by solving an energy-entropy optimization problem, where the energy
corresponds to the cost of having a large number of black vertices in a given
connected component of $G(n,\tfrac{\lambda}{n})$, while the entropy factor
comes from the fact that as $b$ decreases, the number of connected components
of size $b\log n$ in $G(n,\tfrac{\lambda}{n})$ increases. The link with (ii)
arises from the nontrivial observation that when $\lambda_{k-1}^{*}<1$,
the largest CA-component necessarily comes from intersecting a component in
$G_{k}$ (which is subcritical) with the largest component of $G^{k}$ (which is
supercritical), the two being independent, and the fact that the vertex set of
the giant component in the latter can be approximated by a binomial random
subset of vertices. A similar argument also leads to a proof of Proposition
1.3. We refer to Sections 2.3, 2.4, and 3.2 for more details.
Concerning (iii), the crux of the proof is to show that when
$\lambda_{k}^{*}<1$, a.a.s. every CA-component of size at least two is
contained either in a cycle of $G$ or in a single edge (the latter happening
only if the CA-component has exactly two vertices which are linked by an edge
in at least two different colors), see Lemma 3.4. The proof of this result
relies on a combination of some counting arguments (e.g. showing that a.a.s.
connected subgraphs of $G(n,\lambda/n)$ of size at most some constant times
$\log n$ contain at most one cycle, see Lemma 2.5), and a more probabilistic
lemma showing that a.a.s. all CA-components have bounded size (see in
particular Lemma 3.2). Once this is established, (iii) follows from standard
results on the asymptotic numbers of short cycles in an Erdős-Rényi random
graph.
###### Remark 1.4.
Note that the notion of CA-component of a vertex $u$ is usually defined as the
set of vertices connected to $u$ when deleting edges of color $i$ for any
$i\in[k]$ (that is, unlike in our setting, edges in color $i$ cannot be used
even if they have more than one color). While all our results would also hold
with this definition, ours is slightly more convenient to deal with,
especially in the intermediate regime, since this way $G_{i}$ and $G^{i}$ are
independent.
#### Further notation and terminology.
In general, we omit the dependence on $n$, $\Lambda$ and
$(\lambda_{i})_{i\in[k]}$ for convenience of notation.
For $I\subseteq[k]$ and a vertex $u\in[n]$, we denote by $\mathcal{C}_{I}(u)$
and $\mathcal{C}^{I}(u)$ the connected components of $u$ in $G_{I}$ and
$G^{I}$, respectively. An edge is said to be _repeated_ if it participates in
at least two of $G_{1},\ldots,G_{k}$.
In this paper, we often identify graphs with their vertex sets. For instance,
given a graph $H$, we call the _size_ of $H$ its number of vertices, which we
denote by $|H|$. For a set $S$ of vertices of $H$, we denote by $H[S]$ the
subgraph of $H$ induced by $S$ (that is, the graph with vertex set $S$ and
edge set given by the edges of $H$ with both vertices in $S$). Moreover, for
two vertices $u,v$ in $H$, we denote by $\\{u\xleftrightarrow{H\,}v\\}$ the
event that $u$ and $v$ are connected in $H$.
A sequence of events $(\mathcal{E}_{n})_{n\geq 1}$ is said to hold a.a.s. if
$\mathbb{P}(\mathcal{E}_{n})\to 1$ as $n\to\infty$. Given two positive real
sequences $(f_{n})_{n\geq 1}$ and $(g_{n})_{n\geq 1}$, we write
$f_{n}=o(g_{n})$ if $f_{n}/g_{n}\to 0$ when $n\to\infty$, and
$f_{n}=\mathcal{O}(g_{n})$ if there exists a constant $C>0$ such that
$f_{n}\leq Cg_{n}$ for all $n\geq 1$. Furthermore, we write $\xrightarrow{d}$
to denote convergence in distribution of a sequence of random variables, and
$\xrightarrow{\mathbb{P}}$ for convergence in probability.
Finally, we denote by $\mathrm{Po}(\lambda)$ the Poisson distribution with
parameter $\lambda$, and by $\text{Bin}(n,q)$ the Binomial distribution with
parameters $n$ and $q$. For a family of distributions $(\mu_{i})_{i\in I}$, we
denote by $\bigotimes_{i\in I}\mu_{i}$ the distribution of a vector
$(X_{i})_{i\in I}$ of independent random variables where $X_{i}\sim\mu_{i}$
for every $i\in I$.
#### Plan of the paper.
The rest of the paper is organized as follows. In the next section, we recall
some known facts about ER random graphs, both in the subcritical and
supercritical regimes, and show the result on the size of the largest black
connected set in a (vertex-)colored subcritical ER random graph which was
mentioned above. Then, in Section 3 we give the proof of our main results,
Theorem 1.1 and Propositions 1.2 and 1.3, and finally discuss some open
questions in Section 4.
## 2 Preliminaries on Erdős-Rényi random graphs
In this section, we gather some results on the random graph
$G(n,\tfrac{\lambda}{n})$. Apart from the results in Section 2.4, most of the
material presented here is well-known, and we sometimes include short proofs
only for the reader’s convenience.
We let $\mathcal{C}(u)$ denote the connected component of a vertex $u\in[n]$.
Also, for $\lambda>0$, set
$I_{\lambda}=\lambda-1-\log\lambda.$ (4)
It is easy to check that this is a positive real number for all $\lambda>0$
different from 1. Also, for every integer $s\geq 0$, we define
$Z_{s}=\sum_{u\in[n]}\mathds{1}_{\\{|\mathcal{C}(u)|\,\geq\,s\\}}.$
### 2.1 Subcritical regime: cluster size and two-point connectivity
Fix $\lambda\in(0,1)$. Then, it is well-known that
$\frac{\max_{u\in[n]}|\mathcal{C}(u)|}{\log
n}\xrightarrow[n\to\infty]{\mathbb{P}}\frac{1}{I_{\lambda}},$ (5)
see e.g. Theorems 4.4 and 4.5 in [11]. It will also be useful to have a bound
on the upper tail of the typical cluster size. The following one will be
sufficient for our purposes.
###### Lemma 2.1 (see display (4.3.11) in [11]).
Fix $\lambda\in(0,1)$. Then, for every $t\geq 1$,
$\mathbb{P}(|\mathcal{C}(u)|\geq t)\leq\mathrm{e}^{-I_{\lambda}\cdot\,t}.$
A reverse inequality holds as well when $t$ does not grow too fast with $n$.
In particular, we shall need the following result (see e.g. displays (4.3.34)
and (4.3.37) in [11]).
###### Lemma 2.2.
Fix $\lambda\in(0,1)$ and $a\in(0,1/I_{\lambda}]$. Then,
$\mathbb{P}(|\mathcal{C}(u)|\geq a\log n)\geq
n^{-(1+o(1))I_{\lambda}\cdot\,a}.$
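As an aside, the law of large numbers (5) and the rate $I_{\lambda}$ are easy to probe by simulation. The following self-contained Python sketch (ours, not part of the paper) samples $G(n,\lambda/n)$ with a union-find and extracts the largest component size.

```python
import math
import random
from collections import Counter

def I_rate(lam):
    """The rate I_lambda = lambda - 1 - log(lambda); positive for lambda != 1."""
    return lam - 1 - math.log(lam)

def max_component_size(n, lam, seed=0):
    """Sample G(n, lam/n) and return its largest component size via union-find."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    p = lam / n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
    return max(Counter(find(x) for x in range(n)).values())
```

For $\lambda=0.5$ one has $I_{\lambda}\approx 0.193$, so by (5) the maximum should be of order $\log n/I_{\lambda}$, i.e. a few dozen vertices for $n$ in the low thousands.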
Moreover, Lemma 2.1 has the following important consequence.
###### Corollary 2.3.
Fix $\lambda\in(0,1)$. Then, for every pair of distinct vertices $u,v$ in
$G(n,\frac{\lambda}{n})$,
$\mathbb{P}(v\in\mathcal{C}(u))\leq\frac{1}{n}\cdot\frac{\mathrm{e}^{-I_{\lambda}}}{1-\mathrm{e}^{-I_{\lambda}}}.$
###### Proof.
It suffices to observe that for every $t\geq 1$, conditionally on the event
$\\{|\mathcal{C}(u)|=t\\}$, the set of vertices different from $u$ and
contained in $\mathcal{C}(u)$ is uniformly distributed among all possible
subsets of $[n]\setminus\\{u\\}$ of size $t-1$. In particular, for every
$v\neq u$,
$\mathbb{P}(v\in\mathcal{C}(u)\mid|\mathcal{C}(u)|=t)=\frac{t-1}{n-1}\leq\frac{t}{n}.$
(6)
By summing over all positive integers $t\in[n]$ and using Lemma 2.1, we get
$\mathbb{P}(v\in\mathcal{C}(u))\leq\sum_{t=1}^{n}\frac{t}{n}\cdot\mathbb{P}(|\mathcal{C}(u)|=t)=\frac{1}{n}\sum_{t=1}^{n}\mathbb{P}(|\mathcal{C}(u)|\geq
t)\leq\frac{1}{n}\cdot\frac{\mathrm{e}^{-I_{\lambda}}}{1-\mathrm{e}^{-I_{\lambda}}},$
as desired. ∎
The next result provides concentration of the variables $Z_{s}$ when $s$ is of
order $\log n$. The equality for the expectation is a direct consequence of
Lemmas 2.1 and 2.2, while the two inequalities for the variance follow from
Proposition 4.7 in [11] and Lemma 2.1, respectively.
###### Lemma 2.4.
Let $\lambda\in(0,1)$ and $a\in(0,1/I_{\lambda}]$. Then,
$\mathbb{E}[Z_{a\log
n}]=n^{1-(1+o(1))I_{\lambda}\cdot\,a}\quad\text{and}\quad\mathrm{Var}[Z_{a\log
n}]\leq
n\cdot\mathbb{E}\big{[}|\mathcal{C}(1)|\mathds{1}_{\\{|\mathcal{C}(1)|\,\geq\,a\log
n\\}}\big{]}\leq n^{1-(1+o(1))I_{\lambda}\cdot\,a}.$
### 2.2 On the number of cycles
We start with a result showing that a.a.s. all connected subgraphs of
$G(n,\tfrac{\lambda}{n})$ of size at most some constant (depending on
$\lambda$) times $\log n$ contain at most one cycle, and when $\lambda<1$, all
those with size at least $\varepsilon\log n$ are trees for any fixed
$\varepsilon>0$.
###### Lemma 2.5.
Fix $\lambda>0$.
1. 1.
There is a positive constant $c_{1}=c_{1}(\lambda)$ such that a.a.s. the
following holds: for every set $S\subseteq[n]$ with $|S|\leq c_{1}\log n$,
whenever the subgraph of $G(n,\tfrac{\lambda}{n})$ induced by $S$ is
connected, it contains at most one cycle.
2. 2.
If $\lambda<1$, then for every $\varepsilon>0$, a.a.s. all components of size
larger than $\varepsilon\log n$ are trees.
###### Remark 2.6.
In fact, concerning the second statement of this lemma, more is true: as shown
in the proof below, the expected number of vertices whose connected component
contains at least one cycle is bounded uniformly in $n$.
###### Proof of Lemma 2.5.
Concerning the first part of the lemma, note that any connected graph with at
least two cycles contains as a subgraph a spanning tree together with two
additional edges. Furthermore, by Cayley’s formula the number of spanning
trees of the complete graph with size $m$ is equal to $m^{m-2}$. Therefore,
for every $c>0$, by using that
$\tbinom{n}{m}\leq\left(\tfrac{n\mathrm{e}}{m}\right)^{m}$, we deduce that the
expected number of subgraphs of $G(n,\tfrac{\lambda}{n})$ which are connected,
contain at least two cycles and at most $c\log n$ vertices, is bounded from
above by,
$\sum_{m=1}^{\lfloor c\log
n\rfloor}m^{m-2}\cdot\binom{m}{2}^{2}\cdot\binom{n}{m}\cdot\left(\frac{\lambda}{n}\right)^{m+1}\leq\frac{1}{n}\sum_{m=1}^{\lfloor
c\log n\rfloor}m^{2}(\mathrm{e}\lambda)^{m+1},$
which is $o(1)$ if one chooses $c<\frac{1}{1+\max(0,\log\lambda)}$. The proof
of the first part is completed by an application of Markov’s inequality.
For the second part, we compute the expected number of vertices in components
of size at most $2I_{\lambda}^{-1}\log n$ containing a cycle. Taking also into
account the fact that no vertex in a component of size $m$ is connected to any
of the $n-m$ vertices outside the component, the previous computation implies
that the above expectation is at most
$\sum_{m=1}^{2I_{\lambda}^{-1}\log n}m\cdot
m^{m-2}\cdot\binom{m}{2}\cdot\binom{n}{m}\cdot\left(\frac{\lambda}{n}\right)^{m}\left(1-\frac{\lambda}{n}\right)^{m(n-m)}\leq(1+o(1))\sum_{m=1}^{2I_{\lambda}^{-1}\log
n}m\cdot\mathrm{e}^{-I_{\lambda}m}=\mathcal{O}(1).$
Thus, by Markov’s inequality there are a.a.s. less than $\varepsilon\log n$
vertices in components of size at most $2I_{\lambda}^{-1}\log n$ containing a
cycle. However, by Lemma 2.1 a.a.s. all components have size at most
$2I_{\lambda}^{-1}\log n$, which completes the proof. ∎
###### Remark 2.7.
The following modification of Lemma 2.5 holds with almost the same proof.
1. 1.
There exists $c_{1}=c_{1}(\Lambda)$ such that a.a.s. for every set
$S\subseteq[n]$ with $|S|\leq c_{1}\log n$, whenever the subgraph of $G$
induced by $S$ is connected, it either contains at most one cycle and no
repeated edges or at most one repeated edge and no cycles.
2. 2.
If $\lambda_{i}^{*}<1$ for some $i$, then for every $\varepsilon>0$, a.a.s.
all connected components of $G^{i}$ of size at least $\varepsilon\log n$ are
trees with no repeated edges.
The next lemma is a well-known result concerning the number of cycles of given
size that will be needed for the proof of Theorem 1.1 (iii). We refer e.g. to
Corollary 4.9 from [1] for a proof.
###### Lemma 2.8 ([1], Corollary 4.9).
Fix $\lambda>0$. For $m\geq 3$, denote by $C_{m}$ the number of cycles of
length $m$ in the graph $G(n,p)$ with $p=(1+o(1))\tfrac{\lambda}{n}$. Then,
for any fixed $\ell\geq 3$,
$(C_{3},\ldots,C_{\ell})\xrightarrow[n\to\infty]{d}\bigotimes_{m=3}^{\ell}\mathrm{Po}(\gamma_{m}),$
where for all $m\in\\{3,\dots,\ell\\}$, $\gamma_{m}=\tfrac{\lambda^{m}}{2m}$.
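For a quick sanity check of Lemma 2.8 in the triangle case $m=3$, where $\gamma_{3}=\lambda^{3}/6$ (so $\gamma_{3}\approx 1.33$ for $\lambda=2$), one can average triangle counts over independent samples. The following Python sketch (ours, purely illustrative) counts triangles by summing common neighbours over edges.

```python
import random
from itertools import combinations

def triangle_count(n, lam, rng):
    """Sample G(n, lam/n) and count its triangles: summing, over each edge
    {u,v}, the number of common neighbours counts every triangle 3 times."""
    p = lam / n
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    edges = [(u, v) for u in range(n) for v in adj[u] if u < v]
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

# Average over independent samples; for n = 100 and lam = 2 the mean should
# sit near lambda^3 / 6 = 4/3, up to finite-n corrections and sampling noise.
rng = random.Random(7)
trials = 60
mean_c3 = sum(triangle_count(100, 2.0, rng) for _ in range(trials)) / trials
```

The empirical mean fluctuates around $4/3$ at this sample size, consistent with the Poisson limit.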
Lemma 2.8 has the following direct consequence. Recall that an edge is said to
be repeated in $G$ if it is part of at least two of the graphs
$G_{1},\dots,G_{k}$.
###### Corollary 2.9.
Denote by $C_{2}$ the number of repeated edges in $G$. Then, with the same
notation as in Lemma 2.8, for every $\ell\geq 2$,
$(C_{2},\ldots,C_{\ell})\xrightarrow[n\to\infty]{d}\bigotimes_{m=2}^{\ell}\mathrm{Po}(\gamma_{m}),$
where $\gamma_{2}=\tfrac{1}{2}\sum_{i,j\in[k],i<j}\lambda_{i}\lambda_{j}$ and
for all $m\in\\{3,\dots,\ell\\}$, $\gamma_{m}=\tfrac{\Lambda^{m}}{2m}$.
###### Proof.
Using that $G$ is distributed as an Erdős-Rényi random graph with parameters
$n$ and
$p=1-\prod_{i=1}^{k}\left(1-\frac{\lambda_{i}}{n}\right)=(1+o(1))\cdot\frac{\Lambda}{n},$
the joint convergence of $(C_{3},\ldots,C_{\ell})$ is given by Lemma 2.8.
On the other hand, by definition
$C_{2}\sim\text{Bin}(\tfrac{n(n-1)}{2},\tfrac{2(1+o(1))\gamma_{2}}{n^{2}})$,
and thus $C_{2}$ converges in distribution to $\text{Po}(\gamma_{2})$. It only
remains to justify the asymptotic independence between $C_{2}$ and the other
variables. The argument is the same as the one showing asymptotic independence
of $C_{3},\dots,C_{\ell}$, see Theorem 4.8 and Corollary 4.9 in [1]. Briefly,
one can first notice by a simple first moment argument that a.a.s. no vertex
participates simultaneously in a repeated edge and in a cycle of length at
most $\ell$. Thus, the variables $(C_{3},\dots,C_{\ell})$ a.a.s. coincide with
the cycle counts in the graph obtained by deleting all vertices in repeated
edges, which, conditionally on $C_{2}$, is an ER random graph with at least
$n-2C_{2}$ vertices. Using that $\mathbb{E}[C_{2}]=\mathcal{O}(1)$, and more
precisely that a.a.s. $n-2C_{2}=n-o(n)$, implies the corollary. ∎
### 2.3 Supercritical regime: stochastic domination of the giant component
Recall that when $\lambda>1$, the graph $G(n,\tfrac{\lambda}{n})$ has a.a.s. a
unique connected component of linear size (called the _giant_ component). More
precisely, it is well-known that
$\frac{\max_{u\in[n]}|\mathcal{C}(u)|}{n}\xrightarrow[n\to\infty]{\mathbb{P}}\mu_{\lambda}$
(7)
where $\mu_{\lambda}$ is the survival probability of a Bienaymé-Galton-Watson
tree with offspring distribution $\mathrm{Po}(\lambda)$ characterized as the
unique positive solution of the equation $1=\mathrm{e}^{-\lambda t}+t$, see display
(3.6.2) and Theorem 4.8 in [11]. Recall also that conditionally on its size,
the set of vertices in the giant component is uniformly distributed among the
family of subsets of $[n]$ of that size. As a consequence, one has the
following stochastic comparison with binomial random subsets of vertices.
###### Lemma 2.10.
Fix $\lambda>1$ and $\varepsilon>0$. Let $\mathcal{C}_{\max}$ be the a.a.s.
unique largest connected component of $G(n,\tfrac{\lambda}{n})$. Let also
$(X_{v})_{v\in[n]}$ and $(Y_{v})_{v\in[n]}$ be two sequences of i.i.d.
Bernoulli random variables with respective parameters
$\max(\mu_{\lambda}-\varepsilon,0)$ and $\min(\mu_{\lambda}+\varepsilon,1)$.
Then, there is a coupling of these two sequences with
$G(n,\tfrac{\lambda}{n})$ such that a.a.s. one has
$\\{v:X_{v}=1\\}\subseteq\mathcal{C}_{\max}\subseteq\\{v:Y_{v}=1\\}.$ (8)
###### Proof.
Note that conditionally on their respective sizes, the three sets appearing in
(8) are sampled uniformly at random among all subsets of $[n]$ of that size.
The lemma then follows by the weak law of large numbers and (7), which
together imply that a.a.s.
$\sum_{v\in[n]}X_{v}\leq|\mathcal{C}_{\max}|\leq\sum_{v\in[n]}Y_{v}.$
∎
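Numerically, $\mu_{\lambda}$ is easy to obtain from the fixed-point form $t=1-\mathrm{e}^{-\lambda t}$ of the above equation; a minimal sketch (ours, for illustration only):

```python
import math

def mu(lam, iters=200):
    """Survival probability of a Bienaymé-Galton-Watson tree with Po(lam)
    offspring, for lam > 1: the unique positive solution of
    m = 1 - exp(-lam * m), computed by fixed-point iteration from m = 1.
    The iteration decreases monotonically to the root."""
    m = 1.0
    for _ in range(iters):
        m = 1.0 - math.exp(-lam * m)
    return m
```

For instance, mu(2.0) is roughly 0.797, the familiar giant-component fraction of $G(n,2/n)$.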
### 2.4 On the intersection of a giant with independent subcritical clusters
Fix $q\in(0,1)$ and $\lambda\in(0,1)$. For every $x\in[q,1)$, define
$J_{q}(x)=x\log\frac{x}{q}+(1-x)\log\frac{1-x}{1-q},\quad\text{and
set}\quad\rho(q,\lambda)=\inf_{x\in[q,1)}\frac{I_{\lambda}+J_{q}(x)}{x}.$
Consider a sequence $(X_{v})_{v\in[n]}$ of independent Bernoulli random
variables with parameter $q$, and independently a graph
$G(n,\tfrac{\lambda}{n})$. For $t\geq 0$, define
$\widetilde{Z}_{t}=\sum_{u\in[n]}\mathds{1}_{\\{\sum_{v\in\mathcal{C}(u)}X_{v}\,\geq\,t\\}}.$
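The constant $\rho(q,\lambda)$ is a one-dimensional variational problem, and can be approximated by a direct grid search. The following sketch (ours, illustrative) also makes two basic properties visible: $\rho(q,\lambda)\geq I_{\lambda}$ (since $J_{q}\geq 0$ and $x<1$), and $\rho(q,\lambda)\to I_{\lambda}$ as $q\uparrow 1$, consistently with the case $q=1$ of (5).

```python
import math

def I_rate(lam):
    """I_lambda = lambda - 1 - log(lambda)."""
    return lam - 1 - math.log(lam)

def J(q, x):
    """Relative entropy between Bernoulli(x) and Bernoulli(q), x in [q, 1]."""
    first = x * math.log(x / q)
    second = 0.0 if x == 1.0 else (1.0 - x) * math.log((1.0 - x) / (1.0 - q))
    return first + second

def rho(q, lam, grid=20000):
    """rho(q, lam) = inf over x in [q, 1) of (I_lam + J_q(x)) / x,
    approximated by a grid search over [q, 1)."""
    return min((I_rate(lam) + J(q, x)) / x
               for x in (q + (1.0 - q) * i / grid for i in range(grid)))
```

By Corollary 2.13 below, $1/\rho(q,\lambda)$ is then the limiting constant for the maximal number of black vertices in a cluster, divided by $\log n$.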
The next lemma is similar in essence to Lemma 2.4.
###### Lemma 2.11.
Fix $a\in(0,\tfrac{1}{\rho(q,\lambda)}]$. Then,
$\mathbb{E}[\widetilde{Z}_{a\log
n}]=n^{1-(1+o(1))\rho(q,\lambda)\,\cdot\,a}\quad\text{and}\quad\mathrm{Var}[\widetilde{Z}_{a\log
n}]\leq n^{1-(1+o(1))\rho(q,\lambda)\,\cdot\,a}.$
###### Proof.
We begin by proving the estimate on the mean. To start with, we recall two
large deviation estimates for Binomial random variables. On the one hand, for
every $x\in[q,1)$ and every $N\geq 1$ (see e.g. Corollary 2.20 in [11]),
$\mathbb{P}(\text{Bin}(N,q)\geq xN)\leq\exp\big{(}-N\cdot J_{q}(x)\big{)},$
(9)
and on the other hand, for every fixed $x\in[q,1)$ (see e.g. Theorem 2.2.3 and
Exercise 2.2.23 in [3]),
$\mathbb{P}(\text{Bin}(N,q)\geq xN)\geq\exp\big{(}-(1+o_{N}(1))\cdot N\cdot
J_{q}(x)\big{)},$ (10)
where the $o_{N}(1)$ term goes to $0$ as $N\to\infty$. Since conditionally on
its size the cluster $\mathcal{C}(1)$ is uniformly distributed among the
subsets of $[n]$ of that size containing the vertex 1, we deduce using (9) and
Lemma 2.1 that
$\begin{split}\mathbb{P}\Big{(}\sum_{v\in\mathcal{C}(1)}X_{v}\geq a\log
n\Big{)}&\leq\sum_{s=a\log n}^{(a/q)\log
n}\mathbb{P}(|\mathcal{C}(1)|=s)\cdot\mathbb{P}(\text{Bin}(s,q)\geq a\log
n)+\mathbb{P}(|\mathcal{C}(1)|\geq(a/q)\cdot\log n)\\\ &\leq\sum_{s=a\log
n}^{(a/q)\log n}\exp\left(-\left(I_{\lambda}+J_{q}\left(\frac{a\log
n}{s}\right)\right)\cdot s\right)+n^{-I_{\lambda}\cdot\,a/q}\\\
&\leq\mathcal{O}(\log n)\cdot
n^{-\inf_{b\in(a,a/q]}b(I_{\lambda}+J_{q}(a/b))},\end{split}$ (11)
and observing that
$\inf_{b\in(a,a/q]}b(I_{\lambda}+J_{q}(a/b))=a\cdot\rho(q,\lambda),$ (12)
concludes the proof of the upper bound.
Now, we concentrate on the lower bound. To start with, we extend $J_{q}$ to a
continuous function on the interval $[q,1]$ by defining $J_{q}(1)=\log(1/q)$,
and let $b_{*}$ be the smallest real number realizing the infimum of the
function $b\mapsto b(I_{\lambda}+J_{q}(a/b))$ over the interval $[a,a/q]$.
Recall that $\log$ is a concave function, so by Jensen’s inequality the
function $J_{q}$ is non-negative. Then, together with the fact that $a\leq
1/\rho(q,\lambda)$ by hypothesis, (12) shows that
$b_{*}=\frac{a\cdot\rho(q,\lambda)}{I_{\lambda}+J_{q}(a/b_{*})}\leq\frac{a\cdot\rho(q,\lambda)}{I_{\lambda}}\leq\frac{1}{I_{\lambda}}.$
Thus, by using Lemma 2.2 and (10), we get
$\displaystyle\mathbb{P}\Big{(}\sum_{v\in\mathcal{C}(1)}X_{v}\geq a\log
n\Big{)}$ $\displaystyle\geq\;\mathbb{P}(|\mathcal{C}(1)|\geq b_{*}\log
n)\cdot\mathbb{P}(\text{Bin}(b_{*}\log n,q)\geq a\log n)$
$\displaystyle\geq\;n^{-(1+o(1))b_{*}\cdot I_{\lambda}}\cdot
n^{-(1+o(1))b_{*}\cdot
J_{q}(a/b_{*})}=n^{-(1+o(1))a\,\cdot\,\rho(q,\lambda)},$
which concludes the proof of the lower bound.
Finally, the proof of the upper bound on the variance is mutatis mutandis the
same as the proof of Lemma 2.4, in particular, the same argument leads to
$\text{Var}[\widetilde{Z}_{a\log n}]\leq
n\cdot\mathbb{E}\left[\left(\sum_{v\in\mathcal{C}(1)}X_{v}\right)\cdot\mathds{1}_{\\{\sum_{v\in\mathcal{C}(1)}X_{v}\,\geq\,a\log
n\\}}\right],$
which combined with (11) yields the desired upper bound. ∎
###### Remark 2.12.
A close look at the previous proof shows that the upper bound on the mean is
in fact valid for all $a>0$.
###### Corollary 2.13.
Let $a=a(q,\lambda)=\frac{1}{\rho(q,\lambda)}>0$. Then,
$\frac{\max_{u\in[n]}\sum_{v\in\mathcal{C}(u)}X_{v}}{\log
n}\xrightarrow[n\to\infty]{\mathbb{P}}a.$
###### Proof.
If $h>a(q,\lambda)$, then by Markov’s inequality and Lemma 2.11 (see also
Remark 2.12) we get
$\mathbb{P}\left(\max_{u\in[n]}\sum_{v\in\mathcal{C}(u)}X_{v}\geq h\log
n\right)=\mathbb{P}(\widetilde{Z}_{h\log n}\geq
1)\leq\mathbb{E}[\widetilde{Z}_{h\log n}]=o(1).$
Conversely, if $h<a(q,\lambda)$, then again Lemma 2.11 together with the
Cauchy-Schwarz inequality gives
$\mathbb{P}\left(\max_{u\in[n]}\sum_{v\in\mathcal{C}(u)}X_{v}\geq h\log
n\right)=\mathbb{P}(\widetilde{Z}_{h\log n}\geq
1)\geq\frac{\mathbb{E}[\widetilde{Z}_{h\log
n}]^{2}}{\mathbb{E}[\widetilde{Z}_{h\log
n}]^{2}+\text{Var}[\widetilde{Z}_{h\log n}]}=1-o(1).$
∎
## 3 Proofs of the main results
This section is devoted to the proofs of Theorem 1.1 and Proposition 1.2. We
first prove Part (i) of the theorem together with the proposition in Section
3.1, then Part (ii) of the theorem in Section 3.2, and finally Part (iii) in
Section 3.3.
### 3.1 Proofs of Theorem 1.1 (i) and Proposition 1.2
The proofs of these two results are based on the well-known local convergence
of ER random graphs to BGW trees with Poisson offspring distribution. Recall
that in the standard setting of uncolored graphs, the local _weak_ convergence
states the following: denoting by $\mathcal{V}_{L}(u)$ the $L$-neighborhood of
a vertex $u$ in $G$ (as defined in (1)) or in the BGW tree with
$\text{Po}(\Lambda)$ offspring distribution, for every bounded function
$(u,H)\mapsto\varphi(u,H)$ defined on pairs $(u,H)$ where $H$ is a finite
graph and $u$ a vertex of $H$, one has
$\frac{1}{n}\sum_{u\in[n]}\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))]\xrightarrow[n\to\infty]{}\mathbb{E}[\varphi(\varnothing,\mathcal{V}_{L}(\varnothing))],$
see e.g. Theorem 6 in [2]. It is straightforward to see that the same
convergence holds in our setting of colored graphs. The difference is that
now, each edge is endowed with a label encoding its set of colors. As a
consequence, the limiting graph is a BGW tree with $\text{Po}(\Lambda)$
offspring distribution, denoted hereafter by $\textbf{GW}(\Lambda)$, where
each edge is colored in a single color and independently of other edges, and
where color $i$ is attributed with probability $\lambda_{i}/\Lambda$ (see the
proof of Proposition 3.1 below).
Moreover, using that finite neighborhoods of two given points are mostly
independent (in particular, they are unlikely to intersect), it is possible to
strengthen the previous convergence in expectation into a convergence in
probability, see e.g. [12, Theorem 2.19] in the case of uncolored graphs. In
our case, we obtain the following result, which is our main tool for proving
Theorem 1.1 (i) and Proposition 1.2. Although the proof is standard, we
briefly sketch the argument for the reader’s convenience. A similar result is
derived in the proof of Proposition 4.1.7 in [10].
###### Proposition 3.1.
Let $\varphi$ be a bounded function on the set of finite rooted graphs whose
edges are endowed with a label encoding their sets of colors. Then, for any
$L\geq 1$,
$\frac{1}{n}\sum_{u\in[n]}\varphi(u,\mathcal{V}_{L}(u))\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{E}[\varphi(\varnothing,\mathcal{V}_{L}(\varnothing))],$
(13)
where $\mathcal{V}_{L}(\varnothing)$ is the $L$-neighborhood of the root
$\varnothing$ in the colored graph $\emph{{GW}}(\Lambda)$ defined above.
###### Proof.
As already mentioned, the convergence in expectation is a straightforward
consequence of the well-known local weak convergence of $G$. Note that
conditionally on being present in $G$, an edge has color $i$ with probability
$\frac{\lambda_{i}/n}{1-\prod_{j=1}^{k}(1-\lambda_{j}/n)}=(1+o(1))\frac{\lambda_{i}}{\Lambda}.$
In particular, since $(\lambda_{i}/\Lambda)_{i=1}^{k}$ sum to 1, a.s. every
edge in the limit has only one color.
To prove convergence in probability, we use Chebyshev’s inequality and bound
the variance of the random variable appearing on the left-hand side of (13).
Assume without loss of generality that $\varphi$ is non-negative and bounded
by one. Then, on the one hand, for every $u\in[n]$ and every fixed $L\geq 1$,
$\displaystyle\sum_{v\in[n]}\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))\varphi(v,\mathcal{V}_{L}(v))\cdot\mathds{1}_{\\{v\in\mathcal{V}_{2L}(u)\\}}]\leq\mathbb{E}[|\mathcal{V}_{2L}(u)|]\leq 1+\Lambda+\dots+\Lambda^{2L}=\mathcal{O}(1).$ (14)
On the other hand, conditionally on the event that
$v\notin\mathcal{V}_{2L}(u)$, or equivalently that
$\mathcal{V}_{L}(u)\cap\mathcal{V}_{L}(v)=\emptyset$, and
$|\mathcal{V}_{L}(u)|=m$ for some $m\geq 1$, $\mathcal{V}_{L}(v)$ is
distributed as the $L$-neighborhood of $v$ in a colored ER random graph with
$n-m$ vertices and parameter $1-\prod_{i=1}^{k}(1-\tfrac{\lambda_{i}}{n})$,
denoted hereafter by $G_{n-m}^{\prime}$. Thus, denoting also by
$\mathbb{E}_{n-m}$ the expectation with respect to the distribution of
$G_{n-m}^{\prime}$, one has
$\displaystyle\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))\varphi(v,\mathcal{V}_{L}(v))\cdot\mathds{1}_{\\{v\notin\mathcal{V}_{2L}(u)\\}}]$
$\displaystyle=$ $\displaystyle\sum_{m\geq
1}\mathbb{E}\left[\varphi(u,\mathcal{V}_{L}(u))\cdot\mathbb{E}[\varphi(v,\mathcal{V}_{L}(v))\mathds{1}_{\\{\mathcal{V}_{L}(u)\cap\mathcal{V}_{L}(v)=\emptyset\\}}\mid\mathcal{V}_{L}(u)]\cdot\mathds{1}_{\\{|\mathcal{V}_{L}(u)|=m\\}}\right]$
$\displaystyle\leq$ $\displaystyle\sum_{m\geq
1}\mathbb{E}\left[\varphi(u,\mathcal{V}_{L}(u))\cdot\mathds{1}_{\\{|\mathcal{V}_{L}(u)|=m\\}}\right]\cdot\mathbb{E}_{n-m}[\varphi(v,\mathcal{V}_{L}(v))].$
Now, the set $\mathcal{V}_{L}(v)$ on $G_{n-m}^{\prime}$ is the same as on $G$
unless there is at least one edge between one vertex of this set and one of
the $m$ additional vertices which are present in $G$ but not in
$G_{n-m}^{\prime}$. Conditionally on the size of $\mathcal{V}_{L}(v)$ in
$G_{n-m}^{\prime}$, this holds with probability bounded from above by
$m|\mathcal{V}_{L}(v)|\tfrac{\Lambda}{n}$. Therefore,
$\mathbb{E}_{n-m}[\varphi(v,\mathcal{V}_{L}(v))]\leq\mathbb{E}_{n}[\varphi(v,\mathcal{V}_{L}(v))]+2m\mathbb{E}_{n-m}[|\mathcal{V}_{L}(v)|]\cdot\tfrac{\Lambda}{n}=\mathbb{E}_{n}[\varphi(v,\mathcal{V}_{L}(v))]+\mathcal{O}\bigg{(}\frac{m}{n}\bigg{)}.$
As a consequence, using that
$\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))\cdot|\mathcal{V}_{L}(u)|]\leq\mathbb{E}[|\mathcal{V}_{L}(u)|]$,
which is bounded uniformly in $n$, we get
$\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))\varphi(v,\mathcal{V}_{L}(v))\cdot\mathds{1}_{\\{v\notin\mathcal{V}_{2L}(u)\\}}]-\mathbb{E}[\varphi(u,\mathcal{V}_{L}(u))]\mathbb{E}[\varphi(v,\mathcal{V}_{L}(v))]=\mathcal{O}\bigg{(}\frac{1}{n}\bigg{)}.$
Together with (14), and summing over all pairs of vertices $u,v\in[n]$, this
gives
$\text{Var}\left(\frac{1}{n}\sum_{u\in[n]}\varphi(u,\mathcal{V}_{L}(u))\right)=\mathcal{O}\bigg{(}\frac{1}{n}\bigg{)},$
proving the desired concentration result. This concludes the (sketch of) proof
of the proposition. ∎
We can now give the proof of our main results.
###### Proof of Theorem 1.1 (i).
Assume first that $\lambda_{1}^{*}\leq 1$. In this case, it is well-known that
the size of the largest connected component of $G^{1}$ divided by $n$
converges in probability to zero, and thus a fortiori the same must hold for
the largest CA-component.
Assume now that $\lambda_{1}^{*}>1$. In this case, a.a.s. each of
$G^{1},\ldots,G^{k}$ contains a unique giant component denoted by
$\mathcal{C}_{\max}^{1},\ldots,\mathcal{C}_{\max}^{k}$, respectively. When
this is not the case for some $i\in[k]$, we define $\mathcal{C}_{\max}^{i}$ to
be an arbitrarily chosen largest component in $G^{i}$. Let also $\mu_{i}$ be
the asymptotic proportion of vertices in $\mathcal{C}_{\max}^{i}$. Since
a.a.s. every non-giant connected component in $G^{1},\ldots,G^{k}$ has size
$\mathcal{O}(\log n)$ (which follows by combining Lemma 2.1 with Theorem 4.15
in [11]), it is sufficient to show that the size of
$\bigcap_{i=1}^{k}\mathcal{C}_{\max}^{i}$ divided by $n$ converges to a
positive constant in probability.
Firstly, note that by (7) one has
$\mu_{i}=\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty)$, where we keep the
notation $\mathcal{C}^{i}(u)$ for the connected component of a vertex $u$ in
$\textbf{GW}(\Lambda)$ after removal of all edges in color $i$. Thus, for
every $i\in[k]$,
$\frac{|\mathcal{C}^{i}_{\max}|}{n}=\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{u\in\mathcal{C}^{i}_{\max}\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty).$
On the other hand, for every $L\geq 1$, by Proposition 3.1 one has
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\mathcal{C}^{i}(u)|\,\geq\,L\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|\geq
L),$
since the event of having a connected component of size at least $L$ is a
measurable function of the $L$-neighborhood. Taking the difference between the
terms in the last two displays, we deduce that for every $i\in[k]$ and every
$L\geq 1$,
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\mathcal{C}^{i}(u)|\,\geq\,L\text{
and
}u\notin\mathcal{C}^{i}_{\max}\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(L\leq|\mathcal{C}^{i}(\varnothing)|<\infty).$
(15)
Since the probability on the right-hand side goes to $0$ as $L\to\infty$, for
every $\varepsilon>0$ one can find $L$ such that for all $i\in[k]$,
$\limsup_{n\to\infty}\
\mathbb{P}\Big{(}\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\mathcal{C}^{i}(u)|\,\geq\,L\text{
and }u\notin\mathcal{C}^{i}_{\max}\\}}\geq\frac{\varepsilon}{k}\Big{)}=0,$
which by summation over $i$ gives
$\limsup_{n\to\infty}\
\mathbb{P}\Big{(}\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{\exists
i\in[k]:\,|\mathcal{C}^{i}(u)|\,\geq\,L\text{ and
}u\notin\mathcal{C}^{i}_{\max}\\}}\geq\varepsilon\Big{)}=0.$ (16)
Moreover, using Proposition 3.1 again yields
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\mathcal{C}^{i}(u)|\,\geq\,L\text{
for all
}i\in[k]\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|\geq
L\text{ for all }i\in[k]).$
Then, letting $L\to\infty$ together with (16) implies that
$\frac{|\bigcap_{i=1}^{k}\mathcal{C}^{i}_{\max}|}{n}=\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{u\in\mathcal{C}^{i}_{\max}\text{
for all
}i\in[k]\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty\text{
for all }i\in[k]).$
To conclude the proof of Theorem 1.1 (i), we show that the above limit is
positive. For every $L\geq 1$ and a fixed vertex $u\in[n]$, the events
$\\{|\mathcal{C}^{i}(u)|\geq L\\}$ are increasing, so by the FKG inequality
(see Theorem 3.1 in [4])
$\mathbb{P}(|\mathcal{C}^{i}(u)|\geq L\text{ for all
}i\in[k])\geq\prod_{i=1}^{k}\mathbb{P}(|\mathcal{C}^{i}(u)|\geq L).$
Then, letting $n\to\infty$ and using the local convergence of $G$ towards
$\mathbf{GW}(\Lambda)$ implies that
$\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|\geq L\text{ for all
}i\in[k])\geq\prod_{i=1}^{k}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|\geq L).$
Finally, letting $L\to\infty$ shows that
$\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty\text{ for all
}i\in[k])\geq\prod_{i=1}^{k}\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty)>0,$
as desired. ∎
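Although not part of the argument, the FKG-based positivity just proved can be illustrated by a short Monte Carlo sketch. In the code below (all helper names such as `giant_intersection_fraction` are our own, not from the paper), each color class is sampled as an independent Erdős-Rényi graph, $G^{i}$ is taken to be the union of the other colors' edges, and we report the fraction of vertices lying in the intersection of the largest components of all the $G^{i}$; when every $\lambda_{i}^{*}>1$ this fraction should stabilize around a positive constant, in line with Theorem 1.1 (i).

```python
import random
from collections import Counter

class DSU:
    """Union-find for extracting connected components."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def er_edges(n, p, rng):
    """Edge list of a sample of G(n, p); quadratic scan, fine for small n."""
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def largest_component(n, edges):
    """Vertex set of the largest connected component."""
    dsu = DSU(n)
    for u, v in edges:
        dsu.union(u, v)
    counts = Counter(dsu.find(u) for u in range(n))
    root = counts.most_common(1)[0][0]
    return {u for u in range(n) if dsu.find(u) == root}

def giant_intersection_fraction(n, lambdas, rng):
    """Fraction of vertices in the intersection of the largest components
    of the color-avoiding graphs G^i (edges of color i removed)."""
    k = len(lambdas)
    colors = [er_edges(n, lam / n, rng) for lam in lambdas]
    common = set(range(n))
    for i in range(k):
        avoid_i = [e for j in range(k) if j != i for e in colors[j]]
        common &= largest_component(n, avoid_i)
    return len(common) / n

rng = random.Random(0)
# Illustrative parameters: k = 2 colors with lambda_1 = lambda_2 = 2, so
# lambda_i^* = 2 > 1 for both colors and each G^i is supercritical.
frac = giant_intersection_fraction(400, [2.0, 2.0], rng)
```

For these parameters each giant component covers roughly 80% of the vertices, so the intersection is comfortably of linear size.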
###### Proof of Proposition 1.2.
Firstly, we recall the notion of CA-connectivity in $\textbf{GW}(\Lambda)$
from [10]. Two vertices $u$ and $v$ are declared to be CA-connected if for
every $i\in[k]$, either $u$ and $v$ are connected in the subgraph of
$\textbf{GW}(\Lambda)$ obtained by removing edges in color $i$, which we
denote by $\textbf{GW}^{i}(\Lambda)$, or if their connected components in this
graph are both infinite. We denote by $\widetilde{\mathcal{C}}(\varnothing)$
the CA-component of the root.
Now, we define $I=\\{i:\lambda_{i}^{*}>1\\}$, and $J=[k]\setminus I$. Then,
for $L\geq 1$, $M\geq 1$, and $u\in[n]$, we define
$\widetilde{\mathcal{C}}_{L,M}(u)=\Big{(}\bigcap_{i\in
J}\Big{\\{}v:u\xleftrightarrow{G^{i}\cap\mathcal{V}_{L}(u)}v\Big{\\}}\Big{)}\cap\Big{(}\bigcap_{i\in
I}\
\Big{\\{}v\in\mathcal{V}_{L}(u):u\xleftrightarrow{G^{i}\cap\mathcal{V}_{L}(u)}v\
\text{or}\ \min(|\mathcal{C}^{i}(u)|,|\mathcal{C}^{i}(v)|)\geq
M\Big{\\}}\Big{)},$
with the notation from the proof of Theorem 1.1 (i), and
$\widetilde{\mathcal{C}}_{L}(u)=\Big{(}\bigcap_{i\in
J}\Big{\\{}v\in\mathcal{V}_{L}(u):u\xleftrightarrow{G^{i}\cap\mathcal{V}_{L}(u)}v\Big{\\}}\Big{)}\cap\Big{(}\bigcap_{i\in
I}\
\Big{\\{}v\in\mathcal{V}_{L}(u):u\xleftrightarrow{G^{i}\cap\mathcal{V}_{L}(u)}v\
\text{or both}\ u,v\in\mathcal{C}^{i}_{\max}\Big{\\}}\Big{)}.$
Note that on the a.a.s. event that
$|\mathcal{C}_{\max}^{i}|\geq\tfrac{\mu_{i}}{2}n$ for all $i\in I$, and for all sufficiently
large $n$, every vertex $u$ such that
$|\widetilde{\mathcal{C}}_{L}(u)|<|\widetilde{\mathcal{C}}_{L,M}(u)|$ is at
distance at most $L$ from the set
$\mathcal{S}_{M}=\bigcup_{i\in I}\\{v\in G:|\mathcal{C}^{i}(v)|\geq M\text{
and }v\notin\mathcal{C}^{i}_{\max}\\}.$
However, by the Cauchy-Schwarz inequality, the expected number of such vertices divided by $n$ is at most
$\mathbb{E}\left[\frac{1}{n}\sum_{v\in[n]}|\mathcal{V}_{L}(v)|\mathds{1}_{\\{v\in\mathcal{S}_{M}\\}}\right]=\mathbb{E}\left[|\mathcal{V}_{L}(1)|\mathds{1}_{\\{1\in\mathcal{S}_{M}\\}}\right]\leq\mathbb{E}[|\mathcal{V}_{L}(1)|^{2}]^{1/2}\cdot\mathbb{P}(1\in\mathcal{S}_{M})^{1/2}.$
Moreover, by (15), $\mathbb{P}(1\in\mathcal{S}_{M})\to 0$ as $M\to\infty$
uniformly in $n$, and a straightforward computation shows that for fixed
$\Lambda$ and $L$, the second moment of $|\mathcal{V}_{L}(1)|$ is uniformly
bounded in $n$ (using e.g. that it is stochastically dominated by the size of
the $L$-neighborhood of the root in a BGW tree with
$\text{Bin}(n,\tfrac{\Lambda}{n})$ offspring distribution). Therefore,
Markov’s inequality implies that for every $\varepsilon>0$ and every $L$, one
can find $M$ large enough so that
$\limsup_{n\to\infty}\
\mathbb{P}\left(\frac{1}{n}\sum_{v\in[n]}|\mathcal{V}_{L}(v)|\mathds{1}_{\\{v\in\mathcal{S}_{M}\\}}\geq\varepsilon\right)\leq\varepsilon,$
and in particular,
$\limsup_{n\to\infty}\ \mathbb{P}\left(\frac{|\\{u\in
G:|\widetilde{\mathcal{C}}_{L}(u)|<|\widetilde{\mathcal{C}}_{L,M}(u)|\\}|}{n}\geq\varepsilon\right)\leq\limsup_{n\to\infty}\mathbb{P}\left(\exists
i\in[k]:|\mathcal{C}_{\max}^{i}|\leq\frac{\mu_{i}n}{2}\right)+\varepsilon=\varepsilon.$
(17)
Similarly, we define
$\widetilde{\mathcal{C}}_{L,M}(\varnothing)=\Big{(}\bigcap_{i\in
J}\Big{\\{}v\in\mathcal{V}_{L}(\varnothing)\cap\mathcal{C}^{i}(\varnothing)\Big{\\}}\Big{)}\cap\Big{(}\bigcap_{i\in
I}\
\Big{\\{}v\in\mathcal{V}_{L}(\varnothing):v\in\mathcal{C}^{i}(\varnothing)\
\text{or}\ \min(|\mathcal{C}^{i}(\varnothing)|,|\mathcal{C}^{i}(v)|)\geq
M\Big{\\}}\Big{)},$
and
$\widetilde{\mathcal{C}}_{L}(\varnothing)=\Big{(}\bigcap_{i\in
J}\Big{\\{}v\in\mathcal{V}_{L}(\varnothing)\cap\mathcal{C}^{i}(\varnothing)\Big{\\}}\Big{)}\cap\Big{(}\bigcap_{i\in
I}\
\Big{\\{}v\in\mathcal{V}_{L}(\varnothing):v\in\mathcal{C}^{i}(\varnothing)\
\text{or}\
|\mathcal{C}^{i}(\varnothing)|=|\mathcal{C}^{i}(v)|=\infty\Big{\\}}\Big{)},$
which is the decreasing limit of $\widetilde{\mathcal{C}}_{L,M}(\varnothing)$
as $M\to\infty$. Moreover,
$\displaystyle\mathbb{P}\Big{(}|\widetilde{\mathcal{C}}_{L}(\varnothing)|<|\widetilde{\mathcal{C}}_{L,M}(\varnothing)|\Big{)}\leq\mathbb{E}\Big{[}|\widetilde{\mathcal{C}}_{L,M}(\varnothing)|-|\widetilde{\mathcal{C}}_{L}(\varnothing)|\Big{]}\leq\sum_{i\in
I}\mathbb{E}\left[\sum_{v\in\mathcal{V}_{L}(\varnothing)}\mathds{1}_{\\{v\notin\mathcal{C}^{i}(\varnothing),\,M\,\leq\,|\mathcal{C}^{i}(v)|\,<\,\infty\\}}\right].$
Fix $i\in I$. For every edge $e\in\textbf{GW}(\Lambda)$, let us denote by
$e^{+}$ the endvertex of $e$ which is farther from the root. Then, for every
vertex $v\in\textbf{GW}(\Lambda)$ such that
$\mathcal{C}^{i}(\varnothing)\neq\mathcal{C}^{i}(v)$, the (unique) path
between $\varnothing$ and $v$ in $\textbf{GW}(\Lambda)$ contains an edge in
color $i$. Considering the closest such edge to $v$, we get
$\sum_{v\in\mathcal{V}_{L}(\varnothing)}\mathds{1}_{\\{v\notin\mathcal{C}^{i}(\varnothing),\,M\,\leq\,|\mathcal{C}^{i}(v)|\,<\,\infty\\}}\leq\sum_{e\in\mathcal{V}_{L}(\varnothing),\,\text{$e$
in color
$i$}}|\mathcal{C}^{i}(e^{+})\cap\mathcal{V}_{L}(e^{+})|\cdot\mathds{1}_{\\{M\,\leq\,|\mathcal{C}^{i}(e^{+})|\,<\,\infty\\}}.$
Since for any edge $e$ in color $i$, the component $\mathcal{C}^{i}(e^{+})$ is
contained in the subtree of the descendants of $e^{+}$, and is thus
independent of the remainder of $\textbf{GW}(\Lambda)$, we have
$\mathbb{E}\left[\sum_{v\in\mathcal{V}_{L}(\varnothing)}\mathds{1}_{\\{v\notin\mathcal{C}^{i}(\varnothing),\,M\,\leq\,|\mathcal{C}^{i}(v)|\,<\,\infty\\}}\right]\leq\mathbb{E}[|\mathcal{V}_{L}(\varnothing)|]\cdot\mathbb{E}\left[|\mathcal{V}_{L}(\varnothing)|\cdot\mathds{1}_{\\{M\,\leq\,|\mathcal{C}^{i}(\varnothing)|\,<\,\infty\\}}\right].$
Then, using the Cauchy-Schwarz inequality as before, we get that
$\mathbb{P}(|\widetilde{\mathcal{C}}_{L}(\varnothing)|<|\widetilde{\mathcal{C}}_{L,M}(\varnothing)|)\xrightarrow[M\to\infty]{}0.$
(18)
The next step is to notice that since the sets
$\widetilde{\mathcal{C}}_{L,M}(u)$ are measurable with respect to the
$(L+M)$-neighborhood of a vertex $u$, for every $\ell\geq 1$, Proposition 3.1
implies that
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\widetilde{\mathcal{C}}_{L,M}(u)|\,=\,\ell\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\widetilde{\mathcal{C}}_{L,M}(\varnothing)|=\ell).$
We remark that this last step is reminiscent of a similar convergence in Lemma
5.2.4 in [10]. Together with (17) and (18), by letting $M\to\infty$ we get
that for any $L\geq 1$,
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|\widetilde{\mathcal{C}}_{L}(u)|\,=\,\ell\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}(|\widetilde{\mathcal{C}}_{L}(\varnothing)|=\ell).$
(19)
Finally, to conclude the proof of (3), it remains to take the $L\to\infty$
limit in the last display. For the right-hand side, we just observe that the
CA-component of the root is the increasing limit of the sets
$\widetilde{\mathcal{C}}_{L}(\varnothing)$ as $L\to\infty$, from which it
follows that for any $\ell\geq 1$,
$\mathbb{P}(|\widetilde{\mathcal{C}}_{L}(\varnothing)|=\ell,|\widetilde{\mathcal{C}}(\varnothing)|>\ell)\xrightarrow[L\to\infty]{}0,$
which yields for any $\ell\geq 1$,
$\mathbb{P}(|\widetilde{\mathcal{C}}_{L}(\varnothing)|=\ell)\xrightarrow[L\to\infty]{}\mathbb{P}(|\widetilde{\mathcal{C}}(\varnothing)|=\ell).$
(20)
It remains to show the corresponding convergence for a typical vertex of $G$.
We distinguish two cases. If the set $J$ is nonempty (or equivalently if
$\lambda_{1}^{*}\leq 1$), then we claim that for any vertex $u\in[n]$, we have
$\\{\widetilde{\mathcal{C}}_{L}(u)\neq\widetilde{\mathcal{C}}(u)\\}\
\subseteq\ \left(\bigcup_{i\in J}\\{|\mathcal{C}^{i}(u)|\geq
L\\}\right)\cup\left(\bigcup_{i\in I}\\{|\mathcal{C}^{i}(u)|\geq L\text{ and
}u\notin\mathcal{C}_{\max}^{i}\\}\right).$ (21)
Indeed, for the event on the left-hand side of (21) to hold, either there is a
vertex in $\widetilde{\mathcal{C}}(u)\setminus\mathcal{V}_{L}(u)$, which
together with $\widetilde{\mathcal{C}}(u)\subseteq\mathcal{C}^{1}(u)$ implies
that $|\mathcal{C}^{1}(u)|\geq L$, or there is a vertex $v$ in
$\widetilde{\mathcal{C}}(u)\cap\mathcal{V}_{L}(u)$ outside
$\widetilde{\mathcal{C}}_{L}(u)$, which means that there is $i\in[k]$ such
that $u$ and $v$ are connected by a path in $G^{i}$ exiting
$\mathcal{V}_{L}(u)$, and if $i\in I$, additionally
$\mathcal{C}^{i}(u)\neq\mathcal{C}^{i}_{\max}$. Now, given $\varepsilon>0$,
one can choose $L$ large enough so that
$\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|\geq L)\leq\tfrac{\varepsilon}{2k}$
for all $i\in J$ and
$\mathbb{P}(L\leq|\mathcal{C}^{i}(\varnothing)|<\infty)\leq\tfrac{\varepsilon}{2k}$
for all $i\in I$, and then using Proposition 3.1, we get that
$\begin{split}&\limsup_{n\to\infty}\ \mathbb{P}\left(\frac{|\\{u\in
G:|\widetilde{\mathcal{C}}_{L}(u)|<|\widetilde{\mathcal{C}}(u)|\\}|}{n}\geq\varepsilon\right)\\\
\leq\;&\sum_{i\in J}\limsup_{n\to\infty}\ \mathbb{P}\left(\tfrac{|\\{u\in
G\,:\,|\mathcal{C}^{i}(u)|\,\geq\,L\\}|}{n}\geq\frac{\varepsilon}{k}\right)+\sum_{i\in
I}\limsup_{n\to\infty}\ \mathbb{P}\left(\tfrac{|\\{u\in
G\,:\,L\,\leq\,|\mathcal{C}^{i}(u)|\,<\,\infty\\}|}{n}\geq\frac{\varepsilon}{k}\right)=0.\end{split}$
(22)
We now consider the slightly more difficult case where $J$ is empty. Let
$A=\bigcap_{i=1}^{k}\mathcal{C}^{i}_{\max}$. One has
$\big{\\{}|\widetilde{\mathcal{C}}_{L}(u)|=\ell,|\widetilde{\mathcal{C}}(u)|>\ell\big{\\}}\
\subseteq\ \Big{\\{}|\widetilde{\mathcal{C}}_{L}(u)|=\ell,u\in
A\Big{\\}}\cup\left(\bigcup_{i=1}^{k}\\{|\mathcal{C}^{i}(u)|\geq
L,\mathcal{C}^{i}(u)\neq\mathcal{C}^{i}_{\max}\\}\right)$ (23)
since either $u\in A$, or there is a vertex $v$ and $i\in[k]$ such that
$u,v\notin\mathcal{C}^{i}_{\max}$ but $u$ and $v$ are connected by a path in
$G^{i}$ exiting $\mathcal{V}_{L}(u)$. Moreover, note that if $u\in A$, then
$\widetilde{\mathcal{C}}(u)=A$, and thus
$\Big{\\{}|\widetilde{\mathcal{C}}_{L}(u)|=\ell,u\in A\Big{\\}}\ \subseteq\
\Big{\\{}\big{|}A\cap\mathcal{V}_{L}(u)\big{|}=\ell,|\mathcal{V}_{L}(u)|\geq
L\Big{\\}}\cup\\{|A|=\ell\\}.$
Indeed, if $u\in A$ and $|A|\geq\ell+1$, then $A$ must contain a
vertex outside $\mathcal{V}_{L}(u)$, which means that
$|\mathcal{V}_{L}(u)|\geq L$. Note that Theorem 1.1 (i) implies that
$\mathbb{P}(|A|=\ell)\to 0$ as $n\to\infty$, so we concentrate on the first
event in the union above. For $M\geq 1$, define
$A_{M}=\\{v:|\mathcal{C}^{i}(v)|\geq M\text{ for all }i\in[k]\\},$
where for convenience we view $A_{M}$ as a vertex subset of both $G$ and $\textbf{GW}(\Lambda)$. By a similar argument as for (17), we know that for
every $\varepsilon>0$, there exists $M$ such that
$\limsup_{n\to\infty}\
\mathbb{P}\left(\Big{|}\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|A\cap\mathcal{V}_{L}(u)|\,=\,\ell,\,|\mathcal{V}_{L}(u)|\,\geq\,L\\}}-\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|A_{M}\cap\mathcal{V}_{L}(u)|\,=\,\ell,\,|\mathcal{V}_{L}(u)|\,\geq\,L\\}}\Big{|}\geq\varepsilon\right)=0.$
However, by Proposition 3.1 one has for any $L\geq 1$ and $M\geq 1$,
$\frac{1}{n}\sum_{u\in[n]}\mathds{1}_{\\{|A_{M}\cap\mathcal{V}_{L}(u)|\,=\,\ell,\,|\mathcal{V}_{L}(u)|\,\geq\,L\\}}\xrightarrow[n\to\infty]{\mathbb{P}}\mathbb{P}\big{(}|A_{M}\cap\mathcal{V}_{L}(\varnothing)|=\ell,|\mathcal{V}_{L}(\varnothing)|\geq
L\big{)},$
which goes to $0$ as $L\to\infty$ for any fixed $M\geq 1$. Indeed, this can be
seen by exploring $\mathcal{V}_{L}(u)$ in several steps. At each step, we fix
a vertex $v$ at the boundary of the already explored set and explore the
successors of $v$ at distance at most $M$ from $v$. Note that the probability
that $v$ belongs to $A_{M}$ is bounded from below by a positive constant,
which is independent of the previous steps. Furthermore, as $L\to\infty$ and
on the event $\\{|\mathcal{V}_{L}(\varnothing)|\geq L\\}$, the number of steps
goes almost surely to infinity, and thus the probability of the event
$\\{|A_{M}\cap\mathcal{V}_{L}(\varnothing)|=\ell\\}$ tends to 0 for every
fixed $\ell$. Then, by using also (16) again to handle the second union in
(23), we deduce that for every fixed $\varepsilon>0$ and $\ell\geq 1$, there
is a sufficiently large $L$ so that,
$\limsup_{n\to\infty}\mathbb{P}\left(\frac{|\\{u\in
G:|\widetilde{\mathcal{C}}_{L}(u)|=\ell,|\widetilde{\mathcal{C}}(u)|>\ell\\}|}{n}\geq\varepsilon\right)=0.$
Together with (19), (20) and (22), this proves (3).
The last piece of the proof of the proposition is to show that for any
$\ell\geq 2$,
$\nu_{\ell}=\mathbb{P}(|\widetilde{\mathcal{C}}(\varnothing)|=\ell)$ is
positive if and only if $\lambda_{k}^{*}>1$, and that the constant $a_{1}$
appearing in Part (i) of Theorem 1.1 is equal to the probability that the CA-
component of the root is infinite. This part bears close resemblance to the
proof of Proposition 2.18 (ii) from [10]. For the first part, assume that
$\lambda_{k}^{*}>1$, and let $\ell\geq 2$ be given. Then, with positive probability the following events hold simultaneously: $|\mathcal{C}_{k}(\varnothing)|=\ell$, all vertices of $\mathcal{C}_{k}(\varnothing)$ are connected to infinity in $\textbf{GW}^{k}(\Lambda)$, and $|\mathcal{C}^{i}(\varnothing)|<\infty$ for all $i$ different from $k$. On this event, the CA-component of the root is precisely $\mathcal{C}_{k}(\varnothing)$, and thus has size $\ell$. Conversely, if $\lambda_{k}^{*}\leq 1$, then it is well-known that all the components $\mathcal{C}^{i}(\varnothing)$ are a.s. finite, and since each edge a.s. has a unique color, the CA-component of the root is necessarily reduced to a single vertex.
For the last part, note that on the one hand, if
$|\widetilde{\mathcal{C}}(\varnothing)|=\infty$, then
$|\mathcal{C}^{i}(\varnothing)|=\infty$ for all $i\in[k]$, so
$a_{1}=\mathbb{P}(|\mathcal{C}^{i}(\varnothing)|=\infty\text{ for all
$i\in[k]$})\geq\mathbb{P}(|\widetilde{\mathcal{C}}(\varnothing)|=\infty).$
The proof of the reverse inequality appears as Lemma 5.1.1 in [10]; we provide
it for completeness. First, notice that if $a_{1}=0$, there is nothing to
prove. Thus, we may assume that $a_{1}>0$, which by Part (i) of Theorem 1.1 is
equivalent to $\lambda_{1}^{*}>1$. Suppose that each of the components
$\mathcal{C}^{i}(\varnothing)$ for $i\in[k]$ is infinite. We prove that in
this case, the CA-component of the root is also a.s. infinite. To begin with,
the well-known Kesten-Stigum theorem [6] implies that on the event
$\\{|\mathcal{C}^{1}(\varnothing)|=\infty\\}$, the number of vertices in
generation $L$ in the tree $\mathcal{C}^{1}(\varnothing)$ goes to infinity
almost surely as $L\to\infty$. However, conditionally on the vertices
$v_{1},\dots,v_{N}$ in the $L$-th generation of that tree, the subtrees
$T_{1},\dots,T_{N}$ of $\textbf{GW}(\Lambda)$ emanating from the vertices
$v_{1},\dots,v_{N}$, respectively, are all independent. Thus, the probability
that for at least $a_{1}N/2$ of them, the connected components of their root
are infinite in each of $\textbf{GW}^{i}(\Lambda)$ for $i\in[k]$, goes to one
as $N\to\infty$. Moreover, all such vertices are in the CA-component of
$\varnothing$. But as already mentioned, on the event
$\\{|\mathcal{C}^{1}(\varnothing)|=\infty\\}$ we have that $N=N(L)\to\infty$
almost surely as $L\to\infty$, so the result follows. ∎
### 3.2 Proof of Theorem 1.1 (ii) and Proposition 1.3
The proof starts with the following general lemma that will also be used for
the proof of Theorem 1.1 (iii) in the next section. For $I\subset[k]$, write
$\lambda_{I}^{*}=\Lambda-\sum_{i\in I}\lambda_{i}$.
###### Lemma 3.2.
For every $j\in[k]$ and $I\subseteq[k]\setminus\\{j\\}$ such that
$\max(\lambda_{I}^{*},\lambda_{j}^{*})<1$, one has uniformly in $n$,
$\lim_{M\to\infty}\mathbb{P}\big{(}\exists u\neq
v:u\xleftrightarrow{G^{I}}v,\,u\xleftrightarrow{G^{j}}v,\,u\stackrel{{\scriptstyle
G^{I\cup\\{j\\}}}}{{\centernot\longleftrightarrow}}v,\,\max(|\mathcal{C}^{I}(u)|,|\mathcal{C}^{j}(u)|)\geq
M\big{)}=0.$
###### Proof.
Fix $\varepsilon>0$ and a set $I\subseteq[k]\setminus\\{j\\}$ with
$\max(\lambda_{I}^{*},\lambda_{j}^{*})<1$. For every $u,v\in[n]$, define the
event
$\mathcal{A}_{u,v}:=\big{\\{}u\xleftrightarrow{G^{I}}v,\,u\xleftrightarrow{G^{j}}v,\,u\stackrel{{\scriptstyle
G^{I\cup\\{j\\}}}}{{\centernot\longleftrightarrow}}v\big{\\}}.$
Then, on the event $\mathcal{A}_{u,v}$, there exists a path $P$ connecting $u$
and $v$ not using color $j$. Since $u$ and $v$ are not connected in
$G^{I\cup\\{j\\}}$, $P$ must contain at least one edge which is not in
$G^{I}$. In particular, $P$ must exit the graph $\mathcal{C}^{I}(u)$ at some
point. Consequently, on the event $\mathcal{A}_{u,v}$, there must exist
$w_{1},w_{2}\in\mathcal{C}^{I}(u)$ which are connected in
$G^{j}\setminus\mathcal{C}^{I}(u)$ (where by this, we mean the subgraph of
$G^{j}$ with vertex set $[n]$ obtained by removing only the edges of
$\mathcal{C}^{I}(u)$). Moreover, note that conditioning on
$\mathcal{C}^{I}(u)$ only forces edges in $G\setminus\mathcal{C}^{I}(u)$ with
at least one endvertex in $\mathcal{C}^{I}(u)$ to have a color in $I$, while
it does not reveal any information for the remaining edges. Thus, for every
fixed graph $H$ containing $u$, one has
$\mathbb{P}\Big{(}w_{1}\xleftrightarrow{G^{j}\setminus
H}w_{2}\mid\mathcal{C}^{I}(u)=H\Big{)}\leq\mathbb{P}\Big{(}w_{1}\xleftrightarrow{G^{j}\setminus
H}w_{2}\Big{)}\leq\mathbb{P}(w_{1}\xleftrightarrow{G^{j}}w_{2})=\mathcal{O}\left(\frac{1}{n}\right),$
where for the last equality we use Corollary 2.3 together with the fact that
$G^{j}$ is stochastically dominated by $G(n,\lambda_{j}^{*}/n)$ (and that
$\lambda_{j}^{*}<1$ by hypothesis). As a consequence, on the event
$\\{v\in\mathcal{C}^{I}(u)\\}$,
$\mathbb{P}\Big{(}u\xleftrightarrow{G^{j}}v,\,u\stackrel{{\scriptstyle
G^{I\cup\\{j\\}}}}{{\centernot\longleftrightarrow}}v\mid\mathcal{C}^{I}(u)\Big{)}\leq\sum_{w_{1},w_{2}\in\mathcal{C}^{I}(u)}\mathbb{P}\Big{(}w_{1}\xleftrightarrow{G^{j}\setminus\mathcal{C}^{I}(u)}w_{2}\mid\mathcal{C}^{I}(u)\Big{)}=\mathcal{O}\left(\frac{|\mathcal{C}^{I}(u)|^{2}}{n}\right).$
Therefore,
$\displaystyle\mathbb{P}\big{(}\mathcal{A}_{u,v},\,|\mathcal{C}^{I}(u)|\geq
M\big{)}$
$\displaystyle=\mathcal{O}\left(\frac{1}{n}\sum_{t\,\geq\,M}t^{2}\cdot\mathbb{P}(v\in\mathcal{C}^{I}(u)\mid|\mathcal{C}^{I}(u)|=t)\cdot\mathbb{P}(|\mathcal{C}^{I}(u)|=t)\right)$
$\displaystyle=\mathcal{O}\left(\frac{1}{n^{2}}\sum_{t\,\geq\,M}t^{3}\cdot\mathbb{P}(|\mathcal{C}^{I}(u)|=t)\right)=\mathcal{O}\left(\frac{M^{3}\,\mathrm{e}^{-M\cdot
I_{\lambda_{I}}}}{n^{2}}\right),$ (24)
where we use (6) for the second equality and Lemma 2.1 for the last one. Also,
by a similar argument
$\mathbb{P}\big{(}\mathcal{A}_{u,v},\,|\mathcal{C}^{j}(u)|\geq
M\big{)}=\mathcal{O}\left(\frac{M^{3}\,\mathrm{e}^{-M\cdot
I_{\lambda_{j}}}}{n^{2}}\right).$ (25)
A union bound by summing (24) and (25) over all possible pairs of vertices
$\\{u,v\\}$ of $G$, and letting $M\to\infty$ finishes the proof. ∎
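The exponential tail for subcritical component sizes invoked above (via Lemma 2.1, with rate $I_{\lambda}=\lambda-1-\log\lambda$) is easy to probe numerically. The following sketch (our own illustration, with hypothetical helper names and parameters chosen only for speed) estimates the component-size distribution of a fixed vertex in $G(n,\lambda/n)$ for $\lambda=0.5$ by a lazy breadth-first exploration:

```python
import random

def component_size(n, p, start, rng):
    """Size of the connected component of `start` in G(n, p), found by
    an exploration that samples each relevant edge on first inspection,
    which yields the correct G(n, p) law for the component size."""
    visited = {start}
    frontier = [start]
    unexplored = set(range(n)) - {start}
    while frontier:
        frontier.pop()  # explore one vertex of the frontier
        found = {v for v in unexplored if rng.random() < p}
        unexplored -= found
        visited |= found
        frontier.extend(found)
    return len(visited)

lam, n, trials = 0.5, 2000, 400
rng = random.Random(1)
sizes = [component_size(n, lam / n, 0, rng) for _ in range(trials)]
tail = sum(s >= 20 for s in sizes) / trials
# With I_{0.5} = 0.5 - 1 - log(0.5), approximately 0.193, the tail
# P(|C| >= 20) is of order e^{-20 * 0.193}, far below 5%.
```

The observed tail frequency should be negligible, consistent with the $\mathrm{e}^{-M\cdot I_{\lambda}}$ factor in (24) and (25).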
We call a set _connected_ if its vertices belong to the same connected
component.
###### Lemma 3.3.
Suppose that $\lambda_{k-1}^{*}<1$. Then, uniformly in $n$,
$\lim_{M\to\infty}\mathbb{P}(\exists S\subseteq[n]:|S|\geq M,\text{$S$ is
connected in each of $G^{1},\dots,G^{k-1}$ but not in $G_{k}$})=0.$
###### Proof.
Fix $S\subseteq[n]$ of size at least $M$, and assume that $S$ is connected in
each of the graphs $G^{1},\dots,G^{k-1}$. If $S$ is not connected in $G_{k}$,
it means that one can find two distinct vertices $u,v\in S$ which are not
connected in $G_{k}$. Let $I$ be a maximal subset of $[k-1]$ such that $u$ and
$v$ are connected in $G^{I}$. Since $u$ and $v$ are not connected in $G_{k}$,
$I\neq[k-1]$, and hence there exists $j\in[k-1]\setminus I$ such that $u$ and
$v$ are not connected in $G^{I\cup\\{j\\}}$. However, using that
$\max(\lambda_{I}^{*},\lambda_{j}^{*})<1$ and $|\mathcal{C}^{j}(u)|\geq|S|\geq
M$, Lemma 3.2 ensures that this event happens with probability converging to 0
as $M\to\infty$, uniformly in $n$. ∎
We are now ready to conclude the proofs of Theorem 1.1 (ii) and Proposition 1.3.
###### Proof of Proposition 1.3.
Denote by $\widetilde{\mathcal{C}}_{\max}$ the largest CA-component in $G$. As
the vertices of $\widetilde{\mathcal{C}}_{\max}$ form a connected set in
$G^{i}$ for every $i\in[k-1]$, by Lemma 3.3 we have that uniformly in $n$,
$\lim_{M\to\infty}\ \mathbb{P}(|\widetilde{\mathcal{C}}_{\max}|\geq
M,\text{$\widetilde{\mathcal{C}}_{\max}$ is not connected in $G_{k}$})=0.$
(26)
On the other hand, if $\widetilde{\mathcal{C}}_{\max}$ is connected in
$G_{k}$, it is obtained by intersecting a connected component in $G^{k}$ and
one in $G_{k}$. Moreover, by definition the probability of having an edge
present in $G^{k}$ is
$1-\prod_{i=1}^{k-1}(1-\frac{\lambda_{i}}{n})\leq\tfrac{\lambda_{k}^{*}}{n}=\tfrac{1}{n}$.
Thus, $G^{k}$ is stochastically dominated by $G(n,1/n)$. However, it is well-
known that the size of the largest component in $G(n,1/n)$ is a.a.s. of order
$n^{2/3}$ (see Proposition 5.2 in [11]), in particular it is a.a.s. smaller
than $n^{3/4}$, say. On the other hand, by a similar argument as in (6) one
has that for any $n>\ell\geq M\geq 2$ and any distinct vertices
$v_{1},\dots,v_{M}\in[n]$,
$\mathbb{P}\big{(}v_{M}\in\mathcal{C}^{k}(v_{1})\mid|\mathcal{C}^{k}(v_{1})|=\ell,v_{2},\dots,v_{M-1}\in\mathcal{C}^{k}(v_{1})\big{)}=\frac{\ell-(M-1)}{n-(M-1)}\leq\frac{\ell}{n},$
and the same holds with $\mathcal{C}_{k}(v_{1})$ instead of
$\mathcal{C}^{k}(v_{1})$. Thus, for any $M\geq 2$ and any distinct
$v_{1},\dots,v_{M}\in[n]$,
$\displaystyle\mathbb{P}\big{(}v_{1},\dots,v_{M}\text{ are connected in
}G^{k},|\mathcal{C}^{k}_{\max}|\leq n^{3/4}\big{)}$
$\displaystyle\leq\sum_{\ell=M}^{n^{3/4}}\bigg{(}\frac{\ell}{n}\bigg{)}^{M-1}\cdot\mathbb{P}(|\mathcal{C}^{k}(v_{1})|=\ell)$
$\displaystyle=\frac{\mathbb{E}[|\mathcal{C}^{k}(v_{1})|^{M-1}\cdot\mathds{1}_{\\{|\mathcal{C}^{k}(v_{1})|\leq
n^{3/4}\\}}]}{n^{M-1}}$ $\displaystyle\leq n^{-\frac{M-1}{4}}.$
Likewise, we know by (5) that a.a.s. the size of the largest connected
component in $G_{k}$ is at most $\tfrac{2}{I_{\lambda_{k}}}\log n$, and as
above one has
$\mathbb{P}\big{(}v_{1},\dots,v_{M}\text{ are connected in
}G_{k},|\mathcal{C}_{k}(v_{1})|\leq\tfrac{2}{I_{\lambda_{k}}}\log
n\big{)}=\mathcal{O}\bigg{(}\frac{(\log n)^{M-1}}{n^{M-1}}\bigg{)}.$
Summing over all possible vertices $v_{1},\dots,v_{M}\in[n]$ and using
independence between $G^{k}$ and $G_{k}$, we deduce that
$\displaystyle\mathbb{P}(\exists v_{1},\dots,v_{M}\text{ connected in both
}G_{k}\text{ and }G^{k})$
$\displaystyle\leq\mathbb{P}\bigg{(}\max_{u\in[n]}|\mathcal{C}^{k}(u)|\geq
n^{3/4}\bigg{)}+\mathbb{P}\bigg{(}\max_{u\in[n]}|\mathcal{C}_{k}(u)|\geq\tfrac{2}{I_{\lambda_{k}}}\log
n\bigg{)}+\mathcal{O}\bigg{(}\frac{(\log
n)^{M-1}}{n^{\frac{M-5}{4}}}\bigg{)}=o(1),$
where the last equality holds as soon as $M\geq 6$. Together with (26), this
concludes the proof of the proposition. ∎
###### Proof of Theorem 1.1 (ii).
Recall that now $\lambda_{k}^{*}>1>\lambda_{k-1}^{*}$, and in particular, all
graphs $G^{i}$ with $i\leq k-1$ are subcritical while $G^{k}$ is
supercritical. Firstly, we observe that there exists $\varepsilon>0$ such that
a.a.s. the largest CA-component has size larger than $\varepsilon\log n$.
Indeed, we know by (5) that a.a.s. there exists a connected component in
$G_{k}$ of size at least $\log n/(2I_{\lambda_{k}})$, and hence (7) and Lemma
2.10 together imply that a.a.s. its intersection with the giant component of
$G^{k}$ has size at least $\mu_{\lambda_{k}^{*}}\log n/(4I_{\lambda_{k}})$
(recall that $G^{k}$ and $G_{k}$ are independent).
Next, let $\widetilde{\mathcal{C}}_{\max}$ be the largest CA-component. By
definition its vertices are connected in all the graphs $G^{1},\dots,G^{k-1}$,
and thus by Lemma 3.3 a.a.s. they are also connected in $G_{k}$. This means
that $\widetilde{\mathcal{C}}_{\max}$ is in fact obtained as the intersection
of a connected component of $G_{k}$ with one of $G^{k}$. However, it is well-
known that a.a.s. all connected components in a supercritical ER random graph
but the largest one have size $\mathcal{O}(\log n)$ (see e.g. Section 4.4.1 in
[11]). Thus, by the same argument as in the proof of Proposition 1.3, we
deduce that the probability of having three vertices connected in $G_{k}$ and
participating in the same non-giant component of $G^{k}$ is
$\mathcal{O}(n^{3}\cdot\frac{(\log n)^{4}}{n^{4}})=o(1)$. Hence, a.a.s. every
CA-component of size at least 3 (and $\widetilde{\mathcal{C}}_{\max}$ in
particular) is contained in the giant component of $G^{k}$.
Finally, by Lemma 2.10 and Corollary 2.13 we conclude that, with the notation
of Corollary 2.13,
$\frac{|\widetilde{\mathcal{C}}_{\max}|}{\log
n}\xrightarrow[n\to\infty]{\mathbb{P}}a(\mu_{\lambda_{k}^{*}},\lambda_{k}),$
which finishes the proof. ∎
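The logarithmic order identified in this regime can be probed by simulation. The sketch below (our own illustration, with names of our choosing) uses the characterization from the proof rather than computing CA-components directly: for $k=2$ with $(\lambda_{1},\lambda_{2})=(2,0.5)$, so that $\lambda_{2}^{*}=2>1>\lambda_{1}^{*}=0.5$, the largest CA-component is a.a.s. the largest intersection of a component of the subcritical graph $G_{2}$ with the giant component of the supercritical graph $G^{2}$, and that proxy should be of order $\log n$:

```python
import random
from collections import defaultdict

def er_components(n, p, rng):
    """Connected components of a sample of G(n, p), via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                parent[find(u)] = find(v)
    comps = defaultdict(set)
    for u in range(n):
        comps[find(u)].add(u)
    return list(comps.values())

rng = random.Random(2)
n = 2000
# G^2 (the color-1 edges) is supercritical with mean degree 2, while
# G_2 (the color-2 edges) is subcritical with mean degree 0.5.
giant = max(er_components(n, 2.0 / n, rng), key=len)
sub_comps = er_components(n, 0.5 / n, rng)
proxy = max(len(S & giant) for S in sub_comps)
```

For $n=2000$ the proxy comes out in the tens, far below any linear scale, consistent with the $\Theta(\log n)$ behavior.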
### 3.3 Proof of Theorem 1.1 (iii)
We assume throughout this section that $\lambda_{k}^{*}<1$.
We call the _support_ of a CA-component the subgraph of $G$ obtained as the union
of all paths in $G^{1},\ldots,G^{k}$ between any pair of distinct vertices of
the CA-component. The main observation of the proof is the following lemma.
###### Lemma 3.4.
A.a.s. every CA-component is supported by either a single vertex, a single
edge or a cycle of $G$. In particular, a.a.s. every CA-component has size at
most $k$.
###### Proof.
Consider a CA-component $\widetilde{\mathcal{C}}$ and assume that it is not
reduced to a single vertex. Let $u$ and $v$ be two different vertices of
$\widetilde{\mathcal{C}}$. Assume first that
$|\mathcal{C}^{i}(u)|\geq\frac{c_{1}(\Lambda)}{k}\log n$ for some $i\in[k]$,
with the notation of Remark 2.7. By the second point of this remark, and since
$\lambda_{i}^{*}<1$ by hypothesis, we know that a.a.s. $\mathcal{C}^{i}(u)$ is
a tree with no repeated edge. In other words, $u$ and $v$ are connected by a
unique path $P$ in $G^{i}$, and since $P$ contains no repeated edges, $u$ and
$v$ cannot be connected in $G^{\\{i,j\\}}$ for any color $j$ in $P$. However,
Lemma 3.2 applied for $I=\\{i\\}$ shows that a.a.s. this situation does not
happen. Thus, we may assume that
$|\mathcal{C}^{i}(u)|\leq\frac{c_{1}(\Lambda)}{k}\log n$ for all $i$, and by
summation over $i$ we may as well assume that $\mathcal{C}(u)$, the connected
component of $u$ in $G$, has size at most $c_{1}(\Lambda)\log n$. Then, using
the first result from Remark 2.7, we know that a.a.s. either $\mathcal{C}(u)$
contains no cycles and at most one repeated edge, or no repeated edges and at
most one cycle. Moreover, note that for every pair of vertices $u$ and $v$ in
$\widetilde{\mathcal{C}}$, $u$ and $v$ cannot be disconnected in $G$ by
deleting an edge with a single color in $\mathcal{C}(u)$. Thus, the unique
cycle or repeated edge necessarily supports $\widetilde{\mathcal{C}}$, which
concludes the proof of the first part.
For the second part, just observe that when $|\widetilde{\mathcal{C}}|\geq 3$,
the vertices in $\widetilde{\mathcal{C}}$ divide its supporting cycle into
paths without common colors, so there are at most $k$ such paths. ∎
For a positive integer $\ell$, we say that a cycle in $G$ is _separated into
$\ell$ parts_ if it can be divided into $\ell$ consecutive paths that use
disjoint sets of colors. We say that it is separated into _exactly_ $\ell$
parts if it is separated into $\ell$ parts but not into $\ell+1$ parts. The
following fact follows directly from the previous definition.
###### Lemma 3.5.
Every cycle in $G$ supports at most one CA-component of size more than $1$.
Moreover, a CA-component supported by a cycle has size $\ell$ if and only if
its supporting cycle is separated into exactly $\ell$ parts.
###### Proof.
Suppose that $\widetilde{\mathcal{C}}_{1}$ and $\widetilde{\mathcal{C}}_{2}$
are two distinct CA-components of sizes $\ell_{1},\ell_{2}\geq 2$,
respectively, which are supported by the same cycle, say $C$. Then,
$\widetilde{\mathcal{C}}_{1}$ and $\widetilde{\mathcal{C}}_{2}$ must be
disjoint. Moreover, the vertices of $\widetilde{\mathcal{C}}_{1}$ separate $C$ into $\ell_{1}$ parts, and those of $\widetilde{\mathcal{C}}_{2}$ separate $C$ into $\ell_{2}$ parts. It follows that the vertices of $\widetilde{\mathcal{C}}_{1}\cup\widetilde{\mathcal{C}}_{2}$ separate $C$ into $\ell_{1}+\ell_{2}$ parts, and so $\widetilde{\mathcal{C}}_{1}\cup\widetilde{\mathcal{C}}_{2}$ is a CA-component itself, a
contradiction.
Thus, one cycle can support at most one CA-component. At the same time, if it
supports a CA-component of size $\ell\geq 2$, it cannot be divided into
$\ell+1$ parts as otherwise it would also support a CA-component of size more
than $\ell$, which finishes the proof. ∎
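The separation notion can be made concrete algorithmically on a single cycle. The brute-force sketch below (our own illustration; `max_parts` is a hypothetical name, intended only for short cycles) takes the cyclic list of edge colors of a cycle and returns the largest $\ell$ such that the cycle is separated into $\ell$ parts, i.e. into $\ell$ consecutive arcs using pairwise disjoint color sets:

```python
from itertools import combinations

def max_parts(colors):
    """Largest number ell of consecutive arcs with pairwise disjoint
    color sets into which a cycle, given by the cyclic list of its edge
    colors, can be divided. Exhaustive search over cut positions."""
    L = len(colors)
    best = 1  # the trivial division into one part always works
    # A cut sits between consecutive edges; choosing ell of the L cut
    # positions splits the cycle into ell consecutive arcs.
    for ell in range(2, L + 1):
        for cuts in combinations(range(L), ell):
            arcs = [{colors[i % L] for i in range(a, b)}
                    for a, b in zip(cuts, cuts[1:] + (cuts[0] + L,))]
            if all(arcs[i].isdisjoint(arcs[j])
                   for i in range(ell) for j in range(i + 1, ell)):
                best = max(best, ell)
    return best
```

For instance, a 4-cycle with colors $(1,1,2,2)$ is separated into exactly two parts, a triangle with three distinct colors into three, while the alternating coloring $(1,2,1,2)$ admits no nontrivial separation at all.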
The last important piece towards the proof of Theorem 1.1 (iii) is the
following lemma.
###### Lemma 3.6.
For every $m\in\\{2,\dots,k\\}$, denote by $Y_{m}$ the number of cycles in $G$
that are separated into exactly $m$ parts. Then, there are positive constants
$\widetilde{\beta}_{2}$ and $\beta_{3},\ldots,\beta_{k}$, such that
$(Y_{2},\ldots,Y_{k})\xrightarrow[n\to\infty]{d}\mathrm{Po}(\widetilde{\beta}_{2})\otimes\bigotimes_{m=3}^{k}\mathrm{Po}(\beta_{m}).$
###### Proof of Lemma 3.6.
For every $m\in\\{2,\ldots,k\\}$, denote by $Y_{m,M}$ the number of cycles in $G$ that are separated into exactly $m$ parts and have length at most $M$.
The first step of the proof is to show that for every $m\in\\{2,\dots,k\\}$,
uniformly in $n$,
$\mathbb{E}[Y_{m}-Y_{m,M}]\xrightarrow[M\to\infty]{}0.$ (27)
To show this, note that a cycle of $G$ is separated into (at least) two parts
if and only if there exists a nonempty subset $I\subseteq[k]$ different from
$[k]$ such that one part of the cycle is contained in $G_{I}$ while the other
part of the cycle is contained in $G^{I}$. For every $\ell\geq 3$, set
$C_{2,\ell}$ to be the number of cycles of length $\ell$ in $G$ which are
separated into (at least) two parts.
Then, using that to form a cycle of length $\ell$, one may choose its vertices
in $\binom{n}{\ell}$ ways and order them in $\frac{(\ell-1)!}{2}$ ways, we
have
$\mathbb{E}[C_{2,\ell}]\leq\binom{n}{\ell}\frac{(\ell-1)!}{2}\sum_{I\subseteq[k]}\sum_{m=1}^{\ell-1}\left(\frac{\sum_{i\in
I}\lambda_{i}}{n}\right)^{m}\left(\frac{\sum_{i\in[k]\setminus
I}\lambda_{i}}{n}\right)^{\ell-m}\leq 2^{k}\cdot(\lambda_{k}^{*})^{\ell}.$
(28)
It follows that
$\mathbb{E}\left[Y_{m}-Y_{m,M}\right]\leq\sum_{\ell=M+1}^{n}\mathbb{E}[C_{2,\ell}]\leq
2^{k}\sum_{\ell=M+1}^{\infty}(\lambda_{k}^{*})^{\ell},$
which goes to $0$ as $M\to\infty$ uniformly in $n$ since $\lambda_{k}^{*}<1$
by hypothesis, thus proving (27).
The second step of the proof is to show that for every fixed $M\geq k$, one
has
$(Y_{2,M},\ldots,Y_{k,M})\xrightarrow[n\to\infty]{d}\bigotimes_{m=2}^{k}\mathrm{Po}(\beta_{m,M})$
(29)
for some positive constants $(\beta_{m,M})_{m=2}^{k}$. For $m\leq k$, denote
by $p_{m,\ell}$ the probability that a cycle of length $\ell$ in $G$ is
separated into exactly $m$ parts. Let also $\widetilde{Y}_{m,\ell}$ denote the
number of cycles of length $\ell$ which are separated into exactly $m$ parts.
In particular,
$Y_{m,M}=\sum_{\ell=\max(m,3)}^{M}\widetilde{Y}_{m,\ell}.$
Adopting the notation of Lemma 2.8, observe also that for every $m\leq k$,
conditionally on $G$,
$\widetilde{Y}_{m,\ell}\ \mathop{=}^{d}\ \text{Bin}(C_{\ell},p_{m,\ell}).$
Moreover, recall that $G$ is distributed as an Erdős-Rényi random graph with
parameters $n$ and $p=(1+o(1))\cdot\tfrac{\Lambda}{n}$, so Lemma 2.8 shows
that
$(C_{3},\dots,C_{M})\xrightarrow[n\to\infty]{d}\bigotimes_{\ell=3}^{M}Z_{\ell}$
where for every $\ell\in\{3,\dots,M\}$, $Z_{\ell}$ is a Poisson random
variable with parameter $\gamma_{\ell}=\tfrac{\Lambda^{\ell}}{2\ell}$. It
follows that for every fixed $\ell$, (still writing with a slight abuse of
notation $p_{m,\ell}$ for the limiting value of this probability as
$n\to\infty$, see Remark 3.7 for an explicit expression when $m=k$),
$(\widetilde{Y}_{2,\ell},\dots,\widetilde{Y}_{k,\ell})\xrightarrow[n\to\infty]{d}\Big{(}\text{Bin}(Z_{\ell},p_{2,\ell}),\dots,\text{Bin}(Z_{\ell},p_{k,\ell})\Big{)},$
which by the thinning property of the Poisson distribution (see e.g. Section
5.3 in [9]) is a vector of independent Poisson variables with parameters
$(p_{m,\ell}\cdot\gamma_{\ell})_{m=2}^{k}$. Summing over $\ell$ and using
the independence of the variables $(Z_{\ell})_{\ell=3}^{M}$, we deduce that
(29) holds with
$\beta_{m,M}=\sum_{\ell=\max(m,3)}^{M}p_{m,\ell}\cdot\gamma_{\ell}.$
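The Poisson thinning step invoked above (a binomial thinning of a Poisson count is again Poisson) can be illustrated numerically. The following sketch, which is not part of the proof and uses arbitrary illustrative parameters $\gamma=3$ and $p=0.4$, compares the empirical mean and variance of the thinned variable with $\gamma p$:

```python
import math
import random

# Illustrative check of the Poisson thinning property used above: if
# Z ~ Po(gamma) and, given Z, X ~ Bin(Z, p), then X ~ Po(gamma * p).
# The values gamma = 3.0 and p = 0.4 are arbitrary.

random.seed(0)
gamma, p, n_samples = 3.0, 0.4, 200_000

def poisson(lam):
    # Knuth's multiplicative sampler for a Poisson(lam) variable.
    threshold, count, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return count
        count += 1

samples = []
for _ in range(n_samples):
    z = poisson(gamma)
    x = sum(1 for _ in range(z) if random.random() < p)  # Bin(z, p) thinning
    samples.append(x)

mean = sum(samples) / n_samples
var = sum((x - mean) ** 2 for x in samples) / n_samples
# For a Poisson variable, mean and variance coincide, so both empirical
# moments should be close to gamma * p = 1.2.
print(mean, var)
```

Since mean and variance agree for a Poisson law, both empirical moments concentrating near $\gamma p$ is consistent with the thinned variable being $\mathrm{Po}(\gamma p)$.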
The final step is to show that for every $m\in\{2,\dots,k\}$, the sequence
$(\beta_{m,M})_{M\geq 3}$ is a bounded non-decreasing sequence, which
therefore converges as $M\to\infty$ to some positive and finite constant. The
fact that it is non-decreasing is straightforward by definition. On the other
hand, by (28) we deduce that for every $m\in\{2,\dots,k\}$ and $M\geq 1$,
$\beta_{m,M}\leq\liminf_{n\to\infty}\ \mathbb{E}[Y_{m,M}]\leq\liminf_{n\to\infty}\ \mathbb{E}\left[\sum_{\ell=3}^{M}C_{2,\ell}\right]\leq\frac{2^{k}}{1-\lambda_{k}^{*}},$
showing that the sequence $(\beta_{m,M})_{M\geq 3}$ is bounded, which
completes the proof. ∎
To finish the proof of Theorem 1.1 (iii), note that by Lemma 3.4 a.a.s. for
every $m\geq 3$ we have $N_{m}=Y_{m}$, while $N_{2}$ is the sum of $Y_{2}$ and
the repeated edges in $G$. Thus, using the notation of Lemma 3.6 and Corollary
2.9, Theorem 1.1 (iii) follows with the constants
$\beta_{2}=\widetilde{\beta}_{2}+\gamma_{2}$ and $(\beta_{m})_{m=3}^{k}$.
$\square$
###### Remark 3.7.
We note that while it is possible to provide explicit expressions for
$\beta_{2},\dots,\beta_{k}$ in terms of $\lambda_{1},\dots,\lambda_{k}$, they
tend to become more and more complicated as the index decreases from $k$ to $2$.
However, one can provide a simple formula for $\beta_{k}$. We do this in the
case $k\geq 3$; in fact, with the notation of Corollary 2.9, in the case $k=2$
one simply needs to add $\gamma_{2}$ to the final result to account for the
number of repeated edges.
By the previous proof, one has
$\beta_{k}=\sum_{\ell\geq k}p_{k,\ell}\cdot\gamma_{\ell},$
where $p_{k,\ell}$ is the limit (as $n\to\infty$) of the probability that a
cycle of length $\ell$ in $G$ is separated into exactly $k$ parts, and
$\gamma_{\ell}=\tfrac{\Lambda^{\ell}}{2\ell}$. To compute $p_{k,\ell}$, one
needs to decide the lengths $s_{1},\dots,s_{k}\geq 1$ of the portions of the
cycle in colors $1,\dots,k$, respectively (with the constraint that
$s_{1}+\dots+s_{k}=\ell$). Then, one needs to choose the starting vertex of
the path colored in color $1$ (say when turning clockwise), for which there
are $\ell$ choices, and the order of appearance of the other colors, for which
there are $(k-1)!$ choices. Finally, note that as $n\to\infty$, the
probability that an edge is colored in color $i$ tends to
$\lambda_{i}/\Lambda$, which in total yields the formula
$p_{k,\ell}=\ell(k-1)!\cdot\sum_{s_{1}+\dots+s_{k}=\ell}\prod_{i=1}^{k}\left(\frac{\lambda_{i}}{\Lambda}\right)^{s_{i}}.$
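This counting formula for $p_{k,\ell}$ lends itself to a Monte Carlo sanity check: color the $\ell$ edges of a cycle independently with probabilities $\lambda_{i}/\Lambda$ and estimate the probability that every color appears and forms a single contiguous arc. The sketch below uses arbitrary illustrative values $\lambda=(0.2,0.3,0.4)$ and $\ell=5$; it is an illustration, not part of the argument.

```python
import random
from itertools import groupby, product
from math import factorial

# Monte Carlo sanity check of the formula for p_{k,l}, with arbitrary
# illustrative parameters lambda = (0.2, 0.3, 0.4) and cycle length l = 5.
random.seed(1)
lams = [0.2, 0.3, 0.4]
k, ell = len(lams), 5
Lam = sum(lams)
probs = [lam / Lam for lam in lams]  # an edge gets color i w.p. lambda_i / Lam

def exact_p(k, ell):
    # p_{k,l} = l * (k-1)! * sum over compositions s_1+...+s_k = l of
    # prod_i (lambda_i / Lam)^{s_i}
    total = 0.0
    for comp in product(range(1, ell + 1), repeat=k):
        if sum(comp) == ell:
            term = 1.0
            for p_i, s in zip(probs, comp):
                term *= p_i ** s
            total += term
    return ell * factorial(k - 1) * total

def separated_into_k_parts(coloring):
    # The cycle is separated into exactly k parts when every color appears
    # and each color forms a single circular run of consecutive edges.
    if len(set(coloring)) != k:
        return False
    n = len(coloring)
    # Rotate so that index 0 is a run boundary; circular runs then equal
    # the linear runs counted by groupby.
    start = next((i for i in range(n) if coloring[i] != coloring[i - 1]), 0)
    rotated = coloring[start:] + coloring[:start]
    return len([c for c, _ in groupby(rotated)]) == k

n_samples = 200_000
hits = sum(
    separated_into_k_parts(random.choices(range(k), weights=probs, k=ell))
    for _ in range(n_samples)
)
estimate = hits / n_samples
print(exact_p(k, ell), estimate)
```

With these parameters the exact value is $13200/59049\approx 0.2235$, and the empirical frequency should agree up to Monte Carlo error.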
Altogether this gives
$\beta_{k}=\sum_{\ell\geq k}p_{k,\ell}\cdot\gamma_{\ell}=\frac{(k-1)!}{2}\sum_{\ell\geq k}\sum_{s_{1}+\dots+s_{k}=\ell}\prod_{i=1}^{k}\lambda_{i}^{s_{i}}=\frac{(k-1)!}{2}\prod_{i=1}^{k}\Bigg(\sum_{j=1}^{\infty}\lambda_{i}^{j}\Bigg)=\frac{(k-1)!}{2}\prod_{i=1}^{k}\frac{\lambda_{i}}{1-\lambda_{i}},$
remembering for the third equality that the sum runs over indices
$\{s_{i}\}_{i\in[k]}$ greater than or equal to $1$.
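The geometric resummation behind the closed form for $\beta_{k}$ is easy to verify numerically by truncating the double sum. The sketch below uses arbitrary illustrative subcritical values $\lambda=(0.2,0.3,0.4)$; it is a check of the algebra, not of the probabilistic statement.

```python
from itertools import product
from math import factorial

# Numerical check of the geometric resummation: with illustrative values
# lambda = (0.2, 0.3, 0.4) (all < 1, so every series converges),
#   ((k-1)!/2) * sum_{l>=k} sum_{s_1+...+s_k=l} prod_i lambda_i^{s_i}
# should agree with ((k-1)!/2) * prod_i lambda_i / (1 - lambda_i).
lams = [0.2, 0.3, 0.4]
k = len(lams)
M = 60  # truncation level; the tail decays geometrically, so it is tiny

truncated = 0.0
for s in product(range(1, M + 1), repeat=k):  # each s_i >= 1
    if sum(s) <= M:  # keeps every term with l = s_1+...+s_k <= M
        term = 1.0
        for lam, si in zip(lams, s):
            term *= lam ** si
        truncated += term
truncated *= factorial(k - 1) / 2

closed_form = factorial(k - 1) / 2
for lam in lams:
    closed_form *= lam / (1 - lam)

print(truncated, closed_form)
```

Because every $\lambda_{i}<1$, the omitted tail is bounded by a geometric series and the two numbers agree to many decimal places already at a moderate truncation level.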
For completeness, let us mention another slightly different way to compute
$\beta_{k}$. Note first that since CA-components are supported by cycles or
single edges, the expected number of CA-components of size $k$, or
equivalently of cycles which are separated into exactly $k$ parts, is equal to
$\frac{1+o(1)}{2}\sum_{i_{1},\dots,i_{k}\in[k]}\sum_{u_{1},\dots,u_{k}\in[n]}\mathbb{P}(u_{1}\xleftrightarrow{G_{i_{1}}}u_{2},\dots,u_{k}\xleftrightarrow{G_{i_{k}}}u_{1}),$
where the two sums run over $k$-tuples of ordered pairwise distinct elements
of $[k]$ and $[n]$, respectively, with $i_{1}=1$ (the factor $1/2$ coming from
the fact that there are two possible ways to orient a cycle). Now, recall that
for any pair of distinct vertices $u,v\in[n]$ and any $i\in[k]$, by (6) one
has that
$\mathbb{P}(v\in\mathcal{C}_{i}(u))=\frac{\mathbb{E}[|\mathcal{C}_{i}(u)|-1]}{n-1}.$
Thus, by induction we get that for any $(i_{1},\dots,i_{k})$,
$\displaystyle\sum_{u_{1},\dots,u_{k}}\mathbb{P}(u_{1}\xleftrightarrow{G_{i_{1}}}u_{2},\dots,u_{k}\xleftrightarrow{G_{i_{k}}}u_{1})$
$\displaystyle=\frac{\mathbb{E}[|\mathcal{C}_{i_{k}}(1)|-1]}{n-1}\cdot\sum_{u_{1},\dots,u_{k}}\mathbb{P}(u_{1}\xleftrightarrow{G_{i_{1}}}u_{2},\dots,u_{k-1}\xleftrightarrow{G_{i_{k-1}}}u_{k})$
$\displaystyle=n(n-1)\dots(n-k+1)\prod_{i=1}^{k}\frac{\mathbb{E}[|\mathcal{C}_{i}(1)|-1]}{n-1}$
$\displaystyle=(1+o(1))\prod_{i=1}^{k}\mathbb{E}[|\mathcal{C}_{i}(1)|-1],$
where for the second equality we use that the number of choices for the
sequence $(u_{1},\dots,u_{k})$ is $n(n-1)\dots(n-k+1)$. The formula follows
since there are $(k-1)!$ ways to choose $i_{2},\dots,i_{k}$, and with the
notation of Section 3.1
$\mathbb{E}[|\mathcal{C}_{i}(1)|-1]=(1+o(1))\cdot\mathbb{E}[|\mathbf{GW}(\lambda_{i})|-1]=(1+o(1))\cdot\frac{\lambda_{i}}{1-\lambda_{i}},$
where the last equality is derived from the fact that for every $d\geq 1$, the
expected number of vertices at distance exactly $d$ from the root in
$\mathbf{GW}(\lambda_{i})$ is $\lambda_{i}^{d}$.
## 4 Conclusion
In this paper, we characterized precisely the size of the largest CA-component
in randomly colored Erdős-Rényi random graphs in the entire supercritical and
subcritical regimes, and in part of the intermediate regime as well. The most
obvious open question that we leave concerns the size of the largest
CA-component when $\lambda_{k-1}^{*}\geq 1>\lambda_{1}^{*}$. The additional
difficulty this point presents compared to the second part of Theorem 1.1 is
that in general, one cannot obtain the largest CA-component as an intersection
of two independent random graphs. Nevertheless, we conjecture that an analogue
of Theorem 1.1 (ii) holds in this case as well. Unfortunately, confirming this
fact seems to be out of reach with our present techniques even in the simplest
case when $k=3$. We remark that when $\lambda_{m}^{*}<1$, by a statement
similar to Lemma 3.3 (that is also proved in a similar way) one may reduce the
problem to the case of $k-m+1$ colors where $G^{2},\ldots,G^{k-m+1}$ are all
supercritical graphs while $G^{1}$ is subcritical.
In the critical case when $\lambda_{1}^{*}=1<\lambda_{2}^{*}$, we suspect that
the size of the largest CA-component divided by $n^{2/3}$ converges in
distribution towards a non-degenerate random variable. The reason is that in
$G^{1}$, the size of the largest component divided by $n^{2/3}$ converges in
distribution, and the largest components in $G^{2},\ldots,G^{k}$ are of linear
order. However, the lack of independence makes it difficult to turn this
heuristic into a rigorous proof.
Another possible direction could be to explore other classical random graph
models, or the closely related model of randomly vertex-color-avoiding random
graphs, initially considered in the literature [7, 8] (in which, as the name
suggests, the vertices of the graph are colored instead of the edges). In
particular, as suggested in [7], it could be interesting to study the effect
of clustering (typical, for example, of random graphs with power-law degree
distributions), as it arises in numerous recently introduced models of
real-world networks. Indeed, in this case, removing vertices of large degree
could have a dramatic effect on the connectivity properties of the graph.
Acknowledgments. We thank Dieter Mitsche for enlightening discussions, and
Balázs Ráth for several comments and corrections on a first version of this
paper.
## References
* [1] B. Bollobás. Random Graphs. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2nd edition, 2001.
* [2] N. Curien. Random graphs: the local convergence point of view. Unpublished lecture notes. Available at https://www.math.u-psud.fr/~curien/cours/cours-RG-V3.pdf, 2017.
* [3] A. Dembo and O. Zeitouni. Large deviations techniques and applications, volume 38. Springer Science & Business Media, 2009.
* [4] G. R. Grimmett. The random-cluster model, volume 333. Springer Science & Business Media, 2006.
* [5] A. Kadović, S. M. Krause, G. Caldarelli, and V. Zlatić. Bond and site color-avoiding percolation in scale-free networks. Physical Review E, 98(6):062308, 2018.
* [6] H. Kesten and B. P. Stigum. A limit theorem for multidimensional Galton-Watson processes. The Annals of Mathematical Statistics, 37(5):1211–1223, 1966.
* [7] S. M. Krause, M. M. Danziger, and V. Zlatić. Hidden connectivity in networks with vulnerable classes of nodes. Physical Review X, 6(4):041022, 2016.
* [8] S. M. Krause, M. M. Danziger, and V. Zlatić. Color-avoiding percolation. Physical Review E, 96(2):022313, 2017.
* [9] G. Last and M. Penrose. Lectures on the Poisson process, volume 7. Cambridge University Press, 2017.
* [10] B. Ráth, K. Varga, P. T. Fekete, and R. Molontay. Color-avoiding percolation in edge-colored Erdős-Rényi graphs. arXiv:2208.12727, 2022.
* [11] R. van der Hofstad. Random graphs and complex networks, Volume I, volume 43. Cambridge University Press, 2016.
* [12] R. van der Hofstad. Random graphs and complex networks, volume II. Book in preparation. Preliminary version available at https://www.win.tue.nl/~rhofstad/, 2022.